Automatic chickpea classification using computer vision techniques

  1. Sajad Sabzi 1
  2. Víctor Manuel García-Amicis 2
  3. Yousef Abbaspour-Gilandeh 1
  4. Ginés García-Mateos 2
  5. José Miguel Molina-Martínez 3
  1. 1 University of Mohaghegh Ardabili, Ardabil, Iran (ROR https://ror.org/045zrcm98)
  2. 2 Universidad de Murcia, Murcia, Spain (ROR https://ror.org/03p3aeb86)
  3. 3 Technical University of Cartagena
Book:
IX Congresso Ibérico de Agroengenharia: Livro de Atas
  1. José Carlos Barbosa (coord.)
  2. António Castro Ribeiro (coord.)

Publisher: Instituto Politécnico de Bragança

ISBN: 978-972-745-247-7

Year of publication: 2018

Pages: 1167-1176

Congress: Congreso Ibérico de Agroingeniería y Ciencias Hortícolas (9th, 2017, Braganza)

Type: Conference paper

Abstract

In this research, two different computer vision systems for chickpea classification are compared: the first is based on hand-crafted feature extraction, while the second works directly on raw color pixel values. A total of 1019 images of five different Iranian chickpea species were taken with an industrial camera (DFK 23GM021) from a fixed height of 10 cm above the samples. Lighting was provided by white LED lamps with an intensity of 327 lx. In the first approach, a set of predefined features was extracted from each object: 126 color features and 80 texture features based on the gray-level co-occurrence matrix (GLCM). From these 206 features, the most effective ones were selected as input for the classifier. Specifically, 6 effective features were selected: information measure of correlation for the 135° angle, diagonal moment for the 90° angle, sum variance for the 0° angle, inverse difference moment normalized for the 0° angle, and the mean and normalized mean of the second component of the CMY color space. A hybrid of an artificial neural network and particle swarm optimization (ANN-PSO) was used as the classifier. This method achieved a correct classification rate (CCR) of 97.0%. In the second approach, the feature extraction and selection stage was omitted: image patches (RGB pixel values) were used directly as input to a three-layer backpropagation ANN. A prior segmentation selected significant chickpea patches, dividing the images into smaller ones to focus on individual peas. Finally, the modal value of the predictions over the resulting sub-images was used to predict the class of the whole image. In this case, a CCR of 99.3% was achieved. These results prove that visual classification of fruit varieties in agriculture can be done very precisely with a suitable method, which is also generic and extendable to other types of crops.
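To illustrate the texture features used in the first approach, the following is a minimal sketch of a GLCM computed at the four angles mentioned in the abstract (0°, 45°, 90°, 135°), with two standard Haralick-style properties derived from it. This is not the authors' implementation: the toy image, the 8-level quantization, and the chosen properties (contrast, inverse difference moment) are illustrative assumptions; the paper's full feature set (e.g. diagonal moment, sum variance) would be computed from the same matrices.

```python
import numpy as np

def glcm(img, dx, dy, levels=8):
    """Normalized, symmetric gray-level co-occurrence matrix for offset (dx, dy)."""
    h, w = img.shape
    m = np.zeros((levels, levels), dtype=float)
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[img[y, x], img[y2, x2]] += 1
    m += m.T                    # make the matrix symmetric
    return m / m.sum()          # normalize to a joint probability

# Pixel offsets for the four angles used in the paper.
OFFSETS = {0: (1, 0), 45: (1, -1), 90: (0, -1), 135: (-1, -1)}

rng = np.random.default_rng(0)
img = rng.integers(0, 8, size=(32, 32))   # toy 8-level stand-in for a chickpea patch

feats = {}
for ang, (dx, dy) in OFFSETS.items():
    p = glcm(img, dx, dy)
    i, j = np.indices(p.shape)
    feats[f"contrast_{ang}"] = float((p * (i - j) ** 2).sum())
    feats[f"idm_{ang}"] = float((p / (1 + (i - j) ** 2)).sum())  # inverse difference moment

print(feats)  # 8 texture features: 2 properties x 4 angles
```

In a real pipeline each segmented chickpea would be quantized to a fixed number of gray levels before computing the matrices, and the resulting feature vectors would feed the ANN-PSO classifier described above.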
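The second approach's patch-wise voting scheme can be sketched as follows: split the image into sub-images, classify each independently, and take the modal (most frequent) prediction as the image label. The patch size, helper names, and the dominant-color stand-in classifier are hypothetical; the paper uses a trained three-layer backpropagation ANN on the RGB pixel values instead.

```python
import numpy as np
from collections import Counter

def split_into_patches(img, size):
    """Split an HxWx3 image into non-overlapping size x size patches."""
    h, w, _ = img.shape
    return [img[y:y + size, x:x + size]
            for y in range(0, h - size + 1, size)
            for x in range(0, w - size + 1, size)]

def classify_image(img, patch_classifier, size=16):
    """Predict each patch independently, then return the modal class."""
    preds = [patch_classifier(p) for p in split_into_patches(img, size)]
    return Counter(preds).most_common(1)[0][0]

# Stand-in patch classifier: labels a patch by its dominant color channel
# (0=R, 1=G, 2=B). A real system would call the trained ANN here.
def dummy_classifier(patch):
    return int(np.argmax(patch.reshape(-1, 3).mean(axis=0)))

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64, 3))
img[..., 1] += 40  # bias the image toward the green channel
print(classify_image(img, dummy_classifier))  # modal class over the 16 patches
```

The modal vote makes the final decision robust to a few misclassified patches, which is what lifts the CCR to 99.3% compared to per-patch prediction alone.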