Research Article

Horticultural Science and Technology. 30 June 2024. 249-263
https://doi.org/10.7235/HORT.20240022


Introduction

Plants are increasingly being recognized for their medicinal value and are being consumed and processed specifically for the prevention and treatment of various diseases (Mocan et al., 2016; Li and Weng, 2017). Consequently, the market for plant-based raw materials intended to be used for making supplements and medicine is growing globally. Particularly, there is an increasing interest in berries, which can be added to beverages or processed as powders, because they contain high amounts of bioactive compounds (Llorent-Martínez et al., 2013).

Lycium barbarum, L. chinense, Cornus officinalis, and Schisandra chinensis have been used historically as food and medicine in East Asia (Llorent-Martínez et al., 2013; Li and Weng, 2017). L. barbarum and L. chinense are deciduous shrubs that bear bright orange-red oval berries measuring 1 to 2 cm long, with the corresponding common names of goji berry and wolfberry (Donno et al., 2015), known as “gugija” in Korean. These berries have great benefits for eye health because they contain bioactive substances such as zeaxanthin and lutein (Chien et al., 2018; Skenderidis et al., 2022; Teixeira et al., 2023). C. officinalis has red berries that are oblong and 1.5 to 2 cm long, known as “sansuyu” in Korean. This plant has pharmacological effects associated with digestive improvements and the enhancement of liver and kidney functions (Gao et al., 2021; Fan et al., 2022). S. chinensis produces red, spherical berries about 1 cm in diameter, known as “omija” in Korean. The consumption of S. chinensis berries has been reported to enhance the immune response (Li et al., 2018; Kortesoja et al., 2019).

Berries of medicinal plants, offered as over-the-counter remedies or herbal supplements, are utilized by many patients for self-prescribed therapeutic practices in their daily lives (Chen et al., 2010). However, interactions between various foods and medicinal plants can lead to unwanted side effects (Karimpour-Reihan et al., 2018). For example, cases in which patients with chronic cardiovascular disease experienced toxicity due to the inhibition of the CYP2D6 enzyme involved in flecainide metabolism after consuming L. barbarum as a supplement have been reported (Guzmán et al., 2021). Moreover, due to the similarities in physical traits such as the size, shape, and color, many medicinal plants and their fruits can be difficult to distinguish visually, leading to misidentification (Rajani and Veena, 2022).

Therefore, to identify medicinal plants accurately, recent research has focused on methods for differentiating color and shape more precisely (Fu et al., 2011; Karki et al., 2024). Plants with similar colors, textures, and geometric features have been accurately distinguished through imaging by means of optical microscopy, followed by feature extraction and the use of a decision tree model, with accuracy rates exceeding 95% (Keivani et al., 2020). Wang et al. (1999) employed optical radiation measurements to classify wheat kernels by color. They measured log(1/R) reflectance spectra over the 400–2000 nm wavelength range, achieving accuracy rates of up to 98.5% by utilizing partial least squares (PLS) regression. However, the techniques mentioned are destructive and time-consuming (Ariana and Lu, 2010).

To eliminate time-consuming sample preparation and measurement processes and destructive testing, hyperspectral technology can be applied to medicinal plant identification (Liu et al., 2022). Hyperspectral imaging systems disperse incident light to acquire very large amounts of spectral information corresponding to each pixel in an image. Hyperspectral imaging enables high-resolution analyses of small areas and is particularly effective if used to analyze and monitor colors in specific parts of objects (Chlebda et al., 2017). Consequently, selecting an appropriate machine learning model is crucial for interpreting high-dimensional hyperspectral data containing a large amount of spectral information (Makantasis et al., 2018; Baek et al., 2023).

Few studies have used hyperspectral imaging to classify plants with similar colors, and an identification accuracy rate of 98.1% was achieved in one study that applied machine learning to hyperspectral images (Ruett et al., 2022). Another study used hyperspectral imaging to differentiate various plants with different colors (Salve et al., 2022). However, research on identifying the fruits of medicinal plants with similar colors through hyperspectral imaging is lacking.

In this study, a classification framework applying machine learning techniques to hyperspectral images of representative medicinal plants producing berries with similar colors and sizes (C. officinalis, L. chinense, L. barbarum, and S. chinensis) is developed and evaluated. Four supervised learning models were evaluated: logistic regression (LR), K-nearest neighbor (KNN), decision tree (DT), and random forest (RF) models. Considering the challenges posed by the high-dimensional and redundant data of hyperspectral images, the selected models offer complementary strengths. LR provides simplicity and efficiency in high-dimensional spaces, KNN captures local patterns, the DT model handles non-linear relationships and feature selection, and RF, given its ensemble characteristic, enhances robustness and copes with dimensionality effectively. The novel identification technology using hyperspectral images can be applied at the commercial distribution stage and thus contribute to controlling the product quality and ensuring the authenticity of medicinal fruits.

Materials and Methods

Sample collection

Dried fruits of four different medicinal plants, C. officinalis, L. chinense Miller, L. barbarum Linné, and S. chinensis, were purchased from local markets in Gangneung, Korea (Fig. 1). The red fruits of these four species are all oval-shaped with a diameter of 1–2 cm. The species were identified and verified by experts. The weights of the samples used in the experiment were as follows: C. officinalis, 155 g; L. chinense, 146 g; L. barbarum, 179 g; and S. chinensis, 160 g.

https://cdn.apub.kr/journalsite/sites/kshs/2024-042-03/N0130420301/images/HST_42_03_01_F1.jpg
Fig. 1.

Photographs of the collected samples: ((A) C. officinalis, (B) L. chinense, (C) L. barbarum, (D) S. chinensis).

Hyperspectral imaging system

Hyperspectral images were captured using a MicroHSI 410 SHARK (Corning Inc., Corning, NY, USA) camera with eight 20 W halogen lamps mounted on the sides of the camera (four on each side) to illuminate the sample (Fig. 2). The camera setup can be moved along the X- and Y-axes using a conveyor belt and a motor, and the speed can be adjusted while capturing images in a line-scan manner. The images were obtained with 150 wavelength bands within the 404.52–1000.88 nm range at a rate of 100 mm·s⁻¹ along the X-axis. Each sample was placed on a non-reflective black panel in the central area, where the light distribution was uniform.

https://cdn.apub.kr/journalsite/sites/kshs/2024-042-03/N0130420301/images/HST_42_03_01_F2.jpg
Fig. 2.

Schematic of the hyperspectral measurement setup.

Dataset extraction

To generate training data for the classification model from the raw hyperspectral images acquired as described above, we followed the process depicted in Fig. 3. First, for each hyperspectral image, each sample fruit was individually selected as a region of interest (ROI). Because the berries of each species differ in size and shape, the ROI size was set separately for each species (Fig. 3B and 3C). The data shape of one ROI was 48×32×150 for C. officinalis, 50×35×150 for L. chinense, 50×28×150 for L. barbarum, and 28×18×150 for S. chinensis. This yielded 327, 377, 424, and 1,070 ROIs, respectively.

https://cdn.apub.kr/journalsite/sites/kshs/2024-042-03/N0130420301/images/HST_42_03_01_F3.jpg
Fig. 3.

Process of extracting training data: ((A) Raw data captured with a hyperspectral camera, (B) selection of regions of interest (ROIs) for each berry sample, (C) separation of background and sample using a single berry crop and the application of NDVI, (D) reflectance for each pixel of the sample berry, and (E) average reflectance values across 150 spectral bands for a single sample berry).

Second, a thresholding technique using the normalized difference vegetation index (NDVI) was applied for background removal (Fig. 3C and 3D). NDVI is generally used for quantifying vegetation and is calculated from spectral data at the near-infrared (NIR) and red light (RED) bands (Hashim et al., 2019; Agilandeeswari et al., 2022):

(1)
NDVI = (NIR - RED) / (NIR + RED)

In this study, the NDVI value for each pixel was calculated using the 812.76 nm and 652.67 nm bands as the NIR and RED bands, respectively. An NDVI-based segmentation mask was created with a threshold value of 0.3 (Fig. 4). The threshold was determined as the value that maximized the contrast between the dried berries and the background. After background removal, the reflectance values were compared with the raw data to confirm the segmentation results (Fig. 3D). Although this study did not target leaves, NDVI was used because it can effectively separate background materials as well as non-plant contaminants regardless of the plant species.
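
As a rough illustration of this masking step, the following sketch computes the NDVI image and the 0.3 threshold mask; the array name and the band indices for 812.76 nm and 652.67 nm are assumptions, since the actual indices depend on the camera's band-to-wavelength mapping.

```python
import numpy as np

# Sketch of the NDVI-based background removal described above.
# roi_cube is assumed to be one ROI of shape (rows, columns, 150 bands).
def ndvi_mask(roi_cube, nir_band, red_band, threshold=0.3):
    nir = roi_cube[:, :, nir_band].astype(float)
    red = roi_cube[:, :, red_band].astype(float)
    ndvi = (nir - red) / (nir + red + 1e-9)  # Eq. (1); small term avoids division by zero
    return ndvi > threshold                  # True where a berry pixel is likely

# Example with hypothetical band indices:
# mask = ndvi_mask(roi_cube, nir_band=103, red_band=62)
```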

https://cdn.apub.kr/journalsite/sites/kshs/2024-042-03/N0130420301/images/HST_42_03_01_F4.jpg
Fig. 4.

Removing the background using NDVI: ((A) C. officinalis, (B) L. chinense, (C) L. barbarum, (D) S. chinensis).

Finally, the average reflectance per cropped berry was extracted for each sample. Some ROIs included, in addition to a single intact berry, a portion of an adjacent berry. The data saved per sample as npy files were two-dimensional, with one averaged value per band (Fig. 5). This process of generating the training data was conducted in the Python 3.8 environment using the numpy, pandas, and matplotlib libraries.
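
A minimal sketch of this averaging and saving step, continuing the NDVI sketch above; the variable and file names are illustrative, not those used in the study.

```python
import numpy as np

# Average the masked pixel spectra of one cropped berry into a single
# 150-value spectrum (Fig. 3E), then stack all berries of a species into a
# two-dimensional array and save it as an .npy file.
def mean_spectrum(roi_cube, mask):
    return roi_cube[mask].mean(axis=0)   # shape (150,): mean reflectance per band

# spectra = np.stack([mean_spectrum(c, ndvi_mask(c, 103, 62)) for c in roi_cubes])
# np.save("lycium_chinense_spectra.npy", spectra)   # 2-D array: (n_berries, 150)
```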

https://cdn.apub.kr/journalsite/sites/kshs/2024-042-03/N0130420301/images/HST_42_03_01_F5.jpg
Fig. 5.

Process of converting 3D data to 2D data and then extracting the mean reflectance for each band (Lycium chinense).

Machine learning model for classification

The generated training data were applied to four artificial intelligence models: LR, KNN, DT, and RF. These models were utilized to classify the training data into four classes: C. officinalis, L. chinense, L. barbarum, and S. chinensis.
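
As a sketch of this setup with scikit-learn, the four classifier families could be instantiated as follows; the hyperparameters shown are library defaults or our assumptions, not values reported in this study.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

# The four classifier families compared in this study.
models = {
    "LR": LogisticRegression(max_iter=1000),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "DT": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
}
```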

LR is a generalized linear model with a standard link function (Golpour et al., 2020). It is widely used to analyze and estimate relationships among variables in both binary classification and prediction tasks. This model predicts whether data belong to a specific category, assigning a continuous probability between 0 and 1 (Zou et al., 2019). The probability of the event of interest (pi) occurring is determined as follows:

(2)
pi = 1 / (1 + e^(-(ω^T xi + b)))

The probability of occurrence is a function of the explanatory variable vector (xi), a set of common underlying weights (𝜔), and a constant (b) shared across all observations. The subscript (i) represents one unique observation from a total of N observations (Robles-Velasco et al., 2020).
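
For illustration, Eq. (2) can be evaluated directly for a single spectrum; the weight vector and bias in this sketch are placeholders, not fitted coefficients from this study.

```python
import numpy as np

# Direct evaluation of Eq. (2): probability that observation x belongs to the
# class of interest, given weight vector w and bias b.
def lr_probability(x, w, b):
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
```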

KNN is one of the most essential and effective algorithms for data classification. It is a method used to classify objects based on the closest training examples in feature space (Boateng et al., 2020; Bansal et al., 2022). Unlike typical machine learning models, it does not require a separate training phase to construct a predefined model (Wang et al., 2023). Instead, KNN relies on the entire training dataset during the classification phase to identify the nearest neighbors for a given sample. Therefore, it can classify new data based on the arrangement and similarity of its nearest neighbors, resulting in a straightforward and accurate prediction (Rajaguru and S R, 2019). The KNN algorithm uses the Euclidean distance measurement method to find the nearest neighbors. The distance d(x,y) between two points x and y is calculated as follows:

(3)
d(x, y) = √( Σ_{i=1..N} (xi - yi)² )

where N is the number of features, x = {x1, x2, x3, ..., xN}, and y = {y1, y2, y3, ..., yN} (Boateng et al., 2020; Shabani et al., 2020; Wang et al., 2023).
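
A one-line illustration of Eq. (3) for two 150-band mean-reflectance spectra; the function name is illustrative.

```python
import numpy as np

# Eq. (3): Euclidean distance between two spectra x and y.
def euclidean_distance(x, y):
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.sqrt(np.sum((x - y) ** 2))
```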

DT is a machine learning model that classifies data based on specific criteria extracted from feature-based samples (Rajaguru and S R, 2019; Zhou et al., 2021). It has relatively few parameters to estimate because it divides the variable space into two subspaces at each branching step. Furthermore, the DT algorithm proceeds with learning in a way that minimizes uncertainty, where uncertainty refers to how strongly different classes of data are mixed within a category. The degree of uncertainty, or disorder, is quantified by a measure called entropy. When disorder is minimal, meaning a category contains data from only one class, the entropy is 0. In other words, the tree is constructed by splitting the data on the attribute that yields the maximum reduction in entropy, that is, the greatest information gain (Li et al., 2019).
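
The entropy measure described above can be illustrated as follows; the class counts are arbitrary examples for the four species, not node statistics from this study.

```python
import numpy as np

# Entropy of a node: 0 when only one class remains, maximal when the four
# species are equally mixed.
def entropy(class_counts):
    p = np.asarray(class_counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-np.sum(p * np.log2(p)) + 0.0)  # +0.0 avoids printing -0.0 for pure nodes

print(entropy([10, 0, 0, 0]))   # 0.0 -> pure node
print(entropy([5, 5, 5, 5]))    # 2.0 -> four classes equally mixed
```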

RF is a machine learning approach that builds on the DT learning algorithm and is widely used for developing predictive models (Shabani et al., 2020). It is an ensemble learning approach in which the same problem is addressed by resampling the data into many small, diverse training sets, each of which is used to train a separate model. The idea is to combine multiple learning models to improve the accuracy. This allows for a reduced number of required variables, thus reducing the burden of data collection and improving the efficiency (Sheykhmousa et al., 2020). To construct each individual tree within the RF model, variables are randomly selected from the training set, and the best splitting criteria are explored within the subset of randomly chosen variables (Speiser et al., 2019). This randomness leads to greater diversity among the trees and reduces the correlation between them, thereby enhancing the overall performance. To make predictions for a given set of test observations, the individual trees each predict a class, and in classification problems the final prediction is made by majority voting, where the most frequently predicted class is chosen (Han et al., 2021).

Multi-class classification of the four medicinal plants was conducted using the models described above. In this process, the scikit-learn library in the Python environment was utilized, and multi-class classification was addressed by employing the One-vs-Rest strategy. One-vs-Rest involves training one binary classifier for each class, as some classifiers that perform well in binary classification do not easily extend to multi-class classification problems (Faris et al., 2020).
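
Continuing the model sketch above, the One-vs-Rest wrapping can be expressed as follows; this is a sketch under our assumptions, not the exact configuration used in the study.

```python
from sklearn.multiclass import OneVsRestClassifier

# Wrap each base model so that one binary classifier is trained per species
# and the four are combined into a single multi-class predictor.
ovr_models = {name: OneVsRestClassifier(est) for name, est in models.items()}
```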

The data were divided into training and test sets at an 8:2 ratio, and K-fold cross-validation was conducted by dividing the training dataset into ten subsets and using each subset in turn as validation data. This was done to address potential issues that may arise when working with limited data, allowing for the repeated evaluation of the model performance on the training set (Wong and Yeh, 2019).
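
A minimal sketch of the 8:2 split and ten-fold cross-validation, continuing the sketches above; the random array merely stands in for the real (n_berries × 150) mean-reflectance matrix and species labels, and the stratified split is our assumption.

```python
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score

# Placeholder data: 400 spectra with 150 bands, 100 per species.
X = np.random.rand(400, 150)
y = np.repeat(["C. officinalis", "L. chinense", "L. barbarum", "S. chinensis"], 100)

# 8:2 hold-out split, then ten-fold cross-validation on the training portion.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)
scores = cross_val_score(ovr_models["LR"], X_train, y_train, cv=10, scoring="accuracy")
print(scores.mean(), scores.std())
```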

Evaluation of machine learning models

Performance metrics were used to evaluate whether the classification of samples using the previously described models was appropriate (Adikari et al., 2021). Improving the predictive performance of artificial intelligence models is a crucial process. In this study, we utilized four classification model performance evaluation metrics: a confusion matrix, accuracy, the F1 score, and the ROC curve.

The confusion matrix is a visual evaluation tool for assessing the performance of a classification model in machine learning (Heydarian et al., 2022). It classifies items by comparing actual labels with the model's predicted results. In this case, columns represent the results of predicted classes, and rows represent the results of actual classes. Given that four classes are used for classification models, a total of 16 outcomes are produced. This allows the definition of a series of algorithm performance metrics (Theissler et al., 2022).

Accuracy values quantify the proportion of correct predictions out of all predictions. A higher accuracy value indicates higher prediction accuracy, aiding in the distinction of each class. Accuracy is relatively straightforward to compute and has low complexity. Additionally, it is readily applicable to both multi-class and multi-label problems (Bochkovskiy et al., 2020).

The F1 score is the harmonic mean of recall and precision. Recall represents the proportion of truly positive instances that the model correctly identifies as positive, while precision represents the proportion of instances classified as positive by the model that are actually positive (Salman et al., 2020). There are three types of F1 scores, which differ in how the average is calculated. The macro-average is simply the average of the F1 scores across labels. The weighted average weights each label's F1 score by the number of instances of that label before averaging, while the micro-average computes a single F1 score over the entire sample as a whole (Zeng et al., 2023). In this study, to mitigate the impact of varying training sample sizes on the F1 scores, the macro-average method was employed.

The ROC curve plots sensitivity against 1-specificity and graphically shows how different levels of sensitivity affect specificity. It allows for a quick understanding of the utility of an algorithm and can be used to evaluate the level of classification (Handelman et al., 2019).
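
Continuing the sketches above, the four metrics can be computed with scikit-learn as follows; this is a sketch under our assumptions, not the exact evaluation script used in this study.

```python
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score, roc_auc_score

# Fit one wrapped model and evaluate it on the held-out test set.
model = ovr_models["LR"].fit(X_train, y_train)
y_pred = model.predict(X_test)

print(confusion_matrix(y_test, y_pred))                      # 4 x 4 confusion matrix
print(accuracy_score(y_test, y_pred))                        # overall accuracy
print(f1_score(y_test, y_pred, average="macro"))             # macro-averaged F1 score
print(roc_auc_score(y_test, model.predict_proba(X_test),
                    multi_class="ovr", average="macro"))     # AUC of one-vs-rest ROC curves
```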

Results and Discussion

Spectral analysis results in hyperspectral data

After obtaining the average reflectance within each sample ROI (Fig. 6), the maximum and minimum reflectance values were examined. In the order of C. officinalis, L. chinense, L. barbarum, and S. chinensis, the maximum values were 58.3%, 77.7%, 86.9%, and 90.7%, respectively. The corresponding minimum values were 0.06%, 0.09%, 0.15%, and 0.08%. The overall shape was similar across all samples, but a sharp decrease was observed around 900 nm. This can be attributed to the sensitivity limitations of the hyperspectral camera, as indicated in the camera specifications. Similar decreases in reflectance around 900 nm have been observed in other studies that used the same camera (Choi et al., 2022). This suggests that the type of equipment used can impact the results.

https://cdn.apub.kr/journalsite/sites/kshs/2024-042-03/N0130420301/images/HST_42_03_01_F6.jpg
Fig. 6.

Average reflectance of 150 spectral bands per cropped specimen unit: ((A) C. officinalis, (B) L. chinense, (C) L. barbarum, (D) S. chinensis).

Model evaluation based on classification model evaluation metrics

Accuracy/F1 score

The accuracy and F1 scores for the classification models are summarized in Table 1. The accuracy values for the test data were all above 90%, indicating a high probability of correct classification. Likewise, the F1 scores were all above 0.9, approaching 1, demonstrating that most of the models exhibited effective performance. Notably, the values were consistently close to 1, indicating that overfitting did not occur during the data splitting process between the training and test datasets. Furthermore, in terms of the classification results for the test data, the LR model had the highest accuracy and F1 scores, both at 0.998, followed by RF, DT, and KNN in descending order of performance.

The experiments with the training and test data were validated using the K-fold cross-validation method, and Table 1 presents the averaged accuracy and F1 score results after ten rounds of performance evaluation. All four models consistently achieved high values of approximately 0.9 or above, demonstrating the effectiveness of the models in correctly classifying the samples. Furthermore, through cross-validation, all of the data were eventually used for training, resulting in improved accuracy and F1 score outcomes. This prevented underfitting due to data scarcity and reduced bias in the evaluation data. In this study, the processing times of the LR, KNN, DT, and RF models were approximately 2.1, 0.3, 2.2, and 12 seconds, respectively, covering training, testing, and cross-validation. The processing time of a model is an important factor to consider with regard to computational efficiency, especially for field applications such as software or device development. In this study, where the number of data samples differed considerably among species, this approach proved effective in achieving more generalized results and contributed to the more effective application of the data to the models.

Table 1.

Classification model accuracy and F1 scores

Model                        Ten-fold cross-validation data       Test data
                             Accuracy    F1 score (macro)         Accuracy    F1 score (macro)
Logistic regression (LR)     0.999       0.998                    0.998       0.998
K-nearest neighbor (KNN)     0.927       0.913                    0.923       0.909
Decision tree (DT)           0.905       0.893                    0.923       0.911
Random forest (RF)           0.956       0.949                    0.955       0.946

Confusion matrix

The confusion matrices for the four classification models are presented in Fig. 7. Overall, all machine learning models used here showed appropriate classification with high probabilities. In particular, the LR model exhibits the best results, correctly classifying all samples. LR was expected to perform efficiently in hyperspectral image classification due to several specific features. Hyperspectral images are characterized by high-dimensional data and a large number of spectral bands, making feature extraction and classification challenging (Ham et al., 2005; Li et al., 2009; Qian et al., 2012; Kishore and Kulkarni, 2016).

https://cdn.apub.kr/journalsite/sites/kshs/2024-042-03/N0130420301/images/HST_42_03_01_F7.jpg
Fig. 7.

Confusion matrix for each classification model: ((A) Logistic Regression (LR), (B) K-Nearest Neighbor (KNN), (C) Decision Tree (DT), (D) Random Forest (RF)).

The LR model seeks linear decision boundaries to classify data into multiple categories. Linear decision boundaries help classify data in diverse and intuitive ways (Li et al., 2009; Golpour et al., 2020). Consequently, unlike the KNN and DT models, the LR model does not require a complex classification process. This simplicity is one of its advantages. Therefore, the combination of hyperspectral data and the LR model provides an effective approach for data classification, allowing for straightforward modeling. As a result, this combination helps process extensive hyperspectral data and obtain accurate results (Kishore and Kulkarni, 2016).

LR also has the ability to learn class distributions (Shah et al., 2020). This feature is highly valued in the context of hyperspectral data. LR learns the distribution for each class and estimates the probability of each data point belonging to a specific class (Gautam and Nadda, 2022; Ye et al., 2022). This capability turns it into a powerful tool for modeling relationships among multiple classes. By utilizing this ability, not only can data be effectively classified, but also interactions and dependencies among classes can be taken into account. On the other hand, classification models such as KNN or DT do not directly learn class distributions but classify data based on factors such as neighboring points or splitting rules. This makes it challenging to consider relationships among classes or achieve proper classification in datasets with class imbalances.

In previous research related to actual hyperspectral image classification models, LR demonstrated an accuracy rate close to 100%, with a true positive rate (TPR) near 1 and a false positive rate (FPR) close to 0, making the ROC curve results highly satisfactory (Feng et al., 2019). Therefore, it is confirmed that LR exhibits outstanding capabilities for effective classification.

ROC curve

The classification models were evaluated using ROC curves and the corresponding AUC values (Fig. 8). In general, the AUC values for all ROC curves are approximately 0.9 or higher, indicating that classification was carried out appropriately. For the LR model, all AUC values are close to 1, confirming that all samples were correctly classified. KNN and RF also achieved AUC values of 0.98 or higher, demonstrating effective classification. However, among the four models, DT has the lowest AUC values. This can be explained in comparison with RF, as both models utilize a tree-based structure for data analysis and prediction; however, RF combines multiple DTs to reduce the tendency to overfit, increase model diversity, and improve generalization performance.

https://cdn.apub.kr/journalsite/sites/kshs/2024-042-03/N0130420301/images/HST_42_03_01_F8.jpg
Fig. 8.

ROC curves and AUC values for each classification model: ((A) Logistic Regression (LR), (B) K-Nearest Neighbor (KNN), (C) Decision Tree (DT), (D) Random Forest (RF)).

In previous studies related to hyperspectral image classification using both models, the RF model demonstrated results with AUC values very close to 1. However, for classification using DT, the AUC values were between 0.6 and 0.8, indicating relatively low performance. Furthermore, other evaluation metrics also statistically favored the performance of RF over DT (Amirruddin et al., 2020). Therefore, when classifying hyperspectral images, the LR model is most effective, but when considering alternative models, applying the RF model is more effective compared to the use of DT. This is attributed to the capabilities of the RF model as an ensemble model to enhance the classification accuracy (Speiser et al., 2019).

Potential developments and future applications

Although this study focused on evaluating the performance of the four selected models, our approach has potential for a variety of applications and suggests possible developments in the field. From a practical application perspective, we particularly highlight the rapid and accurate classification of hyperspectral images achieved with the LR model. The efficient performance of LR is especially important in field applications such as precision agriculture, environmental monitoring, and resource management (Rajeshwari et al., 2021; Sathiyamoorthi et al., 2022). Additionally, the fast processing times of LR, KNN, and DT highlight their potential for deployment in time-sensitive applications. Processing efficiency combined with high classification accuracy allows these models to be integrated into field devices or software to enable rapid decision-making in the field. For example, for medicinal fruit distributed after drying, the model could be used to develop software that determines the authenticity and adulteration of the resulting products.

Certain limitations should be addressed to enhance the applicability of hyperspectral image classification further. To distinguish mixed berries, each berry in an entire image must first be assigned to an ROI, which is currently done manually. Therefore, recognizing individual berries as distinct objects and implementing algorithms for precise ROI identification would be necessary, as these advances can contribute to system automation. In addition, when combined with prediction models for bioactive compounds, these models can be extended to quality control techniques for the functionality of medicinal plants beyond the identification of authenticity or adulteration. Finally, field validation to evaluate model performance in a practical environment is crucial for the practical deployment of these models. Understanding the challenges and limitations in dynamic environments will contribute to refining the models for practical use. Overall, this study confirms the feasibility of a machine-learning-based hyperspectral image classification system and suggests its practical application in the medicinal plant industry.

Conclusion

This study acquired hyperspectral data and applied machine learning models to differentiate between medicinal berries with similar colors and spectra: C. officinalis, L. chinense, L. barbarum, and S. chinensis. From hyperspectral images with 150 wavelength bands within 400–1,000 nm, individual berries were separated after background removal. The average reflectance for each berry was used for model development and validation. Four machine learning models, LR, KNN, DT, and RF, trained on the spectral data were assessed using classification evaluation metrics: accuracy, the F1 score, the confusion matrix, and the ROC curve.

The accuracy and F1 score of all models were consistently high, above 90% and 0.9, respectively, indicating successful classification. These results were further validated through a 10-fold cross-validation process, which provided more generalized outcomes. Evaluation of the classification models using the confusion matrix revealed that the LR model performed the most accurate classification for all samples. This was attributed to the unique classification characteristics of LR. Additionally, the performance of the models was visualized and assessed using the ROC curve, and AUC values were computed. It was observed that all four models generally achieved AUC values around 0.9, confirming precise classification. However, the DT model yielded the lowest results compared to the other models. This difference in performance could be explained through a comparison with the RF model, which leverages randomness and employs multiple DTs. In conclusion, among the four classification models, LR was found to be the most effective model in classifying hyperspectral images of medicinal plants producing berries with similar sizes, shapes, and colors. The LR-based classification system developed in this study can be practically applied to large-scale agriculture and processing and can reduce the misidentification and misuse of medicinal plants and fruits.

Acknowledgements

This work was supported by the Korea Institute of Planning and Evaluation for Technology in Food, Agriculture and Forestry (IPET) and by the Korea Smart Farm R&D Foundation (KosFarm) through the Smart Farm Innovation Technology Development Program, funded by the Ministry of Agriculture, Food and Rural Affairs (MAFRA), the Ministry of Science and ICT (MSIT), and the Rural Development Administration (RDA) (Grant No. 421026-04).

References

1

Adikari KE, Shrestha S, Ratnayake DT, Budhathoki A, Mohanasundaram S, Dailey MN (2021) Evaluation of artificial intelligence models for flood and drought forecasting in arid and tropical regions. Environ Model Softw 144:105136. doi:10.1016/j.envsoft.2021.105136

2

Agilandeeswari L, Prabukumar M, Radhesyam V, Phaneendra KLB, Farhan A (2022) Crop classification for agricultural applications in hyperspectral remote sensing images. Appl Sci 12:1670. doi:10.3390/app12031670

3

Amirruddin AD, Muharam FM, Ismail MH, Ismail MF, Tan NP, Karam DS (2020) Hyperspectral remote sensing for assessment of chlorophyll sufficiency levels in mature oil palm (Elaeis guineensis) based on frond numbers: Analysis of decision tree and random forest. Comput Electron Agric 169:105221. doi:10.1016/j.compag.2020.105221

4

Ariana DP, Lu R (2010) Evaluation of internal defect and surface color of whole pickles using hyperspectral imaging. J Food Eng 96:583-590. doi:10.1016/j.jfoodeng.2009.09.005

5

Baek Y, Sul S, Cho YY (2023) Estimation of Days after Transplanting using an Artificial Intelligence CNN (Convolutional Neural Network) Model in a Closed-type Plant Factory. Hortic Sci Technol 41:81-90. doi:10.7235/HORT.20230008

6

Bansal M, Goyal A, Choudhary A (2022) A comparative analysis of K-nearest neighbor, genetic, support vector machine, decision tree, and long short term memory algorithms in machine learning. Decis Anal J 3:100071. doi:10.1016/j.dajour.2022.100071

7

Boateng EY, Otoo J, Abaye DA (2020) Basic tenets of classification algorithms K-nearest-neighbor, support vector machine, random forest and neural network: a review. Journal of Data Analysis and Information Processing 8:341-357. doi:10.4236/jdaip.2020.84020

8

Bochkovskiy A, Wang CY, Liao HYM (2020) Yolov4: Optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934.

9

Chen LC, Wang BR, Chen IC, Shao CH (2010) Use of Chinese herbal medicine among menopausal women in Taiwan. Int J Gynaecol Obstet 109:63-66. doi:10.1016/j.ijgo.2009.10.014

10

Chien KJ, Horng CT, Huang YS, Hsieh YH, Wang CJ, Yang JS, Lu CC, Chen F (2018) Effects of Lycium barbarum (goji berry) on dry eye disease in rats. Mol Med Rep 17:809-818. doi:10.3892/mmr.2017.7947

11

Chlebda DK, Rogulska A, Lojewski T (2017) Assessment of hyperspectral imaging system for colour measurement. Spectrochim Acta A Mol Biomol Spectrosc 185:55-62. doi:10.1016/j.saa.2017.05.037

12

Choi JH, Park SH, Jung DH, Park YJ, Yang JS, Park JE, Lee H, Kim SM (2022) Hyperspectral imaging-based multiple predicting models for functional component contents in Brassica juncea. Agriculture 12:1515. doi:10.3390/agriculture12101515

13

Donno D, Beccaro GL, Mellan MG, Cerutti AK, Bounous G (2015) Goji berry fruit (Lycium spp.): Antioxidant compound fingerprint and bioactivity evaluation. J Func Foods 18:1070-1085. doi:10.1016/j.jff.2014.05.020

14

Fan S, Li J, Zhang X, Xu D, Liu X, Dias ACP, Zhang X, Chen C (2022) A study on the identification, quantification, and biological activity of compounds from Cornus officinalis before and after in vitro gastrointestinal digestion and simulated colonic fermentation. J Func Foods 98:105272. doi:10.1016/j.jff.2022.105272

15

Faris H, Habib M, Faris M, Alomari M, Alomari A (2020) Medical speciality classification system based on binary particle swarms and ensemble of one vs. rest support vector machines. J Biomed Inform 109:103525. doi:10.1016/j.jbi.2020.103525

16

Feng L, Zhu S, Zhou L, Zhao Y, Bao Y, Zhang C, He Y (2019) Detection of subtle bruises on winter jujube using hyperspectral imaging with pixel-wise deep learning method. IEEE Access 7:64494-64505. doi:10.1109/ACCESS.2019.2917267

17

Fu L, Okamoto H, Kataoka T, Shibata Y (2011) Color based classification for berries of Japanese Blue Honeysuckle. Int J Food Eng 7. doi:10.2202/1556-3758.2408

18

Gao X, Liu Y, An, Ni J (2021) Active components and pharmacological effects of Cornus officinalis: Literature review. Front Pharmacol 12:633447. doi:10.3389/fphar.2021.633447

19

Gautam RK, Nadda S (2022) Hyperspectral Image Prediction Using Logistic Regression Model. In Proceedings of Emerging Trends and Technologies on Intelligent Systems: ETTIS 2022. Singapore: Springer Nature Singapore, pp 283-293. doi:10.1007/978-981-19-4182-5_22

20

Golpour P, Ghayour-Mobarhan M, Saki A, Esmaily H, Taghipour A, Tajfard M, Ghazizadeh H, Moohebati M, Ferns GA (2020) Comparison of support vector machine, naïve Bayes and logistic regression for assessing the necessity for coronary angiography. Int J Environ Res Public Health 17:6449. doi:10.3390/ijerph17186449

21

Guzmán CE, Guzman-Moreno CG, Assad-Morell JL, Carrizales-Sepulveda EF (2021) Flecainide toxicity associated with the use of goji berries: A case report. Eur Heart J Case Rep 5:ytab204. doi:10.1093/ehjcr/ytab204

22

Ham J, Chen Y, Crawford MM, Ghosh J (2005) Investigation of the random forest framework for classification of hyperspectral data. IEEE Trans Geosci Remote Sens 43:492-501. doi:10.1109/TGRS.2004.842481

23

Han S, Williamson BD, Fong Y (2021) Improving random forest predictions in small datasets from two-phase sampling designs. BMC Medical Inform Decis Mak 21:1-9. doi:10.1186/s12911-021-01688-3

24

Handelman GS, Kok HK, Chandra RV, Razavi AH, Huang S, Brooks M, Lee MJ, Asadi H (2019) Peering into the black box of artificial intelligence: evaluation metrics of machine learning methods. Am J Roentgenol 212:38-43. doi:10.2214/AJR.18.20224

25

Hashim H, Abd Lati Z, Adnan NA (2019) Urban vegetation classification with NDVI threshold value method with very high resolution (VHR) Pleiades imagery. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 42:237-240. doi:10.5194/isprs-archives-XLII-4-W16-237-2019

26

Heydarian M, Doyle TE, Samavi R (2022) MLCM: Multi-label confusion matrix. IEEE Access 10:19083-19095. doi:10.1109/ACCESS.2022.3151048

27

Karimpour-Reihan S, Firuzei E, Khosravi M, Abbaszade M (2018) Coagulation disorder following red clover (Trifolium Pratense) misuse: A case report. Adv J Emerg Med 2:e20. doi:10.22114/ajem.v0i0.30

28

Karki S, Basak JK, Paudel B, Deb NC, Kim NE, Kook J, Kang MY, Kim HT (2024) Classification of strawberry ripeness stages using machine learning algorithms and colour spaces. Hortic Environ Biotechnol 65:337-354. doi:10.1007/s13580-023-00559-2

29

Keivani M, Mazloum J, Sedaghatfar E, Tavakoli MB (2020) Automated Analysis of Leaf Shape, Texture, and Color Features for Plant Classification. Trait Du Signal 37:17-28. doi:10.18280/ts.370103

30

Kishore M, Kulkarni SB (2016) Approaches and challenges in classification for hyperspectral data: a review. In 2016 International Conference on Electrical, Electronics, and Optimization Techniques (ICEEOT). IEEE pp 3418-3421. doi:10.1109/ICEEOT.2016.7755339

31

Kortesoja M, Karhu E, Olafsdottir ES, Freysdottir J, Hanski L (2019) Impact of dibenzocyclooctadiene lignans from Schisandra chinensis on the redox status and activation of human innate immune system cells. Free Radic Biol Med 131:309-317. doi:10.1016/j.freeradbiomed.2018.12.019

32

Li FS, Weng JK (2017) Demystifying traditional herbal medicine with modern approach. Nat Plants 3:1-7. doi:10.1038/nplants.2017.109

33

Li J, Bioucas-Dias JM, Plaza A (2009) Semi-supervised hyperspectral image segmentation. In 2009 First Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing. IEEE pp 1-4. doi:10.1109/WHISPERS.2010.5594877

34

Li M, Xu H, Deng Y (2019) Evidential decision tree based on belief entropy. Entropy 21:897. doi:10.3390/e21090897

35

Li Z, He X, Liu F, Wang J, Feng J (2018) A review of polysaccharides from Schisandra chinensis and Schisandra sphenanthera: Properties, functions and applications. Carbohydr Polym 184:178-190. doi:10.1016/j.carbpol.2017.12.058

36

Liu KH, Yang MH, Huang ST, Lin C (2022) Plant species classification based on hyperspectral imaging via a lightweight convolutional neural network model. Front Plant Sci 13:855660. doi:10.3389/fpls.2022.855660

37

Llorent-Martínez EJ, Fernandez-de Cordova ML, Ortega-Barrales P, Ruiz-Medina A (2013) Characterization and comparison of the chemical composition of exotic superfoods. Microchem J 110:444-451. doi:10.1016/j.microc.2013.05.016

38

Makantasis K, Doulamis AD, Doulamis ND, Nikitakis A (2018) Tensor-based classification models for hyperspectral data analysis. IEEE Trans Geosci Remote Sens 56:6884-6898. doi:10.1109/TGRS.2018.2845450

39

Mocan A, Zengin G, Uysal A, Gunes E, Mollica A, Degirmenci NS, Alpsoy L, Aktumsek A (2016) Biological and chemical insights of Morina persica L.: A source of bioactive compounds with multifunctional properties. J Funct Foods 25:94-109. doi:10.1016/j.jff.2016.05.010

40

Qian Y, Ye M, Zhou J (2012) Hyperspectral image classification based on structured sparse logistic regression and three-dimensional wavelet texture features. IEEE Trans Geosci Remote Sens 51:2276-2291. doi:10.1109/TGRS.2012.2209657

41

Rajaguru H, S R SC (2019) Analysis of decision tree and k-nearest neighbor algorithm in the classification of breast cancer. Asian Pac J Cancer Prev 20:3777. doi:10.31557/APJCP.2019.20.12.3777

42

Rajani S, Veena MN (2022) Ayurvedic Plants Identification based on Machine Learning and Deep Learning Technologies. In 2022 Fourth International Conference on Emerging Research in Electronics, Computer Science and Technology (ICERECT). IEEE pp 1-6

43

Rajeshwari T, Vardhini PH, Reddy KMK, Priya KK, Sreeja K (2021) Smart Agriculture Implementation using IoT and Leaf Disease Detection using Logistic Regression. In 2021 4th International Conference on Recent Developments in Control, Automation & Power Engineering (RDCAPE). IEEE pp 619-623. doi:10.1109/RDCAPE52977.2021.9633608

44

Robles-Velasco A, Cortes P, Munuzuri J, Onieva L (2020) Prediction of pipe failures in water supply networks using logistic regression and support vector classification. Reliab Eng Syst Saf 196:106754. doi:10.1016/j.ress.2019.106754

45

Ruett M, Junker-Frohn LV, Siegmann B, Ellenberger J, Jaenicke H, Whitney C, Luedeling E, Tiede-Arlt P, Rascher U (2022) Hyperspectral imaging for high-throughput vitality monitoring in ornamental plant production. Sci Hortic 291:110546. doi:10.1016/j.scienta.2021.110546

46

Salman T, Ghubaish A, Unal D, Jain R (2020) Safety score as an evaluation metric for machine learning models of security applications. IEEE Netw Lett 2:207-211. doi:10.1109/LNET.2020.3016583

47

Salve P, Yannawar P, Sardesai M (2022) Multimodal plant recognition through hybrid feature fusion technique using imaging and non-imaging hyper-spectral data. J King Saud Univ Comput Inf Sci 34:1361-1369. doi:10.1016/j.jksuci.2018.09.018

48

Sathiyamoorthi V, Harshavardhanan P, Azath H, Senbagavalli M, Viswa Bharathy AM, Chokkalingam BS (2022) An effective model for predicting agricultural crop yield on remote sensing hyper-spectral images using adaptive logistic regression classifier. Concurrency Computat Pract Exper 34:e7242. doi:10.1002/cpe.7242

49

Shabani S, Samadianfard S, Sattari MT, Mosavi A, Shamshirband S, Kmet T, Várkonyi-Koczy AR (2020) Modeling pan evaporation using Gaussian process regression K-nearest neighbors random forest and support vector machines; comparative analysis. Atmosphere 11:66. doi:10.3390/atmos11010066

50

Shah K, Patel H, Sanghvi D, Shah M (2020) A comparative analysis of logistic regression, random forest and KNN models for the text classification. Augment Hum Res 5:1-16. doi:10.1007/s41133-020-00032-0

51

Sheykhmousa M, Mahdianpari M, Ghanbari H, Mohammadimanesh F, Ghamisi P, Homayouni S (2020) Support vector machine versus random forest for remote sensing image classification: A meta-analysis and systematic review. IEEE J Sel Top Appl Earth Obs Remot Sens 13:6308-6325. doi:10.1109/JSTARS.2020.3026724

52

Skenderidis P, Leontopoulos S, Lampakis D (2022) Goji berry: Health promoting properties. Nutraceuticals 2:32-48. doi:10.3390/nutraceuticals2010003

53

Speiser JL, Miller ME, Tooze J, Ip E (2019) A comparison of random forest variable selection methods for classification prediction modeling. Expert Syst Appl 134:93-101. doi:10.1016/j.eswa.2019.05.028

54

Teixeira F, Silva AM, Delerue-Matos C, Rodrigues F (2023) Lycium barbarum Berries (Solanaceae) as Source of Bioactive Compounds for Healthy Purposes: A Review. Int J Mol Scie 24:4777. doi:10.3390/ijms24054777

55

Theissler A, Thomas M, Burch M, Gerschner F (2022) ConfusionVis: Comparative evaluation and selection of multi-class classifiers based on confusion matrices. Knowl Based Syst 247:108651. doi:10.1016/j.knosys.2022.108651

56

Wang AX, Chukova SS, Nguyen BP (2023) Ensemble k-nearest neighbors based on centroid displacement. Inf Sci 629:313-323. doi:10.1016/j.ins.2023.02.004

57

Wang D, Dowell FE, Lacey RE (1999) Single Wheat Kernel Color Classification by Using Near‐Infrared Reflectance Spectra. Cereal Chem 76:30-33. doi:10.1094/CCHEM.1999.76.1.30

58

Wong TT, Yeh PY (2019) Reliable accuracy estimates from k-fold cross validation. IEEE Trans Knowl Data Eng 32:1586-1594. doi:10.1109/TKDE.2019.2912815

59

Ye W, Yan T, Zhang C, Duan L, Chen W, Song H, Zhang Y, Xu W, Gao P (2022) Detection of Pesticide Residue Level in Grape Using Hyperspectral Imaging with Machine Learning. Foods 11:1609. doi:10.3390/foods11111609

60

Zeng L, Liu L, Chen D, Lu H, Xue Y, Bi H, Yang W (2023) The innovative model based on artificial intelligence algorithms to predict recurrence risk of patients with postoperative breast cancer. Front Oncol 13:1117420. doi:10.3389/fonc.2023.1117420

61

Zhou H, Zhang J, Zhou Y, Guo X, Ma Y (2021) A feature selection algorithm of decision tree based on feature weight. Expert Syst Appl 164:113842. doi:10.1016/j.eswa.2020.113842

62

Zou X, Hu Y, Tian Z, Shen K (2019) Logistic regression model optimization and case analysis. In 2019 IEEE 7th international conference on computer science and network technology (ICCSNT). IEEE pp 135-139. doi:10.1109/ICCSNT47585.2019.8962457
