Subjects -> AGRICULTURE (Total: 981 journals)
    - AGRICULTURAL ECONOMICS (93 journals)
    - AGRICULTURE (680 journals)
    - CROP PRODUCTION AND SOIL (120 journals)
    - POULTRY AND LIVESTOCK (58 journals)

AGRICULTURE (680 journals)

Showing 263 Journals sorted by number of followers
Sustainability and Climate Change     Full-text available via subscription   (Followers: 18)
Annals of Arid Zone     Open Access   (Followers: 15)
The Journal of Research, PJTSAU     Open Access   (Followers: 14)
Journal of Sugarcane Research     Open Access   (Followers: 13)
International Journal of Food Science and Agriculture     Open Access   (Followers: 13)
Potato Journal     Open Access   (Followers: 13)
Journal of Cereal Research     Open Access   (Followers: 12)
Peer Community Journal     Open Access   (Followers: 12)
Indian Journal of Extension Education     Open Access   (Followers: 12)
Journal of the Indian Society of Soil Science     Open Access   (Followers: 11)
Future Foods     Open Access   (Followers: 11)
Magazín Ruralidades y Territorialidades     Full-text available via subscription   (Followers: 10)
Indian Journal of Horticulture     Open Access   (Followers: 10)
Journal of the Indian Society of Coastal Agricultural Research     Open Access   (Followers: 10)
Animal - Open Space     Open Access   (Followers: 9)
Indian Journal of Animal Nutrition     Open Access   (Followers: 8)
Agrivet : Jurnal Ilmu-Ilmu Pertanian dan Peternakan / Journal of Agricultural Sciences and Veteriner     Open Access   (Followers: 5)
Revista Investigaciones Agropecuarias     Open Access   (Followers: 5)
aBIOTECH : An International Journal on Plant Biotechnology and Agricultural Sciences     Hybrid Journal   (Followers: 4)
Sustainability Agri Food and Environmental Research     Open Access   (Followers: 4)
Animal Microbiome     Open Access   (Followers: 3)
Asia-Pacific Journal of Rural Development     Hybrid Journal   (Followers: 2)
CABI Agriculture and Bioscience     Open Access   (Followers: 2)
Animal Diseases     Open Access   (Followers: 2)
Journal of Animal Science and Products     Open Access   (Followers: 2)
Journal of Rural and Community Development     Open Access   (Followers: 1)
Measurement : Food     Open Access   (Followers: 1)
International Journal of Agricultural and Life Sciences     Open Access   (Followers: 1)
Acta Scientiarum Polonorum Technica Agraria     Open Access   (Followers: 1)
Agriscience     Open Access   (Followers: 1)
Journal of Applied Communications     Open Access   (Followers: 1)
Journal of Environmental and Agricultural Studies     Open Access   (Followers: 1)
VITIS : Journal of Grapevine Research     Open Access   (Followers: 1)
Analytical Science Advances     Open Access   (Followers: 1)
Plant Phenomics     Open Access   (Followers: 1)
CSA News     Hybrid Journal   (Followers: 1)
Molecular Horticulture     Open Access   (Followers: 1)
Energy Nexus     Open Access  
International Journal on Food, Agriculture and Natural Resources : IJ-FANRES     Open Access  
Horticultural Studies     Full-text available via subscription  
Reproduction and Breeding     Open Access  
Archiva Zootehnica     Open Access  
Journal of Agriculture and Food Research     Open Access  
Phytopathology Research     Open Access  
Rekayasa     Open Access  
Turkish Journal of Agricultural Engineering Research     Open Access  
Mustafa Kemal Üniversitesi Tarım Bilimleri Dergisi     Open Access  
Viticulture Data Journal     Open Access  
Proceedings of the Vertebrate Pest Conference     Open Access  
Ethiopian Journal of Sciences and Sustainable Development     Open Access  
Nexo Agropecuario     Open Access  
Dissertationen aus dem Julius Kühn-Institut     Open Access  
Berichte aus dem Julius Kühn-Institut     Open Access  
Journal für Kulturpflanzen     Open Access  
Food and Ecological Systems Modelling Journal     Open Access  
Journal of Animal Science, Biology and Bioeconomy     Open Access  
Agrosains : Jurnal Penelitian Agronomi     Open Access  
Agrotechnology Research Journal     Open Access  
PRIMA : Journal of Community Empowering and Services     Open Access  
Dinamika Pertanian     Open Access  


Plant Phenomics
Number of Followers: 1  

  This is an Open Access journal
ISSN (Print) 2643-6515
Published by Science Partner Journals
  • Moxa Wool in Different Purities and Different Growing Years Measured by
           Terahertz Spectroscopy

    • Abstract: Moxa wool is a traditional Chinese herbal medicine that can warm the channels to dispel coldness. At present, there is no unified index for evaluating the purity and growing years of the moxa wool sold on the market. Terpineol is one of the effective substances in the volatile oil of moxa wool, so we characterize the purity and growing years of moxa wool by studying terpineol. Gas chromatography-mass spectrometry (GC-MS) and high-performance liquid chromatography (HPLC) are the current methods for monitoring terpineol, but both involve complicated procedures. We established a linear fit that distinguishes the purities of moxa wool from the intensities (peak areas) of the terpineol characteristic peaks; the coefficient of determination (R²) was higher than 0.90. Furthermore, based on the characteristic peak position of standard terpineol, a correlation model with the purity and growing year of moxa wool was set up, thereby differentiating moxa wool quality. We also built a partial least squares (PLS) model of the growing years of moxa wool with high accuracy (determination coefficient greater than 0.98) and compared the quantitative accuracy of Raman spectroscopy with that of terahertz technology. The result is a new terahertz-spectroscopy method for evaluating the quality of moxa wool, which provides a new idea for identifying inferior moxa wool on the market and a new method for assessing moxa wool quality in traditional Chinese medicine.
      PubDate: 31 May 2022
  • Prediction of the Maturity of Greenhouse Grapes Based on Imaging

    • Abstract: To predict grape maturity in solar greenhouses, a plant phenotype-monitoring platform (Phenofix, France) was used to obtain RGB images of grapes from expansion to maturity. Horizontal and longitudinal diameters, compactness, soluble solid content (SSC), titratable acid content, and the SSC/acid ratio of the grapes were measured and evaluated. The color values of the grape skin were determined and fed to a back-propagation neural network (BPNN) to predict grape maturity. The results showed that the physical and chemical properties (PCP) of the three grape varieties changed significantly during the berry expansion stage and the color-changing maturity stage. According to the normalized rate of change of the PCP indicators, the ripening process of the three varieties could be divided into an immature stage and a mature stage (after which color changes occurred), as described by a maturity coefficient (Mc). When predicting grape maturity from the color values, individual color indices performed well for Drunk Incense, Muscat Hamburg, and Xiang Yue. The GPI ranked in the top three (up to 0.87) when the above indicators were combined with the BPNN to predict Mc by single-factor and combined-factor analysis. The prediction accuracy of the two-factor combination (RG and HI) was better for Drunk Incense, Muscat Hamburg, and Xiang Yue grapes (recognition accuracies of 79.3%, 78.2%, and 79.4%, respectively), and all of these predictive values were higher than those of the single-factor predictions. Using a confusion matrix to compare the predictive ability for Mc under the two-factor combination, the prediction accuracies were in the order Xiang Yue (88%) > Muscat Hamburg (81.3%) > Drunk Incense (76%). The results of this study provide an effective way to predict the ripeness of greenhouse grapes.
      PubDate: 30 Mar 2022
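The color-values-to-maturity mapping via a back-propagation network can be sketched with a small MLP. The color features, the labeling rule, and the network size below are all synthetic assumptions, not the paper's measured grape data or its actual BPNN architecture.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Toy stand-in: 3 normalized skin-color features per berry, with a made-up
# rule ("second channel > 0.5 means mature") standing in for the real Mc labels.
rng = np.random.default_rng(1)
colors = rng.uniform(size=(200, 3))
mature = (colors[:, 1] > 0.5).astype(int)

bpnn = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                     max_iter=2000, random_state=1)
bpnn.fit(colors[:150], mature[:150])          # train on 150 berries
acc = bpnn.score(colors[150:], mature[150:])  # held-out recognition accuracy
```

The held-out accuracy plays the same role as the 76-88% recognition accuracies the abstract reports per variety.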
  • Simultaneous Prediction of Wheat Yield and Grain Protein Content Using
           Multitask Deep Learning from Time-Series Proximal Sensing

    • Abstract: Wheat yield and grain protein content (GPC) are two main optimization targets for breeding and cultivation. Remote sensing provides nondestructive and early predictions of both yield and GPC. However, whether yield and GPC can be predicted simultaneously in one model, and with what accuracy and influencing factors, is still unclear. In this study, we made a systematic comparison of different deep learning models in terms of data fusion, time-series feature extraction, and multitask learning. The results showed that time-series data fusion significantly improved yield and GPC prediction accuracy, with R² values of 0.817 and 0.809. Multitask learning achieved simultaneous prediction of yield and GPC with accuracy comparable to the single-task model. We further proposed a two-to-two model that combines data fusion (two kinds of data sources as input) and multitask learning (two outputs) and compared different feature extraction layers, including RNN (recurrent neural network), LSTM (long short-term memory), CNN (convolutional neural network), and an attention module. The two-to-two model with the attention module achieved the best prediction accuracy for both yield and GPC. The temporal distribution of feature importance was visualized based on the attention feature values. Although the temporal patterns of structural traits and spectral traits were inconsistent, both were more important at the postanthesis stage than at the preanthesis stage. This study provides new insights into the simultaneous prediction of yield and GPC using deep learning from time-series proximal sensing, which may contribute to accurate and efficient predictions of agricultural production.
      PubDate: 29 Mar 2022
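The "one model, two outputs" idea behind multitask learning can be shown in its simplest form: a single ridge-regression solve that produces both targets from shared input features. The synthetic features and the two targets (stand-ins for yield and GPC) are assumptions for illustration; the paper's model is a deep network on time-series sensing data, not a linear solve.

```python
import numpy as np

# 100 plots x 12 shared "sensing" features; two targets generated from a
# common linear map plus noise (entirely synthetic).
rng = np.random.default_rng(2)
X = rng.normal(size=(100, 12))
W_true = rng.normal(size=(12, 2))
Y = X @ W_true + rng.normal(scale=0.05, size=(100, 2))  # columns: yield, GPC

lam = 1e-2                                              # ridge penalty
# One regularized solve fits both tasks at once (multitask in miniature).
W = np.linalg.solve(X.T @ X + lam * np.eye(12), X.T @ Y)
pred = X @ W                                            # shape (100, 2)
```

The point of the sketch is structural: both outputs come from one fitted parameter matrix, the miniature analogue of a shared backbone with two heads.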
  • Development and Validation of a Deep Learning Based Automated
           Minirhizotron Image Analysis Pipeline

    • Abstract: The root systems of crops play a significant role in agroecosystems. The root system is essential for water and nutrient uptake, plant stability, symbiosis with microbes, and a good soil structure. Minirhizotrons have been shown to be effective for noninvasively investigating the root system, so root traits such as root length can be obtained throughout the crop growing season. Analyzing datasets from minirhizotrons using common manual annotation methods with conventional software tools is time-consuming and labor-intensive. Therefore, an objective method for high-throughput image analysis that provides data for field root phenotyping is necessary. In this study, we developed a pipeline combining state-of-the-art software tools, using deep neural networks and automated feature extraction. This pipeline consists of two major components and was applied to large root image datasets from minirhizotrons. First, segmentation is performed by a neural network model trained with a small image sample; training and segmentation are done using "RootPainter." Then, automated feature extraction from the segments is carried out by "RhizoVision Explorer." To validate the results of our automated analysis pipeline, root lengths from manually annotated and automatically processed data were compared for more than 36,500 images. The results show a high correlation between manually and automatically determined root lengths. With respect to processing time, our new pipeline reduces the effort of manual annotation by 98.1-99.6%. Our pipeline, combining state-of-the-art software tools, significantly reduces the processing time for minirhizotron images. Thus, image analysis is no longer the bottleneck in high-throughput phenotyping approaches.
      PubDate: 28 May 2022
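The two-stage design (segment first, then extract traits from the segments) can be illustrated with a toy stand-in. The threshold segmenter and the pixel-count length proxy below are assumptions for illustration only; they are not RootPainter's neural segmentation or RhizoVision Explorer's skeleton-based measurements.

```python
import numpy as np

def segment(image, threshold=0.5):
    """Stage 1 (stand-in): binary root mask from a grayscale image."""
    return image > threshold

def extract_root_length(mask, mm_per_pixel=0.1):
    """Stage 2 (stand-in): crude length proxy = foreground pixels * pixel size."""
    return mask.sum() * mm_per_pixel

# One synthetic "minirhizotron image" with a single bright 80-pixel root.
image = np.zeros((100, 100))
image[50, 10:90] = 1.0
length_mm = extract_root_length(segment(image))   # 80 px * 0.1 mm = 8.0 mm
```

The composition `extract_root_length(segment(image))` is the whole pipeline in miniature: each stage can be swapped for a stronger tool without touching the other.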
  • Wheat Ear Segmentation Based on a Multisensor System and Superpixel Classification

    • Abstract: The automatic segmentation of ears in wheat canopy images is an important step in measuring ear density or extracting relevant plant traits separately for the different organs. Recent deep learning algorithms appear to be promising tools for accurately detecting ears in a wide diversity of conditions, but they remain complicated to implement and require a huge training database. This paper proposes an alternative that is easy and quick to train, and robust, for segmenting wheat ears from the heading to the maturity growth stage. The tested method is based on superpixel classification exploiting features from RGB and multispectral cameras. Three classifiers were trained on wheat images acquired from heading to maturity on two cultivars at different fertilizer levels. The best classifier, a support vector machine (SVM), yielded satisfactory segmentation and reached 94% accuracy. However, segmentation at the pixel level could not be assessed by the superpixel classification accuracy alone. For this reason, a second assessment method was proposed to consider the entire process, and a simple graphical tool was developed to annotate pixels. The strategy was to annotate a few pixels per image so that the entire image set could be annotated quickly, thus accounting for very diverse conditions. Results showed a lower segmentation score (F1-score) for the heading and flowering stages and for the zero-nitrogen-input treatment. The methodology appears appropriate for further work on the growth dynamics of the different wheat organs and for other segmentation challenges.
      PubDate: 28 Jan 2022
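The superpixel-classification idea reduces to: summarize each superpixel as a small feature vector, then let an SVM label it ear vs. background. The two-feature clusters below are synthetic stand-ins for the paper's RGB/multispectral features.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic superpixel features: "ear" and "background" clusters in 2D
# (assumed geometry; the real features come from two cameras).
rng = np.random.default_rng(3)
ears = rng.normal(loc=[1.0, 1.0], scale=0.2, size=(100, 2))
background = rng.normal(loc=[-1.0, -1.0], scale=0.2, size=(100, 2))
X = np.vstack([ears, background])
y = np.array([1] * 100 + [0] * 100)

svm = SVC(kernel="rbf").fit(X[::2], y[::2])   # train on every other superpixel
acc = svm.score(X[1::2], y[1::2])             # accuracy on the held-out half
```

On well-separated toy clusters the SVM is near-perfect; the abstract's 94% reflects the much harder real imagery.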
  • Application of UAV Multisensor Data and Ensemble Approach for
           High-Throughput Estimation of Maize Phenotyping Traits

    • Abstract: High-throughput estimation of phenotypic traits from UAV (unmanned aerial vehicle) images helps to improve the screening efficiency of maize breeding. Accurately estimating the phenotypic traits of breeding maize at plot scale helps to promote gene mining for specific traits and provides a guarantee for accelerating the breeding of superior varieties. Constructing an efficient and accurate estimation model is the key to applying UAV-based multisensor data. This study aims to apply ensemble learning models to improve the feasibility and accuracy of estimating maize phenotypic traits using UAV-based red-green-blue (RGB) and multispectral sensors. UAV images were obtained at four growth stages. The reflectance of the visible-light bands, canopy coverage, plant height (PH), and texture information were extracted from the RGB images, and vegetation indices were calculated from the multispectral images. We compared and analyzed the estimation accuracy of single-type features and multiple features for the LAI (leaf area index), fresh weight (FW), and dry weight (DW) of maize. The basic models included ridge regression (RR), support vector machine (SVM), random forest (RF), Gaussian process (GP), and k-nearest neighbors (K-NN). The ensemble learning models included stacking and Bayesian model averaging (BMA). The results showed that the ensemble learning models improved the accuracy and stability of maize phenotypic trait estimation. Among the features extracted from the UAV RGB images, the highest accuracy was obtained by the combination of spectrum, structure, and texture features, and the model constructed using all features from both sensors had the best accuracy. The estimation accuracies of the ensemble learning models, including stacking and BMA, were higher than those of the basic models; the coefficients of determination (R²) of the optimal validation results were 0.852, 0.888, and 0.929 for LAI, FW, and DW, respectively. Therefore, the combination of UAV-based multisource data and ensemble learning models can accurately estimate the phenotypic traits of breeding maize at plot scale.
      PubDate: 28 Aug 2022
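The stacking ensemble named in the abstract can be sketched with scikit-learn: several of the listed base models (ridge, SVR, random forest) combined by a meta-learner. The synthetic "plot features", the target, and the choice of ridge as the final estimator are assumptions; the paper's exact model configuration is not given here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.svm import SVR

# Synthetic plots: 8 features, target with a linear and a quadratic term.
rng = np.random.default_rng(4)
X = rng.normal(size=(150, 8))
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=150)

stack = StackingRegressor(
    estimators=[("rr", Ridge()),
                ("svr", SVR()),
                ("rf", RandomForestRegressor(random_state=4))],
    final_estimator=Ridge(),      # meta-learner blends the base predictions
)
stack.fit(X[:100], y[:100])
r2 = stack.score(X[100:], y[100:])   # validation R², analogous to the abstract's
```

The meta-learner sees cross-validated base-model predictions, which is what gives stacking its stability edge over any single base model.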
  • How Useful Is Image-Based Active Learning for Plant Organ Segmentation?

    • Abstract: Training deep learning models typically requires a huge amount of labeled data, which is expensive to acquire, especially in dense prediction tasks such as semantic segmentation. Moreover, plant phenotyping datasets pose additional challenges of heavy occlusion and varied lighting conditions, which make annotations more time-consuming to obtain. Active learning helps to reduce the annotation cost by selecting for labeling the samples that are most informative to the model, thus improving model performance with fewer annotations. Active learning for semantic segmentation has been well studied on datasets such as PASCAL VOC and Cityscapes. However, its effectiveness on plant datasets has not received much attention. To bridge this gap, we empirically study and benchmark the effectiveness of four uncertainty-based active learning strategies on three natural plant organ segmentation datasets. We also study their behaviour in response to variations in training configurations in terms of the augmentations used, the scale of training images, active learning batch sizes, and train-validation set splits.
      PubDate: 24 Feb 2022
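A minimal sketch of uncertainty-based selection, the family of strategies this abstract benchmarks: rank unlabeled samples by predictive entropy and pick the top-k for annotation. The probability rows below are made up; in practice they would come from the segmentation model's softmax outputs.

```python
import numpy as np

def entropy(probs):
    """Shannon entropy of each row of class probabilities."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

def select_most_uncertain(probs, k):
    """Indices of the k highest-entropy (most informative) samples."""
    return np.argsort(entropy(probs))[::-1][:k]

probs = np.array([
    [0.98, 0.02],   # confident -> low entropy
    [0.50, 0.50],   # maximally uncertain
    [0.70, 0.30],
    [0.55, 0.45],
])
picked = select_most_uncertain(probs, k=2)   # -> indices 1 and 3
```

Entropy is only one of several uncertainty scores (margin and least-confidence are common alternatives); swapping the scoring function changes the strategy without changing the loop.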
  • PSegNet: Simultaneous Semantic and Instance Segmentation for Point Clouds
           of Plants

    • Abstract: Phenotyping of plant growth improves the understanding of complex genetic traits and ultimately expedites the development of modern breeding and intelligent agriculture. In phenotyping, segmentation of 3D point clouds of plant organs such as leaves and stems contributes to automatic growth monitoring and reflects the extent of stress received by the plant. In this work, we first propose Voxelized Farthest Point Sampling (VFPS), a novel point cloud downsampling strategy, to prepare our plant dataset for training deep neural networks. Then, a deep learning network, PSegNet, was specially designed for segmenting point clouds of several plant species. The effectiveness of PSegNet originates from three new modules: the Double-Neighborhood Feature Extraction Block (DNFEB), the Double-Granularity Feature Fusion Module (DGFFM), and the Attention Module (AM). After training on the plant dataset prepared with VFPS, the network can simultaneously perform semantic segmentation and leaf instance segmentation for three plant species. Compared to several mainstream networks such as PointNet++, ASIS, SGPN, and PlantNet, PSegNet obtained the best segmentation results both quantitatively and qualitatively. In semantic segmentation, PSegNet achieved 95.23%, 93.85%, 94.52%, and 89.90% for the mean Prec, Rec, F1, and IoU, respectively. In instance segmentation, PSegNet achieved 88.13%, 79.28%, 83.35%, and 89.54% for the mPrec, mRec, mCov, and mWCov, respectively.
      PubDate: 23 May 2022
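VFPS builds on plain farthest point sampling (FPS), which can be written in a few lines; the voxelization step that distinguishes VFPS is omitted here, so this is only the FPS core, sketched on a tiny synthetic cloud.

```python
import numpy as np

def farthest_point_sampling(points, k, start=0):
    """Greedily pick k point indices, each maximizing its distance
    to the set already chosen (plain FPS, no voxelization)."""
    chosen = [start]
    dist = np.linalg.norm(points - points[start], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))                 # farthest from chosen set
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(points - points[nxt], axis=1))
    return np.array(chosen)

# Tiny cloud: two near-duplicate points at the origin plus two far points.
cloud = np.array([[0.0, 0, 0], [0.1, 0, 0], [5.0, 0, 0], [0, 5.0, 0]])
idx = farthest_point_sampling(cloud, k=3)   # skips the near-duplicate
```

Starting from index 0, the sampler picks the two distant points and never the near-duplicate, which is exactly the coverage property that makes FPS a good downsampler for organ-level detail.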
  • Enabling Breeding Selection for Biomass in Slash Pine Using UAV-Based Imaging

    • Abstract: Traditional methods used to monitor the aboveground biomass (AGB) and belowground biomass (BGB) of slash pine (Pinus elliottii) rely on on-ground measurements, which are time- and cost-consuming and suited only to small spatial scales. In this paper, we successfully applied unmanned aerial vehicle data integrated with structure from motion (UAV-SfM) to estimate the tree height, crown area (CA), AGB, and BGB of slash pine in slash pine breeding plantation sites. The CA of each tree was segmented by using marker-controlled watershed segmentation with treetops and a minimum height of three meters. Moreover, the genetic variation of these traits was analyzed and used to estimate heritability (h²). The results showed a promising correlation between UAV and ground-truth data, with R² ranging from 0.58 to 0.85 at a 70 m flying height, and moderate heritability estimates for all traits, with h² ranging from 0.13 to 0.47. Site influenced the h² of slash pine trees: h² in site 1 ranged from 0.13 to 0.25, lower than in site 2 (range: 0.38 to 0.47). Similar genetic gains were obtained with both UAV and ground-truth data; thus, breeding selection is still possible. The method described in this paper provides faster, higher-throughput, and more cost-effective UAV-SfM surveys that can monitor a larger area of breeding plantations than traditional ground surveys while maintaining data accuracy.
      PubDate: 22 Apr 2022
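A numeric sketch of the heritability idea: under an assumed balanced genotypes-by-replicates layout, broad-sense heritability on an entry-mean basis is H² = Vg / (Vg + Ve/r), estimated from between- and within-genotype mean squares. The simulated data and the simple one-way ANOVA estimator are illustrative assumptions; the paper fits proper genetic models to UAV-derived traits.

```python
import numpy as np

# Simulate 40 genotypes x 4 replicates with genetic variance Vg ~ 1
# and residual variance Ve ~ 1 (assumed, for illustration).
rng = np.random.default_rng(5)
n_geno, n_rep = 40, 4
geno_effect = rng.normal(scale=1.0, size=(n_geno, 1))
obs = geno_effect + rng.normal(scale=1.0, size=(n_geno, n_rep))

ms_geno = n_rep * np.var(obs.mean(axis=1), ddof=1)  # between-genotype mean square
ms_err = np.mean(np.var(obs, axis=1, ddof=1))       # within-genotype mean square
vg = (ms_geno - ms_err) / n_rep                     # genetic variance estimate
h2 = vg / (vg + ms_err / n_rep)                     # heritability of entry means
```

With Vg = Ve = 1 and 4 replicates, the expected entry-mean heritability is 1 / (1 + 0.25) = 0.8, so the estimate should land in that neighborhood.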
  • 3dCAP-Wheat: An Open-Source Comprehensive Computational Framework
           Precisely Quantifies Wheat Foliar, Nonfoliar, and Canopy Photosynthesis

    • Abstract: Canopy photosynthesis is the sum of the photosynthesis of all above-ground photosynthetic tissues. The quantitative roles of nonfoliar tissues in canopy photosynthesis remain elusive due to methodological limitations. Here, we develop the first complete canopy photosynthesis model incorporating all above-ground photosynthetic tissues and validate this model on wheat with state-of-the-art gas exchange measurement facilities. The new model precisely predicts wheat canopy gas exchange rates at different growth stages, under different weather conditions, and under canopy architectural perturbations. Using the model, we systematically study (1) the contribution of foliar and nonfoliar tissues to wheat canopy photosynthesis and (2) the responses of wheat canopy photosynthesis to plant physiological and architectural changes. We found that (1) at the tillering, heading, and milking stages, nonfoliar tissues can contribute ~4%, ~32%, and ~50% of daily gross canopy photosynthesis (and ~2%, ~15%, and ~-13% of daily net canopy photosynthesis) and absorb ~6%, ~42%, and ~60% of total light, respectively; (2) under favorable conditions, increasing spike photosynthetic activity, rather than enlarging spike size or awn size, can enhance canopy photosynthesis; (3) covariation in tissue respiratory rate and photosynthetic rate may be a major factor responsible for a less-than-expected increase in daily net canopy photosynthesis; and (4) in general, erect leaves, a lower spike position, shorter plant height, and proper plant densities can benefit daily net canopy photosynthesis. Overall, the model, together with the facilities for quantifying plant architecture and tissue gas exchange, provides an integrated platform to study canopy photosynthesis and support the rational design of photosynthetically efficient wheat crops.
      PubDate: 21 Jul 2022
  • Corrigendum to “Automatic Fruit Morphology Phenome and Genetic Analysis:
           An Application in the Octoploid Strawberry”

    • PubDate: 20 Jan 2022
  • Robust High-Throughput Phenotyping with Deep Segmentation Enabled by a
           Web-Based Annotator

    • Abstract: The abilities of plant biologists and breeders to characterize the genetic basis of physiological traits are limited by their abilities to obtain quantitative data representing precise details of trait variation, and particularly to collect this data at a high-throughput scale with low cost. Although deep learning methods have demonstrated unprecedented potential to automate plant phenotyping, these methods commonly rely on large training sets that can be time-consuming to generate. Intelligent algorithms have therefore been proposed to enhance the productivity of these annotations and reduce human effort. We propose a high-throughput phenotyping system that features a Graphical User Interface (GUI) and a novel interactive segmentation algorithm: Semantic-Guided Interactive Object Segmentation (SGIOS). By providing a user-friendly interface and intelligent assistance with annotation, this system offers potential to streamline and accelerate the generation of training sets, reducing the effort required by the user. Our evaluation shows that our proposed SGIOS model requires fewer user inputs compared to state-of-the-art models for interactive segmentation. As a case study of the use of the GUI for genetic discovery in plants, we present an example of results from a preliminary genome-wide association study (GWAS) of in planta regeneration in Populus trichocarpa (poplar). We further demonstrate that the inclusion of a semantic prior map with SGIOS can accelerate the training process for future GWAS, using a sample of a dataset extracted from a poplar GWAS of in vitro regeneration. The capabilities of our phenotyping system surpass those of unassisted humans in rapidly and precisely phenotyping our traits of interest. The scalability of this system enables large-scale phenomic screens that would otherwise be time-prohibitive, thereby providing increased power for GWAS, mutant screens, and other studies relying on large sample sizes to characterize the genetic basis of trait variation. Our user-friendly system can be used by researchers lacking a computational background, thus helping to democratize the use of deep segmentation as a tool for plant phenotyping.
      PubDate: 19 May 2022
  • Shortwave Radiation Calculation for Forest Plots Using Airborne LiDAR Data
           and Computer Graphics

    • Abstract: Forested environments feature a highly complex radiation regime, and solar radiation is hindered from penetrating into the forest by the 3D canopy structure; hence, canopy shortwave radiation varies spatiotemporally, seasonally, and meteorologically, making the radiant flux challenging to both measure and model. Here, we developed a synergetic method using airborne LiDAR data and computer graphics to model the forest canopy and calculate the radiant fluxes of three forest plots (conifer, broadleaf, and mixed). Directional incident solar beams were emitted according to the solar altitude and azimuth angles, and the forest canopy surface was decomposed into triangular elements. A ray tracing algorithm was used to simulate the propagation of reflected and transmitted beams within the forest canopy. Our method accurately modeled the solar radiant fluxes and demonstrated good agreement with the plot-scale results of hemispherical photo-based HPEval software and pyranometer measurements. The maximum incident radiant flux appeared in the conifer plot at noon on June 15 due to the largest solar altitude angle (81.21°) and the dense clustering of tree crowns; the conifer plot also received the maximum reflected radiant flux (10.91-324.65 kW) due to the higher reflectance of coniferous trees and the better absorption of reflected solar beams. However, the broadleaf plot received more transmitted radiant flux (37.7-226.71 kW) for the trees in the shaded area due to the larger transmittance of broadleaf species. Our method can directly simulate the detailed plot-scale distribution of canopy radiation and is valuable for research on light-dependent biophysiological processes.
      PubDate: 18 Jul 2022
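The geometric core of tracing solar beams through a triangulated canopy is the ray-triangle intersection test. Below is the standard Möller-Trumbore algorithm on one synthetic triangle; the paper's actual implementation is not public, so treat this as a generic sketch of that single step, not their code.

```python
import numpy as np

def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller-Trumbore test: does the ray hit triangle (v0, v1, v2)?"""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1 @ p
    if abs(det) < eps:                 # ray parallel to the triangle plane
        return False
    inv = 1.0 / det
    s = origin - v0
    u = (s @ p) * inv                  # first barycentric coordinate
    if u < 0 or u > 1:
        return False
    q = np.cross(s, e1)
    v = (direction @ q) * inv          # second barycentric coordinate
    if v < 0 or u + v > 1:
        return False
    return (e2 @ q) * inv > eps        # hit must lie in front of the origin

# One horizontal "canopy element" at height z = 1 and a downward solar ray.
tri = [np.array([0.0, 0, 1]), np.array([1.0, 0, 1]), np.array([0.0, 1, 1])]
hit = ray_hits_triangle(np.array([0.2, 0.2, 5.0]),
                        np.array([0.0, 0, -1.0]), *tri)   # True
```

The full method repeats this test for many directional beams and accumulates the intercepted, reflected, and transmitted energy per triangle.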
  • A Review of High-Throughput Field Phenotyping Systems: Focusing on Ground Robots

    • Abstract: Manual assessments of plant phenotypes in the field can be labor-intensive and inefficient. High-throughput field phenotyping systems, and in particular robotic systems, play an important role in automating data collection and in measuring novel and fine-scale phenotypic traits that were previously unattainable by humans. The main goal of this paper is to review the state of the art of high-throughput field phenotyping systems with a focus on autonomous ground robotic systems. The paper first provides a brief review of nonautonomous ground phenotyping systems, including tractors, manually pushed or motorized carts, gantries, and cable-driven systems. Then, a detailed review of autonomous ground phenotyping robots is provided with regard to each robot's main components, including mobile platforms, sensors, manipulators, computing units, and software. It also reviews the navigation algorithms and simulation tools developed for phenotyping robots and the applications of phenotyping robots in measuring plant phenotypic traits and collecting phenotyping datasets. The review ends with a discussion of current major challenges and future research directions.
      PubDate: 17 Jun 2022
  • Spectral Preprocessing Combined with Deep Transfer Learning to Evaluate
           Chlorophyll Content in Cotton Leaves

    • Abstract: Rapid determination of chlorophyll content is significant for evaluating cotton's nutritional and physiological status. Hyperspectral technology equipped with multivariate analysis methods has been widely used for chlorophyll content detection. However, a model developed on one batch or variety cannot produce the same effect on another because of variations in, for example, samples and measurement conditions. Considering that it is costly to establish models for each batch or variety, the feasibility of using spectral preprocessing combined with deep transfer learning for model transfer was explored. Seven different spectral preprocessing methods were discussed, and a self-designed convolutional neural network (CNN) was developed to build models and conduct transfer tasks by fine-tuning. The approach combining first derivative (FD) and standard normal variate transformation (SNV) was chosen as the best pretreatment. For the target-domain dataset, the fine-tuned CNN based on spectra processed by FD + SNV outperformed conventional partial least squares (PLS) and support vector machine regression (SVR). Although the performance of the fine-tuned CNN with a smaller dataset was slightly lower, it was still better than the conventional models and achieved satisfactory results. Ensemble preprocessing combined with deep transfer learning could thus be an effective approach for estimating chlorophyll content across cotton varieties, offering a new possibility for evaluating the nutritional status of cotton in the field.
      PubDate: 17 Aug 2022
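The winning pretreatment named in the abstract, first derivative followed by standard normal variate, is easy to sketch in NumPy. The toy spectra and the finite-difference derivative are illustrative assumptions (chemometrics software often uses Savitzky-Golay derivatives instead).

```python
import numpy as np

def first_derivative(spectra):
    """FD: finite-difference derivative along the wavelength axis."""
    return np.diff(spectra, axis=1)

def snv(spectra):
    """SNV: center and scale each spectrum by its own mean and std."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Toy batch: 10 smooth-ish "spectra" of 200 points each.
rng = np.random.default_rng(6)
raw = rng.normal(size=(10, 200)).cumsum(axis=1)
pretreated = snv(first_derivative(raw))     # FD + SNV, shape (10, 199)
```

After SNV every spectrum has zero mean and unit standard deviation, which removes multiplicative scatter effects before the spectra reach the CNN.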
  • EasyDAM_V2: Efficient Data Labeling Method for Multishape, Cross-Species
           Fruit Detection

    • Abstract: In modern smart orchards, fruit detection models based on deep learning require expensive dataset labeling work to support the construction of detection models, resulting in high model application costs. Our previous work combined generative adversarial networks (GANs) and pseudolabeling methods to transfer labels from one species to another to save labeling costs. However, only the color and texture features of images could be migrated, so the accuracy of the data labeling still needed improvement. Therefore, this study proposes the EasyDAM_V2 model as an improved data labeling method for multishape and cross-species fruit detection. First, an image translation network named Across-CycleGAN is proposed to generate fruit images from the source domain (fruit images with labels) to the target domain (fruit images without labels), even with partial shape differences. Then, a pseudolabel adaptive threshold selection strategy was designed to adaptively adjust the confidence threshold of the fruit detection model and dynamically update the pseudolabels to generate labels for images from the unlabeled target domain. In this paper, we use a labeled orange dataset as the source domain and pitaya and mango datasets as the target domains to evaluate the performance of the proposed method. The results showed that the average labeling precision values for the pitaya and mango datasets were 82.1% and 85.0%, respectively. The proposed EasyDAM_V2 model can therefore be used for label transfer across fruit species, even with partial shape differences, to reduce the cost of data labeling.
      PubDate: 12 Sep 2022
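A hedged sketch of adaptive-threshold pseudolabeling: keep only detections whose confidence clears a threshold derived from the current score distribution, then treat them as labels for the next training round. The quantile rule below is an illustrative stand-in; the paper's actual threshold-selection strategy is not reproduced here.

```python
import numpy as np

def pseudolabels(scores, quantile=0.7):
    """Return (kept indices, threshold): detections whose confidence
    clears a quantile-based threshold become pseudolabels."""
    threshold = float(np.quantile(scores, quantile))
    keep = np.flatnonzero(scores >= threshold)
    return keep, threshold

# Toy detector confidences for six candidate fruit boxes.
scores = np.array([0.95, 0.40, 0.88, 0.15, 0.72, 0.60])
keep, thr = pseudolabels(scores)    # threshold 0.80 keeps indices 0 and 2
```

Because the threshold is recomputed from each round's score distribution, it adapts as the detector improves, rather than staying at a fixed hand-picked value.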
  • Spectrometric Prediction of Nitrogen Content in Different Tissues of Slash
           Pine Trees

    • Abstract: The internal cycling of nitrogen (N) storage and consumption in trees is an important physiological mechanism associated with tree growth. Here, we examined the capability of near-infrared spectroscopy (NIR) to quantify N concentration across tissue types (needle, trunk, branch, and root) without time- and cost-consuming laboratory analysis. NIR spectral data were collected from different tissues of slash pine trees, and the N concentration in each tissue was determined using a standard analytical method in the laboratory. Partial least squares regression (PLSR) models were fitted on a randomly selected training set. Both the full-length spectra and the spectra selected by significant multivariate correlation (sMC) were used for model calibration. The branch, needle, and trunk PLSR models performed well for N concentration using both the full-length and sMC-selected NIR spectra. The generic model achieved reliable accuracy, with R²C and R²CV of 0.62 and 0.66 using the full-length spectra, and 0.61 and 0.65 using the sMC-selected spectra, respectively. Models built for an individual tissue did not perform well when applied to other tissues. Five important spectral regions, around 1480, 1650, 1744, 2170, and 2390 nm, were found to be highly related to the N content of plant tissues. This study demonstrates a rapid and efficient method for estimating N content in different tissues that can serve as a tool for studies of tree N storage and remobilization.
      PubDate: 12 Jan 2022
  • Evaluation of Postharvest Senescence of Broccoli via Hyperspectral Imaging

    • Abstract: Fresh fruit and vegetables are invaluable for human health; however, their quality often deteriorates before reaching consumers due to ongoing biochemical processes and compositional changes. We currently lack objective indices of the freshness of fruit and vegetables, limiting our capacity to improve product quality and ultimately leading to food loss and waste. In this study, we hypothesized that certain proteins and compounds, such as glucosinolates, could be used as indicators to monitor the freshness of broccoli following harvest. To test this, glucosinolate contents in broccoli measured by HPLC and the transcript expression of glucosinolate biosynthetic genes in response to postharvest stresses were evaluated. We found that activity of the glucosinolate biosynthetic pathway coincided with the progression of senescence in postharvest broccoli during storage. Additionally, we applied machine learning-based hyperspectral image (HSI) analysis, unmixing, and subpixel target detection approaches to evaluate glucosinolate levels and thereby detect postharvest senescence in broccoli. This study provides an accessible approach to precisely estimating freshness in broccoli through machine learning-based hyperspectral image analysis. Such a tool would allow significant advances in postharvest logistics and bolster the availability of high-quality, nutritious fresh produce.
      PubDate: 09 May 2022
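The "unmixing" step can be illustrated with a non-negative least-squares abundance estimate for a single mixed pixel (the two endmember spectra below are invented toy numbers, not broccoli measurements, and the fresh/senescent labels are my own):

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember spectra (columns): "fresh" and "senescent" tissue,
# each sampled at four wavelengths.
E = np.array([[0.90, 0.20],
              [0.70, 0.30],
              [0.10, 0.80],
              [0.05, 0.90]])

# A mixed pixel composed of 70% fresh and 30% senescent signal.
pixel = E @ np.array([0.7, 0.3])

# Solve min ||E a - pixel|| subject to a >= 0, then normalize to fractions.
abundances, residual = nnls(E, pixel)
fractions = abundances / abundances.sum()
```

In a full HSI pipeline this solve would run per pixel, and the senescent fraction map could then feed a senescence classifier.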
  • Estimating Photosynthetic Attributes from High-Throughput Canopy
           Hyperspectral Sensing in Sorghum

    • Abstract: Sorghum, a genetically diverse C4 cereal, is an ideal model to study natural variation in photosynthetic capacity. Specific leaf nitrogen (SLN) and leaf mass per leaf area (LMA), as well as maximal rates of Rubisco carboxylation, phosphoenolpyruvate (PEP) carboxylation, and electron transport, quantified using a C4 photosynthesis model, were evaluated in two field-grown training sets (including 124 genotypes) in 2019 and 2020. Partial least squares regression (PLSR) was used to predict these photosynthetic parameters, SLN, and LMA from tractor-based hyperspectral sensing. The PLSR models were further assessed by extrapolating them to two genome-wide association study trials adjacent to the training sets, including 650 genotypes in 2019 and 634 genotypes in 2020. The predicted traits showed medium to high heritability, and genome-wide association studies using the predicted values identified four QTL for one of the photosynthetic traits and two QTL for another. Candidate genes within 200 kb of the QTL were involved in nitrogen storage, which is closely associated with Rubisco, while not directly associated with Rubisco activity per se. The latter QTL were enriched for candidate genes involved in electron transport. These outcomes suggest the methods described here hold great promise for effectively screening large germplasm collections for enhanced photosynthetic capacity.
      PubDate: 08 Apr 2022
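The heritability assessment mentioned above can be sketched as a one-way ANOVA variance-component estimate on replicated plot values (a hypothetical simplification: the function name, the balanced-replication assumption, and the entry-mean formula choice are mine, not necessarily the paper's):

```python
import numpy as np

def broad_sense_h2(values, genotypes):
    """Entry-mean broad-sense heritability H^2 = Vg / (Vg + Ve / r).

    Variance components come from a balanced one-way ANOVA:
    E[MSB] = Ve + r * Vg, E[MSW] = Ve, with r replicates per genotype.
    """
    values = np.asarray(values, dtype=float)
    genotypes = np.asarray(genotypes)
    groups = [values[genotypes == g] for g in np.unique(genotypes)]
    r = len(groups[0])  # assumes every genotype has the same replication
    k = len(groups)
    grand = values.mean()
    msb = r * sum((g.mean() - grand) ** 2 for g in groups) / (k - 1)
    msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (k * (r - 1))
    vg = max((msb - msw) / r, 0.0)  # genotypic variance, floored at zero
    return vg / (vg + msw / r)
```

For a trait with genotypic variance 1 and residual variance 0.25 across 3 replicates, the expected value is about 1 / (1 + 0.25/3), i.e. roughly 0.92.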
  • Objective Phenotyping of Root System Architecture Using Image Augmentation
           and Machine Learning in Alfalfa (Medicago sativa L.)

    • Abstract: Active breeding programs specifically for root system architecture (RSA) phenotypes remain rare; however, breeding for branch and taproot types in the perennial crop alfalfa is ongoing. Phenotyping in this and other crops for active RSA breeding has mostly used visual scoring of specific traits or subjective classification into different root types. While image-based methods have been developed, translation to applied breeding is limited. This research is aimed at developing and comparing image-based RSA phenotyping methods using machine and deep learning algorithms for objective classification of 617 root images from mature alfalfa plants collected from the field to support the ongoing breeding efforts. Our results show that unsupervised machine learning tends to incorrectly classify roots into a normal distribution with most lines predicted as the intermediate root type. Encouragingly, random forest and TensorFlow-based neural networks can classify the root types into branch-type, taproot-type, and an intermediate taproot-branch type with 86% accuracy. With image augmentation, the prediction accuracy was improved to 97%. Coupling the predicted root type with its prediction probability will give breeders a confidence level for better decisions to advance the best and exclude the worst lines from their breeding program. This machine and deep learning approach enables accurate classification of the RSA phenotypes for genomic breeding of climate-resilient alfalfa.
      PubDate: 07 Apr 2022
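The augmentation-plus-classifier recipe can be sketched on toy data (the 8x8 "root images" below are invented patterns, not real alfalfa imagery; the intermediate class and the paper's TensorFlow networks are omitted for brevity):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def make_image(kind):
    """Toy 8x8 stand-ins: a vertical stripe for 'taproot-type' (0),
    a broad top mass for 'branch-type' (1). Purely illustrative."""
    img = rng.normal(scale=0.1, size=(8, 8))
    if kind == 0:
        img[:, 3:5] += 1.0
    else:
        img[0:3, :] += 1.0
    return img

def augment(img):
    """Minimal augmentation: the original view plus a horizontal flip."""
    return [img, np.fliplr(img)]

X, y = [], []
for label in (0, 1):
    for _ in range(30):
        for view in augment(make_image(label)):
            X.append(view.ravel())
            y.append(label)
X, y = np.array(X), np.array(y)

# Train on alternate samples, evaluate on the rest.
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X[::2], y[::2])
acc = clf.score(X[1::2], y[1::2])
```

The per-sample class probabilities from `clf.predict_proba` would supply the prediction confidence that the abstract suggests reporting alongside each root-type call.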
  • Unsupervised Plot-Scale LAI Phenotyping via UAV-Based Imaging, Modelling,
           and Machine Learning

    • Abstract: High-throughput phenotyping has become a frontier for accelerating breeding by linking genetics to crop growth estimation, which requires accurate estimation of leaf area index (LAI). This study developed a hybrid method that trains random forest regression (RFR) models on synthetic datasets generated by a radiative transfer model to estimate LAI from UAV-based multispectral images. The RFR models were evaluated on both (i) subsets of the synthetic datasets and (ii) observed data from two field experiments (i.e., Exp16 and Exp19). Given that the parameter ranges and soil reflectance are well calibrated in the synthetic training data, RFR models can accurately predict LAI from canopy reflectance captured in field conditions, with systematic overestimation for LAI
      PubDate: 04 Jul 2022
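The train-on-synthetic-data idea can be sketched by substituting a toy forward function for the radiative transfer model (the exponential reflectance curves, band choices, and noise levels below are invented; a real pipeline would use PROSAIL-style simulations and multispectral bands):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

# Toy forward "radiative transfer": red reflectance decays and NIR
# saturates as LAI increases, plus sensor noise.
lai = rng.uniform(0.1, 6.0, size=500)
red = 0.25 * np.exp(-0.6 * lai) + rng.normal(scale=0.01, size=500)
nir = 0.10 + 0.5 * (1 - np.exp(-0.45 * lai)) + rng.normal(scale=0.01, size=500)
X = np.column_stack([red, nir])

# Train RFR on the synthetic pairs, validate on a held-out synthetic subset.
rfr = RandomForestRegressor(n_estimators=100, random_state=0)
rfr.fit(X[:400], lai[:400])
r2 = rfr.score(X[400:], lai[400:])
```

The saturation of NIR at high LAI in this toy model mirrors why retrieval error grows at the top of the LAI range.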
  • End-to-End Fusion of Hyperspectral and Chlorophyll Fluorescence Imaging to
           Identify Rice Stresses

    • Abstract: Herbicides and heavy metals are hazardous environmental pollutants that stress plants and harm humans and animals. Identifying stress types can help trace stress sources, manage plant growth, and improve stress-resistant breeding. In this research, hyperspectral imaging (HSI) and chlorophyll fluorescence imaging (Chl-FI) were adopted to identify rice plants under two herbicide stresses (butachlor (DCA) and quinclorac (ELK)) and two heavy metal stresses (cadmium (Cd) and copper (Cu)). Visible/near-infrared spectra of leaves (L-VIS/NIR) and stems (S-VIS/NIR) extracted from HSI, and chlorophyll fluorescence kinetic curves of leaves (L-Chl-FKC) and stems (S-Chl-FKC) extracted from Chl-FI, were fused to establish models detecting stress from these hazardous substances. Novel end-to-end deep fusion models were proposed for low-level, middle-level, and high-level information fusion to improve identification accuracy. Results showed that the high-level fusion-based convolutional neural network (CNN) models reached the highest detection accuracy (97.7%), outperforming the models using a single data source.
      PubDate: 02 Aug 2022
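As a sketch of the high-level (decision-level) end of the fusion spectrum, class probabilities from two modality-specific models can be combined by a weighted average (the paper's fusion is learned end-to-end inside a CNN; this standalone function and its weighting scheme are only illustrative):

```python
import numpy as np

def high_level_fusion(prob_a, prob_b, w=0.5):
    """Decision-level fusion of two models' per-class probabilities.

    prob_a, prob_b: arrays of shape (n_samples, n_classes), e.g. from an
    HSI branch and a Chl-FI branch. Returns the fused class index per sample.
    """
    fused = w * np.asarray(prob_a) + (1 - w) * np.asarray(prob_b)
    return fused.argmax(axis=1)
```

Low-level fusion would instead concatenate the raw spectra and kinetic curves before any model sees them; middle-level fusion would concatenate learned features inside the network.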
  • Dynamic Color Transform Networks for Wheat Head Detection

    • Abstract: Wheat head detection can measure wheat traits such as head density and head characteristics. Standard wheat breeding largely relies on manual observation to detect wheat heads, a tedious and inefficient procedure. The emergence of affordable camera platforms provides opportunities for deploying computer vision (CV) algorithms in wheat head detection, enabling automated measurement of wheat traits. Accurate wheat head detection, however, is challenging due to the variability of observation circumstances and the uncertainty of wheat head appearances. In this work, we propose a simple but effective idea, dynamic color transform (DCT), for accurate wheat head detection. This idea is based on the observation that modifying the color channels of an input image can significantly reduce false negatives and therefore improve detection results. DCT follows a linear color transform and can be easily implemented as a dynamic network. A key property of DCT is that the transform parameters are data-dependent, so illumination variations can be corrected adaptively. The DCT network can be incorporated into any existing object detector. Experimental results on the Global Wheat Head Detection (GWHD) 2021 dataset show that DCT achieves notable improvements with negligible parameter overhead. In addition, DCT played an important role in our solution to the Global Wheat Challenge (GWC) 2021, where it ranked first on the initial public leaderboard and received the runner-up award on the final private testing set, as measured by Average Domain Accuracy (ADA).
      PubDate: 01 Feb 2022
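The linear color transform at the heart of DCT can be sketched as a per-channel affine map (in the paper the parameters are predicted per image by a small network, making the transform dynamic; here they are supplied directly, and the parameter values are made up):

```python
import numpy as np

def dynamic_color_transform(img, params):
    """Apply a per-channel linear transform y = a * x + b to an RGB image.

    img: float array of shape (H, W, 3) with values in [0, 1].
    params: tuple (a, b), each of shape (3,), normally data-dependent
    outputs of a small parameter-prediction network.
    """
    a, b = params
    return np.clip(img * a + b, 0.0, 1.0)
```

Because the map is affine per channel, it adds essentially no inference cost, which matches the abstract's claim of negligible parameter overhead.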
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Tel: +00 44 (0)131 4513762


JournalTOCs © 2009-