
Use Cases of Technologies in Precision Agriculture: Selected Abstracts Submitted to the 10th Asian-Australasian Conference on Precision Agriculture (ACPA10)

Written By

Redmond R. Shamshiri, Maryam Behjati, Siva K. Balasundram, Christopher Teh Boon Sung, Ibrahim A. Hameed, Ahmad Kamil Zolkafli, An Ho-Song, Arina Mohd Noh, Badril Hisham Abu Bakar, W.A. Balogun, Beom-Sun Kang, Cong-Chuan Pham, Dang Khanh Linh Le, Dong Hee Noh, Dongseok Kim, Eliezel Habineza, Farizal Kamaroddin, Gookhwan Kim, Heetae Kim, Hyunjung Hwang, Jaesung Park, Jisu Song, Joonjea Sung, Jusnaini Muslimin, Ka Young Lee, Kayoung Lee, Keong Do Lee, Keshinro Kazeem Kolawole, Kyeong Il Park, Longsheng Fu, Md Ashrafuzzaman Gulandaz, Md Asrakul Haque, Md Nasim Reza, Md Razob Ali, Md Rejaul Karim, Md Sazzadul Kabir, Md Shaha Nur Kabir, Minho Song, Mohamad Shukri Zainal Abidin, Mohammad Ali, Mohd Aufa Md Bookeri, Mohd Nadzim Nordin, Mohd Nadzri Md Reba, Mohd Nizam Zubir, Mohd Saiful Azimi Mahmud, Mohd Taufik Ahmad, Muhammad Hariz Musa, Muhammad Sharul Azwan Ramli, Musa Mohd Mokji, Naoto Yoshimoto, Nhu Tuong An Nguyen, Nur Khalidah Zakaria, Prince Kumar, P.K. Garg, Ramlan Ismail, Ren Kondo, Ryuta Kojo, Samsuzzaman, Seokcheol Yu, Seok-Ho Park, Shahriar Ahmed, Siti Noor Aliah Baharom, Sumaiya Islam, Sun-Ok Chung, Ten Sen Teik, Tinah Manduna Mutabazi, Wei-Chih Lin, Yeon Jin Cho and Young Ho Kang

Submitted: 08 April 2024 · Reviewed: 13 May 2024 · Published: 03 July 2024

DOI: 10.5772/intechopen.115091


From the Edited Volume

Precision Agriculture - Emerging Technologies [Working Title]

Dr. Redmond R. Shamshiri, Dr. Sanaz Shafian and Prof. Ibrahim A. A. Hameed


Abstract

This chapter is a collection of selected abstracts presented at the 10th Asian-Australasian Conference on Precision Agriculture (ACPA10), held from October 24th to 26th, 2023, in Putrajaya, Malaysia. It aims to emphasize the transformative potential of technology in precision agriculture and smart farming. The featured studies highlight current improvements in agriculture, offering modern solutions based on machine learning, robotics, remote sensing, and geographic information systems (GIS). From autonomous navigation for mobile robots to stress classification in crop production systems, and from phenotypic analysis with LiDAR technology to real-time sensor monitoring in greenhouse agriculture, the majority of abstracts underline the integration of digital tools in different fields of farming, with the core objective of reshaping conventional farming techniques and reducing dependency on manual labor. Key examples include the development of a distributed sensing system (DSS) for orchard robots, stress classification of tomato seedlings through image-based color features and machine learning, and the integration of remote sensing and AI in crop protection. Other solutions, such as automated spraying robots for cherry tomato greenhouses, active back exoskeletons for lifting tasks on rice farms, and advancements in seedling transplanting techniques, have shown promising results, contributing to sustainable farming practices by providing accurate and timely information for decision-making amid climate change-induced uncertainties.

Keywords

  • digitalization
  • IoT
  • artificial intelligence
  • smart sensors
  • robotics
  • yield optimization

1. Introduction

Precision agriculture has witnessed a significant transformation driven by advancements in technology and data-driven approaches such as machine learning, robotics, remote sensing, and geographic information systems (GIS), which together are revolutionizing traditional farming methods. This paradigm shift is evident in various aspects of farming practice, including navigation systems and path planning algorithms for the efficient operation of mobile robots, stress classification in open-field and closed-field crop production systems such as plant factories, phenotypic analysis using LiDAR technology, and enhancement of crop yields through remote sensing and artificial intelligence. In addition, innovations such as algae cultivation monitoring, volumetric yield estimation using RGB cameras, intelligent decision support systems, and real-time sensors for early monitoring of greenhouse diseases are reshaping data collection and decision-making in modern agriculture.

Drawing from a diverse array of abstracts submitted to the 10th Asian-Australasian Conference on Precision Agriculture, this chapter presents 26 selected studies showcasing the transformative potential of technology in modern agriculture and highlighting cutting-edge advancements. The critical challenges addressed by these advancements are the need for precision and efficiency in farming operations. Examples such as a distributed sensing system (DSS) that aids the precise navigation of an orchard robot through the integration of GPS, laser, and infrared sensors; methods for improving route optimization and mitigating crop damage; and an obstacle region segmentation approach that streamlines path planning in agricultural machinery operations all serve to minimize redundancy and enhance efficiency. For crop protection, stress classification in tomato seedlings has been tested through image-based color features and machine learning, facilitating early stress detection for optimal plant growth. Additionally, the utilization of 3D LiDAR technology in fruit tree phenotyping has accelerated and refined the assessment of orchard characteristics, thereby assisting informed decision-making. The integration of remote sensing, GIS, and AI in crop protection offers data-driven insights for crop type mapping, health assessment, and yield estimation, thereby revolutionizing risk management strategies in agriculture.

Traditional methods often fail to provide the accurate, real-time information required for rapid decision-making, especially in the face of climate change-induced uncertainties and environmental risks. However, the integration of breakthrough technologies such as drones, multispectral imaging, machine learning, and sensor networks is revolutionizing how farmers monitor, analyze, and manage their crops. Moreover, the application of data fusion techniques and predictive modeling is empowering farmers with real-time insights and cloud-based decision support tools. By integrating data from diverse sources such as weather forecasts, satellite imagery, and soil sensors, intelligent systems can provide personalized recommendations for irrigation, fertilization, pest control, and harvest timing. These data-driven approaches not only optimize resource utilization but also enhance crop yields and reduce production costs. Through the utilization of environmental sensors and real-time data analysis, researchers are developing disaster-resistant greenhouse structures, mitigating the impact of weather disasters such as droughts, floods, and heat waves. These solutions facilitate continuous monitoring of crop conditions, ensuring stable and high-quality production even in adverse weather.

The abstracts featured in this chapter demonstrate how innovative solutions and emerging technologies are enhancing agricultural efficiency and resilience to external factors, reducing reliance on manual labor, and maximizing profitability.


2. A distributed sensing system for assisted navigation of mobile robots

Autonomously navigating agricultural mobile robots through uneven terrain, avoiding obstacles, and optimizing routes are intricate challenges that demand innovative solutions. Conventional navigation methods, often reliant on GPS alone, face limitations in accuracy and reliability, especially in densely vegetated areas with poor satellite reception and disturbances from the outdoor environment. Data fusion and multiple perception solutions are usually employed to assist existing RTK-GPS-based navigation and to improve the reliability of operation in unstructured agricultural fields. In this regard, a distributed sensing system (DSS) that leverages multiple data sources can mitigate these limitations and maintain reliable navigation even in challenging conditions. This study reports on the design, development, and evaluation of a DSS with CANBUS communication to assist the autonomous navigation of a four-wheel steering agricultural mobile robot inside berry orchards. The proposed DSS was responsible for detecting random obstacles and providing the navigation controller with real-time feedback for effective collision avoidance. For the proof of concept, the assisted navigation was expected to maintain the robot between the plant rows with a lateral accuracy of 10–20 cm at an ideal forward speed of 5–8 km/h. In the hardware layer, the DSS benefits from a Jetson Nano onboard computer along with a set of ROS-based multi-channel infrared and laser sensors for perception. In the software layer, a fuzzy knowledge-based algorithm was designed and simulated. Results demonstrated significant improvements in navigation accuracy through the fusion of GPS, laser, and infrared sensor data, which was essential to avoid crop damage and ensure optimal resource utilization. Results from simulation and field experiments also suggested that an exponential filter needed to be implemented on each sensor to remove noise and outliers. It was concluded that the development of such systems requires extensive validation tests in different orchards, as well as a more accurate dynamic model of the robot platform. This can be accelerated using digital models inside virtual replicas of the environments.

The base vehicle was a combustion-engine four-wheel drive (4WD) and four-wheel steering (4WS) Quatrak manufactured by Irus (IRUS Motorgeräte GmbH, Burladingen, Germany) with a track width of 1.2 m, a maximum forward and reverse speed of 10 km/h, and an approximate weight of 475 kg, powered by a 2-cylinder petrol engine with 20.1 kW (27 HP) output. The 4WS mechanism allowed a minimum turning radius at the row-ends with increased precision and control over the robot’s movement, which is of great importance for making sharp turns and navigating tight spaces inside orchards. The vehicle uses the HYDAC TTC 580 controller (HYDAC, Sulzbach, Germany) to translate and deliver the steering control signals to the actuators (including the electrohydraulic system) responsible for speed and turning angle. A GPS-based navigation toolbox was installed on the vehicle by Innok Robotics (Innok Robotics GmbH, Regenstauf, Germany), featuring a custom-built graphical user interface for uploading waypoints and trajectory following. In order to validate the proposed assisted navigation system, field visits were first carried out to collect preliminary data using high-accuracy RTK GPS. These data were used to create a virtual orchard inside CoppeliaSim that was interfaced with ROS for testing different sensors, hardware-in-the-loop setups, and control algorithms on a full-scale simulated robot and orchard model. The main elements of the simulation scenes in this project were (i) mesh files representing plants, the robot, and obstacles, (ii) APIs and code that created interfaces between different software environments, and (iii) algorithms and dynamic models, including minimum distance calculation, Ackermann steering, path following, and obstacle avoidance algorithms. The simulation approach converted native data streams from various sensor inputs into usable information within the command-and-control system. The sensors used at the front of the robot were TFmini Plus single-point short-range LiDAR sensors manufactured by Benewake (Beijing, China), with a distance detection range of 0.02–12 m, a resolution of 1 cm, a sampling rate of 100 Hz, a frame rate between 1 and 1000 Hz, an accuracy of ±1%, and a field of view of 3.6°. In the communication layer, CANBUS (ISO 11898-2) was used for exchanging data between different nodes due to its scalability and high reliability for automation, even in harsh environments with high electromagnetic interference.

The simulation scene provided a safe, fast, and low-cost experimental platform for developing, testing, and validating the sensing and control strategies with different algorithms. It also enabled human-aware navigation by finding the best position for each sensor and provided a flexible solution for attaching other implements and determining the optimum row-end turning patterns in the presence of random obstacles. It further accelerated complicated analyses of the robot’s behavior on uneven terrain. Results of field experiments showed the reliable functionality of these sensor arrays under outdoor light conditions, detecting the bushes and random obstacles within the pre-programmed range of 5–60 cm. In addition to the detection task, each sensor module also transmits a state signal of 0 or 1 on the CANBUS line in the absence or presence of an obstacle, georeferenced with the robot position for monitoring purposes. The average distances between the robot and the left bushes, as measured by the three sets of IR sensors mounted on the left of the robot, were 17.5, 18.5, and 17.1 cm; the corresponding values for the right bushes were 13.5, 18.9, and 16.6 cm. The front IR sensor provided the collision avoidance controller with accurate feedback to stop the robot from approximately 25 cm away from the bushes. These results show the redundancy in distance measurements that the multi-channel IR sensing setup provides, ensuring that the controller always receives feedback and the robot continues its operation even if one sensor module fails. Field experiments also confirmed that the analog output of these sensors, with exponential-filter noise reduction, allows precise measurements even in high-density bush conditions, and the multi-channel design ensured that the final measurements were not easily affected by environmental disturbances such as occlusion by leaves and dust. From the perspective of exchanging sensor data with the controller, the CANBUS network was found to be robust and reliable, with 100% of the data transmitted during the experiment without any interruption.
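To illustrate the per-sensor filtering step, the snippet below sketches a first-order exponential filter with a simple outlier gate in Python; the smoothing factor, spike threshold, and class name are illustrative assumptions rather than values reported by the study.

```python
class ExponentialFilter:
    """First-order exponential smoothing: y_k = a*x_k + (1 - a)*y_(k-1),
    with a crude gate that rejects sudden jumps as outliers."""

    def __init__(self, alpha=0.3, max_jump_cm=30.0):
        self.alpha = alpha              # 0 < alpha <= 1; smaller = smoother
        self.max_jump_cm = max_jump_cm  # assumed spike-rejection threshold
        self.state = None

    def update(self, measurement_cm):
        if self.state is None:
            self.state = measurement_cm   # initialize on the first sample
        elif abs(measurement_cm - self.state) > self.max_jump_cm:
            return self.state             # e.g., a leaf briefly occluding the sensor
        self.state = self.alpha * measurement_cm + (1 - self.alpha) * self.state
        return self.state

# Smoothing a noisy lateral-distance stream from one IR channel (cm);
# the 55.0 reading is a spurious spike that the gate suppresses.
f = ExponentialFilter()
for raw in [17.0, 17.4, 55.0, 17.2, 16.9]:
    print(round(f.update(raw), 2))
```

In a multi-channel setup, one filter instance per sensor keeps each channel's state independent, so a failed or occluded module does not corrupt the others.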

The system showcased consistent performance across different crop types, soil conditions, and environmental factors. The hardware was designed to be modular and could be replaced easily upon failure. This adaptability stems from the fusion of multi-modal sensor data, which allows the robot to gather comprehensive information and make informed decisions regardless of the surroundings. Comparative analysis with existing navigation methods highlighted the superiority of the DSS, with quantitative comparisons demonstrating significant advantages in accuracy, efficiency, and adaptability. The system’s ability to generalize its navigation strategies to various contexts holds great promise for widespread adoption in agriculture. The evaluation confirmed that the proposed solution is capable of adjusting the robot’s steering and speed in the presence of different obstacles and scenarios. This was achieved by a control strategy that reduced the error between the sensor readings and the minimum allowed distances to left, right, and front obstacles.


3. Obstacle-driven path planning via region segmentation

To satisfy the needs of autonomous agricultural machinery operation in the field, the efficiency of machinery operation must be improved. Aiming at the challenges of farming operations, such as obstacles, overlapping and redundant working paths, and a high number of turns, this paper proposes a complete coverage path planning algorithm based on obstacle region segmentation. For obstacles in the environment map, a minimum bounding rectangle algorithm is used to segment the obstacles and separate obstacle areas from obstacle-free areas. A polygon segmentation algorithm then divides the obstacle-free areas into working sub-areas. A reciprocating (back-and-forth) algorithm covers each sub-area and generates the coverage path, as sketched below, and the Dijkstra algorithm establishes connecting paths between adjacent sub-areas to achieve complete coverage path planning of the entire farmland. Several experiments were conducted to verify the feasibility and practicality of the proposed approach, which successfully avoids obstacles, reduces path redundancy, and improves coverage.
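As a rough illustration of the reciprocating coverage step, the sketch below generates back-and-forth waypoints over a single rectangular obstacle-free sub-area; the function name, rectangular sub-area shape, and swath width are simplifying assumptions, and headland turns and the Dijkstra-based connections between sub-areas are omitted.

```python
def reciprocating_path(x_min, x_max, y_min, y_max, swath):
    """Boustrophedon (back-and-forth) waypoints covering a rectangle.

    Each pass runs parallel to the y-axis; consecutive passes are
    offset by the implement's swath width and alternate direction.
    """
    waypoints, x, reverse = [], x_min, False
    while x <= x_max:
        y_start, y_end = (y_max, y_min) if reverse else (y_min, y_max)
        waypoints.append((x, y_start))
        waypoints.append((x, y_end))
        x += swath
        reverse = not reverse
    return waypoints

# A 12 m x 50 m sub-area covered with a 3 m implement swath
print(reciprocating_path(0.0, 12.0, 0.0, 50.0, swath=3.0))
```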


4. Tomato seedling stress classification in plant factory with machine learning

Controlling key environmental parameters, such as temperature, relative humidity, light intensity, and water quantity, is crucial for ensuring optimal plant growth. In a controlled environment, early detection and management of environmental stress play a vital role in achieving high-yield and high-quality tomato seedlings. However, conventional invasive techniques for detecting plant stress are impractical and susceptible to inaccuracies. This study aimed to employ image-based color features and machine learning models for accurate classification of stress symptoms in tomato seedlings. Two-week-old tomato seedlings were placed into five chambers in a controlled plant factory and grown at different temperatures (20, 25, and 30°C), light intensities (PPFD of 50, 250, and 450 µmol/m²/s), day/night photoperiods (8/16, 10/14, and 16/8 h), and water supply levels (1, 2, and 3 L/day). Humidity (60%), CO2 level (600–800 ppm), and nutrient treatment (0.8 dS/m) were kept constant across all chambers. A microcontroller was utilized for automated image acquisition from each chamber and for storing the images of tomato seedlings. Nine texture and 18 color features were extracted from each image, and a two-way analysis of variance was used to determine the statistical significance of treatment effects. Duncan’s multiple range test and sequential forward floating selection (SFFS) algorithms were used to select features and reduce feature overlap. Support vector machine (SVM), naive Bayes classifier (NBC), K-nearest neighbor (KNN), and random forest classifier (RFC) models were employed to identify the optimal classifier for stress/non-stress classification of tomato plants. The results indicated that the impact of stress on seedlings became noticeable approximately 3 days after placement. Morphological analysis demonstrated significant differences in stress conditions attributed to temperature and water content variations, whereas light intensity and photoperiod exhibited less significant differences (<5%) in Duncan’s multiple range test. The SVM, KNN, NBC, and RFC achieved average stress classification accuracies of 89.56%, 89.23%, 79.05%, and 63.75%, respectively. The combination of image-extracted features and machine learning models can improve environmental stress monitoring in plant factories, leading to better crop quality, yield, and resource utilization. Future research should explore additional factors and expand the dataset for further improvements.
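A minimal sketch of the classification stage is shown below using scikit-learn, assuming a feature matrix with the 27 texture and color features described above; the synthetic arrays stand in for the study's dataset, which is not public, and the SFFS feature-selection step is omitted for brevity.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: one row per seedling image with 9 texture + 18 color
# features; y encodes 0 = non-stress, 1 = stress.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 27))
y = rng.integers(0, 2, size=200)

# Standardize features, then fit the best-performing model from the study (SVM).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.3f}")
```

Swapping `SVC` for `KNeighborsClassifier`, `GaussianNB`, or `RandomForestClassifier` reproduces the four-model comparison reported above.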


5. Orchard fruit tree phenotyping with 3D LiDAR quantification

Phenotypic information plays a crucial role in effective fruit tree management in orchards. LiDAR (light detection and ranging)-based object recognition provides a rapid and precise assessment of the phenotypic traits of orchard fruit trees. This study aimed to quantify the phenotypic and land characteristics of fruit trees (e.g., tree height, canopy area, tree spacing, and inter-row spacing) in an apple orchard using a 3D LiDAR. A LiDAR sensor (VLP-16) was employed to collect 3D point cloud data from an apple orchard. Six apple trees, exhibiting diverse shapes and sizes, were selected for testing. Commercial software was utilized to process the collected data for visualization and measurement. The accuracy of the estimated outputs from the point cloud was evaluated by comparison with measured values. The estimated tree heights from the point cloud were 3.05 ± 0.34 m, while the measured heights were 3.13 ± 0.33 m. The root mean square error (RMSE) for tree height estimation was 0.09 m, and the simple linear coefficient of determination (r²) was 0.98. Regarding the canopy area, point cloud estimations were 5.97 ± 1.4 m², compared to the measured area of 6.19 ± 1.38 m²; the RMSE for canopy area estimation was 0.24 m², with an r² of 0.99, indicating a high level of accuracy. For tree spacing and row distance, the point cloud measurements were 3.04 ± 0.17 m and 3.18 ± 0.24 m, respectively, while the field measurements were 3.35 ± 0.08 m and 3.40 ± 0.05 m, respectively. The RMSE and r² for tree spacing were 0.12 m and 0.92, and for row distance, 0.07 m and 0.94, respectively. Despite minor differences, there was a strong relationship and close agreement between the point cloud estimates and the measurements. The findings highlight the reliability and efficiency of 3D LiDAR technology for accurately detecting and measuring phenotypic traits in fruit trees. This research supports the quantification of phenotypic and land characteristics of fruit trees for maximizing fruit cultivation.
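The accuracy metrics above can be reproduced from paired estimates and ground-truth measurements as follows; the arrays here are illustrative values, not the study's data, and r² is taken as the squared Pearson correlation, consistent with a simple linear fit.

```python
import numpy as np

est = np.array([2.9, 3.1, 3.4, 2.7, 3.2, 3.0])  # point-cloud tree heights (m), illustrative
obs = np.array([3.0, 3.2, 3.5, 2.8, 3.2, 3.1])  # tape-measured heights (m), illustrative

rmse = np.sqrt(np.mean((est - obs) ** 2))        # root mean square error
r2 = np.corrcoef(est, obs)[0, 1] ** 2            # simple linear coefficient of determination
print(f"RMSE = {rmse:.2f} m, r2 = {r2:.2f}")
```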


6. Crop insurance enhancement with RS, GIS, GEE, and AI

This paper discusses remote sensing applications, the Google Earth Engine (GEE), and the use of artificial intelligence (AI) in crop insurance. The global crop insurance market is vast and diverse, and remote sensing applications for crop insurance are already in use. There is an opportunity for insurance companies to combine remote sensing, GEE, and AI when considering the insured amount of a crop. This study explores the capability of remote sensing for determining payouts for insured crops. Geospatial tools are capable of mapping and providing the basic requirements of a crop insurance assistance system. Such a system requires the agro-climate, crop calendar, crop yield predictions, a record of crop rotations, soil productivity, crop classification, crop monitoring, types of crop damage, crop damage assessment, and, finally, the crop insurance settlement amount. In the past, traditional ground surveying methods could be relied upon only up to a certain point to obtain this type of information. Remote sensing combined with GEE and AI provides a more dependable and efficient method of gathering the data necessary for mapping crop types and estimating acreage and crop loss. This information can be very helpful for agricultural insurance and finance companies as well as farmers. By utilizing multispectral sensors and GEE, it is possible to analyze the spectral reflectance of crops over time and derive meaningful information about their health, vigor, and damage. Different spectral bands provide valuable data for calculating various vegetation indices, such as the normalized difference vegetation index (NDVI) or the enhanced vegetation index (EVI), as sketched below. Spectral reflectance curves vary with crop growth stage and health condition and can thus be measured and monitored by multispectral sensors. Radar data are sensitive to crop structure, moisture content, and alignment; radar provides information regardless of time of day or weather, and thus complements optical sensors. Combining data from both sensor types expands the information available for distinguishing each crop class by its spectral signature. As a result, there is a greater chance of achieving the high accuracy in crop classification, yield estimation, and damage assessment required for crop insurance payouts.
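A minimal sketch of such an NDVI time-series query in the GEE Python API is given below; the field geometry, date range, and cloud threshold are hypothetical placeholders, and prior Earth Engine authentication is assumed.

```python
import ee

ee.Initialize()  # assumes the account has already been authenticated

# Hypothetical insured parcel; replace with the actual field boundary.
field = ee.Geometry.Rectangle([76.50, 29.90, 76.55, 29.95])

s2 = (ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
      .filterBounds(field)
      .filterDate("2023-06-01", "2023-10-31")
      .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20)))

def add_ndvi(img):
    # NDVI = (NIR - Red) / (NIR + Red); bands B8 and B4 on Sentinel-2
    return img.addBands(img.normalizedDifference(["B8", "B4"]).rename("NDVI"))

# Season-mean NDVI over the parcel at 10 m resolution
mean_ndvi = (s2.map(add_ndvi).select("NDVI").mean()
             .reduceRegion(reducer=ee.Reducer.mean(), geometry=field, scale=10))
print(mean_ndvi.getInfo())
```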


7. Spatial-temporal analysis and prediction in Ugandan coffee farming

Uganda contributes 3% of the global production of the Arabica and Robusta coffee varieties and ranks second in Africa for coffee exports. Coffee is highly vulnerable to climate change: rising temperatures and increasingly erratic rainfall have exposed trees to pests and diseases, decreasing both the quantity and quality of the crop. The Uganda Coffee Development Authority (UCDA) lacks a reliable and fast mechanism for monitoring coffee crops to quickly identify crop challenges and hence has a low chance of combating the risks of yield depletion in a timely manner. Drones have proved precise compared to conventional assessment methods, which are time-consuming and cannot examine all crops, relying instead on area sampling. Therefore, Uganda Flying Labs (UFL) collaborated with UCDA to provide turnaround solutions. Multispectral and RGB drone flights captured UAV images revealing coffee crop health problems, ranging from plants under stress due to water or fertilizer shortage to pest attacks, ultimately helping save coffee plants from premature death and ensuring high coffee production. The study enabled yield projections, damage assessment, flood impact assessment, and NDVI mapping through the use of UAV images for data analysis. Progress was made by intersecting UAV coffee-mapping results with terrain formation data affecting crop growth. GIS tools, such as the deep learning capabilities of ArcGIS Pro, were used to track crop health, inventory, and preharvest projections. The study provided experience in applying the Mask R-CNN deep learning technique for object identification of coffee intercropped with banana plantations. The artificial intelligence application assisted in crop inventory, ultimately giving an estimate of the coffee beans anticipated from the mature crops. Spatial and temporal variability and prediction analyses helped mitigate the gaps caused by erosion and unfavorable rain patterns and informed an effective response. Crop health, statistics, and harvest estimates were the standard outputs of the whole process.

A coffee farm (Ventura) of 44.034 ha (0.44 km²) was identified, located on a hillside in Mukono, a moderately hot central region. The plantation holds Robusta coffee of different ages, intercropped with banana plants in some acreage to provide shade. Data were acquired through drone flights of a Phantom 4 for RGB images and a DJI Phantom 4 Multispectral with six 1/2.9″ CMOS sensors, comprising one RGB sensor for visible-light imaging and five monochrome sensors for multispectral imaging. A dedicated RGB drone was used because of its higher resolution compared with the multispectral camera, facilitating accurate prediction and modeling through algorithms. Flights were planned in PIX4Dcapture (Phantom 4) and DJI GS Pro (Multispectral) at a height of 75 m with a ground sampling distance (GSD) of 4 cm/pixel, ultimately acquiring 10,722 images. The task took 2 days, with four multispectral flights using onboard RTK-enabled GPS for georeferencing; the RGB drone completed its work in two flights. Data processing was carried out with PIX4Dmapper and PIX4Dfields to develop the orthomosaic, digital surface model (DSM), and reflectance images for the individual bands. Furthermore, analysis was executed with ArcGIS Pro and PIX4Dsurvey to produce a coffee plants shapefile with attributes of age and health status, a digital terrain model (DTM), and a slope map, as well as an NDVI map. The final output shared with the stakeholders was a PDF map of the orthophoto, terrain (contours), slope, NDVI, crop health, and harvest estimates. The coffee farm was partitioned according to coffee age, and crops were automatically detected using the Mask R-CNN deep learning technique in ArcGIS Pro, with health status ranging from very poor to good. Estimates were expressed as density per hectare and harvest weight in kilograms.

UCDA was keen on a comparison of both cropping methods: coffee intercropped with shading banana plants and coffee trees planted independently. This would help them use the results to decide on best practices for different terrains. With time short, a commercial farmer who practiced both farming methods on a rocky, hilly piece of land was selected. Reported yield figures raised interest in analyzing how technology could help estimate yields nationwide and efficiently address the factors limiting improvements in those yields and, subsequently, exports. The bank, meanwhile, looked forward to the potential to strengthen relationships with its clients by facilitating informed decision-making with timely and targeted interventions to maximize return on investment (loans given to farmers). Further processing of the generated digital surface model, per-band reflectance maps, and 3D point cloud produced additional results, such as the digital terrain model, slope map, and NDVI maps as indicators of crop health, giving the farmer and UCDA a focus for best practices. Deep learning techniques for automatic object identification in imagery proved their worth: after a few iterations, the model was able to detect over 7000 stands. The tool proved useful for inventorying and supported further analysis focused on the identified stands. By overlaying the identified stands with the NDVI map, the status of each crop was assessed, indicating areas under stress. With the number of crop stands known, harvest estimates were calculated from the expected annual yield of mature coffee trees; the estimated harvest was 8500 kg of green beans (dried coffee beans without husks) over approximately 7.5 ha. The digital terrain model, the assessment of erosion and flood risk data, and contour maps empowered the farmer to decide on drip-line construction and terracing as some of the solutions.
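The harvest figure follows from a simple per-tree calculation; in the back-of-the-envelope sketch below, the per-tree green-bean yield is an assumed value chosen to be consistent with the reported totals, not a figure from the study.

```python
mature_stands = 7000          # coffee stands detected by the Mask R-CNN model
yield_per_tree_kg = 1.2       # assumed annual green-bean yield per mature tree
harvest_kg = mature_stands * yield_per_tree_kg
print(f"estimated harvest: {harvest_kg:.0f} kg")  # ~8400 kg, near the reported 8500 kg
```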

The outcomes of this study revealed that spatial characteristics are related to weather patterns and consequently affect crop health. Drone technology and data, complemented by the Mask R-CNN deep learning model, automatically detected coffee crops, proving to be a quick solution for mitigating risks in coffee farming. Harvest prediction analysis, presented through NDVI, slope maps, and a crop health index, reflected the expected yield volumes and indicated the number of trees to be replaced. Awareness of erosion risk enabled the farmer to apply soil-retention techniques on slopes to prevent the loss of soil and nutrients. With this evidence of spatial and temporal variability, modeling, and prediction analysis in coffee farming, there is room for UCDA to scale up drone data applications to improve coffee production in Uganda. It can be concluded that: “aerial platforms data provides an efficient, rapid and nondestructive assessment through the spectral response of plants and its relationship with various environmental factors [1].”


8. Algae cultivation and harvesting

This work describes a novel approach for cultivating and assessing algae growth in various aquatic systems. The study utilizes an unmanned aerial vehicle equipped with a multispectral camera to detect and monitor the presence and growth of the algal community. The aim is to provide an efficient and non-invasive solution for evaluating algae growth using vegetative and other indices. The research also focuses on developing a predictive model for algae growth through correlation analysis, an approach that is particularly effective in image analysis tasks. The model demonstrates high precision in measuring and predicting algal growth stages across different environments, compensating for the limitations associated with traditional data collection methods. The findings of this study have the potential to significantly impact algae growth measurement and prediction, contributing to the advancement of sustainable and economically viable practices. The results will be compared with alternative measurement methods for further evaluation.


9. Volumetric yield estimation of Chinese cabbage using RGB camera

Yield assessment helps farmers make efficient use of their resources through accurate estimation of crop production. The objective of this research was to determine the volume of Chinese cabbage using low-resolution RGB cameras on an application board (i.e., Raspberry Pi). The system consists of two 9-W fluorescent LED light sources, two ultrasonic sensors, three low-resolution cameras, and an HP Core i7 laptop to capture RGB images of 30 cabbage samples. The cameras and LED lighting were mounted 1.08 m above the harvester conveyor belt, which was operated at speeds of 0.55, 0.70, and 0.85 m/s. The two ultrasonic sensors triggered the cameras at a rate of 5 Hz to capture RGB images of the samples. The actual volumes of the 30 cabbages were measured using the standard water-displacement method based on Archimedes’ principle, and a combination of RGB image processing and volume estimation techniques for cabbage was developed and compared against them. The volume of the ellipsoid-shaped cabbages was estimated using the box method: a Python-based program calculated the area of each cabbage in the RGB images through background subtraction, edge detection, area calculation, and pixel counting, and the height of each cabbage was determined by subtracting each cabbage sample point from the background point. Linear regression and t-test analyses at the 5% level were conducted to compare the estimated and measured cabbage volumes. The conventional and box estimation approaches provided volumes in the range of 0.003–0.007 m³. R² values were 0.92, 0.84, and 0.77, while the corresponding RMSE values were 0.00015, 0.00012, and 0.00008 m³, respectively. The developed methods can be used to estimate cabbage yield under actual field and real-time harvesting conditions.
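A condensed sketch of the box-method estimate is given below using OpenCV: segment the cabbage against the conveyor background, convert its pixel footprint to a real-world area, and multiply by the height reading. The calibration constant, threshold, and function name are assumptions, not the study's values.

```python
import cv2
import numpy as np

M_PER_PIXEL = 0.0011  # assumed ground sampling (m/pixel) at the 1.08 m camera height

def estimate_volume(image_bgr, background_bgr, height_m):
    """Box-method volume: segmented footprint area times object height."""
    diff = cv2.absdiff(image_bgr, background_bgr)        # background subtraction
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 30, 255, cv2.THRESH_BINARY)
    area_m2 = cv2.countNonZero(mask) * M_PER_PIXEL ** 2  # pixel count -> area
    return area_m2 * height_m                            # bounding-box volume (m^3)
```

The height term corresponds to the per-sample height obtained by differencing the cabbage and background points described above.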


10. Intelligent crop production decision support with data fusion

Integration of data from various sources provides farmers with useful information and recommendations for managing their crops. Data fusion techniques can combine data from different sources, such as weather forecasts, satellite imagery, soil sensors, and crop growth models, to create a holistic view of the crop’s growing conditions. An intelligent system can use this information to provide farmers with real-time recommendations on irrigation, fertilization, pest control, and harvest timing. By using data fusion, the system can improve the accuracy of its predictions and recommendations, resulting in better crop yields and reduced costs for farmers. The data can also be analyzed by agronomists and researchers to improve crop management practices and develop new crop varieties. Toward this end, a system is being developed to fuse data from multiple sensors and multiple sites for two crops, namely pineapple in the open field and leafy vegetables planted indoors. It is hoped that the developed system can help farmers make more informed decisions, improve crop yields, and increase their profits while helping to conserve resources such as water and fertilizer.

11. Real-time sensor monitoring for greenhouse disaster resistance

Greenhouse agriculture mitigates external risks and ensures stable, high-quality crop production. However, weather disasters frequently damage greenhouses, leading to significant economic losses. According to the Korea Meteorological Administration’s Climate Change Assessment Report, the rate of change in climate factors (temperature) in Korea is twice the global average, expanding weather disasters such as droughts, floods, and heat waves in the agricultural sector and affecting crop quality, food safety, and productivity. The escalating impact of global warming and abnormal climatic events intensifies this damage. As the area of facility cultivation increases and cultivation techniques improve, ensuring the safety of facility structures becomes even more crucial. To minimize such losses, this study focuses on evaluating greenhouse safety and planning disaster-resistant structures. The research adopts lightweight, high-performance inertial measurement unit (IMU) sensors to collect real-time data on rotation, displacement, and acceleration during diverse weather conditions. Automated data acquisition through these sensors is vital because continuous manual data collection by personnel in the field is impractical. Utilizing open-source libraries, the study analyzes the dynamic data in three-dimensional space, enabling comprehensive insights into greenhouse behavior. Preliminary investigations assess the practicality and sensitivity of the sensors under actual outdoor conditions. Moreover, the research compares greenhouse meteorological data with external weather station records, with a particular focus on wind speed and direction, significant factors in greenhouse damage. Understanding their impact enhances the resilience of greenhouse foundations. In conclusion, this study aims to fortify greenhouse agriculture against weather disasters through real-time sensor monitoring, contributing to long-term food security and agricultural competitiveness.

The most important sensor in this study is the motion tracking sensor: an MPU6050 inertial measurement unit, which incorporates a three-axis accelerometer and a three-axis gyroscope. The accelerometer measures acceleration along the three axes of a three-dimensional coordinate system, from which velocity and position data can be derived; the gyroscope measures angular velocity, from which Euler angles are calculated. The device supports an I2C (inter-integrated circuit) serial interface, and its small size enables a miniaturized system. It is built with microelectromechanical systems (MEMS) technology, contains a 1024-byte FIFO buffer that enables low power consumption, and includes a digital motion processor (DMP) engine and a built-in thermometer with digital output. The core of the embedded system was a WeMos D1 R1 microcontroller board, an Arduino Uno-compatible board with a built-in ESP8266 Wi-Fi module; it measures 68 mm by 53 mm and weighs 25 g. A Wi-Fi weather station was installed to collect weather data near the greenhouse. It gathers indoor and outdoor temperature and humidity, air pressure, UV level, rainfall, wind speed, and wind direction, and sends the data over Wi-Fi for storage and display in a web dashboard.

In the case of the gyroscope, the MPU6050 exhibits drift, in which the data are pushed away over time in the low-frequency region. In the case of the accelerometer, considerable noise is mixed into the high-frequency region because the sensor reacts sensitively to external vibration and the environment. To increase the accuracy of low-cost sensors, filtering the data is essential. A wide variety of algorithms exist for data calibration; in this study, complementary filter and Kalman filter algorithms were used. In addition, using quaternions, an extension of complex numbers, the raw IMU data were processed to obtain stable results. The DMP function of the MPU6050, supplied by InvenSense, was used: the quaternion computed in the sensor’s FIFO buffer is converted back to Euler angles to obtain pitch, roll, and yaw values. As mentioned earlier, errors often occur between the low-frequency band data of the gyroscope and the high-frequency band data of the accelerometer. The complementary filter therefore extracts and corrects the posture angle by passing the gyroscope data through a high-pass filter and the acceleration data through a low-pass filter. The Kalman filter can correct data more accurately than the complementary filter, but its complicated computation introduces a large delay and is too heavy for the Arduino-class MCU. As a result, data accuracy was improved by using the complementary filter and the MPU6050’s DMP function, which involve simpler calculations than the Kalman filter and run smoothly on a low-performance (Arduino) board. Data were transmitted over Wi-Fi using the WeMos D1 R1 board with its built-in ESP8266 module, then stored in the ThingSpeak web service and visualized as graphs. A weather station was installed on site to compare weather forecast data with the weather conditions around the greenhouse. Like the embedded system board, the weather station supported wireless Wi-Fi data transmission, and weather data and graphs were stored in detail in daily, weekly, and monthly units.
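A one-line complementary filter of the kind described here is sketched below; the blend coefficient is a typical textbook value, as the study does not report its tuning.

```python
import math

ALPHA = 0.98  # gyro/accel blend; assumed, not the study's value

def complementary_filter(pitch_deg, gyro_rate_dps, ax, ay, az, dt):
    """Fuse gyro and accelerometer readings into a stable pitch estimate.

    The integrated gyro term acts as the high-pass path (accurate
    short-term, drifts slowly); the accelerometer-derived pitch acts as
    the low-pass correction (noisy short-term, stable long-term).
    """
    accel_pitch = math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
    return ALPHA * (pitch_deg + gyro_rate_dps * dt) + (1 - ALPHA) * accel_pitch

# Example: one 10 ms update with a 2 deg/s pitch rate and gravity mostly on z
print(complementary_filter(5.0, 2.0, 0.05, 0.0, 0.99, 0.01))
```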

Experiments focused mainly on wind speeds related to typhoons and strong winds, the weather phenomena with the greatest influence on damage to agricultural facilities. In this study, an experiment was conducted indoors at wind speeds lower than the strong-wind threshold to determine the sensitivity and accuracy of the sensor. As wind speed increases, the frequency and amplitude tend to increase almost proportionally, with a particularly clear difference along the x- and z-axes. Occasional errors in which the amplitude spiked instantaneously were observed; these were attributed to delays in data recognition or to the accumulation of data values over time during sensor data processing. In the future, these sensors will be attached to the foundations of greenhouse structures to quickly detect actual structural movement and to derive correlations by comparison with external weather conditions.

12. Boom height and pressure effects on spray droplet distribution

Excessive use of pesticides increases crop production costs and negatively impacts the environment. Spray uniformity needs to be enhanced through the use of precision variable-rate technology, and the quality and effectiveness of a sprayer are greatly influenced by the technical performance of its nozzles. The objective of this research was to analyze the effects of operating pressure and boom height on the dispersal and uniformity of spray droplets. A test setup was constructed consisting of four nozzles (NN D-35) and a single-cylinder four-stroke engine with an output of 0.72 kW. The sprayer was operated at 2 km/h on the field platform. Experiments were conducted in both laboratory and field conditions with flat-fan spray nozzles, using water as the test liquid. The liquid outflow pressure ranged from 280 to 520 kPa. Depending on the spraying target surface, the working boom height was adjusted to 35, 45, or 55 cm. The nozzle spacing was 30 cm, and the spray angle of the nozzles was 110°. In laboratory and field conditions, boom heights of 35, 45, and 55 cm resulted in overlapping percentages of 22.38%, 23.43%, and 24.76% (laboratory) and 24.11%, 26.52%, and 29.59% (field), respectively. Under laboratory conditions at 2 km/h, the average droplet densities were 155.38, 159.20, and 171.91 spots/cm² at boom heights of 35, 45, and 55 cm, giving spray coverage levels of 23.21%, 26.38%, and 28.35%, respectively. Similarly, in field conditions at the same speed and boom heights, the average droplet densities were 138.62, 151.22, and 165.31 spots/cm², resulting in spray coverage levels of 23.88%, 25.61%, and 27.92%, respectively. Notably, the 55 cm boom height kept overlap below 30% and offered the best average droplet density and spray coverage per unit area in both scenarios.
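For intuition, the idealized flat-fan geometry below relates boom height, fan angle, and nozzle spacing. Note that this purely geometric figure is an upper bound: deposition tapers toward the fan edges, so the effective swath is narrower than the nominal 110° angle, which is why the measured overlaps above (22–30%) are much lower than the geometric ones.

```python
import math

def geometric_overlap_percent(boom_height_cm, fan_angle_deg, nozzle_spacing_cm):
    """Share of one nozzle's nominal swath that overlaps its neighbour's.

    Nominal swath of a flat-fan nozzle: 2 * h * tan(angle / 2); whatever
    exceeds the nozzle spacing overlaps the adjacent swath.
    """
    swath = 2 * boom_height_cm * math.tan(math.radians(fan_angle_deg / 2))
    return max(0.0, (swath - nozzle_spacing_cm) / swath * 100)

for h in (35, 45, 55):  # boom heights tested above, 110° fans at 30 cm spacing
    print(h, round(geometric_overlap_percent(h, 110, 30), 1))
```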

13. Kinematic simulation of automatic pepper transplanter’s seedling picker

The selection of an appropriate picking mechanism is crucial for the successful transplanting of seedlings. A clamp-type picking mechanism offers fast and efficient operation, reducing both the time and labor required for transplanting. The aim of this study was to conduct a kinematic simulation of a clamp-type picking mechanism to determine the optimal component dimensions. The clamp pin-and-bar picking device consisted of a manipulator with five grippers, a picking stand, gears, and a clamp bar. Among these, the dimensions of the picking stand and of the manipulator with grippers were variables, while the dimensions of the gears and clamp were constant. A virtual simulation model with 10 trials of length combinations for the picking stand and the manipulator with grippers was used to determine kinematic characteristics, such as the position, velocity, and acceleration of the picking device variables, to meet the trajectory requirements within the picking workspace and the azimuth angle of the seedling tray, preventing the seedlings from breaking or falling. The length combinations ranged from 250 to 450 mm for the picking stand and from 300 to 480 mm for the manipulator with grippers. Based on the kinematic simulation results, a 350-mm picking stand and a 380-mm manipulator were selected from the considered combinations. Under operating conditions of 30 rpm, the grippers exhibited maximum velocities of 0.63 and 0.14 m/s and accelerations of 0.40 and 0.65 m/s² along the x- and y-axes, respectively, at 90°. The power required to complete the picking mechanism cycle was calculated to be 37.86 W at a picking rate of 75 seedlings per minute. By assessing the suitability of the picking device’s working trajectory and the velocity and acceleration of the grippers, suitable dimensions for the picking device were identified and will be validated in a field test. This study shows that by minimizing operating time, gripper velocity and acceleration, and damage to mechanical components, the picking system under development would improve pepper seedling picking accuracy and motion safety.

14. Proximal body temperature sensing in pigs using thermal imaging and deep learning

Pig body temperature is a crucial indicator of physiological and health status. Changes in pig body temperature can occur due to disease, different physiological stages, or external environmental conditions. Measuring pig body temperature allows comprehensive analysis of health conditions and physiological changes under growing conditions. Contact methods have certain drawbacks for temperature detection, such as the difficulty of obtaining accurate and consistent readings, the stress they induce in pigs, and the risk of contamination. Infrared thermography (IRT) is a non-contact measurement technology that can effectively measure pig body temperature without causing stress to the pig. This study proposed a deep learning-based algorithm for recognizing pig body parts and measuring their temperatures from thermal images. Data were collected in a lab-scale pig farm using a thermal camera angled at 45° from the top view. Videos were recorded over four pig sessions, and a total of 645 images were extracted at an interval of 25 frames from the video files. The data were then split into training (580 images), validation (45 images), and test (20 images) datasets. A YOLOv5 model was trained to detect the relevant pig body parts, followed by background segmentation to generate precise mean, median, and maximum temperatures for each detected body area. The results showed that the average precision and recall of the model for individual pig identification were 86.23% and 89.61%, respectively. The mean average precision (mAP) for body-part recognition was 91.32% at an intersection-over-union (IoU) threshold of 0.9. The study revealed that the lowest temperatures were observed on the pig nose, averaging 34.2°C, while higher temperatures were observed on the tail, ear, and head, with mean temperatures of 37.4°C, 36.8°C, and 36.9°C, respectively. Residual errors may be caused by the surrounding temperature and humidity. The proposed method therefore showed strong performance for body-part identification and temperature measurement in individual pigs, provides a basis for automatic non-contact methods, and could be used for real-time temperature monitoring on pig farms.
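The temperature-extraction step after detection can be sketched as below, assuming a radiometric frame already converted to per-pixel temperatures; the function name and the optional mask argument are illustrative, not the study's code.

```python
import numpy as np

def roi_temperatures(thermal_frame, box, mask=None):
    """Mean, median, and max temperature inside one detected body-part box.

    thermal_frame: 2-D array of per-pixel temperatures (deg C).
    box: (x1, y1, x2, y2) detection from the body-part detector.
    mask: optional foreground mask, mirroring the background-segmentation
    step described above, so background pixels do not bias the statistics.
    """
    x1, y1, x2, y2 = box
    roi = thermal_frame[y1:y2, x1:x2]
    if mask is not None:
        roi = roi[mask[y1:y2, x1:x2] > 0]
    return float(np.mean(roi)), float(np.median(roi)), float(np.max(roi))

# Example on a synthetic 240x320 frame with a detection at (100, 50)-(150, 120)
frame = np.full((240, 320), 30.0) + np.random.default_rng(0).normal(0, 0.2, (240, 320))
print(roi_temperatures(frame, (100, 50, 150, 120)))
```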

15. Automated greenhouse pesticide spraying with ultra-wideband technology

Greenhouse farming has gained considerable attention recently due to its role in ensuring food security and optimizing crop yields. However, the effective and correct application of pesticides in greenhouses remains challenging, including the potential risk to farmers from prolonged pesticide exposure [2]. This paper proposes a new method using ultra-wideband (UWB) [3] technology to develop an automated pesticide spraying vehicle for the greenhouse environment. The primary objective is to design and implement a UWB-based navigation and control system that enables the vehicle to move accurately throughout the greenhouse, temporarily halt operations when encountering obstacles, and automatically return to the refill station before resuming interrupted spraying tasks. The pesticide spraying vehicle is entirely electric and powered by Li-Po batteries, ensuring continuous operation for a minimum of 2 hours to cover the entire greenhouse. The wheels are equipped with high-traction rubber tracks suitable for the moist environment inside the greenhouse. Testing was conducted on a papaya field in Liu-Gui, Kaohsiung, Taiwan. The test results demonstrate the vehicle’s precise adherence to the predefined path, successfully completing pesticide spraying across the entire 32 m × 25 m garden (Figure 1) [4].

Figure 1.

UWB-based car control diagram.

The spray vehicle uses the UWB module for precise positioning in the greenhouse, and an additional IMU module is incorporated to maintain accuracy when running straight and when turning through bends. All data are processed on a Jetson Nano computer. The map data, which store the greenhouse location information needed for vehicle control, are saved in a file and automatically loaded at system startup. The vehicle performs spraying along the route from the starting position to the ending position. A level sensor measures the amount of liquid remaining in the tank; if water or pesticide runs out during spraying, the vehicle saves its position and automatically returns to the refill station. After refilling, the vehicle returns to the saved position and continues until the entire greenhouse area is sprayed. The test was conducted in the papaya garden in Liu-Gui, Kaohsiung, Taiwan. The vehicle completed the spraying stably and moved accurately within the 32 m × 25 m greenhouse area. Deviation from the path center was about ±20 cm, within the allowable range, and the time to complete spraying of the entire area was 13 minutes. Battery life was tested separately: the vehicle can operate continuously for 2 hours. The safety function that pauses operation when encountering an obstacle was tested and worked correctly (Figure 2).

Figure 2.

UWB-based car testing in papaya greenhouse.
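UWB positioning of this kind typically trilaterates the vehicle from ranges to fixed anchors; the least-squares sketch below linearizes the range equations, with anchor coordinates at the corners of the 32 m × 25 m greenhouse assumed for illustration.

```python
import numpy as np

def trilaterate(anchors, distances):
    """2-D position from UWB anchor ranges via linearized least squares.

    Subtracting the first range equation from each of the others removes
    the quadratic terms, leaving a linear system A @ [x, y] = b.
    """
    (x0, y0), d0 = anchors[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], distances[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos  # (x, y) in the greenhouse frame

# Hypothetical anchors at the greenhouse corners; ranges for a tag near (8, 6)
anchors = [(0, 0), (32, 0), (32, 25), (0, 25)]
print(trilaterate(anchors, [10.0, 24.7, 30.6, 20.6]))
```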

The research results demonstrate that developing an automated pesticide spraying vehicle for greenhouses using UWB technology offers a promising solution for precise and efficient pesticide spraying. Integrated UWB navigation, obstacle detection, and auto-return function ensure precise and uninterrupted spraying while reducing chemical use and minimizing farmer hazards. This advancement in greenhouse insecticide spraying technology has the potential to revolutionize agricultural practices, promote sustainable farming, and enhance crop protection strategies.

16. Lonicera caerulea fruits detection model

Digital twins, digital models of actual real-world physical assets, are increasingly being considered for agriculture [5]. To achieve a digital twin for agriculture, the spatial positions of crops and their fruits must be detected. Drones are used in agriculture to sense crops [6] and to capture images [7, 8]. Deep learning has also been used for blueberry fruit detection [9], but typically with the camera positioned close to the fruits, so that each fruit spans many pixels. In this study, the aim was to evaluate the performance of a deep learning model for detecting small (about 3–7 by 3–10 pixels) Lonicera caerulea fruits, which are similar to blueberries. A drone camera (Mavic 3, DJI, China) was used to capture Lonicera caerulea images at a farm located in Chitose town in June 2023. The original image size of 5280 × 3956 pixels was too large, so the images were downscaled. Seventy-five images were annotated, with the fruits of Lonicera caerulea marked as bounding boxes, and the set was separated into 60 training and validation images and 15 test images. For the evaluation, precision, recall, and mean average precision were calculated on the test set. The precision was 70.5%, the recall was 71.5%, and the mean average precision (mAP) was 65.5%. In the images, the color of Lonicera caerulea was similar to shadows, and some predicted bounding boxes covered the shade of Lonicera caerulea leaves. The annotated targets spanned only a few pixels in each dimension, yet the model detected these mini-size Lonicera caerulea fruits with 65.5% mean average precision, showing potential for tiny-object detection even without a large image dataset. In the future, this approach could be used for digital twin research.

17. Automated spraying robot for cherry tomato greenhouse

The agricultural technology industry’s demand for materials and labor is extreme. Cherry tomatoes are grown in greenhouses using manual farming techniques, with labor representing the primary cost element, accounting for 41.6% of the overall production cost [10]. Taiwanese greenhouses employ manual spraying techniques such as knapsack sprayers alongside automated irrigation systems. While automated spraying systems carry significant installation and maintenance costs, using a knapsack sprayer harms users’ health [11]. Consequently, introducing robots to automate specific farming tasks has been explored to optimize resource utilization and decrease the heavy dependence on human labor [12], ultimately leading to improved productivity and cost-effectiveness. During the spraying process, traditional manual spraying methods and even contemporary automated systems frequently ignore crucial elements such as crop development phase and coverage density, leading to water waste and environmental contamination due to uncontrolled application. Several attempts have been made to optimize irrigation water amounts for crops by employing sensors [13] and cameras [14] to identify the targeted regions. Here, a line-based navigation system and an automated spraying system based on an estimated leaf area system (ELAS) were combined to create an automated spraying robot (ASR). This research predominantly uses the ELAS technique to activate and deactivate the relevant spray nozzles based on the detected leaf density zones. The fully autonomous spraying vehicle can successfully replace manual spraying techniques, lowering the danger to workers while providing a more consistent spray pattern and increasing the effectiveness of pesticide treatment in cherry tomato greenhouses.

The control system comprises a microcontroller for the vehicle control system (VCS) and an embedded computer for the spraying control system (SCS). An Arduino Mega 2560 controls the vehicle, receiving input from the remote control and the vision sensor, which is then transmitted to the vehicle’s driver for precise navigation. To detect objects, a camera captures the scene, and the output is continuously processed by an embedded board (Jetson Nano) to identify the regions of interest. The computer system utilizes the ELAS results to optimize the control of each nozzle valve. ELAS was developed to detect and measure leaf density in three regions of interest (ROIs). The image processing pipeline uses OpenCV for tree and leaf recognition through color-based segmentation; the workflow in Figure 3c involves transforming the image to the HSV color space, determining key parameters for object detection, and applying image noise filtering to enhance segmentation accuracy (a sketch follows after the figure captions below). The 50th percentile (Dv0.5) of droplet diameter was determined at various height positions in the field experiments (Figure 4a). There was no significant difference in droplet diameter distribution for the vertically spraying ASR, with an average measured droplet diameter of 415.56 μm. According to the international standard S572.1 (2009), the spraying system of the autonomous vehicle produces a medium to very coarse spray suitable for most agrochemical applications. The ASR demonstrated the ability to achieve a more uniform and stable droplet distribution than traditional spraying with knapsack sprayers (Figure 4b). Results indicated no significant difference (p > 0.05) in droplet diameter or droplet density between the two methods, suggesting that the ASR can achieve spraying efficiency comparable to manually operated sprayers. Due to its automation capability and freedom from the constraints faced by traditional methods, the ASR exhibited more than double the coverage of traditional spraying methods.

Figure 3.

(a) System architecture, (b) prototype of ASR, and (c) block diagram of image processing.

Figure 4.

(a) Dv0.5 values (μm) per height position from 40 to 160 cm, and (b) comparison of the ASR and a hand-operated sprayer. The symbol (*) indicates a statistical difference (p ≤ 0.05).

An ASR has been developed to apply fertilizer and pesticides in cherry tomato greenhouses. A vertical spraying system with several nozzles, combined with a machine vision-based ELAS, can deliver spray droplets accurately and simultaneously to specific leaf segments. Comparative studies reveal that the ASR surpasses manual spraying techniques thanks to accurate and reliable robotic operation that minimizes human error and encourages optimal growth of cherry tomatoes. This technical development offers promise for productive and sustainable greenhouse farming. Integrating automated spraying systems into practices currently served by traditional sprayers will require further advances in image processing methods.

18. Active back exoskeleton for rice farm lifting tasks

Lifting and carrying activities commonly cause low back pain (LBP) and musculoskeletal disorders (MSDs) [15]. Farmers, including rice farmers, were eight times more likely to make significant modifications to their work activities because of LBP [16]. Although squat-lifting is a safer posture than stoop-lifting, many individuals opt for stoop-lifting when lifting heavy objects, likely because of its convenience and lower effort. Exoskeletons have recently emerged as a potential solution to alleviate these risks by reducing the physical strain on the human body during farming tasks [17]. An active exoskeleton allows the assistive force to be adjusted through actuators regardless of the user’s position; in contrast, passive exoskeletons cannot adjust assistance or provide external power, relying solely on mechanical components such as springs and dampers. Because the dynamic system comprising both the user and the exoskeleton is nonlinear, a fuzzy controller is appropriate for achieving a torque response tailored to each back position [18]. This research aimed to design an active back exoskeleton to assist young rice farmers in stand-lifting tasks. To regulate the exoskeleton’s torque, a fuzzy controller was employed, using feedback from a Motion Processing Unit (MPU) sensor, specifically the back angle estimated from accelerometer and gyroscope data. By collecting and analyzing electromyography (EMG) data, we assessed the benefits of using exoskeletons for lifting tasks among farmers.

Electromyography was used to monitor the activity of the lumbar erector spinae (LES) and thoracic erector spinae (TES) muscles during work-related movements, as shown in Figure 5. The EMG signals from the four muscles were captured using a portable EMG system (Nexus 10, Mind Media BV, Netherlands) at a sampling frequency of 2048 Hz. TES and LES EMG data were expressed as a percentage of a reference voluntary contraction (%RVC) to obtain a meaningful comparison among participants. For this experiment, we employed two RMDx10 servo motors as actuators, each capable of delivering a maximum assistive torque of 50 Nm. These motors regulate torque output by controlling the supplied current. The exoskeleton weighs 4.8 kg in total, including a 22.2 V, 4.2 Ah LiPo battery pack, and the average operational duration on a single charge is approximately 2 hours. An Arduino Uno microcontroller controls the motors, and an MPU6050 sensor is used to estimate the back angle. Additionally, a CANbus module converts the SPI signal to CANbus, facilitating seamless communication between components. For control, a fuzzy controller adapts the torque support to the back angle: it takes a single input, the back angle, and outputs the motor torque. Both the input and the output are described by seven trapezoidal membership functions.
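A minimal sketch of such a one-input fuzzy controller is given below using the scikit-fuzzy package; the universes of discourse, the membership-function spacing, and the rule base are assumptions for illustration, since the abstract does not publish them:

```python
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Universes of discourse (assumed ranges; the abstract does not state them).
angle = ctrl.Antecedent(np.arange(0, 91, 1), 'back_angle')   # trunk flexion, degrees
torque = ctrl.Consequent(np.arange(0, 51, 1), 'torque')      # Nm; the motor limit is 50 Nm

LEVELS = ['L1', 'L2', 'L3', 'L4', 'L5', 'L6', 'L7']

def add_trapezoids(var, lo, hi):
    """Spread seven overlapping trapezoidal membership functions over [lo, hi]."""
    step = (hi - lo) / 8.0
    for i, name in enumerate(LEVELS):
        pts = np.clip([lo + (i - 1) * step, lo + i * step,
                       lo + (i + 1) * step, lo + (i + 2) * step], lo, hi)
        var[name] = fuzz.trapmf(var.universe, pts)

add_trapezoids(angle, 0, 90)
add_trapezoids(torque, 0, 50)

# One rule per level: deeper trunk flexion maps to more assistive torque.
rules = [ctrl.Rule(angle[name], torque[name]) for name in LEVELS]
sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))

sim.input['back_angle'] = 35.0     # e.g., the angle estimated from the MPU6050
sim.compute()
print(round(sim.output['torque'], 1))
```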

Figure 5.

Illustration of the left side and back side of the exoskeleton.

In Figure 6A, the participants performed lifting motions without holding any weights. Wearing the exoskeleton reduced the activation levels of the four muscles measured: the left-side TES muscle (TESL) decreased by 50.3%, the left-side LES muscle (LESL) by 47%, the right-side TES muscle (TESR) by 24.4%, and the right-side LES muscle (LESR) by 32.6%. In Figure 6B, participants performed lifting tasks with a 5 kg load; wearing the exoskeleton reduced TESL by 48.7%, LESL by 32.4%, TESR by 30.8%, and LESR by 32.6%. In Figure 6C, participants lifted a 20 kg load; wearing the exoskeleton decreased TESL by 36.9%, LESL by 25%, TESR by 16.6%, and LESR by 31.2%. Supporting users during lifting tasks is essential to minimize workplace-related risks. The use of exoskeletons reduces muscle activation compared with lifting without them, suggesting that exoskeletons can help lower the risk of LBP, as assessed by the NIOSH lifting equation. The lifting equation calculates the Recommended Weight Limit (RWL) from task conditions such as the horizontal and vertical position of the object relative to the body, the vertical travel distance, and the frequency and duration of the activity.
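For reference, the revised NIOSH lifting equation mentioned above multiplies a 23 kg load constant by task-dependent multipliers. The sketch below computes the RWL for a hypothetical lift; the frequency and coupling multipliers normally come from lookup tables and are fixed here by assumption:

```python
# Worked sketch of the revised NIOSH lifting equation; FM and CM come from
# lookup tables, so fixed example values are assumed here.
def niosh_rwl(H, V, D, A, FM=0.88, CM=1.0):
    """Recommended Weight Limit (kg); H, V, D in cm, A (asymmetry) in degrees."""
    LC = 23.0                      # load constant, kg
    HM = 25.0 / H                  # horizontal multiplier
    VM = 1 - 0.003 * abs(V - 75)   # vertical multiplier
    DM = 0.82 + 4.5 / D            # distance multiplier
    AM = 1 - 0.0032 * A            # asymmetric multiplier
    return LC * HM * VM * DM * AM * FM * CM

# Example: load 40 cm in front of the ankles, lifted from 30 cm to 100 cm, no twisting.
print(round(niosh_rwl(H=40, V=30, D=70, A=0), 1))   # RWL in kg
```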

Figure 6.

Box chart of participants’ muscle activity: lifting task without load (A), with a 5 kg load (B), and with a 20 kg load (C).

This research presents a novel active back exoskeleton with a simple structure, lightweight design, and considerable assistive capacity, developed from ergonomic design principles and biomechanical analysis of stand-lifting tasks. For control, we implemented a robust fuzzy adaptive control algorithm, enabling the actuators to provide torque in sync with the back angle and ensuring that the assistive torque dynamically matches changes in the lifting load and the position of the wearer’s upper-body center of gravity. Rigorous tests evaluated the exoskeleton’s performance, measuring reductions in EMG signals for both sides of the LES and TES while lifting loads ranging from 0 to 25 kg. This active back exoskeleton therefore proves to be an effective solution for reducing the risk of injury and lower back pain during heavy lifting tasks on rice farms.

19. Actuator failure detection for ICT-based orchard irrigation systems

Actuators in automated irrigation systems execute irrigation schedules and maintain optimum water content levels for better growth and yield in orchards. However, actuator failures in automatic systems can lead to suboptimal irrigation and potential crop damage due to inadequate water distribution. The inherent complexity of actuator control in orchard irrigation systems necessitates effective troubleshooting to ensure optimal irrigation management, and prompt, accurate detection of actuator failure is crucial for proper irrigation and water distribution. To address this issue, this study developed a k-nearest neighbor (KNN) machine learning methodology for detecting actuator failures in ICT-based orchard irrigation systems. A demonstration orchard of four apple trees was established inside a greenhouse on a 3 m × 3 m soil test bench. To enable time-based irrigation, the soil bin was divided into two channels, each accommodating two apple plants, allowing separate irrigation schedules and management. The irrigation system comprised a single pump and two solenoid valves. The pump operated on a 60-second cycle and the solenoid valves on a 15-second cycle, each alternating between ON and OFF states, with the cycles synchronized in different time phases over a period of 3 hours. A Python program running on a microcontroller managed the irrigation pump and solenoid valves, controlled the irrigation components, and stored and monitored all sensor data related to the irrigation system. Current sensors monitored the power consumption of the actuators to assess their operational status, and the current signals were processed using Fourier analysis. By integrating data from control commands, current sensors, and water flow sensors, a KNN model was trained to determine actuator status. The experimental results demonstrated that the proposed approach successfully detected solenoid valve failures; the average power consumption was 2.1 ± 0.02 W in the normal “ON” state, 0.024 ± 0.003 W in the normal “OFF” state, and 0.023 ± 0.003 W in the abnormal state. The model achieved its highest accuracy of 93.6% at k = 4, with a lowest mean error of 5% in detecting actuator failures, and exhibited performance similar to comparable KNN architectures while using fewer trainable parameters. A real-time fault detection and alerting feature will be added in the future through internet-connected smartphone apps.
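A minimal sketch of the described pipeline, with current-sensor traces summarized by Fourier-analysis features and classified with KNN, is shown below; the feature choice, label encoding, and placeholder data are assumptions, not the authors’ implementation:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

def fft_features(trace, n_bins=8):
    """Summarize a current-sensor trace by the magnitudes of its first FFT bins."""
    spectrum = np.abs(np.fft.rfft(trace))
    return spectrum[:n_bins]

# X_raw: (n_samples, n_points) current readings per cycle; labels 0 = normal ON,
# 1 = normal OFF, 2 = abnormal (assumed encoding). Placeholder data shown here.
rng = np.random.default_rng(0)
X_raw = rng.normal(size=(300, 128))
y = rng.integers(0, 3, size=300)

X = np.array([fft_features(t) for t in X_raw])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=4)    # k = 4 gave the best accuracy in the study
knn.fit(X_train, y_train)
print("held-out accuracy:", knn.score(X_test, y_test))
```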

20. Water pressure influence on spatial and temporal soil water distribution

Effective water management is crucial for optimal crop irrigation in sandy soils, where water pressure significantly influences water flow and distribution. Therefore, this study investigated the effects of water pressure on the spatial and temporal distribution of water in sandy soils. The experiment was carried out in a plastic greenhouse facility at Chungnam National University, Republic of Korea. A subsurface drip irrigation system, with three rows (60 cm apart) and a 30 cm dripper interval in the pipeline, was designed and installed in a sandy soil testing bin (3 m × 3 m) inside the greenhouse to monitor soil water movement and ambient conditions using soil water content, temperature, and humidity sensors. A total of 90 soil water content sensors were placed at depths ranging from 0 to 60 cm to measure the flow of water through the sandy soil profile. Microcontrollers programmed in Python communicated with the sensors, and data were collected for 24 hours per experiment and automatically saved as CSV files for further analysis. Water pressures of 25, 50, and 75 kPa were applied at an emitter discharge rate of 3 L/h. The average soil water contents at depths of 10, 20, 30, 40, and 50 cm varied from 49.428 ± 8.62% to 27.60 ± 5.23% during the experiment, with the ambient temperature and humidity ranging from 45 ± 3°C to 24 ± 3°C and from 58 ± 2% to 50 ± 4%, respectively. The results showed that increased water pressure had a pronounced effect on the spatial distribution of water inside the soil bin: higher pressure facilitated better infiltration and distribution, allowing water to reach greater depths. However, the soil water content at a depth of 10 cm was relatively higher than at 50 cm, where it gradually decreased. The variations in water content at different depths of sandy soil can be related to changes in ambient temperature and humidity and to the limited water retention capacity of sandy soil. These findings contribute to sustainable agricultural practices and enhance the understanding of water movement in sandy soil, thus promoting efficient water use and crop growth.

21. Theoretical analysis of automatic pepper transplanter’s mechanism

Mechanized transplanting of pepper seedlings is an efficient and labor-saving method, and the type of planting mechanism used in the transplanter has a significant impact on overall efficiency. Theoretical analysis allows the kinematic parameters of the transplanting components, such as appropriate dimensions, operating speed, power requirement, plant spacing, and device strength, to be determined for effective transplantation. The goal of this study was to conduct a theoretical analysis of a four-bar link-type transplanting mechanism using 3D modeling to determine the optimal seedling planting speed and interval. The planting trajectory and depth were considered to maintain the inter-row distance and planting interval, which were influenced by both the crank motion and the transplanter’s forward speed. The link-type transplanting mechanism consisted of three major components: a crank, a coupler, and a transplanting hopper. Five kinematic simulation trials of crank length, trajectory height, and width were performed under static conditions to determine the appropriate crank and planting trajectory dimensions, and five kinematic simulation trials of forward speed, velocity, and acceleration were performed under dynamic conditions to select the optimal operating speed and planting spacing. To maintain a planting depth of 120 mm, the appropriate crank length, trajectory height, and width were determined to be 60 mm, 254.57 mm, and 106.05 mm, respectively. The appropriate transplanter forward speed was determined to be 0.15 m/s for a rate of 60 seedlings per minute. With a planting spacing of 150 mm, the velocities and accelerations along the x- and y-axes were 0.393 m/s, 0.950 m/s, 4.026 m/s2, and 7.048 m/s2, respectively. The power required to plant one seedling with the link-driven hopper-type dibbling mechanism, made of steel alloy 1020, was calculated to be 11.65 W. The findings of this study would be useful for improving the design of link-type dibbling mechanisms; validation will require field tests.
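As a quick consistency check on the reported operating point, the idealized relation spacing = forward speed ÷ planting rate (which neglects mechanism dwell, an assumption made here for illustration) reproduces the stated 150 mm spacing:

```python
# Quick check: planting spacing follows from forward speed and planting rate.
forward_speed = 0.15            # m/s
rate = 60 / 60.0                # seedlings per second (60 per minute)
spacing = forward_speed / rate
print(f"spacing = {spacing * 1000:.0f} mm")   # -> 150 mm, matching the reported value
```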

22. Stress classification in controlled pepper seedling growth using image analysis and machine learning

Environmental stress adversely impacts plant growth under controlled environments, where crucial factors such as temperature, humidity, light intensity, and CO2 levels can profoundly affect plant development. These stresses exert direct effects on seedlings, leading to changes in leaf size, leaf count, overall plant size, and canopy area. Manual inspections are time-consuming, prompting the analysis of morphological features such as the shape, color, and texture of pepper seedlings for stress quantification. This study therefore focuses on extracting features from images to classify pepper seedlings as stressed or non-stressed across four distinct environmental conditions. In a controlled plant factory, 2-week-old pepper seedlings were subjected to different temperatures (20, 25, and 30°C), light intensity levels (50, 250, and 450 PPFD), water supply levels (1, 2, and 3 L/day), and day/night hours (8/16, 10/14, and 16/8). Fifteen images of the seedling canopy were captured daily under each stress treatment using a low-cost RGB camera over a 2-week period, resulting in a dataset of 2520 images from the five chambers of the plant factory. Color and texture features were extracted from each image using the gray-level co-occurrence matrix (GLCM). Statistical analyses, including two-way analysis of variance, were performed to assess the significance of the treatment effects. Feature selection methods, including Duncan’s multiple range test, the dominant color descriptor (DCD), the color layout descriptor (CLD), and sequential forward floating selection (SFFS), were employed to reduce feature overlap. Four classification models, namely the support vector machine (SVM), naive Bayes classifier (NBC), K-nearest neighbor (KNN), and random forest classifier (RFC), were used to classify the stress and non-stress states of the pepper plants. The impact of stress on seedlings in the plant factory became apparent approximately 3 days after placement. Morphological analysis revealed significant differences in stress conditions caused by temperature variations and water content levels; however, light intensity and night hours did not show significant differences at the 5% significance level in Duncan’s multiple range test. The SVM, KNN, NBC, and RFC classifiers achieved average stress classification accuracies of 90.03%, 84.44%, 77.34%, and 61.5%, respectively. These findings provide valuable insights for real-time stress monitoring and for optimizing environmental conditions to enhance growth and enable effective monitoring in controlled cultivation.
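The following minimal sketch (not the authors’ code) illustrates the GLCM texture-feature extraction and SVM classification stages using scikit-image and scikit-learn; the grayscale input, feature set, and placeholder data are assumptions:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def glcm_features(gray_img):
    """Contrast/homogeneity/energy/correlation from a gray-level co-occurrence matrix."""
    glcm = graycomatrix(gray_img, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel()
                      for p in ("contrast", "homogeneity", "energy", "correlation")])

# images: 2D uint8 canopy crops; labels: 1 = stressed, 0 = non-stressed (assumed).
rng = np.random.default_rng(1)
images = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(40)]
labels = rng.integers(0, 2, 40)

X = np.array([glcm_features(im) for im in images])
print(cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean())
```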

23. Multispectral image enhancement techniques for vegetation index calculation

Multispectral imagery plays a crucial role in vegetation analysis, particularly in calculating the normalized difference vegetation index (NDVI). Image data enhancement remains of significant interest because challenges such as atmospheric interference, sensor noise, and uneven illumination can all degrade the quality of raw data; however, the effectiveness of enhancement techniques in improving NDVI accuracy remains an open question. This study evaluated field of view (FOV) alignment and histogram equalization as enhancement techniques for multispectral imagery in vegetation index (VI) calculation. A multispectral sensor (MicaSense RedEdge MX) was employed to capture raw imagery, while an active canopy sensor (Crop Circle ACS-435) collected the reference data. NDVI, the most widely used VI, was chosen for the analysis. Data were collected from a wheat field at four growth stages (GS1: 10, GS2: 34, GS3: 70, and GS4: 84 days after sowing). The MicaSense, Crop Circle, and a GPS receiver were mounted on a handheld structure, and data were taken from a constant height of 90 cm above the canopy. GPS locations and pixel-based image segmentation matching the Crop Circle FOV were processed using a Python program. NDVI was calculated using the Red and NIR reflectance data extracted from the cropped portions of the images, and histogram equalization ensured a balanced pixel intensity distribution. NDVI was also calculated from the original images without FOV segmentation or histogram equalization, and the enhancement techniques were evaluated using regression analysis. Results indicated that the coefficient of determination (R2) and root mean square error (RMSE) between the reference and raw image values were (0.18, 0.1163), (0.17, 0.4017), (0.41, 0.3289), and (0.82, 0.2104) for GS1, GS2, GS3, and GS4, respectively. For the enhanced image dataset, the R2 and RMSE values were GS1 (0.61, 0.0768), GS2 (0.69, 0.3121), GS3 (0.56, 0.2902), and GS4 (0.85, 0.1619), respectively. GS1 (43%) and GS2 (42%) showed substantial accuracy improvements, demonstrating the effectiveness of the FOV alignment and histogram equalization techniques. In the early growth stages, the sparse canopy produced less distinct spectral signals that were affected by noise and atmospheric interference, resulting in lower R2 values and higher RMSE. Enhancement techniques such as FOV alignment and histogram equalization effectively mitigate these errors by refining spatial alignment, reducing noise, and enhancing image contrast. Such enhancements empower researchers and practitioners to make reliable decisions, particularly in crop monitoring and management during the initial development stages.
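A minimal sketch of the enhancement-plus-NDVI step is shown below; the band file names, the 8-bit depth, and the use of OpenCV’s global histogram equalization are assumptions for illustration:

```python
import cv2
import numpy as np

# Hypothetical 8-bit band rasters exported per growth stage.
red = cv2.imread("band_red.tif", cv2.IMREAD_GRAYSCALE)
nir = cv2.imread("band_nir.tif", cv2.IMREAD_GRAYSCALE)

red_eq = cv2.equalizeHist(red).astype(np.float32)
nir_eq = cv2.equalizeHist(nir).astype(np.float32)

# NDVI = (NIR - Red) / (NIR + Red); epsilon avoids division by zero.
ndvi = (nir_eq - red_eq) / (nir_eq + red_eq + 1e-6)
print("mean NDVI:", float(ndvi.mean()))
```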

24. Biodegradable potted seedling transplanter

Plastic seedling pots have been widely used for their lightweight and durable nature, but they hinder root establishment. Recent studies have demonstrated that biodegradable potted seedlings can improve seedling resilience while also being eco-friendly through natural decomposition. In this research, a transplanting mechanism for biodegradable potted vegetable seedlings was designed using commercial software, incorporating kinematic analysis to enhance the efficiency and performance of leafy vegetable transplantation. A theoretical analysis of the transplanting mechanism was conducted, including calculations of position, velocity, acceleration, and input driving torque. The selection of appropriate link combinations within the mechanism was also explored to ensure smooth transplantation of potted seedlings at optimum depths and spacings. The kinematic model of the transplanting mechanism was simulated using commercial mechanical design and simulation software. The transplanter comprised a four-bar mechanism: a driving link, a driven link, a connecting link, and a supporting bar. To enable better hopper motion, a spring was affixed between the driven link and the ground. The movement of the mechanism was primarily controlled through a crank-rocker arrangement, in which the arm lengths play a crucial role in determining the planting trajectory. Simulation trials were conducted by varying the main arm link length while keeping the forward and rotating speeds fixed. The following parameter ranges were investigated: driving link (45–65 mm), connecting arm (130–170 mm), guide bar (115–135 mm), and end-effector link (210–250 mm). A dibbling hopper length of 153 mm was identified as ideal. For the suitable link combination, the simulated velocities and accelerations of the end hopper in the X and Y directions were 430 mm/s and 530 mm/s, and 975 mm/s2 and 2091 mm/s2, respectively. The required driving torque was 603 N mm, and the vertical linear displacement of the hopper was 281 mm.

25. Distributed sensing for agricultural mobile robot navigation

Autonomous navigation of agricultural mobile robots through uneven terrain, avoiding obstacles, and optimizing routes are intricate challenges that demand innovative solutions. Conventional navigation methods, often reliant on GPS alone, face limitations in accuracy and reliability, especially in densely vegetated areas with poor satellite reception and outdoor disturbances. Data fusion and multiple perception solutions are usually employed to assist existing RTK-GPS-based navigation and to improve operational reliability in unstructured agricultural fields. In this context, a distributed sensing system (DSS) that leverages multiple data sources can mitigate these limitations and maintain reliable navigation even in challenging conditions. This study reports on the design, development, and evaluation of a DSS with CANBUS communication to assist the autonomous navigation of a four-wheel-steering agricultural mobile robot inside berry orchards. The proposed DSS was responsible for detecting random obstacles and providing the navigation controller with real-time feedback for effective collision avoidance. For the proof of concept, the assisted navigation was expected to keep the robot between the plant rows with a lateral accuracy of 10–20 cm at an ideal forward speed of 5–8 km/h. In the hardware layer, the DSS uses a Jetson Nano onboard computer along with a set of ROS-based multi-channel infrared and laser sensors for perception. In the software layer, a fuzzy knowledge-based algorithm was designed and simulated. Results demonstrated significant improvements in navigation accuracy through the fusion of GPS, laser, and infrared sensor data, which was essential to avoid crop damage and ensure optimal resource utilization. Results from simulation and field experiments also indicated that an exponential filter had to be implemented on each sensor to remove noise and outliers. It was concluded that the development of such systems requires extensive validation tests in different orchards, as well as a more accurate dynamic model of the robot platform; this can be accelerated using digital models inside virtual replicas of the environments.
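A minimal sketch of the per-sensor exponential filter mentioned above follows; the smoothing factor is an assumed value that would be tuned per sensor in practice:

```python
class ExponentialFilter:
    """Exponential (EMA) filter applied to one sensor channel."""

    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha      # assumed smoothing factor; tuned per sensor
        self.state = None

    def update(self, measurement: float) -> float:
        """Blend the new reading with the filtered history to damp noise spikes."""
        if self.state is None:
            self.state = measurement
        else:
            self.state = self.alpha * measurement + (1 - self.alpha) * self.state
        return self.state

f = ExponentialFilter(alpha=0.2)
for raw in (17.4, 17.6, 42.0, 17.5):     # 42.0 simulates an outlier spike
    print(round(f.update(raw), 2))
```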

The base vehicle was a combustion-engine four-wheel-drive (4WD) and four-wheel-steering (4WS) Quatrak manufactured by Irus (IRUS Motorgeräte GmbH, Burladingen, Germany), with a track width of 1.2 m, a maximum forward and reverse speed of 10 km/h, and an approximate weight of 475 kg, powered by a 2-cylinder petrol engine with 20.1 kW (27 HP) output. The 4WS mechanism allowed a minimal turning radius at the row ends, providing increased precision and control over the robot’s movement, which is of great importance for making sharp turns and navigating tight spaces inside orchards. The vehicle uses a HYDAC TTC 580 controller (HYDAC, Sulzbach, Germany) to translate and deliver the steering control signals to the actuators (including the electrohydraulic system) responsible for speed and turning angle. A GPS-based navigation toolbox with a custom-built graphical user interface for uploading waypoints and following trajectories was installed on the vehicle by Innok Robotics (Innok Robotics GmbH, Regenstauf, Germany). To validate the proposed assisted navigation system, field visits were first carried out to collect preliminary data using high-accuracy RTK GPS. These data were used to create a virtual orchard inside CoppeliaSim, interfaced with ROS, for testing different sensors, hardware in the loop, and control algorithms on a full-scale simulated robot and orchard model. The main elements of the simulation scenes were (i) mesh files representing plants, the robot, and obstacles; (ii) APIs and code interfacing the different software environments; and (iii) algorithms and dynamic models, including minimum distance calculation, Ackermann steering, path following, and obstacle avoidance. The simulation approach converted native data streams from the various sensor inputs into usable information within the command-and-control system. The sensors used in front of the robot were TFmini Plus single-point short-range LiDAR sensors manufactured by Benewake (Benewake, China), with a detection range of 0.02–12 m, a resolution of 1 cm, a sampling rate of 100 Hz, a frame rate of 1–1000 Hz, an accuracy of ±1%, and a field of view of 3.6°. In the communication layer, CANBUS (ISO 11898-2) was used for exchanging data between the nodes because of its scalability and high reliability for automation, even in harsh environments with strong electromagnetic interference.

The simulation scene provided a safe, fast, and low-cost platform for developing, testing, and validating the sensing and control strategies with different algorithms. It also enabled human-aware navigation by identifying the best positions for each sensor, offered a flexible way to attach other implements, helped determine the optimum row-end turning patterns in the presence of random obstacles, and accelerated complicated analysis of the robot’s behavior on uneven terrain. Results of field experiments showed that these sensor arrays functioned reliably under outdoor light conditions, detecting bushes and random obstacles within the pre-programmed range of 5–60 cm. In addition to the detection task, each sensor module also transmits a state signal of 0 or 1 on the CANBUS line in the absence or presence of an obstacle, georeferenced with the robot position for monitoring purposes. The average distances between the robot and the left bushes, as measured by the three sets of IR sensors mounted on the left of the robot, were 17.5, 18.5, and 17.1 cm; the corresponding values for the right bushes were 13.5, 18.9, and 16.6 cm. The front IR sensor provided the collision avoidance controller with accurate feedback to stop the robot at roughly 25 cm from the bushes. These results demonstrate the redundancy in distance measurement provided by the multi-channel IR setup, ensuring that the controller always receives feedback and that the robot continues operating even if one sensor module fails. Field experiments also confirmed that the analog output of these sensors, with exponential-filter noise reduction, allows precise measurements even in high-density bush conditions, and the multi-channel design ensured that the final measurements were not easily affected by environmental disturbances such as occlusion by leaves and dust. From the perspective of exchanging sensor data with the controller, the CANBUS network was found to be robust and reliable, with 100% of the data transmitted during the experiment without interruption.
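To illustrate the described state-signal exchange, the sketch below uses the python-can package to publish a one-byte obstacle flag per sensor node; the channel, the arbitration-ID scheme, and the SocketCAN setup are assumptions, not the system’s actual frame layout:

```python
import can

# Assumed Linux SocketCAN interface; the real nodes ran on embedded hardware.
bus = can.interface.Bus(channel="can0", interface="socketcan")

def publish_obstacle_state(node_id: int, obstacle: bool) -> None:
    """Send the sensor node's obstacle flag; the arbitration ID encodes the node."""
    msg = can.Message(arbitration_id=0x100 + node_id,
                      data=[1 if obstacle else 0],   # 0 = clear, 1 = obstacle
                      is_extended_id=False)
    bus.send(msg)

publish_obstacle_state(node_id=3, obstacle=True)
```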

The system showed consistent performance across different crop types, soil conditions, and environmental factors. The hardware was designed to be modular and easily replaceable upon failure. This adaptability stems from the fusion of multi-modal sensor data, which allows the robot to gather comprehensive information and make informed decisions regardless of the surroundings. Comparative analysis with existing navigation methods highlighted the advantages of the DSS, with quantitative comparisons demonstrating significant gains in accuracy, efficiency, and adaptability. The system’s ability to generalize its navigation strategies to various contexts holds great promise for widespread adoption in agriculture. The evaluation confirmed that the proposed solution is capable of adjusting the robot’s steering and speed in the presence of different obstacles and scenarios. This was achieved by a control strategy that reduced the error between the sensor readings and a minimum allowed distance to obstacles on the left, right, and front.

26. Key technologies for intelligent kiwifruit production

Kiwifruit is popular all over the world because of its unique taste and high nutritional value. Traditional kiwifruit production relies almost entirely on manual operations, which is not only labor-intensive but also easily affected by human factors, resulting in low production efficiency and unstable output. As the demand for kiwifruit continues to rise, there is a strong incentive to replace inefficient manual operation with intelligent production based on advanced technologies. Precision operation robots supported by artificial intelligence and automation technology have been proposed and have contributed to improving agricultural production efficiency and reducing labor usage. Therefore, this research focuses on key technologies for fully intelligent production of kiwifruit, with a view to replacing manual operations, reducing labor dependence, and improving overall productivity in bud thinning, flower pollination, yield estimation, and robotic picking. Kiwifruit buds usually grow in clusters consisting of a main bud and multiple side buds around it, as shown in Figure 7. As an important part of nutrient management during the flowering period, bud thinning refers to the removal of redundant side buds that are not suitable for fruiting, which directly affects final yield and quality. An automated precision bud-thinning robot based on machine vision and a laser was developed to automatically detect side buds and destroy them to prevent their growth.

Figure 7.

Example of bud clusters with a main bud and surrounding side buds.

The key to the proposed method is to accurately and quickly detect the side buds that need to be thinned out of the canopy. To this end, a multi-class bud labeling and detection strategy was proposed according to the buds’ growth conditions and spatial locations, as shown in Figure 8. The entire collected kiwifruit bud image dataset was labeled with this strategy, and YOLOv5l was trained and employed as the bud detection model. In addition, a fixed-focus laser was selected as the bud-thinning actuator, allowing the robot to destroy side buds by emitting high-energy laser pulses.
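As an illustration of this detection step, the sketch below loads a custom-trained YOLOv5l model via PyTorch Hub and filters side-bud detections as thinning targets; the weights path and the class name are hypothetical, not the authors’ released artifacts:

```python
import torch

# Hypothetical weights file for a YOLOv5l model fine-tuned on the bud classes.
model = torch.hub.load("ultralytics/yolov5", "custom", path="kiwi_buds_yolov5l.pt")
results = model("bud_cluster.jpg")          # hypothetical canopy image

# Keep only detections of the assumed 'side_bud' class as thinning targets.
detections = results.pandas().xyxy[0]
side_buds = detections[detections["name"] == "side_bud"]
print(side_buds[["xmin", "ymin", "xmax", "ymax", "confidence"]])
```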

Figure 8.

Labeling examples of multi-class kiwifruit buds with different colors. (a) Single bud; (b) double buds; (c) bud cluster; (d) main bud; (e) side buds.

As a typical cross-pollinated plant, kiwifruit cannot self-pollinate. Owing to the strong subjectivity of manual work, artificially assisted pollination suffers from problems such as uncontrolled and even missed pollination, which affect fruit quality and yield. Robotic kiwifruit pollination has been proposed to relieve heavy labor and reduce the pollen losses caused by imprecise pollination in orchards. One of the key technologies for precision pollination of kiwifruit is the reliable detection of flowers suitable for pollination. Because the flowering period of kiwifruit is not synchronized, not all flowers are blooming and able to receive pollen at the same time [19]. Therefore, a multi-class flower detection method (shown in Figure 9) based on YOLOv5l was employed to detect and determine which flowers in the canopy were at the best pollination stage. A further selection strategy based on Euclidean distance matching was then applied to obtain the flowers’ distribution in the canopy and select suitable flowers, combining the agronomic characteristics of kiwifruit growth for optimal nutrient partitioning with assured quality and yield [20].

Figure 9.

Labeling examples of multi-class kiwifruit flowers with different colors. (a) Bud; (b) early open; (c) half-open; (d) fresh pistil; (e) early ocher pistil; (f) ocher pistil; (g) petal fall; (h) occluded pistil of flower; (i) bright pollen; (j) dark pollen.

The pollination approach is another essential consideration for kiwifruit pollination robots to achieve precise operation. An air-assisted liquid pollination system was designed and built to collaborate with a robotic arm for targeted pollination of kiwifruit flowers; it can also achieve quantified pollination by controlling the spraying time of the prepared pollen liquid [21]. Considering flower size and the effective pollination area, a preliminary experiment was carried out to optimize and select the pollination parameters, which determined a nozzle-to-flower distance of 25 cm, an air pressure of 56 kPa, and a liquid flow of 45 mL/min. Fruit yield estimation before harvest is a crucial step in predicting the resources required for the harvest, such as packing and storage houses, and the distribution resources for marketing. To meet the need for rapid yield estimation in small and medium-sized kiwifruit orchards, a single shot multibox detector (SSD) with two lightweight backbones, MobileNetV2 and InceptionV3, was used to develop an Android app for field kiwifruit detection [22]. However, the difficulty of quantifying the planting area from smartphone-acquired images and video frames makes such yield estimates less credible. Therefore, a multi-target tracking and counting algorithm based on the trained YOLOv5 kiwifruit detection model and the ByteTrack tracking framework is proposed, which achieves accurate kiwifruit counting by restricting counting to a single-row region based on the column positions fed back from the detection model, as shown in Figure 10.
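The counting logic described above can be illustrated independently of the tracker: given per-frame detections that already carry persistent track IDs (e.g., from YOLOv5 + ByteTrack), each fruit is counted once while counting is restricted to an assumed single-row pixel band:

```python
# Illustrative sketch of the counting step only; the row band and the tuple
# layout (frame, track_id, x_center) are assumptions for this example.
ROW_BAND = (300, 900)            # assumed pixel-column limits of the current row

def count_fruits(tracked_detections):
    """Count unique track IDs whose centers fall inside the single-row band."""
    counted = set()
    for _, track_id, x in tracked_detections:
        if ROW_BAND[0] <= x <= ROW_BAND[1]:
            counted.add(track_id)        # each persistent track counts once
    return len(counted)

sample = [(0, 1, 350), (1, 1, 360), (1, 2, 120), (2, 3, 880)]
print(count_fruits(sample))              # -> 2 (track 2 is outside the row band)
```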

Figure 10.

Kiwifruit yield estimation based on a multi-target tracking and counting algorithm.

Kiwifruits are commercially grown on sturdy support structures such as T-bars and pergolas. Automatic detection of kiwifruit in the orchard is challenging because illumination varies through the day and night and because of the color similarity between kiwifruit and the complex background of leaves, branches, and stems [23]. Kiwifruits also grow in clusters, which may result in occluded and touching fruits [24]. Intelligent sensing and nondestructive picking are the two main key technologies for robotic harvesting, and the harvesting robot is one of the hardest challenges in orchard automation, especially for soft and delicate fruits. In most previous research on robotic picking, however, all kiwifruits were labeled and detected as a single class, so fruits occluded by branches or wires were detected as pickable targets; end-effectors or robots may be damaged by the branches or wires when forced to pick those fruits [25]. Therefore, kiwifruits are labeled, trained, and detected in multiple classes based on their occlusions, so that fruits occluded by branches or wires are not treated as pickable targets [24]. Fruits are classified into five classes according to the robotic picking strategy and field occlusions, as shown in Figure 11. The well-known YOLOv4 was employed with transfer learning for multi-class kiwifruit detection.

Figure 11.

Different classes of kiwifruit images. (a) Fruit not occluded (NO); (b) fruit occluded by leaves (OL); (c) fruit occluded by other fruits (OF); (d) fruit occluded by branches (OB); (e) fruit occluded by wires (OW).

Nondestructive fruit picking is the other key technology for developing picking robots. First, based on the manual method of kiwifruit picking and the biological characteristics of the kiwifruit stem, a fruit-picking method for the robot was proposed, which separates the fruit from the stem while holding the fruit to prevent it from dropping. The picking method was then verified through a specially designed fruit-stem separation test. An end-effector was subsequently designed and manufactured based on this method; it approaches a fruit from the bottom, envelops and grabs the fruit from two sides, and then rotates upward to separate the fruit from the stem [26]. Furthermore, to investigate the distance over which the end-effector can drop kiwifruit into a container without damage, a low-damage impact study on kiwifruit was conducted [27].

The above key technologies were integrated and verified in standardized commercial kiwifruit orchards. Based on an agronomic analysis of the different production requirements, a comprehensive operating robot with a mechanical arm as its main structure was designed and constructed. This robot only needs to swap end-effectors and the corresponding control algorithms to switch between bud thinning, flower pollination, and robotic picking, as shown in Figure 12. Field experiments showed that the average time for the multifunctional robot to thin a bud, pollinate a flower, and pick a kiwifruit is 1.5 s, 1 s, and 2 s, respectively. For kiwifruit yield estimation, a counting accuracy of 90% can be achieved in video frames with little jitter.

Figure 12.

Constructed multifunctional precision operation robot. (a) Bud thinning; (b) flower pollination; (c) robotic picking.

Precision and sustainable intelligent agricultural production is needed to meet the demands of the growing world population. This research proposed key technologies for the fully intelligent production of kiwifruit, focusing on bud thinning, flower pollination, yield estimation, and robotic picking. By analyzing the agronomic characteristics of the different tasks, a general-purpose precision operation robot with a mechanical arm structure was designed and adopted for intelligent kiwifruit production; it can flexibly switch between bud thinning, flower pollination, and fruit picking by replacing the end-effector and control algorithm. In addition, accurate preharvest yield estimation provides data support for labor assessment. However, the adoption of intelligent agricultural technologies still faces several challenges, such as equipment maintenance costs and farmers’ acceptance of new technologies. Future research should therefore continue to explore more economical and practical intelligent equipment, strengthen farmer training and awareness of the new technologies, and promote the application of intelligent agricultural technologies more widely.

27. UAV imaging for estimating pineapple crop growth

Hormonal induction is performed on pineapple plants to control flowering time, forcing the transition from the vegetative to the generative phase using ethylene [28]. Based on farmers’ experience, ethylene is usually sprayed on pineapple plants 7–10 months after planting. This practice is performed without considering the level of crop growth. Hormonal induction of crops that are not mature enough may produce small fruit or no fruit at all; conversely, delayed hormone induction produces oversized fruit. Fruit that is too large has less export demand due to packaging and storage space limitations. It is therefore important to accurately estimate the hormone induction time based on the crop growth stage so as to meet the demands of local and foreign markets. The crop growth stage is determined from crop characteristics that include the length of the longest leaf (the D-leaf), the plant height, and the number of fresh leaves that emerge from each plant [29, 30]. A study by Usman et al. [29] reported that plant weight is an indicator parameter for determining the crop growth stage; this is measured by uprooting several plant samples and weighing them. The manual practice of measuring these parameters is, however, laborious, time-consuming, tedious, and limited to a small number of samples in large plantation areas. Moreover, weighing uprooted plant samples also reduces the yield because the plants do not grow well if replanted. Direct contact between operators and crops during growth parameter measurement may also expose the crop to disease transmission [31]. A nondestructive method that can be applied remotely is therefore required to replace the manual method of crop growth estimation, especially for large plantation areas. For this purpose, a study was conducted to investigate the potential of an unmanned aerial imaging system (UAIS) for estimating pineapple crop growth.

The UAIS comprised a drone, a multispectral camera, and a control module. The multispectral camera, mounted on a DJI Phantom 4 drone, consists of six sensor bands, namely visible (RGB), Red Edge (RE), Near-infrared (NIR), Red (R), Green (G), and Blue (B), each with a resolution of 2 MP. The drone was flown at 40 m above the ground in autopilot mode to capture multispectral images of the pineapple crop in all six bands. Precise position data, accurate to the centimeter level, were also acquired for each image using the built-in global positioning system (GPS). All captured images were then stitched and processed using Pix4Dmapper software to form an ortho-mosaicked image, which was further analyzed using Quantum GIS (QGIS) software to compute vegetation indices including the normalized difference vegetation index (NDVI), normalized difference red edge index (NDRE), and ratio vegetation index (RVI). Two pineapple crop growth parameters, plant height and D-leaf length, were then measured on eight plant samples from each of the 24 subplots. The average value of each parameter over the eight samples was calculated to obtain one representative growth parameter value per subplot, and these averages were correlated with the average values of the vegetation indices obtained from the multispectral images.

The results showed that among the three vegetation indices, NDVI attained the highest correlation with plant height and D-leaf length, with correlation coefficients (R) of 0.88 and 0.90, respectively. This was followed by NDRE, with R values of 0.86 and 0.87 for plant height and D-leaf length, respectively. The lowest correlation was for the RVI index, with R values of 0.83 and 0.84 for plant height and D-leaf length, respectively. These results demonstrate that multispectral images taken using the UAIS can be used to estimate the growth rate of the pineapple crop. Table 1 shows the correlation coefficients between the average crop growth parameters and the vegetation indices.

| Crop growth parameter | NDVI (R) | NDRE (R) | RVI (R) |
|---|---|---|---|
| Plant height (cm) | 0.88 | 0.86 | 0.83 |
| D-leaf length (cm) | 0.90 | 0.87 | 0.84 |

Table 1.

Correlation coefficients (R) between the average crop growth parameters and the vegetation indices.
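The reported analysis can be reproduced in outline with a simple Pearson correlation between subplot means; the values below are placeholders, not the study’s data:

```python
import numpy as np

# Hypothetical per-subplot means; the study used 24 subplots of 8 plants each.
ndvi = np.array([0.62, 0.58, 0.71, 0.66])
plant_height = np.array([48.0, 44.5, 55.2, 51.0])    # cm

r = np.corrcoef(ndvi, plant_height)[0, 1]            # Pearson correlation coefficient
print(f"R = {r:.2f}")
```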

The use of a UAIS for estimating pineapple crop growth rate has been briefly reported here. The good correlation between the crop growth parameters and the vegetation indices computed from UAIS multispectral images indicates that the UAIS can potentially replace the manual practice of crop growth parameter measurement. The crop growth stage information can be uploaded to an Internet-of-Things (IoT) based farm management system, which farm managers can then use as a reference for agronomic decisions.

28. Conclusion

Emerging technologies in precision agriculture have demonstrated promising results in revolutionizing the industry. Through the integration of machine learning, robotics, IoT sensors, and drones, precision agriculture offers unique opportunities to optimize resource utilization, enhance crop yields, and mitigate environmental risks. The abstracts presented in this chapter showed that the convergence of technology and agriculture not only increases operational efficiency and profitability but also promotes sustainability and resilience in the face of challenges such as climate change and food security. Advancements such as actuator failure detection for ICT-based orchard irrigation systems and the implementation of active back exoskeletons for rice farm lifting tasks highlight the industry’s commitment to efficiency and sustainability. Additionally, research into the influence of water pressure on spatial and temporal soil water distribution and the theoretical analysis of automatic pepper transplanter mechanisms underscore the importance of optimizing resource utilization and minimizing environmental impact. By fundamentally reshaping traditional farming methods, these innovations pave the way for sustainable and efficient practices amid evolving challenges and uncertainties. They point toward a future of agriculture in which data-driven decision-making and smart technologies play a central role in addressing the complex challenges facing the global food system.

References

1. Martello M et al. Assessing the temporal and spatial variability of coffee plantation using RPA-based RGB imaging. Drones. 2022;6(10):267
2. Binbin X, Jizhan L, Meng H, Jian W, Zhujie X. Research progress on autonomous navigation technology of agricultural robot. In: 2021 IEEE 11th Annual International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER). Jiaxing, China; 2021. pp. 891-898. DOI: 10.1109/CYBER53097.2021.9588152
3. You W et al. Data fusion of UWB and IMU based on unscented Kalman filter for indoor localization of quadrotor UAV. IEEE Access. 2020;8:64971-64981
4. Liu Y, Sun R, Liu J, Fan Y, Li L, Zhang Q. Research on the positioning method of autonomous mobile robot in structure space based on UWB. In: 2019 International Conference on High Performance Big Data and Intelligent Systems (HPBD&IS). Shenzhen, China; 2019. pp. 278-282. DOI: 10.1109/HPBDIS.2019.8735462
5. Thapa A, Horanont T. Digital twins in farming with the implementation of agricultural technologies. In: Boonpook W, Lin Z, Meksangsouy P, Wetchayont P, editors. Applied Geography and Geoinformatics for Sustainable Development. Springer Geography. Cham: Springer; 2023. DOI: 10.1007/978-3-031-16217-6_9
6. de Queiroz DM et al. Digital Agriculture. Springer; 2022
7. Chen C-J et al. Identification of fruit tree pests with deep learning on embedded drone to achieve accurate pesticide spraying. IEEE Access. 2021;9:21986-21997
8. Kurbanov RK, Zakharova NI. Determination of spring barley lodging area with help of unmanned aerial vehicle. In: Ronzhin A, Berns K, Kostyaev A, editors. Agriculture Digitalization and Organic Production. Smart Innovation, Systems and Technologies. Vol. 245. Singapore: Springer; 2022. DOI: 10.1007/978-981-16-3349-2_21
9. Ni X, Li C, Jiang H, Takeda F. Deep learning image segmentation and extraction of blueberry fruit traits associated with harvestability and yield. Horticulture Research. 2020;7:110. DOI: 10.1038/s41438-020-0323-3
10. Testa R et al. Economic sustainability of Italian greenhouse cherry tomato. Sustainability. 2014;6(11):7967-7981
11. Walker-Bone K, Palmer K. Musculoskeletal disorders in farmers and farm workers. Occupational Medicine. 2002;52(8):441-450
12. Duncan E et al. New but for whom? Discourses of innovation in precision agriculture. Agriculture and Human Values. 2021;38:1181-1199
13. Balsari P, Marucco P, Tamagnone M. A crop identification system (CIS) to optimise pesticide applications in orchards. The Journal of Horticultural Science and Biotechnology. 2009;84(6):113-116
14. Raja R et al. Real-time control of high-resolution micro-jet sprayer integrated with machine vision for precision weed control. Biosystems Engineering. 2023;228:31-48
15. Marras WS et al. Biomechanical risk factors for occupationally related low back disorders. Ergonomics. 1995;38(2):377-410
16. Fethke NB et al. Musculoskeletal pain among Midwest farmers and associations with agricultural activities. American Journal of Industrial Medicine. 2015;58(3):319-330
17. Thamsuwan O et al. Potential exoskeleton uses for reducing low back muscular activity during farm tasks. American Journal of Industrial Medicine. 2020;63(11):1017-1028
18. Ji X et al. SIAT-WEXv2: A wearable exoskeleton for reducing lumbar load during lifting tasks. Complexity. 2020;2020:1-12
19. Li G et al. Real-time detection of kiwifruit flower and bud simultaneously in orchard using YOLOv4 for robotic pollination. Computers and Electronics in Agriculture. 2022;193:106641
20. Li G et al. Multi-class detection of kiwifruit flower and its distribution identification in orchard based on YOLOv5l and Euclidean distance. Computers and Electronics in Agriculture. 2022;201:107342
21. Gao C et al. A novel pollination robot for kiwifruit flower based on preferential flowers selection and precisely target. Computers and Electronics in Agriculture. 2023;207:107762
22. Zhou Z et al. Real-time kiwifruit detection in orchard using deep learning on Android™ smartphones for yield estimation. Computers and Electronics in Agriculture. 2020;179:105856
23. Fu L et al. Fast and accurate detection of kiwifruit in orchard using improved YOLOv3-tiny model. Precision Agriculture. 2021;22:754-776
24. Suo R et al. Improved multi-classes kiwifruit detection in orchard to avoid collisions during robotic picking. Computers and Electronics in Agriculture. 2021;182:106052
25. Song Z et al. Canopy segmentation and wire reconstruction for kiwifruit robotic harvesting. Computers and Electronics in Agriculture. 2021;181:105933
26. Longsheng F et al. Kiwifruit recognition at nighttime using artificial lighting based on machine vision. International Journal of Agricultural and Biological Engineering. 2015;8(4):52-59
27. Wu Z et al. Coefficient of restitution of kiwifruit without external interference. Journal of Food Engineering. 2022;327:111060
28. Putra AN et al. Pineapple biomass estimation using unmanned aerial vehicle in various forcing stage: Vegetation index approach from ultra-high-resolution image. Smart Agricultural Technology. 2021;1:100025
29. Usman M et al. Statistical model and prediction of pineapple plant weight. Science International. 2015;27(2):937-943
30. Nurul Shamimi A et al. Growth performance of selected pineapple hybrids on peat soil. Advances in Plant Science and Technology. 2017;24:64-67
31. Joy P, Sindhu G. Diseases of pineapple (Ananas comosus): Pathogen, symptoms, infection, spread and management. In: Consultado Agosto. 2012. Available from: https://www.researchgate.net/profile/Pp-Joy/publication/306017784_DISEASES_OF_PINEAPPLE_Ananas_comosus_Pathogen_symptoms_infection_spread_management/links/57aaeb7308ae42ba52ae66cb/DISEASES-OF-PINEAPPLE-Ananas-comosus-Pathogen-symptoms-infection-spread-management.pdf
