S. Mutti and 2 more

The precise localization of mobile robots in unstructured environments is of utmost importance for many industrial and field applications, especially when the mobile robot is part of a more complex kinematic chain, such as a mobile manipulator. Precise localization affects the outcome of tasks that rely on an open-loop kinematic computation, such as work-station docking procedures. To achieve repeatable and precise localization and positioning, mobile robots generally rely on onboard sensors, most commonly 2D laser scanners, whose readings are subject to noise and numerous disturbing factors (e.g., material reflectance). Problems arise when precise localization is needed in dynamic and unstructured environments, where generally applicable methods do not perform adequately or are time-consuming to set up. In this work, we propose a cloud-edge computing architecture to deploy a recurrent neural network (RNN) based registration system, which uses a pair of consecutive LiDAR readings to estimate a fixed transformation. The capability of RNNs to process contiguous inputs helps suppress the errors embedded in individual laser scanner readings and yields a more precise registration estimate. In this way, the RNN can estimate a displacement error from multiple consecutive readings and act as a sensor in a closed-loop control scheme. To tackle dynamic and unstructured environments, the model is first trained on synthetic LiDAR data to embed rigid transformations into the deep learning model and is then rapidly fine-tuned on local scenarios. After optimization of the model architecture and hyperparameters, the devised model is tested in different scenarios, comparing the precise positioning capability of the autonomous mobile robot (AMR) with that of a classical registration algorithm. The results suggest that an RNN model can greatly improve the registration precision of laser scanner signals and, consequently, the precise positioning efficiency of AMRs.
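As an illustration of the registration idea described above, the following is a minimal sketch (in PyTorch) of a recurrent model that consumes a short sequence of 2D LiDAR scans and regresses a planar rigid transform (dx, dy, dtheta). The layer sizes, the choice of a GRU, and the 360-beam scan format are assumptions made for illustration, not the authors' exact architecture.

```python
# Sketch of an RNN-based scan-registration model: a sequence of 2D LiDAR
# scans is encoded per time step, passed through a GRU, and the final hidden
# state is mapped to a planar rigid transform (dx, dy, dtheta).
# All sizes and the GRU choice are illustrative assumptions.
import torch
import torch.nn as nn

class ScanRegistrationRNN(nn.Module):
    def __init__(self, n_beams: int = 360, hidden: int = 256):
        super().__init__()
        # Each time step is one laser scan (one range value per beam).
        self.encoder = nn.Sequential(nn.Linear(n_beams, hidden), nn.ReLU())
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        # Output: the rigid transform between the first and last scan.
        self.head = nn.Linear(hidden, 3)  # (dx, dy, dtheta)

    def forward(self, scans: torch.Tensor) -> torch.Tensor:
        # scans: (batch, seq_len, n_beams), e.g. seq_len = 2 consecutive readings
        feats = self.encoder(scans)
        _, h = self.rnn(feats)
        return self.head(h[-1])

# Toy usage: a batch of eight pairs of consecutive 360-beam scans.
model = ScanRegistrationRNN()
pair = torch.randn(8, 2, 360)
delta_pose = model(pair)  # shape (8, 3)
```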

S. Mutti and 2 more

The precise localization of mobile robots is of utmost importance for many industrial applications, especially when the mobile robot is part of a more complex kinematic chain, such as a mobile manipulator. Furthermore, precise localization strongly affects the outcome of tasks that rely on an open-loop kinematic computation, such as work-station docking procedures. To achieve repeatable and precise localization and positioning, mobile robots generally rely on onboard sensors, most commonly 2D laser scanners, whose readings are subject to noise and numerous disturbing factors (e.g., material reflectance). In this work, we propose a recurrent neural network (RNN) based registration system, which uses a pair of consecutive LiDAR readings to estimate a fixed transformation. The capability of RNNs to process contiguous inputs helps suppress the errors embedded in individual laser scanner readings and yields a more precise registration estimate. In this way, the RNN can estimate a displacement error from multiple consecutive readings and act as a sensor in a closed-loop control scheme. After optimization of the model architecture and hyperparameters, the devised model is tested in different scenarios, comparing the precise positioning capability of the autonomous mobile robot (AMR) with that of a classical registration algorithm. The results suggest that an RNN model can greatly improve the registration precision of laser scanner signals and, consequently, the precise positioning efficiency of AMRs.
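To show how such an estimate could act as a sensor in a closed-loop positioning scheme, here is a hedged sketch of a docking loop that feeds the estimated displacement back as a proportional correction. The callables read_scan, send_velocity, and estimate_transform are hypothetical placeholders for the robot's actual interfaces and for the learned registration model; they are not part of any real API, and the gains and tolerances are illustrative.

```python
# Hedged sketch: the registration estimate is used as feedback in a docking loop.
# read_scan, send_velocity, and estimate_transform are hypothetical placeholders.
import numpy as np

def dock(estimate_transform, read_scan, send_velocity, reference_scan,
         tol_xy=0.005, tol_theta=np.deg2rad(0.5), gain=0.8, max_iters=100):
    """Drive the estimated (dx, dy, dtheta) error toward zero."""
    for _ in range(max_iters):
        current_scan = read_scan()
        # The learned registration acts as the sensor: displacement between
        # the stored reference scan and the current scan.
        dx, dy, dtheta = estimate_transform(reference_scan, current_scan)
        if np.hypot(dx, dy) < tol_xy and abs(dtheta) < tol_theta:
            return True  # within the docking tolerance
        # Proportional feedback correction sent to the mobile base.
        send_velocity(gain * dx, gain * dy, gain * dtheta)
    return False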
Anemia is one of the global public health challenges that particularly affect children and pregnant women. A study by the WHO indicates that 42% of children below 6 years of age and 40% of pregnant women worldwide are anemic; overall, anemia affects about 33% of the world's population, largely owing to iron deficiency. Non-invasive techniques, such as those based on machine learning algorithms, are increasingly used in the diagnosis and detection of clinical conditions, and anemia detection is no exception. In this study, machine learning algorithms were used to detect iron-deficiency anemia, applying Naïve Bayes, CNN, SVM, k-NN, and Decision Tree classifiers. This enabled us to compare images of the conjunctiva of the eyes, the palpable palm, and the colour of the fingernails to determine which of them yields higher accuracy for detecting anemia in children. The technique used in this study comprised three stages: collection of datasets (images of the conjunctiva of the eyes, fingernails, and palpable palm); preprocessing of the images; and feature extraction, in which the region of interest was segmented and each component of the CIE L*a*b* colour space (CIELAB) was obtained. Models were then developed for the detection of anemia using the various algorithms. The CNN had the highest accuracy, 99.12%, in the detection of anemia, followed by Naïve Bayes with an accuracy of 98.96%, while Decision Tree and k-NN achieved 98.29% and 98.92% accuracy, respectively. However, the SVM had the lowest accuracy, 95.4%, on the palpable palm. The performance of the models indicates that the non-invasive approach is an effective mechanism for anemia detection.
Keywords: iron deficiency, anemia, non-invasive, machine learning, data augmentation, algorithms, region of interest.
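To make the colour-feature step concrete, the following is an illustrative sketch (using OpenCV and scikit-learn) of extracting mean CIE L*a*b* values from a segmented region of interest and fitting the classical classifiers named above. The binary ROI mask, the mean-value features, and the placeholder training data are assumptions for illustration; they are not the study's exact preprocessing or training setup.

```python
# Illustrative pipeline: segment a region of interest (conjunctiva, fingernail,
# or palm), convert it to CIE L*a*b*, and train classical classifiers on the
# mean channel values. Data and feature choice are placeholders.
import cv2
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

def lab_features(image_bgr: np.ndarray, roi_mask: np.ndarray) -> np.ndarray:
    """Mean L*, a*, b* values over the segmented region of interest."""
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    return np.array([lab[..., c][roi_mask > 0].mean() for c in range(3)])

# X: one (L*, a*, b*) feature vector per image; y: anemic / non-anemic labels.
# Placeholder data stands in for the collected image datasets.
X = np.random.rand(100, 3)
y = np.tile([0, 1], 50)

models = {
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(),
    "Decision Tree": DecisionTreeClassifier(),
}
for name, clf in models.items():
    clf.fit(X, y)  # in the study, each model would be evaluated per image type
```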