Abstract: Several methods exist to detect and distinguish collisions of robotic systems with their environment, since this information is a critical dependency of many tasks. These methods are predominantly based on thresholds in combination with filters, models, or offline-trained machine learning models. To improve adaptation and thereby enable more autonomous operation of robots in new environments, this work evaluates the applicability of an incremental learning approach. The method addresses online learning and recognition of motion and contact episodes of robotic systems from proprioceptive sensor data using machine learning. The objective is to learn new category templates representing previously encountered situations of the actuators and to improve them based on newly gathered similar data. This is achieved using an artificial neural network based on adaptive resonance theory (ART). The input samples from the robot’s actuator measurements are preprocessed into frequency spectra. This enables the ART neural network to incrementally learn recurring episodic patterns from the preprocessed data. An evaluation based on preliminary experimental data from a grasping motion of a humanoid robot’s arm encountering contacts is presented and suggests that this is a promising approach. PubDate: 2023-09-12
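The ART mechanism described above can be sketched roughly as follows. This is a generic fuzzy-ART-style category search; the choice function, vigilance parameter `rho`, and learning rate `beta` are textbook defaults, not the authors' implementation, and the inputs stand in for the preprocessed frequency spectra:

```python
# Illustrative fuzzy-ART category learning: match an input against stored
# templates, refine the best-matching one if it passes the vigilance test,
# otherwise create a new category (incremental learning of episodes).

def fuzzy_and(x, w):
    return [min(a, b) for a, b in zip(x, w)]

def norm1(v):
    return sum(v)

class FuzzyART:
    def __init__(self, rho=0.75, beta=1.0, alpha=0.001):
        self.rho, self.beta, self.alpha = rho, beta, alpha
        self.categories = []  # learned templates

    def present(self, x):
        """Return the index of the matched or newly created category."""
        # Rank existing categories by the choice function T_j.
        scored = sorted(
            range(len(self.categories)),
            key=lambda j: -norm1(fuzzy_and(x, self.categories[j]))
                           / (self.alpha + norm1(self.categories[j])))
        for j in scored:
            w = self.categories[j]
            # Vigilance test: does the template match closely enough?
            if norm1(fuzzy_and(x, w)) / norm1(x) >= self.rho:
                # Resonance: refine the template toward the input.
                self.categories[j] = [
                    self.beta * m + (1 - self.beta) * wi
                    for m, wi in zip(fuzzy_and(x, w), w)]
                return j
        # No template is close enough: learn a new category.
        self.categories.append(list(x))
        return len(self.categories) - 1
```

Presenting a similar spectrum twice refines the same category, while a dissimilar one spawns a new template, which is the incremental behavior the abstract refers to.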
Abstract: Simultaneous localization and mapping (SLAM) is a critical technology in the field of robotics. Over the past decades, numerous SLAM algorithms based on 2D LiDAR have been proposed. In general, these algorithms achieve good results in indoor environments. However, for geometrically degenerate environments such as long hallways, robust localization of robots remains a challenging problem. In this study, we focus on the challenges faced by LiDAR SLAM in such conditions. We propose a 2D LiDAR SLAM algorithm, Lmapping, that employs an IMU-centric data-processing pipeline. In the front-end, a point cloud is directly registered to a probabilistic map; environment recognition is accomplished using a new method that relies on LiDAR measurements and is better suited to front-end matching based on grid maps. LiDAR odometry and IMU pre-integration are then integrated to build a local factor graph in the sliding window of the submap. When the environment is degraded, an error-state Kalman filter (ESKF) is added as a constraint to correct the IMU bias. In the back-end, accumulated errors from the front-end are reduced through mutual matching within and between submaps, together with loop detection. To improve flexibility for different sensor combinations, Lmapping supports multiple LiDAR inputs and facilitates initialization with a common six-axis IMU. Extensive experiments have shown that Lmapping greatly outperforms the current mainstream 2D-SLAM algorithm (Cartographer) in terms of mapping quality and localization accuracy, with high efficiency in degraded environments. PubDate: 2023-09-11
Abstract: Vision-based localization is susceptible to interference from dynamic objects in the environment, resulting in decreased localization accuracy and even tracking loss. Hence, sensor fusion with IMUs or motor encoders has been widely adopted to improve positioning accuracy and robustness in dynamic environments. However, commonly used loosely coupled fusion localization methods cannot completely eliminate the error introduced by dynamic objects. In this paper, we propose a novel adaptive visual-inertial odometry via interference quantization, namely IQ-VIO. To quantify the confidence of pose estimation through vision-frame analysis, we first introduce the feature coverage and a dynamic-scene interference index based on image information entropy. Then, based on the interference index, we establish the IQ-VIO multi-sensor fusion model, which adaptively adjusts the measurement error covariance matrix of an extended Kalman filter to suppress and eliminate the impact of dynamic objects on localization. We verify the IQ-VIO algorithm on the KAIST Urban dataset and in real scenes. Results show that our method achieves favorable performance against other algorithms. In particular, under challenging scenes such as low texture, the RPE of our algorithm decreases by at least twenty percent. Our approach can effectively eliminate the impact of dynamic objects in a scene and obtain higher positioning accuracy and robustness than conventional methods. PubDate: 2023-08-23
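The entropy-based idea can be illustrated with a minimal sketch: compute the Shannon entropy of a frame's intensity histogram and inflate the filter's measurement covariance when the frame carries little information (low texture). The scaling law and all parameter values here are hypothetical stand-ins, not IQ-VIO's actual model:

```python
import math

def image_entropy(gray, levels=256):
    """Shannon entropy (bits) of a grayscale intensity histogram."""
    hist = [0] * levels
    for px in gray:
        hist[px] += 1
    n = float(len(gray))
    return -sum((h / n) * math.log2(h / n) for h in hist if h)

def scaled_measurement_cov(base_cov, entropy, max_entropy=8.0, k=4.0):
    """Inflate the EKF measurement covariance when frame entropy is low
    (little texture -> the visual measurement is less trustworthy)."""
    deficit = max(0.0, 1.0 - entropy / max_entropy)
    return base_cov * (1.0 + k * deficit)
```

A textureless frame (entropy 0) thus down-weights the visual update strongly, while a fully informative frame leaves the covariance unchanged.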
Abstract: This paper presents a posture stabilization algorithm that can respond to slope changes caused by irregular ground obstacles and to unexpected impacts during walking of a quadruped robot. For this purpose, we devised a strategy to generate vertical and horizontal ground reaction forces on the feet of a quadruped walking robot using a cart-inverted-pendulum model with double input and double output (DIDO). Using a linear quadratic regulator (LQR), a stabilizing moment and a stabilizing force were generated to control the robot’s torso inclination and lateral movement. Quadratic programming (QP) constraint conditions were changed according to whether an individual foot was in contact with the ground during walking, so as to generate appropriate ground reaction forces for the support feet. In addition, vertical speed control of the swing feet was performed in the swing-down phase so that the robot was able to land softly on the ground. Finally, in an environment in which external impact disturbances and irregular ground disturbances were applied together, we verified the performance of the proposed algorithm using the GAZEBO simulation and compared it with our previous inverted-pendulum-model-based algorithm. PubDate: 2023-08-23
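The LQR machinery behind such a stabilizing controller can be shown in its simplest form. The sketch below is a scalar discrete-time LQR (not the paper's DIDO cart-inverted-pendulum model): iterate the Riccati recursion to a fixed point and read off the state-feedback gain; the system and weight values are arbitrary examples:

```python
def dlqr_scalar(a, b, q, r, tol=1e-12, max_iter=10000):
    """Solve the scalar discrete algebraic Riccati equation by fixed-point
    iteration and return the optimal feedback gain k (u = -k * x)."""
    p = q
    for _ in range(max_iter):
        p_next = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
        if abs(p_next - p) < tol:
            break
        p = p_next
    return (a * b * p) / (r + b * b * p)

def simulate(a, b, k, x0, steps):
    """Roll out the closed loop x_{t+1} = a x_t + b u_t with u = -k x."""
    x = x0
    for _ in range(steps):
        x = a * x + b * (-k * x)
    return x
```

For an unstable plant (|a| > 1) the resulting gain places the closed-loop pole inside the unit circle, so the state decays; the paper applies the same principle with matrix-valued states to produce the stabilizing moment and force.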
Abstract: For pediatric rehabilitation, obtaining accurate coupled human-exoskeleton system models is challenging due to unknown model parameters caused by children’s dynamic growth and development. These factors make it difficult to establish precise and standardized models for exoskeleton control. Additionally, external disturbances, such as unpredictable movements or involuntary muscle contractions, further complicate the control process and must be addressed. This work presents a computed torque control (CTC) scheme compensated by a radial basis function neural network (RBFNN) for an uncertain lower-limb exoskeleton system. First, the design, hardware architecture, and experimental procedure of a pediatric exoskeleton are briefly described. Thereafter, the proposed adaptive RBFNN-CTC (ARBFNN-CTC) is presented, where the adaptation of the network weights depends on the Gaussian function and the Lyapunov equation. The adaptive RBFNN estimates the unknown model dynamics and compensates the CTC for effective gait tracking of the coupled system in passive-assist mode. A Lyapunov stability analysis is presented to ensure the convergence of the error states into a significantly small domain. Finally, an experimental study with a pediatric subject (12 years old) is carried out to investigate the effectiveness of the proposed control scheme. The gait tracking results show that the ARBFNN-CTC outperforms the traditional CTC by nearly 40% over three gait cycles. Furthermore, the proposed approach’s generalizability is validated across various gait cycles, specifically at 3, 10, 20, and 30 cycles. The high correlation coefficients of 0.996, 0.997, and 0.999 for the hip, knee, and ankle joints, respectively, at thirty gait cycles highlight the potential of the ARBFNN-CTC scheme in achieving effective and consistent gait training outcomes over extended periods. PubDate: 2023-08-22
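The RBFNN compensation idea can be sketched in one dimension: the network output is a weighted sum of Gaussian basis functions of the tracking-error state, and the weights adapt online in a gradient style analogous to a Lyapunov-derived law. Centers, width, and adaptation gain below are illustrative choices, not the paper's tuned values:

```python
import math

class RBFCompensator:
    def __init__(self, centers, width=1.0, gain=0.5):
        self.centers = centers
        self.width = width
        self.gain = gain
        self.weights = [0.0] * len(centers)

    def basis(self, x):
        """Gaussian radial basis functions evaluated at x."""
        return [math.exp(-((x - c) ** 2) / (2 * self.width ** 2))
                for c in self.centers]

    def output(self, x):
        """Network estimate of the unmodeled dynamics at state x."""
        return sum(w * h for w, h in zip(self.weights, self.basis(x)))

    def adapt(self, x, error, dt=0.01):
        # Error-driven weight update: each weight moves in proportion to
        # its basis activation and the current tracking error.
        for i, h in enumerate(self.basis(x)):
            self.weights[i] += self.gain * h * error * dt
```

Fed a persistent residual torque, the network output converges toward it, which is how the compensator relieves the CTC of the unknown dynamics.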
Abstract: To improve the therapeutic effect of robot-assisted rehabilitation for stroke patients, a novel ankle rehabilitation robot was developed based on a cable-actuated parallel mechanism and a wheelchair, considering its mechanical design, actuation, and control scheme. Due to the unidirectional force-transmission property of cables, the robot was redundantly actuated, and its stiffness could be regulated by adjusting the cable forces. This variable-stiffness property improved the compliance and safety of the ankle rehabilitation robot during human–robot interaction. To increase the stiffness variation range, a flexure-based variable-stiffness device (VSD) was proposed and placed in the cable actuation unit. The large-deflection behavior of the VSD was modeled based on Euler-Bernoulli beam theory, and the stiffness-force relationship was analyzed by finite element methods. The resulting VSD had a simple and compact structure and exhibited a mildly nonlinear stiffness-force relationship. Due to the redundant actuation, the pose and stiffness of the robot could be controlled simultaneously. To control the robot's stiffness according to a patient's rehabilitation program, the cable force distribution problem must be solved. Since the stiffness model of the robot is nonlinear, it is difficult to find the cable forces using analytical methods. Thus, the cable force distribution problem for stiffness control was formulated as an optimization model and solved using numerical methods. A simulation example was implemented to verify the effectiveness of the analysis and algorithm. PubDate: 2023-08-14
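The interplay of unidirectional cables, stiffness, and force distribution can be seen in a deliberately simplified toy: an antagonistic two-cable joint in which cables can only pull (f >= f_min), the net joint force is f1 - f2, and stiffness grows with the total pretension f1 + f2. This is a stand-in for intuition, not the paper's optimization model:

```python
def cable_forces(net_force, total_pretension, f_min=1.0):
    """Distribute tensions for a two-cable antagonistic joint.

    Solving f1 - f2 = net_force and f1 + f2 = total_pretension gives a
    unique pair; the positive-tension constraint f >= f_min is what makes
    some (force, stiffness) demands infeasible.
    """
    f1 = (total_pretension + net_force) / 2.0
    f2 = (total_pretension - net_force) / 2.0
    if f1 < f_min or f2 < f_min:
        raise ValueError("demanded pretension too low for this net force")
    return f1, f2
```

In the robot's real (nonlinear, redundant) model there is no such closed form, which is why the paper resorts to a numerical optimization over the cable forces.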
Abstract: When a robot end-effector contacts human skin, it is difficult to adjust the contact force autonomously in an unknown environment. Therefore, a robot force control algorithm based on reinforcement learning with a state transition model is proposed. In this paper, the dynamic relationship between a robot end-effector and skin contact is established using an impedance control model. To solve the problem that the reference trajectory is difficult to obtain, a skin mechanics model is established to estimate the environmental boundary for impedance control. To address the problem that impedance control parameters are difficult to tune, a reinforcement learning algorithm is constructed by combining a neural network and the cross-entropy method for control parameter search. The state transition model, constructed using a BP neural network, can be updated offline, accelerating the search for optimal control parameters and mitigating the slow convergence of reinforcement learning. The uncertainty of the contact process is accounted for using a probabilistic, statistics-based approach to policy search. Experimental results show that the model-based reinforcement learning algorithm achieves smoother force control than traditional PID algorithms, with errors essentially within ±0.2 N during the online experiment. PubDate: 2023-08-07
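The cross-entropy method (CEM) used for the parameter search can be sketched in a few lines: sample candidate parameters from a Gaussian, keep the lowest-cost elites, and refit the Gaussian to them. The quadratic cost below is a toy stand-in for a rollout on the learned state-transition model, and the population sizes are arbitrary:

```python
import random

def cem_search(cost, mu=0.0, sigma=2.0, pop=50, elites=10, iters=30,
               seed=0):
    """One-dimensional cross-entropy method: returns the mean of the
    final sampling distribution as the best parameter estimate."""
    rng = random.Random(seed)
    for _ in range(iters):
        # Sample a population of candidate parameters.
        samples = [rng.gauss(mu, sigma) for _ in range(pop)]
        # Keep the elite fraction with the lowest cost.
        best = sorted(samples, key=cost)[:elites]
        # Refit the sampling distribution to the elites.
        mu = sum(best) / elites
        sigma = (sum((b - mu) ** 2 for b in best) / elites) ** 0.5 + 1e-9
    return mu
```

Because each candidate is evaluated against a learned model rather than the real skin contact, many such iterations can run offline, which is the speed-up the abstract describes.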
Abstract: This paper provides simulation studies and experimental validation of cooperative coverage control of a multi-UAV system for quickly performing an initial assessment of the environment immediately after an earthquake. A quick response by the disaster management team in the early hours after an earthquake can play a significant role in reducing the extent of damage and casualties. We consider an earthquake disaster scenario wherein four fixed-wing UAVs initially fly over the affected area and perform a quick general scan to identify critical areas where people have suffered severe injuries or buildings have been destroyed. Multi-rotor UAVs then fly over the critical areas identified by the fixed-wing UAVs, as close as possible, to extract more detailed information and to precisely localize victims and survivors, as well as damaged infrastructure, as stationary targets in a cooperative search and coverage control problem. For the first phase of the mission, we simulate a cooperative multi-agent system comprising four fixed-wing UAVs. For the second phase, we experimentally implement cooperative search and coverage control of a multi-UAV system consisting of three DJI Tello quadcopters to gather more detailed information about the critical section identified in the first phase and to find the exact locations of victims and survivors. In cooperative coverage control, agents communicate with each other and exchange information; therefore, at every moment of the mission, all agents share a single cognitive map of the environment. A comparison of cooperative and non-cooperative methods shows that, with the collaborative distributed control method, the uncertainty of the environment reaches the minimum acceptable value much earlier and maximum coverage of the area is achieved much faster. PubDate: 2023-08-03
Abstract: We present WCQR-III, an untethered bioinspired climbing robot capable of versatile locomotion, including ground walking, wall climbing, and ground-to-wall transition. Inspired by gecko lizards, WCQR-III features a structure comprising four feet and one tail. The foot design incorporates a switching mechanism to transition seamlessly between walking and climbing modes. A spiny claw provides wall adhesion, while a rubber pad offers friction and cushioning for ground walking. Leveraging screw theory, we establish a kinematic model to analyze the robot's mobility and transition ability. In the walking mode, a trotting gait is adopted, while the climbing mode introduces a detaching angle, a pause, and a backswing movement of the spiny toes, facilitating easy detachment from surfaces. An offline search algorithm optimizes the motion trajectory. Mobility analysis of different configurations confirms that a crouched posture is necessary for a successful ground-to-wall transition. Experimental verification on WCQR-III demonstrates a maximum speed of 0.46 m/s on horizontal ground, 0.23 m/s on vertical walls, and successful achievement of the ground-to-wall transition. PubDate: 2023-07-18
Abstract: Place recognition (PR) is a fundamental task in autonomous robot systems that is still actively being researched. In recent years, CNN-based place recognition methods have surpassed classical methods. However, place recognition in the thermal infrared (TIR) image domain has shown poor performance with both traditional and CNN-based methods, due to the appearance variation of the same place throughout the day caused by varying temperature differences. In this paper, we propose a GAN-based nighttime-to-daytime thermal image translation model that translates thermal images captured at different times of the day into contrast-consistent and detail-preserving images, thus achieving time-agnostic thermal image representations. By applying our GAN-based model to input thermal images for place recognition tasks, we achieved a top-1 accuracy of 80.69% on the STHeReO dataset, outperforming other baseline methods. PubDate: 2023-06-28 DOI: 10.1007/s11370-023-00473-7
Abstract: Increasingly complex application environments have raised the demands on ground mobile robots in terms of environmental adaptability, autonomous obstacle avoidance, and self-rescue. In addition to multi-sensor fusion and control strategies, novel locomotion systems are a crucial research direction. Here we propose a novel hybrid-locomotion ground mobile robot, called the arm-wheel-track robot (AWTR). It combines two locomotion systems: wheeled and tracked locomotion. Two multi-degree-of-freedom arms are mounted on the front and rear of its chassis. The arms can assist the robot in transforming between locomotion modes, surmounting obstacles, recovering from falls, and more. Two ultrasonic sensors and a tilt sensor are mounted on the robot to perceive the environment and its own posture. The ultrasonic sensor mounted on the forearm achieves a more comprehensive perception of the environment ahead thanks to the extra workspace provided by the forearm. We establish the relationship between terrain, sensor data, and forearm posture, and develop locomotion strategies for different terrains, so that the robot can classify terrains and execute the corresponding locomotion strategies autonomously. We built a prototype and conducted experiments on different terrains. The results verified the robot’s movement performance, the effectiveness of the terrain perception method, and the locomotion strategies for different terrains. PubDate: 2023-06-26 DOI: 10.1007/s11370-023-00472-8
Abstract: Continuum robots (CRs) are promising tools for safe interaction with complex environments. Forces caused by contact between a CR and obstacles in a cluttered environment can change its shape. To improve the robot’s performance in a constrained environment, accurate estimation of the contact forces and the shape of the robot is necessary. To address this problem, this paper proposes an optimization-based method that simultaneously estimates the shape of, and the forces acting on, a CR by employing the quasi-static Cosserat model in 3D. The positions of multiple points on the robot, taken as input, are determined using magnetic localization, which does not require line of sight. The proposed method can estimate the robot's shape and the contact forces over a wide range of conditions, from the case where the number and positions of the contact forces are known and their magnitudes and directions are to be estimated, to the case where no force information is available. To evaluate the performance of the proposed method under realistic conditions, several experiments were conducted. Experiments for the two-force case show 4.8% and 10% error in the magnitude of the estimated force for the magnetic and stereo-vision-based localization systems, respectively. PubDate: 2023-06-22 DOI: 10.1007/s11370-023-00469-3
Abstract: With the rapid aging of the global population, service robots providing living assistance in indoor scenes will play a crucial role in the field of elderly care in the future. Service robots need to detect multiple targets when completing auxiliary tasks. However, indoor scenes are usually complex and contain many types of interference factors, which poses great challenges for multiple-target detection. To overcome this technical difficulty, a novel improved Mask RCNN method for multiple-target detection in complex indoor scenes is proposed in this paper. The improved model uses Mask RCNN as the network framework. On this basis, the Convolutional Block Attention Module (CBAM), with its channel and spatial attention mechanisms, is integrated, and the influence of different backgrounds, distances, angles, and interference factors is comprehensively considered. Meanwhile, in order to evaluate the detection and identification performance of the established model, a comprehensive evaluation system based on the loss function and mean average precision (mAP) is established. For verification, experiments on detection and identification under different distances, backgrounds, postures, and interference factors were conducted. The results demonstrate that the designed model improves accuracy and has better anti-interference ability than other methods while maintaining nearly the same detection speed. This research will promote the application of intelligent service robots in the field of perception and target grasping. PubDate: 2023-06-21 DOI: 10.1007/s11370-023-00471-9
Abstract: Simultaneous object recognition and pose estimation are two key functionalities for robots to safely interact with humans as well as environments. Although both object recognition and pose estimation use visual input, most state-of-the-art methods tackle them as two separate problems, since the former needs a view-invariant representation while object pose estimation necessitates a view-dependent description. Nowadays, multi-view convolutional neural network (MVCNN) approaches show state-of-the-art classification performance. Although MVCNN object recognition has been widely explored, there has been very little research on multi-view object pose estimation methods, and even less on addressing these two problems simultaneously. The poses of the virtual cameras in MVCNN methods are often pre-defined, which limits the applicability of such approaches. In this paper, we propose an approach capable of handling object recognition and pose estimation simultaneously. In particular, we develop a deep object-agnostic entropy estimation model capable of predicting the best viewpoints of a given 3D object. The obtained views of the object are then fed to the network to simultaneously predict the pose and category label of the target object. Experimental results showed that the views obtained from such positions are descriptive enough to achieve a good accuracy score. Furthermore, we designed a real-life drink-serving scenario to demonstrate how well the proposed approach works in real robot tasks. Code is available online at: https://github.com/SubhadityaMukherjee/more_mvcnn. PubDate: 2023-06-20 DOI: 10.1007/s11370-023-00468-4
Abstract: A source-seeking algorithm for a mobile robot is a method for the robot to locate and move towards a specific source, such as a light or sound. Developing such algorithms typically involves using sensors, such as cameras or microphones, to detect the source and calculate its location. The algorithm may also incorporate feedback mechanisms to adjust the robot's movement in real time based on changes in the environment or in the location of the source. In autonomous vehicles, seeking the source of a scalar signal with a non-holonomic unicycle is a control problem where the goal is for the vehicle to navigate to the location of the signal's source while maintaining a stable trajectory. In this paper, a control algorithm designed to account for the non-holonomic constraints of the unicycle model, as well as noise in the signal, is used to solve this problem and guide the vehicle to the source of the signal while maintaining stability. Additionally, this research proposes a source-seeking algorithm based on extremum-seeking control (ESC) as an optimization technique to overcome the limitations listed above. The simulation results show that the stability of the proposed system almost reached 100%. PubDate: 2023-06-08 DOI: 10.1007/s11370-023-00470-w
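The core ESC principle can be sketched in one dimension: a sinusoidal dither probes the unknown signal field, the demodulated product estimates the local gradient, and an integrator climbs it toward the source. The gains and the quadratic signal model below are illustrative; the paper applies the same principle to the unicycle's motion:

```python
import math

def esc_seek(J, x0=0.0, a=0.1, w=5.0, k=1.0, dt=0.01, steps=20000):
    """Scalar perturbation-based extremum seeking: drive x_hat toward
    the maximizer of the unknown (measurable) signal field J."""
    x_hat, t = x0, 0.0
    for _ in range(steps):
        x = x_hat + a * math.sin(w * t)      # dither the input
        grad_est = J(x) * math.sin(w * t)    # demodulate the measurement
        x_hat += k * grad_est * dt           # integrate: gradient ascent
        t += dt
    return x_hat
```

On average the demodulated term equals (a/2)·J'(x_hat), so the estimate drifts uphill without ever needing an explicit gradient measurement, which is what makes ESC attractive for noisy scalar-signal seeking.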
Abstract: Driven by the shortage of qualified nurses and the increasing average age of the population, ambient assisted living using intelligent service robots and smart home systems has become an excellent choice to free up caregiver time and energy and to provide users with a sense of independence. However, users’ unique environments and differences in their ability to express themselves through different interaction modalities make intention recognition and interaction between user and service system very difficult, limiting the use of these new nursing technologies. This paper presents a multimodal domestic service robot interaction system and proposes a multimodal fusion algorithm for intention recognition to deal with these problems. The impacts of short-term and long-term changes were taken into account. The implemented interaction modalities include touch, voice, myoelectric gesture, visual gesture, and haptics. Users can freely choose one or more modalities through which to express themselves. Virtual games and virtual activities of independent living were designed for pre-training and for evaluating users’ abilities to use different interaction modalities in their unique environments. A domestic service robot interaction system was built, on which a set of experiments was carried out to test the system’s stability and intention recognition ability in different scenarios. The experimental results show that the system is stable and effective and can adapt to different scenarios. In addition, the intention recognition rate in the experiments was 93.62%. Older adults could master the system quickly and use it to obtain assistance in their independent living. PubDate: 2023-06-04 DOI: 10.1007/s11370-023-00466-6
Abstract: This paper introduces a new method to solve the collision-free path planning problem for industrial robots, considering safe human–robot coexistence. The aim is to keep the robot at a safe distance from the human while letting the robot’s tip reach the target location through a smooth path. The proposed method is iterative, and each iteration provides random candidate waypoints for the robot’s tip. The waypoint that the robot will follow in each iteration is determined by solving an optimization problem. The objective function is formulated considering the distance between the human and the robot, as well as the criteria for the robot’s tip to reach the target by following a smooth path. The human and the robot in the environment are represented by capsules, and the minimum distance between these capsules is computed using the Gilbert–Johnson–Keerthi algorithm. The simulation results demonstrate the performance of the proposed method for different scenarios involving human–robot coexistence. PubDate: 2023-06-02 DOI: 10.1007/s11370-023-00465-7
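The paper computes capsule-to-capsule distance with the general GJK algorithm; for the capsule special case the same quantity also has a closed form, namely the minimum distance between the two axis segments minus the radii. The sketch below implements that analytic special case (standard clamped-projection formulation), not GJK itself:

```python
def seg_seg_distance(p1, q1, p2, q2):
    """Minimum distance between 3D segments p1-q1 and p2-q2."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    clamp = lambda v: min(max(v, 0.0), 1.0)
    d1 = [b - a for a, b in zip(p1, q1)]   # direction of segment 1
    d2 = [b - a for a, b in zip(p2, q2)]   # direction of segment 2
    r = [a - b for a, b in zip(p1, p2)]    # p1 - p2
    a, e = dot(d1, d1), dot(d2, d2)
    f, c, b = dot(d2, r), dot(d1, r), dot(d1, d2)
    if a <= 1e-12 and e <= 1e-12:          # both segments are points
        s = t = 0.0
    elif a <= 1e-12:                        # segment 1 is a point
        s, t = 0.0, clamp(f / e)
    elif e <= 1e-12:                        # segment 2 is a point
        t, s = 0.0, clamp(-c / a)
    else:
        denom = a * e - b * b
        s = clamp((b * f - c * e) / denom) if denom > 1e-12 else 0.0
        t = (b * s + f) / e
        if t < 0.0:
            t, s = 0.0, clamp(-c / a)      # reclamp s against endpoint
        elif t > 1.0:
            t, s = 1.0, clamp((b - c) / a)
    c1 = [p + s * d for p, d in zip(p1, d1)]
    c2 = [p + t * d for p, d in zip(p2, d2)]
    return sum((u - v) ** 2 for u, v in zip(c1, c2)) ** 0.5

def capsule_distance(p1, q1, r1, p2, q2, r2):
    """Separation of two capsules; negative means penetration."""
    return seg_seg_distance(p1, q1, p2, q2) - r1 - r2
```

A negative return value signals overlap, which is exactly the quantity an objective function penalizing human proximity needs.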
Abstract: Natural human–robot interaction requires social robots to have human-like perception of engagement intention. In multi-person interaction scenarios, it is a vital task for social robots to rationally select the main interaction partner. Existing studies mostly focus on analyzing whether a person has engagement intention before interaction. However, this qualitative analysis of engagement intention is only applicable in single-person interaction scenarios. When multiple people intend to engage with the robot, the robot needs to quantitatively analyze the engagement intention intensity (EII) of all people to make a reasonable interaction decision. In addition, for EII recognition, it is desirable that social robots imitate human social thinking as much as possible. For these purposes, a method that can efficiently recognize EII by fusing transient features and temporal features is proposed. First, a 3D pose extractor is used to extract 3D skeleton information, from which the transient features, including linear distance and body orientation, are calculated. Second, an improved ConvLSTM network is proposed to effectively identify pedestrian motion states, which reflect temporal information. Finally, based on the proposed two-states-fusion fuzzy inference system (TSFFIS), the EII is judged from three features: linear distance, body orientation, and motion state. Comparative experiments show that our method can effectively identify the EII of different pedestrians relative to the robot. Compared with existing methods, the EII recognition method based on TSFFIS has better performance. PubDate: 2023-05-26 DOI: 10.1007/s11370-023-00464-8
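The fuzzy-inference style of EII scoring can be illustrated with a toy two-input rule base (the real TSFFIS also fuses the motion state, and its membership functions and rules are unknown here; everything below is a speculative sketch):

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a to its peak at b
    and falling back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def engagement_intensity(distance_m, facing_deg):
    """Toy Mamdani-style inference: being close AND facing the robot
    raises the engagement score; rule strength by min, aggregation by
    max, defuzzified as a centroid of two output singletons (1 and 0)."""
    near = tri(distance_m, -1.0, 0.5, 2.5)
    facing = tri(facing_deg, -60.0, 0.0, 60.0)
    high = min(near, facing)
    low = max(1.0 - near, 1.0 - facing)
    return high / (high + low) if (high + low) else 0.0
```

The continuous score (rather than a yes/no decision) is what lets the robot rank several candidate interaction partners, which is the quantitative EII idea the abstract argues for.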
Abstract: In the human-following task, human detection, tracking, and identification are fundamental steps that help a mobile robot follow, and maintain an appropriate distance and orientation to, the selected target person (STP) without posing any threat. Recently, along with the widespread deployment of robots in general and service robots in particular, not only safety but also flexibility, naturalness, and sociality are increasingly demanded in human-friendly services and collaborative tasks. This demand poses additional challenges in robustly detecting, tracking, and identifying the STP, since human–robot cooperation is becoming more complex and unpredictable. Obviously, safe and natural robot behavior cannot be ensured if the STP is lost or the robot misidentifies its target. In this paper, a hierarchical approach is presented to update the states of the STP more robustly during the human-following task. This method aims to achieve good performance (robust, accurate, and fast response) in support of safe and natural robot behaviors, with modest hardware. The proposed system is verified by a set of experiments and shows reasonable results. PubDate: 2023-05-23 DOI: 10.1007/s11370-023-00463-9
Abstract: This work presents a computationally lightweight motion planner for over-actuated platforms. For this purpose, a general state-space model for mobile platforms with several kinematic chains is defined, which considers dynamics, nonlinearities, and constraints. The proposed motion planner is based on a sequential multi-stage approach that takes advantage of a warm start at each step. First, a globally optimal and smooth 2D/3D trajectory is generated using the Fast Marching Method. This trajectory is fed as a warm start to a sequential linear quadratic regulator, which generates an optimal motion plan for all the platform actuators without considering constraints. Finally, a feasible motion plan is generated considering the constraints defined in the model. In this respect, the sequential linear quadratic regulator is employed again, taking the previously generated unconstrained motion plan as a warm start. The motion planner has been deployed on the ExoMars Testing Rover of the European Space Agency. This rover is an Ackermann-capable planetary exploration testbed equipped with a robotic arm. Several experiments were carried out, demonstrating that the proposed approach speeds up computation and increases the success ratio for a Martian sample retrieval mission, which can be considered a representative use case of goal-constrained trajectory generation for an over-actuated mobile platform. PubDate: 2023-04-25 DOI: 10.1007/s11370-023-00461-x
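The first planning stage produces a globally optimal path over a cost field. The Fast Marching Method solves the continuous eikonal equation; as a rough discrete stand-in for the same idea, a Dijkstra search over a 4-connected cost grid (names and grid layout are illustrative) shows how such a warm-start path is obtained:

```python
import heapq

def grid_shortest_path(cost, start, goal):
    """Dijkstra over a 4-connected grid of per-cell traversal costs.
    Cells with infinite cost are obstacles. Returns the node path
    from start to goal (inclusive)."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and cost[nr][nc] < float("inf"):
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Walk the predecessor chain back from the goal.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

The resulting geometric path is what the subsequent sequential LQR stages would refine into a dynamically feasible, constraint-satisfying actuator plan.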