Abstract: Trajectory planning for multiple robots in shared environments is a challenging problem, especially when communication is limited or no central entity exists. In this article, we present Real-time planning using Linear Spatial Separations, or RLSS: a real-time decentralized trajectory planning algorithm for cooperative multi-robot teams in static environments. The algorithm requires relatively few robot capabilities, namely sensing the positions of robots and obstacles without higher-order derivatives and the ability to distinguish robots from obstacles. There is no communication requirement, and the robots' dynamic limits are taken into account. RLSS generates and solves kinematically feasible convex quadratic optimization problems and guarantees collision avoidance whenever the resulting problems are feasible. We demonstrate the algorithm's real-time performance in simulations and on physical robots. We compare RLSS to two state-of-the-art planners and show empirically that RLSS avoids deadlocks and collisions in forest-like and maze-like environments, significantly improving on prior work, which results in collisions and deadlocks in such environments. PubDate: 2023-05-30
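As a generic sketch of the convex-QP structure the abstract above describes (not the paper's exact formulation, whose cost and constraint set may differ), each replanning step solves something like
\[ \min_{p} \int_0^T \Vert \ddot{f}_p(t) \Vert^2 \, dt \quad \text{s.t.} \quad a_j^\top f_p(t) \le b_j \;\; \forall t \in [0,T],\ \forall j, \qquad \Vert \dot{f}_p(t) \Vert \le v_{\max}, \]
where \(f_p\) is the robot's planned trajectory with decision parameters \(p\) and each hyperplane \((a_j, b_j)\) is a linear spatial separation between the robot and an obstacle or a neighbouring robot.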
Abstract: Trajectory generation for biped robots is very complex due to the challenge posed by real-world uneven terrain. To address this complexity, this paper proposes a data-driven Gait model that can handle continuously changing conditions. Data-driven approaches are used to capture the relationships between joints; deep learning methods are therefore employed to develop seven different data-driven models, namely DNN, LSTM, GRU, BiLSTM, BiGRU, LSTM+GRU, and BiLSTM+BiGRU. The dataset used for training the Gait model consists of walking data from 10 able-bodied subjects on continuously changing inclines and speeds. The objective function incorporates the standard error from the inter-subject mean trajectory so that the Gait model does not closely track high-variance points in the gait cycle, which helps produce a smooth and continuous gait cycle. The results show that the proposed Gait models outperform the traditional finite state machine (FSM) and Basis models in terms of mean and maximum error summary statistics. In particular, the LSTM+GRU-based Gait model provides the best performance among the data-driven models. PubDate: 2023-05-27
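One plausible reading of the variance-aware objective described above (the paper's exact weighting may differ) is a loss that down-weights high-variance phases of the gait cycle:
\[ \mathcal{L} = \frac{1}{T} \sum_{t=1}^{T} \frac{\left( \hat{q}_t - \bar{q}_t \right)^2}{\sigma_t^2}, \]
where \(\hat{q}_t\) is the predicted joint angle, and \(\bar{q}_t\) and \(\sigma_t\) are the inter-subject mean and standard error at gait phase \(t\).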
Abstract: Monitoring and controlling a large-scale spatiotemporal process can be costly and dangerous for human operators, who can delegate the task to mobile robots for improved efficiency at a lower cost. The complex evolution of the spatiotemporal process and the limited onboard resources of the robots motivate a holistic design of the robots' actions to complete the tasks efficiently. This paper describes a cooperative framework for estimating and controlling a spatiotemporal process using a team of mobile robots with limited onboard resources. We model the spatiotemporal process as a 2D diffusion equation that characterizes the intrinsic dynamics of the process with a partial differential equation (PDE). Measurement and actuation of the diffusion process are performed by mobile robots carrying sensors and actuators. The core of the framework is a nonlinear optimization problem that simultaneously seeks the actuation and guidance of the robots to control the spatiotemporal process subject to the PDE dynamics. The limited onboard resources are formulated as inequality constraints on the actuation and speed of the robots. Extensive numerical studies analyze and evaluate the proposed framework using nondimensionalization and compare the optimal strategy to baseline strategies. The framework is demonstrated on an outdoor multi-quadrotor testbed using hardware-in-the-loop simulations. PubDate: 2023-05-24
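A 2D diffusion equation with mobile point actuators, of the kind referenced above, can be written (in illustrative form; the paper's boundary conditions and actuation model may differ) as
\[ \frac{\partial u}{\partial t} = D \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right) + \sum_{i=1}^{N} v_i(t)\, \delta\!\left( \mathbf{x} - \mathbf{p}_i(t) \right), \]
where \(u(\mathbf{x}, t)\) is the process state, \(D\) the diffusion coefficient, and robot \(i\) applies actuation \(v_i(t)\) at its position \(\mathbf{p}_i(t)\).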
Abstract: Discharge of mine tailings significantly impacts the ecological status of the sea. Methods to efficiently monitor the extent of dispersion are essential to protect sensitive areas. By combining underwater robotic sampling with ocean models, we can choose informative sampling sites and adaptively change the robot's path based on in situ measurements to optimally map the tailings distribution near a seafill. This paper develops a stochastic spatio-temporal proxy model of dispersal dynamics using training data from complex numerical models. The proxy model consists of a spatio-temporal Gaussian process model based on an advection–diffusion stochastic partial differential equation. Informative sampling sites are chosen based on predictions from the proxy model, using an objective function that favors areas with high uncertainty and high expected tailings concentrations. A simulation study and data from real-life experiments are presented. PubDate: 2023-04-27
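An advection–diffusion SPDE underlying such a spatio-temporal Gaussian process typically takes a form like (illustrative; the paper's coefficients and noise model may differ)
\[ \frac{\partial u}{\partial t} + \mathbf{w} \cdot \nabla u = D\, \nabla^2 u - \kappa\, u + \epsilon(\mathbf{x}, t), \]
with tailings concentration \(u\), advecting current \(\mathbf{w}\), diffusion coefficient \(D\), damping \(\kappa\), and stochastic forcing \(\epsilon\).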
Abstract: We present a biologically inspired design for swarm foraging based on ants' pheromone deployment, where the swarm is assumed to have very restricted capabilities. The robots do not require global or relative position measurements, and the swarm is fully decentralized and needs no infrastructure in place. Additionally, the system requires only one-hop communication over the robot network; we make no assumptions about the connectivity of the communication graph, and both the transmission of information and the computation scale with the number of agents. This is done by letting the agents in the swarm act as foragers or as guiding agents (beacons). We present simulation results for a swarm of Elisa-3 robots, showing how the swarm self-organizes to solve a foraging problem over an unknown environment and converges to trajectories around the shortest path, and we test the approach on a real swarm of Elisa-3 robots. Finally, we discuss the limitations of such a system and propose how the foraging efficiency can be increased. PubDate: 2023-04-27
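Beacon-based pheromone schemes of this kind commonly use the classical evaporation-and-deposit update (shown here as a generic sketch, not the paper's exact rule):
\[ \tau_b \leftarrow (1 - \rho)\, \tau_b + \sum_{k} \Delta\tau_k, \]
where \(\tau_b\) is the pheromone value stored at beacon \(b\), \(\rho\) the evaporation rate, and \(\Delta\tau_k\) the deposit left by forager \(k\) passing within communication range.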
Abstract: This paper proposes a hierarchical path planning algorithm that first captures the local crowd movement around the robot using an RGB camera combined with LiDAR and predicts the movement of people near the robot, and then generates an appropriate global path for the robot using a global path planner informed by the crowd information. After the global path is decided, the low-level control system receives the crowd predictions and the high-level global path, and generates the actual speed control commands for the robot while taking social norms into account. With the high accuracy of computer vision for human recognition and the high precision of LiDAR, the system is able to accurately track surrounding human locations. Through high-level path planning, the robot can use different movement strategies in different scenarios, while the crowd prediction allows the robot to generate more efficient and socially acceptable paths. With this system, even in a highly dynamic environment caused by the crowd, the robot can still plan an appropriate path and successfully reach its destination without causing psychological discomfort to others. PubDate: 2023-04-25
Abstract: This paper presents a framework to enable a team of heterogeneous mobile robots to model and sense a multiscale system. We propose a coupled strategy, where robots of one type collect high-fidelity measurements at a slow time scale and robots of another type collect low-fidelity measurements at a fast time scale, for the purpose of fusing measurements together. The multiscale measurements are fused to create a model of a complex, nonlinear spatiotemporal process. The model helps determine optimal sensing locations and predict the evolution of the process. Key contributions are: (i) consolidation of multiple types of data into one cohesive model, (ii) fast determination of optimal sensing locations for mobile robots, and (iii) adaptation of models online for various monitoring scenarios. We illustrate the proposed framework by modeling and predicting the evolution of an artificial plasma cloud. We test our approach using physical marine robots adaptively sampling a process in a water tank. PubDate: 2023-04-19
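A common way to fuse the two fidelities described above (one option only; the paper's fusion scheme may differ) is an auto-regressive multi-fidelity model,
\[ f_{\text{hi}}(\mathbf{x}, t) = \rho\, f_{\text{lo}}(\mathbf{x}, t) + \delta(\mathbf{x}, t), \]
where the high-fidelity process is a scaled low-fidelity process plus a learned discrepancy term \(\delta\).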
Abstract: Teleoperating robotic manipulators can be complicated and cognitively demanding for the human operator. Despite these difficulties, teleoperated robotic systems are still popular in several industrial applications, e.g., remote handling of hazardous material. In this context, we present a novel haptic shared control method for minimising manipulator torque effort during remote manipulation, in which an operator is assisted in selecting a suitable grasping pose and then displacing an object along a desired trajectory. Minimising torque is important because it reduces the system's operating cost and extends the range of objects that can be manipulated. We demonstrate the effectiveness of the proposed approach in a series of representative real-world pick-and-place experiments as well as in a human subjects study. The reported results show the effectiveness of our shared control compared with a standard teleoperation approach. We also find that haptic-only guidance performs better than visual-only guidance, although combining the two leads to the best overall results. PubDate: 2023-04-12
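Haptic shared control of this kind is often written as a blend of operator input and assistive guidance (a generic sketch; the paper's arbitration law may differ):
\[ \mathbf{u} = \alpha\, \mathbf{u}_{\text{op}} + (1 - \alpha)\, \mathbf{u}_{\text{guid}}, \qquad \alpha \in [0, 1], \]
where \(\mathbf{u}_{\text{guid}}\) steers towards the torque-minimising grasp pose and trajectory, and the haptic channel renders the guidance as forces on the operator's input device.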
Abstract: Robotic cutting of soft materials is critical for applications such as food processing, household automation, and surgical manipulation. As in other areas of robotics, simulators can facilitate controller verification, policy learning, and dataset generation. Moreover, differentiable simulators can enable gradient-based optimization, which is invaluable for calibrating simulation parameters and optimizing controllers. In this work, we present DiSECt: the first differentiable simulator for cutting soft materials. The simulator augments the finite element method with a continuous contact model based on signed distance fields, as well as a continuous damage model that inserts springs on opposite sides of the cutting plane and allows them to weaken to zero stiffness, enabling crack formation. Through various experiments, we evaluate the performance of the simulator. We first show that the simulator can be calibrated to match resultant forces and deformation fields from a state-of-the-art commercial solver and real-world cutting datasets, with generality across cutting velocities and object instances. We then show that Bayesian inference can be performed efficiently by leveraging the differentiability of the simulator, estimating posteriors over hundreds of parameters in a fraction of the time of derivative-free methods. Next, we illustrate that control parameters in the simulation can be optimized to minimize cutting forces via lateral slicing motions. Finally, we conduct experiments on a real robot arm equipped with a slicing knife to infer simulation parameters from force measurements. By optimizing the slicing motion of the knife, we show in fruit-cutting scenarios that the average knife force can be reduced by more than \(40\%\) compared to a vertical cutting motion. We publish code and additional materials on our project website at https://diff-cutting-sim.github.io. PubDate: 2023-04-12
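The continuous damage model described above can be sketched as springs whose stiffness decays with accumulated damage (illustrative form; the simulator's exact damage law may differ):
\[ k_s(t) = k_0 \left( 1 - d_s(t) \right), \qquad d_s \in [0, 1], \]
where \(d_s\) grows with the contact load imparted by the knife until \(k_s\) reaches zero and the spring is removed, allowing a crack to form along the cutting plane.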
Abstract: We develop a conditional generative model to represent dexterous grasp postures of a robotic hand and use it to generate in-hand regrasp trajectories. Our model learns to encode robotic grasp postures into a low-dimensional space, called Synergy Space, while taking into account additional information about the object, such as its size and shape category. We then generate regrasp trajectories through linear interpolation in this low-dimensional space. The result is that the hand configuration moves from one grasp type to another while keeping the object stable in the hand. We show that our model achieves a higher success rate on in-hand regrasping than previous methods used for synergy extraction, by taking advantage of the grasp size conditional variable. PubDate: 2023-04-11
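The regrasp generation described above amounts to decoding linearly interpolated latent codes; in our notation (the decoder symbol \(g_\theta\) is ours, not the paper's),
\[ \mathbf{z}(s) = (1 - s)\, \mathbf{z}_A + s\, \mathbf{z}_B, \qquad \mathbf{q}(s) = g_\theta\!\left( \mathbf{z}(s), c \right), \quad s \in [0, 1], \]
where \(\mathbf{z}_A, \mathbf{z}_B\) are the Synergy Space encodings of the start and goal grasps, \(c\) the object conditioning variables (size, shape category), and \(\mathbf{q}(s)\) the decoded hand configuration along the regrasp trajectory.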
Abstract: Robots operating in everyday environments need to effectively perceive, model, and infer semantic properties of objects. Existing knowledge reasoning frameworks only model binary relations between an object's class label and its semantic properties, and are thus unable to collectively reason about object properties detected by different perception algorithms and grounded in diverse sensory modalities. We bridge the gap between multimodal perception and knowledge reasoning by introducing an n-ary representation that models complex, inter-related object properties. To tackle the problem of collecting n-ary semantic knowledge at scale, we propose transformer neural networks that generalize knowledge from observations of object instances by learning to predict single missing properties or the joint probabilities of all properties. The learned models can reason at different levels of abstraction, effectively predicting unknown properties of objects in different environmental contexts given different amounts of observed information. We quantitatively validate our approach against prior methods on LINK, a unique dataset we contribute that contains 1457 object instances in different situations, amounting to 15 multimodal property types and 200 total properties. Compared to the top-performing baseline, a Markov Logic Network, our models obtain a 10% improvement in predicting unknown properties of novel object instances while reducing training and inference time by a factor of more than 150. Additionally, we apply our work to a mobile manipulation robot, demonstrating its ability to leverage n-ary reasoning to retrieve objects and actively detect object properties. The code and data are available at https://github.com/wliu88/LINK. PubDate: 2023-04-06
Abstract: Dynamic environments challenge existing robot navigation methods, motivating either stringent assumptions on workspace variation or the relinquishing of collision avoidance and convergence guarantees. This paper shows that the latter can be preserved even without knowledge of how the environment evolves, through a navigation function methodology applicable to sphere-worlds with moving obstacles and robot destinations. Assuming bounds on the speeds of the robot destination and obstacles, and a sufficiently higher maximum robot speed, the navigation function gradient can be used to produce robot feedback laws that guarantee obstacle avoidance, with theoretical guarantees of bounded tracking errors and asymptotic convergence to the target once it eventually stops moving. The efficacy of the gradient-based feedback controller derived from the new navigation function construction is demonstrated both in numerical simulations and experimentally. PubDate: 2023-04-01
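For reference, the classical sphere-world navigation function on which such constructions build (the paper extends this to the time-varying case) is
\[ \varphi(\mathbf{q}) = \frac{ \Vert \mathbf{q} - \mathbf{q}_d \Vert^2 }{ \left( \Vert \mathbf{q} - \mathbf{q}_d \Vert^{2\kappa} + \beta(\mathbf{q}) \right)^{1/\kappa} }, \qquad \dot{\mathbf{q}} = -k\, \nabla \varphi(\mathbf{q}), \]
where \(\mathbf{q}_d\) is the destination, \(\beta\) the product of obstacle clearance functions, and \(\kappa\) a tuning parameter; with moving obstacles and a moving destination, \(\mathbf{q}_d\) and \(\beta\) become time-varying.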
Abstract: Collision avoidance is one of the most important topics in robotics. The goal is to move the robots from initial locations to target locations such that they follow the shortest non-colliding paths in the shortest time and with the least amount of energy. Robot navigation among pedestrians is an example application of this problem and is the focus of this paper. This paper presents a distributed, real-time algorithm for solving collision avoidance problems in dense and complex 2D and 3D environments. The algorithm uses angular calculations to select the optimal direction for the movement of each robot, and we show that these separate calculations lead to a form of cooperative behavior among agents. We evaluated the proposed approach in various simulation and experimental scenarios and compared the results with ORCA, one of the most important algorithms in this field. The results show that the proposed approach is at least 25% faster than ORCA while also being more reliable. The proposed method is shown to enable fully autonomous navigation of a swarm of Crazyflies. PubDate: 2023-04-01
Abstract: Robots frequently need to perceive object attributes, such as red, heavy, and empty, using multimodal exploratory behaviors, such as look, lift, and shake. One possible way for robots to do so is to learn a classifier for each perceivable attribute given an exploratory behavior. Once the attribute classifiers are learned, they can be used by robots to select actions and identify attributes of new objects, answering questions such as "Is this object red and empty?" In this article, we introduce a robot interactive perception problem, called Multimodal Embodied Attribute Learning (MEAL), and explore solutions to this new problem. Under different assumptions, there are two classes of MEAL problems. Offline-MEAL problems are defined in this article as learning attribute classifiers from pre-collected data and sequencing actions towards attribute identification under the challenging trade-off between information gains and exploration action costs. For this purpose, we introduce Mixed Observability Robot Control (MORC), an algorithm for offline-MEAL problems that dynamically constructs both fully and partially observable components of the state for multimodal attribute identification of objects. We further investigate a more challenging class of MEAL problems, called online-MEAL, where the robot assumes no pre-collected data and works on both attribute classification and attribute identification at the same time. Based on MORC, we develop an algorithm called Information-Theoretic Reward Shaping (MORC-ITRS) that actively addresses the trade-off between exploration and exploitation in online-MEAL problems. MORC and MORC-ITRS are evaluated in comparison with competitive MEAL baselines, and the results demonstrate the superiority of our methods in learning efficiency and identification accuracy. PubDate: 2023-03-29
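Information-theoretic reward shaping of the kind named above commonly augments the task reward with the reduction in belief entropy over the attributes (a generic sketch; MORC-ITRS's exact shaping term may differ):
\[ r'(s, a) = r(s, a) + \lambda \left[ H(b) - H(b') \right], \]
where \(b\) and \(b'\) are the attribute beliefs before and after executing exploratory action \(a\), and \(\lambda\) trades off exploration against action cost.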
Abstract: Today, even the most compute- and power-constrained robots can measure complex, high data-rate video and LIDAR sensory streams. Often, such robots, ranging from low-power drones to space and subterranean rovers, need to transmit high-bitrate sensory data to a remote compute server if they are uncertain or cannot scalably run complex perception or mapping tasks locally. However, today's representations for sensory data are mostly designed for human, not robotic, perception and thus often waste precious compute or wireless network resources to transmit unimportant parts of a scene that are unnecessary for a high-level robotic task. This paper presents an algorithm to learn task-relevant representations of sensory data that are co-designed with a pre-trained robotic perception model's ultimate objective. Our algorithm aggressively compresses robotic sensory data by up to 11\(\times\) more than competing methods. Further, it achieves high accuracy and robust generalization on diverse tasks including Mars terrain classification with low-power deep learning accelerators, neural motion planning, and environmental timeseries classification. PubDate: 2023-03-20 DOI: 10.1007/s10514-023-10093-w
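Task-relevant compression of this kind can be summarised as a rate–distortion trade-off taken against the frozen perception model's loss rather than pixel fidelity (a sketch of the objective class, not the paper's exact formulation):
\[ \min_{\phi}\; \mathbb{E}\left[ \mathcal{L}_{\text{task}}\!\left( f\!\left( \hat{x} \right), y \right) \right] + \lambda\, R(\hat{x}), \qquad \hat{x} = \mathrm{dec}\!\left( \mathrm{enc}_{\phi}(x) \right), \]
where \(f\) is the pre-trained perception model, \(\mathcal{L}_{\text{task}}\) its task loss, and \(R\) the transmitted bitrate.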
Abstract: The current state of electronic component miniaturization, coupled with increasing efficiency in hardware and software, allows the development of smaller and more compact robotic systems. The convenience of using these small, simple, yet capable robots has drawn the research community's attention to practical applications of swarm robotics. This paper presents the design of a novel platform for swarm robotics applications that is low cost, easy to assemble using off-the-shelf components, and deeply integrated with the most widely used robotic framework available today: ROS (Robot Operating System). The robotic platform is entirely open, composed of a 3D printed body and open-source software. We describe its architecture, present its main features, and evaluate its functionality through experiments using a pair of robots. Results demonstrate that the proposed mobile robot is capable of performing different swarm tasks and that, given its small size and reduced cost, it is suitable for swarm robotics research and education. PubDate: 2023-03-20 DOI: 10.1007/s10514-023-10100-0
Abstract: The object perception capabilities of humans are impressive, and this becomes even more evident when trying to develop solutions with a similar proficiency in autonomous robots. While there have been notable advancements in the technologies for artificial vision and touch, the effective integration of these two sensory modalities in robotic applications still needs to be improved, and several open challenges exist. Taking inspiration from how humans combine visual and haptic perception to perceive object properties and drive the execution of manual tasks, this article summarises the current state of the art of visuo-haptic object perception in robots. Firstly, the biological basis of human multimodal object perception is outlined. Then, the latest advances in sensing technologies and data collection strategies for robots are discussed. Next, an overview of the main computational techniques is presented, highlighting the main challenges of multimodal machine learning and presenting a few representative articles in the areas of robotic object recognition, peripersonal space representation and manipulation. Finally, informed by the latest advancements and open challenges, this article outlines promising new research directions. PubDate: 2023-03-14 DOI: 10.1007/s10514-023-10091-y
Abstract: In this paper, we present a method for localising a ground lidar using only overhead imagery. Public overhead imagery, such as Google satellite images, is a readily available resource that can be used as a map proxy for robot localisation, relaxing the requirement for a prior mapping traversal as in traditional approaches. While prior approaches have focused on metric localisation between range sensors and overhead imagery, our method is the first to learn both place recognition and metric localisation of a ground lidar using overhead imagery, and it also outperforms prior methods on metric localisation with large initial pose offsets. To bridge the drastic domain gap between lidar data and overhead imagery, our method learns to transform an overhead image into a collection of 2D points, emulating the point cloud scanned by a lidar sensor situated near the centre of the overhead image. After both modalities are expressed as point sets, point-based machine learning methods for localisation are applied. PubDate: 2023-03-02 DOI: 10.1007/s10514-023-10085-w
Abstract: Maintaining stability while walking on arbitrary surfaces or dealing with external perturbations is of great interest in humanoid robotics research. Increasing the system's autonomous robustness to a variety of postural threats during locomotion is key, despite the need to evaluate noisy sensor signals. The equations of motion are the foundation of all published approaches. In contrast, we propose a more adequate evaluation of the equations of motion with respect to an arbitrarily moving reference point in a non-inertial reference frame. Conceptual advantages include independence from the global position and velocity vectors estimated by sensor fusion, and the ability to calculate the imaginary zero-moment point while walking on differently inclined ground surfaces. Further, we improve the calculation results by reducing noise-amplifying methods in our algorithm and by using specific characteristics of physical robots. We use simulation results to compare our algorithm with established approaches and test it with experimental robot data. PubDate: 2023-02-28 DOI: 10.1007/s10514-023-10092-x
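For reference, the classical zero-moment point in an inertial frame, which the work above re-derives about an arbitrarily moving reference point, reads
\[ x_{\text{zmp}} = \frac{ \sum_i m_i \left( \ddot{z}_i + g \right) x_i - \sum_i m_i\, \ddot{x}_i\, z_i }{ \sum_i m_i \left( \ddot{z}_i + g \right) }, \]
summing over the robot's body segments \(i\) with masses \(m_i\) at positions \((x_i, z_i)\).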