cs.RO (Robotics), 16 papers in total
【1】 Assistive Tele-op: Leveraging Transformers to Collect Robotic Task Demonstrations
Link: https://arxiv.org/abs/2112.05129
Authors: Henry M. Clever, Ankur Handa, Hammad Mazhar, Kevin Parker, Omer Shapira, Qian Wan, Yashraj Narang, Iretiayo Akinola, Maya Cakmak, Dieter Fox
Affiliations: NVIDIA, USA; Georgia Institute of Technology, Atlanta, GA, USA
Notes: 9 pages, 4 figures, 1 table. NeurIPS 2021 Workshop on Robot Learning: Self-Supervised and Lifelong Learning, Virtual
【2】 Neural Descriptor Fields: SE(3)-Equivariant Object Representations for Manipulation
Link: https://arxiv.org/abs/2112.05124
Authors: Anthony Simeonov, Yilun Du, Andrea Tagliasacchi, Joshua B. Tenenbaum, Alberto Rodriguez, Pulkit Agrawal, Vincent Sitzmann
Affiliations: Massachusetts Institute of Technology, Google Research, University of Toronto
Notes: Website: this https URL. First two authors contributed equally (order determined by coin flip); last two authors equal advising
Abstract: We present Neural Descriptor Fields (NDFs), an object representation that encodes both points and relative poses between an object and a target (such as a robot gripper or a rack used for hanging) via category-level descriptors. We employ this representation for object manipulation, where given a task demonstration, we want to repeat the same task on a new object instance from the same category. We propose to achieve this objective by searching (via optimization) for the pose whose descriptor matches that observed in the demonstration. NDFs are conveniently trained in a self-supervised fashion via a 3D auto-encoding task that does not rely on expert-labeled keypoints. Further, NDFs are SE(3)-equivariant, guaranteeing performance that generalizes across all possible 3D object translations and rotations. We demonstrate learning of manipulation tasks from few (5-10) demonstrations both in simulation and on a real robot. Our performance generalizes across both object instances and 6-DoF object poses, and significantly outperforms a recent baseline that relies on 2D descriptors. Project website: https://yilundu.github.io/ndf/.
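To make the pose-search step above concrete, here is a minimal, hedged sketch of descriptor-matching pose optimization in the spirit of NDFs. The `ndf` network, the query points, and the demonstration descriptors are random placeholders (not the authors' trained model or released code), and the pose parametrization via the matrix exponential of a skew-symmetric matrix is one simple choice among several.

```python
# Hedged sketch: optimize a gripper pose so that descriptors of query points match
# the descriptors recorded at the demonstrated pose. All networks/data are toy stand-ins.
import torch

torch.manual_seed(0)
ndf = torch.nn.Sequential(                      # placeholder for a trained NDF
    torch.nn.Linear(3, 64), torch.nn.ReLU(), torch.nn.Linear(64, 32))

# Query points rigidly attached to the gripper, and the target descriptors recorded
# for them at the demonstrated pose (random numbers here, purely for illustration).
query_pts = torch.randn(8, 3)
demo_desc = torch.randn(8, 32)

# Pose variables: translation t and rotation R = exp(skew(w)).
t = torch.zeros(3, requires_grad=True)
w = torch.zeros(3, requires_grad=True)

def skew(v):
    zero = torch.zeros((), dtype=v.dtype)
    return torch.stack([torch.stack([zero, -v[2], v[1]]),
                        torch.stack([v[2], zero, -v[0]]),
                        torch.stack([-v[1], v[0], zero])])

opt = torch.optim.Adam([t, w], lr=1e-2)
for step in range(200):
    R = torch.matrix_exp(skew(w))               # rotation from the so(3) vector w
    moved = query_pts @ R.T + t                 # query points at the candidate pose
    loss = (ndf(moved) - demo_desc).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print("final descriptor-matching loss:", loss.item())
```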
【3】 Generating Useful Accident-Prone Driving Scenarios via a Learned Traffic Prior
Link: https://arxiv.org/abs/2112.05077
Authors: Davis Rempe, Jonah Philion, Leonidas J. Guibas, Sanja Fidler, Or Litany
Affiliations: Stanford University, NVIDIA, University of Toronto, Vector Institute
Project page: nv-tlabs.github.io/STRIVE
Abstract: Evaluating and improving planning for autonomous vehicles requires scalable generation of long-tail traffic scenarios. To be useful, these scenarios must be realistic and challenging, but not impossible to drive through safely. In this work, we introduce STRIVE, a method to automatically generate challenging scenarios that cause a given planner to produce undesirable behavior, like collisions. To maintain scenario plausibility, the key idea is to leverage a learned model of traffic motion in the form of a graph-based conditional VAE. Scenario generation is formulated as an optimization in the latent space of this traffic model, effected by perturbing an initial real-world scene to produce trajectories that collide with a given planner. A subsequent optimization is used to find a "solution" to the scenario, ensuring it is useful to improve the given planner. Further analysis clusters generated scenarios based on collision type. We attack two planners and show that STRIVE successfully generates realistic, challenging scenarios in both cases. We additionally "close the loop" and use these scenarios to optimize hyperparameters of a rule-based planner.
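As a rough illustration of the latent-space attack described above, the sketch below perturbs a scene latent so that a decoded agent trajectory approaches a fixed ego plan, while a prior term keeps the latent near its initial value. The `decoder` and `planner_trajectory` functions are toy placeholders standing in for the paper's graph-based conditional VAE and the planner under test; the loss terms and dimensions are simplified assumptions.

```python
# Hedged sketch of latent-space scenario perturbation in the spirit of STRIVE.
import torch

torch.manual_seed(0)
decoder = torch.nn.Sequential(torch.nn.Linear(16, 64), torch.nn.Tanh(),
                              torch.nn.Linear(64, 2 * 20))  # 20 (x, y) waypoints

def planner_trajectory():
    # Placeholder ego plan: a straight line along x (the real planner is a black box).
    xs = torch.linspace(0.0, 10.0, 20)
    return torch.stack([xs, torch.zeros(20)], dim=-1)

z0 = torch.randn(16)                     # latent encoding of the initial real scene
z = z0.clone().requires_grad_(True)
ego = planner_trajectory()

opt = torch.optim.Adam([z], lr=5e-2)
for step in range(300):
    adv = decoder(z).view(20, 2)             # decoded adversarial agent trajectory
    dist = (adv - ego).norm(dim=-1)
    collision_loss = dist.min()              # push the closest approach toward zero
    prior_loss = (z - z0).pow(2).mean()      # stay plausible / near the prior
    loss = collision_loss + 0.1 * prior_loss
    opt.zero_grad(); loss.backward(); opt.step()
print("closest approach after optimization:", dist.min().item())
```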
【4】 A Bayesian Treatment of Real-to-Sim for Deformable Object Manipulation
Link: https://arxiv.org/abs/2112.05068
Authors: Rika Antonova, Jingyun Yang, Priya Sundaresan, Dieter Fox, Fabio Ramos, Jeannette Bohg
Affiliations: The University of Sydney
Abstract: Deformable object manipulation remains a challenging task in robotics research. Conventional techniques for parameter inference and state estimation typically rely on a precise definition of the state space and its dynamics. While this is appropriate for rigid objects and robot states, it is challenging to define the state space of a deformable object and how it evolves in time. In this work, we pose the problem of inferring physical parameters of deformable objects as a probabilistic inference task defined with a simulator. We propose a novel methodology for extracting state information from image sequences via a technique to represent the state of a deformable object as a distribution embedding. This allows to incorporate noisy state observations directly into modern Bayesian simulation-based inference tools in a principled manner. Our experiments confirm that we can estimate posterior distributions of physical properties, such as elasticity, friction and scale of highly deformable objects, such as cloth and ropes. Overall, our method addresses the real-to-sim problem probabilistically and helps to better represent the evolution of the state of deformable objects.
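The sketch below illustrates the general idea of simulation-based inference with a distribution-embedding discrepancy, but substitutes a simple rejection-ABC loop with an MMD (kernel mean embedding) distance for the modern SBI machinery the paper builds on. The toy `simulate` function, the elasticity parameter range, and the acceptance threshold are purely illustrative assumptions.

```python
# Hedged sketch: rejection-style simulation-based inference over one physical parameter,
# comparing observed and simulated samples through a kernel mean-embedding (MMD) distance.
import numpy as np

rng = np.random.default_rng(0)

def simulate(elasticity, n=100):
    # Toy simulator: "state features" whose spread depends on elasticity.
    return rng.normal(0.0, elasticity, size=(n, 1))

def mean_embedding_distance(a, b, gamma=1.0):
    # Biased MMD^2 estimate between sample sets a and b under an RBF kernel.
    def k(x, y):
        d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(a, a).mean() + k(b, b).mean() - 2.0 * k(a, b).mean()

observed = simulate(elasticity=0.7)                 # pretend these came from images
prior_samples = rng.uniform(0.1, 2.0, size=500)     # prior over elasticity
distances = np.array([mean_embedding_distance(simulate(e), observed)
                      for e in prior_samples])
posterior = prior_samples[distances < np.quantile(distances, 0.05)]
print("approximate posterior mean elasticity:", posterior.mean())
```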
【5】 Learning Transferable Motor Skills with Hierarchical Latent Mixture Policies
Link: https://arxiv.org/abs/2112.05062
Authors: Dushyant Rao, Fereshteh Sadeghi, Leonard Hasenclever, Markus Wulfmeier, Martina Zambelli, Giulia Vezzani, Dhruva Tirumala, Yusuf Aytar, Josh Merel, Nicolas Heess, Raia Hadsell
Affiliations: DeepMind, London, UK
Abstract: For robots operating in the real world, it is desirable to learn reusable behaviours that can effectively be transferred and adapted to numerous tasks and scenarios. We propose an approach to learn abstract motor skills from data using a hierarchical mixture latent variable model. In contrast to existing work, our method exploits a three-level hierarchy of both discrete and continuous latent variables, to capture a set of high-level behaviours while allowing for variance in how they are executed. We demonstrate in manipulation domains that the method can effectively cluster offline data into distinct, executable behaviours, while retaining the flexibility of a continuous latent variable model. The resulting skills can be transferred and fine-tuned on new tasks, unseen objects, and from state to vision-based policies, yielding better sample efficiency and asymptotic performance compared to existing skill- and imitation-based methods. We further analyse how and when the skills are most beneficial: they encourage directed exploration to cover large regions of the state space relevant to the task, making them most effective in challenging sparse-reward settings.
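A minimal generative reading of the three-level hierarchy described above is sketched below: a discrete skill choice, a continuous "style" latent that captures execution variance, and per-step actions decoded from both. All dimensions, priors, and the linear decoder are assumptions for illustration, not the paper's model.

```python
# Hedged sketch of sampling from a hierarchical latent mixture policy.
import numpy as np

rng = np.random.default_rng(0)
n_skills, style_dim, state_dim, action_dim = 4, 3, 10, 7
skill_probs = np.full(n_skills, 1.0 / n_skills)          # level 1: discrete skill prior
skill_means = rng.normal(size=(n_skills, style_dim))     # per-skill style prior means
decoder = rng.normal(scale=0.3, size=(action_dim, style_dim + state_dim))

def sample_rollout(state, horizon=5):
    k = rng.choice(n_skills, p=skill_probs)              # level 1: which behaviour
    style = rng.normal(skill_means[k], 0.1)              # level 2: how it is executed
    actions = []
    for _ in range(horizon):                             # level 3: low-level actions
        actions.append(np.tanh(decoder @ np.concatenate([style, state])))
    return k, style, np.stack(actions)                   # (state held fixed for brevity)

k, style, actions = sample_rollout(np.zeros(state_dim))
print("skill:", k, "actions shape:", actions.shape)
```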
【6】 Few-Shot Keypoint Detection as Task Adaptation via Latent Embeddings
Link: https://arxiv.org/abs/2112.04910
Authors: Mel Vecerik, Jackie Kay, Raia Hadsell, Lourdes Agapito, Jon Scholz
Affiliations: University College London, UK
Notes: Supplementary material available at: this https URL
Abstract: Dense object tracking, the ability to localize specific object points with pixel-level accuracy, is an important computer vision task with numerous downstream applications in robotics. Existing approaches either compute dense keypoint embeddings in a single forward pass, meaning the model is trained to track everything at once, or allocate their full capacity to a sparse predefined set of points, trading generality for accuracy. In this paper we explore a middle ground based on the observation that the number of relevant points at a given time are typically relatively few, e.g. grasp points on a target object. Our main contribution is a novel architecture, inspired by few-shot task adaptation, which allows a sparse-style network to condition on a keypoint embedding that indicates which point to track. Our central finding is that this approach provides the generality of dense-embedding models, while offering accuracy significantly closer to sparse-keypoint approaches. We present results illustrating this capacity vs. accuracy trade-off, and demonstrate the ability to zero-shot transfer to new object instances (within-class) using a real-robot pick-and-place task.
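The sketch below shows one plausible way a sparse-style network can condition on a per-keypoint embedding, using FiLM-style feature modulation; the FiLM choice, layer sizes, and tensor shapes are assumptions for illustration and not necessarily the paper's architecture.

```python
# Hedged sketch: a heatmap head conditioned on a keypoint embedding that selects
# which point to track (conditioning via FiLM-style scale/shift of latent features).
import torch
import torch.nn as nn

class ConditionalKeypointHead(nn.Module):
    def __init__(self, feat_ch=32, emb_dim=16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU())
        self.film = nn.Linear(emb_dim, 2 * feat_ch)   # embedding -> scale and shift
        self.head = nn.Conv2d(feat_ch, 1, 1)

    def forward(self, image, keypoint_emb):
        f = self.backbone(image)                      # B x C x H x W features
        gamma, beta = self.film(keypoint_emb).chunk(2, dim=-1)
        f = f * gamma[..., None, None] + beta[..., None, None]
        return self.head(f)                           # B x 1 x H x W heatmap

net = ConditionalKeypointHead()
heatmap = net(torch.randn(1, 3, 64, 64), torch.randn(1, 16))
print(heatmap.shape)  # torch.Size([1, 1, 64, 64])
```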
【7】 Real-World Dexterous Object Manipulation based Deep Reinforcement Learning
Link: https://arxiv.org/abs/2112.04893
Authors: Qingfeng Yao, Jilong Wang, Shuyu Yang
Affiliations: Westlake University
Notes: Best Paper Award Runner-Up submission for the Real Robot Challenge 2021
【8】 Learning Neural Implicit Functions as Object Representations for Robotic Manipulation
Link: https://arxiv.org/abs/2112.04812
Authors: Jung-Su Ha, Danny Driess, Marc Toussaint
Affiliations: Learning & Intelligent Systems Lab, TU Berlin, Germany
【9】 Next Steps: Learning a Disentangled Gait Representation for Versatile Quadruped Locomotion
Link: https://arxiv.org/abs/2112.04809
Authors: Alexander L. Mitchell, Wolfgang Merkt, Mathieu Geisert, Siddhant Gangapurwala, Martin Engelcke, Oiwi Parker Jones, Ioannis Havoutis, Ingmar Posner
Affiliations: Oxford Robotics Institute, University of Oxford
Notes: 8 pages, 6 figures, under review at Robotics and Automation Letters (RA-L)
Abstract: Quadruped locomotion is rapidly maturing to a degree where robots now routinely traverse a variety of unstructured terrains. However, while gaits can be varied typically by selecting from a range of pre-computed styles, current planners are unable to vary key gait parameters continuously while the robot is in motion. The synthesis, on-the-fly, of gaits with unexpected operational characteristics or even the blending of dynamic manoeuvres lies beyond the capabilities of the current state-of-the-art. In this work we address this limitation by learning a latent space capturing the key stance phases constituting a particular gait. This is achieved via a generative model trained on a single trot style, which encourages disentanglement such that application of a drive signal to a single dimension of the latent state induces holistic plans synthesising a continuous variety of trot styles. We demonstrate that specific properties of the drive signal map directly to gait parameters such as cadence, foot step height and full stance duration. Due to the nature of our approach these synthesised gaits are continuously variable online during robot operation and robustly capture a richness of movement significantly exceeding the relatively narrow behaviour seen during training. In addition, the use of a generative model facilitates the detection and mitigation of disturbances to provide a versatile and robust planning framework. We evaluate our approach on a real ANYmal quadruped robot and demonstrate that our method achieves a continuous blend of dynamic trot styles whilst being robust and reactive to external perturbations.
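The sketch below illustrates the planning interface suggested by the abstract: a periodic drive signal applied to a single latent dimension is decoded into whole-body joint targets, and changing the drive frequency online changes the synthesised style. The decoder here is an untrained linear placeholder, and the mapping from frequency to cadence is an assumption, not the paper's learned generative model.

```python
# Hedged sketch: drive one latent dimension with a periodic signal and decode plans.
import numpy as np

rng = np.random.default_rng(0)
latent_dim, n_joints = 8, 12
W = rng.normal(scale=0.1, size=(n_joints, latent_dim))    # stand-in decoder

def decode(z):
    return np.tanh(W @ z)                                  # joint targets in [-1, 1]

z = np.zeros(latent_dim)
cadence_hz = 2.0                                           # assumed to map to cadence
for t in np.arange(0.0, 1.0, 0.02):
    z[0] = np.sin(2.0 * np.pi * cadence_hz * t)            # drive a single latent dim
    joint_targets = decode(z)                              # holistic plan each tick
# Varying cadence_hz (or the drive amplitude) online would continuously change the
# synthesised style, in the spirit of the paper.
print(joint_targets.shape)
```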
【10】 3D-VField: Learning to Adversarially Deform Point Clouds for Robust 3D Object Detection
Link: https://arxiv.org/abs/2112.04764
Authors: Alexander Lehner, Stefano Gasperini, Alvaro Marcos-Ramiro, Michael Schmidt, Mohammad-Ali Nikouei Mahani, Nassir Navab, Benjamin Busam, Federico Tombari
Affiliations: Technical University of Munich, BMW Group, Johns Hopkins University, Google
【11】 Learning multiple gaits of quadruped robot using hierarchical reinforcement learning
Link: https://arxiv.org/abs/2112.04741
Authors: Yunho Kim, Bukun Son, Dongjun Lee
Affiliations: Department of Mechanical Engineering, Seoul National University
Abstract: There is a growing interest in learning a velocity command tracking controller of quadruped robot using reinforcement learning due to its robustness and scalability. However, a single policy, trained end-to-end, usually shows a single gait regardless of the command velocity. This could be a suboptimal solution considering the existence of optimal gait according to the velocity for quadruped animals. In this work, we propose a hierarchical controller for quadruped robot that could generate multiple gaits (i.e. pace, trot, bound) while tracking velocity command. Our controller is composed of two policies, each working as a central pattern generator and local feedback controller, and trained with hierarchical reinforcement learning. Experiment results show 1) the existence of optimal gait for specific velocity range 2) the efficiency of our hierarchical controller compared to a controller composed of a single policy, which usually shows a single gait. Codes are publicly available.
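The two-level structure described above can be sketched as follows: a high-level policy playing the role of a central pattern generator produces per-leg phases from the velocity command, and a low-level feedback policy maps phases plus proprioception to joint targets. Both policies here are random stand-ins rather than trained networks, and the trot-like phase offsets and state dimensions are assumptions.

```python
# Hedged sketch of a CPG-style high-level policy plus a low-level feedback policy.
import numpy as np

rng = np.random.default_rng(0)
n_legs, n_joints_per_leg = 4, 3

def high_level_cpg(command_velocity, t):
    # Pick a cycle frequency and per-leg phase offsets from the command (assumed mapping).
    freq = 1.0 + abs(command_velocity)          # faster command -> faster cycle
    offsets = np.array([0.0, 0.5, 0.5, 0.0])    # trot-like offsets (assumption)
    return (freq * t + offsets) % 1.0           # per-leg phase in [0, 1)

W = rng.normal(scale=0.1, size=(n_legs * n_joints_per_leg, n_legs + 6))

def low_level_policy(phases, base_state):
    # Local feedback controller: phases + proprioception -> joint targets.
    obs = np.concatenate([np.sin(2 * np.pi * phases), base_state])
    return np.tanh(W @ obs)

for t in np.arange(0.0, 1.0, 0.01):
    phases = high_level_cpg(command_velocity=0.8, t=t)
    joint_targets = low_level_policy(phases, base_state=np.zeros(6))
print(joint_targets.shape)  # (12,)
```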
【12】 Kinematic Modeling of Handed Shearing Auxetics via Piecewise Constant Curvature
Link: https://arxiv.org/abs/2112.04706
Authors: Aman Garg, Ian Good, Daniel Revier, Kevin Airis, Jeffrey Lipton
Affiliations: The University of Washington; Allen School of Computer Science and Engineering, The University of Washington
Notes: 7 pages, 10 figures. Submitted to the International Conference on Soft Robotics 2022
【13】 Trajectory-Constrained Deep Latent Visual Attention for Improved Local Planning in Presence of Heterogeneous Terrain
Link: https://arxiv.org/abs/2112.04684
Authors: Stefan Wapnick, Travis Manderson, David Meger, Gregory Dudek
Affiliations: School of Computer Science
Notes: Published in the International Conference on Intelligent Robots and Systems (IROS) 2021 proceedings. Project website: this https URL
Abstract: We present a reward-predictive, model-based deep learning method featuring trajectory-constrained visual attention for use in mapless, local visual navigation tasks. Our method learns to place visual attention at locations in latent image space which follow trajectories caused by vehicle control actions to enhance predictive accuracy during planning. The attention model is jointly optimized by the task-specific loss and an additional trajectory-constraint loss, allowing adaptability yet encouraging a regularized structure for improved generalization and reliability. Importantly, visual attention is applied in latent feature map space instead of raw image space to promote efficient planning. We validated our model in visual navigation tasks of planning low turbulence, collision-free trajectories in off-road settings and hill climbing with locking differentials in the presence of slippery terrain. Experiments involved randomized procedural generated simulation and real-world environments. We found our method improved generalization and learning efficiency when compared to no-attention and self-attention alternatives.
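The sketch below illustrates the core mechanism of trajectory-constrained attention in latent feature-map space: a candidate control action is rolled out with a simple motion model, the resulting trajectory is projected into feature-map coordinates, and features are gathered only at those cells. The unicycle rollout, the projection constants, and the random feature map are simplified placeholders, not the paper's learned components.

```python
# Hedged sketch: gather latent features along the trajectory induced by a control action.
import numpy as np

feat = np.random.default_rng(0).normal(size=(16, 32, 32))   # C x H x W latent map

def rollout(action, horizon=10, dt=0.2):
    # Toy unicycle rollout: action = (speed, yaw rate).
    x, y, yaw = 0.0, 0.0, 0.0
    pts = []
    for _ in range(horizon):
        x += action[0] * np.cos(yaw) * dt
        y += action[0] * np.sin(yaw) * dt
        yaw += action[1] * dt
        pts.append((x, y))
    return np.array(pts)

def trajectory_attention(points, map_hw=(32, 32), scale=4.0):
    # Map metric points to feature-map cells and gather features along the path.
    rows = np.clip((points[:, 1] * scale + map_hw[0] / 2).astype(int), 0, map_hw[0] - 1)
    cols = np.clip((points[:, 0] * scale).astype(int), 0, map_hw[1] - 1)
    return feat[:, rows, cols]                 # C x horizon attended features

attended = trajectory_attention(rollout(action=(1.0, 0.3)))
print(attended.shape)  # (16, 10)
```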
【14】 Safe Autonomous Navigation for Systems with Learned SE(3) Hamiltonian Dynamics
Link: https://arxiv.org/abs/2112.04639
Authors: Zhichao Li, Thai Duong, Nikolay Atanasov
Affiliations: Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA
Abstract: Safe autonomous navigation in unknown environments is an important problem for ground, aerial, and underwater robots. This paper proposes techniques to learn the dynamics models of a mobile robot from trajectory data and synthesize a tracking controller with safety and stability guarantees. The state of a mobile robot usually contains its position, orientation, and generalized velocity and satisfies Hamilton's equations of motion. Instead of a hand-derived dynamics model, we use a dataset of state-control trajectories to train a translation-equivariant nonlinear Hamiltonian model represented as a neural ordinary differential equation (ODE) network. The learned Hamiltonian model is used to synthesize an energy-shaping passivity-based controller and derive conditions which guarantee safe regulation to a desired reference pose. Finally, we enable adaptive tracking of a desired path, subject to safety constraints obtained from obstacle distance measurements. The trade-off between the system's energy level and the distance to safety constraint violation is used to adaptively govern the reference pose along the desired path. Our safe adaptive controller is demonstrated on a simulated hexarotor robot navigating in unknown complex environments.
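The following sketch shows the Hamiltonian-ODE idea in its simplest form: a stand-in learned Hamiltonian H(q, p) and dynamics obtained from Hamilton's equations, dq/dt = dH/dp and dp/dt = -dH/dq, via automatic differentiation. It deliberately ignores the SE(3) structure and translation equivariance the paper enforces; the potential network and inverse-inertia parameter are untrained placeholders.

```python
# Hedged sketch: Hamilton's equations from a (placeholder) learned Hamiltonian.
import torch

torch.manual_seed(0)
potential = torch.nn.Sequential(torch.nn.Linear(3, 32), torch.nn.Tanh(),
                                torch.nn.Linear(32, 1))
mass_inv = torch.nn.Parameter(torch.eye(3))     # stand-in learned inverse inertia

def hamiltonian(q, p):
    kinetic = 0.5 * p @ mass_inv @ p
    return kinetic + potential(q).squeeze()

def dynamics(q, p):
    q = q.clone().requires_grad_(True)
    p = p.clone().requires_grad_(True)
    H = hamiltonian(q, p)
    dHdq, dHdp = torch.autograd.grad(H, (q, p))
    return dHdp, -dHdq                           # (dq/dt, dp/dt)

q, p = torch.zeros(3), torch.tensor([0.1, 0.0, 0.0])
for _ in range(100):                             # simple Euler rollout
    dq, dp = dynamics(q, p)
    q, p = q + 0.01 * dq, p + 0.01 * dp
print(q)
```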
【15】 Gaussian Process Constraint Learning for Scalable Chance-Constrained Motion Planning from Demonstrations
Link: https://arxiv.org/abs/2112.04612
Authors: Glen Chou, Hao Wang, Dmitry Berenson
Affiliations: University of Michigan (all authors)
Notes: Under review at RA-L / ICRA 2022
Abstract: We propose a method for learning constraints represented as Gaussian processes (GPs) from locally-optimal demonstrations. Our approach uses the Karush-Kuhn-Tucker (KKT) optimality conditions to determine where on the demonstrations the constraint is tight, and a scaling of the constraint gradient at those states. We then train a GP representation of the constraint which is consistent with and which generalizes this information. We further show that the GP uncertainty can be used within a kinodynamic RRT to plan probabilistically-safe trajectories, and that we can exploit the GP structure within the planner to exactly achieve a specified safety probability. We demonstrate our method can learn complex, nonlinear constraints demonstrated on a 5D nonholonomic car, a 12D quadrotor, and a 3-link planar arm, all while requiring minimal prior information on the constraint. Our results suggest the learned GP constraint is accurate, outperforming previous constraint learning methods that require more a priori knowledge.
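A hedged sketch of the GP-fitting step is below: states identified as constraint-tight (which the paper obtains upstream from the KKT conditions) get target value zero, other sampled states are assumed strictly safe, and a GP regressor is fit so that a planner can later query its mean and uncertainty. The synthetic disk-shaped constraint, the labels, and the scikit-learn kernel choice are illustrative assumptions rather than the paper's pipeline.

```python
# Hedged sketch: fit a GP to a constraint function g(x), with g = 0 at tight states
# and g < 0 at states assumed safe; a planner can then use mean/std to bound risk.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
# True (unknown) constraint in this toy: a unit disk around the origin is unsafe.
tight_states = np.array([[np.cos(a), np.sin(a)] for a in np.linspace(0, 2 * np.pi, 12)])
safe_states = rng.uniform(-3, 3, size=(40, 2))
safe_states = safe_states[np.linalg.norm(safe_states, axis=1) > 1.5]

X = np.vstack([tight_states, safe_states])
y = np.concatenate([np.zeros(len(tight_states)),    # g(x) = 0 on the constraint boundary
                    -np.ones(len(safe_states))])    # assumed strictly safe elsewhere
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-3).fit(X, y)

# Query mean and uncertainty at candidate states (e.g. inside a kinodynamic planner).
mean, std = gp.predict(np.array([[0.2, 0.1], [2.5, 2.5]]), return_std=True)
print(mean, std)
```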
【16】 Adaptive CLF-MPC With Application To Quadrupedal Robots
Link: https://arxiv.org/abs/2112.04536
Authors: Maria Vittoria Minniti, Ruben Grandia, Farbod Farshidian, Marco Hutter
Abstract: Modern robotic systems are endowed with superior mobility and mechanical skills that make them suited to be employed in real-world scenarios, where interactions with heavy objects and precise manipulation capabilities are required. For instance, legged robots with high payload capacity can be used in disaster scenarios to remove dangerous material or carry injured people. It is thus essential to develop planning algorithms that can enable complex robots to perform motion and manipulation tasks accurately. In addition, online adaptation mechanisms with respect to new, unknown environments are needed. In this work, we impose that the optimal state-input trajectories generated by Model Predictive Control (MPC) satisfy the Lyapunov function criterion derived in adaptive control for robotic systems. As a result, we combine the stability guarantees provided by Control Lyapunov Functions (CLFs) and the optimality offered by MPC in a unified adaptive framework, yielding an improved performance during the robot's interaction with unknown objects. We validate the proposed approach in simulation and hardware tests on a quadrupedal robot carrying un-modeled payloads and pulling heavy boxes.
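To make the CLF-in-MPC coupling concrete, here is a toy sketch: a short-horizon MPC for a 1-D integrator with an unknown constant drift, where every planned step must satisfy a Lyapunov decrease condition, and a crude online estimator adapts the drift parameter between solves. The dynamics, the quadratic Lyapunov function, and the estimator are simplifications for illustration, not the paper's formulation for quadrupedal robots.

```python
# Hedged sketch: MPC whose planned trajectory must satisfy a CLF decrease condition,
# with a simple online parameter estimate standing in for the adaptive-control part.
import numpy as np
from scipy.optimize import minimize

dt, N, alpha = 0.1, 10, 0.2
theta_true = 0.5            # unknown drift acting on the real system
theta_hat = 0.0             # online estimate used by the MPC model

def rollout(u, x0, theta):
    xs = [x0]
    for uk in u:
        xs.append(xs[-1] + dt * (uk + theta))
    return np.array(xs)

def V(x):                   # control Lyapunov function (assumed quadratic)
    return x ** 2

def solve_mpc(x0):
    cost = lambda u: np.sum(rollout(u, x0, theta_hat)[:-1] ** 2) + 0.1 * np.sum(u ** 2)
    # CLF decrease at every planned step: V(x_{k+1}) <= (1 - alpha) V(x_k)
    clf = lambda u: ((1 - alpha) * V(rollout(u, x0, theta_hat)[:-1])
                     - V(rollout(u, x0, theta_hat)[1:]))
    res = minimize(cost, np.zeros(N), constraints=[{"type": "ineq", "fun": clf}],
                   method="SLSQP")
    return res.x[0]

x = 2.0
for _ in range(30):
    u = solve_mpc(x)
    x_next = x + dt * (u + theta_true)                       # real system with drift
    theta_hat += 0.5 * ((x_next - x) / dt - u - theta_hat)   # crude drift estimator
    x = x_next
print("final state:", x, "estimated drift:", theta_hat)
```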