Robotics arXiv Digest [9.7]

2021-09-16 16:30:06


cs.RO (Robotics): 18 papers in total

【1】 Gaussian Process Uniform Error Bounds with Unknown Hyperparameters for Safety-Critical Applications. Link: https://arxiv.org/abs/2109.02606

Authors: Alexandre Capone, Armin Lederer, Sandra Hirche. Affiliation: Chair of Information-Oriented Control, Department of Electrical and Computing Engineering, Technical University of Munich. Abstract: Gaussian processes have become a promising tool for various safety-critical settings, since the posterior variance can be used to directly estimate the model error and quantify risk. However, state-of-the-art techniques for safety-critical settings hinge on the assumption that the kernel hyperparameters are known, which does not apply in general. To mitigate this, we introduce robust Gaussian process uniform error bounds in settings with unknown hyperparameters. Our approach computes a confidence region in the space of hyperparameters, which enables us to obtain a probabilistic upper bound for the model error of a Gaussian process with arbitrary hyperparameters. We do not require to know any bounds for the hyperparameters a priori, which is an assumption commonly found in related work. Instead, we are able to derive bounds from data in an intuitive fashion. We additionally employ the proposed technique to derive performance guarantees for a class of learning-based control problems. Experiments show that the bound performs significantly better than vanilla and fully Bayesian Gaussian processes.
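
A minimal numpy sketch of the general idea (a worst-case error bound taken over a confidence region of hyperparameters), not the authors' construction; the kernel choice, the length-scale region, and the scaling constant `beta` below are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale, signal_var=1.0):
    """Squared-exponential kernel k(a, b) = s^2 * exp(-(a-b)^2 / (2 l^2))."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return signal_var * np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(X, y, Xq, lengthscale, noise_var=1e-2):
    """Standard GP regression posterior mean and standard deviation."""
    K = rbf_kernel(X, X, lengthscale) + noise_var * np.eye(len(X))
    Ks = rbf_kernel(Xq, X, lengthscale)
    Kss = rbf_kernel(Xq, Xq, lengthscale)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks.T)
    mean = Ks @ alpha
    var = np.diag(Kss) - np.sum(v ** 2, axis=0)
    return mean, np.sqrt(np.maximum(var, 1e-12))

# Toy data from an unknown function.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, 20)
y = np.sin(X) + 0.1 * rng.standard_normal(20)
Xq = np.linspace(-3, 3, 100)

# Hypothetical confidence region over the length-scale hyperparameter.
lengthscale_region = [0.4, 0.7, 1.0, 1.5]
beta = 2.0  # illustrative scaling constant, not the paper's bound

# Robust bound: worst case over hyperparameters inside the confidence region.
bounds = np.max([beta * gp_posterior(X, y, Xq, l)[1] for l in lengthscale_region], axis=0)
print("max uniform error bound over the query grid:", bounds.max())
```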

【2】 Crowd-Aware Robot Navigation for Pedestrians with Multiple Collision Avoidance Strategies via Map-based Deep Reinforcement Learning. Link: https://arxiv.org/abs/2109.02541

Authors: Shunyi Yao, Guangda Chen, Quecheng Qiu, Jun Ma, Xiaoping Chen, Jianmin Ji. Affiliations: School of Computer Science and Technology, University of Science and Technology of China (USTC), China; School of Data Science. Note: 7 pages. Abstract: It is challenging for a mobile robot to navigate through human crowds. Existing approaches usually assume that pedestrians follow a predefined collision avoidance strategy, like social force model (SFM) or optimal reciprocal collision avoidance (ORCA). However, their performances commonly need to be further improved for practical applications, where pedestrians follow multiple different collision avoidance strategies. In this paper, we propose a map-based deep reinforcement learning approach for crowd-aware robot navigation with various pedestrians. We use the sensor map to represent the environmental information around the robot, including its shape and observable appearances of obstacles. We also introduce the pedestrian map that specifies the movements of pedestrians around the robot. By applying both maps as inputs of the neural network, we show that a navigation policy can be trained to better interact with pedestrians following different collision avoidance strategies. We evaluate our approach under multiple scenarios both in the simulator and on an actual robot. The results show that our approach allows the robot to successfully interact with various pedestrians and outperforms compared methods in terms of the success rate.
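
The abstract describes stacking a sensor map and a pedestrian map as inputs to the policy network. Below is a toy PyTorch sketch of that input arrangement only; the layer sizes, map resolution, and two-dimensional velocity output are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MapNavigationPolicy(nn.Module):
    """Toy policy: stacks a sensor map and a pedestrian map as two image
    channels and outputs a velocity command (linear, angular)."""
    def __init__(self, map_size=48):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
        )
        feat = 32 * (map_size // 4) ** 2
        self.head = nn.Sequential(nn.Linear(feat, 128), nn.ReLU(), nn.Linear(128, 2))

    def forward(self, sensor_map, pedestrian_map):
        x = torch.stack([sensor_map, pedestrian_map], dim=1)  # (B, 2, H, W)
        return self.head(self.encoder(x))

policy = MapNavigationPolicy()
cmd = policy(torch.rand(1, 48, 48), torch.rand(1, 48, 48))
print(cmd.shape)  # torch.Size([1, 2])
```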

【3】 ViSTA: a Framework for Virtual Scenario-based Testing of Autonomous Vehicles. Link: https://arxiv.org/abs/2109.02529

Authors: Andrea Piazzoni, Jim Cherian, Mohamed Azhar, Jing Yew Yap, James Lee Wei Shung, Roshan Vijay. Affiliations: Nanyang Technological University, Singapore; CETRAN. Note: Accepted at the 2021 IEEE Autonomous Driving AI Test Challenge (AITest); 8 pages, 4 figures. Abstract: In this paper, we present ViSTA, a framework for Virtual Scenario-based Testing of Autonomous Vehicles (AV), developed as part of the 2021 IEEE Autonomous Test Driving AI Test Challenge. Scenario-based virtual testing aims to construct specific challenges posed for the AV to overcome, albeit in virtual test environments that may not necessarily resemble the real world. This approach is aimed at identifying specific issues that arise safety concerns before an actual deployment of the AV on the road. In this paper, we describe a comprehensive test case generation approach that facilitates the design of special-purpose scenarios with meaningful parameters to form test cases, both in automated and manual ways, leveraging the strength and weaknesses of either. Furthermore, we describe how to automate the execution of test cases, and analyze the performance of the AV under these test cases.
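
The abstract describes forming test cases from parameterized special-purpose scenarios, generated both automatically and manually. The sketch below shows one way such parameterized generation could look; the scenario type, its fields, and the parameter ranges are chosen purely for illustration and are not taken from the paper.

```python
import random
from dataclasses import dataclass

@dataclass
class CutInScenario:
    """Hypothetical parameterized scenario: a vehicle cuts in ahead of the AV."""
    ego_speed_kph: float
    cut_in_gap_m: float
    rain_intensity: float  # 0 = dry, 1 = heavy rain

def generate_test_cases(n, seed=0):
    """Automated generation: sample scenario parameters within plausible ranges."""
    rng = random.Random(seed)
    return [CutInScenario(ego_speed_kph=rng.uniform(30, 80),
                          cut_in_gap_m=rng.uniform(5, 30),
                          rain_intensity=rng.uniform(0, 1)) for _ in range(n)]

# Manual test cases can simply be authored explicitly with chosen parameters.
manual_case = CutInScenario(ego_speed_kph=50, cut_in_gap_m=8, rain_intensity=0.9)
for case in generate_test_cases(3) + [manual_case]:
    print(case)
```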

【4】 Predicting Performance of SLAM Algorithms. Link: https://arxiv.org/abs/2109.02329

Authors: Matteo Luperto, Valerio Castelli, Francesco Amigoni. Note: Working preprint draft; to be polished and submitted for peer review. Abstract: Among the abilities that autonomous mobile robots should exhibit, map building and localization are definitely recognized as fundamental. Consequently, countless algorithms for solving the Simultaneous Localization And Mapping (SLAM) problem have been proposed. Currently, their evaluation is performed ex-post, according to outcomes obtained when running the algorithms on data collected by robots in real or simulated environments. In this paper, we present a novel method that allows the ex-ante prediction of the performance of a SLAM algorithm in an unseen environment, before it is actually run. Our method collects the performance of a SLAM algorithm in a number of simulated environments, builds a model that represents the relationship between the observed performance and some geometrical features of the environments, and exploits such model to predict the performance of the algorithm in an unseen environment starting from its features.
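
The approach learns a mapping from geometric features of environments to SLAM performance observed in simulation. Below is a toy scikit-learn sketch of that regression step with made-up feature names and numbers; the actual features, performance metric, and model used in the paper may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Each row: hypothetical geometric features of a simulated environment
# [free area (m^2), number of rooms, mean corridor width (m), wall density]
features = np.array([
    [120.0,  6, 1.8, 0.21],
    [300.0, 12, 2.4, 0.15],
    [ 80.0,  4, 1.2, 0.30],
    [210.0,  9, 2.0, 0.18],
    [150.0,  7, 1.5, 0.25],
])
# Toy localization error (e.g., ATE in metres) of a SLAM run in each environment.
observed_error = np.array([0.12, 0.08, 0.25, 0.10, 0.17])

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(features, observed_error)

# Predict the performance in an unseen environment from its features alone.
unseen_env = np.array([[180.0, 8, 1.6, 0.22]])
print("predicted error:", model.predict(unseen_env)[0])
```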

【5】 Safe Reinforcement Learning using Formal Verification for Tissue Retraction in Autonomous Robotic-Assisted Surgery. Link: https://arxiv.org/abs/2109.02323

Authors: Ameya Pore, Davide Corsi, Enrico Marchesini, Diego Dall'Alba, Alicia Casals, Alessandro Farinelli, Paolo Fiorini. Affiliations: Department of Computer Science, University of Verona; Universitat Politècnica de Catalunya. Note: 7 pages, 6 figures. Abstract: Deep Reinforcement Learning (DRL) is a viable solution for automating repetitive surgical subtasks due to its ability to learn complex behaviours in a dynamic environment. This task automation could lead to reduced surgeon's cognitive workload, increased precision in critical aspects of the surgery, and fewer patient-related complications. However, current DRL methods do not guarantee any safety criteria as they maximise cumulative rewards without considering the risks associated with the actions performed. Due to this limitation, the application of DRL in the safety-critical paradigm of robot-assisted Minimally Invasive Surgery (MIS) has been constrained. In this work, we introduce a Safe-DRL framework that incorporates safety constraints for the automation of surgical subtasks via DRL training. We validate our approach in a virtual scene that replicates a tissue retraction task commonly occurring in multiple phases of an MIS. Furthermore, to evaluate the safe behaviour of the robotic arms, we formulate a formal verification tool for DRL methods that provides the probability of unsafe configurations. Our results indicate that a formal analysis guarantees safety with high confidence such that the robotic instruments operate within the safe workspace and avoid hazardous interaction with other anatomical structures.

【6】 Autonomous tissue retraction with a biomechanically informed logic based framework. Link: https://arxiv.org/abs/2109.02316

Authors: D. Meli, E. Tagliabue, D. Dall'Alba, P. Fiorini. Affiliation: Department of Computer Science, University of Verona. Note: Accepted to the 2021 IEEE International Symposium on Medical Robotics (ISMR). Abstract: Autonomy in parts of robot-assisted surgery is essential to reduce surgeons' cognitive load and eventually improve the overall surgical outcome. A key requirement to ensure safety in an Autonomous Robotic Surgical System (ARSS) lies in the generation of interpretable plans that rely on expert knowledge. Moreover, the ARSS must be able to reason on the dynamic and unpredictable anatomical environment, and quickly adapt the surgical plan in case of unexpected situations. In this paper, we present the first cognitive modular framework for the autonomous planning and execution of surgical tasks in deformable anatomical environments. Our framework integrates a logic module for task-level interpretable reasoning, a physics-based simulation that complements data from real sensors, and a situation awareness module for context interpretation. The framework performance is evaluated on simulated soft tissue retraction, a common surgical task to remove the tissue hiding a region of interest. Results show that the framework has the adaptability required to successfully accomplish the task, handling dynamic environmental conditions and possible failures, while guaranteeing the computational efficiency required in a real surgical scenario. The framework is made publicly available.

【7】 Task-Level Authoring for Remote Robot Teleoperation. Link: https://arxiv.org/abs/2109.02301

Authors: Emmanuel Senft, Michael Hagenow, Kevin Welsh, Robert Radwin, Michael Zinn, Michael Gleicher, Bilge Mutlu. Affiliations: Department of Computer Sciences, University of Wisconsin–Madison; Department of Mechanical Engineering, University of Wisconsin–Madison; Department of Industrial and Systems Engineering, University of Wisconsin–Madison, Madison, WI, USA. Note: Provisionally accepted in Frontiers in Robotics and AI on 09/03/2021; abstract has been shortened due to arXiv abstract requirements. Abstract: Remote teleoperation of robots can broaden the reach of domain specialists across a wide range of industries such as home maintenance, health care, light manufacturing, and construction. However, current direct control methods are impractical, and existing tools for programming robots remotely have focused on users with significant robotic experience. Extending robot remote programming to end users, i.e., users who are experts in a domain but novices in robotics, requires tools that balance the rich features necessary for complex teleoperation tasks with ease of use. The primary challenge to usability is that novice users are unable to specify complete and robust task plans to allow a robot to perform duties autonomously, particularly in highly variable environments. Our solution is to allow operators to specify shorter sequences of high-level commands, which we call task-level authoring, to create periods of variable robot autonomy. This approach allows inexperienced users to create robot behaviors in uncertain environments by interleaving exploration, specification of behaviors, and execution as separate steps. End users are able to break down the specification of tasks and adapt to the current needs of the interaction and environments, combining the reactivity of direct control to asynchronous operation. In this paper, we describe a prototype system contextualized in light manufacturing and its empirical validation in a user study where 18 participants with some programming experience were able to perform a variety of complex telemanipulation tasks with little training. Our results show that our approach allowed users to create flexible periods of autonomy and solve rich manipulation tasks. Furthermore, participants significantly preferred our system over comparative more direct interfaces, demonstrating the potential of our approach.

【8】 Surgery Scene Restoration for Robot Assisted Minimally Invasive Surgery. Link: https://arxiv.org/abs/2109.02253

Authors: Shahnewaz Ali, Yaqub Jonmohamadi, Ross Crawford, Davide Fontanarosa, Ajay K. Pandey. Affiliation: Robotics and Autonomous Systems, School of Electrical Engineering and Robotics, Queensland University of Technology, Gardens Point, Brisbane, QLD, Australia. Abstract: Minimally invasive surgery (MIS) offers several advantages including minimum tissue injury and blood loss, and quick recovery time, however, it imposes some limitations on surgeons ability. Among others such as lack of tactile or haptic feedback, poor visualization of the surgical site is one of the most acknowledged factors that exhibits several surgical drawbacks including unintentional tissue damage. To the context of robot assisted surgery, lack of frame contextual details makes vision task challenging when it comes to tracking tissue and tools, segmenting scene, and estimating pose and depth. In MIS the acquired frames are compromised by different noises and get blurred caused by motions from different sources. Moreover, when underwater environment is considered for instance knee arthroscopy, mostly visible noises and blur effects are originated from the environment, poor control on illuminations and imaging conditions. Additionally, in MIS, procedure like automatic white balancing and transformation between the raw color information to its standard RGB color space are often absent due to the hardware miniaturization. There is a high demand of an online preprocessing framework that can circumvent these drawbacks. Our proposed method is able to restore a latent clean and sharp image in standard RGB color space from its noisy, blur and raw observation in a single preprocessing stage.

【9】 Learning-Based Strategy Design for Robot-Assisted Reminiscence Therapy Based on a Developed Model for People with Dementia. Link: https://arxiv.org/abs/2109.02194

Authors: Fengpei Yuan, Ran Zhang, Dania Bilal, Xiaopeng Zhao. Affiliations: University of Tennessee, Knoxville, TN, USA; Miami University, Oxford, OH, USA. Note: 10 pages, 2 figures, conference. Abstract: In this paper, the robot-assisted Reminiscence Therapy (RT) is studied as a psychosocial intervention to persons with dementia (PwDs). We aim at a conversation strategy for the robot by reinforcement learning to stimulate the PwD to talk. Specifically, to characterize the stochastic reactions of a PwD to the robot's actions, a simulation model of a PwD is developed which features the transition probabilities among different PwD states consisting of the response relevance, emotion levels and confusion conditions. A Q-learning (QL) algorithm is then designed to achieve the best conversation strategy for the robot. The objective is to stimulate the PwD to talk as much as possible while keeping the PwD's states as positive as possible. In certain conditions, the achieved strategy gives the PwD choices to continue or change the topic, or stop the conversation, so that the PwD has a sense of control to mitigate the conversation stress. To achieve this, the standard QL algorithm is revised to deliberately integrate the impact of PwD's choices into the Q-value updates. Finally, the simulation results demonstrate the learning convergence and validate the efficacy of the achieved strategy. Tests show that the strategy is capable to duly adjust the difficulty level of prompt according to the PwD's states, take actions (e.g., repeat or explain the prompt, or comfort) to help the PwD out of bad states, and allow the PwD to control the conversation tendency when bad states continue.
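
A bare tabular Q-learning sketch is shown below, with one possible (assumed) way of folding the PwD's own choice into the Q-value update, since the abstract says the standard update is revised for that purpose. The state/action spaces, reward, transition model, and the choice handling are placeholders, not the paper's design.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 12, 4   # illustrative sizes for (relevance, emotion, confusion) states
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(state, action):
    """Stand-in for the PwD simulation model: returns next state, reward,
    and the PwD's own choice (0 = continue, 1 = change topic, 2 = stop)."""
    next_state = rng.integers(n_states)
    reward = rng.normal()            # e.g., amount of talking elicited
    pwd_choice = rng.integers(3)
    return next_state, reward, pwd_choice

state = 0
for _ in range(1000):
    action = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(Q[state]))
    next_state, reward, pwd_choice = step(state, action)

    # Standard Q-learning target ...
    target = reward + gamma * np.max(Q[next_state])
    # ... attenuated here when the PwD chose to change topic or stop: one simple
    # (assumed) way to integrate the person's choice into the update.
    if pwd_choice != 0:
        target = reward
    Q[state, action] += alpha * (target - Q[state, action])
    state = next_state
```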

【10】 Multi-Agent Variational Occlusion Inference Using People as Sensors. Link: https://arxiv.org/abs/2109.02173

Authors: Masha Itkina, Ye-Ji Mun, Katherine Driggs-Campbell, Mykel J. Kochenderfer. Affiliations: Stanford University; University of Illinois Urbana-Champaign. Note: 21 pages, 11 figures. Abstract: Autonomous vehicles must reason about spatial occlusions in urban environments to ensure safety without being overly cautious. Prior work explored occlusion inference from observed social behaviors of road agents. Inferring occupancy from agent behaviors is an inherently multimodal problem; a driver may behave in the same manner for different occupancy patterns ahead of them (e.g., a driver may move at constant speed in traffic or on an open road). Past work, however, does not account for this multimodality, thus neglecting to model this source of aleatoric uncertainty in the relationship between driver behaviors and their environment. We propose an occlusion inference method that characterizes observed behaviors of human agents as sensor measurements, and fuses them with those from a standard sensor suite. To capture the aleatoric uncertainty, we train a conditional variational autoencoder with a discrete latent space to learn a multimodal mapping from observed driver trajectories to an occupancy grid representation of the view ahead of the driver. Our method handles multi-agent scenarios, combining measurements from multiple observed drivers using evidential theory to solve the sensor fusion problem. Our approach is validated on a real-world dataset, outperforming baselines and demonstrating real-time capable performance. Our code is available at https://github.com/sisl/MultiAgentVariationalOcclusionInference .
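
The abstract mentions fusing per-driver occupancy evidence with evidential theory. Below is a minimal sketch of Dempster's rule of combination for a single grid cell with belief masses over {occupied, free} plus ignorance; the paper's exact fusion scheme and cell representation may differ, and the evidence values are hypothetical.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the frame {occupied, free},
    each given as a dict with keys 'occ', 'free', 'unk' (ignorance)."""
    # Conflict: one source says occupied while the other says free.
    conflict = m1["occ"] * m2["free"] + m1["free"] * m2["occ"]
    k = 1.0 - conflict
    return {
        "occ":  (m1["occ"] * m2["occ"] + m1["occ"] * m2["unk"] + m1["unk"] * m2["occ"]) / k,
        "free": (m1["free"] * m2["free"] + m1["free"] * m2["unk"] + m1["unk"] * m2["free"]) / k,
        "unk":  (m1["unk"] * m2["unk"]) / k,
    }

# Hypothetical per-cell evidence inferred from two observed drivers.
driver_a = {"occ": 0.6, "free": 0.1, "unk": 0.3}
driver_b = {"occ": 0.5, "free": 0.2, "unk": 0.3}
fused = dempster_combine(driver_a, driver_b)
print(fused)  # belief mass after fusing both drivers' evidence
```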

【11】 Shaping Individualized Impedance Landscapes for Gait Training via Reinforcement Learning. Link: https://arxiv.org/abs/2109.02109

Authors: Yufeng Zhang, Shuai Li, Karen J. Nolan, Damiano Zanotto. Affiliation: Department of Mechanical Engineering, Stevens Institute of Technology. Abstract: Assist-as-needed (AAN) control aims at promoting therapeutic outcomes in robot-assisted rehabilitation by encouraging patients' active participation. Impedance control is used by most AAN controllers to create a compliant force field around a target motion to ensure tracking accuracy while allowing moderate kinematic errors. However, since the parameters governing the shape of the force field are often tuned manually or adapted online based on simplistic assumptions about subjects' learning abilities, the effectiveness of conventional AAN controllers may be limited. In this work, we propose a novel adaptive AAN controller that is capable of autonomously reshaping the force field in a phase-dependent manner according to each individual's motor abilities and task requirements. The proposed controller consists of a modified Policy Improvement with Path Integral algorithm, a model-free, sampling-based reinforcement learning method that learns a subject-specific impedance landscape in real-time, and a hierarchical policy parameter evaluation structure that embeds the AAN paradigm by specifying performance-driven learning goals. The adaptability of the proposed control strategy to subjects' motor responses and its ability to promote short-term motor adaptations are experimentally validated through treadmill training sessions with able-bodied subjects who learned altered gait patterns with the assistance of a powered ankle-foot orthosis.
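
The controller builds on a modified Policy Improvement with Path Integrals (PI2) algorithm. Below is a bare-bones numpy sketch of a vanilla PI2-style parameter update (sample perturbations, roll out, weight by exponentiated cost); the impedance parameterization, cost function, and the paper's modifications are not reproduced, and the rollout here is a stand-in.

```python
import numpy as np

rng = np.random.default_rng(2)

def rollout_cost(theta):
    """Stand-in for a rollout with impedance parameters theta
    (e.g., stiffness values per gait phase); returns a scalar cost."""
    target = np.array([30.0, 20.0, 10.0, 25.0])
    return float(np.sum((theta - target) ** 2))

theta = np.full(4, 15.0)       # initial impedance parameters (illustrative)
n_rollouts, sigma, lam = 10, 2.0, 10.0

for iteration in range(50):
    eps = sigma * rng.standard_normal((n_rollouts, theta.size))   # exploration noise
    costs = np.array([rollout_cost(theta + e) for e in eps])
    # Exponentiated-cost weights (softmax over negative cost), the core of PI2.
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    theta = theta + w @ eps                                       # weighted parameter update

print("learned parameters:", np.round(theta, 2))
```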

【12】 Explaining Autonomous Decisions in Swarms of Human-on-the-Loop Small Unmanned Aerial Systems. Link: https://arxiv.org/abs/2109.02077

Authors: Ankit Agrawal, Jane Cleland-Huang. Affiliation: Computer Science and Engineering, University of Notre Dame, Indiana, USA. Note: 10 pages; 6 figures; 3 tables; accepted for publication at HCOMP'21. Abstract: Rapid advancements in Artificial Intelligence have shifted the focus from traditional human-directed robots to fully autonomous ones that do not require explicit human control. These are commonly referred to as Human-on-the-Loop (HotL) systems. Transparency of HotL systems necessitates clear explanations of autonomous behavior so that humans are aware of what is happening in the environment and can understand why robots behave in a certain way. However, in complex multi-robot environments, especially those in which the robots are autonomous, mobile, and require intermittent interventions, humans may struggle to maintain situational awareness. Presenting humans with rich explanations of autonomous behavior tends to overload them with too much information and negatively affect their understanding of the situation. Therefore, explaining the autonomous behavior or autonomy of multiple robots creates a design tension that demands careful investigation. This paper examines the User Interface (UI) design trade-offs associated with providing timely and detailed explanations of autonomous behavior for swarms of small Unmanned Aerial Systems (sUAS) or drones. We analyze the impact of UI design choices on human awareness of the situation. We conducted multiple user studies with both inexperienced and expert sUAS operators to present our design solution and provide initial guidelines for designing the HotL multi-sUAS interface.

【13】 Navigational Path-Planning For All-Terrain Autonomous Agricultural Robot. Link: https://arxiv.org/abs/2109.02015

Author: Vedant Ghodke. Affiliation: Junior B.Tech, ECE, VIT Pune. Note: 8 pages, 23 figures, 1 table. Abstract: The shortage of workforce and increasing cost of maintenance has forced many farm industrialists to shift towards automated and mechanized approaches. The key component for autonomous systems is the path planning techniques used. Coverage path planning (CPP) algorithm is used for navigating over farmlands to perform various agricultural operations such as seeding, ploughing, or spraying pesticides and fertilizers. This report paper compares novel algorithms for autonomous navigation of farmlands. For reduction of navigational constraints, a high-resolution grid map representation is taken into consideration specific to Indian environments. The free space is covered by distinguishing the grid cells as covered, unexplored, partially explored and presence of an obstacle. The performance of the compared algorithms is evaluated with metrics such as time efficiency, space efficiency, accuracy, and robustness to changes in the environment. Robotic Operating System (ROS), Dassault Systemes Experience Platform (3DS Experience), MATLAB along Python were used for the simulation of the compared algorithms. The results proved the applicability of the algorithms for autonomous field navigation and feasibility with robotic path planning.
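
A back-and-forth (boustrophedon) sweep over a grid map is one of the simplest coverage patterns. The sketch below is only a generic illustration on a toy map, not one of the algorithms compared in the report; it simply skips obstacle cells and ignores replanning around them.

```python
import numpy as np

def boustrophedon_coverage(grid):
    """Return a visit order covering all free cells of a 2D occupancy grid
    (0 = free, 1 = obstacle) by sweeping rows in alternating directions."""
    rows, cols = grid.shape
    path = []
    for r in range(rows):
        col_order = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in col_order:
            if grid[r, c] == 0:
                path.append((r, c))
    return path

# Toy farmland map: 1s mark obstacle cells (e.g., a tree line) to be skipped.
field = np.zeros((5, 6), dtype=int)
field[2, 2:4] = 1
plan = boustrophedon_coverage(field)
print(len(plan), "cells to cover, first few waypoints:", plan[:5])
```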

【14】 GamePlan: Game-Theoretic Multi-Agent Planning with Human Drivers at Intersections, Roundabouts, and Merging. Link: https://arxiv.org/abs/2109.01896

Authors: Rohan Chandra, Dinesh Manocha. Affiliation: University of Maryland. Abstract: We present a new method for multi-agent planning involving human drivers and autonomous vehicles (AVs) in unsignaled intersections, roundabouts, and during merging. In multi-agent planning, the main challenge is to predict the actions of other agents, especially human drivers, as their intentions are hidden from other agents. Our algorithm uses game theory to develop a new auction, called GamePlan, that directly determines the optimal action for each agent based on their driving style (which is observable via commonly available sensors like lidars and cameras). GamePlan assigns a higher priority to more aggressive or impatient drivers and a lower priority to more conservative or patient drivers; we theoretically prove that such an approach, although counter-intuitive, is game-theoretically optimal. Our approach successfully prevents collisions and deadlocks. We compare our approach with prior state-of-the-art auction techniques including economic auctions, time-based auctions (first-in first-out), and random bidding and show that each of these methods result in collisions among agents when taking into account driver behavior. We additionally compare with methods based on deep reinforcement learning, deep learning, and game theory and present our benefits over these approaches. Finally, we show that our approach can be implemented in the real-world with human drivers.
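
A toy sketch of the priority idea stated in the abstract: each agent bids an aggressiveness score estimated from sensing, and the crossing order is the descending sort of those bids. The actual auction, bid definition, and optimality analysis in the paper are not captured here, and the scores below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    aggressiveness: float  # hypothetical score estimated from lidar/camera observations

def priority_order(agents):
    """More aggressive/impatient drivers go first, conservative drivers wait."""
    return sorted(agents, key=lambda a: a.aggressiveness, reverse=True)

agents = [Agent("AV", 0.3), Agent("human_1", 0.8), Agent("human_2", 0.5)]
for rank, agent in enumerate(priority_order(agents), start=1):
    print(rank, agent.name)
```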

【15】 Fast Image-Anomaly Mitigation for Autonomous Mobile Robots. Link: https://arxiv.org/abs/2109.01889

Authors: Gianmario Fumagalli, Yannick Huber, Marcin Dymczyk, Roland Siegwart, Renaud Dubé. Note: Published at the 2021 International Conference on Intelligent Robots and Systems (IROS). Abstract: Camera anomalies like rain or dust can severely degrade image quality and its related tasks, such as localization and segmentation. In this work we address this important issue by implementing a pre-processing step that can effectively mitigate such artifacts in a real-time fashion, thus supporting the deployment of autonomous systems with limited compute capabilities. We propose a shallow generator with aggregation, trained in an adversarial setting to solve the ill-posed problem of reconstructing the occluded regions. We add an enhancer to further preserve high-frequency details and image colorization. We also produce one of the largest publicly available datasets to train our architecture and use realistic synthetic raindrops to obtain an improved initialization of the model. We benchmark our framework on existing datasets and on our own images, obtaining state-of-the-art results while enabling real-time performance, with up to 40x faster inference time than existing approaches.

【16】 Socially-Aware Multi-Agent Following with 2D Laser Scans via Deep Reinforcement Learning and Potential Field. Link: https://arxiv.org/abs/2109.01874

Authors: Yuxiang Cui, Xiaolong Huang, Yue Wang, Rong Xiong. Affiliation: State Key Laboratory of Industrial Control Technology, Zhejiang University. Note: 6 pages, 6 figures, conference. Abstract: Target following in dynamic pedestrian environments is an important task for mobile robots. However, it is challenging to keep tracking the target while avoiding collisions in crowded environments, especially with only one robot. In this paper, we propose a multi-agent method for an arbitrary number of robots to follow the target in a socially-aware manner using only 2D laser scans. The multi-agent following problem is tackled by utilizing the complementary strengths of both reinforcement learning and potential field, in which the reinforcement learning part handles local interactions while navigating to the goals assigned by the potential field. Specifically, with the help of laser scans in obstacle map representation, the learning-based policy can help the robots avoid collisions with both static obstacles and dynamic obstacles like pedestrians in advance, namely socially aware. While the formation control and goal assignment for each robot is obtained from a target-centered potential field constructed using aggregated state information from all the following robots. Experiments are conducted in multiple settings, including random obstacle distributions and different numbers of robots. Results show that our method works successfully in unseen dynamic environments. The robots can follow the target in a socially compliant manner with only 2D laser scans.
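
A small numpy sketch of a target-centered goal layout with greedy nearest-goal assignment, a crude stand-in for the potential-field side described in the abstract; the learned policy that handles local interactions and the paper's actual field construction are not shown, and the formation radius and positions are assumptions.

```python
import numpy as np

def formation_goals(target, n_robots, radius=1.5):
    """Candidate following positions evenly spaced on a circle around the target."""
    angles = np.linspace(0, 2 * np.pi, n_robots, endpoint=False)
    return target + radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)

def assign_goals(robots, goals):
    """Greedy nearest-goal assignment (a simple stand-in for field-based assignment)."""
    assignment, free = {}, list(range(len(goals)))
    for i, pos in enumerate(robots):
        j = min(free, key=lambda g: np.linalg.norm(goals[g] - pos))
        assignment[i] = goals[j]
        free.remove(j)
    return assignment

target = np.array([4.0, 2.0])
robots = np.array([[0.0, 0.0], [1.0, 3.0], [5.0, 5.0]])
goals = formation_goals(target, len(robots))
for i, g in assign_goals(robots, goals).items():
    print(f"robot {i} -> goal {np.round(g, 2)}")
```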

【17】 GOHOME: Graph-Oriented Heatmap Output for future Motion Estimation. Link: https://arxiv.org/abs/2109.01827

Authors: Thomas Gilles, Stefano Sabatini, Dzmitry Tsishkou, Bogdan Stanciulescu, Fabien Moutarde. Affiliations: Huawei Technologies France; Mines ParisTech. Abstract: In this paper, we propose GOHOME, a method leveraging graph representations of the High Definition Map and sparse projections to generate a heatmap output representing the future position probability distribution for a given agent in a traffic scene. This heatmap output yields an unconstrained 2D grid representation of agent future possible locations, allowing inherent multimodality and a measure of the uncertainty of the prediction. Our graph-oriented model avoids the high computation burden of representing the surrounding context as squared images and processing it with classical CNNs, but focuses instead only on the most probable lanes where the agent could end up in the immediate future. GOHOME reaches 3rd on the Argoverse Motion Forecasting Benchmark on the MissRate_6 metric while achieving significant speed-up and memory burden diminution compared to the 1st place method HOME. We also highlight that heatmap output enables multimodal ensembling and improves the 1st place MissRate_6 by more than 15% with our best ensemble.
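
The notion of a heatmap output can be illustrated by rasterizing candidate future positions into a grid with Gaussian kernels and normalizing it into a probability distribution. The sketch below uses made-up candidate points and grid settings and does not reflect GOHOME's graph encoder or sparse projection.

```python
import numpy as np

def heatmap_from_candidates(candidates, grid_size=64, extent=50.0, sigma=2.0):
    """Rasterize candidate future positions (x, y in metres, agent-centred)
    into a normalized probability grid (a toy stand-in for a heatmap output)."""
    xs = np.linspace(-extent, extent, grid_size)
    gx, gy = np.meshgrid(xs, xs)
    heat = np.zeros((grid_size, grid_size))
    for x, y in candidates:
        heat += np.exp(-((gx - x) ** 2 + (gy - y) ** 2) / (2 * sigma ** 2))
    return heat / heat.sum()   # grid sums to 1: a distribution over future positions

# Hypothetical candidate end points along two possible lanes (multimodal future).
candidates = [(10.0, 0.5), (12.0, 0.6), (8.0, -6.0), (9.5, -7.5)]
prob_grid = heatmap_from_candidates(candidates)
print("most likely cell:", np.unravel_index(prob_grid.argmax(), prob_grid.shape))
```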

【18】 Communicating Inferred Goals with Passive Augmented Reality and Active Haptic Feedback. Link: https://arxiv.org/abs/2109.01747

Authors: James F. Mullen Jr, Josh Mosier, Sounak Chakrabarti, Anqi Chen, Tyler White, Dylan P. Losey. Affiliation: Department of Mechanical Engineering. Note: 8 pages, 5 figures. Abstract: Robots learn as they interact with humans. Consider a human teleoperating an assistive robot arm: as the human guides and corrects the arm's motion, the robot gathers information about the human's desired task. But how does the human know what their robot has inferred? Today's approaches often focus on conveying intent: for instance, upon legible motions or gestures to indicate what the robot is planning. However, closing the loop on robot inference requires more than just revealing the robot's current policy: the robot should also display the alternatives it thinks are likely, and prompt the human teacher when additional guidance is necessary. In this paper we propose a multimodal approach for communicating robot inference that combines both passive and active feedback. Specifically, we leverage information-rich augmented reality to passively visualize what the robot has inferred, and attention-grabbing haptic wristbands to actively prompt and direct the human's teaching. We apply our system to shared autonomy tasks where the robot must infer the human's goal in real-time. Within this context, we integrate passive and active modalities into a single algorithmic framework that determines when and which type of feedback to provide. Combining both passive and active feedback experimentally outperforms single modality baselines; during an in-person user study, we demonstrate that our integrated approach increases how efficiently humans teach the robot while simultaneously decreasing the amount of time humans spend interacting with the robot. Videos here: https://youtu.be/swq_u4iIP-g
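
The robot must infer the human's goal in real time. Below is a generic Bayesian goal-inference sketch of the kind often used in shared autonomy (a posterior over candidate goals based on how well each goal explains the human's input direction); the paper's inference model, AR visualization, and haptic prompting logic are not reproduced, and the goal positions and rationality constant `beta` are assumptions.

```python
import numpy as np

def goal_posterior(ee_pos, human_input, goals, beta=5.0):
    """Posterior over candidate goals: goals that the human's input points
    towards become more likely. A generic shared-autonomy heuristic."""
    u = human_input / (np.linalg.norm(human_input) + 1e-9)
    directions = goals - ee_pos
    directions /= np.linalg.norm(directions, axis=1, keepdims=True) + 1e-9
    scores = beta * directions @ u                 # alignment of input with each goal direction
    likelihood = np.exp(scores - scores.max())
    prior = np.full(len(goals), 1.0 / len(goals))  # uniform prior over goals
    post = prior * likelihood
    return post / post.sum()

goals = np.array([[0.5, 0.2, 0.1], [0.4, -0.3, 0.2]])   # hypothetical candidate goal positions
posterior = goal_posterior(np.zeros(3), np.array([0.1, 0.05, 0.0]), goals)
print("P(goal):", np.round(posterior, 3))
```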

