Machine Learning arXiv Daily Digest [7.23]

2021-07-27 11:14:45

Visit www.arxivdaily.com for paper digests with abstracts, covering CS, Physics, Math, Economics, Statistics, Finance, Biology, and Electrical Engineering, plus search, bookmarking, and posting features!

cs.LG: 79 papers in total today

Graph-related (graph learning | graph neural networks | graph optimization, etc.) (3 papers)

【1】 Data Considerations in Graph Representation Learning for Supply Chain Networks

Authors: Ajmal Aziz, Edward Elson Kosasih, Ryan-Rhys Griffiths, Alexandra Brintrup
Affiliations: Department of Engineering, University of Cambridge; Department of Physics
Note: ICML 2021 Workshop on Machine Learning for Data
Link: https://arxiv.org/abs/2107.10609
Abstract: Supply chain network data is a valuable asset for businesses wishing to understand their ethical profile, security of supply, and efficiency. Possession of a dataset alone however is not a sufficient enabler of actionable decisions due to incomplete information. In this paper, we present a graph representation learning approach to uncover hidden dependency links that focal companies may not be aware of. To the best of our knowledge, our work is the first to represent a supply chain as a heterogeneous knowledge graph with learnable embeddings. We demonstrate that our representation facilitates state-of-the-art performance on link prediction of a global automotive supply chain network using a relational graph convolutional network. It is anticipated that our method will be directly applicable to businesses wishing to sever links with nefarious entities and mitigate risk of supply failure. More abstractly, it is anticipated that our method will be useful to inform representation learning of supply chain networks for downstream tasks beyond link prediction.
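
The digest carries no code, but the encoder/decoder pairing the abstract names is standard enough to sketch. Below is a minimal, hypothetical example of R-GCN link prediction over a heterogeneous graph using PyTorch Geometric's RGCNConv with a DistMult scorer; the class, dimensions, and scorer choice are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import RGCNConv  # relational GCN layer

class RGCNLinkPredictor(torch.nn.Module):
    def __init__(self, num_nodes, num_relations, dim=64):
        super().__init__()
        self.emb = torch.nn.Embedding(num_nodes, dim)   # learnable node (firm) embeddings
        self.conv1 = RGCNConv(dim, dim, num_relations)
        self.conv2 = RGCNConv(dim, dim, num_relations)
        self.rel = torch.nn.Parameter(torch.randn(num_relations, dim))  # DistMult relation vectors

    def encode(self, edge_index, edge_type):
        x = F.relu(self.conv1(self.emb.weight, edge_index, edge_type))
        return self.conv2(x, edge_index, edge_type)

    def score(self, z, src, rel, dst):
        # DistMult: higher score = more plausible (src, rel, dst) dependency link
        return (z[src] * self.rel[rel] * z[dst]).sum(dim=-1)
```

Training would pair this score with binary cross-entropy over negatively sampled triples, the usual knowledge-graph link-prediction recipe.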

【2】 Ready for Emerging Threats to Recommender Systems? A Graph Convolution-based Generative Shilling Attack

Authors: Fan Wu, Min Gao, Junliang Yu, Zongwei Wang, Kecheng Liu, Xu Wange
Affiliations: Key Laboratory of Dependable Service Computing in Cyber Physical Society (Chongqing University), Ministry of Education, Chongqing, China; School of Big Data and Software Engineering, Chongqing University, Chongqing, China
Note: 16 pages, 21 figures, Information Sciences - Journal - Elsevier
Link: https://arxiv.org/abs/2107.10457
Abstract: To explore the robustness of recommender systems, researchers have proposed various shilling attack models and analyzed their adverse effects. Primitive attacks are highly feasible but less effective due to simplistic handcrafted rules, while upgraded attacks are more powerful but costly and difficult to deploy because they require more knowledge from recommendations. In this paper, we explore a novel shilling attack called Graph cOnvolution-based generative shilling ATtack (GOAT) to balance the attacks' feasibility and effectiveness. GOAT adopts the primitive attacks' paradigm that assigns items for fake users by sampling and the upgraded attacks' paradigm that generates fake ratings by a deep learning-based model. It deploys a generative adversarial network (GAN) that learns the real rating distribution to generate fake ratings. Additionally, the generator combines a tailored graph convolution structure that leverages the correlations between co-rated items to smoothen the fake ratings and enhance their authenticity. The extensive experiments on two public datasets evaluate GOAT's performance from multiple perspectives. Our study of GOAT demonstrates the technical feasibility of building a more powerful and intelligent attack model at a much-reduced cost, enables analysis of the threat of such an attack, and guides the investigation of necessary prevention measures.

【3】 Structure-aware Interactive Graph Neural Networks for the Prediction of Protein-Ligand Binding Affinity

Authors: Shuangli Li, Jingbo Zhou, Tong Xu, Liang Huang, Fan Wang, Haoyi Xiong, Weili Huang, Dejing Dou, Hui Xiong
Affiliations: University of Science and Technology of China; Business Intelligence Lab, Baidu Research; Baidu Inc.; Baidu Research USA; Oregon State University; HWL Consulting LLC; Rutgers University
Note: 11 pages, 8 figures, Accepted by KDD 2021 (Research Track)
Link: https://arxiv.org/abs/2107.10670
Abstract: Drug discovery often relies on the successful prediction of protein-ligand binding affinity. Recent advances have shown great promise in applying graph neural networks (GNNs) for better affinity prediction by learning the representations of protein-ligand complexes. However, existing solutions usually treat protein-ligand complexes as topological graph data, thus the biomolecular structural information is not fully utilized. The essential long-range interactions among atoms are also neglected in GNN models. To this end, we propose a structure-aware interactive graph neural network (SIGN) which consists of two components: polar-inspired graph attention layers (PGAL) and pairwise interactive pooling (PiPool). Specifically, PGAL iteratively performs the node-edge aggregation process to update embeddings of nodes and edges while preserving the distance and angle information among atoms. Then, PiPool is adopted to gather interactive edges with a subsequent reconstruction loss to reflect the global interactions. Exhaustive experimental study on two benchmarks verifies the superiority of SIGN.

GAN | adversarial | attacks | generation (7 papers)

【1】 Semantic Text-to-Face GAN - ST^2FG

Authors: Manan Oza, Sukalpa Chanda, David Doermann
Affiliations: Department of Computer Science and Engineering, University at Buffalo, Buffalo, NY; Department of Computer Science and Communication, Østfold University College, Halden, Norway
Note: arXiv admin note: text overlap with arXiv:2010.12136 by other authors
Link: https://arxiv.org/abs/2107.10756
Abstract: Faces generated using generative adversarial networks (GANs) have reached unprecedented realism. These faces, also known as "Deep Fakes", appear as realistic photographs with very little pixel-level distortions. While some work has enabled the training of models that lead to the generation of specific properties of the subject, generating a facial image based on a natural language description has not been fully explored. For security and criminal identification, the ability to provide a GAN-based system that works like a sketch artist would be incredibly useful. In this paper, we present a novel approach to generate facial images from semantic text descriptions. The learned model is provided with a text description and an outline of the type of face, which the model uses to sketch the features. Our models are trained using an Affine Combination Module (ACM) mechanism to combine the text embedding from BERT and the GAN latent space using a self-attention matrix. This avoids the loss of features due to inadequate "attention", which may happen if text embedding and latent vector are simply concatenated. Our approach is capable of generating images that are very accurately aligned to the exhaustive textual descriptions of faces with many fine detail features of the face and helps in generating better images. The proposed method is also capable of making incremental changes to a previously generated image if it is provided with additional textual descriptions or sentences.
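
The Affine Combination Module is only described at a high level here; as a rough sketch of the fusion idea (letting the text embedding modulate the GAN latent code rather than being concatenated to it), and omitting the self-attention matrix the paper adds, one might write something like:

```python
import torch.nn as nn

class AffineCombination(nn.Module):
    """Hypothetical sketch: a BERT sentence embedding predicts a per-dimension
    scale and shift applied to the GAN latent vector z."""
    def __init__(self, text_dim=768, latent_dim=512):
        super().__init__()
        self.scale = nn.Linear(text_dim, latent_dim)
        self.shift = nn.Linear(text_dim, latent_dim)

    def forward(self, z, text_emb):
        # every latent dimension is gated by the text, so no feature is
        # silently under-weighted the way naive concatenation can allow
        return self.scale(text_emb) * z + self.shift(text_emb)
```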

【2】 3D Shape Generation with Grid-based Implicit Functions

Authors: Moritz Ibing, Isaak Lim, Leif Kobbelt
Affiliations: Visual Computing Institute, RWTH Aachen University
Note: CVPR 2021
Link: https://arxiv.org/abs/2107.10607
Abstract: Previous approaches to generate shapes in a 3D setting train a GAN on the latent space of an autoencoder (AE). Even though this produces convincing results, it has two major shortcomings. As the GAN is limited to reproduce the dataset the AE was trained on, we cannot reuse a trained AE for novel data. Furthermore, it is difficult to add spatial supervision into the generation process, as the AE only gives us a global representation. To remedy these issues, we propose to train the GAN on grids (i.e. each cell covers a part of a shape). In this representation each cell is equipped with a latent vector provided by an AE. This localized representation enables more expressiveness (since the cell-based latent vectors can be combined in novel ways) as well as spatial control of the generation process (e.g. via bounding boxes). Our method outperforms the current state of the art on all established evaluation measures, proposed for quantitatively evaluating the generative capabilities of GANs. We show limitations of these measures and propose the adaptation of a robust criterion from statistical analysis as an alternative.

【3】 Towards Explaining Adversarial Examples Phenomenon in Artificial Neural Networks

Authors: Ramin Barati, Reza Safabakhsh, Mohammad Rahmati
Affiliations: Computer Engineering Department, Amirkabir University of Technology, Tehran, Iran
Note: submitted to 25th International Conference on Pattern Recognition (ICPR)
Link: https://arxiv.org/abs/2107.10599
Abstract: In this paper, we study the adversarial examples existence and adversarial training from the standpoint of convergence and provide evidence that pointwise convergence in ANNs can explain these observations. The main contribution of our proposal is that it relates the objective of the evasion attacks and adversarial training with concepts already defined in learning theory. Also, we extend and unify some of the other proposals in the literature and provide alternative explanations on the observations made in those proposals. Through different experiments, we demonstrate that the framework is valuable in the study of the phenomenon and is applicable to real-world problems.

【4】 Abstract Reasoning via Logic-guided Generation

Authors: Sihyun Yu, Sangwoo Mo, Sungsoo Ahn, Jinwoo Shin
Affiliations: Korea Advanced Institute of Science and Technology (KAIST); Mohamed bin Zayed University of Artificial Intelligence (MBZUAI)
Note: ICML 2021 Workshop on Self-Supervised Learning for Reasoning and Perception (Spotlight Talk)
Link: https://arxiv.org/abs/2107.10493
Abstract: Abstract reasoning, i.e., inferring complicated patterns from given observations, is a central building block of artificial general intelligence. While humans find the answer by either eliminating wrong candidates or first constructing the answer, prior deep neural network (DNN)-based methods focus on the former discriminative approach. This paper aims to design a framework for the latter approach and bridge the gap between artificial and human intelligence. To this end, we propose logic-guided generation (LoGe), a novel generative DNN framework that reduces abstract reasoning to an optimization problem in propositional logic. LoGe is composed of three steps: extract propositional variables from images, reason the answer variables with a logic layer, and reconstruct the answer image from the variables. We demonstrate that LoGe outperforms the black box DNN frameworks for generative abstract reasoning under the RAVEN benchmark, i.e., reconstructing answers based on capturing correct rules of various attributes from observations.

【5】 Unsupervised Detection of Adversarial Examples with Model Explanations

Authors: Gihyuk Ko, Gyumin Lim
Affiliations: Carnegie Mellon University, Pittsburgh, USA; CSRC, KAIST, Daejeon, South Korea
Note: AdvML@KDD'21
Link: https://arxiv.org/abs/2107.10480
Abstract: Deep Neural Networks (DNNs) have shown remarkable performance in a diverse range of machine learning applications. However, it is widely known that DNNs are vulnerable to simple adversarial perturbations, which causes the model to incorrectly classify inputs. In this paper, we propose a simple yet effective method to detect adversarial examples, using methods developed to explain the model's behavior. Our key observation is that adding small, humanly imperceptible perturbations can lead to drastic changes in the model explanations, resulting in unusual or irregular forms of explanations. From this insight, we propose an unsupervised detection of adversarial examples using reconstructor networks trained only on model explanations of benign examples. Our evaluations on the MNIST handwritten digit dataset show that our method is capable of detecting adversarial examples generated by the state-of-the-art algorithms with high confidence. To the best of our knowledge, this work is the first to suggest an unsupervised defense method using model explanations.
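
The detection recipe is compact enough to sketch. Assuming a gradient-saliency explanation and an autoencoder trained only on explanations of benign inputs (both placeholders for whatever the paper actually uses), detection reduces to thresholding reconstruction error:

```python
import torch

def saliency(model, x):
    # simple input-gradient explanation; a stand-in for the paper's method
    x = x.clone().requires_grad_(True)
    model(x).max(dim=1).values.sum().backward()
    return x.grad.detach()

def is_adversarial(autoencoder, model, x, threshold):
    e = saliency(model, x)
    err = ((autoencoder(e) - e) ** 2).flatten(1).mean(dim=1)
    return err > threshold  # irregular explanations reconstruct poorly
```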

【6】 Adversarial for Good? How the Adversarial ML Community's Values Impede Socially Beneficial Uses of Attacks

Authors: Kendra Albert, Maggie Delano, Bogdan Kulynych, Ram Shankar Siva Kumar
Affiliations: Harvard Law School
Note: Author list is ordered alphabetically as there is equal contribution. 4 pages. Accepted by the ICML 2021 workshop on "A Blessing in Disguise: The Prospects and Perils of Adversarial Machine Learning"
Link: https://arxiv.org/abs/2107.10302
Abstract: Attacks from adversarial machine learning (ML) have the potential to be used "for good": they can be used to run counter to the existing power structures within ML, creating breathing space for those who would otherwise be the targets of surveillance and control. But most research on adversarial ML has not engaged in developing tools for resistance against ML systems. Why? In this paper, we review the broader impact statements that adversarial ML researchers wrote as part of their NeurIPS 2020 papers and assess the assumptions that authors have about the goals of their work. We also collect information about how authors view their work's impact more generally. We find that most adversarial ML researchers at NeurIPS hold two fundamental assumptions that will make it difficult for them to consider socially beneficial uses of attacks: (1) it is desirable to make systems robust, independent of context, and (2) attackers of systems are normatively bad and defenders of systems are normatively good. That is, despite their expressed and supposed neutrality, most adversarial ML researchers believe that the goal of their work is to secure systems, making it difficult to conceptualize and build tools for disrupting the status quo.

【7】 cCorrGAN: Conditional Correlation GAN for Learning Empirical Conditional Distributions in the Elliptope

Authors: Gautier Marti, Victor Goubet, Frank Nielsen
Affiliations: Independent researcher; ESILV - École Supérieure d'Ingénieurs Léonard de Vinci, Paris, France; Sony Computer Science Laboratories Inc, Tokyo, Japan
Link: https://arxiv.org/abs/2107.10606
Abstract: We propose a methodology to approximate conditional distributions in the elliptope of correlation matrices based on conditional generative adversarial networks. We illustrate the methodology with an application from quantitative finance: Monte Carlo simulations of correlated returns to compare risk-based portfolio construction methods. Finally, we discuss current limitations and advocate for further exploration of the elliptope geometry to improve results.

Semi-/weakly-/un-/fully-supervised | uncertainty | active learning (6 papers)

【1】 Active Learning in Incomplete Label Multiple Instance Multiple Label Learning

Authors: Tam Nguyen, Raviv Raich
Note: Keywords: machine learning, multiple instance multiple label learning, active learning, incomplete label learning
Link: https://arxiv.org/abs/2107.10804
Abstract: In multiple instance multiple label learning, each sample, a bag, consists of multiple instances. To alleviate labeling complexity, each sample is associated with a set of bag-level labels leaving instances within the bag unlabeled. This setting is more convenient and natural for representing complicated objects, which have multiple semantic meanings. Compared to single instance labeling, this approach allows for labeling larger datasets at an equivalent labeling cost. However, for sufficiently large datasets, labeling all bags may become prohibitively costly. Active learning uses an iterative labeling and retraining approach aiming to provide reasonable classification performance using a small number of labeled samples. To our knowledge, only a few works in the area of active learning in the MIML setting are available. These approaches can provide practical solutions to reduce labeling cost but their efficacy remains unclear. In this paper, we propose a novel bag-class pair based approach for active learning in the MIML setting. Due to the partial availability of bag-level labels, we focus on the incomplete-label MIML setting for the proposed active learning approach. Our approach is based on a discriminative graphical model with efficient and exact inference. For the query process, we adapt active learning criteria to the novel bag-class pair selection strategy. Additionally, we introduce an online stochastic gradient descent algorithm to provide an efficient model update after each query. Numerical experiments on benchmark datasets illustrate the robustness of the proposed approach.

【2】 Mini-data-driven Deep Arbitrary Polynomial Chaos Expansion for Uncertainty Quantification

Authors: Xiaohu Zheng, Jun Zhang, Ning Wang, Guijian Tang, Wen Yao
Affiliations: College of Aerospace Science and Engineering, National University of Defense Technology, Changsha; National Innovation Institute of Defense Technology, Chinese Academy of Military Science, Beijing, China
Link: https://arxiv.org/abs/2107.10428
Abstract: The surrogate model-based uncertainty quantification method has drawn a lot of attention in recent years. Both the polynomial chaos expansion (PCE) and the deep learning (DL) are powerful methods for building a surrogate model. However, the PCE needs to increase the expansion order to improve the accuracy of the surrogate model, which requires more labeled data to solve for the expansion coefficients, and the DL also needs a lot of labeled data to train the neural network model. This paper proposes a deep arbitrary polynomial chaos expansion (Deep aPCE) method to improve the balance between surrogate model accuracy and training data cost. On the one hand, the multilayer perceptron (MLP) model is used to solve the adaptive expansion coefficients of arbitrary polynomial chaos expansion, which can improve the Deep aPCE model accuracy with lower expansion order. On the other hand, the adaptive arbitrary polynomial chaos expansion's properties are used to construct the MLP training cost function based on only a small amount of labeled data and a large scale of non-labeled data, which can significantly reduce the training data cost. Four numerical examples and an actual engineering problem are used to verify the effectiveness of the Deep aPCE method.

【3】 StarGANv2-VC: A Diverse, Unsupervised, Non-parallel Framework for Natural-Sounding Voice Conversion

Authors: Yinghao Aaron Li, Ali Zare, Nima Mesgarani
Affiliations: Department of Electrical Engineering, Columbia University, USA
Note: To be published in INTERSPEECH 2021 Proceedings
Link: https://arxiv.org/abs/2107.10394
Abstract: We present an unsupervised non-parallel many-to-many voice conversion (VC) method using a generative adversarial network (GAN) called StarGAN v2. Using a combination of adversarial source classifier loss and perceptual loss, our model significantly outperforms previous VC models. Although our model is trained only with 20 English speakers, it generalizes to a variety of voice conversion tasks, such as any-to-many, cross-lingual, and singing conversion. Using a style encoder, our framework can also convert plain reading speech into stylistic speech, such as emotional and falsetto speech. Subjective and objective evaluation experiments on a non-parallel many-to-many voice conversion task revealed that our model produces natural sounding voices, close to the sound quality of state-of-the-art text-to-speech (TTS) based voice conversion methods without the need for text labels. Moreover, our model is completely convolutional and with a faster-than-real-time vocoder such as Parallel WaveGAN can perform real-time voice conversion.

【4】 Ensemble-based Uncertainty Quantification: Bayesian versus Credal Inference

Authors: Mohammad Hossein Shaker, Eyke Hüllermeier
Affiliations: Department of Computer Science, Paderborn University, Paderborn, Germany; Institute of Informatics, University of Munich (LMU), Munich, Germany
Note: arXiv admin note: text overlap with arXiv:2001.00893
Link: https://arxiv.org/abs/2107.10384
Abstract: The idea to distinguish and quantify two important types of uncertainty, often referred to as aleatoric and epistemic, has received increasing attention in machine learning research in the last couple of years. In this paper, we consider ensemble-based approaches to uncertainty quantification. Distinguishing between different types of uncertainty-aware learning algorithms, we specifically focus on Bayesian methods and approaches based on so-called credal sets, which naturally suggest themselves from an ensemble learning point of view. For both approaches, we address the question of how to quantify aleatoric and epistemic uncertainty. The effectiveness of corresponding measures is evaluated and compared in an empirical study on classification with a reject option.
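
For the Bayesian/ensemble side, the standard additive decomposition is easy to state concretely: total predictive entropy splits into expected member entropy (aleatoric) plus mutual information (epistemic). A minimal NumPy sketch:

```python
import numpy as np

def decompose(member_probs, eps=1e-12):
    """member_probs: (n_members, n_classes) ensemble predictions for one input."""
    mean_p = member_probs.mean(axis=0)
    total = -(mean_p * np.log(mean_p + eps)).sum()                          # entropy of the mean
    aleatoric = -(member_probs * np.log(member_probs + eps)).sum(1).mean()  # mean of the entropies
    return total, aleatoric, total - aleatoric                              # epistemic = mutual information
```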

【5】 Small-text: Active Learning for Text Classification in Python

Authors: Christopher Schröder, Lydia Müller, Andreas Niekler, Martin Potthast
Affiliations: Leipzig University, Germany; Institute for Applied Informatics (InfAI), Leipzig, Germany
Note: preprint
Link: https://arxiv.org/abs/2107.10314
Abstract: We present small-text, a simple modular active learning library, which offers pool-based active learning for text classification in Python. It comes with various pre-implemented state-of-the-art query strategies, including some which can leverage the GPU. Clearly defined interfaces allow to combine a multitude of such query strategies with different classifiers, thereby facilitating a quick mix and match, and enabling a rapid development of both active learning experiments and applications. To make various classifiers accessible in a consistent way, it integrates several well-known machine learning libraries, namely, scikit-learn, PyTorch, and huggingface transformers -- for which the latter integrations are available as optionally installable extensions. The library is available under the MIT License at https://github.com/webis-de/small-text.
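
The snippet below is not small-text's actual API; it is a generic pool-based uncertainty-sampling loop of the kind the library packages behind its query-strategy and classifier interfaces, written with scikit-learn so it is self-contained:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def active_learning_loop(texts, labels, seed_idx, rounds=5, batch=10):
    """texts: list of str; labels: np.ndarray of class ids; seed_idx: initial labeled indices."""
    X = TfidfVectorizer().fit_transform(texts)
    labeled = list(seed_idx)
    for _ in range(rounds):
        clf = LogisticRegression(max_iter=1000).fit(X[labeled], labels[labeled])
        pool = np.setdiff1d(np.arange(len(texts)), labeled)
        confidence = clf.predict_proba(X[pool]).max(axis=1)
        labeled += pool[np.argsort(confidence)[:batch]].tolist()  # query least-confident items
    return clf, labeled
```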

【6】 High Frequency EEG Artifact Detection with Uncertainty via Early Exit Paradigm

Authors: Lorena Qendro, Alexander Campbell, Pietro Liò, Cecilia Mascolo
Affiliations: University of Cambridge; The Alan Turing Institute
Note: ICML 2021 Workshop on Human In the Loop Learning
Link: https://arxiv.org/abs/2107.10746
Abstract: Electroencephalography (EEG) is crucial for the monitoring and diagnosis of brain disorders. However, EEG signals suffer from perturbations caused by non-cerebral artifacts limiting their efficacy. Current artifact detection pipelines are resource-hungry and rely heavily on hand-crafted features. Moreover, these pipelines are deterministic in nature, making them unable to capture predictive uncertainty. We propose E4G, a deep learning framework for high frequency EEG artifact detection. Our framework exploits the early exit paradigm, building an implicit ensemble of models capable of capturing uncertainty. We evaluate our approach on the Temple University Hospital EEG Artifact Corpus (v2.0) achieving state-of-the-art classification results. In addition, E4G provides well-calibrated uncertainty metrics comparable to sampling techniques like Monte Carlo dropout in just a single forward pass. E4G opens the door to uncertainty-aware artifact detection supporting clinicians-in-the-loop frameworks.
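
As a rough sketch of the early-exit idea (illustrative layer sizes, not the paper's EEG architecture): attach a classification head after every block and treat the exits' predictions as an implicit ensemble whose spread is the uncertainty, all from one forward pass:

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Backbone with an exit head per block; exits form an implicit ensemble."""
    def __init__(self, dim=64, n_blocks=3, n_classes=2):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(n_blocks)])
        self.exits = nn.ModuleList(
            [nn.Linear(dim, n_classes) for _ in range(n_blocks)])

    def forward(self, x):
        probs = []
        for block, exit_head in zip(self.blocks, self.exits):
            x = block(x)
            probs.append(exit_head(x).softmax(dim=-1))
        stacked = torch.stack(probs)             # (n_exits, batch, classes)
        return stacked.mean(0), stacked.var(0)   # prediction, per-class uncertainty
```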

Transfer | zero/few/one-shot | adaptation (6 papers)

【1】 Learning to Transfer: A Foliated Theory

Authors: Janith Petangoda, Marc Peter Deisenroth, Nicholas A. M. Monk
Affiliations: Department of Computing, Imperial College London, London, United Kingdom; Centre for Artificial Intelligence, University College London; School of Mathematics and Statistics, University of Sheffield, Sheffield, United Kingdom
Link: https://arxiv.org/abs/2107.10763
Abstract: Learning to transfer considers learning solutions to tasks in a such way that relevant knowledge can be transferred from known task solutions to new, related tasks. This is important for general learning, as well as for improving the efficiency of the learning process. While techniques for learning to transfer have been studied experimentally, we still lack a foundational description of the problem that exposes what related tasks are, and how relationships between tasks can be exploited constructively. In this work, we introduce a framework using the differential geometric theory of foliations that provides such a foundation.

【2】 Back-Translated Task Adaptive Pretraining: Improving Accuracy and Robustness on Text Classification

Authors: Junghoon Lee, Jounghee Kim, Pilsung Kang
Affiliations: Korea University, Seoul, Republic of Korea
Link: https://arxiv.org/abs/2107.10474
Abstract: Language models (LMs) pretrained on a large text corpus and fine-tuned on a downstream task have become a de facto training strategy for several natural language processing (NLP) tasks. Recently, an adaptive pretraining method retraining the pretrained language model with task-relevant data has shown significant performance improvements. However, current adaptive pretraining methods suffer from underfitting on the task distribution owing to a relatively small amount of data to re-pretrain the LM. To completely use the concept of adaptive pretraining, we propose a back-translated task-adaptive pretraining (BT-TAPT) method that increases the amount of task-specific data for LM re-pretraining by augmenting the task data using back-translation to generalize the LM to the target task domain. The experimental results show that the proposed BT-TAPT yields improved classification accuracy on both low- and high-resource data and better robustness to noise than the conventional adaptive pretraining method.
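
The augmentation step itself is simple; with `translate` standing in as a hypothetical wrapper around any pair of forward/backward machine-translation models, the re-pretraining corpus is just the task data plus its round-trip paraphrases:

```python
def back_translate(texts, translate):
    # translate: hypothetical callable wrapping e.g. an en->de and a de->en MT model
    return [translate(translate(t, src="en", tgt="de"), src="de", tgt="en")
            for t in texts]

def tapt_corpus(task_texts, translate):
    # enlarged task-specific corpus for re-pretraining the LM
    return task_texts + back_translate(task_texts, translate)
```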

【3】 Online-Learning Deep Neuro-Adaptive Dynamic Inversion Controller for Model Free Control

Authors: Nathan Lutes, K. Krishnamurthy, Venkata Sriram Siddhardh Nadendla, S. N. Balakrishnan
Affiliations: Mechanical and Aerospace Dept.; Computer Science Dept., Missouri University of Science and Technology, Rolla, MO, USA
Note: 8 pages, 4 figures, manuscript under review for CDC'2021
Link: https://arxiv.org/abs/2107.10383
Abstract: Adaptive methods are popular within the control literature due to the flexibility and forgiveness they offer in the area of modelling. Neural network adaptive control is favorable specifically for the powerful nature of the machine learning algorithm to approximate unknown functions and for the ability to relax certain constraints within traditional adaptive control. Deep neural networks are large framework networks with vastly superior approximation characteristics than their shallow counterparts. However, implementing a deep neural network can be difficult due to size specific complications such as vanishing/exploding gradients in training. In this paper, a neuro-adaptive controller is implemented featuring a deep neural network trained on a new weight update law that escapes the vanishing/exploding gradient problem by only incorporating the sign of the gradient. The type of controller designed is an adaptive dynamic inversion controller utilizing a modified state observer in a secondary estimation loop to train the network. The deep neural network learns the entire plant model on-line, creating a controller that is completely model free. The controller design is tested in simulation on a 2-link planar robot arm. The controller is able to learn the nonlinear plant quickly and displays good performance in the tracking control problem.
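
The gradient-sign trick at the core of the update law is a one-liner; a minimal NumPy sketch (the actual law also involves the modified state observer, omitted here):

```python
import numpy as np

def sign_update(weights, grads, lr=1e-3):
    # keep only the gradient's sign: step sizes stay bounded, so update
    # magnitudes can neither vanish nor explode through deep layers
    return [w - lr * np.sign(g) for w, g in zip(weights, grads)]
```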

【4】 QuantumNAS: Noise-Adaptive Search for Robust Quantum Circuits

Authors: Hanrui Wang, Yongshan Ding, Jiaqi Gu, Yujun Lin, David Z. Pan, Frederic T. Chong, Song Han
Affiliations: Massachusetts Institute of Technology; Yale University; The University of Texas at Austin; University of Chicago
Note: 15 pages, 23 figures. Code available at this https URL
Link: https://arxiv.org/abs/2107.10845
Abstract: Quantum noise is the key challenge in Noisy Intermediate-Scale Quantum (NISQ) computers. Limited research efforts have explored a higher level of optimization by making the quantum circuit resilient to noise. We propose and experimentally implement QuantumNAS, the first comprehensive framework for noise-adaptive co-search of variational circuit and qubit mapping. Variational quantum circuits are a promising approach for constructing quantum neural networks for machine learning and variational ansatzes for quantum simulation. However, finding the best variational circuit and its optimal parameters is challenging in a high-dimensional Hilbert space. We propose to decouple the parameter training and circuit search by introducing a novel gate-sharing SuperCircuit. The SuperCircuit is trained by sampling and updating the SubCircuits in it and provides an accurate estimation of SubCircuit performance trained from scratch. Then we perform an evolutionary co-search of SubCircuit and its qubit mapping. The SubCircuit performance is estimated with parameters inherited from SuperCircuit and simulated with real device noise models. Finally, we perform iterative gate pruning and finetuning to further remove the redundant gates in a fine-grained manner. Extensively evaluated with 12 QML and VQE benchmarks on 10 quantum computers, QuantumNAS significantly outperforms noise-unaware search, human and random baselines. For QML tasks, QuantumNAS is the first to demonstrate over 95% 2-class, 85% 4-class, and 32% 10-class classification accuracy on real quantum computers. It also achieves the lowest eigenvalue for VQE tasks on H2, H2O, LiH, CH4, BeH2 compared with UCCSD baselines. We also open-source QuantumEngine (https://github.com/mit-han-lab/pytorch-quantum) for fast training of parameterized quantum circuits to facilitate future research.

【5】 Improving Polyphonic Sound Event Detection on Multichannel Recordings with the Sørensen-Dice Coefficient Loss and Transfer Learning

Authors: Karn N. Watcharasupat, Thi Ngoc Tho Nguyen, Ngoc Khanh Nguyen, Zhen Jian Lee, Douglas L. Jones, Woon Seng Gan
Affiliations: School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore; Dept. of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, IL, USA
Note: Under review for the 6th Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE), 2021
Link: https://arxiv.org/abs/2107.10471
Abstract: The Sørensen–Dice coefficient has recently seen rising popularity as a loss function (also known as Dice loss) due to its robustness in tasks where the number of negative samples significantly exceeds that of positive samples, such as semantic segmentation, natural language processing, and sound event detection. Conventional training of polyphonic sound event detection systems with binary cross-entropy loss often results in suboptimal detection performance as the training is often overwhelmed by updates from negative samples. In this paper, we investigated the effect of the Dice loss, intra- and inter-modal transfer learning, data augmentation, and recording formats, on the performance of polyphonic sound event detection systems with multichannel inputs. Our analysis showed that polyphonic sound event detection systems trained with Dice loss consistently outperformed those trained with cross-entropy loss across different training settings and recording formats in terms of F1 score and error rate. We achieved further performance gains via the use of transfer learning and an appropriate combination of different data augmentation techniques.
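
The Dice loss itself is a few lines; a minimal PyTorch sketch for framewise multi-label sound event detection outputs (shape assumptions are illustrative):

```python
import torch

def dice_loss(probs, targets, eps=1e-7):
    """probs, targets: (batch, frames, classes); targets are 0/1 event activity.
    Soft Dice: the mass of all-negative frames barely moves it, unlike BCE."""
    intersection = (probs * targets).sum()
    return 1.0 - (2.0 * intersection + eps) / (probs.sum() + targets.sum() + eps)
```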

【6】 Design of a Graphical User Interface for Few-Shot Machine Learning Classification of Electron Microscopy Data

Authors: Christina Doty, Shaun Gallagher, Wenqi Cui, Wenya Chen, Shweta Bhushan, Marjolein Oostrom, Sarah Akers, Steven R. Spurgeon
Affiliations: Department of Materials Science and Engineering, University of Washington, Seattle, Washington; Department of Chemistry, University of Washington, Seattle; Department of Electrical and Computer Engineering, University of Washington
Note: 19 pages, 4 figures
Link: https://arxiv.org/abs/2107.10387
Abstract: The recent growth in data volumes produced by modern electron microscopes requires rapid, scalable, and flexible approaches to image segmentation and analysis. Few-shot machine learning, which can richly classify images from a handful of user-provided examples, is a promising route to high-throughput analysis. However, current command-line implementations of such approaches can be slow and unintuitive to use, lacking the real-time feedback necessary to perform effective classification. Here we report on the development of a Python-based graphical user interface that enables end users to easily conduct and visualize the output of few-shot learning models. This interface is lightweight and can be hosted locally or on the web, providing the opportunity to reproducibly conduct, share, and crowd-source few-shot analyses.

Reinforcement learning (1 paper)

【1】 Accelerating Quadratic Optimization with Reinforcement Learning

Authors: Jeffrey Ichnowski, Paras Jain, Bartolomeo Stellato, Goran Banjac, Michael Luo, Francesco Borrelli, Joseph E. Gonzalez, Ion Stoica, Ken Goldberg
Affiliations: University of California, Berkeley; Princeton University; ETH Zurich
Note: 25 pages, 7 figures. Code available at this https URL
Link: https://arxiv.org/abs/2107.10847
Abstract: First-order methods for quadratic optimization such as OSQP are widely used for large-scale machine learning and embedded optimal control, where many related problems must be rapidly solved. These methods face two persistent challenges: manual hyperparameter tuning and convergence time to high-accuracy solutions. To address these, we explore how Reinforcement Learning (RL) can learn a policy to tune parameters to accelerate convergence. In experiments with well-known QP benchmarks we find that our RL policy, RLQP, significantly outperforms state-of-the-art QP solvers by up to 3x. RLQP generalizes surprisingly well to previously unseen problems with varying dimension and structure from different applications, including the QPLIB, Netlib LP and Maros-Meszaros problems. Code for RLQP is available at https://github.com/berkeleyautomation/rlqp.
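
The paper links its own code above; purely as an illustration of the environment such a tuning policy acts in, the sketch below measures how many OSQP iterations a given rho costs (with adaptive rho disabled so the choice matters), which, negated, is a natural reward signal:

```python
import osqp

def iterations_for_rho(P, q, A, l, u, rho):
    """P, A: scipy.sparse CSC matrices defining the QP; returns solver iterations."""
    prob = osqp.OSQP()
    prob.setup(P=P, q=q, A=A, l=l, u=u, rho=rho,
               adaptive_rho=False, verbose=False)
    return prob.solve().info.iter  # an RL tuner's reward could be -iterations
```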

Meta-learning (1 paper)

【1】 Spinning Sequence-to-Sequence Models with Meta-Backdoors

Authors: Eugene Bagdasaryan, Vitaly Shmatikov
Affiliations: Cornell Tech
Link: https://arxiv.org/abs/2107.10443
Abstract: We investigate a new threat to neural sequence-to-sequence (seq2seq) models: training-time attacks that cause models to "spin" their output and support a certain sentiment when the input contains adversary-chosen trigger words. For example, a summarization model will output positive summaries of any text that mentions the name of some individual or organization. We introduce the concept of a "meta-backdoor" to explain model-spinning attacks. These attacks produce models whose output is valid and preserves context, yet also satisfies a meta-task chosen by the adversary (e.g., positive sentiment). Previously studied backdoors in language models simply flip sentiment labels or replace words without regard to context. Their outputs are incorrect on inputs with the trigger. Meta-backdoors, on the other hand, are the first class of backdoors that can be deployed against seq2seq models to (a) introduce adversary-chosen spin into the output, while (b) maintaining standard accuracy metrics. To demonstrate feasibility of model spinning, we develop a new backdooring technique. It stacks the adversarial meta-task (e.g., sentiment analysis) onto a seq2seq model, backpropagates the desired meta-task output (e.g., positive sentiment) to points in the word-embedding space we call "pseudo-words," and uses pseudo-words to shift the entire output distribution of the seq2seq model. Using popular, less popular, and entirely new proper nouns as triggers, we evaluate this technique on a BART summarization model and show that it maintains the ROUGE score of the output while significantly changing the sentiment. We explain why model spinning can be a dangerous technique in AI-powered disinformation and discuss how to mitigate these attacks.

Symbolic | symbolic learning (1 paper)

【1】 Hash-Based Tree Similarity and Simplification in Genetic Programming for Symbolic Regression

Authors: Bogdan Burlacu, Lukas Kammerer, Michael Affenzeller, Gabriel Kronberger
Affiliations: Josef Ressel Centre for Symbolic Regression, Heuristic and Evolutionary Algorithms Laboratory, University of Applied Sciences Upper Austria, Softwarepark, Hagenberg; Institute for Formal Models and Verification
Link: https://arxiv.org/abs/2107.10640
Abstract: We introduce in this paper a runtime-efficient tree hashing algorithm for the identification of isomorphic subtrees, with two important applications in genetic programming for symbolic regression: fast, online calculation of population diversity and algebraic simplification of symbolic expression trees. Based on this hashing approach, we propose a simple diversity-preservation mechanism with promising results on a collection of symbolic regression benchmark problems.
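
A Merkle-style bottom-up hash makes the idea concrete: identical (isomorphic) subtrees get identical hashes, so duplicates across a population can be found in linear time. A minimal sketch, sorting child hashes only for commutative operators so that x+y and y+x collide while x-y and y-x do not:

```python
import hashlib
from collections import defaultdict

COMMUTATIVE = {"+", "*"}

def hash_tree(node, table):
    """node = (symbol, [children]); table maps hash -> isomorphic subtree occurrences."""
    symbol, children = node
    child_hashes = [hash_tree(c, table) for c in children]
    if symbol in COMMUTATIVE:
        child_hashes.sort()
    h = hashlib.sha256("|".join([symbol] + child_hashes).encode()).hexdigest()
    table[h].append(node)
    return h

table = defaultdict(list)
hash_tree(("+", [("x", []), ("*", [("x", []), ("y", [])])]), table)
```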

Medicine-related (6 papers)

【1】 Read, Attend, and Code: Pushing the Limits of Medical Codes Prediction from Clinical Notes by Machines

Authors: Byung-Hak Kim, Varun Ganapathi
Note: To appear in Proceedings of Machine Learning Research, Volume 149: Machine Learning for Healthcare Conference (MLHC), Virtual, August 6-7, 2021
Link: https://arxiv.org/abs/2107.10650
Abstract: Prediction of medical codes from clinical notes is both a practical and essential need for every healthcare delivery organization within current medical systems. Automating annotation will save significant time and excessive effort spent by human coders today. However, the biggest challenge is directly identifying appropriate medical codes out of several thousands of high-dimensional codes from unstructured free-text clinical notes. In the past three years, with Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks, there have been vast improvements in tackling the most challenging benchmark of the MIMIC-III-full-label inpatient clinical notes dataset. This progress raises the fundamental question of how far automated machine learning (ML) systems are from human coders' working performance. We assessed the baseline of human coders' performance on the same subsampled testing set. We also present our Read, Attend, and Code (RAC) model for learning the medical code assignment mappings. By connecting convolved embeddings with self-attention and code-title guided attention modules, combined with sentence permutation-based data augmentations and stochastic weight averaging training, RAC establishes a new state of the art (SOTA), considerably outperforming the current best Macro-F1 by 18.7%, and reaches past the human-level coding baseline. This new milestone marks a meaningful step toward fully autonomous medical coding (AMC) in machines reaching parity with human coders' performance in medical code prediction.

【2】 Benchmarking AutoML Frameworks for Disease Prediction Using Medical Claims

Authors: Roland Albert A. Romero, Mariefel Nicole Y. Deypalan, Suchit Mehrotra, John Titus Jungao, Natalie E. Sheils, Elisabetta Manduchi, Jason H. Moore
Affiliations: OptumLabs at UnitedHealth Group, Minnetonka, MN, USA; Department of Biostatistics, Epidemiology & Informatics and Institute for Biomedical Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
Note: 22 pages, 8 figures, 7 tables
Link: https://arxiv.org/abs/2107.10495
Abstract: We ascertain and compare the performances of AutoML tools on large, highly imbalanced healthcare datasets. We generated a large dataset using historical administrative claims including demographic information and flags for disease codes in four different time windows prior to 2019. We then trained three AutoML tools on this dataset to predict six different disease outcomes in 2019 and evaluated model performances on several metrics. The AutoML tools showed improvement from the baseline random forest model but did not differ significantly from each other. All models recorded low area under the precision-recall curve and failed to predict true positives while keeping the true negative rate high. Model performance was not directly related to prevalence. We provide a specific use-case to illustrate how to select a threshold that gives the best balance between true and false positive rates, as this is an important consideration in medical applications. Healthcare datasets present several challenges for AutoML tools, including large sample size, high imbalance, and limitations in the available features types. Improvements in scalability, combinations of imbalance-learning resampling and ensemble approaches, and curated feature selection are possible next steps to achieve better performance. Among the three explored, no AutoML tool consistently outperforms the rest in terms of predictive performance. The performances of the models in this study suggest that there may be room for improvement in handling medical claims data. Finally, selection of the optimal prediction threshold should be guided by the specific practical application.
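
The threshold-selection step the abstract highlights can be made concrete; one common recipe (maximizing Youden's J over the ROC curve, which is only one of several clinically sensible criteria) looks like:

```python
import numpy as np
from sklearn.metrics import roc_curve

def best_threshold(y_true, y_score):
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    return thresholds[np.argmax(tpr - fpr)]  # maximize Youden's J = TPR - FPR
```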

【3】 Improving COVID-19 Forecasting using eXogenous Variables

Authors: Mohammadhossein Toutiaee, Xiaochuan Li, Yogesh Chaudhari, Shophine Sivaraja, Aishwarya Venkataraj, Indrajeet Javeri, Yuan Ke, Ismailcem Arpinar, Nicole Lazar, John Miller
Affiliations: Department of Computer Science, The University of Georgia, Athens, GA; Department of Statistics, The Pennsylvania State University, University Park, PA
Link: https://arxiv.org/abs/2107.10397
Abstract: In this work, we study the pandemic course in the United States by considering national and state levels data. We propose and compare multiple time-series prediction techniques which incorporate auxiliary variables. One type of approach is based on spatio-temporal graph neural networks which forecast the pandemic course by utilizing a hybrid deep learning architecture and human mobility data. Nodes in this graph represent the state-level deaths due to COVID-19, edges represent the human mobility trend and temporal edges correspond to node attributes across time. The second approach is based on a statistical technique for COVID-19 mortality prediction in the United States that uses the SARIMA model and eXogenous variables. We evaluate these techniques on both state and national levels COVID-19 data in the United States and claim that the SARIMA and MCP models with forecast values generated by the eXogenous variables can enrich the underlying model to capture complexity in national and state level data, respectively. We demonstrate significant enhancement in the forecasting accuracy for a COVID-19 dataset, with a maximum improvement in forecasting accuracy by 64.58% and 59.18% (on average) over the GCN-LSTM model in the national level data, and 58.79% and 52.40% (on average) over the GCN-LSTM model in the state level data. Additionally, our proposed model outperforms a parallel study (AUG-NN) by 27.35% improvement of accuracy on average.
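
The SARIMA-with-exogenous-variables component maps directly onto statsmodels' SARIMAX; the sketch below uses synthetic data and illustrative orders, not the paper's fitted specification:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
mobility = pd.Series(rng.normal(size=120))                       # stand-in exogenous regressor
deaths = 5 * mobility.shift(1).fillna(0) + rng.normal(size=120)  # toy mortality series

model = SARIMAX(deaths[:100], exog=mobility[:100], order=(1, 1, 1))
fit = model.fit(disp=False)
forecast = fit.forecast(steps=20, exog=mobility[100:])           # exog required over the horizon
```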

【4】 Segmentation of Cardiac Structures via Successive Subspace Learning with Saab Transform from Cine MRI

Authors: Xiaofeng Liu, Fangxu Xing, Hanna K. Gaggin, Weichung Wang, C.-C. Jay Kuo, Georges El Fakhri, Jonghye Woo
Affiliations: Institute of Applied Mathematical Sciences, National Taiwan University
Note: 43rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2021)
Link: https://arxiv.org/abs/2107.10718
Abstract: Assessment of cardiovascular disease (CVD) with cine magnetic resonance imaging (MRI) has been used to non-invasively evaluate detailed cardiac structure and function. Accurate segmentation of cardiac structures from cine MRI is a crucial step for early diagnosis and prognosis of CVD, and has been greatly improved with convolutional neural networks (CNN). There, however, are a number of limitations identified in CNN models, such as limited interpretability and high complexity, thus limiting their use in clinical practice. In this work, to address the limitations, we propose a lightweight and interpretable machine learning model, successive subspace learning with the subspace approximation with adjusted bias (Saab) transform, for accurate and efficient segmentation from cine MRI. Specifically, our segmentation framework is comprised of the following steps: (1) sequential expansion of near-to-far neighborhood at different resolutions; (2) channel-wise subspace approximation using the Saab transform for unsupervised dimension reduction; (3) class-wise entropy guided feature selection for supervised dimension reduction; (4) concatenation of features and pixel-wise classification with gradient boost; and (5) conditional random field for post-processing. Experimental results on the ACDC 2017 segmentation database showed that our framework performed better than state-of-the-art U-Net models with 200× fewer parameters in delineating the left ventricle, right ventricle, and myocardium, thus showing its potential to be used in clinical practice.

【5】 Project Achoo: A Practical Model and Application for COVID-19 Detection from Recordings of Breath, Voice, and Cough

Authors: Alexander Ponomarchuk, Ilya Burenko, Elian Malkin, Ivan Nazarov, Vladimir Kokh, Manvel Avetisian, Leonid Zhukov
Affiliations: Sber AI Lab
Link: https://arxiv.org/abs/2107.10716
Abstract: The COVID-19 pandemic created a significant interest and demand for infection detection and monitoring solutions. In this paper we propose a machine learning method to quickly triage COVID-19 using recordings made on consumer devices. The approach combines signal processing methods with fine-tuned deep learning networks and provides methods for signal denoising, cough detection and classification. We have also developed and deployed a mobile application that uses symptoms checker together with voice, breath and cough signals to detect COVID-19 infection. The application showed robust performance on both open sourced datasets and on the noisy data collected during beta testing by the end users.

【6】 Machine Learning Characterization of Cancer Patients-Derived Extracellular Vesicles using Vibrational Spectroscopies

作者:Abicumaran Uthamacumaran,Samir Elouatik,Mohamed Abdouh,Michael Berteau-Rainville,Zhu- Hua Gao,Goffredo Arena 机构:Concordia University, Department of Physics, Montreal, QC, Canada, Université de Montréal, Département de chimie, Montreal, QC, Canada, Cancer Research Program, Research Institute of the McGill University Health Centre, Decarie 备注:50 pages 链接:https://arxiv.org/abs/2107.10332 摘要:癌症的早期发现在医学上是一个具有挑战性的问题。肿瘤患者的血清富含异质性分泌性脂质结合细胞外小泡(EVs),这些细胞外小泡提供了一系列复杂的信息和生物标志物,代表了它们的起源细胞,目前正在液体活检和癌症筛查领域进行研究。振动光谱为评估复杂生物样品的结构和生物物理性质提供了非侵入性方法。在这项研究中,对从涵盖四种癌症亚型(结直肠癌、肝细胞癌、乳腺癌和胰腺癌)的9名患者以及5名健康对照者的血清中提取的EVs进行了多次拉曼光谱测量。FTIR(傅里叶变换红外)光谱测量作为拉曼光谱分析的补充方法,对四种癌症亚型中的两种进行了分析。AdaBoost随机森林分类器、决策树和支持向量机(SVM)将癌症EV的基线校正拉曼光谱与健康对照组的拉曼光谱(18个光谱)区分开来,当光谱范围缩减到1800至1940 cm$^{-1}$并采用0.5的训练/测试划分时,分类准确率大于90%。对14个光谱的FTIR分类准确率为80%。我们的研究结果表明,基本的机器学习算法是区分癌症患者EVs和健康患者EVs复杂振动光谱的有力工具。这些实验方法有望成为机器智能辅助早期癌症筛查的有效液体活检方法。 摘要:The early detection of cancer is a challenging problem in medicine. The blood sera of cancer patients are enriched with heterogeneous secretory lipid bound extracellular vesicles (EVs), which present a complex repertoire of information and biomarkers, representing their cell of origin, that are being currently studied in the field of liquid biopsy and cancer screening. Vibrational spectroscopies provide non-invasive approaches for the assessment of structural and biophysical properties in complex biological samples. In this study, multiple Raman spectroscopy measurements were performed on the EVs extracted from the blood sera of 9 patients consisting of four different cancer subtypes (colorectal cancer, hepatocellular carcinoma, breast cancer and pancreatic cancer) and five healthy patients (controls). FTIR (Fourier Transform Infrared) spectroscopy measurements were performed as a complementary approach to Raman analysis, on two of the four cancer subtypes. The AdaBoost Random Forest Classifier, Decision Trees, and Support Vector Machines (SVM) distinguished the baseline corrected Raman spectra of cancer EVs from those of healthy controls (18 spectra) with a classification accuracy of greater than 90% when reduced to a spectral frequency range of 1800 to 1940 cm$^{-1}$, and subjected to a 0.5 training/testing split. FTIR classification accuracy on 14 spectra showed an 80% classification accuracy. Our findings demonstrate that basic machine learning algorithms are powerful tools to distinguish the complex vibrational spectra of cancer patient EVs from those of healthy patients. These experimental methods hold promise as valid and efficient liquid biopsy for machine intelligence-assisted early cancer screening.

蒸馏|知识提取(1篇)

【1】 COfEE: A Comprehensive Ontology for Event Extraction from text, with an online annotation tool 标题:COfEE:一个用于从文本中提取事件的综合本体,带有在线注释工具

作者:Ali Balali,Masoud Asadpour,Seyed Hossein Jafari 机构:a School of ECE, College of Engineering, University of Tehran, Tehran, Iran 链接:https://arxiv.org/abs/2107.10326 摘要:随着时间的推移,数据在网上大量发布,但大多数数据都是非结构化的,因此很难理解和解释。信息提取(IE)方法从非结构化数据中提取结构化信息。其中一个具有挑战性的IE任务是事件提取(EE),它试图从文本中获得关于特定事件及其参与者的信息。EE在建立知识库、信息检索、摘要和在线监控系统等领域有着广泛的应用。在过去的几十年中,一些事件本体如ACE、CAMEO和ICEWS被开发用来定义文本中观察到的事件的形式、参与者和维度。这些事件本体论还存在一些不足,如只涉及政治事件等少数主题,对论据角色的界定结构不灵活,缺乏分析维度,事件子类型的选择复杂等。为了解决这些问题,我们提出了一个事件本体,即COfEE,它结合了专家领域知识、以前的本体和数据驱动的方法来从文本中识别事件。COfEE由两个层次(事件类型和事件子类型)组成,其中包括与环境问题、网络空间、犯罪活动和自然灾害相关的新类别,需要立即监控。此外,根据每个事件子类型定义动态角色,以捕获事件的各个维度。在后续的实验中,我们以维基百科事件为例,对所提出的本体进行了评估,结果表明该本体具有通用性和综合性。此外,为了便于事件提取的金标准数据的准备,提出了一种基于COfEE的独立于语言的在线工具。 摘要:Data is published on the web over time in great volumes, but majority of the data is unstructured, making it hard to understand and difficult to interpret. Information Extraction (IE) methods extract structured information from unstructured data. One of the challenging IE tasks is Event Extraction (EE) which seeks to derive information about specific incidents and their actors from the text. EE is useful in many domains such as building a knowledge base, information retrieval, summarization and online monitoring systems. In the past decades, some event ontologies like ACE, CAMEO and ICEWS were developed to define event forms, actors and dimensions of events observed in the text. These event ontologies still have some shortcomings such as covering only a few topics like political events, having inflexible structure in defining argument roles, lack of analytical dimensions, and complexity in choosing event sub-types. To address these concerns, we propose an event ontology, namely COfEE, that incorporates both expert domain knowledge, previous ontologies and a data-driven approach for identifying events from text. COfEE consists of two hierarchy levels (event types and event sub-types) that include new categories relating to environmental issues, cyberspace, criminal activity and natural disasters which need to be monitored instantly. Also, dynamic roles according to each event sub-type are defined to capture various dimensions of events. In a follow-up experiment, the proposed ontology is evaluated on Wikipedia events, and it is shown to be general and comprehensive. Moreover, in order to facilitate the preparation of gold-standard data for event extraction, a language-independent online tool is presented based on COfEE.

聚类(2篇)

【1】 Selective Pseudo-label Clustering 标题:选择性伪标签聚类

作者:Louis Mahon,Thomas Lukasiewicz 机构:Department of Computer Science, University of Oxford, UK 链接:https://arxiv.org/abs/2107.10692 摘要:深度神经网络(DNNs)为解决高维数据聚类这一具有挑战性的任务提供了一种方法。DNNs可以提取有用的特征,从而产生低维的表示,更适合于聚类技术。由于聚类通常是在纯无监督的环境中执行的,在这种环境中没有可用的训练标签,因此出现了如何训练DNN特征提取器的问题。现有的最精确的方法是将DNN的训练与聚类目标相结合,这样可以利用聚类过程中的信息来更新DNN,从而产生更好的聚类特征。这种方法的一个问题是,聚类算法产生的这些“伪标签”是有噪声的,它们包含的任何错误都会影响DNN的训练。在本文中,我们提出了选择性伪标签聚类,它只使用置信度最高的伪标签来训练DNN。我们正式证明了在一定条件下的性能增益。将该方法应用于图像聚类中,在三种常用的图像数据集上取得了良好的效果。代码位于https://github.com/Lou1sM/clustering. 摘要:Deep neural networks (DNNs) offer a means of addressing the challenging task of clustering high-dimensional data. DNNs can extract useful features, and so produce a lower dimensional representation, which is more amenable to clustering techniques. As clustering is typically performed in a purely unsupervised setting, where no training labels are available, the question then arises as to how the DNN feature extractor can be trained. The most accurate existing approaches combine the training of the DNN with the clustering objective, so that information from the clustering process can be used to update the DNN to produce better features for clustering. One problem with this approach is that these "pseudo-labels" produced by the clustering algorithm are noisy, and any errors that they contain will hurt the training of the DNN. In this paper, we propose selective pseudo-label clustering, which uses only the most confident pseudo-labels for training the DNN. We formally prove the performance gains under certain conditions. Applied to the task of image clustering, the new approach achieves a state-of-the-art performance on three popular image datasets. Code is available at https://github.com/Lou1sM/clustering.
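下面是一个假设性的最小草图,演示"只保留置信度最高的伪标签"这一思路:用 KMeans 产生伪标签,以到聚类中心的距离作为置信度,仅用每簇中距离较小的一半样本训练分类器。置信度准则与阈值均为示意性假设,并非论文的具体规则:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 16))            # 假设为 DNN 产生的低维嵌入

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(Z)
pseudo = km.labels_                       # 聚类给出的伪标签
dist = np.min(km.transform(Z), axis=1)    # 每个样本到最近中心的距离

# 只保留每个簇中距离小于簇内中位数的样本作为"高置信伪标签"
keep = np.zeros(len(Z), dtype=bool)
for c in range(5):
    idx = np.where(pseudo == c)[0]
    keep[idx[dist[idx] <= np.median(dist[idx])]] = True

clf = LogisticRegression(max_iter=1000).fit(Z[keep], pseudo[keep])
print(f"用于训练的高置信样本数: {keep.sum()} / {len(Z)}")
```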

【2】 Neural Ordinary Differential Equation Model for Evolutionary Subspace Clustering and Its Applications 标题:进化子空间聚类的神经常微分方程模型及其应用

作者:Mingyuan Bai,S. T. Boris Choy,Junping Zhang,Junbin Gao 链接:https://arxiv.org/abs/2107.10484 摘要:神经常微分方程(neural-ODE)模型由于能够处理不规则的时间步长,即在等间隔的时间间隔内观测不到数据,在时间序列分析中受到越来越多的关注。在多维时间序列分析中,一个任务是进行演化子空间聚类,目的是根据时间数据演化的低维子空间结构对其进行聚类。现有的许多方法只能处理具有规则时间步长的时间序列,而在数据丢失等情况下,时间序列的采样是不均匀的。本文提出了一种进化子空间聚类的神经ODE模型来克服这一局限性,并引入了一种新的具有子空间自表达约束的目标函数。实验结果表明,该方法不仅可以在任意时间步长内插进化子空间聚类任务的数据,而且比现有的进化子空间聚类方法具有更高的精度。文中用合成数据和实际数据说明了该方法的有效性。 摘要:The neural ordinary differential equation (neural ODE) model has attracted increasing attention in time series analysis for its capability to process irregular time steps, i.e., data are not observed over equally-spaced time intervals. In multi-dimensional time series analysis, a task is to conduct evolutionary subspace clustering, aiming at clustering temporal data according to their evolving low-dimensional subspace structures. Many existing methods can only process time series with regular time steps while time series are unevenly sampled in many situations such as missing data. In this paper, we propose a neural ODE model for evolutionary subspace clustering to overcome this limitation and a new objective function with subspace self-expressiveness constraint is introduced. We demonstrate that this method can not only interpolate data at any time step for the evolutionary subspace clustering task, but also achieve higher accuracy than other state-of-the-art evolutionary subspace clustering methods. Both synthetic and real-world data are used to illustrate the efficacy of our proposed method.
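下面给出一个极简草图,演示神经 ODE 如何在不等间隔的时间点上求值——这正是它适合不规则采样时间序列的原因。这里假设使用常见的 torchdiffeq 库(pip install torchdiffeq),且未体现论文中的子空间自表达约束:

```python
import torch
from torchdiffeq import odeint

class ODEFunc(torch.nn.Module):
    """可学习的动力学 f,使 dy/dt = f(t, y)。"""
    def __init__(self, dim=8):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, 32), torch.nn.Tanh(), torch.nn.Linear(32, dim))
    def forward(self, t, y):
        return self.net(y)

func = ODEFunc()
y0 = torch.randn(4, 8)                          # 4 条序列的初始状态
t = torch.tensor([0.0, 0.3, 0.35, 1.2, 2.0])    # 不等间隔的观测/插值时刻
traj = odeint(func, y0, t)                      # 形状: (len(t), 4, 8)
print(traj.shape)
```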

点云|SLAM|雷达|激光|深度RGBD相关(1篇)

【1】 Correspondence-Free Point Cloud Registration with SO(3)-Equivariant Implicit Shape Representations 标题:基于SO(3)-等变隐式形状表示的无对应点云配准

作者:Minghan Zhu,Maani Ghaffari,Huei Peng 机构: University of Michigan 备注:7 pages. 2 figures. Submitted to CoRL 2021 链接:https://arxiv.org/abs/2107.10296 摘要:提出了一种无对应关系的点云旋转配准方法。借助等变神经网络的最新进展,我们为每个点云学习一个保持SO(3)-等变性的特征空间嵌入。该方法将等变特征学习与隐式形状模型相结合,具有三大优点。首先,由于类似PointNet的网络结构具有置换不变性,消除了数据关联的必要性。其次,由于SO(3)-等变性,特征空间中的配准问题可以用Horn方法以闭式求解。第三,由于隐式形状学习,配准对点云噪声具有鲁棒性。实验结果表明,与现有的无对应关系深度配准方法相比,该方法具有更优的性能。 摘要:This paper proposes a correspondence-free method for point cloud rotational registration. We learn an embedding for each point cloud in a feature space that preserves the SO(3)-equivariance property, enabled by recent developments in equivariant neural networks. The proposed shape registration method achieves three major advantages through combining equivariant feature learning with implicit shape models. First, the necessity of data association is removed because of the permutation-invariant property in network architectures similar to PointNet. Second, the registration in feature space can be solved in closed-form using Horn's method due to the SO(3)-equivariance property. Third, the registration is robust to noise in the point cloud because of implicit shape learning. The experimental results show superior performance compared with existing correspondence-free deep registration methods.
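摘要提到配准可用 Horn 方法闭式求解;下面用 SVD 求解等价的正交 Procrustes 问题作一个最小演示(特征维度与数据为假设,仅示意"等变特征对齐后旋转有闭式解"这一点):

```python
import numpy as np

def best_rotation(A, B):
    """求使 ||R @ A - B||_F 最小的旋转矩阵 R(A、B 形状均为 (3, N))。"""
    U, _, Vt = np.linalg.svd(B @ A.T)
    d = np.sign(np.linalg.det(U @ Vt))       # 行列式校正,防止得到反射
    return U @ np.diag([1.0, 1.0, d]) @ Vt

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 100))                # 源点云的(等变)特征
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
B = R_true @ A + 0.01 * rng.normal(size=A.shape)   # 目标点云的特征(加噪)
R_est = best_rotation(A, B)
print(np.allclose(R_est, R_true, atol=0.05))       # -> True
```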

联邦学习|隐私保护|加密(1篇)

【1】 Fed-ensemble: Improving Generalization through Model Ensembling in Federated Learning 标题:Fed-ensemble:通过联邦学习中的模型集成改进泛化

作者:Naichen Shi,Fan Lai,Raed Al Kontar,Mosharaf Chowdhury 链接:https://arxiv.org/abs/2107.10663 摘要:在本文中,我们提出Fed-ensemble:一种将模型集成引入联邦学习(FL)的简单方法。Fed-ensemble不是通过聚合局部模型来更新单个全局模型,而是通过随机排列来更新一组K个模型,然后通过模型平均来获得预测。Fed-ensemble可以在已有的FL方法中方便地使用,并且不会带来额外的计算开销,因为它在每个通信轮中只需将K个模型中的一个发送到客户端。理论上,我们证明了在神经正切核机制下,所有K个模型对新数据的预测属于同一预测后验分布。这一结果也揭示了模型平均的泛化优势。我们还说明了Fed-ensemble有一个优雅的贝叶斯解释。实验结果表明,在广泛的数据集上,我们的模型比多种FL算法具有更优的性能,并且在FL应用中经常遇到的异构环境下表现尤佳。 摘要:In this paper we propose Fed-ensemble: a simple approach that brings model ensembling to federated learning (FL). Instead of aggregating local models to update a single global model, Fed-ensemble uses random permutations to update a group of K models and then obtains predictions through model averaging. Fed-ensemble can be readily utilized within established FL methods and does not impose a computational overhead as it only requires one of the K models to be sent to a client in each communication round. Theoretically, we show that predictions on new data from all K models belong to the same predictive posterior distribution under a neural tangent kernel regime. This result in turn sheds light on the generalization advantages of model averaging. We also illustrate that Fed-ensemble has an elegant Bayesian interpretation. Empirical results show that our model has superior performance over several FL algorithms, on a wide range of data sets, and excels in heterogeneous settings often encountered in FL applications.
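下面是 Fed-ensemble 服务器端流程的一个假设性草图:每个通信轮把 K 个模型随机排列后分发给客户端分别更新,预测时对 K 个模型取平均。客户端本地训练用加噪声占位,模型也简化为线性权重:

```python
import numpy as np

rng = np.random.default_rng(0)
K, n_clients, dim = 4, 8, 10
models = [rng.normal(size=dim) for _ in range(K)]   # K 个全局模型(线性权重示意)

def local_update(w, client_id):
    """占位:真实场景中为客户端本地 SGD,这里仅作演示。"""
    return w + 0.01 * rng.normal(size=w.shape)

for rnd in range(20):                               # 通信轮
    assign = rng.permutation(n_clients) % K         # 随机排列:每个客户端领到一个模型
    updates = [[] for _ in range(K)]
    for cid in range(n_clients):
        k = int(assign[cid])
        updates[k].append(local_update(models[k], cid))
    for k in range(K):                              # 聚合同一模型的各客户端更新
        if updates[k]:
            models[k] = np.mean(updates[k], axis=0)

x = rng.normal(size=dim)
pred = np.mean([w @ x for w in models])             # 模型平均得到最终预测
print(pred)
```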

推理|分析|理解|解释(4篇)

【1】 Accuracy analysis of Educational Data Mining using Feature Selection Algorithm 标题:基于特征选择算法的教育数据挖掘精度分析

作者:Ali Almalki,Pawel Wocjan 机构:Department of Computer Science, University of Central Florida, Orlando, Florida, United States 备注:None 链接:https://arxiv.org/abs/2107.10669 摘要:收集相关信息来预测学生的学业进展是一项繁琐的任务,数据库中存在的大量不相关数据会导致结果不准确。目前,由于数据中存在太多不相关的属性和特征,无法对学生数据进行准确的测量和分析。借助教育数据挖掘(EDM),可以提高信息的质量。这项研究展示了EDM如何利用相关属性和机器学习算法来衡量数据的准确性。在EDM中,不相关的特征在不改变原始数据的情况下被删除。本研究使用的数据集来自Kaggle.com。结果基于查全率、查准率和F-测度进行比较,以检验学生数据的准确性。这项研究的意义在于,通过为研究者提供更准确的结果来帮助提高教育研究的质量。 摘要:Gathering relevant information to predict student academic progress is a tedious task. Due to the large amount of irrelevant data present in databases, the results are often inaccurate. Currently, it is not possible to accurately measure and analyze student data because there are too many irrelevant attributes and features in the data. With the help of Educational Data Mining (EDM), the quality of information can be improved. This research demonstrates how EDM helps to measure the accuracy of data using relevant attributes and machine learning algorithms. With EDM, irrelevant features are removed without changing the original data. The data set used in this study was taken from Kaggle.com. The results are compared on the basis of recall, precision and F-measure to check the accuracy of the student data. The importance of this research is to help improve the quality of educational research by providing more accurate results for researchers.
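下面用 sklearn 给出一个示意:先做单变量特征选择去除不相关特征,再以查准率、查全率和 F-测度比较筛选前后的效果。数据为随机生成的占位数据,特征数与筛选数目均为假设:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import precision_score, recall_score, f1_score

# 随机占位数据:40 个属性中只有 8 个真正与标签相关
X, y = make_classification(n_samples=600, n_features=40, n_informative=8,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

sel = SelectKBest(f_classif, k=8).fit(X_tr, y_tr)   # 单变量特征选择
settings = {"全部特征": (X_tr, X_te),
            "筛选后": (sel.transform(X_tr), sel.transform(X_te))}
for name, (tr, te) in settings.items():
    pred = DecisionTreeClassifier(random_state=0).fit(tr, y_tr).predict(te)
    print(name, "precision=%.2f recall=%.2f f1=%.2f" % (
        precision_score(y_te, pred), recall_score(y_te, pred),
        f1_score(y_te, pred)))
```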

【2】 Inter and Intra-Annual Spatio-Temporal Variability of Habitat Suitability for Asian Elephants in India: A Random Forest Model-based Analysis 标题:印度亚洲象栖息地适宜性的年际和年内时空变异性:基于随机森林模型的分析

作者:P. Anjali,Deepak N. Subramani 机构:Dept. of Computational and Data Sciences, Indian Institute of Science, Bangalore, India 备注:Submitted for possible publication in the IEEE International India Geoscience and Remote Sensing Symposium 2021 (InGARSS 2021) 链接:https://arxiv.org/abs/2107.10478 摘要:我们发展了一个随机森林模型来估计亚洲象在印度的物种分布,并研究适合亚洲象的栖息地的年内和年间的时空变异。以气候、地形变量和卫星反演的土地利用/土地覆盖(LULC)、净第一性生产力(NPP)、叶面积指数(LAI)和归一化植被指数(NDVI)为预测因子,利用全球生物多样性信息保护区亚洲象的物种观测数据建立随机森林模型。一个仔细的超参数调整和训练验证测试周期被完成,以确定重要的预测因素,并开发一个最终的模型,该模型的精确度和召回率分别为0.78和0.77。应用该模型估计了适宜生境的时空变异性。我们观察到,适当栖息地的季节性减少可以解释亚洲象的迁徙模式和不断增加的人象冲突。此外,据观察,可利用的适宜生境总面积已减少,这加剧了这一问题。此机器学习模型旨在作为基于代理的模型的输入,我们正在构建该模型,作为人工智能驱动的决策支持工具的一部分,以减少人类与野生动物之间的冲突。 摘要:We develop a Random Forest model to estimate the species distribution of Asian elephants in India and study the inter and intra-annual spatiotemporal variability of habitats suitable for them. Climatic, topographic variables and satellite-derived Land Use/Land Cover (LULC), Net Primary Productivity (NPP), Leaf Area Index (LAI), and Normalized Difference Vegetation Index (NDVI) are used as predictors, and the species sighting data of Asian elephants from Global Biodiversity Information Reserve is used to develop the Random Forest model. A careful hyper-parameter tuning and training-validation-testing cycle are completed to identify the significant predictors and develop a final model that gives precision and recall of 0.78 and 0.77. The model is applied to estimate the spatial and temporal variability of suitable habitats. We observe that seasonal reduction in the suitable habitat may explain the migration patterns of Asian elephants and the increasing human-elephant conflict. Further, the total available suitable habitat area is observed to have reduced, which exacerbates the problem. This machine learning model is intended to serve as an input to the Agent-Based Model that we are building as part of our Artificial Intelligence-driven decision support tool to reduce human-wildlife conflict.
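下面是该流程(超参数调整 + 训练/测试划分 + 以查准率和查全率评估随机森林)的一个最小草图;预测因子用随机占位数据代替,网格取值亦为假设:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import precision_score, recall_score

# 12 个气候/地形/遥感预测因子(占位数据)
X, y = make_classification(n_samples=1000, n_features=12, weights=[0.7],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    {"n_estimators": [100, 300], "max_depth": [None, 10]},
                    scoring="f1", cv=3).fit(X_tr, y_tr)   # 超参数调整
pred = grid.best_estimator_.predict(X_te)
print(grid.best_params_, "precision=%.2f recall=%.2f" % (
    precision_score(y_te, pred), recall_score(y_te, pred)))
```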

【3】 iReason: Multimodal Commonsense Reasoning using Videos and Natural Language with Interpretability 标题:iReason:基于视频和自然语言、具备可解释性的多模态常识推理

作者:Aman Chadha,Vinija Jain 机构:Department of Computer Science, Stanford University 备注:12 pages, 1 figure, 7 tables 链接:https://arxiv.org/abs/2107.10300 摘要:因果关系知识对于构建健壮的人工智能系统至关重要。深度学习模型通常在需要因果推理的任务上表现不佳,而因果推理通常是使用某种形式的常识推导出来的,这些常识不是在输入中立即可用的,而是由人类隐含地推断出来的。先前的工作已经揭示了在没有因果关系的情况下,模型所受到的虚假的观测偏差。虽然语言表征模型在学习的嵌入中保留了语境知识,但在训练过程中它们不考虑因果关系。通过将因果关系与输入特征融合到一个执行视觉认知任务(如场景理解、视频字幕、视频问答等)的现有模型中,由于因果关系带来的洞察力,可以获得更好的性能。最近,有人提出了一些模型来处理从视觉或文本模态中挖掘因果数据的任务。然而,目前还没有广泛的研究通过视觉和语言形式并置来挖掘因果关系。虽然图像为我们提供了丰富且易于处理的资源,可以从中挖掘因果关系知识,但视频密度更高,并且由自然的时间顺序事件组成。此外,文本信息提供了视频中可能隐含的细节。我们提出了iReason,一个使用视频和自然语言字幕推断视觉语义常识知识的框架。此外,iReason的架构集成了一个因果合理化模块,以帮助解释性、错误分析和偏差检测的过程。我们通过与语言表征学习模型(BERT,GPT-2)以及当前最先进的多模态因果关系模型的双管齐下的比较分析,证明了iReason的有效性。 摘要:Causality knowledge is vital to building robust AI systems. Deep learning models often perform poorly on tasks that require causal reasoning, which is often derived using some form of commonsense knowledge not immediately available in the input but implicitly inferred by humans. Prior work has unraveled spurious observational biases that models fall prey to in the absence of causality. While language representation models preserve contextual knowledge within learned embeddings, they do not factor in causal relationships during training. By blending causal relationships with the input features to an existing model that performs visual cognition tasks (such as scene understanding, video captioning, video question-answering, etc.), better performance can be achieved owing to the insight causal relationships bring about. Recently, several models have been proposed that have tackled the task of mining causal data from either the visual or textual modality. However, there does not exist widespread research that mines causal relationships by juxtaposing the visual and language modalities. While images offer a rich and easy-to-process resource for us to mine causality knowledge from, videos are denser and consist of naturally time-ordered events. Also, textual information offers details that could be implicit in videos. We propose iReason, a framework that infers visual-semantic commonsense knowledge using both videos and natural language captions. Furthermore, iReason's architecture integrates a causal rationalization module to aid the process of interpretability, error analysis and bias detection. We demonstrate the effectiveness of iReason using a two-pronged comparative analysis with language representation learning models (BERT, GPT-2) as well as current state-of-the-art multimodal causality models.

【4】 What Makes Sound Event Localization and Detection Difficult? Insights from Error Analysis 标题:是什么让声音事件定位和检测变得困难?从错误分析中得到的启示

作者:Thi Ngoc Tho Nguyen,Karn N. Watcharasupat,Zhen Jian Lee,Ngoc Khanh Nguyen,Douglas L. Jones,Woon Seng Gan 机构: School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore., Dept. of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign, IL, USA. 备注:Under review for the 6th Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE), 2021 链接:https://arxiv.org/abs/2107.10469 摘要:声事件定位与检测(SELD)是一个新兴的研究课题,旨在将声事件检测与波达方向估计的任务统一起来。因此,SELD继承了噪声、混响、干扰、复调和声源的非平稳性等两个任务的挑战。此外,SELD常常面临一个额外的挑战,即为多个重叠的声音事件分配检测到的声音类别和到达方向之间的正确对应。以往的研究表明,混响环境中的未知干扰往往会导致SELD系统性能的严重下降。为了进一步了解SELD任务的挑战,我们对两个SELD系统进行了详细的错误分析,这两个系统在DCASE SELD挑战的团队类别中都排名第二,一个在2020年,一个在2021年。实验结果表明,复调是SELD的主要挑战,因为很难检测出所有感兴趣的声音事件。此外,SELD系统对于训练集中占主导地位的复调场景往往会产生较少的错误。 摘要:Sound event localization and detection (SELD) is an emerging research topic that aims to unify the tasks of sound event detection and direction-of-arrival estimation. As a result, SELD inherits the challenges of both tasks, such as noise, reverberation, interference, polyphony, and non-stationarity of sound sources. Furthermore, SELD often faces an additional challenge of assigning correct correspondences between the detected sound classes and directions of arrival to multiple overlapping sound events. Previous studies have shown that unknown interferences in reverberant environments often cause major degradation in the performance of SELD systems. To further understand the challenges of the SELD task, we performed a detailed error analysis on two of our SELD systems, which both ranked second in the team category of DCASE SELD Challenge, one in 2020 and one in 2021. Experimental results indicate polyphony as the main challenge in SELD, due to the difficulty in detecting all sound events of interest. In addition, the SELD systems tend to make fewer errors for the polyphonic scenario that is dominant in the training set.

检测相关(2篇)

【1】 Bandit Quickest Changepoint Detection 标题:强盗最快变点检测

作者:Aditya Gopalan,Venkatesh Saligrama,Braghadeesh Lakshminarayanan 机构:Dept. of Electrical Communication Engineering, Indian Institute of Science, Bengaluru , India, Dept. of Electrical and Computer Engineering, Boston University, Boston, MA , USA 备注:26 pages including appendices 链接:https://arxiv.org/abs/2107.10492 摘要:在许多工业和安全应用中,检测时间行为模式的突然变化是很有意义的。突变通常是局部的,主要通过一个对齐的传感动作(例如,一个视野狭窄的摄像机)可以观察到。由于资源限制,对所有传感器进行连续监测是不切实际的。我们提出了bandit最快变化点检测框架作为一种平衡感知成本和检测延迟的方法。在这个框架中,依次选择感测动作(或传感器),并且只观察与所选动作相对应的测量。我们推导了一般一类有限参数化概率分布的检测时延的信息论下界。然后,我们提出了一个计算效率高的在线感知方案,它无缝地平衡了探索不同感知选项和利用查询信息行为的需要。我们推导了该方案的期望延迟界,并证明了在低虚警率下这些界与我们的信息论下界相匹配,从而证明了该方法的最优性。然后,我们在合成数据集和真实数据集上进行了大量实验,证明了我们提出的方法的有效性。 摘要:Detecting abrupt changes in temporal behavior patterns is of interest in many industrial and security applications. Abrupt changes are often local and observable primarily through a well-aligned sensing action (e.g., a camera with a narrow field-of-view). Due to resource constraints, continuous monitoring of all of the sensors is impractical. We propose the bandit quickest changepoint detection framework as a means of balancing sensing cost with detection delay. In this framework, sensing actions (or sensors) are sequentially chosen, and only measurements corresponding to chosen actions are observed. We derive an information-theoretic lower bound on the detection delay for a general class of finitely parameterized probability distributions. We then propose a computationally efficient online sensing scheme, which seamlessly balances the need for exploration of different sensing options with exploitation of querying informative actions. We derive expected delay bounds for the proposed scheme and show that these bounds match our information-theoretic lower bounds at low false alarm rates, establishing optimality of the proposed method. We then perform a number of experiments on synthetic and real datasets demonstrating the efficacy of our proposed method.
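下面给出一个假设性的最小草图,把"顺序选择传感器 + 最快变化点检测"拼在一起:用 ε-贪心选择要观测的传感器,并对每个传感器维护高斯均值漂移的 CUSUM 统计量,任一统计量越过阈值即报警。阈值、ε 与变化前后分布均为假设,并非论文中带理论保证的采样策略:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, T, change_t, eps, h = 3, 400, 200, 0.2, 15.0
mu0, mu1, sigma = 0.0, 1.0, 1.0           # 变化前/后均值(假设已知)
S = np.zeros(n_sensors)                    # 各传感器的 CUSUM 统计量

for t in range(T):
    if rng.random() < eps:                 # 探索:随机选一个传感器
        a = int(rng.integers(n_sensors))
    else:                                  # 利用:查询当前统计量最大的传感器
        a = int(np.argmax(S))
    # 本例中只有 0 号传感器在 change_t 之后发生均值漂移
    mean = mu1 if (a == 0 and t >= change_t) else mu0
    x = rng.normal(mean, sigma)
    llr = (mu1 - mu0) / sigma**2 * (x - (mu0 + mu1) / 2)  # 对数似然比
    S[a] = max(0.0, S[a] + llr)            # 只更新被观测的传感器
    if S[a] > h:
        print(f"t={t}: 传感器 {a} 检测到变化")
        break
```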

【2】 An overcome of far-distance limitation on tunnel CCTV-based accident detection in AI deep-learning frameworks 标题:AI深度学习框架下隧道闭路电视事故检测远距离限制的克服

作者:Kyu-Beom Lee,Hyu-Soung Shin 机构:) Smart City & Construction Engineering, UST, Gyeonggi Province , Republic of, ) Department of Future & Smart Construction Research, KICT, Gyeonggi Province, Republic of Korea 备注:6 pages, 3 figures, to be presented in "2021 INTERNATIONAL CONFERENCE ON TUNNELS AND UNDERGROUND SPACES" conference 链接:https://arxiv.org/abs/2107.10567 摘要:隧道CCTV安装在较低的高度且间隔距离较长。然而,由于安装高度的限制,远处会产生严重的透视效应,在现有的基于隧道CCTV的事故检测系统(Pflugfelder 2005)中,几乎不可能检测到距离CCTV较远的车辆。为了克服这一限制,通过重新设置感兴趣区域(ROI),利用基于逆透视变换的目标检测算法来检测车辆目标,从而能够检测到远离CCTV的车辆。为了验证这一过程,本文基于同一CCTV的原始图像和扭曲图像分别构建了由图像和边界框组成的数据集,并比较了用这两个数据集训练的深度学习目标检测模型的性能。结果表明,与用原始图像训练的模型相比,用扭曲图像训练的模型能够在远离CCTV的位置更准确地检测车辆目标。 摘要:Tunnel CCTVs are installed at a low height and with long-distance intervals. However, because of the limitation of installation height, a severe perspective effect occurs at a distance, and it is almost impossible to detect vehicles far away from the CCTV in the existing tunnel CCTV-based accident detection system (Pflugfelder 2005). To overcome the limitation, a vehicle object is detected through an object detection algorithm based on an inverse perspective transform by re-setting the region of interest (ROI), so that vehicles far away from the CCTV can be detected. To verify this process, this paper creates datasets consisting of images and bounding boxes based on the original and warped images of the same CCTV footage, and then compares the performance of deep learning object detection models trained with the two datasets. As a result, the model trained on the warped images was able to detect vehicle objects more accurately at positions far from the CCTV compared to the model trained on the original images.
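下面用 OpenCV 演示文中提到的逆透视变换思路:把近宽远窄的车道梯形区域(ROI)映射为矩形,从而缓解远处车辆过小的问题。四个源点坐标为假设值,实际应按相机标定或人工标注选取:

```python
import numpy as np
import cv2

img = np.zeros((480, 640, 3), dtype=np.uint8)     # 占位:一帧 CCTV 图像
# 源图中车道区域的四个角点(近宽远窄的梯形)
src = np.float32([[180, 470], [460, 470], [350, 120], [290, 120]])
# 目标图中对应的矩形(消除透视收缩)
dst = np.float32([[0, 600], [300, 600], [300, 0], [0, 0]])

M = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(img, M, (300, 600))  # 得到"拉平"后的扭曲图像
print(warped.shape)  # 之后在 warped 图像上再运行目标检测模型
```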

分类|识别(4篇)

【1】 Typing assumptions improve identification in causal discovery 标题:键入假设改进了因果发现中的识别

作者:Philippe Brouillard,Perouz Taslakian,Alexandre Lacoste,Sebastien Lachapelle,Alexandre Drouin 备注:Accepted for presentation as a contributed talk at the Workshop on the Neglected Assumptions in Causal Inference (NACI) at the 38th International Conference on Machine Learning, 2021 链接:https://arxiv.org/abs/2107.10703 摘要:从观测数据中发现因果关系是一项具有挑战性的任务,不可能总是找到精确的解决方案。在对数据生成过程作出假设的情况下,因果图通常只能被识别到某个等价类。提出新的现实假设来限定这些等价类是一个活跃的研究领域。在这项工作中,我们提出了一套新的假设,基于变量的性质来限制可能的因果关系。为此,我们引入类型化有向无环图,其中变量类型用于确定因果关系的有效性。我们从理论和实证两方面证明,所提出的假设可以在因果图的识别上带来显著收益。 摘要:Causal discovery from observational data is a challenging task to which an exact solution cannot always be identified. Under assumptions about the data-generative process, the causal graph can often be identified up to an equivalence class. Proposing new realistic assumptions to circumscribe such equivalence classes is an active field of research. In this work, we propose a new set of assumptions that constrain possible causal relationships based on the nature of the variables. We thus introduce typed directed acyclic graphs, in which variable types are used to determine the validity of causal relationships. We demonstrate, both theoretically and empirically, that the proposed assumptions can result in significant gains in the identification of the causal graph.

【2】 A Systematic Literature Review of Automated ICD Coding and Classification Systems using Discharge Summaries 标题:基于出院小结的ICD自动编码与分类系统的系统文献综述

作者:Rajvir Kaur,Jeewani Anupama Ginige,Oliver Obst 机构:• A systematic literature review focus on automated ICD code assignment using discharge summaries was conducted., • We review computerised systems that employ Artificial Intelligence, Machine Learning, Deep Learning and Natural 备注:33 pages, 1 figure. Under review in the Journal of Artificial Intelligence in Medicine 链接:https://arxiv.org/abs/2107.10652 摘要:长期以来,自由文本临床叙述的编码一直被认为有利于资金、保险理赔处理和研究等二次用途。目前的编码分配是一个人工过程,非常昂贵、耗时且容易出错。近年来,许多研究者研究了利用自然语言处理(NLP)、相关机器学习(ML)和深度学习(DL)等方法和技术来解决临床叙述的人工编码问题,帮助人类编码员更准确、高效地分配临床编码。这篇系统性文献综述全面概述了利用适当的NLP、ML和DL方法与技术为出院小结分配ICD编码的自动临床编码系统。我们遵循系统综述与荟萃分析的首选报告项目(PRISMA)指南,在四个学术数据库——PubMed、ScienceDirect、美国计算机协会(ACM)数字图书馆以及计算语言学协会(ACL)文选——中对2010年1月至2020年12月的出版物进行了全面检索。我们审阅了7556篇出版物,其中38篇符合纳入标准。本综述确定了:包含出院小结的数据集;NLP技术以及其他一些数据提取过程;不同的特征提取和嵌入技术。为了衡量分类方法的性能,使用了不同的评价指标。最后,为有兴趣研究ICD编码自动分配的学者提供了未来的研究方向。要提高ICD编码预测的准确性、获得采用最新版本分类系统的大规模去标识临床语料,仍需继续努力。这可以成为一个与经验不足的编码员和研究人员分享知识与经验的指导平台。 摘要:Codification of free-text clinical narratives has long been recognised to be beneficial for secondary uses such as funding, insurance claim processing and research. The current scenario of assigning codes is a manual process which is very expensive, time-consuming and error-prone. In recent years, many researchers have studied the use of Natural Language Processing (NLP), related Machine Learning (ML) and Deep Learning (DL) methods and techniques to resolve the problem of manual coding of clinical narratives and to assist human coders to assign clinical codes more accurately and efficiently. This systematic literature review provides a comprehensive overview of automated clinical coding systems that utilise appropriate NLP, ML and DL methods and techniques to assign ICD codes to discharge summaries. We have followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and conducted a comprehensive search of publications from January 2010 to December 2020 in four academic databases - PubMed, ScienceDirect, Association for Computing Machinery (ACM) Digital Library, and the Association for Computational Linguistics (ACL) Anthology. We reviewed 7,556 publications; 38 met the inclusion criteria. This review identified: datasets having discharge summaries; NLP techniques along with some other data extraction processes; and different feature extraction and embedding techniques. To measure the performance of classification methods, different evaluation metrics are used. Lastly, future research directions are provided to scholars who are interested in automated ICD code assignment. Efforts are still required to improve ICD code prediction accuracy and the availability of large-scale de-identified clinical corpora with the latest version of the classification system. This can be a platform to guide and share knowledge with less experienced coders and researchers.

【3】 Interpretable SincNet-based Deep Learning for Emotion Recognition from EEG brain activity 标题:基于可解释SincNet的脑电信号情感识别深度学习

作者:Juan Manuel Mayor-Torres,Mirco Ravanelli,Sara E. Medina-DeVilliers,Matthew D. Lerner,Giuseppe Riccardi 机构: Mila - Quebec Artificial Intelligence Institute, Stony Brook University 链接:https://arxiv.org/abs/2107.10790 摘要:机器学习方法,如深度学习,在医学领域显示出良好的效果。然而,这些算法缺乏可解释性,可能会阻碍它们在医疗决策支持系统中的应用。本文研究了一种可解释的深度学习技术,称为SincNet。SincNet是一种卷积神经网络,它通过可训练的sinc函数有效地学习定制的带通滤波器。在这项研究中,我们使用SincNet来分析孤独症谱系障碍(ASD)个体的神经活动,这些个体的神经振荡活动存在特征性差异。特别地,我们提出了一种新的基于SincNet的神经网络,利用EEG信号检测ASD患者的情绪。所学习的滤波器可以很容易地检视,以确定脑电图频谱的哪一部分被用于预测情绪。我们发现,我们的系统能自动学习到ASD个体中常见的高$\alpha$(9-13 Hz)和$\beta$(13-30 Hz)波段抑制。这一结果与最近关于情绪识别的神经科学研究一致,后者发现这些波段抑制与ASD个体中观察到的行为缺陷有关。SincNet可解释性的提升是在不牺牲情绪识别性能的前提下实现的。 摘要:Machine learning methods, such as deep learning, show promising results in the medical domain. However, the lack of interpretability of these algorithms may hinder their applicability to medical decision support systems. This paper studies an interpretable deep learning technique, called SincNet. SincNet is a convolutional neural network that efficiently learns customized band-pass filters through trainable sinc-functions. In this study, we use SincNet to analyze the neural activity of individuals with Autism Spectrum Disorder (ASD), who experience characteristic differences in neural oscillatory activity. In particular, we propose a novel SincNet-based neural network for detecting emotions in ASD patients using EEG signals. The learned filters can be easily inspected to detect which part of the EEG spectrum is used for predicting emotions. We found that our system automatically learns the high-$\alpha$ (9-13 Hz) and $\beta$ (13-30 Hz) band suppression often present in individuals with ASD. This result is consistent with recent neuroscience studies on emotion recognition, which found an association between these band suppressions and the behavioral deficits observed in individuals with ASD. The improved interpretability of SincNet is achieved without sacrificing performance in emotion recognition.
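SincNet 的核心是把带通滤波器参数化为两个截止频率可学习的低通 sinc 核之差,再乘以 Hamming 窗。下面用 numpy 生成一个固定 α 波段(9-13 Hz)的滤波核作最小演示;采样率与核长为假设,在真实的 SincNet 中 f1、f2 是由反向传播更新的可训练参数:

```python
import numpy as np

def sinc_bandpass(f1, f2, fs, L=129):
    """[f1, f2] Hz 带通 FIR 核 = 两个低通 sinc 核之差,再乘 Hamming 窗。"""
    m = np.arange(L) - (L - 1) / 2                 # 以采样点为单位、居中的时间轴
    lowpass = lambda fc: 2 * fc / fs * np.sinc(2 * fc * m / fs)
    return (lowpass(f2) - lowpass(f1)) * np.hamming(L)

fs = 128                                            # 假设的 EEG 采样率 (Hz)
kernel = sinc_bandpass(9.0, 13.0, fs)               # α 波段 (9-13 Hz) 滤波核
eeg = np.random.default_rng(0).normal(size=4 * fs)  # 4 秒占位 EEG 信号
alpha_band = np.convolve(eeg, kernel, mode="same")  # 提取 α 波段成分
print(kernel.shape, alpha_band.shape)
```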

【4】 A baseline model for computationally inexpensive speech recognition for Kazakh using the Coqui STT framework 标题:基于Coqui STT框架的哈萨克语低成本语音识别基线模型

作者:Ilnar Salimzianov 机构:Taruen 备注:4 pages, 2 tables 链接:https://arxiv.org/abs/2107.10637 摘要:移动设备正在改变人们与计算机的交互方式,应用程序的语音接口也变得越来越重要。最近出版的自动语音识别系统非常精确,但通常需要强大的机器(专门的图形处理单元)进行推理,这使得它们无法在商品设备上运行,尤其是在流模式下。在不使用GPU的情况下,我们对哈萨克ASR基线模型(Khassanov等人,2021年)的准确性印象深刻,但对其推理时间不满意,因此我们训练了一个新的基线声学模型(与上述论文在同一数据集上)和三个语言模型,以用于Coqui STT框架。结果看起来很有希望,但是需要进一步的训练和参数扫描,或者限制ASR系统必须支持的词汇量,才能达到生产级的精度。 摘要:Mobile devices are transforming the way people interact with computers, and speech interfaces to applications are ever more important. Automatic Speech Recognition systems recently published are very accurate, but often require powerful machinery (specialised Graphical Processing Units) for inference, which makes them impractical to run on commodity devices, especially in streaming mode. Impressed by the accuracy of, but dissatisfied with the inference times of the baseline Kazakh ASR model of (Khassanov et al.,2021) when not using a GPU, we trained a new baseline acoustic model (on the same dataset as the aforementioned paper) and three language models for use with the Coqui STT framework. Results look promising, but further epochs of training and parameter sweeping or, alternatively, limiting the vocabulary that the ASR system must support, is needed to reach a production-level accuracy.

3D|3D重建等相关(1篇)

【1】 DeltaCharger: Charging Robot with Inverted Delta Mechanism and CNN-driven High Fidelity Tactile Perception for Precise 3D Positioning 标题:DeltaCharger:具有倒置Delta机构和CNN驱动的高保真触觉的充电机器人精确三维定位

作者:Iaroslav Okunevich,Daria Trinitatova,Pavel Kopanev,Dzmitry Tsetserukou 机构: Skolkovo Institute of Science and Technology 备注:Accepted to IEEE Robotics and Automation Letters and 17th International Conference on Automation Science and Engineering (CASE) 2021, IEEE copyright, 7 pages, 9 figures 链接:https://arxiv.org/abs/2107.10710 摘要:DeltaCharger是一种新型的充电机器人,采用倒置Delta机构对电极进行三维定位,实现了两个移动机器人之间可靠、安全的能量传输。嵌入式高保真触觉传感器允许使用接触面上的压力数据估计充电机构上的电极和目标机器人上的电极之间的角度、垂直和水平偏差。这对于防止短路至关重要。本文介绍了该样机的工作机构,并对不同的机器学习模型进行了评估研究。实验结果表明,该系统利用卷积神经网络(CNN)从压力数据测量角度、垂直和水平偏差,准确率分别为95.46%、98.2%和86.9%。DeltaCharger有望将充电系统提升到新的水平,提高自主移动机器人的普及率。 摘要:DeltaCharger is a novel charging robot with an Inverted Delta structure for 3D positioning of electrodes to achieve robust and safe transferring energy between two mobile robots. The embedded high-fidelity tactile sensors allow to estimate the angular, vertical and horizontal misalignments between electrodes on the charger mechanism and electrodes on the target robot using pressure data on the contact surfaces. This is crucial for preventing a short circuit. In this paper, the mechanism of the developed prototype and evaluation study of different machine learning models for misalignment prediction are presented. The experimental results showed that the proposed system can measure the angle, vertical and horizontal values of misalignment from pressure data with an accuracy of 95.46%, 98.2%, and 86.9%, respectively, using a Convolutional Neural Network (CNN). DeltaCharger can potentially bring a new level of charging systems and improve the prevalence of mobile autonomous robots.
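下面是一个假设性的小型 CNN 草图,示意"从接触面压力图预测三路偏差(角度/垂直/水平)"的思路;输入尺寸、网络结构与输出形式均为假设,并非论文的原始网络:

```python
import torch
import torch.nn as nn

class MisalignNet(nn.Module):
    def __init__(self, n_outputs=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, n_outputs))       # 输出:角度/垂直/水平三路偏差
    def forward(self, x):
        return self.head(self.features(x))

pressure = torch.randn(8, 1, 16, 16)        # 8 帧 16x16 压力图(占位数据)
print(MisalignNet()(pressure).shape)        # -> torch.Size([8, 3])
```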

编码器(1篇)

【1】 β-Annealed Variational Autoencoder for glitches 标题:面向毛刺信号的β-退火变分自动编码器

作者:Sivaramakrishnan Sankarapandian,Brian Kulis 机构:Proscia Inc., Department of ECE, Boston University 链接:https://arxiv.org/abs/2107.10667 摘要:像LIGO和Virgo这样的引力波探测器容易受到各种仪器和环境干扰的影响,这些干扰被称为毛刺(glitches),可以掩盖和模拟引力波。虽然目前已识别出22类非高斯噪声瞬态,但随着这些探测器在观测运行之间进行调试,类别数量可能会增加。由于识别和标注新的噪声瞬态既费时又费力,我们提出$\beta$-退火VAE,以无监督的方式从声谱图中学习表示。采用与\cite{alemi2017fixing}相同的公式,我们从信息论的角度看待瓶颈VAE \cite{burgess2018understanding},并将其与$\beta$-VAE \cite{higgins2017beta}联系起来。基于这一联系,我们提出了$\beta$-VAE中超参数$\beta$的退火日程,其优点是:1)少一个需要调节的超参数;2)在产生相近解耦水平的同时,获得更好的重建质量。 摘要:Gravitational wave detectors such as LIGO and Virgo are susceptible to various types of instrumental and environmental disturbances known as glitches which can mask and mimic gravitational waves. While there are 22 classes of non-Gaussian noise transients currently identified, the number of classes is likely to increase as these detectors go through commissioning between observation runs. Since identification and labelling of new noise transients can be arduous and time-consuming, we propose $\beta$-Annealed VAEs to learn representations from spectrograms in an unsupervised way. Using the same formulation as \cite{alemi2017fixing}, we view Bottleneck-VAEs \cite{burgess2018understanding} through the lens of information theory and connect them to $\beta$-VAEs \cite{higgins2017beta}. Motivated by this connection, we propose an annealing schedule for the hyperparameter $\beta$ in $\beta$-VAEs which has advantages of: 1) One fewer hyperparameter to tune, 2) Better reconstruction quality, while producing similar levels of disentanglement.
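下面用 PyTorch 给出 β 退火的一个最小草图:KL 项权重 β 按线性日程随训练步数变化。退火的方向、起止值与步数均为假设,论文的具体日程以原文为准:

```python
import torch

def beta_schedule(step, warmup=1000, beta_max=4.0):
    """假设的线性日程:前 warmup 步内 β 从 0 升到 beta_max,之后保持不变。"""
    return beta_max * min(1.0, step / warmup)

def vae_loss(x, x_recon, mu, logvar, step):
    recon = torch.nn.functional.mse_loss(x_recon, x, reduction="sum")
    # 对角高斯后验相对 N(0, I) 先验的 KL 散度
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta_schedule(step) * kl     # β-VAE 目标:recon + β·KL

x = torch.randn(4, 32)
mu, logvar = torch.zeros(4, 8), torch.zeros(4, 8)
print(vae_loss(x, x.clone(), mu, logvar, step=500))  # 此时 β = 2.0
```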

优化|敛散性(1篇)

【1】 Robust Topology Optimization Using Variational Autoencoders 标题:基于变分自动编码器的鲁棒拓扑优化

作者:Rini Jasmine Gladstone,Mohammad Amin Nabian,Vahid Keshavarzzadeh,Hadi Meidani 机构:University of Illinois at Urbana-Champaign, NVIDIA, University of Utah 链接:https://arxiv.org/abs/2107.10661 摘要:拓扑优化是在一定的性能约束条件下,通过最小化代价函数,在设计域内寻找材料最优排布的过程。鲁棒拓扑优化(RTO)还考虑了输入不确定性的影响,在降低结构响应对输入不确定性敏感度的同时,使结构的平均性能达到最佳。利用有限元和蒙特卡罗采样进行RTO的计算代价高昂。在这项工作中,我们使用神经网络代理,通过基于代理的优化实现更快的求解,并建立一个变分自动编码器(VAE)把高维设计空间变换到低维空间。此外,有限元求解器也被神经网络代理所取代。另外,为了进一步促进设计探索,我们将搜索限制在一个子空间内,该子空间由不同输入不确定性实现下确定性拓扑优化问题的解组成。利用这些神经网络近似,形成了一种基于梯度的优化方法,以在低维设计子空间上最小化预测的目标函数。我们在两个柔度最小化问题上证明了所提方法的有效性,并且表明VAE能够从极少的训练数据中很好地学习设计特征,而把设计空间变换到低维潜在空间使问题的计算更高效。所得到的基于梯度的优化算法产生的最优设计,其鲁棒柔度低于训练集中观察到的设计。 摘要:Topology Optimization is the process of finding the optimal arrangement of materials within a design domain by minimizing a cost function, subject to some performance constraints. Robust topology optimization (RTO) also incorporates the effect of input uncertainties and produces a design with the best average performance of the structure while reducing the response sensitivity to input uncertainties. It is computationally expensive to carry out RTO using finite element and Monte Carlo sampling. In this work, we use neural network surrogates to enable a faster solution approach via surrogate-based optimization and build a Variational Autoencoder (VAE) to transform the high-dimensional design space into a low-dimensional one. Furthermore, finite element solvers will be replaced by a neural network surrogate. Also, to further facilitate the design exploration, we limit our search to a subspace, which consists of designs that are solutions to deterministic topology optimization problems under different realizations of input uncertainties. With these neural network approximations, a gradient-based optimization approach is formed to minimize the predicted objective function over the low dimensional design subspace. We demonstrate the effectiveness of the proposed approach on two compliance minimization problems and show that VAE performs well on learning the features of the design from minimal training data, and that converting the design space into a low dimensional latent space makes the problem computationally efficient. The resulting gradient-based optimization algorithm produces optimal designs with lower robust compliances than those observed in the training set.

预测|估计(4篇)

【1】 A Framework for Imbalanced Time-series Forecasting 标题:非平衡时间序列预测的一个框架

作者:Luis P. Silvestrin,Leonardos Pantiskas,Mark Hoogendoorn 机构:Computer Science Department, Vrije Universiteit Amsterdam, NL 链接:https://arxiv.org/abs/2107.10709 摘要:时间序列预测在许多领域中起着重要的作用。在深度学习算法进步的推动下,它已被用于预测风电场的发电量、股市波动或电机过热。在这些任务中,我们感兴趣的是准确地预测某些特定的时刻,这些时刻在数据集中往往代表性不足,从而导致一个被称为不平衡回归的问题。在文献中,虽然它被认为是一个具有挑战性的问题,但对如何在实际环境中处理这一问题的关注有限。在本文中,我们提出了一种分析时间序列预测问题的一般方法,聚焦于那些代表性不足的时刻以减少不平衡。我们的方法是在一家大型工业公司的案例研究基础上发展起来的,并以此为例说明该方法。 摘要:Time-series forecasting plays an important role in many domains. Boosted by the advances in Deep Learning algorithms, it has for instance been used to predict wind power for eolic energy production, stock market fluctuations, or motor overheating. In some of these tasks, we are interested in predicting accurately some particular moments which often are underrepresented in the dataset, resulting in a problem known as imbalanced regression. In the literature, while recognized as a challenging problem, limited attention has been devoted on how to handle the problem in a practical setting. In this paper, we put forward a general approach to analyze time-series forecasting problems focusing on those underrepresented moments to reduce imbalances. Our approach has been developed based on a case study in a large industrial company, which we use to exemplify the approach.
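下面是"关注代表性不足时刻"的一个示意做法:按目标值所在区间的经验密度给样本加权,使稀有(往往是极端)时刻在损失中占更大比重。分箱方式与权重形式均为假设,并非论文规定的方案:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.gamma(2.0, 1.0, size=2000)        # 假设的预测目标(右偏:高值稀少)

hist, edges = np.histogram(y, bins=20, density=True)
bin_idx = np.clip(np.digitize(y, edges) - 1, 0, len(hist) - 1)
density = np.maximum(hist[bin_idx], 1e-6)  # 每个样本所在区间的经验密度
weights = 1.0 / density                    # 稀有 -> 高权重
weights /= weights.mean()                  # 归一化,保持整体损失量级

print("最高值 1% 样本的平均权重:", weights[y > np.quantile(y, 0.99)].mean())
# 训练时把 weights 作为逐样本损失权重传入模型即可
```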

【2】 Tri-Branch Convolutional Neural Networks for Top-k Focused Academic Performance Prediction 标题:基于三分支卷积神经网络的Top-k聚焦学习成绩预测

作者:Chaoran Cui,Jian Zong,Yuling Ma,Xinhua Wang,Lei Guo,Meng Chen,Yilong Yin 链接:https://arxiv.org/abs/2107.10424 摘要:学业成绩预测是利用学生的相关信息来预测其未来的学业成绩,有利于个性化教学、学业预警等众多教育应用。本文通过分析学生的日常行为轨迹来解决这一问题,通过校园智能卡记录可以全面跟踪学生的日常行为轨迹。与以往的研究不同,我们提出了一种新的三分支CNN结构,它配备了逐行、逐列和逐深度的卷积与注意力操作,分别以端到端的方式捕捉学生行为的持续性、规律性和时间分布特征。此外,我们将学业成绩预测建模为top-$k$排序问题,并引入聚焦top-$k$的损失函数,以确保识别学业有风险学生的准确性。在一个大规模的真实世界数据集上进行了大量的实验,结果表明,我们的方法比最近提出的学习成绩预测方法有很大的优越性。为了便于复现,我们的代码已发布于https://github.com/ZongJ1111/Academic-Performance-Prediction。 摘要:Academic performance prediction aims to leverage student-related information to predict their future academic outcomes, which is beneficial to numerous educational applications, such as personalized teaching and academic early warning. In this paper, we address the problem by analyzing students' daily behavior trajectories, which can be comprehensively tracked with campus smartcard records. Different from previous studies, we propose a novel Tri-Branch CNN architecture, which is equipped with row-wise, column-wise, and depth-wise convolution and attention operations, to capture the characteristics of persistence, regularity, and temporal distribution of student behavior in an end-to-end manner, respectively. Also, we cast academic performance prediction as a top-$k$ ranking problem, and introduce a top-$k$ focused loss to ensure the accuracy of identifying academically at-risk students. Extensive experiments were carried out on a large-scale real-world dataset, and we show that our approach substantially outperforms recently proposed methods for academic performance prediction. For the sake of reproducibility, our codes have been released at https://github.com/ZongJ1111/Academic-Performance-Prediction.
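下面给出一个成对排序 hinge 损失的草图,作为 top-$k$ 聚焦目标的一种常见替代形式(要求有风险学生的预测分显著低于其他学生)。这只是示意,未必是论文提出的具体损失:

```python
import torch

def topk_pairwise_hinge(scores, labels, margin=1.0):
    """labels=1 表示有风险学生;要求其得分比无风险学生至少低 margin。"""
    pos = scores[labels == 1]              # 有风险(应排在低分段的 top-k)
    neg = scores[labels == 0]
    diff = pos.unsqueeze(1) - neg.unsqueeze(0) + margin   # (|pos|, |neg|)
    return torch.clamp(diff, min=0).mean()

torch.manual_seed(0)
scores = torch.randn(32, requires_grad=True)   # 模型输出的成绩预测分
labels = (torch.rand(32) < 0.2).long()          # 假设约 20% 学生处于风险
loss = topk_pairwise_hinge(scores, labels)
loss.backward()                                 # 损失可反向传播
print(loss.item())
```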

【3】 Rethinking Trajectory Forecasting Evaluation 标题:关于弹道预报评估的再思考

作者:Boris Ivanovic,Marco Pavone 机构:NVIDIA Research 备注:4 pages, 2 figures 链接:https://arxiv.org/abs/2107.10297 摘要:预测其他智能体的行为是现代机器人自主堆栈的一个组成部分,特别是在具有人机交互的安全关键场景中,例如自动驾驶。反过来,人们对轨道预测有了大量的兴趣和研究,产生了各种各样的方法。然而,所有工作的共同点是使用相同的基于精度的评估指标,例如位移误差和对数似然。虽然这些指标是信息性的,但它们是任务不可知的,被评估为相等的预测可能会导致截然不同的结果,例如在下游规划和决策中。在这项工作中,我们后退了一步,并对当前的轨迹预测指标进行了批判性评估,提出了任务感知指标,作为部署预测的系统中性能的更好度量。此外,我们还提供了一个这样一个指标的例子,将规划意识纳入现有的轨迹预测指标中。 摘要:Forecasting the behavior of other agents is an integral part of the modern robotic autonomy stack, especially in safety-critical scenarios with human-robot interaction, such as autonomous driving. In turn, there has been a significant amount of interest and research in trajectory forecasting, resulting in a wide variety of approaches. Common to all works, however, is the use of the same few accuracy-based evaluation metrics, e.g., displacement error and log-likelihood. While these metrics are informative, they are task-agnostic and predictions that are evaluated as equal can lead to vastly different outcomes, e.g., in downstream planning and decision making. In this work, we take a step back and critically evaluate current trajectory forecasting metrics, proposing task-aware metrics as a better measure of performance in systems where prediction is being deployed. We additionally present one example of such a metric, incorporating planning-awareness within existing trajectory forecasting metrics.

【4】 Predicting Power Electronics Device Reliability under Extreme Conditions with Machine Learning Algorithms 标题:基于机器学习算法的极端条件下电力电子器件可靠性预测

作者:Carlos Olivares,Raziur Rahman,Christopher Stankus,Jade Hampton,Andrew Zedwick,Moinuddin Ahmed 备注:11 pages, 8 figures. Submitted to IEEE Transactions on Device and Materials Reliability 链接:https://arxiv.org/abs/2107.10292 摘要:在极端环境下运行时,电力设备的可靠性是一个主要问题,因为这样做会缩短任何电力系统或传感基础设施的运行寿命。由于存在系统故障的可能性,设备在实现之前必须经过实验验证,这既昂贵又耗时。在本文中,我们利用机器学习算法来预测设备的可靠性,大大减少了进行实验的需要。为了训练模型,我们测试了来自10个不同制造商的224个电源设备。首先,我们描述了一种处理数据以进行建模的方法。基于内部测试数据,我们实现了各种ML模型,并观察到梯度增强和LSTM编解码网络等计算模型可以高精度地预测功率器件的故障。 摘要:Power device reliability is a major concern during operation under extreme environments, as doing so reduces the operational lifetime of any power system or sensing infrastructure. Due to a potential for system failure, devices must be experimentally validated before implementation, which is expensive and time-consuming. In this paper, we have utilized machine learning algorithms to predict device reliability, significantly reducing the need for conducting experiments. To train the models, we have tested 224 power devices from 10 different manufacturers. First, we describe a method to process the data for modeling purposes. Based on the in-house testing data, we implemented various ML models and observed that computational models such as Gradient Boosting and LSTM encoder-decoder networks can predict power device failure with high accuracy.

其他神经网络|深度学习|模型|建模(14篇)

【1】 Solving inverse problems with deep neural networks driven by sparse signal decomposition in a physics-based dictionary 标题:用稀疏信号分解驱动的深度神经网络求解物理词典中的反问题

作者:Gaetan Rensonnet,Louise Adam,Benoit Macq 机构: ICTEAM Institute, Université catholique de Louvain 备注:Accepted for publication in Workshop on Interpretable ML in Healthcare at International Conference on Machine Learning (ICML) 2021. 10 pages (including 3 for references), 4 figures 链接:https://arxiv.org/abs/2107.10657 摘要:深度神经网络(DNN)具有一种令人印象深刻的能力,可以反转非常复杂的模型,即从模型的输出中学习生成参数。经过训练后,DNN的正向传递通常比用于求解逆问题的传统优化方法快得多。然而,这是以较低的可解释性为代价的,这是大多数医学应用的一个基本限制。我们提出了一种求解一般反问题的方法,它结合了DNN的效率和传统分析方法的可解释性。首先将测量结果投影到一个密集的基于模型的响应字典中。由此产生的稀疏表示,然后馈送到DNN与一个由问题的物理结构驱动的快速参数学习。我们的方法可以处理生成性正演模型,这些模型的评估成本很高,并且在精度和计算时间上与完全学习的DNN表现出相似的性能,同时保持较高的可解释性和易于训练。以磁共振成像(MRI)为例,给出了基于模型的脑参数估计的具体结果。 摘要:Deep neural networks (DNN) have an impressive ability to invert very complex models, i.e. to learn the generative parameters from a model's output. Once trained, the forward pass of a DNN is often much faster than traditional, optimization-based methods used to solve inverse problems. This is however done at the cost of lower interpretability, a fundamental limitation in most medical applications. We propose an approach for solving general inverse problems which combines the efficiency of DNN and the interpretability of traditional analytical methods. The measurements are first projected onto a dense dictionary of model-based responses. The resulting sparse representation is then fed to a DNN with an architecture driven by the problem's physics for fast parameter learning. Our method can handle generative forward models that are costly to evaluate and exhibits similar performance in accuracy and computation time as a fully-learned DNN, while maintaining high interpretability and being easier to train. Concrete results are shown on an example of model-based brain parameter estimation from magnetic resonance imaging (MRI).
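下面是"先在基于物理模型的字典上求稀疏表示,再交给小网络学习生成参数"这一两步流程的假设性草图:稀疏分解用正交匹配追踪(OMP)示意,字典与网络均为随机占位:

```python
import numpy as np
import torch
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 200))            # 字典:200 个模型响应原子(64 维测量)
D /= np.linalg.norm(D, axis=0)

signal = D[:, [3, 57]] @ np.array([1.0, 0.5])   # 由 2 个原子合成的测量信号
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=5).fit(D, signal)
sparse_code = omp.coef_                    # 200 维稀疏表示

net = torch.nn.Sequential(                 # 由稀疏码回归生成参数(占位网络)
    torch.nn.Linear(200, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))
params = net(torch.tensor(sparse_code, dtype=torch.float32))
print(np.count_nonzero(sparse_code), params.shape)
```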

【2】 Lumen: A Machine Learning Framework to Expose Influence Cues in Text 标题:Lumen:一种揭示文本中影响线索的机器学习框架

作者:Hanyu Shi,Mirela Silva,Daniel Capecci,Luiz Giovanini,Lauren Czech,Juliana Fernandes,Daniela Oliveira 机构: Fernandes is with the Department of Advertising 链接:https://arxiv.org/abs/2107.10655 摘要:网络钓鱼和造谣是流行的社会工程攻击,攻击者总是在文本中应用影响线索,使其更吸引用户。我们介绍了Lumen,一个基于学习的框架,它揭示了文本中的影响线索:(i)说服,(ii)框架,(iii)情感,(iv)客观性/主观性,(v)内疚/责备,以及(vi)强调的使用。Lumen在一个新构建的3K文本数据集上训练,该数据集由造谣、网络钓鱼、超党派新闻和主流新闻组成。与其他学习模型的对比评估表明,Lumen和LSTM的F1-micro分数最好,但Lumen具有更好的可解释性。我们的研究结果凸显了ML在揭示文本中影响线索方面的前景,其目标是应用于自动标注工具,以提高基于人的检测的准确性,并降低用户被欺骗性在线内容蒙骗的可能性。 摘要:Phishing and disinformation are popular social engineering attacks with attackers invariably applying influence cues in texts to make them more appealing to users. We introduce Lumen, a learning-based framework that exposes influence cues in text: (i) persuasion, (ii) framing, (iii) emotion, (iv) objectivity/subjectivity, (v) guilt/blame, and (vi) use of emphasis. Lumen was trained with a newly developed dataset of 3K texts comprised of disinformation, phishing, hyperpartisan news, and mainstream news. Evaluation of Lumen in comparison to other learning models showed that Lumen and LSTM presented the best F1-micro score, but Lumen yielded better interpretability. Our results highlight the promise of ML to expose influence cues in text, towards the goal of application in automatic labeling tools to improve the accuracy of human-based detection and reduce the likelihood of users falling for deceptive online content.

【3】 Semiparametric Latent Topic Modeling on Consumer-Generated Corpora 标题:基于消费者生成语料库的半参数潜在主题建模

作者:Dominic B. Dayta,Erniel B. Barrios 机构:School of Statistics, University of the Philippines Diliman 链接:https://arxiv.org/abs/2107.10651 摘要:传统的主题建模方法普遍存在过拟合问题,并且在重建稀疏主题结构方面存在弱点。基于消费者生成语料库的动机,提出了半参数主题模型,该模型采用非负矩阵分解和半参数回归两步建模方法。该模型能够重建语料库中稀疏的主题结构,为预测进入语料库的新文档中的主题提供了一个生成模型。假设存在与主题相关的辅助信息,这种方法在语料库较小且词汇量有限的情况下,能更好地发现潜在的主题结构。在一个实际的消费者反馈语料库中,该模型还提供了可解释的和有用的主题定义,与其他方法产生的主题定义相当。 摘要:Legacy procedures for topic modelling have generally suffered problems of overfitting and a weakness towards reconstructing sparse topic structures. With motivation from a consumer-generated corpora, this paper proposes semiparametric topic model, a two-step approach utilizing nonnegative matrix factorization and semiparametric regression in topic modeling. The model enables the reconstruction of sparse topic structures in the corpus and provides a generative model for predicting topics in new documents entering the corpus. Assuming the presence of auxiliary information related to the topics, this approach exhibits better performance in discovering underlying topic structures in cases where the corpora are small and limited in vocabulary. In an actual consumer feedback corpus, the model also demonstrably provides interpretable and useful topic definitions comparable with those produced by other methods.
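下面用 sklearn 示意该两步法:先用 NMF 得到文档-主题矩阵,再用辅助变量对主题权重做回归。论文中第二步为半参数回归,此处以线性回归代替作演示;辅助信息为假设的占位数据:

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(100, 300)).astype(float)   # 文档-词频矩阵(占位)

nmf = NMF(n_components=5, init="nndsvda", random_state=0, max_iter=500)
W = nmf.fit_transform(X)             # 第一步:文档-主题权重 (100, 5)
H = nmf.components_                  # 主题-词分布 (5, 300)

aux = rng.normal(size=(100, 3))      # 与主题相关的辅助信息(假设)
reg = LinearRegression().fit(aux, W) # 第二步:用辅助变量预测主题权重
W_new = reg.predict(rng.normal(size=(1, 3)))   # 预测新文档的主题构成
print(W.shape, H.shape, W_new.round(2))
```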

【4】 HANT: Hardware-Aware Network Transformation 标题:HANT:硬件感知的网络转型

作者:Pavlo Molchanov,Jimmy Hall,Hongxu Yin,Jan Kautz,Nicolo Fusi,Arash Vahdat 机构:NVIDIA, Microsoft Research 链接:https://arxiv.org/abs/2107.10624 摘要:给定一个经过训练的网络,我们如何加速它以满足在特定硬件上部署的效率需求?常用的硬件感知网络压缩技术通过剪枝、核融合、量化和降低精度来解决这一问题。但是,这些方法不会改变底层网络的操作。在本文中,我们提出了硬件感知网络变换(HANT),它通过使用一种类似神经结构搜索的方法,将低效的操作替换为更高效的选择,从而加速网络。HANT分两个阶段来解决这个问题:第一阶段通过逐层特征图蒸馏,为教师模型的每一层训练大量候选替代操作;第二阶段,高效操作的组合选择被松弛为一个整数优化问题,可以在几秒钟内求解。我们通过核融合和量化来扩展HANT,进一步提高吞吐量。我们在加速EfficientNet系列上的实验结果表明,HANT可以将其加速3.6倍,在ImageNet数据集上的top-1精度下降<0.4%。当比较相同的延迟水平时,HANT可以将EfficientNet-B4加速到与EfficientNet-B1相同的延迟,同时准确率提高3%。我们检查了一个巨大的操作池(每层最多197个),并提供了对所选操作和最终体系结构的洞见。 摘要:Given a trained network, how can we accelerate it to meet efficiency needs for deployment on particular hardware? The commonly used hardware-aware network compression techniques address this question with pruning, kernel fusion, quantization and lowering precision. However, these approaches do not change the underlying network operations. In this paper, we propose hardware-aware network transformation (HANT), which accelerates a network by replacing inefficient operations with more efficient alternatives using a neural-architecture-search-like approach. HANT tackles the problem in two phases: In the first phase, a large number of alternative operations per every layer of the teacher model is trained using layer-wise feature map distillation. In the second phase, the combinatorial selection of efficient operations is relaxed to an integer optimization problem that can be solved in a few seconds. We extend HANT with kernel fusion and quantization to improve throughput even further. Our experimental results on accelerating the EfficientNet family show that HANT can accelerate them by up to 3.6x with <0.4% drop in the top-1 accuracy on the ImageNet dataset. When comparing the same latency level, HANT can accelerate EfficientNet-B4 to the same latency as EfficientNet-B1 while having 3% higher accuracy. We examine a large pool of operations, up to 197 per layer, and we provide insights into the selected operations and final architectures.

【5】 Improving the Authentication with Built-in Camera Protocol Using Built-in Motion Sensors: A Deep Learning Solution 标题:利用内置运动传感器改进基于内置摄像头的身份验证协议:一种深度学习解决方案

作者:Cezara Benegui,Radu Tudor Ionescu 链接:https://arxiv.org/abs/2107.10536 摘要:我们提出了一种基于内置运动传感器的深度学习方案,增强了基于内置摄像头(ABC)的认证协议。标准的ABC协议基于相机传感器的光响应不均匀性(PRNU)来识别移动设备,同时还考虑了基于QR码的元信息。在认证过程中,用户需要在屏幕上拍摄两张包含两个二维码的照片。提出的二维码图像还包含一个独特的探头信号,类似于相机指纹,由协议生成。在验证过程中,服务器计算接收到的照片的指纹,并在以下情况下对用户进行身份验证:(i)存在探测信号;(ii)嵌入QR码中的元数据正确;(iii)相机指纹正确识别。然而,当攻击者可以从外部照片计算出相机指纹时,协议容易受到伪造攻击,如我们的初步工作所示。在此背景下,我们提出了一种基于运动传感器数据的ABC协议的增强方案,作为附加的被动认证层。智能手机可以通过其运动传感器数据进行识别,与照片不同的是,用户从不在社交媒体平台上发布这些数据,因此比单独使用照片更安全。为此,我们将运动信号转化为深度神经网络产生的嵌入向量,将支持向量机应用于智能手机识别任务。我们对ABC协议的修改产生了一个多模式协议,该协议将我们之前工作中提出的攻击的错误接受率降低到0.07%。 摘要:We propose an enhanced version of the Authentication with Built-in Camera (ABC) protocol by employing a deep learning solution based on built-in motion sensors. The standard ABC protocol identifies mobile devices based on the photo-response non-uniformity (PRNU) of the camera sensor, while also considering QR-code-based meta-information. During authentication, the user is required to take two photos that contain two QR codes presented on a screen. The presented QR code images also contain a unique probe signal, similar to a camera fingerprint, generated by the protocol. During verification, the server computes the fingerprint of the received photos and authenticates the user if (i) the probe signal is present, (ii) the metadata embedded in the QR codes is correct and (iii) the camera fingerprint is identified correctly. However, the protocol is vulnerable to forgery attacks when the attacker can compute the camera fingerprint from external photos, as shown in our preliminary work. In this context, we propose an enhancement for the ABC protocol based on motion sensor data, as an additional and passive authentication layer. Smartphones can be identified through their motion sensor data, which, unlike photos, is never posted by users on social media platforms, thus being more secure than using photographs alone. To this end, we transform motion signals into embedding vectors produced by deep neural networks, applying Support Vector Machines for the smartphone identification task. Our change to the ABC protocol results in a multi-modal protocol that lowers the false acceptance rate for the attack proposed in our previous work to a percentage as low as 0.07%.

【6】 Evaluating the Quality of Finite Element Meshes with Machine Learning 标题:基于机器学习的有限元网格质量评价

作者:Joachim Sprave,Christian Drescher 机构:Mercedes-Benz AG 备注:14 pages, 4 figures 链接:https://arxiv.org/abs/2107.10507 摘要:本文讨论了结构力学仿真中有限元网格质量的评价问题。提出了一种基于专家评估数据训练的机器学习模型的应用。该任务被描述为一个分类问题,其中网格中每个元素的质量由其自身的属性和邻接结构决定。提出了一种特定领域的简单表示方法,使得现成的机器学习方法可以应用。来自工业实践的实验数据证明了有希望的结果。 摘要:This paper addresses the problem of evaluating the quality of finite element meshes for the purpose of structural mechanic simulations. It proposes the application of a machine learning model trained on data collected from expert evaluations. The task is characterised as a classification problem, where quality of each individual element in a mesh is determined by its own properties and adjacency structures. A domain-specific, yet simple representation is proposed such that off-the-shelf machine learning methods can be applied. Experimental data from industry practice demonstrates promising results.

【7】 External-Memory Networks for Low-Shot Learning of Targets in Forward-Looking-Sonar Imagery 标题:用于前视声纳图像目标近距离学习的外部记忆网络

作者:Isaac J. Sledge,Christopher D. Toole,Joseph A. Maestri,Jose C. Principe 机构: Computational NeuroEngineering Laboratory (CNEL), University of Florida 备注:Submitted to IEEE Journal of Oceanic Engineering 链接:https://arxiv.org/abs/2107.10504 摘要:提出了一种基于记忆的前视声纳(FLS)图像目标实时、数据高效分析框架。我们的框架首先使用受DenseNet启发的小型网络从图像中去除非判别性细节,这样做简化了后续分析,并使得模型能够从少量带标注的样本中泛化。然后将滤波后的图像级联输入一个新的基于NeuralRAM的卷积匹配网络NRMN,用于小样本目标识别。我们采用小型FlowNet(LFN)在局部时间尺度上对FLS图像进行对齐和配准。LFN使得目标标签可以在图像间进行一致性投票,并普遍提高了目标检测和识别率。我们使用真实世界的FLS图像评估我们的框架,这些图像包含多个宽泛的目标类别,具有较高的类内差异性和丰富的子类结构。我们表明,每类仅用10到30个样本的小样本学习,其表现与在每类数百个样本上训练的有监督深度网络相当;有效的零样本学习也是可能的。在去除干扰元素后,得益于NRMN的归纳迁移特性,系统实现了高性能。 摘要:We propose a memory-based framework for real-time, data-efficient target analysis in forward-looking-sonar (FLS) imagery. Our framework relies on first removing non-discriminative details from the imagery using a small-scale DenseNet-inspired network. Doing so simplifies ensuing analyses and permits generalizing from few labeled examples. We then cascade the filtered imagery into a novel NeuralRAM-based convolutional matching network, NRMN, for low-shot target recognition. We employ a small-scale FlowNet, LFN, to align and register FLS imagery across local temporal scales. LFN enables target label consensus voting across images and generally improves target detection and recognition rates. We evaluate our framework using real-world FLS imagery with multiple broad target classes that have high intra-class variability and rich sub-class structure. We show that few-shot learning, with anywhere from ten to thirty class-specific exemplars, performs similarly to supervised deep networks trained on hundreds of samples per class. Effective zero-shot learning is also possible. High performance is realized from the inductive-transfer properties of NRMNs when distractor elements are removed.

【8】 Learning Sparse Fixed-Structure Gaussian Bayesian Networks 标题:稀疏固定结构高斯贝叶斯网络的学习

作者:Arnab Bhattacharyya,Davin Choo,Rishikesh Gajjala,Sutanu Gayen,Yuhao Wang 机构:National University of Singapore, Indian Institute of Science, Bangalore 备注:30 pages, 11 figures 链接:https://arxiv.org/abs/2107.10450 摘要:高斯贝叶斯网络(又称线性高斯结构方程模型)广泛应用于连续变量间因果关系的建模。在这项工作中,我们研究学习一个固定结构的高斯贝叶斯网络、使其总变差距离误差有界的问题。我们分析了常用的逐节点最小二乘回归(LeastSquares),证明了它具有接近最优的样本复杂度。我们还研究了两个新算法:BatchAvgLeastSquares对每个节点上多批最小二乘解取平均,从而可以在批大小和批数之间进行插值,我们证明其同样具有接近最优的样本复杂度;CauchyEst对每个节点上多批线性系统的解取中值,我们证明其针对多叉树的特化版本CauchyEstTree具有接近最优的样本复杂度。实验结果表明,对于无污染的、可实现的数据,LeastSquares算法性能最好;但在存在污染或DAG设定错误的情况下,CauchyEst/CauchyEstTree和BatchAvgLeastSquares分别表现更好。 摘要:Gaussian Bayesian networks (a.k.a. linear Gaussian structural equation models) are widely used to model causal interactions among continuous variables. In this work, we study the problem of learning a fixed-structure Gaussian Bayesian network up to a bounded error in total variation distance. We analyze the commonly used node-wise least squares regression (LeastSquares) and prove that it has a near-optimal sample complexity. We also study a couple of new algorithms for the problem: - BatchAvgLeastSquares takes the average of several batches of least squares solutions at each node, so that one can interpolate between the batch size and the number of batches. We show that BatchAvgLeastSquares also has near-optimal sample complexity. - CauchyEst takes the median of solutions to several batches of linear systems at each node. We show that the algorithm specialized to polytrees, CauchyEstTree, has near-optimal sample complexity. Experimentally, we show that for uncontaminated, realizable data, the LeastSquares algorithm performs best, but in the presence of contamination or DAG misspecification, CauchyEst/CauchyEstTree and BatchAvgLeastSquares respectively perform better.
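下面是 LeastSquares(逐节点最小二乘)的最小演示:在已知图结构下,对每个节点用其父节点的数据做一次 OLS 回归即可估计边系数。图结构与数据为随机构造的占位示例:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# 结构:X0 -> X1 -> X2,真系数分别为 2.0 与 -1.5
X = np.zeros((n, 3))
X[:, 0] = rng.normal(size=n)
X[:, 1] = 2.0 * X[:, 0] + rng.normal(size=n)
X[:, 2] = -1.5 * X[:, 1] + rng.normal(size=n)
parents = {0: [], 1: [0], 2: [1]}          # 已知的固定图结构

coeffs = {}
for node, pa in parents.items():
    if pa:                                  # 对每个节点独立做一次 OLS
        beta, *_ = np.linalg.lstsq(X[:, pa], X[:, node], rcond=None)
        coeffs[node] = beta
print(coeffs)   # 约为 {1: [2.0], 2: [-1.5]}
```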

【9】 Improve Learning from Crowds via Generative Augmentation 标题:通过生成式增强改进众包学习

作者:Zhendong Chu,Hongning Wang 机构:Department of Computer Science, University of Virginia, Charlottesville, VA , USA 备注:KDD 2021 链接:https://arxiv.org/abs/2107.10449 摘要:众包为有监督机器学习提供了一种高效的标签收集模式。然而,为了控制标注成本,众包数据中的每个实例通常只由少数标注者标注。这带来了稀疏性问题,限制了基于此类数据训练的机器学习模型的质量。在本文中,我们研究如何利用数据增强来处理众包数据中的稀疏性。具体来说,我们建议通过增强原始稀疏标注来直接学习分类器。我们利用生成对抗网络实现了两条高质量增强原则:1)生成的标注应服从真实标注的分布,由判别器来度量;2)生成的标注应与真值标签具有高互信息,由一个辅助网络来度量。在三个真实数据集上的大量实验以及与一系列最新众包学习方法的比较,证明了我们数据增强框架的有效性,也显示了该算法在低预算众包场景中的普适潜力。 摘要:Crowdsourcing provides an efficient label collection schema for supervised machine learning. However, to control annotation cost, each instance in the crowdsourced data is typically annotated by a small number of annotators. This creates a sparsity issue and limits the quality of machine learning models trained on such data. In this paper, we study how to handle sparsity in crowdsourced data using data augmentation. Specifically, we propose to directly learn a classifier by augmenting the raw sparse annotations. We implement two principles of high-quality augmentation using Generative Adversarial Networks: 1) the generated annotations should follow the distribution of authentic ones, which is measured by a discriminator; 2) the generated annotations should have high mutual information with the ground-truth labels, which is measured by an auxiliary network. Extensive experiments and comparisons against an array of state-of-the-art learning from crowds methods on three real-world datasets proved the effectiveness of our data augmentation framework. It shows the potential of our algorithm for low-budget crowdsourcing in general.
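
摘要给出的两条增强原则可以理解为一个组合损失:判别器项让生成标注贴近真实标注分布,辅助网络项(InfoGAN式的变分下界)保持生成标注与真值标签的互信息。下面是一个示意性的PyTorch草图,网络结构与维度均为假设,仅用于说明损失的组合方式,并非论文实现。

```python
import torch
import torch.nn as nn

# 假设的极简网络:G 生成标注,D 判别真伪,Q 从生成标注预测真值标签
G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
D = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
Q = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 5))

bce, ce = nn.BCELoss(), nn.CrossEntropyLoss()
z = torch.randn(8, 16)              # 噪声输入
y_true = torch.randint(0, 5, (8,))  # 真值标签(示例)

fake_ann = G(z)
# 原则1:生成标注应骗过判别器(对抗项,贴近真实标注分布)
loss_adv = bce(D(fake_ann), torch.ones(8, 1))
# 原则2:生成标注应保留真值标签信息(互信息的变分下界)
loss_mi = ce(Q(fake_ann), y_true)
loss_G = loss_adv + loss_mi
loss_G.backward()
```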

【10】 Species Distribution Modeling for Machine Learning Practitioners: A Review 标题:机器学习实践者的物种分布建模研究进展

作者:Sara Beery,Elijah Cole,Joseph Parker,Pietro Perona,Kevin Winner 机构:California Institute of Technology, Yale University 备注:ACM COMPASS 2021 链接:https://arxiv.org/abs/2107.10400 摘要:保护科学依赖于对给定生态系统中实际状况的准确理解。那里生活着多少物种?种群构成如何?这种情况如何随时间变化?物种分布建模(SDM)旨在预测物种出现的空间(有时也包括时间)模式,即可能发现某一物种的位置。过去几年,将强大的机器学习工具应用于生态学中具有挑战性问题的兴趣激增。尽管SDM相当重要,但它在计算机科学界受到的关注相对较少。我们这项工作的目标是为计算机科学家提供阅读SDM文献、开发对生态学有用的基于ML的SDM算法所需的背景知识。特别是,我们将介绍SDM的关键概念和术语,回顾标准模型,讨论数据可用性,并强调技术挑战和陷阱。 摘要:Conservation science depends on an accurate understanding of what's happening in a given ecosystem. How many species live there? What is the makeup of the population? How is that changing over time? Species Distribution Modeling (SDM) seeks to predict the spatial (and sometimes temporal) patterns of species occurrence, i.e. where a species is likely to be found. The last few years have seen a surge of interest in applying powerful machine learning tools to challenging problems in ecology. Despite its considerable importance, SDM has received relatively little attention from the computer science community. Our goal in this work is to provide computer scientists with the necessary background to read the SDM literature and develop ecologically useful ML-based SDM algorithms. In particular, we introduce key SDM concepts and terminology, review standard models, discuss data availability, and highlight technical challenges and pitfalls.

【11】 Quantifying machine learning-induced overdiagnosis in sepsis 标题:机器学习诱导脓毒症过度诊断的量化研究

作者:Anna Fedyukova,Douglas Pires,Daniel Capurro 机构:The University of Melbourne, Melbourne, VIC, Australia 备注:3 pages, 1 figure, Joint KDD 2021 Health Day and 2021 KDD Workshop on Applied Data Science for Healthcare, August 14-18, 2021 链接:https://arxiv.org/abs/2107.10399 摘要:早期诊断技术(包括自我监测系统和可穿戴设备)的激增,加上这些技术在大量健康人群中的应用,可能显著加剧过度诊断问题。这可能导致不必要的后果,如医疗保健系统超载和过度治疗,并对健康人造成潜在危害。辅助诊断的机器学习工具虽然有望实现快速且更个性化的患者管理和筛查,但其出现也可能加剧这一问题。过度诊断的识别通常是事后的,需要经过长期(数年到数十年)且昂贵的随机对照试验才能得到证实。在本文中,我们提出了一种创新方法,使我们能够在预测模型开发过程中预先发现潜在的过度诊断病例。该方法基于预测模型给出的标签与聚类得到的医疗轨迹的组合,并以成人脓毒症作为测试案例。这是量化机器学习引起的过度诊断的最早尝试之一,我们相信它将作为进一步发展的平台,最终形成安全部署计算诊断工具的指南。 摘要:The proliferation of early diagnostic technologies, including self-monitoring systems and wearables, coupled with the application of these technologies on large segments of healthy populations may significantly aggravate the problem of overdiagnosis. This can lead to unwanted consequences such as overloading health care systems and overtreatment, with potential harms to healthy individuals. The advent of machine-learning tools to assist diagnosis -- while promising rapid and more personalised patient management and screening -- might contribute to this issue. The identification of overdiagnosis is usually post hoc and demonstrated after long periods (from years to decades) and costly randomised control trials. In this paper, we present an innovative approach that allows us to preemptively detect potential cases of overdiagnosis during predictive model development. This approach is based on the combination of labels obtained from a prediction model and clustered medical trajectories, using sepsis in adults as a test case. This is one of the first attempts to quantify machine-learning induced overdiagnosis and we believe it will serve as a platform for further development, leading to guidelines for safe deployment of computational diagnostic tools.
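
按摘要的思路(预测模型标签与聚类医疗轨迹相结合),一种可能的示意做法是:先对患者轨迹聚类,再统计每个簇中"模型判为阳性且病程温和"的比例,比例异常高的簇即为潜在过度诊断的候选。以下草图纯属示意,特征、标签与簇数都是假设,并非论文的实际流程。

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
traj_features = rng.normal(size=(500, 12))   # 假设:每位患者轨迹的12维汇总特征
model_positive = rng.random(500) < 0.3       # 假设:预测模型给出的脓毒症阳性标签
benign_course = rng.random(500) < 0.5        # 假设:轨迹显示病程温和、无需强干预

clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(traj_features)
for c in range(5):
    mask = clusters == c
    # 簇内"被判阳性且病程温和"的比例:越高越可能提示过度诊断
    rate = np.mean(model_positive[mask] & benign_course[mask])
    print(f"cluster {c}: potential-overdiagnosis rate = {rate:.2f}")
```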

【12】 Analytic Study of Families of Spurious Minima in Two-Layer ReLU Neural Networks 标题:两层ReLU神经网络中伪极小值族的解析研究

作者:Yossi Arjevani,Michael Field 机构:NYU, UCSB 链接:https://arxiv.org/abs/2107.10370 摘要:我们研究两层ReLU神经网络在标签由目标网络生成时,关于平方损失的拟合优化问题。我们利用其丰富的对称结构,发展了一套研究伪极小值族的新工具。与现有在极限情形下工作的方法不同,我们的技术直接处理有限输入维数 $d$ 和神经元数 $k$ 下的非凸损失景观,并提供解析的而非启发式的信息。特别地,我们推导了不同极小值处损失的解析估计,并证明在模 $O(d^{-1/2})$ 项的意义下,除了 $\Theta(d)$ 个随 $d$ 线性增长的特征值外,Hessian谱集中在小的正常数附近。我们进一步证明全局极小值与伪极小值处的Hessian谱在 $O(d^{-1/2})$ 阶上重合,从而对通过局部曲率论证统计泛化的能力提出了挑战。最后,我们的技术给出了临界点族从鞍点变为伪极小值的精确分数维数。这使得利用等变分岔理论的强大工具研究伪极小值的产生与湮灭成为可能。 摘要:We study the optimization problem associated with fitting two-layer ReLU neural networks with respect to the squared loss, where labels are generated by a target network. We make use of the rich symmetry structure to develop a novel set of tools for studying families of spurious minima. In contrast to existing approaches which operate in limiting regimes, our technique directly addresses the nonconvex loss landscape for a finite number of inputs $d$ and neurons $k$, and provides analytic, rather than heuristic, information. In particular, we derive analytic estimates for the loss at different minima, and prove that modulo $O(d^{-1/2})$-terms the Hessian spectrum concentrates near small positive constants, with the exception of $\Theta(d)$ eigenvalues which grow linearly with $d$. We further show that the Hessian spectrum at global and spurious minima coincide to $O(d^{-1/2})$-order, thus challenging our ability to argue about statistical generalization through local curvature. Lastly, our technique provides the exact fractional dimensionality at which families of critical points turn from saddles into spurious minima. This makes possible the study of the creation and the annihilation of spurious minima using powerful tools from equivariant bifurcation theory.

【13】 How to Tell Deep Neural Networks What We Know 标题:如何告诉深度神经网络我们所知道的

作者:Tirtharaj Dash,Sharad Chitlangia,Aditya Ahuja,Ashwin Srinivasan 机构: Department of Computer Science & Information Systems, Department of Electrical and Electronics Engineering, Anuradha and Prashanth Palakurthi Centre for AI Research (APPCAIR), BITS Pilani, K.K. Birla Goa Campus, Goa, India 备注:12 pages (full version); substantial overlap with arXiv:2103.00180 链接:https://arxiv.org/abs/2107.10295 摘要:我们对在用神经网络构建模型时纳入现有科学知识的各种方式进行了简要综述。领域知识的纳入不仅对构建科学助手有特殊意义,对许多其他需要借助人机协作来理解数据的领域同样重要。在许多这类场景中,如果能以足够精确的形式编码领域中的人类知识并提供给机器,基于机器的模型构建可能会显著受益。本文从三方面考察领域知识的纳入:对输入的改变、对损失函数的改变以及对深度网络结构的改变。这种分类只是为了便于阐述:在实践中,我们预期会组合使用这几类改变。在每一类中,我们描述了已被证明能显著改变网络性能的技术。 摘要:We present a short survey of ways in which existing scientific knowledge are included when constructing models with neural networks. The inclusion of domain-knowledge is of special interest not just to constructing scientific assistants, but also, many other areas that involve understanding data using human-machine collaboration. In many such instances, machine-based model construction may benefit significantly from being provided with human-knowledge of the domain encoded in a sufficiently precise form. This paper examines the inclusion of domain-knowledge by means of changes to: the input, the loss-function, and the architecture of deep networks. The categorisation is for ease of exposition: in practice we expect a combination of such changes will be employed. In each category, we describe techniques that have been shown to yield significant changes in network performance.
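
以文中"通过改变损失函数注入领域知识"一类为例:可以在数据损失之外加一项惩罚,约束预测满足已知的领域规律(如单调性)。下面是一个假设性的单调性约束草图,数据、网络与权重系数均为示例,并非该综述中的具体方法。

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
x = torch.linspace(0.0, 1.0, 64).unsqueeze(1)
y = 2.0 * x + 0.1 * torch.randn_like(x)   # 玩具数据:真实关系单调递增

pred = net(x)
data_loss = nn.functional.mse_loss(pred, y)
# 领域知识:输出应随 x 单调不减,惩罚相邻预测中的"下降"部分
mono_penalty = torch.relu(pred[:-1] - pred[1:]).mean()
loss = data_loss + 10.0 * mono_penalty     # 10.0 为假设的惩罚权重
loss.backward()
```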

【14】 Physics-informed neural networks for solving Reynolds-averaged Navier–Stokes equations 标题:求解雷诺平均Navier–Stokes方程的物理信息神经网络

作者:Hamidreza Eivazi,Mojtaba Tahani,Philipp Schlatter,Ricardo Vinuesa 机构:University of Tehran, Tehran, Iran, SimExFLOW, Engineering Mechanics, KTH Royal Institute of Technology, SE-, Stockholm, Sweden 备注:Proc. 13th ERCOFTAC Symp. on Engineering Turbulence Modeling and Measurements (ETMM13), Rhodes, Greece, September 15-17, 2021 链接:https://arxiv.org/abs/2107.10711 摘要:物理信息神经网络(PINNs)是求解和辨识偏微分方程(PDEs)的一种成功的机器学习方法。我们使用PINNs求解不可压缩湍流的雷诺平均Navier–Stokes(RANS)方程,不需要任何特定的湍流模型或假设,并且只取域边界上的数据。首先通过求解Falkner–Skan边界层,证明了PINNs在求解层流Navier–Stokes方程中的适用性。然后,我们应用PINNs模拟了四种湍流情况,即零压力梯度边界层、逆压力梯度边界层以及NACA4412翼型和周期性山丘上的湍流流动。结果表明,PINNs对强压力梯度层流具有很好的适用性,预测误差小于1%。对于湍流流动,即使是雷诺应力分量的模拟结果也有很好的精度。 摘要:Physics-informed neural networks (PINNs) are successful machine-learning methods for the solution and identification of partial differential equations (PDEs). We employ PINNs for solving the Reynolds-averaged Navier–Stokes (RANS) equations for incompressible turbulent flows without any specific model or assumption for turbulence, and by taking only the data on the domain boundaries. We first show the applicability of PINNs for solving the Navier–Stokes equations for laminar flows by solving the Falkner–Skan boundary layer. We then apply PINNs for the simulation of four turbulent-flow cases, i.e., zero-pressure-gradient boundary layer, adverse-pressure-gradient boundary layer, and turbulent flows over a NACA4412 airfoil and the periodic hill. Our results show the excellent applicability of PINNs for laminar flows with strong pressure gradients, where predictions with less than 1% error can be obtained. For turbulent flows, we also obtain very good accuracy on simulation results even for the Reynolds-stress components.
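
PINN 的核心是用自动微分把 PDE 残差写进损失函数。下面以一维玩具方程 u''(x)=f(x)(制造解 u=sin(πx))为例给出极简草图;这并不是论文中 RANS 方程的实现,网络规模与配点数均为假设。

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
x = torch.rand(128, 1, requires_grad=True)           # 域内随机配点

u = net(x)
du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
f = -torch.sin(torch.pi * x) * torch.pi**2            # u=sin(pi x) 对应的右端项
pde_loss = ((d2u - f) ** 2).mean()                    # PDE 残差项

xb = torch.tensor([[0.0], [1.0]])                     # 边界条件:u(0)=u(1)=0
bc_loss = (net(xb) ** 2).mean()
loss = pde_loss + bc_loss
loss.backward()                                       # 随后用任意优化器迭代即可
```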

其他(12篇)

【1】 Neural Variational Gradient Descent 标题:神经变分梯度下降

作者:Lauro Langosco di Langosco,Vincent Fortuin,Heiko Strathmann 链接:https://arxiv.org/abs/2107.10731 摘要:基于粒子的近似贝叶斯推断方法,如Stein变分梯度下降(SVGD),兼具采样方法的灵活性和收敛性保证,以及变分推断的计算优势。在实践中,SVGD依赖于选择合适的核函数,这会影响其对目标分布建模的能力;该问题颇具挑战性,目前只有启发式解法。我们提出了神经变分梯度下降(NVGD),其思路是用一个深度神经网络来参数化Stein差异的见证函数,该网络的参数与推断过程并行学习,从而免去了任何核选择的必要。我们在常用的合成推断问题、真实世界的贝叶斯线性回归以及贝叶斯神经网络推断上对该方法进行了实证评估。 摘要:Particle-based approximate Bayesian inference approaches such as Stein Variational Gradient Descent (SVGD) combine the flexibility and convergence guarantees of sampling methods with the computational benefits of variational inference. In practice, SVGD relies on the choice of an appropriate kernel function, which impacts its ability to model the target distribution -- a challenging problem with only heuristic solutions. We propose Neural Variational Gradient Descent (NVGD), which is based on parameterizing the witness function of the Stein discrepancy by a deep neural network whose parameters are learned in parallel to the inference, mitigating the necessity to make any kernel choices whatsoever. We empirically evaluate our method on popular synthetic inference problems, real-world Bayesian linear regression, and Bayesian neural network inference.
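
作为参照,经典 SVGD 的粒子更新(RBF 核)可以写成如下草图;NVGD 的不同之处在于把这里由核给出的见证函数换成与推断并行训练的神经网络。带宽、步长与迭代次数均为示例取值。

```python
import numpy as np

def svgd_step(particles, grad_logp, h=1.0, eps=0.1):
    """一步 SVGD 更新:particles 形状 (n,d);grad_logp 返回 log p 的梯度。"""
    n = particles.shape[0]
    diff = particles[:, None, :] - particles[None, :, :]      # (n,n,d)
    sq = (diff ** 2).sum(-1)
    K = np.exp(-sq / (2 * h))                                  # RBF 核矩阵
    gradK = -diff / h * K[:, :, None]                          # 核对第一个参数的梯度
    phi = (K @ grad_logp(particles) + gradK.sum(0)) / n        # 驱动项 + 排斥项
    return particles + eps * phi

# 目标分布:标准正态,grad log p(x) = -x
rng = np.random.default_rng(0)
x = rng.normal(3.0, 0.5, size=(100, 2))                        # 初始粒子偏离目标
for _ in range(200):
    x = svgd_step(x, lambda p: -p)
print(x.mean(0), x.std(0))                                     # 应大致趋近 0 与 1
```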

【2】 Fast Low-Rank Tensor Decomposition by Ridge Leverage Score Sampling 标题:基于岭杠杆得分抽样的快速低秩张量分解

作者:Matthew Fahrbach,Mehrdad Ghadiri,Thomas Fu 机构:Google Research, Georgia Tech 备注:29 pages, 1 figure 链接:https://arxiv.org/abs/2107.10654 摘要:低秩张量分解是低秩矩阵逼近的推广,是在高维数据中发现低维结构的有力技术。在本文中,我们研究Tucker分解,并使用随机数值线性代数中称为岭杠杆分数(ridge leverage scores)的工具,来加速广泛使用的交替最小二乘(ALS)算法中的核心张量更新步骤。更新核心张量是ALS的一个严重瓶颈,它是一个高度结构化的岭回归问题,其设计矩阵是各因子矩阵的Kronecker积。我们展示了如何使用近似岭杠杆分数为任意岭回归问题构造一个草图实例,使得草图问题的解向量是原始实例的$(1+\varepsilon)$-近似。此外,我们证明经典杠杆分数足以作为近似,从而可以利用Kronecker结构,在主要取决于秩和草图参数的时间内(即输入张量规模的次线性时间)更新核心张量。我们还给出了从设计矩阵中删除行时(例如张量存在缺失条目时)岭杠杆分数的上界,并在合成数据和真实数据上验证了我们的近似岭回归算法对大型低秩Tucker分解的有效性。 摘要:Low-rank tensor decomposition generalizes low-rank matrix approximation and is a powerful technique for discovering low-dimensional structure in high-dimensional data. In this paper, we study Tucker decompositions and use tools from randomized numerical linear algebra called ridge leverage scores to accelerate the core tensor update step in the widely-used alternating least squares (ALS) algorithm. Updating the core tensor, a severe bottleneck in ALS, is a highly-structured ridge regression problem where the design matrix is a Kronecker product of the factor matrices. We show how to use approximate ridge leverage scores to construct a sketched instance for any ridge regression problem such that the solution vector for the sketched problem is a $(1+\varepsilon)$-approximation to the original instance. Moreover, we show that classical leverage scores suffice as an approximation, which then allows us to exploit the Kronecker structure and update the core tensor in time that depends predominantly on the rank and the sketching parameters (i.e., sublinear in the size of the input tensor). We also give upper bounds for ridge leverage scores as rows are removed from the design matrix (e.g., if the tensor has missing entries), and we demonstrate the effectiveness of our approximate ridge regression algorithm for large, low-rank Tucker decompositions on both synthetic and real-world data.
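
岭杠杆分数通常定义为 tau_i = a_i^T (A^T A + λI)^(-1) a_i,按其采样并重加权即可得到岭回归的草图实例。下面是一个直接计算的小规模示意(未利用论文中的Kronecker结构,矩阵规模与参数均为假设)。

```python
import numpy as np

def ridge_leverage_scores(A, lam):
    """逐行计算 tau_i = a_i^T (A^T A + lam*I)^{-1} a_i。"""
    G = A.T @ A + lam * np.eye(A.shape[1])
    return np.einsum("ij,jk,ik->i", A, np.linalg.inv(G), A)

def leverage_sample(A, b, lam, m, rng):
    """按岭杠杆分数有放回采样 m 行并重加权,得到草图 (SA, Sb)。"""
    tau = ridge_leverage_scores(A, lam)
    p = tau / tau.sum()
    idx = rng.choice(len(A), size=m, p=p)
    w = 1.0 / np.sqrt(m * p[idx])            # 无偏重加权
    return A[idx] * w[:, None], b[idx] * w

rng = np.random.default_rng(0)
A = rng.normal(size=(2000, 10))
b = A @ rng.normal(size=10) + 0.01 * rng.normal(size=2000)
SA, Sb = leverage_sample(A, b, lam=1.0, m=200, rng=rng)
# 在草图上解岭回归,解应接近原问题的解
x_sketch = np.linalg.solve(SA.T @ SA + 1.0 * np.eye(10), SA.T @ Sb)
print(x_sketch)
```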

【3】 Análisis de Canasta de mercado en supermercados mediante mapas auto-organizados 标题:基于自组织映射(SOM)的超市购物篮分析

作者:Joaquín Cordero,Alfredo Bolt,Mauricio Valle 备注:18 pages, in Spanish, 7 Figures, 5 tables, Research 链接:https://arxiv.org/abs/2107.10647 摘要:导言:智利首都西区的一家重要连锁超市需要获取关键信息来做出决策。这些信息存在于数据库中,但由于其复杂性和数量庞大而难以可视化,因此需要进行处理。方法:应用Kohonen的SOM方法,基于人工神经网络开发了一种算法。为此必须遵循若干关键步骤,例如先进行数据挖掘以完成过滤,之后仅使用相关数据进行购物篮分析。过滤信息后,还需完成数据准备。数据准备完成后,我们搭建了Python编程环境并使其适配样本数据,随后根据测试结果设定参数并训练SOM。结果:SOM将最常被购买的商品映射到拓扑上相邻的位置,从而得到商品之间的关联关系,可据此构成促销、组合包和捆绑销售,供零售经理参考,因为这些关联是SOM在客户真实交易数据上训练得到的。结论:在此基础上,我们就常见购物篮向提供本研究数据的连锁超市提出了建议。 摘要:Introduction: An important chain of supermarkets in the western zone of the capital of Chile, needs to obtain key information to make decisions, this information is available in the databases but needs to be processed due to the complexity and quantity of information which becomes difficult to visualize. Method: For this purpose, an algorithm was developed using artificial neural networks applying Kohonen's SOM method. To carry it out, certain key procedures must be followed to develop it, such as data mining that will be responsible for filtering and then use only the relevant data for market basket analysis. After filtering the information, the data must be prepared. After data preparation, we prepared the Python programming environment to adapt it to the sample data, then proceed to train the SOM with its parameters set after test results. Result: the result of the SOM obtains the relationship between the products that were most purchased by positioning them topologically close, to form promotions, packs and bundles for the retail manager to take into consideration, because these relationships were obtained as a result of the SOM training with the real transactions of the clients. Conclusion: Based on this, recommendations on frequent shopping baskets have been made to the supermarket chain that provided the data used in the research
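
摘要方法部分的 SOM 训练可以用如下的极简 numpy 草图说明;购物篮0/1向量、网格大小与学习率调度都是示例性假设,并非该研究的实际实现。

```python
import numpy as np

def train_som(data, grid=(6, 6), epochs=200, lr0=0.5, sigma0=2.0, seed=0):
    """极简 SOM:data 形状 (n,d),返回 (grid_h, grid_w, d) 的权重图。"""
    rng = np.random.default_rng(seed)
    h, w = grid
    W = rng.random((h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), -1)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)                  # 学习率线性衰减
        sigma = sigma0 * (1 - t / epochs) + 0.5      # 邻域半径逐渐收窄
        for x in data[rng.permutation(len(data))]:
            bmu = np.unravel_index(np.argmin(((W - x) ** 2).sum(-1)), (h, w))
            dist2 = ((coords - np.array(bmu)) ** 2).sum(-1)
            theta = np.exp(-dist2 / (2 * sigma**2))  # 高斯邻域函数
            W += lr * theta[:, :, None] * (x - W)
    return W

# 假设的购物篮数据:每行是一笔交易的商品0/1向量
rng = np.random.default_rng(1)
baskets = (rng.random((300, 20)) < 0.15).astype(float)
som = train_som(baskets)
print(som.shape)  # (6, 6, 20):常被共同购买的商品会落在拓扑相邻的单元
```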

【4】 MobileCharger: an Autonomous Mobile Robot with Inverted Delta Actuator for Robust and Safe Robot Charging 标题:MobileCharger:一种具有倒置Delta执行器的自主移动机器人,可实现强健安全的机器人充电

作者:Iaroslav Okunevich,Daria Trinitatova,Pavel Kopanev,Dzmitry Tsetserukou 备注:Accepted to 26th International Conference on Emerging Technologies and Factory Automation (ETFA) 2021, IEEE copyright, 8 pages, 12 figures 链接:https://arxiv.org/abs/2107.10585 摘要:MobileCharger是一种新型移动充电机器人,配备倒置Delta执行机构,可在两个移动机器人之间实现安全可靠的能量传输。基于RGB-D摄像头的计算机视觉系统利用卷积神经网络(CNN)检测目标移动机器人上的电极。嵌入式高保真触觉传感器基于接触面上的压力数据,利用CNN估计充电机构电极与主机器人电极之间的错位。因此,所开发的视觉-触觉感知系统可以精确定位执行器的末端执行器,并确保两个机器人电极之间的可靠连接。实验结果表明,CNN检测电极的平均精度较高(84.2%)。基于CNN的电极搜索算法的试验成功率达83%,平均执行时间为60秒。MobileCharger有望带来新水平的充电系统,并提高自主移动机器人的普及率。 摘要:MobileCharger is a novel mobile charging robot with an Inverted Delta actuator for safe and robust energy transfer between two mobile robots. The RGB-D camera-based computer vision system allows to detect the electrodes on the target mobile robot using a convolutional neural network (CNN). The embedded high-fidelity tactile sensors are applied to estimate the misalignment between the electrodes on the charger mechanism and the electrodes on the main robot using CNN based on pressure data on the contact surfaces. Thus, the developed vision-tactile perception system allows precise positioning of the end effector of the actuator and ensures a reliable connection between the electrodes of the two robots. The experimental results showed high average precision (84.2%) for electrode detection using CNN. The percentage of successful trials of the CNN-based electrode search algorithm reached 83% and the average execution time accounted for 60 s. MobileCharger could introduce a new level of charging systems and increase the prevalence of autonomous mobile robots.

【5】 A Proactive Management Scheme for Data Synopses at the Edge 标题:一种边缘数据概要的主动管理方案

作者:Kostas Kolomvatsos,Christos Anagnostopoulos 机构:Department of Informatics and Telecommunications, University of Thessaly, Lamia, Greece; School of Computing Science, University of Glasgow, Glasgow, UK 链接:https://arxiv.org/abs/2107.10558 摘要:物联网(IoT)提供的基础设施与边缘计算(EC)生态系统中的众多处理节点的结合开辟了支持智能应用的新途径。当物联网设备收集的大量数据通过网络传输到边缘节点时,就可以提供这样的应用。可以对所讨论的数据执行各种处理活动,并且EC节点之间的多个协作机会可以促进期望任务的执行。为了支持边缘节点之间的有效交互,需要共享关于地理分布数据的知识。显然,大量数据的迁移会损害网络的稳定性和性能。在本文中,我们建议在EC节点之间交换数据概要而不是真实数据,以便为它们提供关于拥有相似数据的对等节点的必要知识。在考虑数据/服务迁移和任务卸载等决策时,这些知识可能很有价值。我们描述了一个连续推理模型,该模型建立可用数据集的时间相似性图,使节点了解其对等节点中数据的演化。我们通过一个基于无监督机器学习模型的智能相似性提取方案来支持所提出的决策机制,同时将其与表示所谓差异量趋势的统计度量相结合。我们的模型可以揭示交换概要中的差异,并提供一个数据集相似性图,该图成为支持所需处理活动的适当知识库。我们提出了所考虑的问题并给出了解决方案,同时通过大量实验揭示了其优缺点。 摘要:The combination of the infrastructure provided by the Internet of Things (IoT) with numerous processing nodes present at the Edge Computing (EC) ecosystem opens up new pathways to support intelligent applications. Such applications can be provided upon humongous volumes of data collected by IoT devices being transferred to the edge nodes through the network. Various processing activities can be performed on the discussed data and multiple collaborative opportunities between EC nodes can facilitate the execution of the desired tasks. In order to support an effective interaction between edge nodes, the knowledge about the geographically distributed data should be shared. Obviously, the migration of large amounts of data will harm the stability of the network and its performance. In this paper, we recommend the exchange of data synopses than real data between EC nodes to provide them with the necessary knowledge about peer nodes owning similar data. This knowledge can be valuable when considering decisions such as data/service migration and tasks offloading. We describe a continuous reasoning model that builds a temporal similarity map of the available datasets so that nodes understand the evolution of data in their peers. We support the proposed decision making mechanism through an intelligent similarity extraction scheme based on an unsupervised machine learning model, and, at the same time, combine it with a statistical measure that represents the trend of the so-called discrepancy quantum. Our model can reveal the differences in the exchanged synopses and provide a datasets similarity map which becomes the appropriate knowledge base to support the desired processing activities. We present the problem under consideration and suggest a solution for that, while, at the same time, we reveal its advantages and disadvantages through a large number of experiments.
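
"交换概要而非原始数据"的一个可能示意:每个边缘节点只上报本地数据的直方图概要,节点两两之间用概要的余弦相似度构建相似性图,作为迁移/卸载决策的知识库。分箱数与相似度度量均为示例性假设,并非论文采用的具体概要结构。

```python
import numpy as np

def synopsis(values, bins, value_range=(0.0, 1.0)):
    """直方图概要:只需传输 bins 个归一化计数,而非全部原始数据。"""
    h, _ = np.histogram(values, bins=bins, range=value_range)
    return h / max(h.sum(), 1)

def similarity_map(synopses):
    """节点两两余弦相似度矩阵,可作为数据/任务迁移决策的知识库。"""
    S = np.stack(synopses)
    S = S / np.maximum(np.linalg.norm(S, axis=1, keepdims=True), 1e-12)
    return S @ S.T

rng = np.random.default_rng(0)
nodes = [rng.beta(a, 2.0, size=1000) for a in (1.0, 1.2, 5.0)]  # 三个节点的本地数据
syns = [synopsis(v, bins=16) for v in nodes]
print(np.round(similarity_map(syns), 2))  # 节点0与1应明显比各自与节点2更相似
```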

【6】 Out of the Shadows: Analyzing Anonymous' Twitter Resurgence during the 2020 Black Lives Matter Protests 标题:走出阴影:分析"匿名者"在2020年"黑人的命也是命"抗议期间的Twitter复兴

作者:Keenan Jones,Jason R. C. Nurse,Shujun Li 机构:Institute of Cyber Security for Society (iCSS) & School of Computing, University of Kent, UK 备注:12 pages, 9 figures, 3 tables. Accepted for publication in the proceedings of the sixteenth International AAAI Conference on Web and Social Media 链接:https://arxiv.org/abs/2107.10554 摘要:近来,曾经声名显赫的黑客行动主义组织"匿名者"(Anonymous)几乎没有什么值得注意的活动。该组织曾对大型企业和政府发动行动主义网络攻击,在2013年关键成员被捕后似乎已经分崩离析。然而,有报道称,在乔治·弗洛伊德被杀后发生的大规模"黑人的命也是命"(BLM)抗议中,该组织又回来了。为了检验这种明显的复苏,我们对Twitter上的"匿名者"关联账户进行了大规模研究。为此,我们首先使用机器学习识别出一个由33000多个"匿名者"账户组成的重要网络。通过对从这些账户收集的推文进行主题建模,我们发现了对BLM相关主题持续关注的证据。随后我们对聚焦这些主题的推文进行情感分析,发现该群体采取了一致立场:积极的推文通常用于表达对BLM的支持,消极的推文通常用于批评警方的行动。最后,我们检查了网络中自动化的存在,在大多数"匿名者"账户中识别出类机器人行为的迹象。这些发现表明,虽然该组织在抗议期间出现了复苏,但机器人活动可能夸大了这种复苏的程度。 摘要:Recently, there had been little notable activity from the once prominent hacktivist group, Anonymous. The group, responsible for activist-based cyber attacks on major businesses and governments, appeared to have fragmented after key members were arrested in 2013. In response to the major Black Lives Matter (BLM) protests that occurred after the killing of George Floyd, however, reports indicated that the group was back. To examine this apparent resurgence, we conduct a large-scale study of Anonymous affiliates on Twitter. To this end, we first use machine learning to identify a significant network of more than 33,000 Anonymous accounts. Through topic modelling of tweets collected from these accounts, we find evidence of sustained interest in topics related to BLM. We then use sentiment analysis on tweets focused on these topics, finding evidence of a united approach amongst the group, with positive tweets typically being used to express support towards BLM, and negative tweets typically being used to criticize police actions. Finally, we examine the presence of automation in the network, identifying indications of bot-like behavior across the majority of Anonymous accounts. These findings show that whilst the group has seen a resurgence during the protests, bot activity may be responsible for exaggerating the extent of this resurgence.

【7】 Efficient Neural Causal Discovery without Acyclicity Constraints 标题:无需无环性约束的高效神经因果发现

作者:Phillip Lippe,Taco Cohen,Efstratios Gavves 机构:University of Amsterdam, QUVA lab, Qualcomm AI Research 备注:8th Causal Inference Workshop at UAI 2021 (contributed talk). 34 pages, 12 figures 链接:https://arxiv.org/abs/2107.10483 摘要:利用观测数据和干预数据学习因果图模型的结构是许多科学领域的一个基本问题。一个很有前景的方向是基于评分方法的连续优化,它以数据驱动的方式高效学习因果图。然而,迄今为止,这类方法要么需要约束优化来强制无环性,要么缺乏收敛性保证。本文提出ENCO,一种利用观测与干预数据学习有向无环因果图的高效结构学习方法。ENCO将图搜索表述为对各条边独立存在似然的优化,并将边的方向建模为单独的参数。因此,我们可以在温和条件下给出ENCO的收敛性保证,而无需对评分函数施加无环性约束。实验表明,ENCO能够高效地恢复具有数百个节点的图(比此前可行的规模大一个数量级),同时处理确定性变量和潜在混杂因素。 摘要:Learning the structure of a causal graphical model using both observational and interventional data is a fundamental problem in many scientific fields. A promising direction is continuous optimization for score-based methods, which efficiently learn the causal graph in a data-driven manner. However, to date, those methods require constrained optimization to enforce acyclicity or lack convergence guarantees. In this paper, we present ENCO, an efficient structure learning method for directed, acyclic causal graphs leveraging observational and interventional data. ENCO formulates the graph search as an optimization of independent edge likelihoods, with the edge orientation being modeled as a separate parameter. Consequently, we can provide convergence guarantees of ENCO under mild conditions without constraining the score function with respect to acyclicity. In experiments, we show that ENCO can efficiently recover graphs with hundreds of nodes, an order of magnitude larger than what was previously possible, while handling deterministic variables and latent confounders.
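
ENCO 把每条边拆成"存在"与"方向"两组参数,图搜索即对这些连续参数的优化。下面是一个极简的参数化与图采样草图(仅示意这种拆分方式,并非官方实现;维度与初始化均为假设)。

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
gamma = rng.normal(size=(d, d))   # 边"存在"的 logits
theta = rng.normal(size=(d, d))   # 边"方向"的 logits:theta[i,j] 偏向 i->j

def sample_graph(gamma, theta, rng):
    """按 sigmoid(gamma) 采样边是否存在,按 sigmoid(theta) 采样方向。"""
    p_exist = 1 / (1 + np.exp(-gamma))
    p_dir = 1 / (1 + np.exp(-theta))
    A = np.zeros((d, d), dtype=int)
    for i in range(d):
        for j in range(i + 1, d):
            if rng.random() < p_exist[i, j]:
                if rng.random() < p_dir[i, j]:
                    A[i, j] = 1      # i -> j
                else:
                    A[j, i] = 1      # j -> i
    return A

# 训练时可对采样图上的似然做梯度估计,交替更新 gamma 与 theta
print(sample_graph(gamma, theta, rng))
```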

【8】 Shedding some light on Light Up with Artificial Intelligence 标题:用人工智能求解点灯谜题(Light Up)

作者:Libo Sun,James Browning,Roberto Perera 机构:Samuel Ginn College of Engineering, Auburn University, Auburn, AL USA, Department of Aerospace Engineering 备注:14 pages, 16 figures, for associated codes, see <this https URL> 链接:https://arxiv.org/abs/2107.10429 摘要:点灯谜题(Light-Up),又称AKARI谜题,此前从未用现代人工智能(AI)方法求解过。目前,应用最广泛的自主求解技术是进化算法。本项目旨在应用新的AI技术,以更快、计算上更高效的方式求解点灯谜题。所探索的求解算法包括爬山法、模拟退火、前馈神经网络(FNN)和卷积神经网络(CNN)。我们为爬山和模拟退火各开发了两个版本,分别使用2个动作(添加和移除灯泡)与3个动作(添加、移除或将灯泡移动到不同的单元格)。在3动作设定下,爬山和模拟退火的精度都更高。模拟退火明显优于爬山法、FNN、CNN和进化算法,在30种不同的棋盘配置上达到了100%的精度。最后,虽然FNN和CNN算法精度较低,但其计算时间明显快于其余算法。本项目的GitHub存储库位于https://github.com/rperera12/AKARI-LightUp-GameSolver-with-DeepNeuralNetworks-and-HillClimb-or-SimulatedAnnealing. 摘要:The Light-Up puzzle, also known as the AKARI puzzle, has never been solved using modern artificial intelligence (AI) methods. Currently, the most widely used computational technique to autonomously develop solutions involve evolution theory algorithms. This project is an effort to apply new AI techniques for solving the Light-up puzzle faster and more computationally efficient. The algorithms explored for producing optimal solutions include hill climbing, simulated annealing, feed-forward neural network (FNN), and convolutional neural network (CNN). Two algorithms were developed for hill climbing and simulated annealing using 2 actions (add and remove light bulb) versus 3 actions(add, remove, or move light-bulb to a different cell). Both hill climbing and simulated annealing algorithms showed a higher accuracy for the case of 3 actions. The simulated annealing showed to significantly outperform hill climbing, FNN, CNN, and an evolutionary theory algorithm achieving 100% accuracy in 30 unique board configurations. Lastly, while FNN and CNN algorithms showed low accuracies, computational times were significantly faster compared to the remaining algorithms. The GitHub repository for this project can be found at https://github.com/rperera12/AKARI-LightUp-GameSolver-with-DeepNeuralNetworks-and-HillClimb-or-SimulatedAnnealing.
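
摘要中"3动作"模拟退火的骨架大致如下;这里用一个简化的玩具能量函数(灯泡互照冲突数加上与目标灯数的偏差)代替完整的点灯谜题规则,棋盘大小与退火参数均为假设,并非该项目仓库中的代码。

```python
import math
import random

def anneal(state, energy, neighbors, T0=2.0, cooling=0.995, steps=20000, seed=0):
    """通用模拟退火骨架:energy 越低越好,neighbors 随机产生邻域状态。"""
    rng = random.Random(seed)
    cur, cur_e, T = state, energy(state), T0
    for _ in range(steps):
        nxt = neighbors(cur, rng)
        e = energy(nxt)
        # 更优解直接接受;较差解以 exp(-Δ/T) 的概率接受
        if e <= cur_e or rng.random() < math.exp((cur_e - e) / T):
            cur, cur_e = nxt, e
        T *= cooling
        if cur_e == 0:
            break
    return cur, cur_e

def energy(bulbs):
    """玩具能量:同行或同列互相"照射"的灯对数,加上与目标灯数5的偏差。"""
    conflicts = sum(r1 == r2 or c1 == c2
                    for i, (r1, c1) in enumerate(bulbs)
                    for (r2, c2) in bulbs[i + 1:])
    return conflicts + abs(len(bulbs) - 5)

def neighbors(bulbs, rng):
    b = list(bulbs)
    move = rng.choice(["add", "remove", "move"])      # 摘要中的3个动作
    if move == "add" or not b:
        b.append((rng.randrange(5), rng.randrange(5)))
    elif move == "remove":
        b.pop(rng.randrange(len(b)))
    else:
        b[rng.randrange(len(b))] = (rng.randrange(5), rng.randrange(5))
    return b

print(anneal([(0, 0), (0, 1), (1, 1)], energy, neighbors))  # 期望能量降到 0
```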

【9】 On the Use of Time Series Kernel and Dimensionality Reduction to Identify the Acquisition of Antimicrobial Multidrug Resistance in the Intensive Care Unit 标题:应用时间序列核函数和降维方法识别重症监护病房抗菌药物多药耐药获得性

作者:Óscar Escudero-Arnanz,Joaquín Rodríguez-Álvarez,Karl Øyvind Mikalsen,Robert Jenssen,Cristina Soguero-Ruiz 机构:Rey Juan Carlos University, Fuenlabrada, Madrid, Spain, University Hospital of Fuenlabrada, University Hospital of North-Norway, UiT The Arctic University of Norway, Tromsø, Norway 链接:https://arxiv.org/abs/2107.10398 摘要:重症监护病房(ICU)患者获得抗菌药物多重耐药(AMR)是全球关注的重大问题。本研究分析了马德里Fuenlabrada大学医院ICU在2004年至2020年间记录的3476例患者的多变量时间序列(MTS)数据,其中18%的患者在ICU住院期间获得了AMR。本文的目标是对AMR的发生进行早期预测。为此,我们利用时间序列聚类核(TCK)来学习MTS之间的相似性。为了评估TCK作为核函数的有效性,我们将多种降维技术应用于可视化和分类任务。实验结果表明,TCK能够识别出一组在入住ICU后48小时内获得AMR的患者,并具有良好的分类能力。 摘要:The acquisition of Antimicrobial Multidrug Resistance (AMR) in patients admitted to the Intensive Care Units (ICU) is a major global concern. This study analyses data in the form of multivariate time series (MTS) from 3476 patients recorded at the ICU of University Hospital of Fuenlabrada (Madrid) from 2004 to 2020. 18% of the patients acquired AMR during their stay in the ICU. The goal of this paper is an early prediction of the development of AMR. Towards that end, we leverage the time-series cluster kernel (TCK) to learn similarities between MTS. To evaluate the effectiveness of TCK as a kernel, we applied several dimensionality reduction techniques for visualization and classification tasks. The experimental results show that TCK allows identifying a group of patients that acquire the AMR during the first 48 hours of their ICU stay, and it also provides good classification capabilities.

【10】 Distributed Saddle-Point Problems Under Similarity 标题:相似条件下的分布式鞍点问题

作者:Aleksandr Beznosikov,Gesualdo Scutari,Alexander Rogozin,Alexander Gasnikov 机构:MIPT, Russia, Purdue University, USA 链接:https://arxiv.org/abs/2107.10706 摘要:我们研究主/从(集中式)和网状(去中心化)两类网络上(强)凸-(强)凹鞍点问题(SPPs)的求解方法。由于统计数据相似或其他原因,假设各节点的局部函数彼此相似。我们为求解SPP的一类相当一般的算法建立了复杂度下界。我们证明,要在主/从网络上达到给定的次优性 $\epsilon>0$,需要 $\Omega\big(\Delta\cdot\delta/\mu\cdot\log(1/\varepsilon)\big)$ 轮通信,其中 $\delta>0$ 度量局部函数的相似程度,$\mu$ 是它们的强凸常数,$\Delta$ 是网络的直径。网状网络上的通信复杂度下界为 $\Omega\big(1/\sqrt{\rho}\cdot\delta/\mu\cdot\log(1/\varepsilon)\big)$,其中 $\rho$ 是相邻节点间通信所用gossip矩阵的(归一化)特征间隙。随后我们提出了在两类网络上均能达到这些下界(至多相差对数因子)的算法。我们在一个鲁棒logistic回归问题上评估了所提算法的有效性。 摘要:We study solution methods for (strongly-)convex-(strongly)-concave Saddle-Point Problems (SPPs) over networks of two type - master/workers (thus centralized) architectures and meshed (thus decentralized) networks. The local functions at each node are assumed to be similar, due to statistical data similarity or otherwise. We establish lower complexity bounds for a fairly general class of algorithms solving the SPP. We show that a given suboptimality $\epsilon>0$ is achieved over master/workers networks in $\Omega\big(\Delta\cdot\delta/\mu\cdot\log(1/\varepsilon)\big)$ rounds of communications, where $\delta>0$ measures the degree of similarity of the local functions, $\mu$ is their strong convexity constant, and $\Delta$ is the diameter of the network. The lower communication complexity bound over meshed networks reads $\Omega\big(1/\sqrt{\rho}\cdot\delta/\mu\cdot\log(1/\varepsilon)\big)$, where $\rho$ is the (normalized) eigengap of the gossip matrix used for the communication between neighbouring nodes. We then propose algorithms matching the lower bounds over either types of networks (up to log-factors). We assess the effectiveness of the proposed algorithms on a robust logistic regression problem.

【11】 Digital Einstein Experience: Fast Text-to-Speech for Conversational AI 标题:数字爱因斯坦体验:对话式人工智能的快速文本到语音转换

作者:Joanna Rownicka,Kilian Sprenkamp,Antonio Tripiana,Volodymyr Gromoglasov,Timo P Kunz 机构:Aflorithmic Labs Ltd. 备注:accepted at Interspeech 2021 链接:https://arxiv.org/abs/2107.10658 摘要:我们描述了为会话人工智能用例创建和提供自定义语音的方法。更具体地说,我们为数字爱因斯坦角色提供了一个声音,以实现数字对话体验中的人机交互。为了生成符合上下文的语音,我们首先设计一个语音字符,然后生成与所需语音属性相对应的录音。然后我们模拟声音。我们的解决方案利用FastSpeech2从音素中预测对数标度的mel谱图,并利用并行WaveGAN生成波形。该系统支持字符输入,并在输出端提供语音波形。我们为选定的单词使用自定义词典,以确保它们的正确发音。我们提出的云架构能够实现快速的语音传输,使我们能够与数字版本的Albert Einstein进行实时对话。 摘要:We describe our approach to create and deliver a custom voice for a conversational AI use-case. More specifically, we provide a voice for a Digital Einstein character, to enable human-computer interaction within the digital conversation experience. To create the voice which fits the context well, we first design a voice character and we produce the recordings which correspond to the desired speech attributes. We then model the voice. Our solution utilizes Fastspeech 2 for log-scaled mel-spectrogram prediction from phonemes and Parallel WaveGAN to generate the waveforms. The system supports a character input and gives a speech waveform at the output. We use a custom dictionary for selected words to ensure their proper pronunciation. Our proposed cloud architecture enables for fast voice delivery, making it possible to talk to the digital version of Albert Einstein in real-time.

【12】 A Sparsity Algorithm with Applications to Corporate Credit Rating 标题:一种稀疏性算法及其在企业信用评级中的应用

作者:Dan Wang,Zhi Chen,Ionut Florescu 备注:16 pages, 11 tables, 3 figures 链接:https://arxiv.org/abs/2107.10306 摘要:在人工智能中,解释通常被称为"黑盒"的机器学习技术的结果是一项困难的任务。针对特定"黑盒"的反事实解释,试图找到能把预测从原始输出改为某个指定输出的最小输入改动。在这项工作中,我们将寻找反事实解释表述为一个优化问题,并提出了一种新的"稀疏性算法"来求解该优化问题,同时最大化反事实解释的稀疏性。我们将稀疏性算法应用于上市公司,为其改进信用评级提供简洁的建议。我们先用合成数据集验证了稀疏性算法,随后将其应用于美国市场金融、医疗和IT行业公司的季度财务报表。我们提供的证据表明,当评级提升时,反事实解释能够捕捉到当前季度与下一季度之间实际发生变化的报表特征的性质。实证结果还表明,公司评级越高,进一步提升信用评级所需的"努力"就越大。 摘要:In Artificial Intelligence, interpreting the results of a Machine Learning technique often termed as a black box is a difficult task. A counterfactual explanation of a particular "black box" attempts to find the smallest change to the input values that modifies the prediction to a particular output, other than the original one. In this work we formulate the problem of finding a counterfactual explanation as an optimization problem. We propose a new "sparsity algorithm" which solves the optimization problem, while also maximizing the sparsity of the counterfactual explanation. We apply the sparsity algorithm to provide a simple suggestion to publicly traded companies in order to improve their credit ratings. We validate the sparsity algorithm with a synthetically generated dataset and we further apply it to quarterly financial statements from companies in financial, healthcare and IT sectors of the US market. We provide evidence that the counterfactual explanation can capture the nature of the real statement features that changed between the current quarter and the following quarter when ratings improved. The empirical results show that the higher the rating of a company the greater the "effort" required to further improve credit rating.
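
"把寻找反事实解释表述为优化问题"的一个通用示意:在把预测得分推过目标阈值的同时,用 L1 软阈值促使改动稀疏(只动少数特征)。下面的投影梯度草图与打分模型均为假设,并非论文中的稀疏性算法本身。

```python
import numpy as np

def counterfactual(x0, predict_score, target=0.55, lam=0.01, lr=0.05, steps=800):
    """寻找稀疏反事实:min ||delta||_1  s.t. score(x0+delta) >= target。
    用罚函数梯度上升 + 软阈值(ISTA式)的粗略求解。"""
    delta = np.zeros_like(x0)
    for _ in range(steps):
        x = x0 + delta
        # 数值梯度(把模型当作黑盒)
        g = np.array([(predict_score(x + 1e-4 * e) - predict_score(x)) / 1e-4
                      for e in np.eye(len(x0))])
        gap = max(target - predict_score(x), 0.0)
        delta += lr * gap * g                                   # 朝提高得分的方向走
        delta = np.sign(delta) * np.maximum(np.abs(delta) - lr * lam, 0.0)  # 软阈值促稀疏
    return delta

# 假设的"评级得分"模型:只有前两个特征重要,其余无关
w = np.array([2.0, -1.5, 0.0, 0.0, 0.0])
score = lambda x: 1 / (1 + np.exp(-(x @ w)))
x0 = np.array([-0.2, 0.8, 0.3, -0.1, 0.5])   # 当前得分明显低于 0.5
d = counterfactual(x0, score)
# 无关特征的改动应保持为 0,得分应被推近目标阈值
print(np.round(d, 3), round(float(score(x0)), 3), round(float(score(x0 + d)), 3))
```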
