机器之心 & ArXiv Weekly Radiostation
Contributors: 杜伟, 楚航, 罗若天
This week's featured papers include Bengio et al.'s unification of generative models with GFlowNets, and HorNet from Tsinghua & Meta, which performs high-order spatial interactions with recursive gated convolutions.
Table of Contents
- Unifying Generative Models with GFlowNets
- Transformers in Remote Sensing: A Survey
- Efficient Methods for Natural Language Processing: A Survey
- Interactive Disentanglement: Learning Concepts by Interacting with their Prototype Representations
- MM-RealSR: Metric Learning based Interactive Modulation for Real-World Super-Resolution
- HorNet: Efficient High-Order Spatial Interactions with Recursive Gated Convolutions
- How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective
- ArXiv Weekly Radiostation: more selected papers in NLP, CV, and ML (with audio)
Paper 1: Unifying Generative Models with GFlowNets
- Authors: Dinghuai Zhang et al.
- Paper: https://arxiv.org/pdf/2209.02606.pdf
Abstract: Generative Flow Networks (GFlowNets) embody Turing Award winner Yoshua Bengio's vision for a future direction of AI. They are inspired by the way information propagates in temporal-difference RL methods: both rely on a credit-assignment consistency principle, which is achieved only asymptotically as training converges. Because the number of paths through the state space grows exponentially, exact gradient computation is intractable; both approaches therefore rely on local consistency between components, together with a training objective stating that if all learned components are locally consistent with one another, the resulting system performs correct global estimation.
Bengio and his student Dinghuai Zhang, among others, have now published a new paper that briefly reviews the connections between existing deep generative models and the GFlowNet framework, clarifies their overlapping traits, offers a unified view through the lens of learning over Markovian trajectories, and further suggests a way to unify training and inference algorithms.
Recommendation: A few pages from Bengio et al. that make sense of unifying generative models with GFlowNets.
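The local-consistency training objective described above can be illustrated with the trajectory-balance loss commonly used to train GFlowNets. The toy setup below is ours, not from the paper: one sampled trajectory is scored so that the forward flow (partition function times forward-policy probabilities) matches the reward times the backward-policy probabilities, and the loss is the squared log-ratio between the two sides.

```python
import numpy as np

# Minimal sketch of a trajectory-balance objective for GFlowNets
# (illustrative names and values): for a trajectory s0 -> ... -> sn,
# training drives  Z * prod_t P_F(s_{t+1}|s_t)  toward
# R(sn) * prod_t P_B(s_t|s_{t+1}); the loss is the squared log-ratio.

def trajectory_balance_loss(log_Z, log_pf, log_pb, log_reward):
    """Squared log-ratio for a single trajectory.

    log_pf, log_pb: per-step log-probs of the forward/backward policies.
    """
    return (log_Z + np.sum(log_pf) - log_reward - np.sum(log_pb)) ** 2

# A 3-step toy trajectory: when the two sides already match, the loss is ~0.
log_pf = np.log([0.5, 0.4, 0.25])   # forward policy steps
log_pb = np.log([1.0, 0.5, 0.5])    # backward policy steps
log_Z = np.log(2.0)                 # learned log partition function
log_reward = log_Z + np.sum(log_pf) - np.sum(log_pb)  # consistent reward
print(trajectory_balance_loss(log_Z, log_pf, log_pb, log_reward))  # ~0.0
```

When all trajectories achieve zero loss simultaneously, the local per-trajectory constraints imply a globally consistent sampler, which is the "local consistency yields global estimation" principle the abstract refers to.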
Paper 2: Transformers in Remote Sensing: A Survey
- Authors: Abdulaziz Amer Aleissaee et al.
- Paper: https://arxiv.org/pdf/2209.01206.pdf
Abstract: This survey covers more than 60 recent Transformer-based methods addressing problems across remote-sensing sub-fields, including very-high-resolution (VHR), hyperspectral (HSI), and synthetic aperture radar (SAR) imagery.
An overview of recent Transformer-based techniques in remote-sensing imaging.
Recommendation: Detailed and comprehensive; worth reading for anyone looking to get acquainted with the field.
Paper 3: Efficient Methods for Natural Language Processing: A Survey
- Authors: Marcos Treviso et al.
- Paper: https://arxiv.org/pdf/2209.00099.pdf
Abstract: This survey synthesizes findings and implementations of efficient NLP methods, aiming to guide newcomers to the field and inspire the development of new methods.
A summary of efficient NLP methods.
Recommendation: The paper organizes the literature along the traditional NLP pipeline and gives a broad overview of existing efficiency techniques and their limitations.
Paper 4: Interactive Disentanglement: Learning Concepts by Interacting with their Prototype Representations
- Authors: Wolfgang Stammer et al.
- Paper: https://arxiv.org/pdf/2112.02290v2.pdf
Abstract: This paper aims to learn visual concepts in a prototype-based, discrete latent space via weak supervision and human-machine interaction. It proposes interactive Concept Swapping Networks (iCSNs), a new framework that learns concept-grounded representations through weak supervision and implicit prototype representations. This semantically grounded, discrete latent space facilitates human understanding and human-machine interaction.
Interactive Concept Swapping Networks.
Recommendation: Interaction via concept-based explanations; accepted at CVPR 2022.
Paper 5: MM-RealSR: Metric Learning based Interactive Modulation for Real-World Super-Resolution
- Authors: Chong Mou et al.
- Paper: https://arxiv.org/pdf/2205.05065.pdf
Abstract: Unsupervised contrastive learning has recently drawn growing attention in low-level vision. Such methods ease the extraction of complex degradation features, which suggested an idea to researchers at Tencent ARC Lab: can a contrastive, unsupervised approach be used to build an adjustable interaction mechanism for real-world image super-resolution?
The core of this work is to apply metric learning to high-order simulated degradations: by contrasting the degradation strengths of different samples, it builds a metric space of degradation strength without supervision. Degradation scores in this metric space do not represent the true degradation strength, but they do reflect its relative magnitude. The proposed method (MM-RealSR) uses these degradation scores to build an adjustable interaction mechanism for real-world image super-resolution. The paper has been accepted at ECCV 2022.
Comparison between the proposed scheme and existing methods.
Recommendation: Adjustable real-world image super-resolution; Tencent ARC Lab tackles it with metric learning.
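The idea of learning only the relative ordering of degradation strengths can be sketched with a pairwise ranking loss. The function and margin below are illustrative, not taken from the paper: the scorer never sees absolute degradation labels, only the constraint that a more strongly degraded sample should receive a larger score than a more weakly degraded one.

```python
import numpy as np

# Hedged sketch of metric learning over relative degradation strength
# (names and the margin value are our assumptions): a margin ranking
# loss penalizes pairs where the more degraded sample does not out-score
# the less degraded one by at least `margin`.

def ranking_loss(score_strong, score_weak, margin=0.5):
    """Zero when the stronger degradation out-scores the weaker one by
    at least `margin`; otherwise the size of the violation."""
    return np.maximum(0.0, margin - (score_strong - score_weak))

print(ranking_loss(2.0, 1.0))   # 0.0 -> ordering already satisfied
print(ranking_loss(1.0, 1.2))   # 0.7 -> violated: 0.5 - (1.0 - 1.2)
```

Training on many such pairs yields scores that order samples by degradation strength, which is exactly the "relative, not absolute" property the summary describes.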
Paper 6: HorNet: Efficient High-Order Spatial Interactions with Recursive Gated Convolutions
- Authors: Yongming Rao et al.
- Paper: https://arxiv.org/pdf/2207.14284.pdf
Abstract: Recent advances in vision Transformers have shown great success across a variety of tasks, driven by a new spatial modeling mechanism based on dot-product self-attention. In this paper, researchers from Tsinghua University and Meta AI demonstrate that the key ingredients behind vision Transformers, namely input-adaptive, long-range, and high-order spatial interactions, can also be implemented efficiently with a convolution-based framework. They propose Recursive Gated Convolution (gnConv), which performs high-order spatial interactions with gated convolutions and a recursive design. The new operation is highly flexible and customizable, is compatible with various convolution variants, and extends the second-order interactions in self-attention to arbitrary orders without introducing significant extra computation.
An overview of HorNet's basic building blocks.
Recommendation: Tsinghua & Meta propose HorNet, performing high-order spatial interactions with recursive gated convolutions.
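The recursive-gating idea can be illustrated with a toy 1-D numpy version. This is our simplified stand-in, not HorNet's actual operator: the real gnConv uses learned channel projections and 7x7 depthwise convolutions, whereas `spatial_mix` below is just a 3-tap moving average. What it shows is the mechanism itself: each recursion multiplies the running feature by a spatially mixed branch, raising the interaction order by one at elementwise cost.

```python
import numpy as np

# Toy sketch of recursive gating (our simplification of gnConv):
# order-n interaction is built by repeatedly gating with a spatially
# mixed branch, so depth in the recursion = order of interaction,
# with no attention-style quadratic cost.

def spatial_mix(x):
    """Stand-in for a depthwise conv: circular 3-tap average along length."""
    return (np.roll(x, -1, axis=-1) + x + np.roll(x, 1, axis=-1)) / 3.0

def gn_conv(x, order=3):
    """Order-`order` gated interaction on a (channels, length) array."""
    gate = x
    for _ in range(order - 1):
        gate = gate * spatial_mix(x)   # each step adds one interaction order
    return gate

x = np.ones((4, 8))                    # constant input stays constant
y = gn_conv(x, order=3)
print(y.shape, float(y[0, 0]))         # (4, 8) 1.0
```

The recursion makes the output a degree-`order` polynomial in the input features, which is the sense in which gnConv generalizes the second-order (query-key) interaction of self-attention to arbitrary orders.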
Paper 7: How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective
- Authors: Yimeng Zhang et al.
- Paper: https://openreview.net/pdf?id=W9G_ImpHlQd
Abstract: This article introduces a paper on black-box defense from Michigan State University and the MIT-IBM Watson AI Lab. It was accepted at ICLR 2022 as a spotlight paper, and both the code and models are open source.
The model architecture of ZO-AE-DS.
Recommendation: MSU and MIT-IBM jointly propose the first black-box defense framework.
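The zeroth-order (ZO) perspective in the title refers to estimating gradients from function values alone, which is what makes defending a black-box model possible. The sketch below shows the generic two-point randomized gradient estimator, not the full ZO-AE-DS pipeline; the function names and query budget are our choices for illustration.

```python
import numpy as np

# Generic two-point zeroth-order gradient estimate: query the black-box
# function f along random directions and average the finite differences.
# No access to f's internals (and hence no backpropagation) is needed.

def zo_gradient(f, x, num_queries=200, mu=1e-4, rng=None):
    """Randomized finite-difference estimate of grad f at x."""
    rng = np.random.default_rng(rng)
    grad = np.zeros_like(x)
    for _ in range(num_queries):
        u = rng.standard_normal(x.shape)                  # random direction
        grad += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return grad / num_queries

# Sanity check on a known function: f(x) = ||x||^2 has gradient 2x.
f = lambda x: float(np.sum(x ** 2))
x = np.array([1.0, -2.0, 0.5])
est = zo_gradient(f, x, num_queries=5000, rng=0)
print(np.round(est, 1))   # approximately [2., -4., 1.]
```

The estimator's variance grows with dimension, which is why the paper pairs ZO optimization with an autoencoder that shrinks the space in which gradients must be estimated.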
ArXiv Weekly Radiostation
机器之心, together with the ArXiv Weekly Radiostation launched by 楚航 and 罗若天, has selected more of this week's important papers beyond the 7 Papers above: 10 picks each in NLP, CV, and ML, with audio summaries of the abstracts. Details below:
This week's 10 selected NLP papers are:
1. On the Complementarity between Pre-Training and Random-Initialization for Resource-Rich Machine Translation. (from Dacheng Tao)
2. Exploiting Hybrid Semantics of Relation Paths for Multi-hop Question Answering Over Knowledge Graphs. (from Tong Zhang)
3. Elaboration-Generating Commonsense Question Answering at Scale. (from Noah A. Smith)
4. FOLIO: Natural Language Reasoning with First-Order Logic. (from Dragomir Radev)
5. Improving Compositional Generalization in Math Word Problem Solving. (from Jing Jiang, Ee-Peng Lim)
6. Multilingual Bidirectional Unsupervised Translation Through Multilingual Finetuning and Back-Translation. (from Chris Callison-Burch)
7. Multi-modal Contrastive Representation Learning for Entity Alignment. (from Meng Wang)
8. That Slepen Al the Nyght with Open Ye! Cross-era Sequence Segmentation with Switch-memory. (from Jun Wang)
9. Investigating Reasons for Disagreement in Natural Language Inference. (from Marie-Catherine de Marneffe)
10. On the Effectiveness of Compact Biomedical Transformers. (from David A. Clifton)
This week's 10 selected CV papers are:
1. Studying Bias in GANs through the Lens of Race. (from Trevor Darrell, Alexei A. Efros)
2. Prior Knowledge-Guided Attention in Self-Supervised Vision Transformers. (from Kurt Keutzer, Trevor Darrell)
3. Not All Instances Contribute Equally: Instance-adaptive Class Representation Learning for Few-Shot Visual Recognition. (from Dacheng Tao)
4. Measuring the Interpretability of Unsupervised Representations via Quantized Reverse Probing. (from Andrea Vedaldi)
5. Neural Feature Fusion Fields: 3D Distillation of Self-Supervised 2D Image Representations. (from Andrea Vedaldi)
6. Training Strategies for Improved Lip-reading. (from Maja Pantic)
7. Can GAN-induced Attribute Manipulations Impact Face Recognition?. (from Arun Ross)
8. Facial De-morphing: Extracting Component Faces from a Single Morph. (from Arun Ross)
9. 3D Textured Shape Recovery with Learned Geometric Priors. (from Marc Pollefeys)
10. Detection and Mapping of Specular Surfaces Using Multibounce Lidar Returns. (from Ramesh Raskar)
This week's 10 selected ML papers are:
1. Self-supervised multimodal neuroimaging yields predictive representations for a spectrum of Alzheimer's phenotypes. (from Vince D. Calhoun)
2. On the Convergence of Monte Carlo UCB for Random-Length Episodic MDPs. (from Keith Ross)
3. Bispectral Neural Networks. (from Bruno Olshausen)
4. MaxWeight With Discounted UCB: A Provably Stable Scheduling Policy for Nonstationary Multi-Server Systems With Unknown Statistics. (from R. Srikant)
5. Learning Differential Operators for Interpretable Time Series Modeling. (from Yang Liu)
6. Self-supervised Representation Learning on Electronic Health Records with Graph Kernel Infomax. (from Der-Chen Chang, Ophir Frieder)
7. W-Transformers : A Wavelet-based Transformer Framework for Univariate Time Series Forecasting. (from Abdenour Hadid)
8. Recurrent Convolutional Neural Networks Learn Succinct Learning Algorithms. (from Sham Kakade)
9. Revisiting Outer Optimization in Adversarial Training. (from Nasser M. Nasrabadi)
10. When Bioprocess Engineering Meets Machine Learning: A Survey from the Perspective of Automated Bioprocess Development. (from Lars Schmidt-Thieme)
© THE END
For reprint authorization, please contact this official account.
Submissions or coverage requests: content@jiqizhixin.com