Natural Language Processing Academic Digest [11.10]

2021-11-17 10:59:04

cs.CL: 14 papers today

BERT (1 paper)

【1】 DSBERT: Unsupervised Dialogue Structure Learning with BERT Link: https://arxiv.org/abs/2111.04933

作者 Authors: Bingkun Chen, Shaobing Dai, Shenghua Zheng, Lei Liao, Yang Li Abstract: Unsupervised dialogue structure learning is an important and meaningful task in natural language processing. The extracted dialogue structure and process can help analyze human dialogue and play a vital role in the design and evaluation of dialogue systems. Traditional dialogue systems require experts to design the dialogue structure manually, which is very costly. With unsupervised dialogue structure learning, however, the dialogue structure can be obtained automatically, reducing the cost for developers of constructing dialogue processes. The learned dialogue structure can also promote dialogue generation in downstream systems and improve the logic and consistency of a dialogue bot's replies. In this paper, we propose a BERT-based unsupervised dialogue structure learning algorithm, DSBERT (Dialogue Structure BERT). Unlike the previous SOTA models VRNN and SVRNN, we combine BERT with an autoencoder, which can effectively incorporate context information. To prevent the model from falling into local optima and to make the dialogue state distribution more uniform and reasonable, we also propose three balanced loss functions for dialogue structure learning. Experimental results show that DSBERT can generate dialogue structures closer to the real structure, distinguish sentences with different semantics, and map them to different hidden states.
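As a rough, hypothetical illustration of the pipeline sketched in the abstract (BERT utterance encoding, an autoencoder bottleneck, and soft assignment to latent dialogue states), here is a minimal sketch. The uniformity regularizer below is a generic stand-in for the paper's three balanced loss functions, which are not specified in the abstract; the checkpoint name, dimensions, and number of states are assumptions.

```python
# Hypothetical DSBERT-style sketch: BERT encodes utterances, an autoencoder
# compresses them, and a softmax head assigns soft dialogue states.
# The "balance" term is a stand-in regularizer, not the paper's actual losses.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

K = 10  # number of latent dialogue states (assumed)
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

class StateAutoEncoder(nn.Module):
    def __init__(self, hidden=768, bottleneck=64, n_states=K):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(hidden, bottleneck), nn.ReLU())
        self.dec = nn.Linear(bottleneck, hidden)
        self.state_head = nn.Linear(bottleneck, n_states)

    def forward(self, h):
        z = self.enc(h)
        return self.dec(z), self.state_head(z).softmax(-1)

utterances = ["hi, I need to book a flight", "sure, where are you flying to?"]
batch = tok(utterances, padding=True, return_tensors="pt")
with torch.no_grad():
    h = bert(**batch).last_hidden_state[:, 0]  # [CLS] utterance embeddings

ae = StateAutoEncoder()
recon, states = ae(h)
recon_loss = nn.functional.mse_loss(recon, h)
avg = states.mean(0)  # average state distribution over the batch
# KL(avg || uniform): pushes batch-level state usage toward uniformity.
balance_loss = (avg * (avg.clamp_min(1e-8).log() + torch.log(torch.tensor(float(K))))).sum()
loss = recon_loss + balance_loss
print(float(loss))
```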

Semantic Parsing (1 paper)

【1】 Learning to Generalize Compositionally by Transferring Across Semantic Parsing Tasks Link: https://arxiv.org/abs/2111.05013

Authors: Wang Zhu, Peter Shaw, Tal Linzen, Fei Sha Affiliations: University of Southern California, Google Research, New York University Abstract: Neural network models often generalize poorly to mismatched domains or distributions. In NLP, this issue arises in particular when models are expected to generalize compositionally, that is, to novel combinations of familiar words and constructions. We investigate learning representations that facilitate transfer learning from one compositional task to another: the representation and the task-specific layers of the models are strategically trained differently on a pre-finetuning task such that they generalize well on mismatched splits that require compositionality. We apply this method to semantic parsing, using three very different datasets, COGS, GeoQuery and SCAN, used alternately as the pre-finetuning and target task. Our method significantly improves compositional generalization over baselines on the test set of the target task, which is held out during fine-tuning. Ablation studies characterize the utility of the major steps in the proposed algorithm and support our hypothesis.
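The key mechanism above is that the representation and the task-specific layers are trained differently during pre-finetuning, but the abstract does not say how. One simple, purely illustrative way to realize "train differently" is to give the two parameter groups separate learning rates; the model choice and rates below are assumptions, not the authors' procedure.

```python
# One hypothetical reading of "strategically trained differently": separate
# learning rates for the shared representation (encoder) and the remaining
# task-specific layers during pre-finetuning.
import torch
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")  # stand-in seq2seq parser

# named_parameters() de-duplicates tied weights, so the two groups stay disjoint.
repr_params = [p for n, p in model.named_parameters() if n.startswith("encoder.")]
task_params = [p for n, p in model.named_parameters() if not n.startswith("encoder.")]

optimizer = torch.optim.AdamW([
    {"params": repr_params, "lr": 1e-5},  # representation: small, cautious updates
    {"params": task_params, "lr": 1e-3},  # task-specific layers: larger updates
])
```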

Graph | Knowledge Graph | Knowledge (1 paper)

【1】 Reason first, then respond: Modular Generation for Knowledge-infused Dialogue Link: https://arxiv.org/abs/2111.05204

Authors: Leonard Adolphs, Kurt Shuster, Jack Urbanek, Arthur Szlam, Jason Weston Affiliations: ETH Zürich, Facebook AI Research Abstract: Large language models can produce fluent dialogue but often hallucinate factual inaccuracies. While retrieval-augmented models help alleviate this issue, they still face the difficult challenge of reasoning to provide correct knowledge while simultaneously generating conversation. In this work, we propose a modular model, Knowledge to Response (K2R), for incorporating knowledge into conversational agents, which breaks down this problem into two easier steps. K2R first generates a knowledge sequence, given a dialogue context, as an intermediate step. After this "reasoning step", the model then attends to its own generated knowledge sequence, as well as the dialogue context, to produce a final response. In detailed experiments, we find that such a model hallucinates less in knowledge-grounded dialogue tasks and has advantages in terms of interpretability and modularity. In particular, it can be used to fuse QA and dialogue systems together to enable dialogue agents to give knowledgeable answers, or QA models to give conversational responses in a zero-shot setting.
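A minimal sketch of the two-step idea follows, with an off-the-shelf instruction-tuned seq2seq model standing in for the paper's trained K2R modules; the checkpoint, prompts, and decoding settings are assumptions rather than the authors' setup.

```python
# K2R-style two-step generation (illustrative): first generate a knowledge
# sequence from the dialogue context, then condition the reply on both.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")

def generate(prompt, max_new_tokens=40):
    ids = tok(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=max_new_tokens)
    return tok.decode(out[0], skip_special_tokens=True)

context = "User: Who wrote The Old Man and the Sea?"
# Step 1: "reasoning step" -- produce an intermediate knowledge sequence.
knowledge = generate(f"Answer with a short fact: {context}")
# Step 2: attend to the generated knowledge plus the dialogue context.
response = generate(f"Knowledge: {knowledge}\nDialogue: {context}\nReply conversationally:")
print(knowledge, "->", response)
```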

GAN | Adversarial | Attack | Generation (1 paper)

【1】 Speaker Generation Link: https://arxiv.org/abs/2111.05095

Authors: Daisy Stanton, Matt Shannon, Soroosh Mariooryad, RJ Skerry-Ryan, Eric Battenberg, Tom Bagby, David Kao Affiliations: Google Research, USA Comments: 12 pages, 3 figures, 4 tables, appendix with 2 tables Abstract: This work explores the task of synthesizing speech in nonexistent human-sounding voices. We call this task "speaker generation", and present TacoSpawn, a system that performs competitively at this task. TacoSpawn is a recurrent attention-based text-to-speech model that learns a distribution over a speaker embedding space, which enables sampling of novel and diverse speakers. Our method is easy to implement and does not require transfer learning from speaker ID systems. We present objective and subjective metrics for evaluating performance on this task, and demonstrate that our proposed objective metrics correlate with human perception of speaker similarity. Audio samples are available on our demo page.
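The core idea is to learn a distribution over a speaker embedding space and sample novel speakers from it. Purely as an assumed illustration (the abstract does not specify the distribution family or how the sampled vector conditions the TTS decoder), one could fit a simple mixture model over trained speaker embeddings and draw new vectors:

```python
# Illustrative speaker sampling: fit a mixture model over (stand-in) trained
# speaker embeddings and sample a "new" speaker vector from it.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
speaker_table = rng.normal(size=(200, 64))  # placeholder for learned speaker embeddings

gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
gmm.fit(speaker_table)

novel_speaker, _ = gmm.sample(1)  # a TTS decoder would be conditioned on this vector
print(novel_speaker.shape)        # (1, 64)
```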

Detection (1 paper)

【1】 Human-in-the-Loop Disinformation Detection: Stance, Sentiment, or Something Else? Link: https://arxiv.org/abs/2111.05139

Authors: Alexander Michael Daniel Affiliations: Center for Operations Research and Analysis, Defence Research and Development Canada, Department of National Defence, Ottawa, Ontario, Canada Comments: 15 pages, references. Presented at the 26th International Command and Control Research and Technology Symposium, 18 October 2021 Abstract: Both politics and pandemics have recently provided ample motivation for the development of machine learning-enabled disinformation (a.k.a. fake news) detection algorithms. Existing literature has focused primarily on the fully-automated case, but the resulting techniques cannot reliably detect disinformation on the varied topics, sources, and time scales required for military applications. By leveraging an already-available analyst as a human-in-the-loop, however, the canonical machine learning techniques of sentiment analysis, aspect-based sentiment analysis, and stance detection become plausible methods for a partially-automated disinformation detection system. This paper aims to determine which of these techniques is best suited for this purpose and how each technique might best be used toward this end. Training datasets of the same size and nearly identical neural architectures (a BERT transformer as a word embedder followed by a single feed-forward layer) are used for each approach, which are then tested on sentiment- and stance-specific datasets to establish a baseline of how well each method can be used to do the other tasks. Four different datasets relating to COVID-19 disinformation are used to test the ability of each technique to detect disinformation on a topic that did not appear in the training data. Quantitative and qualitative results from these tests are then used to provide insight into how best to employ these techniques in practice.
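The abstract states the architecture shared by every approach: a BERT transformer as a word embedder followed by a single feed-forward layer. A minimal sketch of that classifier is shown below; the checkpoint, pooling choice, and label count are assumptions.

```python
# BERT embedder + a single feed-forward layer, as described in the abstract.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertFFClassifier(nn.Module):
    def __init__(self, n_labels=3):  # e.g. favor / against / neutral (assumed)
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        self.ff = nn.Linear(self.bert.config.hidden_size, n_labels)

    def forward(self, **inputs):
        h = self.bert(**inputs).last_hidden_state[:, 0]  # [CLS] embedding
        return self.ff(h)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertFFClassifier()
logits = model(**tok("Masks do not work.", return_tensors="pt"))
print(logits.shape)  # torch.Size([1, 3])
```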

Word2Vec | Text | Words (1 paper)

【1】 NATURE: Natural Auxiliary Text Utterances for Realistic Spoken Language Evaluation Link: https://arxiv.org/abs/2111.05196

Authors: David Alfonso-Hermelo, Ahmad Rashid, Abbas Ghaddar, Philippe Langlais, Mehdi Rezagholizadeh Comments: 20 pages, 4 figures, accepted to NeurIPS 2021 Track Datasets and Benchmarks Abstract: Slot-filling and intent detection are the backbone of conversational agents such as voice assistants, and are active areas of research. Even though state-of-the-art techniques on publicly available benchmarks show impressive performance, their ability to generalize to realistic scenarios is yet to be demonstrated. In this work, we present NATURE, a set of simple spoken-language-oriented transformations, applied to the evaluation sets of datasets, that introduce human spoken-language variations while preserving the semantics of an utterance. We apply NATURE to common slot-filling and intent detection benchmarks and demonstrate that simple NATURE perturbations of the standard evaluation set can deteriorate model performance significantly. Through our experiments, we demonstrate that when NATURE operators are applied to the evaluation sets of popular benchmarks, model accuracy can drop by up to 40%.
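To make the idea concrete, here are two toy spoken-language perturbations (a filler insertion and a partial-word restart). They are illustrative stand-ins only; the actual NATURE operators are defined in the paper and are designed more carefully to preserve an utterance's semantics.

```python
# Toy stand-ins for spoken-language perturbations of an evaluation utterance.
import random

def add_filler(utterance, rng=random.Random(0)):
    words = utterance.split()
    words.insert(rng.randrange(len(words) + 1), rng.choice(["uh", "um", "you know"]))
    return " ".join(words)

def add_restart(utterance):
    return utterance.replace("play", "pla- play", 1)  # toy partial-word restart

original = "play some jazz music in the kitchen"
print(add_filler(original))
print(add_restart(original))
```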

Other Neural Networks | Deep Learning | Models | Modeling (4 papers)

【1】 A Survey on Green Deep Learning Link: https://arxiv.org/abs/2111.05193

Authors: Jingjing Xu, Wangchunshu Zhou, Zhiyi Fu, Hao Zhou, Lei Li Affiliations: ByteDance AI Lab, Peking University, University of California, Santa Barbara Abstract: In recent years, larger and deeper models have been springing up, continuously pushing state-of-the-art (SOTA) results across various fields such as natural language processing (NLP) and computer vision (CV). However, despite promising results, it should be noted that the computation required by SOTA models has increased at an exponential rate. Massive computation not only has a surprisingly large carbon footprint but also has negative effects on research inclusiveness and on deployment in real-world applications. Green deep learning is an increasingly hot research field that appeals to researchers to pay attention to energy usage and carbon emissions during model training and inference. The target is to yield novel results with lightweight and efficient technologies. Many techniques can be used to achieve this goal, such as model compression and knowledge distillation. This paper presents a systematic review of the development of Green deep learning technologies. We classify these approaches into four categories: (1) compact networks, (2) energy-efficient training strategies, (3) energy-efficient inference approaches, and (4) efficient data usage. For each category, we discuss the progress that has been achieved and the unresolved challenges.

【2】 Tackling Morphological Analogies Using Deep Learning -- Extended Version Link: https://arxiv.org/abs/2111.05147

Authors: Safa Alsaidi, Amandine Decker, Esteban Marquer, Pierre-Alexandre Murena, Miguel Couceiro Affiliations: Université de Lorraine, CNRS, LORIA, France; HIIT, Aalto University, Helsinki, Finland Abstract: Analogical proportions are statements of the form "A is to B as C is to D". They constitute an inference tool that provides a logical framework to address learning, transfer, and explainability concerns, and they find useful applications in artificial intelligence and natural language processing. In this paper, we address two problems, namely analogy detection and resolution in morphology. Multiple symbolic approaches tackle the problem of analogies in morphology and achieve competitive performance. We show that it is possible to use a data-driven strategy to outperform those models. We propose an approach using deep learning to detect and solve morphological analogies. It encodes structural properties of analogical proportions and relies on a specifically designed embedding model capturing morphological characteristics of words. We demonstrate our model's competitive performance on analogy detection and resolution over multiple languages. We provide an empirical study to analyze the impact of balancing training data and evaluate the robustness of our approach to input perturbation.
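For intuition, the snippet below resolves an analogy "A is to B as C is to D" with the classic vector-offset rule D ≈ C + (B - A) over toy, made-up embeddings. The paper's approach instead relies on a purpose-built morphological embedding model and neural networks, so this is only a baseline-style illustration.

```python
# Toy analogy resolution with the vector-offset rule D ≈ C + (B - A).
# Embeddings are made up for illustration; the paper uses a learned
# morphology-aware embedding model instead.
import numpy as np

emb = {
    "walk":   np.array([1.0, 0.0, 0.0]),
    "walked": np.array([1.0, 1.0, 0.0]),
    "jump":   np.array([0.0, 0.0, 1.0]),
    "jumped": np.array([0.0, 1.0, 1.0]),
}

a, b, c = emb["walk"], emb["walked"], emb["jump"]
target = c + (b - a)
answer = min(emb, key=lambda w: np.linalg.norm(emb[w] - target))
print(answer)  # -> "jumped"
```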

【3】 FPM: A Collection of Large-scale Foundation Pre-trained Language Models Link: https://arxiv.org/abs/2111.04909

Authors: Dezhou Shen Affiliations: Department of Computer Science, rct ai, Beijing, CN Abstract: Recent work in language modeling has shown that training large-scale Transformer models has promoted the latest developments in natural language processing applications. However, there is very little work to unify the current effective models. In this work, we use the current effective model structures to launch a model set through the most mainstream technology, which we think will become the basic model in the future. For Chinese, using the GPT-2 [9] model, a 10.3 billion parameter language model was trained on a Chinese dataset and, in particular, a 2.9 billion parameter language model based on dialogue data was trained; the BERT model was trained on a Chinese dataset with 495 million parameters; the Transformer model was trained as a language model with 5.6 billion parameters on a Chinese dataset. For English, corresponding training work was also done. Using the GPT-2 model, a language model with 6.4 billion parameters was trained on an English dataset; the BERT [3] model was trained as a language model with 1.24 billion parameters on an English dataset and, in particular, a 688 million parameter language model was trained using single-card training technology; the Transformer model was trained as a language model with 5.6 billion parameters on an English dataset. In the TNEWS classification task evaluated by CLUE [13], the BERT-C model reached 59.99% accuracy, exceeding the 59.46% of ALBERT-xxlarge by 0.53%. In the QQP classification task evaluated by GLUE [11], it reached 78.95% accuracy, surpassing BERT-Large's 72.1% by 6.85% and exceeding ERNIE, currently first place in the GLUE evaluation at 75.2%, by 3.75%.

【4】 Cascaded Multilingual Audio-Visual Learning from Videos Link: https://arxiv.org/abs/2111.04823

Authors: Andrew Rouditchenko, Angie Boggust, David Harwath, Samuel Thomas, Hilde Kuehne, Brian Chen, Rameswar Panda, Rogerio Feris, Brian Kingsbury, Michael Picheny, James Glass Affiliations: MIT CSAIL, USA; UT Austin, USA; IBM Research AI, USA; Columbia University, USA; NYU, USA Comments: Presented at Interspeech 2021. This version contains updated results using the YouCook-Japanese dataset Abstract: In this paper, we explore self-supervised audio-visual models that learn from instructional videos. Prior work has shown that these models can relate spoken words and sounds to visual content after training on a large-scale dataset of videos, but they were only trained and evaluated on videos in English. To learn multilingual audio-visual representations, we propose a cascaded approach that leverages a model trained on English videos and applies it to audio-visual data in other languages, such as Japanese videos. With our cascaded approach, we show an improvement in retrieval performance of nearly 10x compared to training solely on the Japanese videos. We also apply the model trained on English videos to Japanese and Hindi spoken captions of images, achieving state-of-the-art performance.

Other (4 papers)

【1】 A Survey of NLP-Related Crowdsourcing HITs: what works and what does not Link: https://arxiv.org/abs/2111.05241

Authors: Jessica Huynh, Jeffrey Bigham, Maxine Eskenazi Affiliations: Carnegie Mellon University Abstract: Crowdsourcing requesters on Amazon Mechanical Turk (AMT) have raised questions about the reliability of the workers. The AMT workforce is very diverse, and it is not possible to make blanket assumptions about them as a group. Some requesters now reject work en masse when they do not get the results they expect. This has the effect of giving each worker (good or bad) a lower Human Intelligence Task (HIT) approval score, which is unfair to the good workers. It also has the effect of giving the requester a bad reputation on the workers' forums. Some of the issues causing the mass rejections stem from the requesters not taking the time to create a well-formed task with complete instructions and/or not paying a fair wage. To explore this assumption, this paper describes a study that looks at the crowdsourcing HITs on AMT that were available over a given span of time and records information about those HITs. This study also records information from a crowdsourcing forum on the worker perspective on both those HITs and their corresponding requesters. Results reveal issues in worker payment and presentation issues such as missing instructions or HITs that are not doable.

【2】 Neural News Recommendation with Event Extraction Link: https://arxiv.org/abs/2111.05068

Authors: Songqiao Han, Hailiang Huang, Jiangwei Liu Affiliations: School of Information Management and Engineering, Shanghai University of Finance and Economics, Shanghai, China Comments: 11 pages, 4 figures, 2 tables Abstract: A key challenge of online news recommendation is helping users find articles they are interested in. Traditional news recommendation methods usually use a single channel of news information, which is insufficient to encode news and user representations. Recent research uses multiple channels of news information, e.g., title, category, and body, to enhance news and user representations. However, these methods only use various attention mechanisms to fuse multi-view embeddings without considering deeper mining of the higher-level information contained in the context. They encode news content at the word level and jointly train the attention parameters in the recommendation network, so more corpora are required to train the model. We propose an Event Extraction-based News Recommendation (EENR) framework to overcome these shortcomings, utilizing event extraction to abstract higher-level information. EENR also uses a two-stage strategy to reduce the number of parameters in subsequent parts of the recommendation network. We train the event extraction module on external corpora in the first stage and apply the trained model to the news recommendation dataset to predict event-level information, including event types, roles, and arguments, in the second stage. We then fuse multiple channels of information, including event information, news title, and category, to encode news and users. Extensive experiments on a real-world dataset show that our EENR method can effectively improve the performance of news recommendation. Finally, we also explore the reasonability of utilizing higher-abstraction-level information to substitute for news body content.
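As a hypothetical sketch of the multi-channel fusion step described above, the snippet below encodes the title, category, and extracted event fields as separate vectors and projects their concatenation into one news representation; the dimensions and the simple concatenate-and-project fusion are assumptions, not the paper's exact encoders or attention.

```python
# Illustrative multi-channel news encoder: fuse title, category, and event
# vectors by concatenation and a learned projection.
import torch
import torch.nn as nn

class NewsEncoder(nn.Module):
    def __init__(self, title_dim=768, cat_dim=32, event_dim=128, out_dim=256):
        super().__init__()
        self.proj = nn.Linear(title_dim + cat_dim + event_dim, out_dim)

    def forward(self, title_vec, cat_vec, event_vec):
        fused = torch.cat([title_vec, cat_vec, event_vec], dim=-1)
        return torch.tanh(self.proj(fused))

enc = NewsEncoder()
news_vec = enc(torch.randn(1, 768), torch.randn(1, 32), torch.randn(1, 128))
print(news_vec.shape)  # torch.Size([1, 256])
```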

【3】 Multimodal intelligibility of scholarly hypertext: the documentalist's contribution. A required collaboration for serial documentisation in the scientific editorial process Link: https://arxiv.org/abs/2111.05039

Authors: Gérald Kembellec Affiliations: Cnam, Direction de la Recherche, Laboratoire Dicen-IdF, rue Saint Martin, Paris; Institut historique allemand, département des humanités numériques, rue du Parc Royal, Paris Comments: in French. H2PTM, Oct 2021, Paris, France Abstract: This article shows that the boundaries between the editing and online publishing professions are losing their strength. In this context it would only make sense that the way hypertexts are documented be renewed, especially in the face of the Web's evolution. We are thinking in particular of the trickier scholarly hypertext documentation process, specifically in scientific or cultural contexts. The purpose of this article is to demonstrate that, considering the numerous branches of the Web, the hypertext enhancement of a document of quality can only be done through a proper dialogue between authors, editors, and broadcasters. It would satisfy the readership, as they could reach the appropriate information. It will also be shown that each actor in this auctorial-editorial process would be a gainer. Indeed, a qualitative formalization work would be coupled with a strong broadcasting scope. Finally, we will point out that this work of mediating must be led by an actor of information-communication, to make the text understandable to both humans and machines. This mediating act is designated here under the term of serial documentarisation.

【4】 American Hate Crime Trends Prediction with Event Extraction Link: https://arxiv.org/abs/2111.04951

Authors: Songqiao Han, Hailiang Huang, Jiangwei Liu, Shengsheng Xiao Affiliations: School of Information Management and Engineering, Shanghai University of Finance and Economics, Shanghai, China Comments: 12 pages, 5 figures, 4 tables Abstract: Social media platforms may provide potential space for discourses that contain hate speech and, even worse, can act as a propagation mechanism for hate crimes. The FBI's Uniform Crime Reporting (UCR) Program collects hate crime data and releases a statistical report yearly. These statistics provide information for determining national hate crime trends. The statistics can also provide valuable holistic and strategic insight for law enforcement agencies, or give lawmakers justification for specific legislation. However, the reports are mostly released the following year and lag behind many immediate needs. Recent research mainly focuses on hate speech detection in social media text or on empirical studies of the impact of a confirmed crime. This paper proposes a framework that first utilizes text mining techniques to extract hate crime events from New York Times news, then uses the results to facilitate predicting American national-level and state-level hate crime trends. Experimental results show that our method can significantly enhance prediction performance compared with time series or regression methods without event-related factors. Our framework broadens the methods of national-level and state-level hate crime trends prediction.
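To illustrate what "prediction with event-related factors" can look like, the toy example below augments a lagged regression with a monthly count of extracted events; all data and feature choices here are synthetic placeholders, not the paper's model or results.

```python
# Toy trend regression with an event-related feature alongside a lag feature.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
months = 48
event_counts = rng.poisson(5, size=months)               # extracted-event counts (synthetic)
crimes = 20 + 2 * event_counts + rng.normal(0, 2, months)

X = np.column_stack([
    np.roll(crimes, 1),   # previous month's count (lag feature)
    event_counts,         # event-related factor from news text
])[1:]                    # drop the first row, whose lag wraps around
y = crimes[1:]

model = LinearRegression().fit(X, y)
print(model.coef_)  # weights on [lag, event count]
```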
