Hurtful Words: Quantifying Biases in Clinical Contextual Word Embeddings (cs.CL)

2020-03-26 15:26:24

In this work, we examine the extent to which embeddings may encode marginalized populations differently, and how this may lead to a perpetuation of biases and worsened performance on clinical tasks. We pretrain deep embedding models (BERT) on medical notes from the MIMIC-III hospital dataset and quantify potential disparities using two approaches. First, we identify dangerous latent relationships captured by the contextual word embeddings, using a fill-in-the-blank method with text from real clinical notes and a log probability bias score for quantification. Second, we evaluate performance gaps across different definitions of fairness on over 50 downstream clinical prediction tasks, including the detection of acute and chronic conditions. We find that classifiers trained from BERT representations exhibit statistically significant differences in performance, often favoring the majority group with respect to gender, language, ethnicity, and insurance status. Finally, we explore the shortcomings of using adversarial debiasing to obfuscate subgroup information in contextual word embeddings, and recommend best practices for such deep embedding models in clinical settings.
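To make the first quantification approach concrete, here is a minimal sketch (not the authors' code) of a fill-in-the-blank probe with a masked language model: two templates that differ only in the demographic term are scored, and the log probabilities BERT assigns to a target word are compared. It assumes the Hugging Face `transformers` library and the generic `bert-base-uncased` checkpoint, whereas the paper pretrains BERT on MIMIC-III notes and draws its templates from real clinical text; the simple difference of log probabilities below is a simplified stand-in for the paper's log probability bias score.

```python
# A minimal sketch of a fill-in-the-blank bias probe, assuming the Hugging Face
# `transformers` library and a generic bert-base-uncased checkpoint (the paper
# pretrains BERT on MIMIC-III notes instead). The probe sentences here are
# hypothetical; the paper draws its templates from real clinical notes.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def masked_log_prob(template: str, target: str) -> float:
    """Log probability of `target` filling the [MASK] slot in `template`."""
    inputs = tokenizer(template, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits            # [1, seq_len, vocab_size]
    log_probs = torch.log_softmax(logits[0, mask_pos], dim=-1)
    target_id = tokenizer.convert_tokens_to_ids(target)
    return log_probs[0, target_id].item()

# Identical templates that differ only in the demographic term.
lp_male = masked_log_prob("the male patient was diagnosed with [MASK] .", "depression")
lp_female = masked_log_prob("the female patient was diagnosed with [MASK] .", "depression")
print(f"simplified bias score (male - female): {lp_male - lp_female:+.3f}")
```

A positive score suggests the model ties the target word more strongly to the "male" template than to the "female" one. A second sketch, after the paper details below, illustrates the downstream performance-gap evaluation.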

Original title: Hurtful Words: Quantifying Biases in Clinical Contextual Word Embeddings

Original abstract: In this work, we examine the extent to which embeddings may encode marginalized populations differently, and how this may lead to a perpetuation of biases and worsened performance on clinical tasks. We pretrain deep embedding models (BERT) on medical notes from the MIMIC-III hospital dataset, and quantify potential disparities using two approaches. First, we identify dangerous latent relationships that are captured by the contextual word embeddings using a fill-in-the-blank method with text from real clinical notes and a log probability bias score quantification. Second, we evaluate performance gaps across different definitions of fairness on over 50 downstream clinical prediction tasks that include detection of acute and chronic conditions. We find that classifiers trained from BERT representations exhibit statistically significant differences in performance, often favoring the majority group with regards to gender, language, ethnicity, and insurance status. Finally, we explore shortcomings of using adversarial debiasing to obfuscate subgroup information in contextual word embeddings, and recommend best practices for such deep embedding models in clinical settings.

Original author: Amy X. Lu

Original link: https://arxiv.org/abs/2003.11515
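As a rough illustration of the abstract's second quantification approach, the sketch below computes per-subgroup recall for a downstream classifier's predictions and reports the gap between the best- and worst-served groups. The arrays, group labels, and the choice of recall as the metric are hypothetical stand-ins; the paper evaluates over 50 MIMIC-III prediction tasks under several definitions of fairness and tests the gaps for statistical significance.

```python
# A minimal sketch (with toy data) of a per-subgroup performance-gap check.
# Group labels, metric, and arrays are hypothetical stand-ins, not the paper's setup.
import numpy as np
from sklearn.metrics import recall_score

def recall_by_group(y_true, y_pred, group):
    """Recall per protected subgroup, plus the max-min gap."""
    recalls = {
        g: recall_score(y_true[group == g], y_pred[group == g])
        for g in np.unique(group)
    }
    return recalls, max(recalls.values()) - min(recalls.values())

# Toy predictions for a binary clinical task with a binary protected attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 0])
group  = np.array(["majority"] * 4 + ["minority"] * 4)

per_group, gap = recall_by_group(y_true, y_pred, group)
print(per_group)                 # e.g. {'majority': 0.67, 'minority': 0.50}
print(f"recall gap: {gap:.2f}")
```

In the paper's results, gaps of this kind tend to favor the majority group across gender, language, ethnicity, and insurance status.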

