Stanford NLP Course | Lecture 19 - AI Safety, Bias and Fairness

2022-05-23 18:27:16

  • Authors: 韩信子@ShowMeAI, 路遥@ShowMeAI, 奇异果@ShowMeAI
  • Tutorials: http://www.showmeai.tech/tutorials/36
  • This article: http://www.showmeai.tech/article-detail/257
  • Statement: all rights reserved; to repost, please contact the platform and the authors and cite the source
  • Bookmark ShowMeAI for more great content

AI Safety, Bias and Fairness

ShowMeAI has translated all of the slides for Stanford's CS224n course, Natural Language Processing with Deep Learning, into Chinese, added annotations, and turned them into animated GIFs!


1.Bias in the Vision and Language of Artificial Intelligence


2.Prototype Theory

What do you see?
  • Bananas
  • Stickers
  • Dole Bananas
  • Bananas at a store
  • Bananas on shelves
  • Bunches of bananas
  • Bananas with stickers on them
  • Bunches of bananas with stickers on them on shelves in a store

...We don’t tend to say Yellow Bananas

What do you see?

Prototype Theory
  • Prototype Theory
    • One purpose of categorization is to reduce the infinite variation among stimuli to proportions that are behaviorally and cognitively manageable
    • Some core, prototypical notion of an object may come from stored typical properties of the object category (Rosch, 1975)
    • Exemplars may also be stored (Wu & Barsalou, 2009)
Prototype Theory
  • Doctor → Female Doctor
  • Most subjects overlooked the possibility that the doctor could be a woman, including men, women, and self-described feminists
World Learning from Text
  • Human Reporting Bias
    • "murdered" appears ten times as often as "blinked"
    • We tend not to mention things like blinking and breathing
Human Reporting Bias
  • Human Reporting Bias
    • The frequency with which people write about actions, outcomes, or properties is not a reflection of real-world frequencies, nor of the degree to which a property is characteristic of a class of individuals
    • It says more about how we process the world and what we find remarkable. This affects everything we learn from.
Human Reporting Bias in Data
  • Data
    • Reporting bias: what people share is not a reflection of real-world frequencies
    • Selection bias: selection does not reflect a random sample
    • Out-group homogeneity bias: people tend to see outgroup members as more alike than ingroup members when comparing attitudes, values, personality traits, and other characteristics
  • Interpretation
    • Confirmation bias: the tendency to search for, interpret, favor, and recall information in a way that confirms one's preexisting beliefs or hypotheses
    • Overgeneralization: drawing a conclusion from information that is too general and/or not specific enough
    • Correlation fallacy: confusing correlation with causation
    • Automation bias: the human tendency to favor suggestions from automated decision-making systems over contradictory information made without automation

3.Biases in Data

Biases in Data
  • Selection bias: selection does not reflect a random sample
Biases in Data
  • Out-group homogeneity bias: people tend to see outgroup members as more alike than ingroup members when comparing attitudes, values, personality traits, and other characteristics
  • An intuition: the four cats on the left are all quite different from one another, but in a dog's eyes they all look the same
Biases in Data → Biased Data Representation
  • You may have an appropriate amount of data for every group you can think of, but some groups are represented less positively than others
Biases in Data → Biased Labels
  • Annotations in a dataset reflect the worldviews of the annotators

4.Biases in Interpretation

Biases in Interpretation
  • Confirmation bias: the tendency to search for, interpret, favor, and recall information in a way that confirms one's preexisting beliefs or hypotheses
Biases in Interpretation
  • Overgeneralization: drawing a conclusion from information that is too general and/or not specific enough (related: overfitting)
Biases in Interpretation
  • Correlation fallacy: confusing correlation with causation
Biases in Interpretation
  • Automation bias: the human tendency to favor suggestions from automated decision-making systems over contradictory information made without automation
Biases in Interpretation
  • These biases form feedback loops
  • This is known as the Bias Network Effect, or Bias "Laundering"
  • Human data perpetuates human biases. As ML learns from human data, the result is a bias network effect.

5.BIAS = BAD ??

"Bias" can be Good, Bad, or Neutral
  • Bias in statistics and ML
    • Bias of an estimator: the difference between the predictions and the correct values we are trying to predict
    • The "bias" term b (as in y = mx + b)
  • Cognitive biases
    • Confirmation bias, recency bias, optimism bias
  • Algorithmic bias
    • "Unjust, unfair, or prejudicial treatment of people related to race, income, sexual orientation, religion, gender, and other characteristics historically associated with discrimination and marginalization, when and where they manifest in algorithmic systems or algorithmically aided decision-making"
Amplify injustice
  • How do we avoid algorithmic bias and develop algorithms that do not amplify disparities?

6.Predicting Future Criminal Behavior

Predictive Policing
  • Predicting Future Criminal Behavior
    • Algorithms identify potential crime hot spots
    • Based on where crimes were previously reported, not on where they are known to have occurred
    • Predicting future events from the past
    • Predicts where arrests are made, not where crimes occur
Predicting Sentencing
  • Prater (white) was rated low risk after shoplifting, despite two armed robberies and one attempted armed robbery.
  • Borden (Black) was rated high risk after she and a friend took a bicycle and a scooter that were sitting outside (but returned them before the police arrived).
  • Two years later, Borden had not been charged with any new crimes, while Prater was serving an 8-year sentence for grand theft.
  • The system effectively treats Black defendants as higher risk than white defendants.

7.Automation Bias

Predicting Criminality
  • The Israeli startup Faception
  • "Faception is first-to-technology and first-to-market with proprietary computer vision and machine learning technology for profiling people and revealing their personality based only on their facial image."
  • Offers dedicated engines for identifying "High IQ", "White-Collar Offender", "Pedophile", and "Terrorist" from a face image.
  • Main clients are in homeland security and public safety.
Predicting Criminality
  • "Automated Inference on Criminality using Face Images", Wu and Zhang, 2016, arXiv
  • 1,856 closely cropped face images, including ID photos of "wanted suspects" from specific regions
  • Subject to confirmation bias and correlation fallacy

8.Selection Bias Experimenter’s Bias Confirmation Bias Correlation Fallacy Feedback Loops

Predicting Criminality - The Media Blitz

9.(Claiming to) Predict Internal Qualities Subject To Discrimination

Predicting Homosexuality
  • Wang and Kosinski, "Deep neural networks are more accurate than humans at detecting sexual orientation from facial images", 2017.
  • "Sexual orientation detector" using 35,326 images from public profiles on a US dating website.
  • "Consistent with the prenatal hormone theory (PHT) of sexual orientation, gay men and women tended to have gender-atypical facial morphology."
Predicting Homosexuality
  • In selfies, the differences between gay and straight people relate to grooming, presentation, and lifestyle, that is, cultural differences rather than differences in facial structure
  • See our longer response on Medium, “Do Algorithms Reveal Sexual Orientation or Just Expose our Stereotypes?”
  • Selection Bias Experimenter’s Bias Correlation Fallacy

10.Selection Bias Experimenter’s Bias Correlation Fallacy


11.Measuring Algorithmic Bias

Measuring Algorithmic Bias
Evaluate for Fairness & Inclusion
  • Evaluate for fairness and inclusion
    • Disaggregated evaluation
      • Create (subgroup, prediction) pairs for each example; compare across subgroups
    • For example
      • female, face detected
      • male, face detected
Evaluate for Fairness & Inclusion: Confusion Matrix
Evaluate for Fairness & Inclusion
  • "Equality of Opportunity" fairness criterion: recall is equal across subgroups
  • "Predictive Parity" fairness criterion: precision is equal across subgroups
  • The choice of evaluation metric fixes the acceptable tradeoff between false positives and false negatives
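The two fairness criteria above can be checked with a disaggregated evaluation: compute recall and precision separately per subgroup and compare. A minimal sketch in plain Python; the toy labels and subgroup names are invented for illustration:

```python
from collections import defaultdict

def subgroup_rates(y_true, y_pred, groups):
    """Per-subgroup recall ("equality of opportunity") and
    precision ("predictive parity") from binary labels/predictions."""
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0})
    for t, p, g in zip(y_true, y_pred, groups):
        c = counts[g]
        if t == 1 and p == 1:
            c["tp"] += 1
        elif t == 0 and p == 1:
            c["fp"] += 1
        elif t == 1 and p == 0:
            c["fn"] += 1
    out = {}
    for g, c in counts.items():
        out[g] = {
            "recall": c["tp"] / (c["tp"] + c["fn"]) if c["tp"] + c["fn"] else None,
            "precision": c["tp"] / (c["tp"] + c["fp"]) if c["tp"] + c["fp"] else None,
        }
    return out

# Toy face-detection run over two subgroups (labels are made up):
rates = subgroup_rates(
    y_true=[1, 1, 1, 0, 1, 1, 1, 0],
    y_pred=[1, 1, 0, 0, 1, 1, 1, 1],
    groups=["female"] * 4 + ["male"] * 4,
)
print(rates)  # unequal recall across subgroups violates equality of opportunity
```

Unequal recall here (2/3 vs. 1.0) would fail the equality-of-opportunity criterion even though overall accuracy looks reasonable.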

12.False Positives and False Negatives

False Positives Might be Better than False Negatives
  • False Positives Might be Better than False Negatives
    • Privacy in Images
    • Spam Filtering
False Negatives Might be Better than False Positives
AI Can Unintentionally Lead to Unjust Outcomes
  • Lack of insight into the sources of bias in data and models
  • Lack of insight into feedback loops
  • Lack of careful, disaggregated evaluation
  • Human biases in interpreting and accepting results

13.It’s up to us to influence how AI evolves.

Begin tracing out paths for the evolution of ethical AI

14.It’s up to us to influence how AI evolves. Here are some things we can do.


15.Data

Data Really, Really Matters
  • Understand your data: biases and correlations
  • Move beyond a single training set/test set drawn from a similar distribution
  • Combine inputs from multiple sources
  • Use a held-out test set for difficult use cases
  • Talk with experts about additional signals
Understand Your Data Skews
  • No dataset is free of bias, because the world itself is biased. The point is to know what the biases are.

16.Machine Learning

Use ML Techniques for Bias Mitigation and Inclusion
  • Bias Mitigation
    • Remove the signals behind problematic outputs
      • stereotyping
      • sexism, racism, *-isms
      • also known as "debiasing"
Use ML Techniques for Bias Mitigation and Inclusion
  • Inclusion
    • Add signals for desired variables
      • increases model performance
      • pay attention to subgroups or data slices where performance is poor
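One widely cited form of the "debiasing" mentioned above is hard-debiasing of word embeddings (Bolukbasi et al., 2016), which removes a word vector's component along an estimated bias direction. A minimal sketch; the 2-d vectors below are invented for illustration, not real GloVe values:

```python
import numpy as np

def neutralize(v, bias_dir):
    """Project out the component of v along the bias direction
    (the "neutralize" step of hard-debiasing)."""
    b = bias_dir / np.linalg.norm(bias_dir)
    return v - np.dot(v, b) * b

# Invented 2-d vectors, with axis 0 playing the role of the gender axis:
he = np.array([1.0, 0.0])
she = np.array([-1.0, 0.0])
doctor = np.array([0.4, 0.9])          # leans toward "he" on axis 0
gender_direction = he - she            # crude one-pair bias estimate
print(neutralize(doctor, gender_direction))  # gender component removed
```

In practice the bias direction is estimated from many definitional pairs (he/she, man/woman, ...) via PCA rather than a single difference vector.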

17.Multi-task Learning to Increase Inclusion

Multiple Tasks | Deep Learning for Inclusion: Multi-task Learning Example
  • In collaboration with WWP at the University of Pennsylvania
  • Working directly with clinicians
  • Goals
    • A system that warns clinicians if a suicide attempt appears imminent
    • Feasibility of diagnosis when only a few training examples are available
  • Benton, Mitchell, Hovy. Multi-task Learning for Mental Health Conditions with Limited Social Media Data. EACL, 2017.
Multiple Tasks | Deep Learning for Inclusion: Multi-task Learning Example
  • Internal data
    • Electronic health records
      • Provided by patients or patients' families
      • Include mental health diagnoses, suicide attempts, and race
    • Social media data
  • Proxy data
    • Twitter data
    • Proxy mental health diagnoses from self-stated diagnoses
      • "I was diagnosed with X"
      • "I tried to kill myself"
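The proxy-diagnosis step can be sketched as simple pattern matching over self-statements like those above. The patterns below are hypothetical stand-ins for illustration, not the ones actually used in the paper:

```python
import re

# Hypothetical self-disclosure patterns (the paper's actual patterns
# are not given in the slides):
PATTERNS = [
    re.compile(r"\bi (?:was|am|have been) diagnosed with \w+", re.I),
    re.compile(r"\bi (?:tried|attempted) to (?:kill myself|commit suicide)", re.I),
]

def proxy_label(post):
    """True if the post matches a self-disclosure pattern."""
    return any(p.search(post) for p in PATTERNS)

print(proxy_label("I was diagnosed with depression last year"))  # True
print(proxy_label("nice weather today"))                         # False
```

Such proxy labels are noisy, which is part of why the paper treats them as auxiliary tasks rather than gold labels.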
Single-Task: Logistic Regression, Deep Learning
Multiple Tasks with Basic Logistic Regression
Multi-task Learning
Improved Performance across Subgroups
Reading for the masses….

18.Adversarial Multi-task Learning to Mitigate Bias

Multitask Adversarial Learning
Equality of Opportunity in Supervised Learning
  • Given that the decision is truly correct, the classifier's output decision should be the same across values of the sensitive attribute.

19.Case Study: Conversation AI Toxicity


19.1 Measuring and Mitigating Unintended Bias in Text Classification


19.2 Conversation-AI & Research Collaboration

Conversation-AI & Research Collaboration
  • Conversation-AI
    • ML to improve large-scale online conversation
  • Research Collaboration
    • Jigsaw, CAT, several Google-internal teams, and external partners (NYTimes, Wikimedia, etc.)

19.3 Perspective API


19.4 Unintended Bias


19.5 Bias Source and Mitigation

  • Bias caused by data imbalance
    • A disproportionately high share of comments mentioning frequently attacked identities are toxic
    • Length issues
  • Fix the imbalance by adding presumed-nontoxic data from Wikipedia articles
    • The original dataset had 127,820 examples
    • 4,620 supplemental nontoxic examples were added
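The rebalancing idea can be sketched as: measure the toxic rate among comments mentioning an identity term, then estimate how many nontoxic examples mentioning it must be added for that rate to fall to the overall rate. The record format, helper names, and toy corpus below are assumptions for illustration:

```python
import math

def toxic_rate(rows, term=None):
    """Fraction of comments labeled toxic, optionally restricted to
    comments mentioning an identity term."""
    sel = [r for r in rows if term is None or term in r["text"].lower()]
    return sum(r["toxic"] for r in sel) / len(sel) if sel else 0.0

def nontoxic_needed(rows, term, target_rate):
    """Nontoxic comments mentioning `term` to add so that its toxic
    rate drops to roughly `target_rate` (solves t / (n + k) = rate)."""
    sel = [r for r in rows if term in r["text"].lower()]
    toxic = sum(r["toxic"] for r in sel)
    return max(0, math.ceil(toxic / target_rate - len(sel)))

# Invented toy corpus where "gay" appears mostly in toxic comments:
corpus = ([{"text": "gay people are great", "toxic": 0}]
          + [{"text": "some gay slur", "toxic": 1}] * 3
          + [{"text": "rude comment", "toxic": 1}]
          + [{"text": "hello there", "toxic": 0}] * 5)
print(toxic_rate(corpus))                                  # 0.4 overall
print(toxic_rate(corpus, "gay"))                           # 0.75 for the term
print(nontoxic_needed(corpus, "gay", toxic_rate(corpus)))  # 4 examples to add
```

The same arithmetic, scaled up, motivates the 4,620 supplemental Wikipedia examples mentioned above.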

19.6 Measuring Unintended Bias - Synthetic Datasets

  • Challenges with real data
    • Existing datasets are small and/or have the wrong correlations
    • Each example is completely unique
  • Approach: "bias madlibs", a synthetically generated, templated dataset for evaluation
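A "bias madlibs"-style dataset can be generated by filling identity terms into fixed templates, so every identity appears in the same contexts. The identity terms and templates below are illustrative, not the actual dataset's lists:

```python
from itertools import product

# Illustrative identity terms and templates (not the real lists):
IDENTITIES = ["gay", "straight", "muslim", "christian"]
TEMPLATES = [
    ("I am a {} person", "nontoxic"),
    ("Being {} is wonderful", "nontoxic"),
    ("All {} people are terrible", "toxic"),
]

def bias_madlibs():
    """Yield (sentence, label): same template, varying identity term.
    An unbiased model should score a template the same for every identity."""
    for (template, label), identity in product(TEMPLATES, IDENTITIES):
        yield template.format(identity), label

examples = list(bias_madlibs())
print(len(examples))  # 12 = 3 templates x 4 identity terms
```

Because only the identity term varies within a template, any score difference across identities is attributable to the model, not the sentence content.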

19.7 Assumptions

  • The dataset is trustworthy
    • Its distribution is similar to the product's
    • Annotator bias is ignored
    • No causal analysis

19.8 Deep Learning Model

  • Deep learning model
  • CNN architecture
  • Pretrained GloVe embeddings
  • Keras implementation

19.9 Measuring Model Performance


19.10 Measuring Model Performance


19.11 Types of Bias

  • Low Subgroup Performance
    • The model performs worse on subgroup annotations than on annotations overall
    • Metric: Subgroup AUC
Types of Bias
  • Subgroup Shift (Right)
    • The model systematically scores comments from the subgroup higher
    • Metric: BPSN AUC
    • (Background Positive, Subgroup Negative)
  • Subgroup Shift (Left)
    • The model systematically scores comments from the subgroup lower
    • Metric: BNSP AUC
    • (Background Negative, Subgroup Positive)
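The three metrics above can be computed with a plain Mann-Whitney AUC over the appropriate slices of the data. A minimal sketch; the toy scores are invented to show a rightward subgroup shift:

```python
def auc(labels, scores):
    """Mann-Whitney AUC: P(a random positive outranks a random negative)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bias_aucs(labels, scores, in_subgroup):
    """Subgroup AUC plus the BPSN/BNSP AUCs described above."""
    rows = list(zip(labels, scores, in_subgroup))
    slices = {
        "subgroup": [(y, s) for y, s, g in rows if g],
        # background positives vs. subgroup negatives:
        "bpsn": [(y, s) for y, s, g in rows if (g and y == 0) or (not g and y == 1)],
        # background negatives vs. subgroup positives:
        "bnsp": [(y, s) for y, s, g in rows if (g and y == 1) or (not g and y == 0)],
    }
    return {name: auc(*zip(*part)) for name, part in slices.items()}

# Toy scores where the subgroup's scores are shifted upward:
labels = [0, 0, 1, 1, 0, 0, 1, 1]
scores = [0.85, 0.7, 0.8, 0.9, 0.1, 0.2, 0.8, 0.9]
subgrp = [True] * 4 + [False] * 4
print(bias_aucs(labels, scores, subgrp))  # low BPSN AUC flags the right shift
```

Here the subgroup's negative examples score nearly as high as the positives, so the BPSN AUC drops while BNSP stays at 1.0, which is exactly the "Subgroup Shift (Right)" signature.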

19.12 Results


20.Release Responsibly

Model Cards for Model Reporting
  • There is currently no common practice for reporting a model's performance when it is released
  • What It Does
    • A report focused on transparent reporting of model performance, to encourage responsible AI adoption and application
  • How It Works
    • An easily discoverable and usable artifact at important steps of the user journey, for a diverse set of users and public stakeholders
  • Why It Matters
    • It holds model developers accountable for releasing high-quality, fair models
  • Sections: Intended Use, Factors and Subgroups, Metrics and Data, Considerations, Recommendations
Disaggregated Intersectional Evaluation

21.Moving from majority representation... to diverse representation... for ethical AI


22.Thanks

23.Video Tutorial

Click through to Bilibili to watch the videos with bilingual subtitles


24.References

  • Flip-book version of this lecture's notes
  • Stanford CS224n Deep Learning and NLP course study guide
  • Stanford CS224n Deep Learning and NLP course assignment walkthroughs
  • [Bilingual-subtitled videos] Stanford CS224n | Natural Language Processing with Deep Learning (2019, all 20 lectures)
  • Stanford official site | CS224n: Natural Language Processing with Deep Learning
