Importance Sampling


Copyright notice: This is an original post by the author, released under the CC 4.0 BY-SA license. Please include the original source link and this notice when reposting.

Original link: https://blog.csdn.net/Solo95/article/details/102673140

Importance sampling (IS) is a commonly used method for estimating the expected value of a function under a given probability distribution. This post first gives a brief introduction to IS and then describes its application to policy evaluation.

Importance Sampling

  • Goal: estimate the expected value $\mathbb{E}_{x\sim p}[f(x)]$ of a function $f(x)$ under a probability distribution $p(x)$.
  • What we have are samples $x_1, x_2, \dots, x_n$ drawn from a different distribution $q(x)$.
  • Under certain assumptions (in particular, $q(x) > 0$ wherever $p(x)f(x) \neq 0$), these samples can be reweighted to give an unbiased estimate of $\mathbb{E}_{x\sim p}[f(x)]$:

$$
\mathbb{E}_{x\sim p}[f(x)] = \int_x p(x)\,f(x)\,dx = \int_x q(x)\,\frac{p(x)}{q(x)}\,f(x)\,dx = \mathbb{E}_{x\sim q}\!\left[\frac{p(x)}{q(x)}\,f(x)\right] \approx \frac{1}{n}\sum_{i=1}^{n}\frac{p(x_i)}{q(x_i)}\,f(x_i)
$$
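
As a concrete illustration (not part of the original post), here is a minimal NumPy sketch of this weighted-average estimator; the target $p$, the proposal $q$, the function $f$, and the sample size are all assumptions chosen only so the answer is easy to check.

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2), written out to keep the example dependency-free."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

# Toy setup (assumed): target p = N(0, 1), proposal q = N(1, 2), f(x) = x^2,
# so the true value E_{x~p}[f(x)] is 1.
f = lambda x: x ** 2
n = 100_000

x = rng.normal(loc=1.0, scale=2.0, size=n)              # samples drawn from q, not p
w = normal_pdf(x, 0.0, 1.0) / normal_pdf(x, 1.0, 2.0)   # importance weights p(x)/q(x)

is_estimate = np.mean(w * f(x))                          # (1/n) * sum_i w_i * f(x_i)
print(is_estimate)                                       # should be close to 1.0
```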

Importance Sampling (IS) for Policy Evaluation

Let $h_j$ denote the history of states, actions, and rewards in episode $j$: $h_j = (s_{j,1}, a_{j,1}, r_{j,1}, s_{j,2}, a_{j,2}, r_{j,2}, \dots, s_{j,L_j})$, where $s_{j,L_j}$ is the terminal state.

Then

$$
\begin{aligned}
p(h_j \mid \pi, s = s_{j,1}) &= p(a_{j,1}\mid s_{j,1})\,p(r_{j,1}\mid s_{j,1},a_{j,1})\,p(s_{j,2}\mid s_{j,1},a_{j,1})\,p(a_{j,2}\mid s_{j,2})\,p(r_{j,2}\mid s_{j,2},a_{j,2})\,p(s_{j,3}\mid s_{j,2},a_{j,2})\cdots \\
&= \prod_{t=1}^{L_j-1} p(a_{j,t}\mid s_{j,t})\,p(r_{j,t}\mid s_{j,t},a_{j,t})\,p(s_{j,t+1}\mid s_{j,t},a_{j,t}) \\
&= \prod_{t=1}^{L_j-1} \pi(a_{j,t}\mid s_{j,t})\,p(r_{j,t}\mid s_{j,t},a_{j,t})\,p(s_{j,t+1}\mid s_{j,t},a_{j,t})
\end{aligned}
$$

Now suppose $h_j$ is the history of states, actions, and rewards in episode $j$ where the actions were sampled from a behavior policy $\pi_2$: $h_j = (s_{j,1}, a_{j,1}, r_{j,1}, s_{j,2}, a_{j,2}, r_{j,2}, \dots, s_{j,L_j})$.

Then the importance-sampling estimate of $\pi_1$'s value from $n$ such episodes is

$$
V^{\pi_1}(s) \approx \frac{1}{n}\sum_{j=1}^{n} \frac{p(h_j\mid\pi_1,s)}{p(h_j\mid\pi_2,s)}\,G(h_j)
$$

In the ratio, the reward and next-state transition terms cancel, leaving only $\prod_{t=1}^{L_j-1}\pi_1(a_{j,t}\mid s_{j,t})/\pi_2(a_{j,t}\mid s_{j,t})$, so no model of the dynamics is needed to compute the weights.
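
A minimal sketch of this estimator (not from the original post), assuming episodes are stored as lists of (state, action, reward) tuples and that $\pi_1$ and $\pi_2$ are tabular policies given as nested dictionaries; the function name `is_policy_value` is hypothetical:

```python
import numpy as np

def is_policy_value(episodes, pi1, pi2, gamma=1.0):
    """Ordinary importance-sampling estimate of V^{pi1}(s) for the shared start state.

    episodes: list of episodes, each a list of (state, action, reward) tuples,
              with the actions drawn from the behavior policy pi2.
    pi1, pi2: dicts mapping state -> {action: probability}.
    """
    per_episode = []
    for episode in episodes:
        weight, ret, discount = 1.0, 0.0, 1.0
        for state, action, reward in episode:
            # Dynamics and reward terms cancel in the ratio; only policy terms remain.
            weight *= pi1[state][action] / pi2[state][action]
            ret += discount * reward
            discount *= gamma
        per_episode.append(weight * ret)
    # (1/n) * sum_j w_j * G(h_j)
    return float(np.mean(per_episode))
```

Because the weight is a product over the whole episode, its variance can grow quickly with episode length; that is the usual practical caveat with ordinary importance sampling.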

Importance Sampling (IS) for Policy Evaluation

  • Goal: evaluate the value $V^{\pi_1}(s)$ of a policy $\pi_1$, given episodes generated by a different policy $\pi_2$
    • $s_1, a_1, r_1, s_2, a_2, r_2, \dots$, where the actions are sampled from $\pi_2$
  • We can observe the returns generated in the MDP $M$ under a policy: $G_t = r_t + \gamma r_{t+1} + \gamma^2 r_{t+2} + \gamma^3 r_{t+3} + \cdots$
  • What we want is $V^{\pi_1}(s) = \mathbb{E}_{\pi_1}[G_t \mid s_t = s]$
  • IS here is Monte Carlo off-policy evaluation from data gathered by $\pi_2$ (a toy end-to-end example follows this list)
  • Model-free
  • No Markov assumption required
  • Under certain assumptions, an unbiased and consistent estimator of $V^{\pi_1}$
  • Can be used to evaluate a policy while the agent interacts with the environment using a different (non-target) control policy
  • Can also be used with batch learning
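
To make the last two points concrete, here is a self-contained toy experiment (entirely illustrative; the one-step MDP, the policies, and all numbers are made up): a batch of episodes is collected with a behavior policy $\pi_2$, and the value of a different target policy $\pi_1$ is recovered by reweighting the returns.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-step MDP: in state "A", action "left" yields reward 1 and
# "right" yields reward 0, then the episode terminates.
rewards = {"left": 1.0, "right": 0.0}

# Target policy pi1 and behavior policy pi2 (tabular, made up for this example).
pi1 = {"A": {"left": 0.9, "right": 0.1}}
pi2 = {"A": {"left": 0.5, "right": 0.5}}

# Collect a batch of episodes by acting with the behavior policy pi2.
n_episodes = 50_000
actions = rng.choice(["left", "right"], size=n_episodes,
                     p=[pi2["A"]["left"], pi2["A"]["right"]])

# Ordinary importance sampling: weight each return by pi1(a|s)/pi2(a|s).
weights = np.array([pi1["A"][a] / pi2["A"][a] for a in actions])
returns = np.array([rewards[a] for a in actions])
v_is = np.mean(weights * returns)

print(f"IS estimate of V^pi1(A): {v_is:.3f}   (true value: 0.9)")
```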
