ABSTRACT
Recommender systems play a crucial role in mitigating information overload by suggesting personalized items or services to users.
In plain terms: recommender systems ease information overload by recommending personalized items to people.
The vast majority of traditional recommender systems consider the recommendation procedure as a static process and make recommendations following a fixed strategy.
The limitation of traditional recommender systems is that their recommendation strategy is fixed.
In this paper, we propose a novel recommender system with the capability of continuously improving its strategies during its interactions with users. We model the sequential interactions between users and the recommender system as a Markov Decision Process (MDP) and leverage Reinforcement Learning (RL) to automatically learn the optimal strategies, recommending items in a trial-and-error manner and receiving reinforcement signals from users' feedback.
With reinforcement learning, the recommender system can update its strategy on the fly from user feedback, improving recommendation quality.
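To make the MDP view concrete, here is a minimal sketch of the agent-environment interaction loop (my own illustration, not code from the paper; `agent`, `env`, and their methods are hypothetical names). The state is the user's recent interaction history, the action is the recommendation, and the reward comes from the user's feedback.

```python
# Sketch of the MDP framing of recommendation (hypothetical interfaces).
# State: the user's recent interaction history; Action: the recommended item(s);
# Reward: derived from the user's feedback (click / purchase / skip).

def run_episode(agent, env, max_steps=50):
    state = env.reset()                                   # initial user history
    for _ in range(max_steps):
        action = agent.act(state)                         # recommend (trial and error)
        next_state, reward, done = env.step(action)       # user feedback as reinforcement
        agent.update(state, action, reward, next_state)   # improve the policy
        state = next_state
        if done:                                          # user ends the session
            break
```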
In particular, we introduce an online user-agent interaction environment simulator, which allows model parameters to be pre-trained and evaluated offline before the model is deployed online.
Contribution 1: a user-agent interaction simulator, which can be used to pre-train the model and to evaluate it before it is put into online recommendation.
Moreover, we validate the importance of list-wise recommendations during the interactions between users and the agent, and develop a novel approach to incorporate them into the proposed framework LIRD for list-wise recommendations.
Contribution 2: they validate the importance of list-wise recommendations in the user-agent interaction and propose a new approach for list-wise recommendation.
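The practical difference is that the action is a whole list of K items rather than a single item. A toy illustration of "action = a list" (my own sketch with hypothetical names, assuming candidates are scored against the current state and the top-k form the list; this is not LIRD's actual actor):

```python
import numpy as np

def listwise_action(item_embeddings, state_vector, k=4):
    """Toy list-wise action: score all candidate items against the current
    state and return the top-k as one recommendation list (the RL action)."""
    scores = item_embeddings @ state_vector   # relevance score per candidate item
    top_k = np.argsort(-scores)[:k]           # indices of the k highest-scoring items
    return top_k                              # the whole list is one action
```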
The experimental results based on a real-world e-commerce dataset demonstrate the effectiveness of the proposed framework.
Then comes the customary show-off section: experiments demonstrating that the method works well.
Online Environment Simulator
In RL-based recommender systems, offline training mainly relies on logged data. The logs are reorganized, e.g., in a leave-one-out fashion or some other scheme, but always in temporal order: within each sliding window, the last item is treated as the action. The data prepared this way is used to train a simulator, such as the off-policy actor-critic approach mentioned in Part 4 of this series. The proposed model then interacts with the simulator to generate plenty of data for its own training.
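A minimal sketch of that sliding-window construction (my own code, assuming per-user, time-ordered lists of (item, feedback) events; `sessions` and the helper name are hypothetical). Each window's earlier items become the state, the item that follows is the action, and the resulting (state, action, reward) triples are what the simulator is fit on:

```python
def build_simulator_data(sessions, window=5):
    """Turn time-ordered interaction logs into (state, action, reward) triples
    for simulator training. `sessions` maps user -> [(item_id, feedback), ...]
    sorted by timestamp; feedback is e.g. 1 for click/purchase, 0 for skip."""
    triples = []
    for user, events in sessions.items():
        for t in range(window, len(events)):
            state = [item for item, _ in events[t - window:t]]  # last `window` items
            action, feedback = events[t]                        # next item = the action
            triples.append((state, action, feedback))           # reward from feedback
    return triples
```

Once the simulator is fit on these triples, the agent can be trained against it offline, which is exactly the pre-training role described in the abstract.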
Now let's walk through the simulator described in the paper.