Transfer Learning with DANN: An Implementation

2021-09-08 11:05:06

Individual differences in EEG signals lead to poor generalization of EEG-based affective models. Transfer learning, as introduced in class, can mitigate these subject differences and achieve an appreciable improvement in recognition performance. In this assignment, you are asked to build and evaluate a cross-subject affective model using Domain-Adversarial Neural Networks (DANN) on the SEED dataset.

You are required to apply leave-one-subject-out cross validation to classify emotions with the DANN model and to compare its results against a baseline model (the choice of baseline is up to you). Under the leave-one-subject-out configuration, each affective model is trained with one subject held out as the target domain and the remaining subjects as the source domain. In the end, there should be five DANN models, one per held-out subject, and you should report both the individual recognition accuracies and the mean recognition accuracy.
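The evaluation protocol above can be sketched as a generic loop. Here `subject_data`, `train_fn`, and `eval_fn` are hypothetical placeholders (not from the assignment) for the per-subject SEED data and whatever DANN training and evaluation routines you implement:

```python
def leave_one_subject_out(subject_data, train_fn, eval_fn):
    """Run leave-one-subject-out cross validation.

    subject_data: list with one (features, labels) entry per subject.
    train_fn(source, target) -> model; eval_fn(model, target) -> accuracy.
    """
    accuracies = []
    for target_idx, target in enumerate(subject_data):
        # The held-out subject is the target domain; all others are the source.
        source = [d for i, d in enumerate(subject_data) if i != target_idx]
        model = train_fn(source, target)
        accuracies.append(eval_fn(model, target))
    mean_acc = sum(accuracies) / len(accuracies)
    return accuracies, mean_acc
```

The per-subject accuracies and their mean that the assignment asks for then fall directly out of the returned values.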

Here are some suggested parameter settings. The feature extractor has 2 layers, both with 128 nodes. The label predictor and the domain discriminator each have 3 layers, with 64, 64, and C nodes respectively, where C is the number of emotion classes to be classified.

# Name: DANN_1
# Author: Reacubeth
# Time: 2021/4/22 19:39
# Mail: noverfitting@gmail.com
# Site: www.omegaxyz.com
# *_*coding:utf-8 *_*
 
import torch
from torch import nn
 
 
class ReversalLayer(torch.autograd.Function):
    # Gradient reversal layer: identity on the forward pass; the backward
    # pass multiplies incoming gradients by -alpha. Note that a new-style
    # autograd.Function is never instantiated, so it needs no __init__.
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)
 
    @staticmethod
    def backward(ctx, grad_output):
        # Flip the sign of the gradient and scale it by alpha; the second
        # return value (for alpha) is None because alpha needs no gradient.
        return grad_output.neg() * ctx.alpha, None
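The reversal layer's behavior is easy to sanity-check: the forward pass is the identity, while the backward pass negates and scales gradients. A standalone check (the layer is redefined here as `GradReverse` so the snippet runs on its own):

```python
import torch


class GradReverse(torch.autograd.Function):
    # Same mechanism as ReversalLayer: identity forward, -alpha * grad backward.
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg() * ctx.alpha, None


x = torch.ones(3, requires_grad=True)
GradReverse.apply(x, 0.5).sum().backward()
print(x.grad)  # tensor([-0.5000, -0.5000, -0.5000])
```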
 
 
class DANN(nn.Module):
    def __init__(self, input_dim, hid_dim_1, hid_dim_2, class_num, domain_num):
        super(DANN, self).__init__()
        # Feature extractor shared by the label predictor and the
        # domain discriminator: two fully connected layers.
        self.feature_extractor = nn.Sequential(nn.Linear(input_dim, hid_dim_1 * 2),
                                               nn.ReLU(),
                                               nn.Linear(hid_dim_1 * 2, hid_dim_1),
                                               nn.ReLU(),
                                               )
 
        # Label predictor; outputs raw logits (nn.CrossEntropyLoss applies
        # log-softmax internally, so no Softmax layer is needed here).
        self.classifier = nn.Sequential(nn.Linear(hid_dim_1, hid_dim_2),
                                        nn.ReLU(),
                                        nn.Linear(hid_dim_2, class_num),
                                        )
 
        # Domain discriminator; fed with gradient-reversed features, also
        # outputting raw logits.
        self.domain_classifier = nn.Sequential(nn.Linear(hid_dim_1, hid_dim_2),
                                               nn.ReLU(),
                                               nn.Linear(hid_dim_2, domain_num),
                                               )
 
    def forward(self, X, alpha):
        feature = self.feature_extractor(X)
        class_res = self.classifier(feature)
        # Reverse the gradients flowing back from the domain discriminator so
        # the feature extractor learns domain-invariant representations.
        reversed_feature = ReversalLayer.apply(feature, alpha)
        domain_res = self.domain_classifier(reversed_feature)
        return feature, class_res, domain_res
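To put the model to work, each training step combines a label loss on source samples with a domain loss on both domains. The schedule for the reversal coefficient below is the one suggested in the original DANN paper (Ganin et al., 2016); the optimizer, batch contents, and the `train_step` helper itself are illustrative assumptions rather than part of this post:

```python
import math

import torch
from torch import nn


def grl_alpha(p):
    # Reversal-coefficient schedule from the DANN paper; p in [0, 1] is
    # training progress. Ramps smoothly from 0 up toward 1.
    return 2.0 / (1.0 + math.exp(-10.0 * p)) - 1.0


def train_step(model, optimizer, src_x, src_y, tgt_x, alpha):
    # One adversarial training step. Domain labels: 0 = source, 1 = target.
    criterion = nn.CrossEntropyLoss()
    x = torch.cat([src_x, tgt_x])
    domain_y = torch.cat([torch.zeros(len(src_x), dtype=torch.long),
                          torch.ones(len(tgt_x), dtype=torch.long)])
    _, class_out, domain_out = model(x, alpha)
    # Label loss only on the source samples: target labels are unseen.
    loss = criterion(class_out[:len(src_x)], src_y) + criterion(domain_out, domain_y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

With the DANN class above, one common setup is `DANN(input_dim, 128, 64, class_num, 2)`: the suggested layer sizes plus two domain outputs for the binary source-vs-target discrimination.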
