Introduction to Visdom
Visdom is a real-time visualization toolkit developed by Facebook for PyTorch. It plays the same role that TensorBoard plays for TensorFlow: flexible, efficient, and with a clean interface. Let's learn how to use it! For more details on Visdom, see the official repository:
https://github.com/facebookresearch/visdom
First, take a look at the visualization interface the official project provides.
Installing Visdom
- Installation is very simple: open a terminal (cmd on Windows) and run the following command.
pip install visdom
Using Visdom
Much like TensorBoard for TensorFlow, Visdom requires you to start a listening server in a terminal first, then visit the displayed address, http://localhost:8097, in a browser. If this step throws an error, don't panic: recent releases of visdom have fixed the common startup problems, so upgrading with pip install --upgrade visdom usually resolves it.
Start the listening server
python -m visdom.server  # or simply: visdom
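Once the server is running, it can help to verify the connection from Python before drawing anything. A minimal sketch, assuming the default server address and port:

from visdom import Visdom

# connect to the server started above; these are visdom's defaults
viz = Visdom(server='http://localhost', port=8097)
assert viz.check_connection(), 'could not reach the visdom server'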
Overview of Visdom's visualization functions and their parameters
- Detailed usage for each function is documented in the official repository above; for brevity, this section focuses on the two most commonly used, line and image.
Basic visdom visualization functions
- vis.image: single image
- vis.line: line plot
- vis.images: list of images
- vis.text: arbitrary HTML / text output
- vis.properties: property grid
- vis.audio: audio
- vis.video: video
- vis.svg: SVG object
- vis.matplot: matplotlib plot
- vis.save: serialize the server state
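To get a feel for a few of these calls, here is a small self-contained sketch (window names are arbitrary; the random arrays merely stand in for real data):

import numpy as np
from visdom import Visdom

viz = Visdom()
viz.text('Hello, Visdom!', win='demo_text')                   # render text / HTML
viz.image(np.random.rand(3, 128, 128), win='demo_image')      # a single C x H x W image
viz.images(np.random.rand(8, 3, 64, 64), win='demo_images')   # a batch of images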
Parameters of the above functions
- Note that all opts parameters can be passed in Python dict format (see the sketch after this list):
- opts.title: figure title
- opts.width: figure width
- opts.height: figure height
- opts.showlegend: show the legend (true or false)
- opts.xtype: type of x-axis ('linear' or 'log')
- opts.xlabel: label of x-axis
- opts.xtick: show ticks on x-axis (boolean)
- opts.xtickmin: first tick on x-axis (number)
- opts.xtickmax: last tick on x-axis (number)
- opts.xtickvals: locations of ticks on x-axis (table of numbers)
- opts.xticklabels: tick labels on x-axis (table of strings)
- opts.xtickstep: distance between ticks on x-axis (number)
- opts.xtickfont: font for x-axis labels (dict of font information)
- opts.ytype: type of y-axis ('linear' or 'log')
- opts.ylabel: label of y-axis
- opts.ytick: show ticks on y-axis (boolean)
- opts.ytickmin: first tick on y-axis (number)
- opts.ytickmax: last tick on y-axis (number)
- opts.ytickvals: locations of ticks on y-axis (table of numbers)
- opts.yticklabels: tick labels on y-axis (table of strings)
- opts.ytickstep: distance between ticks on y-axis (number)
- opts.ytickfont: font for y-axis labels (dict of font information)
- opts.marginleft: left margin (in pixels)
- opts.marginright: right margin (in pixels)
- opts.margintop: top margin (in pixels)
- opts.marginbottom: bottom margin (in pixels)
- opts.legend=['...']: legend labels, one per curve
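As noted above, opts can be given either via dict(...) or as a plain dict literal; a minimal sketch (the window name and labels are arbitrary):

import numpy as np
from visdom import Visdom

viz = Visdom()
viz.line(
    Y=np.sqrt(np.arange(10)),   # any 1-D array of y values
    X=np.arange(10),
    win='opts_demo',
    opts={
        'title': 'opts demo',
        'xlabel': 'step',
        'ylabel': 'value',
        'showlegend': True,
        'legend': ['sqrt(step)'],
    },
)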
Plotting curves in real time
- The approach: draw a starting point first, then append new data points as they arrive.
'''
Setting up a single tracked curve
'''
from visdom import Visdom

viz = Visdom()  # initialize the Visdom client
viz.line([0.],                           # first Y coordinate
         [0.],                           # first X coordinate
         win="train loss",               # window name
         opts=dict(title='train_loss')   # figure title
         )  # draw the starting point
'''
model data
'''
viz.line([1.],               # next Y coordinate
         [1.],               # next X coordinate
         win="train loss",   # same window name, so the point lands in the same plot
         update='append'     # append after the previous point
         )
Setting up a new session...
'train loss'
The interface now displays the following.
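In a real training script, the same pattern runs inside the loop: one viz.line(..., update='append') call per logging step. A self-contained sketch with simulated loss values (the decaying curve below is invented purely for illustration):

import time
import numpy as np
from visdom import Visdom

viz = Visdom()
viz.line([1.], [0.], win="live demo", opts=dict(title='live_demo'))
for step in range(1, 50):
    fake_loss = float(np.exp(-step / 20) + 0.05 * np.random.rand())  # simulated loss
    viz.line([fake_loss], [step], win="live demo", update='append')
    time.sleep(0.1)  # slow down so the curve visibly grows in the browser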
'''
Plotting multiple curves: simply pass a vector of y values
'''
viz = Visdom(env='my_wind')  # note: this uses a new environment
# draw the starting point
viz.line([[0.0, 0.0]],                  # starting Y values, one per curve
         [0.],                          # starting X value
         win="test loss",               # window name
         opts=dict(title='test_loss')   # figure title
         )
'''
model data
'''
viz.line([[1.1, 1.5]],       # next Y values
         [1.],               # next X value
         win="test loss",    # window name
         update='append'     # append after the previous points
         )
'test loss'
Note that to see this plot you need to switch the environment in the web UI to my_wind, as shown in the figure.
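To give each curve a readable name, you can also pass a legend in opts; a small sketch (the curve names are arbitrary):

from visdom import Visdom

viz = Visdom(env='my_wind')
viz.line([[0.0, 0.0]], [0.],
         win="test loss",
         opts=dict(title='test_loss',
                   legend=['loss_a', 'loss_b']))  # one label per curve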
Displaying images
- Notably, Visdom also supports displaying a batch of images at once.
import numpy as np

image = np.random.randn(6, 3, 200, 300)  # a batch of 6 images (N x C x H x W)
viz.images(image, win='x')
'x'
Visualizing a dataset
import torch
from torchvision import datasets, transforms

train_loader = torch.utils.data.DataLoader(datasets.MNIST(
    'D:/data/MNIST',
    train=True,
    download=True,
    transform=transforms.Compose(
        [transforms.ToTensor()])), batch_size=128, shuffle=True)
sample = next(iter(train_loader))  # fetch one batch via the iterator
# sample[0] holds the image data, sample[1] the labels; nrow=16 shows 16 images per row
viz.images(sample[0], nrow=16, win='mnist', opts=dict(title='mnist'))
'mnist'
The visualization result is shown in the figure.
Next, let's walk through a concrete training run visualized with visdom.
A Visdom usage example
To keep the demonstration of Visdom's features simple, we visualize training on the built-in MNIST dataset.
'''
Import the libraries
'''

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from visdom import Visdom
import numpy as np
'''
Build a simple model: a multilayer perceptron of linear layers and ReLUs
'''
class MLP(nn.Module):

    def __init__(self):
        super(MLP, self).__init__()

        self.model = nn.Sequential(
            nn.Linear(784, 200),
            nn.ReLU(inplace=True),
            nn.Linear(200, 200),
            nn.ReLU(inplace=True),
            nn.Linear(200, 10),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        x = self.model(x)

        return x
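Before training, a quick sanity check (not part of the original walkthrough) confirms that the model maps flattened 28x28 inputs to 10 logits:

# feed a dummy batch through the untrained MLP and inspect the output shape
net = MLP()
x = torch.randn(4, 28 * 28)   # a batch of 4 flattened images
print(net(x).shape)           # expected: torch.Size([4, 10])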
batch_size = 128
learning_rate = 0.01
epochs = 10

train_loader = torch.utils.data.DataLoader(datasets.MNIST(
    'D:/data/MNIST',
    train=True,
    download=True,
    transform=transforms.Compose(
        [transforms.ToTensor(),
         transforms.Normalize((0.1307, ), (0.3081, ))])),
    batch_size=batch_size,
    shuffle=True)
test_loader = torch.utils.data.DataLoader(datasets.MNIST(
    'D:/data/MNIST',
    train=False,
    transform=transforms.Compose(
        [transforms.ToTensor(),
         transforms.Normalize((0.1307, ), (0.3081, ))])),
    batch_size=batch_size,
    shuffle=True)

# initialize the Visdom client here
viz = Visdom()
# draw the starting point of the loss curve
viz.line([0.], [0.], win="train loss", opts=dict(title='train_loss'))
device = torch.device('cuda:0')
net = MLP().to(device)
optimizer = optim.SGD(net.parameters(), lr=learning_rate)
criteon = nn.CrossEntropyLoss().to(device)

for epoch in range(epochs):

    for batch_idx, (data, target) in enumerate(train_loader):
        data = data.view(-1, 28 * 28)
        data, target = data.to(device), target.to(device)
        logits = net(data)
        loss = criteon(logits, target)

        optimizer.zero_grad()
        loss.backward()
        # print(w1.grad.norm(), w2.grad.norm())
        optimizer.step()

        if batch_idx % 100 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                100. * batch_idx / len(train_loader), loss.item()))

    test_loss = 0
    correct = 0
    for data, target in test_loader:
        data = data.view(-1, 28 * 28)
        data, target = data.to(device), target.to(device)
        logits = net(data)
        test_loss += criteon(logits, target).item()  # accumulate the batch losses

        pred = logits.argmax(dim=1)
        correct += pred.eq(target).float().sum().item()  # accumulate correct predictions

    test_loss /= len(test_loader.dataset)
    # plot the epoch index against the corresponding test-set loss
    viz.line([test_loss], [epoch], win="train loss", update='append')
    print(
        '\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
            test_loss, correct, len(test_loader.dataset),
            100. * correct / len(test_loader.dataset)))
Setting up a new session...

Train Epoch: 0 [0/60000 (0%)]	Loss: 2.295465
Train Epoch: 0 [12800/60000 (21%)]	Loss: 2.186591
Train Epoch: 0 [25600/60000 (43%)]	Loss: 1.680299
Train Epoch: 0 [38400/60000 (64%)]	Loss: 1.233092
Train Epoch: 0 [51200/60000 (85%)]	Loss: 1.132240

Test set: Average loss: 0.0079, Accuracy: 7151.0/10000 (72%)

Train Epoch: 1 [0/60000 (0%)]	Loss: 1.034136
Train Epoch: 1 [12800/60000 (21%)]	Loss: 0.717574
Train Epoch: 1 [25600/60000 (43%)]	Loss: 0.843303
Train Epoch: 1 [38400/60000 (64%)]	Loss: 0.908609
Train Epoch: 1 [51200/60000 (85%)]	Loss: 0.701709

Test set: Average loss: 0.0062, Accuracy: 7341.0/10000 (73%)

Train Epoch: 2 [0/60000 (0%)]	Loss: 0.780809
Train Epoch: 2 [12800/60000 (21%)]	Loss: 0.847154
Train Epoch: 2 [25600/60000 (43%)]	Loss: 0.899906
Train Epoch: 2 [38400/60000 (64%)]	Loss: 0.665957
Train Epoch: 2 [51200/60000 (85%)]	Loss: 0.619249

Test set: Average loss: 0.0058, Accuracy: 7412.0/10000 (74%)

Train Epoch: 3 [0/60000 (0%)]	Loss: 0.695548
Train Epoch: 3 [12800/60000 (21%)]	Loss: 0.658115
Train Epoch: 3 [25600/60000 (43%)]	Loss: 0.544909
Train Epoch: 3 [38400/60000 (64%)]	Loss: 0.553123
Train Epoch: 3 [51200/60000 (85%)]	Loss: 0.685904

Test set: Average loss: 0.0055, Accuracy: 7458.0/10000 (75%)

Train Epoch: 4 [0/60000 (0%)]	Loss: 0.814670
Train Epoch: 4 [12800/60000 (21%)]	Loss: 0.752603
Train Epoch: 4 [25600/60000 (43%)]	Loss: 0.694026
Train Epoch: 4 [38400/60000 (64%)]	Loss: 0.641801
Train Epoch: 4 [51200/60000 (85%)]	Loss: 0.693593

Test set: Average loss: 0.0054, Accuracy: 7479.0/10000 (75%)

Train Epoch: 5 [0/60000 (0%)]	Loss: 0.676913
Train Epoch: 5 [12800/60000 (21%)]	Loss: 0.465759
Train Epoch: 5 [25600/60000 (43%)]	Loss: 0.756419
Train Epoch: 5 [38400/60000 (64%)]	Loss: 0.573767
Train Epoch: 5 [51200/60000 (85%)]	Loss: 0.743377

Test set: Average loss: 0.0053, Accuracy: 7527.0/10000 (75%)

Train Epoch: 6 [0/60000 (0%)]	Loss: 0.663292
Train Epoch: 6 [12800/60000 (21%)]	Loss: 0.555222
Train Epoch: 6 [25600/60000 (43%)]	Loss: 0.802179
Train Epoch: 6 [38400/60000 (64%)]	Loss: 0.828413
Train Epoch: 6 [51200/60000 (85%)]	Loss: 0.622156

Test set: Average loss: 0.0053, Accuracy: 7551.0/10000 (76%)

Train Epoch: 7 [0/60000 (0%)]	Loss: 0.731522
Train Epoch: 7 [12800/60000 (21%)]	Loss: 0.637348
Train Epoch: 7 [25600/60000 (43%)]	Loss: 0.776924
Train Epoch: 7 [38400/60000 (64%)]	Loss: 0.648009
Train Epoch: 7 [51200/60000 (85%)]	Loss: 0.639944

Test set: Average loss: 0.0052, Accuracy: 7561.0/10000 (76%)

Train Epoch: 8 [0/60000 (0%)]	Loss: 0.673641
Train Epoch: 8 [12800/60000 (21%)]	Loss: 0.667220
Train Epoch: 8 [25600/60000 (43%)]	Loss: 0.448928
Train Epoch: 8 [38400/60000 (64%)]	Loss: 0.593169
Train Epoch: 8 [51200/60000 (85%)]	Loss: 0.677707

Test set: Average loss: 0.0051, Accuracy: 7580.0/10000 (76%)

Train Epoch: 9 [0/60000 (0%)]	Loss: 0.713350
Train Epoch: 9 [12800/60000 (21%)]	Loss: 0.622664
Train Epoch: 9 [25600/60000 (43%)]	Loss: 0.724408
Train Epoch: 9 [38400/60000 (64%)]	Loss: 0.661977
Train Epoch: 9 [51200/60000 (85%)]	Loss: 0.539243

Test set: Average loss: 0.0051, Accuracy: 7602.0/10000 (76%)
The resulting loss curve is shown in the figure.