This article is part of a series by 秃头小苏, who is committed to describing problems in the plainest possible language
🍊 Recommended column: Deep Learning Network Principles and Practice
🍊 Current goal: write every article in the column well
🍊 Support 小苏: like 👍🏼, save ⭐, comment 📩
Using TensorBoard in PyTorch
Preface
I plan to post occasional PyTorch tutorials recording methods I run into often, so I don't waste time searching for them again every time. I have previously written two PyTorch tutorials; if you are interested, take a look.
This installment covers using TensorBoard. As before, it is based on the official PyTorch tutorial, with my own understanding mixed in; I hope you get something out of it. 🌾🌾🌾
Ready? Then let's get started!!! 🥂🥂🥂
Importing the packages
First we need to import the relevant packages. For this section the most important one is SummaryWriter. If you haven't installed TensorBoard yet, remember to install it first; I shouldn't need to teach you how, a quick search and one command will do it. 🍋🍋🍋
# PyTorch model and training necessities
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
# Image datasets and image manipulation
import torchvision
import torchvision.transforms as transforms
# Image display
import matplotlib.pyplot as plt
import numpy as np
# PyTorch TensorBoard support
from torch.utils.tensorboard import SummaryWriter
Loading the dataset
This time we use the FashionMNIST dataset. It is a lot like MNIST: single-channel 28×28 images, except the content is clothing, shoes and other apparel. Loading it is straightforward, so I won't go over it; if anything is unclear, see my earlier posts. Note that the official tutorial sets num_workers=2 in DataLoader; if you are training on CPU or debugging, be sure to set num_workers to 0.
# Gather datasets and prepare them for consumption
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5,), (0.5,))])

# Store separate training and validation splits in ./data
training_set = torchvision.datasets.FashionMNIST('./data',
                                                 download=True,
                                                 train=True,
                                                 transform=transform)
validation_set = torchvision.datasets.FashionMNIST('./data',
                                                   download=True,
                                                   train=False,
                                                   transform=transform)

training_loader = torch.utils.data.DataLoader(training_set,
                                              batch_size=4,
                                              shuffle=True,
                                              num_workers=0)
validation_loader = torch.utils.data.DataLoader(validation_set,
                                                batch_size=4,
                                                shuffle=False,
                                                num_workers=0)
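As a quick aside on the transform: Normalize((0.5,), (0.5,)) applies (x - mean) / std per channel, so the [0, 1] range produced by ToTensor becomes [-1, 1]. A minimal sketch of the arithmetic:

```python
# What transforms.Normalize((0.5,), (0.5,)) does to a single pixel value:
# ToTensor scales pixels to [0, 1]; Normalize then maps x -> (x - mean) / std.
def normalize(x, mean=0.5, std=0.5):
    return (x - mean) / std

print(normalize(0.0))  # -1.0  (black pixel)
print(normalize(1.0))  # 1.0   (white pixel)
print(normalize(0.5))  # 0.0   (mid-gray)
```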
Visualizing images with matplotlib
Let's first display a batch of images with matplotlib. We save the grid to the result folder, then take a look at the outcome.
# Extract a batch of 4 images
dataiter = iter(training_loader)
images, labels = next(dataiter)

# Create a grid from the images and save it to ./result
img_grid = torchvision.utils.make_grid(images)
img_grid = img_grid / 2 + 0.5  # unnormalize from [-1, 1] back to [0, 1] for display

import os
os.makedirs("./result", exist_ok=True)  # save_image fails if the folder does not exist
torchvision.utils.save_image(img_grid, "./result/img_grid.bmp")
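If you are wondering what size the saved grid is: assuming torchvision's defaults (padding=2, and a single row here since 4 images is fewer than the default nrow=8), make_grid places padding pixels between and around the cells. A rough sketch of that arithmetic:

```python
# Sketch of the grid size make_grid produces, assuming torchvision's defaults
# (padding=2, one row for 4 images). Each cell is padded on one side, plus one
# final padding strip, so for n images of size h x w arranged in `rows` rows:
def grid_size(n_images, h, w, padding=2, rows=1):
    cols = n_images // rows
    return (h + padding) * rows + padding, (w + padding) * cols + padding

print(grid_size(4, 28, 28))  # (32, 122) for our batch of four 28x28 images
```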
Visualizing images with TensorBoard
First we create a SummaryWriter; the argument 'runs/fashion_mnist_experiment_1' is the directory the event files are saved to. Then we use add_image to add the image. The flush() call makes sure the data is written to disk.
# Default log_dir argument is "runs" - but it's good to be specific
# torch.utils.tensorboard.SummaryWriter is imported above
writer = SummaryWriter('runs/fashion_mnist_experiment_1')
# Write image data to TensorBoard log dir
writer.add_image('Four Fashion-MNIST Images', img_grid)
writer.flush()
# To view, start TensorBoard on the command line with:
# tensorboard --logdir=runs
# ...and open a browser tab to http://localhost:6006/
After running this code, type tensorboard --logdir=runs in a terminal, where runs is the save path, as shown below:
Then open http://localhost:6006/ to view the images, as shown below:
Visualizing the model with TensorBoard
First let's create a simple model, as follows:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 4 * 4, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 4 * 4)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

net = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
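In case the 16 * 4 * 4 in fc1 looks magic: each 5×5 convolution (no padding) shrinks the feature map by 4 pixels, and each 2×2 max-pool halves it. A quick sketch of the arithmetic for a 28×28 input:

```python
# Trace the spatial size through Net: conv (kernel 5, no padding), 2x2 pool, twice.
def conv_out(size, kernel, stride=1, padding=0):
    return (size + 2 * padding - kernel) // stride + 1

size = 28                   # FashionMNIST images are 28x28
size = conv_out(size, 5)    # after conv1: 24
size //= 2                  # after pool:  12
size = conv_out(size, 5)    # after conv2: 8
size //= 2                  # after pool:  4
print(size)  # 4 -> flatten to 16 channels * 4 * 4 = 256 features, matching fc1
```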
The dataset is still FashionMNIST. The function used to visualize the model is add_graph(); the rest is basically the same as for visualizing images.
# Again, grab a single mini-batch of images
dataiter = iter(training_loader)
images, labels = next(dataiter)
writer = SummaryWriter('runs/fashion_mnist_experiment_1')
# add_graph() will trace the sample input through your model,
# and render it as a graph.
writer.add_graph(net, images)
writer.flush()
After this code runs, refresh http://localhost:6006/ and a GRAPHS tab appears, containing the graph of the network we just defined. Let's take a quick look, as shown below:
Visualizing the loss with TensorBoard
The model here is the same as in the previous section. Let's go straight to the training code:
print(len(validation_loader))
for epoch in range(1):  # loop over the dataset multiple times
    running_loss = 0.0

    for i, data in enumerate(training_loader, 0):
        # basic training loop
        inputs, labels = data
        optimizer.zero_grad()
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        running_loss += loss.item()
        if i % 1000 == 999:    # Every 1000 mini-batches...
            print('Batch {}'.format(i + 1))
            # Check against the validation set
            running_vloss = 0.0

            net.train(False)  # Switch to eval mode for validation
            for j, vdata in enumerate(validation_loader, 0):
                vinputs, vlabels = vdata
                voutputs = net(vinputs)
                vloss = criterion(voutputs, vlabels)
                running_vloss += vloss.item()
            net.train(True)  # Switch back to training mode

            avg_loss = running_loss / 1000
            avg_vloss = running_vloss / len(validation_loader)

            # Log the running loss averaged per batch
            writer.add_scalars('Training vs. Validation Loss',
                               { 'Training' : avg_loss, 'Validation' : avg_vloss },
                               epoch * len(training_loader) + i)

            running_loss = 0.0
print('Finished Training')
writer.flush()
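For intuition about the x-axis value passed to add_scalars: with batch_size=4, FashionMNIST's 60,000 training images give 15,000 batches per epoch and its 10,000 test images give 2,500 validation batches, so logging every 1,000 batches produces 15 points per epoch. A quick sketch of the bookkeeping (assuming the standard FashionMNIST split):

```python
# Bookkeeping behind the logging loop above, assuming batch_size=4 and the
# standard FashionMNIST split (60,000 train / 10,000 test images).
batch_size = 4
batches_per_epoch = 60000 // batch_size    # len(training_loader)
validation_batches = 10000 // batch_size   # len(validation_loader)

# Global step used as the x-axis in add_scalars:
def global_step(epoch, i, n=batches_per_epoch):
    return epoch * n + i

print(batches_per_epoch, validation_batches)  # 15000 2500
print(batches_per_epoch // 1000)              # 15 logged points per epoch
print(global_step(0, 999))                    # 999 (the first logged step)
```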
The function used to log the training loss is add_scalars; the rest is much the same as before. Refresh http://localhost:6006/ and you will see the training and validation losses, as shown below:
Wrap-up
Isn't this part pretty fun? Go give it a try. Finally, I want to mention how to launch TensorBoard in Jupyter Notebook or Google Colab. It is also simple, just two steps:
%load_ext tensorboard
%tensorboard --logdir logs
Run these two lines (where logs is your log directory; for this article's examples it would be runs) and you can use TensorBoard right inside Jupyter Notebook or Google Colab. Go try it out!!! 🥝🥝🥝
If this article helped you, then 🛴🛴🛴