1. Implementation Process
This post performs multi-class classification on the classic MNIST handwritten-digit dataset. The loss function is cross-entropy, the activation function is ReLU, and the optimizer is mini-batch SGD with momentum.
The complete code is as follows:
0. Imports
```python
import torch
from torchvision import transforms, datasets
from torch.utils.data import DataLoader
import torch.nn.functional as F
import torch.optim as optim
```
1. Prepare the data
```python
batch_size = 64
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,))
])
# Training set
train_dataset = datasets.MNIST(root='G:/datasets/mnist', train=True, download=False, transform=transform)
train_loader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size)
# Test set
test_dataset = datasets.MNIST(root='G:/datasets/mnist', train=False, download=False, transform=transform)
test_loader = DataLoader(test_dataset, shuffle=False, batch_size=batch_size)
```
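As a quick sanity check (a minimal sketch, not in the original post), you can pull a single batch from the loader and confirm the shapes and normalization:

```python
# Grab one batch: images are (batch_size, 1, 28, 28) after ToTensor,
# labels are (batch_size,). After Normalize, pixel values should have
# roughly zero mean and unit standard deviation.
images, labels = next(iter(train_loader))
print(images.shape)   # torch.Size([64, 1, 28, 28])
print(labels.shape)   # torch.Size([64])
print(images.mean().item(), images.std().item())
```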
2. Design the model
```python
class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.l1 = torch.nn.Linear(784, 512)
        self.l2 = torch.nn.Linear(512, 256)
        self.l3 = torch.nn.Linear(256, 128)
        self.l4 = torch.nn.Linear(128, 64)
        self.l5 = torch.nn.Linear(64, 10)

    def forward(self, x):
        x = x.view(-1, 784)  # flatten (N, 1, 28, 28) images to (N, 784)
        x = F.relu(self.l1(x))
        x = F.relu(self.l2(x))
        x = F.relu(self.l3(x))
        x = F.relu(self.l4(x))
        return self.l5(x)  # raw logits, no softmax here

model = Net()
# Move the model to the GPU if one is available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)
```
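Before training, a dummy forward pass (a quick sketch, not part of the original code) verifies that the layer dimensions line up and shows the size of the network:

```python
# A fake batch of 4 MNIST-sized images should yield a (4, 10) score
# matrix, one logit per digit class.
dummy = torch.randn(4, 1, 28, 28).to(device)
print(model(dummy).shape)  # torch.Size([4, 10])

# Count the trainable parameters of the five linear layers: 575,050 total.
print(sum(p.numel() for p in model.parameters() if p.requires_grad))
```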
3. Construct the loss function and optimizer
```python
criterion = torch.nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
```
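`CrossEntropyLoss` combines `LogSoftmax` and `NLLLoss` internally, which is why `forward` returns raw logits from `self.l5` with no softmax. A minimal sketch of the equivalence:

```python
# CrossEntropyLoss on raw logits matches NLLLoss on log-softmax outputs.
logits = torch.randn(3, 10)        # fake scores: 3 samples, 10 classes
targets = torch.tensor([0, 4, 9])  # fake ground-truth labels
ce = torch.nn.CrossEntropyLoss()(logits, targets)
nll = torch.nn.NLLLoss()(F.log_softmax(logits, dim=1), targets)
print(torch.allclose(ce, nll))     # True
```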
4. Train and test
```python
def train(epoch):
    running_loss = 0.0
    for batch_idx, data in enumerate(train_loader, 0):
        inputs, target = data
        optimizer.zero_grad()
        # forward + backward + update
        outputs = model(inputs.to(device))
        loss = criterion(outputs, target.to(device))
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
        if batch_idx % 300 == 299:
            print('[%d,%d] loss: %.3f' % (epoch + 1, batch_idx + 1, running_loss / 300))
            running_loss = 0.0

def test():
    correct = 0
    total = 0
    with torch.no_grad():
        for data in test_loader:
            images, labels = data
            outputs = model(images.to(device))
            _, predicted = torch.max(outputs.data, dim=1)
            total += labels.size(0)
            correct += (predicted.cpu() == labels).sum().item()
    print('Accuracy on test set: %d %%' % (100 * correct / total))

for epoch in range(10):
    train(epoch)
    test()
```
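Inside `train()`, `optimizer.step()` applies the momentum update promised in the introduction. A hand-rolled sketch of the rule PyTorch uses for SGD with momentum (assuming the default dampening of 0; for illustration only, training should still use `optim.SGD` as above):

```python
# v <- mu * v + g ; p <- p - lr * v   (v starts at zero, so the first
# step reduces to plain SGD: p <- p - lr * g)
lr, mu = 0.01, 0.5
velocity = {}

def sgd_momentum_step(params):
    for p in params:
        if p.grad is None:
            continue
        v = mu * velocity.get(p, torch.zeros_like(p)) + p.grad
        velocity[p] = v
        with torch.no_grad():
            p -= lr * v
```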
The output is as follows:
```
[1,300] loss: 2.166
[1,600] loss: 0.797
[1,900] loss: 0.405
Accuracy on test set: 90 %
[2,300] loss: 0.303
[2,600] loss: 0.252
[2,900] loss: 0.218
Accuracy on test set: 94 %
[3,300] loss: 0.178
[3,600] loss: 0.168
[3,900] loss: 0.142
Accuracy on test set: 95 %
[4,300] loss: 0.129
[4,600] loss: 0.119
[4,900] loss: 0.110
Accuracy on test set: 96 %
[5,300] loss: 0.094
[5,600] loss: 0.092
[5,900] loss: 0.091
Accuracy on test set: 96 %
[6,300] loss: 0.077
[6,600] loss: 0.070
[6,900] loss: 0.075
Accuracy on test set: 97 %
[7,300] loss: 0.061
[7,600] loss: 0.058
[7,900] loss: 0.058
Accuracy on test set: 97 %
[8,300] loss: 0.043
[8,600] loss: 0.051
[8,900] loss: 0.050
Accuracy on test set: 97 %
[9,300] loss: 0.041
[9,600] loss: 0.038
[9,900] loss: 0.043
Accuracy on test set: 97 %
[10,300] loss: 0.030
[10,600] loss: 0.032
[10,900] loss: 0.033
Accuracy on test set: 97 %
```