


Implementing a Simple Neural Network with PyTorch (Python)

Author: 那小子真混蛋 · Updated: 2022-11-06

1. Basics

(1) Building a network from PyTorch's built-in layers

import torch
from torch import nn
from torch.nn import functional as F

# Define an MLP network
class MLP(nn.Module):
    '''
    A custom network mainly needs to define __init__() and forward().
    '''
    def __init__(self):
        '''
        Declare the layers here (e.g. nn.Linear, nn.Conv2d, ...);
        activation functions need not be declared here.
        '''
        super().__init__()  # call the parent (Module) initializer
        self.hidden = nn.Linear(5, 10)
        self.out = nn.Linear(10, 2)

    def forward(self, x):
        '''
        Define the forward pass here, i.e. the order in which the layers
        declared in __init__() are connected. Activation functions go here;
        they add the nonlinearity that gives the network its fitting power.
        '''
        return self.out(F.relu(self.hidden(x)))

The three-layer perceptron above can already solve a simple real-world problem: given 5 input features, it outputs a score for each of two classes, a straightforward binary-classification setup. (Strictly speaking the outputs are raw logits rather than 0-1 probabilities; see the softmax sketch in (2) below.)

When building simple networks, nn.Sequential(layer1, layer2, ..., layerN) does it in one line:

import torch
from torch import nn
from torch.nn import functional as F

net = nn.Sequential(nn.Linear(5,10),nn.ReLU(),nn.Linear(10,2))

However, nn.Sequential is limited to simple linear stacks of layers; subclassing nn.Module lets you build more complex structures (see the sketch below).

The MLP defined in (1) is roughly: 5 inputs -> fully connected -> ReLU() -> output.
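As a quick illustration of something nn.Sequential alone cannot express, here is a hypothetical module (not from the original article) whose forward() adds a skip connection:

import torch
from torch import nn
from torch.nn import functional as F

class SkipMLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(5, 5)
        self.out = nn.Linear(5, 2)

    def forward(self, x):
        h = F.relu(self.hidden(x))
        return self.out(h + x)  # skip connection: the input x is reused mid-graph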

(2) Using the network

import torch
from torch import nn
from torch.nn import functional as F

net = MLP()
x = torch.randn((15, 5))  # 15 samples, 5 input features each
out = net(x)
# you could also call forward directly: "out = net.forward(x)",
# though net(x) is preferred since it also runs any registered hooks
print(out)
# print(out.shape)
tensor([[-0.0760, -0.1026],
        [-0.3277, -0.2332],
        [-0.0314, -0.1921],
        [ 0.0131, -0.1473],
        [-0.0650, -0.2310],
        [ 0.3009, -0.5510],
        [ 0.1491, -0.0928],
        [-0.1438, -0.1304],
        [-0.1945, -0.1944],
        [ 0.1088, -0.2249],
        [ 0.0016, -0.2334],
        [ 0.1401, -0.3709],
        [-0.1864, -0.1764],
        [ 0.0775, -0.0160],
        [ 0.0150, -0.3198]], grad_fn=<AddmmBackward>)
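As noted above, out holds raw logits. A minimal sketch for turning them into the 0-1 class probabilities mentioned earlier, using softmax:

from torch.nn import functional as F

probs = F.softmax(out, dim=1)  # normalize each row of logits into probabilities
print(probs.sum(dim=1))        # every row sums to 1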

2. Advanced

(1) Building more complex network structures

a. Nesting: Sequential and custom modules inside each other

import torch
from torch import nn
from torch.nn import functional as F

class MLP2(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(5, 10), nn.ReLU(), nn.Linear(10, 5))
        self.out = nn.Linear(5, 4)

    def forward(self, x):
        return self.out(F.relu(self.net(x)))

net2 = nn.Sequential(MLP2(), nn.ReLU(), nn.Linear(4, 2))
net2.eval()
# eval() switches the module to evaluation mode and returns it, so in a
# REPL/notebook its structure is displayed, the same text as print(net2):
Sequential(
  (0): MLP2(
    (net): Sequential(
      (0): Linear(in_features=5, out_features=10, bias=True)
      (1): ReLU()
      (2): Linear(in_features=10, out_features=5, bias=True)
    )
    (out): Linear(in_features=5, out_features=4, bias=True)
  )
  (1): ReLU()
  (2): Linear(in_features=4, out_features=2, bias=True)
)
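The printed structure maps directly onto indexing and attribute access; a quick sketch:

print(net2[0].net[0])          # the first Linear inside MLP2: in_features=5, out_features=10
print(net2[0].out.bias.shape)  # torch.Size([4])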

(2) Parameters

a. Accessing weights and biases

# Access the weight and bias of a single layer
print(net2[2].weight)  # weight is a Parameter; use .data to get the raw tensor
print(net2[2].bias.data)

# Print all (name, parameter) pairs of a layer
print(*[(name, param) for name, param in net2[2].named_parameters()])
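named_parameters() also works on the top-level module, where each name reflects the nesting; a short sketch:

for name, param in net2.named_parameters():
    print(name, param.shape)  # e.g. "0.net.0.weight" is the first Linear inside MLP2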

b. Sharing parameters between layers

shared = nn.Linear(8, 8)

# the same layer object appears twice, so positions 2 and 4 share one set of parameters
net = nn.Sequential(nn.Linear(5, 8), nn.ReLU(), shared, nn.ReLU(), shared)
print(net[2].weight.data[0])
net[2].weight.data[0][0] = 100
print(net[2].weight.data[0][0])
print(net[2].weight.data[0] == net[4].weight.data[0])  # all True: same underlying tensor
net.eval()
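Because both positions hold the same nn.Linear object, there is only one underlying Parameter, and backward() accumulates gradients from both uses into it; a minimal check, assuming the net defined above:

x = torch.randn(3, 5)
net(x).sum().backward()
print(net[2].weight.grad is net[4].weight.grad)  # True: one shared gradient tensor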

c. Parameter initialization

def init_Linear(m):
    if type(m) == nn.Linear:
        nn.init.normal_(m.weight, mean=0, std=0.01)  # weights ~ N(0, 0.01)
        nn.init.zeros_(m.bias)                       # biases set to 0

def init_const(m):
    if type(m) == nn.Linear:
        nn.init.constant_(m.weight, 42)  # set every weight to 42

def my_init(m):
    if type(m) == nn.Linear:
        '''
        Custom initialization of weight and bias goes here.
        '''
        pass

# How to apply an initializer?
net2.apply(init_const)  # walks net2 recursively, running init_const on every submodule
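As one hypothetical way to fill in my_init (the scheme below is only an illustration, not from the original article):

def my_init(m):
    if type(m) == nn.Linear:
        nn.init.uniform_(m.weight, -0.1, 0.1)  # e.g. weights uniform in [-0.1, 0.1]
        nn.init.zeros_(m.bias)

net2.apply(my_init)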

(3) Custom layers (__init__() may or may not take input/output dimensions)

a. A custom layer without input/output dimensions (input and output shapes match: x values in, x values out, with the same operation applied to every element, much like an activation function)

b. A custom layer with input/output dimensions (i.e. with its own parameters)

import torch
from torch import nn
from torch.nn import functional as F

# a: parameter-free layer that centers its input
class decentralized(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        return x - x.mean()

# b: a hand-rolled linear layer with learnable weight and bias
class my_Linear(nn.Module):
    def __init__(self, dim_in, dim_out):
        super().__init__()
        # weight is stored as (dim_in, dim_out) so that x @ weight works directly
        # (note: nn.Linear stores its weight transposed, as (dim_out, dim_in))
        self.weight = nn.Parameter(torch.ones(dim_in, dim_out))
        self.bias = nn.Parameter(torch.randn(dim_out))

    def forward(self, x):
        # use the Parameters directly (not .data) so autograd can track them
        return F.relu(torch.matmul(x, self.weight) + self.bias)

tmp = my_Linear(5, 3)
print(tmp.weight)
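The custom layers compose with the built-in ones just like any nn.Module; a quick usage sketch:

net3 = nn.Sequential(my_Linear(5, 3), decentralized())
x = torch.randn(4, 5)
print(net3(x).mean())  # close to 0, since the centering layer subtracts the mean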

(4) Saving and loading

# Save and load any torch tensor

x = torch.randn((20, 20))
torch.save(x, 'X')   # save to file 'X'
y = torch.load('X')  # load it back
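torch.save also accepts containers of tensors (lists, dicts), which is convenient for checkpoints; a brief sketch:

torch.save({'x': x, 'y': y}, 'XY')  # a dict of tensors
data = torch.load('XY')
print(torch.equal(data['x'], x))    # True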

# Save and restore a network's parameters

torch.save(net2.state_dict(), 'Past_parameters')  # stores all parameters (not the architecture)
clone = nn.Sequential(MLP2(), nn.ReLU(), nn.Linear(4, 2))  # rebuild the same structure in code
clone.load_state_dict(torch.load('Past_parameters'))
clone.eval()
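To confirm the parameters round-tripped correctly, compare the original and the clone on the same input; a minimal check:

x = torch.randn(4, 5)
print(torch.equal(net2(x), clone(x)))  # True: identical parameters give identical outputs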

Original article: https://www.cnblogs.com/BGM-WCJ/p/16695133.html
