I. Introduction
In the previous article, 深入理解PyTorch中LSTM的輸入和輸出(從input輸入到Linear輸出), I explained in detail how to build an LSTM model with PyTorch. This article builds on that and uses an LSTM model for time series forecasting.
Articles in this series:
PyTorch搭建LSTM實現多變量多步長時序負荷預測
PyTorch搭建LSTM實現多變量時序負荷預測
PyTorch深度學習LSTM從input輸入到Linear輸出
PyTorch搭建雙向LSTM實現時間序列負荷預測
II. Data Processing
The dataset is the electric load of a certain region over a period of time; besides the load itself, it also records temperature, humidity, and other variables.
For now, this article ignores those extra variables and uses only the historical load to predict the future load.
Specifically, the load at the previous 24 time steps is used to predict the load at the next time step. For multivariate forecasting, see: PyTorch搭建LSTM實現多變量時間序列預測(負荷預測).
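A toy sketch of this sliding window, shrunk to a width of 3 for readability (the actual code below uses 24):

# toy example: each sample pairs a 3-step window with the next value
series = [10, 12, 15, 14, 13, 16]
width = 3
samples = [(series[i:i + width], series[i + width]) for i in range(len(series) - width)]
# samples[0] == ([10, 12, 15], 14), samples[1] == ([12, 15, 14], 13), ...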
import numpy as np
import pandas as pd
import torch
from torch import nn
from torch.utils.data import Dataset, DataLoader

def load_data(file_name):
    global MAX, MIN
    df = pd.read_csv('data/new_data/' + file_name, encoding='gbk')
    columns = df.columns
    df.fillna(df.mean(), inplace=True)
    # record the extremes of the load column for later denormalization
    MAX = np.max(df[columns[1]])
    MIN = np.min(df[columns[1]])
    # min-max normalize the load column to [0, 1]
    df[columns[1]] = (df[columns[1]] - MIN) / (MAX - MIN)
    return df
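Because MAX and MIN are kept as globals, the normalization can be undone later when evaluating predictions, which the test function below does. A minimal round-trip illustration (the values here are made up, not from the dataset):

# round trip of the min-max scaling, with made-up values
MAX, MIN = 980.0, 120.0
raw = 550.0
norm = (raw - MIN) / (MAX - MIN)     # 0.5
restored = norm * (MAX - MIN) + MIN  # 550.0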
class MyDataset(Dataset):
    def __init__(self, data):
        self.data = data

    def __getitem__(self, item):
        return self.data[item]

    def __len__(self):
        return len(self.data)
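MyDataset is a thin wrapper: it simply indexes into the list of (sequence, label) pairs so that DataLoader can batch them. A quick sanity check with toy tensors (not the real data):

# two toy samples shaped like the real ones: (24,) input, (1,) label
samples = [(torch.zeros(24), torch.zeros(1)), (torch.ones(24), torch.ones(1))]
ds = MyDataset(samples)
print(len(ds))         # 2
print(ds[1][0].shape)  # torch.Size([24])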
def nn_seq(file_name, B):
    print('processing data...')
    data = load_data(file_name)
    load = data[data.columns[1]]
    load = load.tolist()
    load = torch.FloatTensor(load).view(-1)
    data = data.values.tolist()
    seq = []
    for i in range(len(data) - 24):
        train_seq = []
        train_label = []
        # the previous 24 loads form the input sequence
        for j in range(i, i + 24):
            train_seq.append(load[j])
        # the load at the next time step is the label
        train_label.append(load[i + 24])
        train_seq = torch.FloatTensor(train_seq).view(-1)
        train_label = torch.FloatTensor(train_label).view(-1)
        seq.append((train_seq, train_label))
    # print(seq[:5])

    # 70/30 train/test split, preserving temporal order
    Dtr = seq[0:int(len(seq) * 0.7)]
    Dte = seq[int(len(seq) * 0.7):len(seq)]

    # truncate both sets to a multiple of the batch size
    train_len = int(len(Dtr) / B) * B
    test_len = int(len(Dte) / B) * B
    Dtr, Dte = Dtr[:train_len], Dte[:test_len]

    train = MyDataset(Dtr)
    test = MyDataset(Dte)

    Dtr = DataLoader(dataset=train, batch_size=B, shuffle=False, num_workers=0)
    Dte = DataLoader(dataset=test, batch_size=B, shuffle=False, num_workers=0)

    return Dtr, Dte
The code above wraps the samples in a DataLoader, producing a training set Dtr and a test set Dte, both batched with batch_size=B.
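To confirm what the loaders yield, you can peek at a single batch; with B=5 and the 24-step window, each batch should be a (5, 24) sequence tensor paired with a (5, 1) label tensor. A sketch ('data.csv' is a placeholder for your own file under data/new_data/):

# peek at one training batch (file name is hypothetical)
Dtr, Dte = nn_seq(file_name='data.csv', B=5)
seq, label = next(iter(Dtr))
print(seq.shape)    # torch.Size([5, 24])
print(label.shape)  # torch.Size([5, 1])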
III. LSTM Model
The model is the one from 深入理解PyTorch中LSTM的輸入和輸出(從input輸入到Linear輸出):
class LSTM(nn.Module):
    def __init__(self, input_size, hidden_size, num_layers, output_size, batch_size):
        super().__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.output_size = output_size
        self.num_directions = 1  # unidirectional LSTM
        self.batch_size = batch_size
        self.lstm = nn.LSTM(self.input_size, self.hidden_size, self.num_layers, batch_first=True)
        self.linear = nn.Linear(self.hidden_size, self.output_size)

    def forward(self, input_seq):
        # random initial hidden and cell states
        h_0 = torch.randn(self.num_directions * self.num_layers, self.batch_size, self.hidden_size).to(device)
        c_0 = torch.randn(self.num_directions * self.num_layers, self.batch_size, self.hidden_size).to(device)
        seq_len = input_seq.shape[1]  # (5, 24)
        # input(batch_size, seq_len, input_size)
        input_seq = input_seq.view(self.batch_size, seq_len, 1)  # (5, 24, 1)
        # output(batch_size, seq_len, num_directions * hidden_size)
        output, _ = self.lstm(input_seq, (h_0, c_0))  # output(5, 24, 64)
        output = output.contiguous().view(self.batch_size * seq_len, self.hidden_size)  # (5 * 24, 64)
        pred = self.linear(output)  # pred(120, 1)
        pred = pred.view(self.batch_size, seq_len, -1)  # (5, 24, 1)
        pred = pred[:, -1, :]  # (5, 1)
        return pred
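Note that forward draws fresh random h_0 and c_0 on every call; initializing them to zeros is also common. To verify the shapes in the comments, you can push a dummy batch through the model (a sketch; device is the global the training script defines):

# dummy forward pass: a batch of 5 sequences, 24 steps each
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
m = LSTM(input_size=1, hidden_size=64, num_layers=5, output_size=1, batch_size=5).to(device)
dummy = torch.randn(5, 24).to(device)
print(m(dummy).shape)  # torch.Size([5, 1])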
IV. Training
def LSTM_train(name, b):
    Dtr, Dte = nn_seq(file_name=name, B=b)
    input_size, hidden_size, num_layers, output_size = 1, 64, 5, 1
    model = LSTM(input_size, hidden_size, num_layers, output_size, batch_size=b).to(device)
    loss_function = nn.MSELoss().to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
    # training loop
    epochs = 15
    for i in range(epochs):
        cnt = 0
        print('current epoch:', i)
        for (seq, label) in Dtr:
            cnt += 1
            seq = seq.to(device)
            label = label.to(device)
            y_pred = model(seq)
            loss = loss_function(y_pred, label)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            if cnt % 100 == 0:
                print('epoch', i, ':', cnt - 100, '~', cnt, loss.item())
    # save model and optimizer states after training
    state = {'model': model.state_dict(), 'optimizer': optimizer.state_dict()}
    torch.save(state, LSTM_PATH)
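LSTM_train depends on two globals defined elsewhere in the script: device and LSTM_PATH (the checkpoint path). A minimal setup sketch (both values here are placeholders, not the repo's actual ones):

# assumed globals plus an example invocation (path and file name are hypothetical)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
LSTM_PATH = 'model/LSTM.pkl'
LSTM_train('data.csv', b=5)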
The model was trained for 15 epochs.
V. Testing
from itertools import chain
import matplotlib.pyplot as plt
from scipy.interpolate import make_interp_spline

def test(name, b):
    global MAX, MIN
    Dtr, Dte = nn_seq(file_name=name, B=b)
    pred = []
    y = []
    print('loading model...')
    input_size, hidden_size, num_layers, output_size = 1, 64, 5, 1
    model = LSTM(input_size, hidden_size, num_layers, output_size, batch_size=b).to(device)
    model.load_state_dict(torch.load(LSTM_PATH)['model'])
    model.eval()
    print('predicting...')
    for (seq, target) in Dte:
        target = list(chain.from_iterable(target.data.tolist()))
        y.extend(target)
        seq = seq.to(device)
        seq_len = seq.shape[1]
        seq = seq.view(model.batch_size, seq_len, 1)  # (5, 24, 1)
        with torch.no_grad():
            y_pred = model(seq)
            y_pred = list(chain.from_iterable(y_pred.data.tolist()))
            pred.extend(y_pred)

    # undo the min-max normalization before computing the error
    y, pred = np.array(y), np.array(pred)
    y = (MAX - MIN) * y + MIN
    pred = (MAX - MIN) * pred + MIN
    print('accuracy:', get_mape(y, pred))

    # plot the first 150 true and predicted values, spline-smoothed
    x = [i for i in range(1, 151)]
    x_smooth = np.linspace(np.min(x), np.max(x), 600)
    y_smooth = make_interp_spline(x, y[0:150])(x_smooth)
    plt.plot(x_smooth, y_smooth, c='green', marker='*', ms=1, alpha=0.75, label='true')
    y_smooth = make_interp_spline(x, pred[0:150])(x_smooth)
    plt.plot(x_smooth, y_smooth, c='red', marker='o', ms=1, alpha=0.75, label='pred')
    plt.grid(axis='y')
    plt.legend()
    plt.show()
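The script calls get_mape, a helper defined in the repo rather than shown above; a minimal sketch of a standard MAPE implementation:

def get_mape(y_true, y_pred):
    # mean absolute percentage error, in percent
    y_true, y_pred = np.array(y_true), np.array(y_pred)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100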
The MAPE comes out to 6.07%.
VI. Source Code and Data
The source code and data are available on GitHub: LSTM-Load-Forecasting.
Original article: https://blog.csdn.net/Cyril_KI/article/details/122569775