Saturday, July 6, 2019

[Stocks] Training a Multi-Layer LSTM Model to Predict Stock Prices

Using a multi-layer long short-term memory (LSTM) model to predict stock price trends.

Training data:
Features: m samples, each a window of look_back consecutive days of stock prices
Label: the stock price on the day after the last day of each window
  1. Training: we train the model to read look_back days of prices and predict the price on the following day.
  2. Testing: at test time we give the model only the first look_back days of prices and let it predict the next predict_length days on its own. After each prediction we append the predicted price to the previous input window and trim the window back to look_back days; we call this updating the input window. As a result, every window after the first contains predicted values. Comparing the predicted series against the actual prices lets us check the model's reliability and error. (A minimal sketch of this update step follows below.)
    Because stock prices are time-series data, we build a multi-layer LSTM model.
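The window update in step 2 boils down to two lines; here is a minimal sketch (the full helper, predict_sequences_multiple, appears later in this post):

# One walk-forward step. `window` (a 1-D array of the last look_back
# normalized prices) and `model` are placeholders for objects built below;
# the reshape must match the model's (samples, timesteps, features) input.
next_price = model.predict(window.reshape(1, 1, look_back))[0, 0]
window = np.append(window, next_price)[-look_back:]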

Import libraries


import os
import sqlite3
import time  # helper library
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from numpy import newaxis
from subprocess import check_output
from keras.layers.core import Activation, Dropout, Flatten
from keras.layers import Dense
from keras.layers.recurrent import LSTM
from keras.models import Sequential
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

Load the database
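The code that loads stock_price is not shown in the original post. A minimal sketch, assuming the daily closing prices sit in a SQLite file (the file name stock.db and the table/column names stock, date, and close are hypothetical):

# Hypothetical loading step (omitted in the original post): read daily closing
# prices into a (n_days, 1) float array, the shape the code below expects.
conn = sqlite3.connect('stock.db')  # hypothetical file name
df_prices = pd.read_sql_query('SELECT close FROM stock ORDER BY date', conn)  # hypothetical schema
conn.close()
stock_price = df_prices.values.astype('float32')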


# Check how many days of data we have
len(stock_price)

1221

# Normalize the prices to the range (0, 1)
scaler = MinMaxScaler(feature_range=(0, 1))
stock_price = scaler.fit_transform(stock_price)
plt.plot(stock_price)
plt.show()


# Prepare the training and test data: use the first 80% of the series for
# training and the last 20% for testing
train_size = int(len(stock_price) * 0.80)
test_size = len(stock_price) - train_size
train, test = stock_price[0:train_size,:], stock_price[train_size:len(stock_price),:]
print(len(train), len(test))  # lengths of the training and test sets

976 245

# Build the training samples: input is the price series; output is dataX,
# m windows of look_back consecutive days with shape (m, look_back), and
# dataY, the m next-day prices with shape (m,)
def create_dataset(dataset, look_back=1):
    dataX, dataY = [], []
    for i in np.arange(len(dataset) - look_back - 1):
        a = dataset[i:(i + look_back), 0]   # look_back consecutive prices
        dataX.append(a)
        b = dataset[i + look_back, 0]       # the price on the following day
        dataY.append(b)
    return np.array(dataX), np.array(dataY)
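A quick toy run makes the output shapes concrete (this check is an addition, not part of the original post):

# With look_back=3 on the series 0..9, each X row holds 3 consecutive values
# and y is the value that follows the window.
toy = np.arange(10, dtype=float).reshape(-1, 1)
X_toy, y_toy = create_dataset(toy, look_back=3)
print(X_toy[0], y_toy[0])        # [0. 1. 2.] 3.0
print(X_toy.shape, y_toy.shape)  # (6, 3) (6,)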

# Apply the function above to the train and test series to get
# (trainX, trainY) and (testX, testY)
look_back = 30
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)

# Insert a middle axis into the 2-D data to hold the LSTM time dimension:
# shape = (samples, timesteps, features)
trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1]))
testX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1]))
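Note that this layout feeds the LSTM a single timestep holding look_back features, so the recurrence never unrolls across the 30 days. An alternative layout (not used in this post) steps through the window one day at a time:

# Alternative layout (not used here): look_back timesteps x 1 feature; the
# model's input_shape would have to change to match.
trainX_alt = trainX.reshape(trainX.shape[0], look_back, 1)
testX_alt = testX.reshape(testX.shape[0], look_back, 1)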

# Check the shape of trainX
trainX.shape

(945, 1, 30)

# Build the multi-layer LSTM; a final Dense layer outputs the one-dimensional prediction
model = Sequential()

model.add(LSTM(50, input_shape=(None, trainX.shape[-1]), return_sequences=True))
model.add(Dropout(0.2))

model.add(LSTM(32, return_sequences=True))
model.add(Dropout(0.2))

model.add(LSTM(32, return_sequences=True))
model.add(Dropout(0.2))

model.add(LSTM(32, return_sequences=True))
model.add(Dropout(0.2))

model.add(LSTM(32, return_sequences=True))
model.add(Dropout(0.2))

model.add(LSTM(32, return_sequences=False))
model.add(Dropout(0.2))

model.add(Dense(1))
model.add(Activation('linear'))

start = time.time()
model.compile(loss='mse', optimizer='rmsprop', metrics=['mae'])
print ('compilation time : ', time.time() - start)

compilation time :  0.035308122634887695

# Inspect the model summary
model.summary()

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
lstm_1 (LSTM)                (None, None, 50)          16200     
_________________________________________________________________
dropout_1 (Dropout)          (None, None, 50)          0         
_________________________________________________________________
lstm_2 (LSTM)                (None, None, 32)          10624     
_________________________________________________________________
dropout_2 (Dropout)          (None, None, 32)          0         
_________________________________________________________________
lstm_3 (LSTM)                (None, None, 32)          8320      
_________________________________________________________________
dropout_3 (Dropout)          (None, None, 32)          0         
_________________________________________________________________
lstm_4 (LSTM)                (None, None, 32)          8320      
_________________________________________________________________
dropout_4 (Dropout)          (None, None, 32)          0         
_________________________________________________________________
lstm_5 (LSTM)                (None, None, 32)          8320      
_________________________________________________________________
dropout_5 (Dropout)          (None, None, 32)          0         
_________________________________________________________________
lstm_6 (LSTM)                (None, 32)                8320      
_________________________________________________________________
dropout_6 (Dropout)          (None, 32)                0         
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 33        
_________________________________________________________________
activation_1 (Activation)    (None, 1)                 0         
=================================================================
Total params: 60,137
Trainable params: 60,137
Non-trainable params: 0
_________________________________________________________________

# Start training: batch size 128; with the 0.2 validation split, 756 of the
# 945 samples are used for training (6 batches per epoch), for 20 epochs
train_history = model.fit(trainX, trainY, batch_size=128, epochs=20, validation_split=0.2)

Train on 756 samples, validate on 189 samples
Epoch 1/20
756/756 [==============================] - 9s 12ms/step - loss: 0.1698 - mean_absolute_error: 0.3723 - val_loss: 0.6218 - val_mean_absolute_error: 0.7865
Epoch 2/20
756/756 [==============================] - 0s 616us/step - loss: 0.1426 - mean_absolute_error: 0.3350 - val_loss: 0.5505 - val_mean_absolute_error: 0.7398
Epoch 3/20
756/756 [==============================] - 1s 763us/step - loss: 0.1098 - mean_absolute_error: 0.2854 - val_loss: 0.4126 - val_mean_absolute_error: 0.6398
Epoch 4/20
756/756 [==============================] - 1s 1ms/step - loss: 0.0585 - mean_absolute_error: 0.1964 - val_loss: 0.1762 - val_mean_absolute_error: 0.4159
Epoch 5/20
756/756 [==============================] - 1s 875us/step - loss: 0.0196 - mean_absolute_error: 0.1067 - val_loss: 0.0572 - val_mean_absolute_error: 0.2326
Epoch 6/20
756/756 [==============================] - 1s 869us/step - loss: 0.0139 - mean_absolute_error: 0.0926 - val_loss: 0.0372 - val_mean_absolute_error: 0.1846
Epoch 7/20
756/756 [==============================] - 1s 740us/step - loss: 0.0140 - mean_absolute_error: 0.0913 - val_loss: 0.0308 - val_mean_absolute_error: 0.1665
Epoch 8/20
756/756 [==============================] - 1s 990us/step - loss: 0.0119 - mean_absolute_error: 0.0838 - val_loss: 0.0257 - val_mean_absolute_error: 0.1505
Epoch 9/20
756/756 [==============================] - 1s 828us/step - loss: 0.0115 - mean_absolute_error: 0.0815 - val_loss: 0.0050 - val_mean_absolute_error: 0.0579
Epoch 10/20
756/756 [==============================] - 1s 909us/step - loss: 0.0096 - mean_absolute_error: 0.0777 - val_loss: 0.0044 - val_mean_absolute_error: 0.0514
Epoch 11/20
756/756 [==============================] - 0s 632us/step - loss: 0.0103 - mean_absolute_error: 0.0783 - val_loss: 0.0048 - val_mean_absolute_error: 0.0571
Epoch 12/20
756/756 [==============================] - 1s 731us/step - loss: 0.0095 - mean_absolute_error: 0.0765 - val_loss: 0.0048 - val_mean_absolute_error: 0.0569
Epoch 13/20
756/756 [==============================] - 0s 635us/step - loss: 0.0095 - mean_absolute_error: 0.0753 - val_loss: 0.0074 - val_mean_absolute_error: 0.0726
Epoch 14/20
756/756 [==============================] - 0s 646us/step - loss: 0.0099 - mean_absolute_error: 0.0769 - val_loss: 0.0043 - val_mean_absolute_error: 0.0539
Epoch 15/20
756/756 [==============================] - 0s 636us/step - loss: 0.0105 - mean_absolute_error: 0.0786 - val_loss: 0.0035 - val_mean_absolute_error: 0.0457
Epoch 16/20
756/756 [==============================] - 0s 654us/step - loss: 0.0097 - mean_absolute_error: 0.0758 - val_loss: 0.0054 - val_mean_absolute_error: 0.0611
Epoch 17/20
756/756 [==============================] - 0s 641us/step - loss: 0.0088 - mean_absolute_error: 0.0734 - val_loss: 0.0030 - val_mean_absolute_error: 0.0434
Epoch 18/20
756/756 [==============================] - 1s 669us/step - loss: 0.0084 - mean_absolute_error: 0.0713 - val_loss: 0.0038 - val_mean_absolute_error: 0.0506
Epoch 19/20
756/756 [==============================] - 0s 648us/step - loss: 0.0080 - mean_absolute_error: 0.0708 - val_loss: 0.0032 - val_mean_absolute_error: 0.0441
Epoch 20/20
756/756 [==============================] - 1s 672us/step - loss: 0.0083 - mean_absolute_error: 0.0713 - val_loss: 0.0037 - val_mean_absolute_error: 0.0472

# Function to visualize the training history
def show_train_history(train_history, train, validation):
    plt.plot(train_history.history[train])
    plt.plot(train_history.history[validation])
    plt.title('Training history')
    plt.ylabel(train)
    plt.yscale('log')
    plt.xlabel('Epoch')
    plt.legend(['train', 'validation'], loc='upper left')
    plt.show()

# Visualize the training history
show_train_history(train_history, 'loss', 'val_loss')
show_train_history(train_history, 'mean_absolute_error', 'val_mean_absolute_error')


The loss converges after about 10 epochs, with MAE ≈ 0.06 on the normalized scale (roughly a 6% error).
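Since the validation loss flattens out well before epoch 20, one option (not in the original post) is to let Keras stop training automatically with the EarlyStopping callback:

from keras.callbacks import EarlyStopping

# Optional: halt training once val_loss has not improved for 3 consecutive
# epochs, instead of always running all 20.
early_stop = EarlyStopping(monitor='val_loss', patience=3)
train_history = model.fit(trainX, trainY, batch_size=128, epochs=20,
                          validation_split=0.2, callbacks=[early_stop])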

# Prediction helper: starting from one look_back-day window, predict the next
# `length` days by feeding each prediction back into the window
def predict_sequences_multiple(model, firstValue, length):
    prediction_seqs = []
    curr_frame = firstValue
    for i in range(length):
        # Predict the next (normalized) price from the current window
        a = model.predict(curr_frame[newaxis, :])[0, 0]
        # Append the prediction and keep only the most recent look_back values
        curr_frame = np.insert(curr_frame, look_back, a)[-look_back:][newaxis, :]
        prediction_seqs.append(a)
    return prediction_seqs


# Map predictions back to the original price scale (invert the normalization)
def inverse_transform(testY, predictions):
    predictions_origins = scaler.inverse_transform(np.array(predictions).reshape(-1, 1))
    answer = scaler.inverse_transform(testY.reshape(-1, 1))
    return answer, predictions_origins

# Plot the predicted vs. actual price trends, with the percentage error below
def stock_pred_plot(answer, predictions_origins):
    error = (predictions_origins - answer) / answer * 100
    fig, axis = plt.subplots(2, 1, figsize=(5, 4))
    axis[0].plot(predictions_origins, label='predict')
    axis[0].plot(answer, label='real')
    axis[0].set_ylabel('price')
    axis[0].legend()
    axis[1].set_xlabel('Day')
    axis[1].set_ylabel('error(%)')
    axis[1].plot(error, label='error')
    axis[1].legend()
    plt.show()

# Compute the percentage error for each predicted day
def error_history(answer, predictions_origins):
    result = (predictions_origins - answer) / answer * 100
    return result[:, 0]

predict_length = 30  # number of days to predict
# Test 20 windows (every 5th starting index from 0 to 95)
for i in np.arange(0, 100, 5):
    predictions = predict_sequences_multiple(model, testX[i], predict_length)
    answer = testY[i:(i + predict_length)]
    ans, pred = inverse_transform(answer, predictions)
    stock_pred_plot(ans, pred)
# Compute the errors across all 20 test runs
error_history_all = []
for i in np.arange(0, 100, 5):
    predictions = predict_sequences_multiple(model, testX[i], predict_length)
    answer = testY[i:(i + predict_length)]
    ans, pred = inverse_transform(answer, predictions)
    error_history_all.append(error_history(ans, pred))
all_error = np.array(error_history_all)
mean = all_error.mean(axis=0)  # mean error per predicted day
std = all_error.std(axis=0)    # spread of the error per predicted day
df = pd.DataFrame(all_error)
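Before plotting, a quick numeric summary shows how the error grows with the forecast horizon (this printout is an addition, not part of the original post):

# Mean and spread of the percentage error for each predicted day,
# across the 20 test runs (rows of all_error = runs, columns = days).
horizon_summary = pd.DataFrame({'mean_error_pct': mean, 'std_error_pct': std})
print(horizon_summary.head(10))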

# Plot the test errors: each run's error curve on top, and the per-day mean
# with one-standard-deviation error bars below
fig, axis = plt.subplots(2, 1, figsize=(10, 5))
df.T.plot(ax=axis[0])
axis[0].set_ylabel('Error(%)')
x = list(range(len(df.T)))
axis[1].errorbar(x, mean, std, linestyle='None', marker='^')
axis[1].set_xlabel('Day')
axis[1].set_ylabel('Error(%)')
plt.show()







Long-Term Dollar-Cost-Averaging Returns of 3x vs. 1x Leveraged ETFs

  Below is an analysis of 3x and 1x leveraged ETFs on Chinese and US stocks and bonds. The 3x funds fall far more than the 1x funds during downturns, and because those drawdowns are so severe, over time they can wipe out earlier gains. For markets that trend upward over the long run, however, such as US tech stocks, where rising periods far outlast falling ones, the long-term return of holding TQQQ would...