# Using Multivariate and Univariate LSTMs to Predict the Stock Market

Use case: stock markets can be highly volatile and are generally difficult to predict. The prediction model developed in this post only serves to illustrate a use case for `Time Series` models. One should not assume that a simple neural network can fully map the complexity of price development.

## Multivariate vs. Univariate Time Series

Broadly, time series models can be distinguished by whether they are one-dimensional (univariate) or multidimensional (multivariate).

• Univariate Time Series

This approach focuses on a single dependent variable. The basic assumption behind univariate prediction is that the value of a time series at time step t is closely related to the values at the previous time steps t-1, t-2, t-3, and so on.

Univariate models are easier to develop than multivariate models. The dependent variable in stock market forecasting is usually the closing or opening price of a financial asset. A forecasting model that is trained solely on the basis of price developments attempts to extrapolate future prices from past prices alone.
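To make this concrete, the sliding-window idea behind univariate forecasting can be sketched in a few lines of numpy. This is a minimal illustration, not the post's actual preprocessing code; the function name and the stand-in series are my own:

```python
import numpy as np

def make_univariate_windows(series, time_steps):
    """Slice a 1-D series into (samples, time_steps) inputs and next-step targets."""
    X, y = [], []
    for i in range(len(series) - time_steps):
        X.append(series[i:i + time_steps])   # values at t-n ... t-1
        y.append(series[i + time_steps])     # value at t
    return np.array(X), np.array(y)

close = np.arange(10, dtype=float)           # stand-in for a closing-price series
X, y = make_univariate_windows(close, time_steps=3)
print(X.shape, y.shape)                      # (7, 3) (7,)
```

Each row of `X` holds the previous `time_steps` prices, and `y` holds the price that follows them, which is exactly the t-1, t-2, t-3 assumption described above.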

• Multivariate Time Series

Time series forecasting is about estimating the future value of a time series on the basis of past data. Many time series problems can be solved by looking a single step into the future. Multi-step forecasting, by contrast, models the distribution of future values of a signal over a prediction horizon: it predicts multiple output values at the same time, for example to forecast the further course of a gradually rising sine wave. A multivariate model additionally draws on several input signals, and because many of these variables are interdependent, the statistical modeling becomes even more complex.

A univariate forecast model reduces this complexity to a minimum: a single factor. Other dimensions are ignored. Although a multivariate model can only take a fraction of the influencing factors into account, it can at least consider several factors simultaneously.

For example, a multivariate stock market forecasting model can consider not only the closing price, but also the opening price, moving averages, daily highs, and the prices of other stocks. Multivariate models are still not able to fully cover the complexity of the market, but because they are a more detailed abstraction of reality than univariate models, they can provide more accurate predictions.
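The only structural difference from the univariate case is that each input window now carries several feature columns while the target is still one column (here the hypothetical target is the first column, standing in for `Close`). A minimal numpy sketch, with names and data of my own invention:

```python
import numpy as np

def make_multivariate_windows(data, time_steps, target_col=0):
    """data: (n_samples, n_features) array; target is the next value of one column."""
    X, y = [], []
    for i in range(len(data) - time_steps):
        X.append(data[i:i + time_steps, :])          # all features over the window
        y.append(data[i + time_steps, target_col])   # next value of e.g. Close
    return np.array(X), np.array(y)

# the three columns stand in for Close, High, Volume
data = np.arange(30, dtype=float).reshape(10, 3)
X, y = make_multivariate_windows(data, time_steps=4)
print(X.shape, y.shape)   # (6, 4, 3) (6,)
```

The resulting `(samples, time_steps, features)` shape is also the input shape an LSTM layer expects.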

The model will be a recurrent neural network with Long Short-Term Memory (LSTM) layers. Recurrent networks with LSTM layers have loops that enable them to pass output values from one training iteration to the next.

`Neural Networks` are trained in multiple epochs. An epoch is a training iteration over the whole input data. During an epoch, the entire training dataset is passed forward and backward in multiple slices through the neural network. The weights of the network are adjusted throughout this process. In addition, the batch size determines after how many examples the model updates the weights between neurons.

However, after one epoch a network is typically under-fitting the data, resulting in bad prediction performance. Therefore, one iteration is typically not enough and we need to pass the whole dataset multiple times through the neural network to enable it to learn.

On the other hand, one should be careful not to choose the number of epochs too high, because a model tends to overfit after some time. Such a model achieves great performance on the training data, but poor performance on any other data.
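This trade-off is usually handled with early stopping: monitor the validation loss and stop once it has failed to improve for a given number of epochs (the patience). The core logic, stripped of any framework, looks like this; the function and the loss values are my own illustration:

```python
def early_stop_epoch(val_losses, patience=3):
    """Return the 1-based epoch at which training would stop:
    when validation loss has not improved for `patience` epochs."""
    best, since_best = float("inf"), 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best, since_best = loss, 0    # new best: reset the patience counter
        else:
            since_best += 1
            if since_best >= patience:
                return epoch              # patience exhausted: stop here
    return len(val_losses)                # never triggered: train all epochs

# loss improves, then plateaus: stop 3 epochs after the minimum at epoch 4
losses = [0.9, 0.7, 0.5, 0.4, 0.41, 0.42, 0.43, 0.44]
print(early_stop_epoch(losses, patience=3))  # 7
```

Deep learning frameworks ship this as a ready-made callback (e.g. Keras's `EarlyStopping`), which is what a real training loop would use.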

The LSTM architecture enables the network to preserve certain learnings throughout the whole training process. What the network learned in a previous iteration informs later epochs. This allows the network to consider patterns at different levels of abstraction. Because of this chain-like nature, recurrent neural networks have achieved excellent results on sequential data.

We combine LSTM layers with a rolling forecast approach to forecast a sine curve with a linear slope. As illustrated below, this approach generates predictions for multiple time steps by iteratively reusing the model outputs of the previous step.
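The rolling (recursive) forecast mechanism is independent of the model itself: predict one step, slide the window, feed the prediction back in as the newest input, and repeat. Here is a sketch of just that loop, using a trivial stand-in "model" (last value plus the mean step in the window) instead of a trained LSTM; all names and numbers are my own:

```python
import numpy as np

def rolling_forecast(model, history, time_steps, horizon):
    """Predict `horizon` future values one step at a time,
    feeding each prediction back in as the newest input."""
    window = list(history[-time_steps:])
    preds = []
    for _ in range(horizon):
        yhat = float(model(np.array(window)))
        preds.append(yhat)
        window = window[1:] + [yhat]   # slide: drop oldest value, append prediction
    return preds

# stand-in "model": next value = last value + mean step inside the window
trend_model = lambda w: w[-1] + np.mean(np.diff(w))

history = [1.0, 2.0, 3.0, 4.0, 5.0]
print(rolling_forecast(trend_model, history, time_steps=3, horizon=3))  # [6.0, 7.0, 8.0]
```

With a real LSTM, `model` would be the network's one-step prediction; the loop is unchanged. Note that errors compound, since later predictions are built on earlier ones.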

## Multi-step Time Series Forecasting Model in Python

These are the steps involved in multi-step time series forecasting:

1. Generate and explore (EDA) sample time series data

2. Configure the time series prediction model, and set up early stopping

3. Build and train the model, which includes:

• Defining a function to fit the model ( `train, val, time-steps, hl, lr, batch, epochs` )
• Evaluating the model
• Plotting the predictions
• Plotting the training errors
• Plotting the data prediction

The model is built on the extracted series `['Close', 'High', 'Volume']` for both the multivariate and the univariate time series.

4. Cross-Validation

• Setting up hyperparameters for the model: `time-steps, hl, lr, batch_size, num_epochs`
• Extracting the series
• Normalizing the series
• Splitting the data for initial model creation
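The normalization and splitting steps above can be sketched without any framework. Two things matter for time series: keep the scaler's parameters so predictions can be inverted back to prices, and split chronologically rather than randomly. A minimal numpy version, with a stand-in price series of my own:

```python
import numpy as np

def minmax_scale(series):
    """Scale to [0, 1]; return min/max so predictions can be inverted later."""
    lo, hi = series.min(), series.max()
    return (series - lo) / (hi - lo), lo, hi

def chronological_split(series, train_frac=0.8):
    """Never shuffle a time series: train on the past, validate on the future."""
    cut = int(len(series) * train_frac)
    return series[:cut], series[cut:]

close = np.linspace(100.0, 200.0, 10)   # stand-in price series
scaled, lo, hi = minmax_scale(close)
train, val = chronological_split(scaled, train_frac=0.8)
print(len(train), len(val))             # 8 2
print(scaled.min(), scaled.max())       # 0.0 1.0
```

In practice the scaler should be fitted on the training split only, to avoid leaking information from the validation period.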

The time series then needs an extended split method:

This module provides a class to split time-series data for back-testing and evaluation. The aim is to extend the current `sklearn` implementation and broaden its use.

In this instance, I split the data step by step for model creation, but this can be improved with the `TimeSeriesSplit` function. Details here:

Time Series cross-validator

Provides train/test indices to split time series data samples that are observed at fixed time intervals into train/test sets. In each split, the test indices must be higher than before; thus `shuffling` in this cross-validator is inappropriate.

This cross-validation object is a variation of the `TimeSeriesSplit` class from the popular `scikit-learn` package. It extends the base functionality to allow for expanding windows and rolling windows with configurable train and test sizes, and a delay between each train and test window.
In this implementation we specifically force the test size to be equal across all splits.
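For reference, the base behaviour we are extending looks like this with scikit-learn's built-in `TimeSeriesSplit` (the `test_size` keyword forces equal test sizes; the 12-sample array is my own toy data):

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(12).reshape(12, 1)

# expanding window: each split trains on everything before a fixed-size test block
tscv = TimeSeriesSplit(n_splits=4, test_size=2)
for train_idx, test_idx in tscv.split(X):
    print("TRAIN:", train_idx, "TEST:", test_idx)
```

Every test block comes strictly after its training window, which is exactly why shuffling is inappropriate here.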

The expanding window works as follows:

Parameters:

`n_splits: int, default = 5` : Number of splits. Must be at least 4

`train_size: int, optional` : Size for a single training set

`test_size: int, optional, must be positive` : Size of a single testing set

`delay: int, default = 0, must be non-negative` : Number of index shifts to make between the train and test sets. Example:

delay = 0 `TRAIN: [0 1 2 3] TEST: [4]`

delay = 1 `TRAIN: [0 1 2 3] TEST: [5]`

delay = 2 `TRAIN: [0 1 2 3] TEST: [6]`

`force_step_size: int, optional` : Ignore the split logic and force the training window to shift forward by the step size for each of the n_splits. Example:

`TRAIN: [0 1 2 3] TEST: [4]`

`TRAIN: [0 1 2 3 4] TEST: [5]`

`TRAIN: [0 1 2 3 4 5] TEST: [6]`

`TRAIN: [0 1 2 3 4 5 6] TEST: [7]`
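The parameters above can be implemented as a small index generator. This is a hypothetical helper of my own, mirroring the behaviour described (equal test size per split, a `delay` gap between train and test, and `force_step_size` overriding the step), not the module's actual code:

```python
def expanding_window_splits(n_samples, n_splits=5, train_size=4,
                            test_size=1, delay=0, force_step_size=None):
    """Yield (train_indices, test_indices) pairs with an equal test size per split."""
    splits = []
    for k in range(n_splits):
        # by default the window grows by test_size; force_step_size overrides this
        step = force_step_size if force_step_size is not None else test_size
        end_train = train_size + k * step
        start_test = end_train + delay        # delay shifts the test block forward
        end_test = start_test + test_size
        if end_test > n_samples:              # not enough data left for a full test set
            break
        splits.append((list(range(end_train)), list(range(start_test, end_test))))
    return splits

for train, test in expanding_window_splits(12, n_splits=4, train_size=4, test_size=1):
    print("TRAIN:", train, "TEST:", test)
```

With `delay=2` the first test block would be `[6]` instead of `[4]`, matching the delay examples above.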

Applied to the series described above, the best training results of the multivariate and univariate methods across the splits are:

`Split III with Multivariate-LSTM model`

`Split IV with Univariate-LSTM Model`

A mind that is stretched by a new experience can never go back to its old dimensions.
