Financial Time Series Forecasting
Use cases
- Stock price forecasting
- Index prediction
- Forex price prediction
- Commodity (oil, gold, etc.) price prediction
- Bond price forecasting
- Volatility forecasting
- Cryptocurrency price forecasting
Patterns
- Two main patterns: price prediction (regression) and price movement (trend) prediction
- Many researchers focus on trend prediction
- Trend prediction becomes a classification problem: either only up and down movements are considered (a 2-class problem), or up, down, and neutral movements (a 3-class problem)
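A minimal sketch of how such classification targets can be derived from a price series; the 0.2% neutral band, the helper name, and the synthetic prices are illustrative assumptions, not taken from any of the papers above:

```python
import numpy as np
import pandas as pd

def label_movements(close: pd.Series, neutral_band: float = 0.002) -> pd.Series:
    """Turn next-day returns into trend-classification targets.

    2-class: keep only "up"/"down" labels.
    3-class: returns inside +/- neutral_band are labeled "neutral".
    """
    next_ret = close.pct_change().shift(-1)           # return realized over the next day
    labels = pd.Series("neutral", index=close.index)
    labels[next_ret > neutral_band] = "up"
    labels[next_ret < -neutral_band] = "down"
    return labels.iloc[:-1]                           # last day has no next-day return

# Usage with a synthetic random-walk price series
prices = pd.Series(100 * np.cumprod(1 + np.random.normal(0, 0.01, 250)))
print(label_movements(prices).value_counts())
```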
Time Aspects
- Period refers to the time span used for training and testing
- Lag is the length of the input window (e.g. 30d means the input vector covers a 30-day window)
- Horizon is how far into the future the prediction extends
Features
- Lagged stock returns
- Price data
- Turnover and number of trades.
- Daily closing prices
- Monthly and daily log-returns
- Price time series and emotional data from text posts for predicting the stock opening price of the next day
- Detecting the buy-sell pressure of movements
- GDP, Unemployment rate, Inventories, etc.
- Financial news
- Stock market data
- Volatility
- Technical indicators, Price data, News
- Twitter sentiment and stock prices
- Social media news, Index data
- Limit order book state
- Trades
- Buy/sell orders
- Order deletions
- Selected words in news articles
- Weather conditions and various macroeconomic indicators
- Specific customer shipment patterns or the current competitive market situation
- Transformation of categorical variables into several binary values via so-called one-hot encoding (see the sketch after this list)
- Trends or seasonal components
- Classifying the competitive market situation as "highly competitive", "moderately competitive", "not competitive", and the like
- A particular weather situation coinciding with a peak shipment date, nullifying or exacerbating the effect of the peak shipment date
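A minimal pandas sketch of the one-hot encoding mentioned in this list; the column name, categories, and prefix are illustrative assumptions:

```python
import pandas as pd

# Hypothetical categorical feature describing the competitive market situation
df = pd.DataFrame({
    "market_situation": ["highly competitive", "moderately competitive",
                         "not competitive", "highly competitive"]
})

# One-hot encoding: each category becomes its own binary column
encoded = pd.get_dummies(df, columns=["market_situation"], prefix="market")
print(encoded)
```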
Challenges
- Price disruptions, high volatility, and bid-ask spread variations create arbitrage opportunities across different platforms
What is the future direction for DL research for financial time series forecasting ?
- Response: NLP, semantics, and text-mining-based hybrid models ensembled with time-series data might become more common in the near future.
Expert Aggregation for Financial Forecasting
- Aggregation with expert advice has the advantage of considering several forecasters instead of one, retaining the knowledge of each expert across time
- By dynamically weighting the portfolios, the mixture decreases the mean mixture excess risk, ensuring that on average the aggregation's forecasting loss is close to, or better than, that of the best expert
- Stocks are then sorted according to each expert's prediction, allowing two portfolios (stocks to invest in or to short) to be built for each expert.
- In a second step, the expert portfolios are aggregated based on the strategy returns, building an adaptive convex combination of the family of portfolios (see the sketch below).
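The paper's exact procedure is not reproduced here; the sketch below shows one standard way to build such an adaptive convex combination, an exponentially weighted average forecaster in which experts are re-weighted by their past losses. The learning rate `eta` and the squared loss are illustrative assumptions:

```python
import numpy as np

def aggregate_experts(expert_preds: np.ndarray, realized: np.ndarray, eta: float = 2.0):
    """Exponentially weighted average forecaster.

    expert_preds: (T, K) predictions from K experts over T steps.
    realized:     (T,)   realized values.
    Returns the aggregated predictions and the final convex weights.
    """
    T, K = expert_preds.shape
    weights = np.full(K, 1.0 / K)                     # start from the uniform convex combination
    agg = np.empty(T)
    for t in range(T):
        agg[t] = weights @ expert_preds[t]            # convex combination of expert forecasts
        losses = (expert_preds[t] - realized[t]) ** 2 # squared loss of each expert at step t
        weights *= np.exp(-eta * losses)              # down-weight experts with large losses
        weights /= weights.sum()                      # renormalize so the weights stay convex
    return agg, weights
```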
MACHINE LEARNING FOR FINANCIAL FORECASTING, PLANNING AND ANALYSIS: RECENT DEVELOPMENTS AND PITFALLS
- Fraud detection, financial forecasting, planning, and resource allocation
- Investments in research and development (R&D)
- Expansion of production capacity
- Financial obligations to debt holders or equity investors and tax authorities
- The time horizons considered for financial forecasts and plans usually range from one month to several years
- A practical example is to predict the sales of a product using input variables such as time of the year, price level, advertising expenditures and availability of competitor products.
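A minimal scikit-learn sketch of that sales example on synthetic data; the feature names, the data-generating coefficients, and the choice of gradient boosting are assumptions for illustration only:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Illustrative (synthetic) data using the input variables named in the example above
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "month": rng.integers(1, 13, 500),
    "price_level": rng.uniform(5, 15, 500),
    "advertising_spend": rng.uniform(0, 1000, 500),
    "competitor_available": rng.integers(0, 2, 500),
})
df["sales"] = (200 - 8 * df["price_level"] + 0.1 * df["advertising_spend"]
               - 30 * df["competitor_available"] + rng.normal(0, 10, 500))

# Fit a regressor mapping the input variables to sales and predict a few rows
model = GradientBoostingRegressor().fit(df.drop(columns="sales"), df["sales"])
print(model.predict(df.drop(columns="sales").head()))
```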
Real-time Forecasting of Time Series in Financial Markets Using Sequentially Trained Many-to-one LSTMs
- LSTM is also suitable for complex data sequences such as stock time series extracted from financial markets because it has internal memory, can be customized, and is far less prone to gradient-related issues than plain RNNs
- Since we make predictions only for one time step ahead at a time for an input time series, the LSTM architecture implemented here is the many-to-one type
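A minimal Keras sketch of a many-to-one LSTM that takes a 30-step window and predicts the next value; it is not the paper's exact architecture, and the layer size, window length, synthetic data, and training settings are assumptions:

```python
import numpy as np
import tensorflow as tf

LAG = 30  # length of the input window (time steps per sample)

# Synthetic univariate series shaped into (samples, LAG, 1) windows -> next-value targets
series = np.sin(np.linspace(0, 50, 1000)) + np.random.normal(0, 0.1, 1000)
X = np.stack([series[i:i + LAG] for i in range(len(series) - LAG)])[..., None]
y = series[LAG:]

# Many-to-one LSTM: the whole window goes in, a single scalar prediction comes out
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# One-step-ahead prediction from the most recent window
next_value = model.predict(series[-LAG:].reshape(1, LAG, 1))
```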
Time series workshop
- Dynamic Time Warping
- Common Periodicity Detection Algorithms
- Time domain: autocorrelation function
- Frequency domain: Fisher's test via the periodogram (see the sketch after this list)
- Short-term forecasting: predict the near future
- Long-term forecasting: predict the future with an extended period
- Extreme value forecasting: predict the extreme values
- Point or Probabilistic forecasting: predict point value or interval/probability distribution
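A small sketch of the two periodicity-detection routes listed above (time-domain autocorrelation, frequency-domain periodogram), assuming statsmodels and SciPy; Fisher's significance test itself is not implemented, only the peak-finding step, and the synthetic series is illustrative:

```python
import numpy as np
from scipy.signal import periodogram
from statsmodels.tsa.stattools import acf

# Synthetic series with a period of 12 samples plus noise
n = 600
x = np.sin(2 * np.pi * np.arange(n) / 12) + np.random.normal(0, 0.5, n)

# Time domain: the autocorrelation function peaks at multiples of the period
autocorr = acf(x, nlags=48)
print("ACF peak near lag:", np.argmax(autocorr[1:]) + 1)

# Frequency domain: the dominant periodogram peak gives the dominant frequency
freqs, power = periodogram(x)
print("Dominant period:", 1 / freqs[np.argmax(power[1:]) + 1])
```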
N-BEATS
DeepAR
TFT
Autoformer: Transformer with auto-correlation mechanism
FEDformer: frequency enhanced decomposed Transformer
Quatformer: Transformer with quaternions for periodic time series
Time-Series Works and Conferences
Transformers in Time Series
Dynamic Time Warping (DTW) variations
K-stacked LSTM
Deep Time
- DeepTime: Using Deep Time-Index Meta-Learning to Improve Non-Stationary Time-Series Forecasting
- A time-series is a series of data measurements over time – a sequential collection of numerical data
- Non-Stationarity: When Time Series Changes Over Time
- Stationarity means the statistical properties of the series (mean, variance, autocorrelation) stay roughly constant over time, so values remain within a stable range and the statistical patterns are regular
- Meta-learning is a technique that aims to achieve the kind of quick learning exhibited by humans
- The inner learning loop learns very quickly from a small set of examples, called the support set.
- The outer learning loop ensures that the inner loop can perform this fast adaptation on new support sets. This is done by being trained on a query set - a set containing similar but distinct examples from the initial support set.
- Single-shot: Make the predictions all at once.
- Autoregressive: Make one prediction at a time and feed the output back to the model.
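A rough sketch contrasting the two decoding strategies; `model.predict` stands in for any fitted forecaster (a single-call multi-step model in the first function, a one-step-ahead model in the second) and is an assumption rather than a specific library API:

```python
import numpy as np

def single_shot(model, window: np.ndarray, horizon: int) -> np.ndarray:
    """Single-shot: the model emits all `horizon` future values in one call."""
    return model.predict(window[None, ...])[0]            # shape (horizon,)

def autoregressive(model, window: np.ndarray, horizon: int) -> np.ndarray:
    """Autoregressive: predict one step, feed the output back in, repeat."""
    preds = []
    current = window.copy()
    for _ in range(horizon):
        next_val = model.predict(current[None, ...])[0]   # one-step-ahead prediction
        preds.append(next_val)
        current = np.roll(current, -1)
        current[-1] = next_val                            # append the prediction to the window
    return np.array(preds)
```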
The main features of the input windows are:
- The width (number of time steps) of the input and label windows.
- The time offset between them.
- Which features are used as inputs, labels, or both.
Generates windows of 24 hours of consecutive inputs and labels at a time:
input_width=24, label_width=24, shift=1
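A minimal NumPy sketch of such a window generator; the parameter semantics follow the input_width/label_width/shift convention quoted above, but the helper itself is illustrative:

```python
import numpy as np

def make_windows(series: np.ndarray, input_width: int, label_width: int, shift: int):
    """Slice a series into (inputs, labels) pairs.

    Each window spans input_width + shift steps; the first input_width steps
    are the inputs and the last label_width steps are the labels.
    """
    total = input_width + shift
    inputs, labels = [], []
    for start in range(len(series) - total + 1):
        window = series[start:start + total]
        inputs.append(window[:input_width])
        labels.append(window[total - label_width:])
    return np.array(inputs), np.array(labels)

# Usage mirroring the parameters above
X, y = make_windows(np.arange(100, dtype=float), input_width=24, label_width=24, shift=1)
print(X.shape, y.shape)   # (76, 24) and (76, 24): 24 inputs, 24 labels offset by 1 step
```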
Forecasting Notes
- RNNs/CNNs are able to extract the most relevant features without manual engineering
- Forecasting applications (e.g. retail demand, electricity load, weather, finance, etc.)
- Bregman Volatility allows us to compute the optimal volatility of a sequence of forecasts
MQTransformer: Multi-Horizon Forecasts with Context-Dependent and Feedback-Aware Attention
Time series forecasting: the key fundamental questions
Time series forecasting is a statistical technique used to predict future values of a variable based on historical data. It is widely used in various fields, such as finance, economics, and weather forecasting. When working with time series forecasting, there are several key fundamental questions to consider:
What is the objective of the forecast? Clearly define the purpose of the forecast, such as predicting sales, stock prices, or weather conditions. This will help guide the selection of appropriate forecasting methods and evaluation metrics.
What is the frequency and length of the time series data? The frequency (e.g., daily, monthly, yearly) and length of the historical data will influence the choice of forecasting models and techniques. Longer and more frequent data can provide more accurate forecasts but may also require more complex models.
Is the time series stationary or non-stationary? Stationary time series have constant mean and variance over time, while non-stationary time series exhibit trends or seasonality. Different forecasting methods are suitable for stationary and non-stationary data, so it is essential to identify the nature of the time series.
Are there any seasonal patterns or trends in the data? Identifying and accounting for seasonality and trends can improve the accuracy of forecasts. Techniques such as decomposition, differencing, or using seasonal models like SARIMA can help address these patterns.
Are there any external factors or events that may influence the time series? Consider any external factors, such as economic conditions, holidays, or promotions, that may impact the variable being forecasted. Incorporating these factors into the forecasting model can improve its accuracy.
Which forecasting model(s) should be used? There are various time series forecasting models, such as ARIMA, Exponential Smoothing, and Neural Networks. Selecting the appropriate model(s) depends on the characteristics of the data and the forecasting objective.
How to evaluate the accuracy of the forecasts? Use appropriate evaluation metrics, such as Mean Absolute Error (MAE), Mean Squared Error (MSE), or Mean Absolute Percentage Error (MAPE), to assess the accuracy of the forecasts and compare different models (see the metric sketch after these questions).
How to handle uncertainty and confidence intervals? Forecasting is inherently uncertain, so it is essential to provide confidence intervals or prediction intervals to quantify the uncertainty associated with the forecasts.
How often should the forecasts be updated? Determine the frequency of updating the forecasts based on the needs of the decision-making process and the availability of new data.
How to communicate the forecasts and their uncertainty to stakeholders? Effectively communicate the forecasts, their accuracy, and associated uncertainties to stakeholders to support informed decision-making.
Addressing these fundamental questions will help ensure a robust and accurate time series forecasting process that meets the needs of the decision-makers and stakeholders.
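A small sketch of the evaluation and uncertainty questions above: point-forecast metrics plus a rough prediction interval under a Gaussian-residual assumption (the helper names and the 95% z-value are illustrative):

```python
import numpy as np

def evaluate(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Standard point-forecast accuracy metrics (MAE, MSE, MAPE)."""
    err = y_true - y_pred
    return {
        "MAE": np.mean(np.abs(err)),
        "MSE": np.mean(err ** 2),
        "MAPE": 100 * np.mean(np.abs(err / y_true)),   # assumes y_true has no zeros
    }

def naive_interval(y_pred: np.ndarray, residuals: np.ndarray, z: float = 1.96):
    """Rough 95% prediction interval assuming approximately Gaussian residuals."""
    s = residuals.std()
    return y_pred - z * s, y_pred + z * s
```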
Global models in time series forecasting refer to models that capture the overall structure and patterns in the entire time series data. These models consider the entire dataset as a single entity and attempt to identify and model the underlying patterns, trends, and seasonality that are consistent across the entire time series. Global models are in contrast to local models, which focus on capturing patterns and relationships within smaller segments or windows of the time series data.
Some common global models used in time series forecasting include:
Autoregressive Integrated Moving Average (ARIMA): ARIMA is a linear model that combines autoregressive (AR) and moving average (MA) components, along with differencing to make the time series stationary. It is a widely used global model for forecasting stationary time series data.
Exponential Smoothing State Space Model (ETS): ETS is a family of forecasting models that includes Simple Exponential Smoothing, Holt's Linear Trend, and Holt-Winters Seasonal models. These models use exponential smoothing to capture the level, trend, and seasonality components in the time series data.
Seasonal-Trend decomposition using Loess (STL): STL is a technique used to decompose a time series into its trend, seasonal, and residual components. The decomposed components can then be modeled separately and combined to generate forecasts.
Vector Autoregression (VAR): VAR is a multivariate extension of the ARIMA model, used for forecasting multiple interrelated time series simultaneously. It captures the linear dependencies between the variables in the system and can be used for global forecasting in a multivariate setting.
Prophet: Developed by Facebook, Prophet is a global forecasting model that combines additive regression with seasonal and holiday components. It is designed to handle time series data with strong seasonality and multiple seasonality patterns.
Global models are generally more straightforward to implement and interpret compared to local models, as they focus on capturing the overall structure of the time series data. However, they may not be as effective in capturing short-term fluctuations or non-linear patterns in the data. In such cases, local models or a combination of global and local models may be more appropriate for forecasting.
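A minimal statsmodels sketch of fitting one of the global models above (ARIMA) on a whole series and producing forecasts with prediction intervals; the synthetic monthly data and the (1, 1, 1) order are assumptions:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic monthly series with a trend plus noise, standing in for real data
idx = pd.date_range("2015-01-01", periods=96, freq="MS")
y = pd.Series(np.linspace(100, 160, 96) + np.random.normal(0, 5, 96), index=idx)

# One global model fit on the entire history; the (p, d, q) order is illustrative
model = ARIMA(y, order=(1, 1, 1)).fit()
forecast = model.get_forecast(steps=12)
print(forecast.predicted_mean)   # point forecasts for the next 12 months
print(forecast.conf_int())       # 95% prediction intervals
```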
Keep Exploring!!!