Timeseries forecasting notes
Session 1 - Timeseries forecasting notes
Key Notes
- A time series is a series of data points indexed in time order (year / month / second)
- The quantity of interest evolves as a function of time, so its statistical properties are studied as functions of time
- Stationary - statistical properties such as mean and variance are constant over time
- Non-stationary - those properties change over time (e.g. a trend or changing variance)
- Libraries - statsmodels, pmdarima
- ARIMA (works even for non-stationary series, because the differencing step removes the trend)
- Fully connected neural networks and RNNs can also be used for forecasting
ARIMA - p, d, q (fitting sketch after these notes)
- AR(p) - Autoregression on the last p values
- I(d) - Integration: difference the series d times to make it stationary
- MA(q) - Moving average over the last q forecast errors
SARIMA - seasonal ARIMA
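A minimal sketch of fitting an ARIMA(p, d, q) model with statsmodels, as noted above; the monthly series here is synthetic and the order (1, 1, 1) is only a placeholder, not a recommendation.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    # Hypothetical monthly series with an upward trend (non-stationary)
    idx = pd.date_range("2020-01-01", periods=48, freq="MS")
    y = pd.Series(100 + np.arange(48) + np.random.randn(48), index=idx)

    # AR(p)=1, I(d)=1 (one difference removes the trend), MA(q)=1
    fitted = ARIMA(y, order=(1, 1, 1)).fit()
    print(fitted.summary())
    print(fitted.forecast(steps=6))   # forecast the next 6 months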
Session 2 - Timeseries forecasting notes
Key Notes
- Check whether the data is stationary or not (ADF test sketch below)
- Split / resample the data by month
- Additive time series = Base level + Trend + Seasonality + Error
- Multiplicative time series = Base level x Trend x Seasonality x Error
- Positive / negative correlation between features (a scatterplot shows this as well)
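A rough sketch of the stationarity check and the additive decomposition described above, using statsmodels; the synthetic monthly series, the seasonal period of 12, and the 0.05 cut-off are assumptions for illustration.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.stattools import adfuller
    from statsmodels.tsa.seasonal import seasonal_decompose

    # Hypothetical monthly series with trend + yearly seasonality + noise
    idx = pd.date_range("2019-01-01", periods=36, freq="MS")
    y = pd.Series(np.arange(36) + 10 * np.sin(np.arange(36) * 2 * np.pi / 12)
                  + np.random.randn(36), index=idx)

    # ADF test: a small p-value (< 0.05) suggests the series is stationary
    stat, pvalue = adfuller(y)[:2]
    print(f"ADF statistic = {stat:.3f}, p-value = {pvalue:.3f}")

    # Additive model: Base level + Trend + Seasonality + Error
    decomposition = seasonal_decompose(y, model="additive", period=12)
    print(decomposition.trend.dropna().tail())
    # Use model="multiplicative" when the components multiply (values must be > 0)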
Session 3 - Timeseries forecasting notes
Key Notes
- SARIMA - for seasonal datasets
- Pyramid Auto-ARIMA (pmdarima) - automatically searches for the best model order (sketch below)
- pip install pmdarima
- pip install statsmodels --upgrade
- Aggregate data by week / month / day of week for insights
- Up-and-down movement in the plot indicates trend / seasonality
- A yearly upward trend means the series is changing, i.e. not stationary
- Seasonality - does the pattern repeat every year?
- auto_arima model
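A minimal sketch of the pmdarima auto_arima search mentioned above; the synthetic series and the seasonal period m=12 are assumptions for illustration.

    import numpy as np
    import pandas as pd
    import pmdarima as pm

    # Hypothetical monthly series with trend and yearly seasonality
    idx = pd.date_range("2018-01-01", periods=60, freq="MS")
    y = pd.Series(50 + np.arange(60) + 8 * np.sin(np.arange(60) * 2 * np.pi / 12)
                  + np.random.randn(60), index=idx)

    # Searches over (p, d, q)(P, D, Q, m) orders and keeps the best model by AIC
    model = pm.auto_arima(y, seasonal=True, m=12, stepwise=True,
                          suppress_warnings=True)
    print(model.summary())
    print(model.predict(n_periods=6))   # forecast the next 6 months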
Session 4 - Timeseries forecasting notes
Key Notes
- Feed-forward networks
- Sequential model - a stack of layers
- Dense - fully connected layer
- Preprocessing - MinMaxScaler transformation
- Use each window of five points to predict the sixth point
- Keras' TimeseriesGenerator can be used to build these windows (sketch below)
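A rough sketch of the window-of-five setup above: scale the series with MinMaxScaler, build (5-point input, 6th-point target) samples with Keras' TimeseriesGenerator, and fit a small fully connected network; the sine series and layer sizes are arbitrary choices for illustration.

    import numpy as np
    from sklearn.preprocessing import MinMaxScaler
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense, Flatten
    from tensorflow.keras.preprocessing.sequence import TimeseriesGenerator

    series = np.sin(np.linspace(0, 20, 200)).reshape(-1, 1)

    # Scale values into [0, 1] before training
    scaler = MinMaxScaler()
    scaled = scaler.fit_transform(series)

    # Every 5 consecutive points form the input, the 6th point is the target
    n_input = 5
    generator = TimeseriesGenerator(scaled, scaled, length=n_input, batch_size=8)

    model = Sequential([
        Flatten(input_shape=(n_input, 1)),   # flatten the 5-step window
        Dense(32, activation="relu"),        # fully connected (Dense) layer
        Dense(1)                             # predict the next point
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(generator, epochs=5, verbose=0)

    # Predict the point after the last 5 observations and undo the scaling
    last_window = scaled[-n_input:].reshape(1, n_input, 1)
    print(scaler.inverse_transform(model.predict(last_window, verbose=0)))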
Session - BERT Neural Network - EXPLAINED!
Key Notes
- Transformers for Neural Machine Translation
- LSTM challenges - words processed sequentially, slow to train, context learned only left-to-right or right-to-left
- Transformer - all words processed simultaneously
- Transformer has encoder / decoder
- Generate embedding for each word
- Decoder takes the words translated so far and generates the next word of the translation
- Encoder learns English plus its context
- Decoder learns how English maps to French words
- Stacking just encoders gives BERT
- Stacking decoders gives the GPT architecture
- BERT is used for NMT, question answering, sentiment analysis, text summarization
- Pretraining - BERT learns the language and its context via masked language modelling and next sentence prediction (sketch below)
- Fine-tuning - e.g. Q&A: replace the last layer with task-specific output parameters
- The embedding layer output is the sum of token embeddings, segment (sentence) embeddings, and position embeddings
- All word vectors have the same size and are generated simultaneously
- Training using cross entropy loss
- BERT base / BERT large models
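A small sketch of the masked language model objective mentioned under pretraining, using the Hugging Face transformers library (an assumption, since the notes do not name a library); bert-base-uncased is the standard BERT base checkpoint and the example sentence is made up.

    from transformers import pipeline

    # BERT base fills in [MASK] using context from both the left and the right
    fill_mask = pipeline("fill-mask", model="bert-base-uncased")

    for candidate in fill_mask("The encoder reads the whole [MASK] at once."):
        print(candidate["token_str"], round(candidate["score"], 3))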
Code Exercise - Link
To-do list
Transformer Neural Networks - EXPLAINED! (Attention is all you need)
More Revisions, More Revisions!!!