One-liners - Summary from link
- Lesson #1 - Sentences can be represented numerically as 1s and 0s based on the occurrence of each word; this is one-hot encoding (see the first sketch after this list)
- Lesson #2 - When we set up sequences, the probability of each possible next word is its count divided by the total count of words that follow the context; this is a first-word (single-word context) sequence
- Lesson #3 - If we know multiple words of the sequence, it becomes easier to narrow down the possible next words with more confidence
- Lesson #4 - Predicting word by word from just the previous word is hard; keeping longer sequences makes it easier. Instead of one word, consider pairs and triplets (bi-grams, tri-grams) so that sequences are remembered in multiple combinations (the count-based sketch after this list covers Lessons #2-#4)
- Lesson #5 - Create sequences by skipping words (see the skip-gram sketch below)
- Lesson #6 - Use embeddings to leverage the similarity between words (see the embedding sketch below)
- Lesson #7 - Use positional encoding to account for a word's position/location in the embedding space (see the positional encoding sketch below)
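A minimal sketch of Lesson #1, assuming a tiny made-up corpus: each sentence becomes a vector of 1s and 0s depending on which vocabulary words occur in it.

```python
# Each sentence becomes a vector of 0s and 1s: 1 if the vocabulary word
# occurs in the sentence, 0 otherwise.
sentences = ["the cat sat on the mat", "the dog sat on the log"]  # toy corpus (assumption)
vocab = sorted({w for s in sentences for w in s.split()})

def encode(sentence):
    words = set(sentence.split())
    return [1 if w in words else 0 for w in vocab]

print(vocab)
for s in sentences:
    print(s, "->", encode(s))
```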
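A count-based sketch for Lessons #2-#4, using an invented toy text: the probability of a next word is its count divided by the total count of words seen after that context, and a longer (bi-gram) context narrows the choices.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran on the road".split()  # toy text (assumption)

# Count which words follow a given context of n words.
def next_word_probs(context_size):
    counts = defaultdict(Counter)
    for i in range(len(corpus) - context_size):
        context = tuple(corpus[i:i + context_size])
        counts[context][corpus[i + context_size]] += 1
    # probability = count of next word / total count of words after this context
    return {ctx: {w: c / sum(nxt.values()) for w, c in nxt.items()}
            for ctx, nxt in counts.items()}

print(next_word_probs(1)[("the",)])        # single-word context: several possible next words
print(next_word_probs(2)[("the", "cat")])  # two-word context: fewer, more confident options
```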
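For Lesson #5, a sketch of creating sequences by skipping to words around a centre word (a skip-gram style of pairing); the window size and text are assumptions.

```python
text = "the quick brown fox jumps over the lazy dog".split()  # toy text (assumption)
window = 2  # how far to skip on each side of the centre word (assumption)

# Pair each centre word with the words up to `window` positions away,
# skipping over the words in between.
pairs = []
for i, centre in enumerate(text):
    for j in range(max(0, i - window), min(len(text), i + window + 1)):
        if j != i:
            pairs.append((centre, text[j]))

print(pairs[:8])
```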
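For Lesson #6, a sketch of how embeddings let similar words sit close together; the vectors below are hand-picked toy numbers, not learned values.

```python
import math

# Hand-picked 3-dimensional toy embeddings (assumption, not trained values).
embeddings = {
    "cat": [0.9, 0.1, 0.0],
    "dog": [0.8, 0.2, 0.1],
    "car": [0.0, 0.9, 0.8],
}

def cosine(a, b):
    # cosine similarity: closer to 1.0 means more similar directions
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

print("cat vs dog:", round(cosine(embeddings["cat"], embeddings["dog"]), 3))  # close to 1
print("cat vs car:", round(cosine(embeddings["cat"], embeddings["car"]), 3))  # close to 0
```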
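For Lesson #7, a sketch of the sinusoidal positional encoding from the Transformer paper, which is one common way to inject a word's position into the embedding space; the tiny dimensions here are chosen only for readability.

```python
import math

d_model = 4    # embedding dimension (kept tiny; assumption)
positions = 3  # number of positions to show (assumption)

# Sinusoidal positional encoding: even dimensions use sin, odd dimensions use cos,
# with a frequency that decreases as the dimension index grows.
def positional_encoding(pos, d):
    return [math.sin(pos / 10000 ** (i / d)) if i % 2 == 0
            else math.cos(pos / 10000 ** ((i - 1) / d))
            for i in range(d)]

for pos in range(positions):
    print(pos, [round(v, 3) for v in positional_encoding(pos, d_model)])
```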
I still need a few more iterations, but this is my first-cut understanding.
Keep Exploring!!!