Note: The below is from my learning from https://www.cloudskillsboost.google/course_templates/536 (Introduction to Generative AI).
Large Language Models (LLMs) are a subset of Deep Learning.
They can be pre-trained and then fine-tuned for specific purposes.
What do we mean by pre-trained and fine-tuned?
In everyday life we train dogs on basic commands such as sit, stand, and walk; this is basic training. But if we need a dog to serve as a police dog, it needs additional specialized training on top of the basics. This is the difference between pre-trained and fine-tuned.
Similar idea applies to LLMs.
LLMs are trained to solve common language problems such as document summarization, text classification, and text generation.
They can then be tailored to solve specific problems in the field of finance, retail etc.
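The pre-train-then-fine-tune idea can be sketched with a toy example. This is a hypothetical illustration, not a real training loop: the "model" is just a word-frequency counter, first built from general text ("pre-training") and then updated with a small amount of domain-specific finance text ("fine-tuning").

```python
# Toy sketch of pre-training vs fine-tuning (illustrative only; real LLM
# training is gradient-based, not word counting).
from collections import Counter

def train(model, text):
    """Update the model's word counts with new text."""
    model.update(text.lower().split())
    return model

# "Pre-training": large, general-purpose text.
model = train(Counter(), "the cat sat on the mat and the dog ran in the park")

# "Fine-tuning": a small amount of domain-specific (finance) text.
model = train(model, "the bond yield rose while the stock market fell")

# After fine-tuning, the model "knows" finance vocabulary it never saw
# during pre-training, while general words remain reinforced by both phases.
print(model["yield"])  # learned only during fine-tuning
print(model["the"])    # seen in both phases
```

The point mirrors the dog analogy: the second training phase is small and targeted, yet it adds capability the general phase never provided.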
Benefits of using LLMs:
- Single model can be used for various purposes
- Fine-tuning an LLM requires minimal domain-specific data
- Performance grows continuously as more data and parameters are added.
Types of LLMs:
- Generic (or raw) language model: predicts the next word based on its training data
- Instruction tuned: predicts a response to the instruction given in the input
- Dialog tuned: holds a dialog by predicting the next response