Thursday, August 17, 2023

A short explanation of PEFT: Parameter-Efficient Fine-Tuning

Many pretrained large language models are available for us to use. However, they may not be accurate for our specific task, so the model needs fine-tuning.

Since the model is large, the idea is to make a copy of the existing model, mark only a small percentage of its parameters as trainable, and then train that copy on your own data.
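Here is a minimal sketch of this idea with the Hugging Face peft library (the model name, rank, and other hyperparameters below are illustrative assumptions, not recommendations):

    # Assumes: pip install transformers peft
    from transformers import AutoModelForSeq2SeqLM
    from peft import LoraConfig, get_peft_model, TaskType

    # Load a pretrained seq2seq model (flan-t5-base is just an example).
    model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

    lora_config = LoraConfig(
        task_type=TaskType.SEQ_2_SEQ_LM,  # tells peft what kind of model to expect
        r=8,               # rank of the low-rank update matrices
        lora_alpha=32,     # scaling factor for the update
        lora_dropout=0.05,
    )

    # Wrap the model; only the small LoRA matrices are trainable.
    peft_model = get_peft_model(model, lora_config)
    peft_model.print_trainable_parameters()

The print_trainable_parameters() call reports how small the trainable fraction is, typically well under one percent of the full model.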




Note that the library does not work with just any model you created: the task_type=TaskType.SEQ_2_SEQ_LM parameter in LoraConfig sets an expectation about the model's architecture (here, a sequence-to-sequence language model).

LoRA adjusts the weights by adding low-rank matrices to the existing weight matrices: the product of two small matrices forms a full-size update, which is a trick to represent a large matrix with only a small number of parameters (I explained it earlier in this post). Since only a small percentage of the parameters are trainable, training is relatively fast.
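
To make the trick concrete, here is a small numpy sketch of the low-rank update (the dimensions and rank are assumptions chosen for illustration):

    import numpy as np

    d, k, r = 512, 512, 8            # weight matrix shape, and a low rank r

    W = np.random.randn(d, k)        # frozen pretrained weight matrix
    B = np.zeros((d, r))             # trainable, initialized to zero
    A = np.random.randn(r, k) * 0.01 # trainable, small random init

    delta_W = B @ A                  # full d x k update built from B and A
    W_adapted = W + delta_W          # the summation that LoRA applies

    print(W.size)                    # 262144 frozen parameters
    print(B.size + A.size)           # 8192 trainable parameters (~3%)

So training updates only B and A (8,192 values here) instead of all 262,144 entries of W, which is why LoRA is parameter efficient.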


This video explains how LoRA training works internally:
https://www.coursera.org/learn/generative-ai-with-llms/lecture/NZOVw/peft-techniques-1-lora



