Episode notes
This paper proposes a new method for fine-tuning large language models (LLMs) called Aligned Supervised Fine-Tuning (ASFT). ASFT addresses limitations of existing Direct Preference Optimization (DPO) methods by optimizing the absolute likelihood of generating human-preferred responses rather than relying on relative likelihoods. Unlike DPO, ASFT does not require a reference model and is less sensitive to the initial state of the model, leading to more efficient and robust training. The authors demonstrate the effectiveness of ASFT through extensive experiments on various benchmark datasets, showing significant performance improvements compared to existing methods.
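To make the contrast concrete, here is a minimal, hedged sketch in Python/PyTorch. The `dpo_loss` function is the standard DPO objective (relative likelihood against a frozen reference model); the `absolute_likelihood_loss` function is only an illustrative, reference-free stand-in for the absolute-likelihood idea described above, not the paper's exact ASFT formulation, and both function names are hypothetical.

```python
# Illustrative sketch: DPO's relative-likelihood objective vs. a
# reference-free, absolute-likelihood objective in the spirit of ASFT.
# The ASFT variant here is an assumption for illustration, not the
# paper's exact loss.
import torch
import torch.nn.functional as F


def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO: optimizes the *relative* likelihood of the preferred
    response, measured against a frozen reference model."""
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()


def absolute_likelihood_loss(policy_chosen_logps: torch.Tensor,
                             policy_rejected_logps: torch.Tensor,
                             beta: float = 0.1) -> torch.Tensor:
    """Hypothetical reference-free sketch: pushes up the absolute
    log-likelihood of the preferred response and pushes down that of the
    rejected one, with no reference model in the objective."""
    return (-F.logsigmoid(beta * policy_chosen_logps)
            - F.logsigmoid(-beta * policy_rejected_logps)).mean()


if __name__ == "__main__":
    # Toy sequence-level log-probabilities for a batch of preference pairs.
    pol_w = torch.tensor([-12.0, -9.5])   # policy log p(y_w | x)
    pol_l = torch.tensor([-15.0, -14.0])  # policy log p(y_l | x)
    ref_w = torch.tensor([-13.0, -10.0])  # reference log p(y_w | x)
    ref_l = torch.tensor([-14.5, -13.0])  # reference log p(y_l | x)

    print("DPO loss:", dpo_loss(pol_w, pol_l, ref_w, ref_l).item())
    print("Absolute-likelihood loss:",
          absolute_likelihood_loss(pol_w, pol_l).item())
```

Note how the second objective never touches the reference log-probabilities, which is the practical payoff the notes describe: no frozen reference model to keep in memory, and a loss that depends only on the policy's own likelihoods rather than on how far it has drifted from its starting point.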
Keywords
Large Language Models, DPO