Fine-tuning large models on local hardware — Benjamin Bossan

Conference: EuroPython 2024

Year: 2024

[EuroPython 2024 — Forum Hall, 2024-07-11] Fine-tuning large models on local hardware by Benjamin Bossan
https://ep2024.europython.eu/session/fine-tuning-large-models-on-local-hardware

Fine-tuning large neural networks such as Large Language Models (LLMs) has traditionally been prohibitive due to high hardware requirements. However, Parameter-Efficient Fine-Tuning (PEFT) and quantization enable the training of large models on modest hardware. Thanks to the PEFT library and the Hugging Face ecosystem, these techniques are now accessible to a broad audience.

Expect to learn:

- what the challenges are of fine-tuning large models
- what solutions have been proposed and how they work
- practical examples of applying the PEFT library

---

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License: https://creativecommons.org/licenses/by-nc-sa/4.0/
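The core idea behind parameter-efficient methods such as LoRA, which the PEFT library implements, can be sketched in plain Python: rather than updating a large weight matrix W directly, one trains two thin low-rank matrices A and B whose product serves as the update to the frozen W. The dimensions below are illustrative assumptions, not values from the talk.

```python
# Minimal sketch of the low-rank adaptation (LoRA) idea behind PEFT.
# All dimensions here are illustrative; real LLM layers vary.

d_in, d_out = 4096, 4096   # shape of a frozen pretrained weight matrix W
r = 8                      # low-rank bottleneck dimension

# Full fine-tuning trains every entry of W.
full_params = d_in * d_out

# LoRA freezes W and trains only A (d_in x r) and B (r x d_out);
# the effective layer weight becomes W + A @ B.
lora_params = d_in * r + r * d_out

print(f"full fine-tuning: {full_params:,} trainable parameters")
print(f"LoRA (r={r}):      {lora_params:,} trainable parameters")
print(f"reduction:        {full_params // lora_params}x fewer")
```

In the PEFT library itself, this is applied to a Hugging Face model via `LoraConfig` and `get_peft_model` rather than by hand; the snippet above only illustrates why the trainable-parameter count drops so sharply.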