Finetune an LLM using Federated AI
This Blueprint demonstrates how to finetune LLMs using Flower, a framework for federated AI. As access to high-quality public datasets declines, federated AI enables multiple data owners to collaboratively fine-tune models without sharing raw data, preserving privacy while leveraging distributed datasets.
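At the heart of this setup is federated averaging (FedAvg), the default aggregation strategy in Flower: each data owner trains locally and only model updates, never raw data, are sent to the server, which combines them weighted by dataset size. The sketch below illustrates that aggregation step in plain NumPy; the weight arrays are hypothetical stand-ins for the adapter parameters each client would train.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of per-client parameter lists (FedAvg).

    Mirrors the aggregation a federated server performs: clients with
    more data contribute proportionally more to the global model.
    """
    total = sum(client_sizes)
    num_layers = len(client_weights[0])
    aggregated = []
    for layer in range(num_layers):
        # Scale each client's layer by its share of the total data, then sum.
        stacked = np.stack(
            [w[layer] * (n / total) for w, n in zip(client_weights, client_sizes)]
        )
        aggregated.append(stacked.sum(axis=0))
    return aggregated

# Two hypothetical clients with different local dataset sizes.
client_a = [np.array([[1.0, 2.0]]), np.array([0.5])]
client_b = [np.array([[3.0, 4.0]]), np.array([1.5])]
new_weights = fedavg([client_a, client_b], client_sizes=[100, 300])
print(new_weights[0])  # weighted 1/4 toward client_a, 3/4 toward client_b
```

In the actual Blueprint, Flower's server applies this kind of aggregation each round while the raw Alpaca-GPT4 examples stay on the clients.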
We apply parameter-efficient fine-tuning (PEFT) with LoRA adapters to fine-tune the Qwen2-0.5B-Instruct model on the Alpaca-GPT4 dataset. This approach optimizes resource efficiency while maintaining model adaptability, making it a practical solution for decentralized AI development.
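LoRA keeps the pretrained weight matrix frozen and learns a small low-rank update, so only a tiny fraction of parameters is trained and exchanged between clients. The NumPy sketch below shows the core arithmetic, W_eff = W + (alpha/r)·B·A; the dimensions and the rank/scaling values are illustrative assumptions, not Qwen2-0.5B-Instruct's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 512, 512    # frozen weight shape (illustrative, not the model's real dims)
r, alpha = 8, 16   # LoRA rank and scaling factor (typical values; assumptions)

W = rng.standard_normal((d, k))         # frozen pretrained weight, never updated
A = rng.standard_normal((r, k)) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))                    # B starts at zero, so W_eff == W at init

# Effective weight applied at inference time.
W_eff = W + (alpha / r) * (B @ A)

full_params = W.size
lora_params = A.size + B.size
print(f"trainable params: {lora_params} vs full: {full_params} "
      f"({100 * lora_params / full_params:.1f}%)")
```

Only A and B are trained, which is why the approach is practical for federated rounds: the update that travels over the network is a small fraction of the full weight matrix.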
Trusted open source tools used for this Blueprint

HuggingFace Transformers is used for model fine-tuning, and HuggingFace Datasets is used for loading the Alpaca-GPT4 dataset.
Insights into our motivations and key technical decisions throughout the development process.
System requirements:
- OS: Linux
- Python: 3.10 or higher
- Minimum RAM: 8 GB (recommended for LLM fine-tuning)
Detailed guidance on GitHub walks you through installing and running this project.
Get involved in improving the Blueprint by visiting its GitHub issues.
See examples of extended Blueprints that unlock new capabilities, and adjusted configurations that enable tailored solutions, or try it yourself.