In deep learning, a common training strategy is extensive pre-training on a large-scale dataset, followed by fine-tuning on a specific downstream task. As models grow in size, full fine-tuning, i.e., updating all of the weights, becomes increasingly resource- and data-hungry.
Parameter-efficient fine-tuning (PEFT) methods address this challenge by updating only a small subset of the model's parameters.
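For instance, LoRA freezes the pre-trained weight matrix W and learns a low-rank update BA, so only the small factors A and B are trained. Below is a minimal PyTorch sketch of this idea; the LoRALinear wrapper, its initialization, and the hyper-parameters r and alpha are illustrative choices, not taken from any particular library:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA wrapper: output = W x + (alpha/r) * B A x, with W frozen."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        # Low-rank factors: A is small random, B starts at zero so the
        # adapted layer initially behaves exactly like the pre-trained one.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        # Frozen base projection plus the trainable low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```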
Target Group:
- Students in ICE, Computer Science, Software Engineering
Thesis Type:
- Master Project / Master Thesis
Goals and Tasks:
The goal of this work is to evaluate and compare the performance of PEFT methods when adapting pre-trained detection models (e.g., YOLO) to distribution shifts. The project includes the following tasks:
- In-depth literature research and understanding of existing PEFT methods (e.g., LoRA and GaLore);
- Implement PEFT methods for object detection models and evaluate their adaptation performance (see the sketch after this list);
- Optionally, implement mobile deployment using the LiteRT framework;
- Summarize the results in a written report.
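As a starting point for the implementation task, one possible way to retrofit an adapter across a pre-trained torch model is sketched below, reusing the LoRALinear wrapper from above; the helpers apply_lora and count_trainable are our own illustrative names, and integrating adapters into a specific YOLO codebase would require additional, model-specific work:

```python
def apply_lora(model: nn.Module, r: int = 8):
    """Recursively replace every nn.Linear in `model` with a LoRA-wrapped copy."""
    for name, child in model.named_children():
        if isinstance(child, nn.Linear):
            setattr(model, name, LoRALinear(child, r=r))
        else:
            apply_lora(child, r)

def count_trainable(model: nn.Module):
    """Report how few parameters remain trainable after adaptation."""
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    return trainable, total
```

After calling apply_lora, only the low-rank factors receive gradients, so count_trainable should report a small fraction of the full parameter count, which is precisely the efficiency that PEFT methods target.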
Required Prior Knowledge:
- Interest in exploring and analyzing the performance of PEFT methods for object detection models;
- Programming skills in Python;
- Prior experience with machine learning frameworks (e.g., TensorFlow, PyTorch).
Start:
Contact: