Instance segmentation using adapter-finetuned foundation models
AI research and applications have experienced a paradigm shift with the emergence of foundation models. Although these models deliver state-of-the-art results across a broad spectrum of tasks, their rapid growth in size necessitates methods for parameter-efficient training. Adapter modules offer a promising and compact strategy here, achieving strong performance while adding only a small number of parameters per task. This thesis proposes the "repetitive adapter module", designed to add linear scalability to the traditional bottleneck adapter architecture. By integrating these modules into the foundation models SEEM and Mask DINO and applying them to three downstream tasks, the approach is shown to come close to the effectiveness of traditional fine-tuning while requiring significantly fewer trainable parameters. Furthermore, the thesis demonstrates the applicability of repetitive adapters within the modular meta-architecture underlying SEEM and Mask DINO, showing that their effectiveness holds regardless of model size and multi-modality. The investigation also explores the limitations of adapter fine-tuning, laying the groundwork for future research in this domain.
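This entry does not include the thesis's code, so the following is only a minimal PyTorch sketch of what a "repetitive adapter" might look like, assuming it stacks the standard bottleneck adapter design (down-projection, nonlinearity, up-projection, residual connection) a configurable number of times to obtain the linear scalability the abstract mentions. The class name `RepetitiveAdapter` and the parameters `bottleneck_dim` and `repeats` are hypothetical illustrations, not the author's published implementation.

```python
import torch
import torch.nn as nn


class RepetitiveAdapter(nn.Module):
    """Hypothetical sketch of a repeated bottleneck adapter.

    Each block is a standard bottleneck adapter (down-project,
    nonlinearity, up-project) applied residually to the frozen
    backbone's hidden states. The `repeats` knob stacks this
    block to scale adapter capacity linearly.
    """

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64, repeats: int = 2):
        super().__init__()
        self.blocks = nn.ModuleList(
            [
                nn.Sequential(
                    nn.Linear(hidden_dim, bottleneck_dim),
                    nn.GELU(),
                    nn.Linear(bottleneck_dim, hidden_dim),
                )
                for _ in range(repeats)
            ]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Each repetition refines the residual update; only the
        # adapter weights are trained, the backbone stays frozen.
        for block in self.blocks:
            x = x + block(x)
        return x


if __name__ == "__main__":
    # Usage: insert after a frozen transformer layer's output.
    adapter = RepetitiveAdapter(hidden_dim=256, bottleneck_dim=64, repeats=3)
    features = torch.randn(4, 100, 256)  # (batch, tokens, hidden_dim)
    print(adapter(features).shape)  # torch.Size([4, 100, 256])
```

Because each repetition adds one fixed-size bottleneck, the trainable parameter count grows linearly in `repeats` while the frozen backbone is untouched, which is consistent with the parameter-efficiency argument made in the abstract.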
| Author: | David Rohrschneider |
| --- | --- |
| URN: | urn:nbn:de:hbz:1393-opus4-14447 |
| Document Type: | Master's Thesis |
| Language: | German |
| Year of Completion: | 2024 |
| Date of final exam: | 2024/03/18 |
| Release Date: | 2024/09/30 |
| Institutes: | Fachbereich 1 - Institut Informatik |
| DDC class: | 300 Social sciences / 330 Economics |
| Licence (German): | No Creative Commons |