Instance segmentation using adapter-finetuned foundation models

  • AI research and applications have experienced a paradigm shift with the emergence of foundation models. Although these models deliver state-of-the-art results across a broad spectrum of tasks, their exponential growth in size necessitates methods for parameter-efficient training. To address this, adapter modules offer a promising and compact strategy, achieving remarkable performance while adding only a minimal number of parameters per task. This thesis proposes the "repetitive adapter module", designed to add a layer of linear scalability to the traditional bottleneck architecture. By integrating these modules into the foundation models SEEM and Mask DINO and applying them to three downstream tasks, the approach is shown to nearly match the effectiveness of traditional fine-tuning while requiring significantly fewer trainable parameters. Furthermore, the thesis demonstrates the applicability of repetitive adapters within the modular meta-architecture underlying SEEM and Mask DINO, showing that they remain effective regardless of model size and multi-modality. The investigation also explores the limitations of adapter fine-tuning, laying the groundwork for future research in this domain.
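The thesis's implementation is not published on this page. As a rough illustration of the bottleneck adapter concept it builds on, the PyTorch sketch below shows a classic bottleneck adapter (down-projection, nonlinearity, up-projection, residual) and a hypothetical "repetitive" variant that stacks such bottlenecks so the trainable parameter count scales linearly with the repeat count. All names and hyperparameters here (BottleneckAdapter, RepetitiveAdapter, d_model, d_bottleneck, n_repeats) are illustrative assumptions, not the thesis's actual design.

    import torch
    import torch.nn as nn

    class BottleneckAdapter(nn.Module):
        """Classic bottleneck adapter: down-project, nonlinearity,
        up-project, residual connection."""
        def __init__(self, d_model: int, d_bottleneck: int):
            super().__init__()
            self.down = nn.Linear(d_model, d_bottleneck)
            self.up = nn.Linear(d_bottleneck, d_model)
            self.act = nn.GELU()
            # Near-identity initialization: the frozen backbone's
            # behavior is unchanged at the start of training.
            nn.init.zeros_(self.up.weight)
            nn.init.zeros_(self.up.bias)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return x + self.up(self.act(self.down(x)))

    class RepetitiveAdapter(nn.Module):
        """Hypothetical 'repetitive' variant: n stacked bottlenecks,
        so trainable parameters grow linearly with n_repeats while
        d_bottleneck stays fixed."""
        def __init__(self, d_model: int, d_bottleneck: int, n_repeats: int):
            super().__init__()
            self.blocks = nn.ModuleList(
                BottleneckAdapter(d_model, d_bottleneck)
                for _ in range(n_repeats)
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            for block in self.blocks:
                x = block(x)
            return x

    # Usage: insert after a frozen transformer sub-layer and train
    # only the adapter's parameters.
    adapter = RepetitiveAdapter(d_model=256, d_bottleneck=64, n_repeats=2)
    hidden = torch.randn(1, 100, 256)  # (batch, tokens, d_model)
    out = adapter(hidden)              # same shape as the input

In adapter fine-tuning of this style, the backbone weights stay frozen and only the small adapter modules are updated, which is what keeps the trainable parameter count low relative to full fine-tuning.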

Metadata
Author: David Rohrschneider
URN: urn:nbn:de:hbz:1393-opus4-14447
Document Type: Master's Thesis
Language: German
Year of Completion: 2024
Date of final exam: 2024/03/18
Release Date: 2024/09/30
Institutes: Fachbereich 1 - Institut Informatik
DDC class: 300 Social sciences / 330 Economics
Licence (German): No Creative Commons