Benchmarking Transformer Models for the Classification of Gravitational-Wave Detector Glitches

ORAL

Abstract

Gravitational-wave observatories such as LIGO have transformed astrophysics by detecting ripples in spacetime from cataclysmic cosmic events. Yet their sensitivity is limited by transient noise artifacts, or glitches, which can mimic true signals in the data. Reliable image-based classification of these glitches is critical for improving data quality and enabling robust gravitational-wave discoveries. In this study, we investigate 19 glitch classes using spectrogram images generated from LIGO data. We present a benchmarking analysis of transformer-based deep learning architectures, including the Vision Transformer (ViT), Swin Transformer, and Data-efficient Image Transformer (DeiT), and compare their performance against convolutional neural network baselines. Our results demonstrate that transformers achieve competitive classification accuracy and F1 scores while offering advantages in capturing global spectral features. We also highlight the importance of dataset balance and augmentation strategies in improving model robustness. This work establishes transformers as promising candidates for glitch image classification and contributes toward more reliable gravitational-wave data analysis in the next generation of observatories.
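For readers unfamiliar with this kind of setup, the sketch below shows one way a pretrained Vision Transformer could be fine-tuned for 19-class glitch spectrogram classification in Python with the timm library. The dataset directory, model variant, and hyperparameters are illustrative assumptions for demonstration only and do not describe the authors' actual pipeline.

# Illustrative sketch: fine-tuning a pretrained ViT on glitch spectrograms.
# The data path, model choice, and hyperparameters are assumptions, not the
# authors' configuration.
import torch
import timm
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

NUM_CLASSES = 19  # number of glitch classes described in the abstract

# ImageNet-style preprocessing for 224x224 spectrogram images.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder of spectrogram images, one subdirectory per glitch class.
train_set = datasets.ImageFolder("glitch_spectrograms/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)

# Pretrained Vision Transformer with a new 19-way classification head.
model = timm.create_model("vit_base_patch16_224", pretrained=True,
                          num_classes=NUM_CLASSES)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5, weight_decay=0.05)
criterion = torch.nn.CrossEntropyLoss()

# Short illustrative training schedule.
model.train()
for epoch in range(5):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

Swin or DeiT variants could be benchmarked in the same way by swapping the model name passed to timm.create_model, keeping the rest of the loop unchanged.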

Presenters

  • Brian A Phillips

    Baylor University

Authors

  • Brian A Phillips

    Baylor University

  • Rudhresh Manoharan

    Baylor University

  • Gerald B. Cleaver

    Baylor University