DisenQ: Disentangling Q-Former for Activity-Biometrics

Center for Research in Computer Vision, University of Central Florida.

ICCV 2025 Highlight ⭐️

Scaling person identification beyond walking to diverse real-world activities, enabling practical recognition in surveillance, healthcare, and beyond.

Abstract

In this work, we address activity-biometrics, which involves identifying individuals across a diverse set of activities. Unlike traditional person identification, this setting introduces additional challenges as identity cues become entangled with motion dynamics and appearance variations, making biometric feature learning more complex. While additional visual data such as pose and/or silhouettes can help, they often suffer from extraction inaccuracies. To overcome this, we propose a multimodal language-guided framework that replaces reliance on additional visual data with structured textual supervision. At its core, we introduce DisenQ (Disentangling Q-Former), a unified querying transformer that disentangles biometrics, motion, and non-biometrics features by leveraging structured language guidance. This ensures identity cues remain independent of appearance and motion variations, preventing misidentifications. We evaluate our approach on three activity-based video benchmarks, achieving state-of-the-art performance. Additionally, we demonstrate strong generalization to complex real-world scenarios with competitive performance on a traditional video-based identification benchmark, showing the effectiveness of our framework.
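For intuition, below is a minimal PyTorch-style sketch of the disentangling idea: three learnable query banks (biometrics, motion, non-biometrics) cross-attend to frozen visual features through a shared transformer decoder, yielding separate token sets per factor. All names, dimensions, and layer choices here are illustrative assumptions, not the paper's actual implementation; the structured language guidance (e.g., aligning each query group with its own text embedding) is omitted for brevity.

import torch
import torch.nn as nn

class DisenQSketch(nn.Module):
    """Illustrative sketch (not the authors' code): one learnable query
    bank per factor cross-attends to frozen visual features through a
    shared transformer decoder, producing disentangled token sets."""

    def __init__(self, dim=768, n_queries=32, n_layers=4, n_heads=8):
        super().__init__()
        # One learnable query bank per factor to be disentangled.
        self.queries = nn.ParameterDict({
            name: nn.Parameter(torch.randn(n_queries, dim) * 0.02)
            for name in ("bio", "motion", "nonbio")
        })
        layer = nn.TransformerDecoderLayer(
            d_model=dim, nhead=n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)

    def forward(self, visual_feats):
        # visual_feats: (B, N, dim) frozen patch/frame features.
        B = visual_feats.size(0)
        out = {}
        for name, q in self.queries.items():
            q = q.unsqueeze(0).expand(B, -1, -1)       # (B, n_queries, dim)
            out[name] = self.decoder(q, visual_feats)  # cross-attend to visuals
        return out  # disentangled 'bio', 'motion', 'nonbio' tokens

# Quick shape check of the sketch:
model = DisenQSketch()
feats = torch.randn(2, 196, 768)  # e.g., ViT patch features for 2 clips
tokens = model(feats)
print({k: v.shape for k, v in tokens.items()})

In the paper's full framework, each query group would additionally be supervised by structured text so that the "bio" tokens carry identity cues while appearance and motion variations are absorbed by the other two groups.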

Current Challenges and Our Solution

Method

Quantitative Analysis

Ablation Studies

The ablation shows step-wise gains, addressing each challenge one by one.

Qualitative Analysis

BibTeX

@inproceedings{azad2025disenq,
  title     = {{DisenQ}: Disentangling {Q-Former} for Activity-Biometrics},
  author    = {Azad, Shehreen and Rawat, Yogesh S},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2025}
}