arXiv:2506.22809v3 Announce Type: replace
Abstract: Low-rank adaptation methods enable efficient task-specific updates in large neural networks but provide no principled mechanism for uncertainty estimation or capacity control. We introduce Low-Rank Variational Dropout (LRVD), a Bayesian framework that operates directly in the space of low-rank adaptation. LRVD employs a scale-invariant, sparsity-inducing prior together with a structured variational family that ties uncertainty at the level of latent rank components, inducing rank-wise noise-to-signal ratios for automatic capacity selection. As a concrete instantiation, applying LRVD to LoRA yields BayesLoRA, which jointly learns predictive uncertainty and the effective adapter rank with only O(r) additional parameters, where r is the adapter rank. We show empirically that BayesLoRA induces stable, non-arbitrary rank structure aligned with the intrinsic singular directions of the learned updates, and that it outperforms existing low-rank sparsification methods in accuracy at comparable training cost while delivering substantially better predictive calibration with negligible additional overhead.
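
The abstract gives no implementation details, so the PyTorch sketch below is only a hypothetical illustration of the mechanism it describes: a LoRA-style update B A whose r latent rank components each carry a single multiplicative Gaussian noise variance (the O(r) extra parameters), with the standard variational-dropout KL approximation of Molchanov et al. (2017) standing in for the paper's actual prior and variational family. The class and method names (BayesLoRALinear, kl, effective_rank) and the pruning threshold are assumptions, not the authors' API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BayesLoRALinear(nn.Module):
    """Hypothetical sketch of a BayesLoRA-style layer (not the authors' code).

    Frozen base weight W plus a low-rank update B @ A. Each of the r rank
    components carries one Gaussian noise variance (O(r) extra parameters),
    giving a rank-wise noise-to-signal ratio alpha_i that drives both the KL
    regularizer and post-hoc rank selection.
    """

    def __init__(self, in_features: int, out_features: int, rank: int = 8):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(out_features, in_features) * 0.02,
            requires_grad=False,  # frozen pretrained weight
        )
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.02)
        self.B = nn.Parameter(torch.zeros(out_features, rank))
        # One log-variance per rank component: the O(r) variational parameters.
        # For unit-mean multiplicative noise, alpha_i = sigma_i^2.
        self.log_sigma2 = nn.Parameter(torch.full((rank,), -6.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        base = F.linear(x, self.weight)
        z = F.linear(x, self.A)  # (..., r) latent rank activations
        if self.training:
            # Multiplicative Gaussian noise tied per rank component, so all
            # weights feeding one component share a noise-to-signal ratio.
            eps = torch.randn_like(z)
            z = z * (1.0 + self.log_sigma2.exp().sqrt() * eps)
        return base + F.linear(z, self.B)

    def kl(self) -> torch.Tensor:
        # Molchanov et al. (2017) approximation to the variational-dropout
        # -KL, applied per rank component; log_alpha == log_sigma2 here.
        log_alpha = self.log_sigma2
        k1, k2, k3 = 0.63576, 1.87320, 1.48695
        neg_kl = (
            k1 * torch.sigmoid(k2 + k3 * log_alpha)
            - 0.5 * F.softplus(-log_alpha)  # = 0.5 * log(1 + alpha^-1)
            - k1
        )
        return -neg_kl.sum()

    def effective_rank(self, log_alpha_thresh: float = 3.0) -> int:
        # Keep rank components whose noise-to-signal ratio stays low;
        # components above the threshold are effectively dropped.
        return int((self.log_sigma2 < log_alpha_thresh).sum())
```

In such a setup, training would add layer.kl(), scaled by one over the dataset size, to the task loss; after training, rank components whose log alpha exceeds the threshold can be removed by dropping the corresponding rows of A and columns of B, which is one plausible reading of the automatic capacity selection the abstract claims.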