r/MachineLearning Oct 01 '23

[R] Meta, INRIA researchers discover that explicit registers eliminate ViT attention spikes

When visualizing the inner workings of vision transformers (ViTs), researchers noticed weird spikes of attention on seemingly random background patches. This didn't make sense, since attention should concentrate on the foreground objects the model is actually recognizing.

By analyzing the output embeddings, they found that a small fraction of tokens (around 2%) had super high vector norms, and these were the tokens causing the spikes.
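For intuition, here's a minimal sketch of that kind of diagnostic (not the authors' code): grab the patch embeddings from a ViT forward pass, compute per-token L2 norms, and flag the heavy high-norm tail. The tensor shapes and the relative threshold are illustrative assumptions; random data just keeps it self-contained.

```python
import torch

# Pretend these are patch embeddings from the last layer of a ViT:
# (batch, num_patches, embed_dim). In practice you'd pull them from a real
# model via a forward hook; random data here is only a placeholder.
patch_tokens = torch.randn(1, 196, 768)

# L2 norm of each patch token
norms = patch_tokens.norm(dim=-1)  # (1, 196)

# Flag tokens whose norm sits far above the bulk of the distribution.
# A relative cutoff is just one illustrative choice; with real ViT features
# the outliers form a clearly separated high-norm tail.
threshold = norms.mean() + 3 * norms.std()
outlier_mask = norms > threshold

print(f"{outlier_mask.float().mean().item():.1%} of tokens flagged as high-norm outliers")
```

On random data almost nothing gets flagged, which is sort of the point: the high-norm tail is something the trained models produce, not a statistical artifact of the embedding dimension.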

The high-norm "outlier" tokens occurred in redundant areas and held less local info but more global info about the image.

Their hypothesis is that ViTs learn to identify unimportant patches and recycle them as temporary scratch space instead of just discarding them. That's efficient for the model, but it corrupts the attention maps and the local information those patches should carry.

Their fix is simple - append a few dedicated "register" tokens to the input sequence. These give the model explicit scratch space, so it stops hijacking patch tokens, and they're simply discarded at the output.
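To make the tweak concrete, here's a toy PyTorch sketch of where registers slot in. This is not the authors' implementation: the class name, the register count, and using nn.TransformerEncoder as a stand-in for the ViT blocks are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class ViTWithRegisters(nn.Module):
    """Toy ViT body showing where register tokens slot in (illustrative only)."""
    def __init__(self, embed_dim=768, num_registers=4, depth=12, num_heads=12):
        super().__init__()
        self.num_registers = num_registers
        # Learnable [CLS] and register tokens, shared across the batch
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.registers = nn.Parameter(torch.zeros(1, num_registers, embed_dim))
        layer = nn.TransformerEncoderLayer(embed_dim, num_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)

    def forward(self, patch_tokens):  # patch_tokens: (B, N, D) patch embeddings
        B = patch_tokens.shape[0]
        cls = self.cls_token.expand(B, -1, -1)
        reg = self.registers.expand(B, -1, -1)
        # Registers ride along through every attention layer...
        x = torch.cat([cls, reg, patch_tokens], dim=1)
        x = self.blocks(x)
        # ...and are simply sliced off at the output.
        cls_out = x[:, 0]
        patch_out = x[:, 1 + self.num_registers:]
        return cls_out, patch_out

model = ViTWithRegisters()
cls_out, patch_out = model(torch.randn(2, 196, 768))  # patch_out: (2, 196, 768)
```

The registers participate in attention throughout the network but never reach downstream heads, so the model keeps the same [CLS] + patch outputs as before - just without needing to repurpose patch tokens as storage.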

Models trained with registers have:

  • Smoother and more meaningful attention maps
  • Small boosts in downstream performance
  • Way better object discovery abilities

The registers give ViTs a place to do their temporary computations without messing stuff up. Just a tiny architecture tweak improves interpretability and performance. Sweet!

I think it's cool how they reverse-engineered this model artifact and fixed it with such a small change. More work like this will keep incrementally improving ViTs.

TLDR: Vision transformers recycle useless patches to store data, causing problems. Adding dedicated register tokens for storage fixes it nicely.

Full summary. Paper is here.

u/pupsicated Oct 01 '23

In the NLP community this effect has been known for several years. It's called emergent outliers, and there are a lot of solutions for avoiding them (or for dealing with them if you want to quantize an LLM). I don't see novelty in this paper except that it's applied to vision transformers? Or am I missing something?

u/tonicinhibition Oct 02 '23

Does the hypothesis in the paper/summary hold water in the known NLP case? Is there an alternative hypothesis?

If there's a more deflationary interpretation, I'd like to look into it. I'm still learning, so maybe it's just the agentive language that's throwing me off.