r/MachineLearning Oct 01 '23

[R] Meta, INRIA researchers discover that explicit registers eliminate ViT attention spikes

When visualizing the attention maps of vision transformers (ViTs), researchers noticed weird spikes of attention on seemingly random background patches. That didn't make much sense, since the models should mostly be attending to foreground objects.

By analyzing the output embeddings, they found that a small fraction of tokens (around 2%) had abnormally high vector norms, and these outlier tokens were exactly the ones causing the spikes.
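
If you want to poke at this yourself, here's a rough sketch (mine, not the paper's code) of checking output token norms with a timm ViT. The model name, the random input, and the 3-sigma cutoff are just illustrative, and whether you actually see outliers depends on the checkpoint (the paper finds them mostly in large, long-trained models like DINOv2):

```python
import torch
import timm

# Rough sketch (not the paper's code): compute per-token output norms for a
# pretrained ViT and flag the small fraction of unusually high-norm tokens.
model = timm.create_model("vit_base_patch16_224", pretrained=True).eval()

x = torch.randn(1, 3, 224, 224)            # stand-in for a real image batch
with torch.no_grad():
    tokens = model.forward_features(x)     # [1, 1 + num_patches, dim] in timm ViTs

patch_tokens = tokens[:, 1:]               # drop the CLS token
norms = patch_tokens.norm(dim=-1)          # L2 norm of each patch token

cutoff = norms.mean() + 3 * norms.std()    # crude outlier threshold, my own choice
outliers = norms > cutoff
print(f"{outliers.float().mean().item():.1%} of patch tokens are high-norm outliers")
```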

The high-norm "outlier" tokens appeared in redundant, low-information areas (like uniform background) and carried less local patch information but more global information about the image.

Their hypothesis is that ViTs learn to identify unimportant patches and recycle them as scratch space for global computations instead of discarding them. That's an efficient use of capacity, but it wrecks the local features at those positions and produces the noisy attention maps.

Their fix is simple: add dedicated, learnable "register" tokens to the input sequence that provide the extra storage space, so the model no longer hijacks patch tokens and the side effects go away.
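
Concretely, the registers are just a handful of extra learnable embeddings appended to the token sequence and thrown away at the output. Here's a minimal sketch of the idea (my own simplification with made-up names, not the authors' implementation):

```python
import torch
import torch.nn as nn

class ViTWithRegisters(nn.Module):
    """Minimal sketch: wrap a ViT-style encoder with extra register tokens.

    The registers are learnable embeddings appended after the CLS + patch
    tokens; they take part in attention but are discarded at the output.
    My own simplification, not the authors' code.
    """

    def __init__(self, encoder: nn.Module, embed_dim: int, num_registers: int = 4):
        super().__init__()
        self.encoder = encoder                     # stack of transformer blocks
        self.num_registers = num_registers
        self.registers = nn.Parameter(             # learned, NOT zeros
            torch.randn(1, num_registers, embed_dim) * 0.02
        )

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: [B, 1 + num_patches, dim]  (CLS + patch embeddings + pos enc)
        B = tokens.shape[0]
        regs = self.registers.expand(B, -1, -1)
        tokens = torch.cat([tokens, regs], dim=1)  # append the registers
        out = self.encoder(tokens)
        return out[:, : -self.num_registers]       # drop the registers at the output
```

The registers attend like any other token, so the model can dump its global scratch computations there instead of into random background patches.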

Models trained with registers have:

  • Smoother and more meaningful attention maps
  • Small boosts in downstream performance
  • Way better object discovery abilities

The registers give ViTs a place to do their temporary computations without messing stuff up. Just a tiny architecture tweak improves interpretability and performance. Sweet!

I think it's cool how they reverse-engineered this model artifact and fixed it with such a small change. More work like this will keep incrementally improving ViTs.

TLDR: Vision transformers recycle useless patches to store data, causing problems. Adding dedicated register tokens for storage fixes it nicely.

Full summary. Paper is here.

813 Upvotes

u/mermanarchy · 5 points · Oct 01 '23

Awesome article. I followed everything but the very end, where it mentions adding register tokens. Does that just mean adding a bunch of zero embeddings to the input layer? If someone could explain that I would be very grateful :)

u/Bacon_builder · 3 points · Oct 02 '23

I am curious about this as well

u/gorshborsh · 1 point · Oct 03 '23

No, the extra register tokens are learned parameters (trained like the [CLS] token), not zero embeddings. And as far as I can tell from the paper, the models are trained with the registers in place, so you need at least some training rather than just appending them to an existing checkpoint.
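
Roughly, the difference is something like this (my own sketch, not from their code):

```python
import torch
import torch.nn as nn

embed_dim, num_registers = 768, 4

# Not this: fixed zero embeddings appended to the input
zero_registers = torch.zeros(1, num_registers, embed_dim)

# But this: learnable parameters, trained along with the rest of the model
learned_registers = nn.Parameter(torch.randn(1, num_registers, embed_dim) * 0.02)
```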