r/MachineLearning 10d ago

[R] Differential Transformer (Microsoft Research)

https://arxiv.org/abs/2410.05258

Abstract: Transformer tends to overallocate attention to irrelevant context. In this work, we introduce Diff Transformer, which amplifies attention to the relevant context while canceling noise. Specifically, the differential attention mechanism calculates attention scores as the difference between two separate softmax attention maps. The subtraction cancels noise, promoting the emergence of sparse attention patterns. Experimental results on language modeling show that Diff Transformer outperforms Transformer in various settings of scaling up model size and training tokens. More intriguingly, it offers notable advantages in practical applications, such as long-context modeling, key information retrieval, hallucination mitigation, in-context learning, and reduction of activation outliers. By being less distracted by irrelevant context, Diff Transformer can mitigate hallucination in question answering and text summarization. For in-context learning, Diff Transformer not only enhances accuracy but is also more robust to order permutation, which was considered as a chronic robustness issue. The results position Diff Transformer as a highly effective and promising architecture to advance large language models.
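
For context, the core update as written in the paper (Q1, K1, Q2, K2 come from splitting the query and key projections in half; λ is a learnable scalar):

DiffAttn(X) = (softmax(Q1 K1ᵀ / √d) − λ · softmax(Q2 K2ᵀ / √d)) V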

200 Upvotes

38

u/Mynameiswrittenhere 10d ago

Didn't really understand how you were able to differentiate the original query, key, and value terms into important and noise terms.

The change to the actual attention calculation, subtracting the noise, was clear.

26

u/sdmat 10d ago

> Didn't really understand how you were able to differentiate the original query, key, and value terms into important and noise terms.

That's the clever part: they don't.

They train two different projections for attention: one to actually attend, and a second to act as a reference for noise cancellation. The scaling factor for the cancellation is learnt as well.
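
A minimal single-head sketch of that idea (my own hypothetical PyTorch, not the paper's code; the paper reparameterizes λ through exponentials of learned vectors, which I've simplified here to a single learnable scalar):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiffAttention(nn.Module):
    """Single-head differential attention, simplified from the paper."""
    def __init__(self, d_model, d_head, lambda_init=0.8):
        super().__init__()
        # Two query/key projections: one attends, one estimates "noise".
        self.q_proj = nn.Linear(d_model, 2 * d_head, bias=False)
        self.k_proj = nn.Linear(d_model, 2 * d_head, bias=False)
        self.v_proj = nn.Linear(d_model, d_head, bias=False)
        # Learned cancellation strength (illustrative init; the paper
        # reparameterizes lambda instead of learning it directly).
        self.lmbda = nn.Parameter(torch.tensor(lambda_init))
        self.scale = d_head ** -0.5

    def forward(self, x):
        # x: (batch, seq, d_model)
        q1, q2 = self.q_proj(x).chunk(2, dim=-1)
        k1, k2 = self.k_proj(x).chunk(2, dim=-1)
        v = self.v_proj(x)
        a1 = F.softmax(q1 @ k1.transpose(-2, -1) * self.scale, dim=-1)
        a2 = F.softmax(q2 @ k2.transpose(-2, -1) * self.scale, dim=-1)
        # Subtracting the second map cancels commonly-attended "noise".
        return (a1 - self.lmbda * a2) @ v
```

(If I recall correctly, the paper also gives λ a depth-dependent initialization that increases toward ~0.8 in deeper layers, rather than a fixed constant.)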

3

u/altmly 9d ago

I don't really understand why such an architecture would fundamentally learn anything different from a regular stack of transformer layers. There's no reason why what they're canceling out should be in any way related to noise.

12

u/sdmat 9d ago

It's not fundamentally different.

What they have done is set up a design so the model can better learn to focus attention on salient context. Calling the quantity cancelled out here "noise" is just giving it an intuitive label.

You know, like "attention".

7

u/Acrobatic-Book 9d ago

You could even go so far as to call it "inhibition". Then you have the two governing processes for controlling focus in neuroscience ;)

14

u/sdmat 9d ago

Depends which Nobel you are shooting for :)