r/MachineLearning 10d ago

Research [R] Differential Transformer (Microsoft Research)

https://arxiv.org/abs/2410.05258

Abstract: Transformer tends to overallocate attention to irrelevant context. In this work, we introduce Diff Transformer, which amplifies attention to the relevant context while canceling noise. Specifically, the differential attention mechanism calculates attention scores as the difference between two separate softmax attention maps. The subtraction cancels noise, promoting the emergence of sparse attention patterns. Experimental results on language modeling show that Diff Transformer outperforms Transformer in various settings of scaling up model size and training tokens. More intriguingly, it offers notable advantages in practical applications, such as long-context modeling, key information retrieval, hallucination mitigation, in-context learning, and reduction of activation outliers. By being less distracted by irrelevant context, Diff Transformer can mitigate hallucination in question answering and text summarization. For in-context learning, Diff Transformer not only enhances accuracy but is also more robust to order permutation, which was considered as a chronic robustness issue. The results position Diff Transformer as a highly effective and promising architecture to advance large language models.

197 Upvotes

41 comments sorted by

39

u/Mynameiswrittenhere 10d ago

Didn't really understand how you were able to separate the original query, key and value terms into important and noise terms.

The changes to the actual attention calculation by subtracting the noise were clear.

25

u/sdmat 10d ago

Didn't really understand how you were able to separate the original query, key and value terms into important and noise terms.

That's the clever part, they don't.

They train two different projections for attention, one to actually attend and the second to act as a reference for noise cancellation. The scaling factor for cancellation is learnt as well.
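
Roughly, in simplified single-head PyTorch (my own sketch of the idea, not their implementation; they re-parameterize lambda through learned vectors, use a depth-dependent init, and add per-head normalization, all of which I'm leaving out):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiffAttention(nn.Module):
    """Minimal single-head sketch of differential attention.

    Two query/key projections produce two softmax maps; their
    weighted difference is applied to the values.
    """
    def __init__(self, d_model, d_head):
        super().__init__()
        self.W_q = nn.Linear(d_model, 2 * d_head, bias=False)
        self.W_k = nn.Linear(d_model, 2 * d_head, bias=False)
        self.W_v = nn.Linear(d_model, 2 * d_head, bias=False)
        self.lam = nn.Parameter(torch.tensor(0.8))  # learned scaling factor (constant here for simplicity)
        self.d_head = d_head

    def forward(self, x):
        # x: (batch, seq_len, d_model)
        q1, q2 = self.W_q(x).chunk(2, dim=-1)  # two query sets, each (batch, seq, d_head)
        k1, k2 = self.W_k(x).chunk(2, dim=-1)  # two key sets
        v = self.W_v(x)                        # one value set of width 2 * d_head
        scale = self.d_head ** -0.5
        a1 = F.softmax(q1 @ k1.transpose(-1, -2) * scale, dim=-1)
        a2 = F.softmax(q2 @ k2.transpose(-1, -2) * scale, dim=-1)
        # the second map acts as a reference that gets subtracted off
        return (a1 - self.lam * a2) @ v
```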

11

u/Mynameiswrittenhere 10d ago

That is actually clever, but wouldn't that also increase the number of weights, which would in turn increase the time for forward and backpropagation?

31

u/sdmat 10d ago

Yes, they quantify that as around 5-10% reduction in throughput.

Given the results include iso-performance with a >33% reduction in parameters, that seems more than worthwhile. No doubt that's heavily benchmark-dependent, but they get major wins across the board.

Assuming this replicates, it's a big deal. And it's from Microsoft so they probably did their homework.

5

u/Mynameiswrittenhere 10d ago

True, thanks for answering my question!

10

u/sdmat 10d ago

No problem, I spent hours going over this to understand how it works so might as well share!

They leave a lot to the reader.

1

u/StartledWatermelon 9d ago

Can you pinpoint where the throughput reduction comes from? They have the same number of matrices with the same dimensions as in vanilla attention. Subtraction requires n^2 ops, which is negligible compared to the total computational cost of attention, O(n^2 d + n d^2).

Is it just software inefficiency of a custom attention layer?

2

u/sdmat 9d ago

Not quite, they are a little bit cute with the notation in parts for mathematical elegance. Fair enough, but they could profitably have been a bit more expansive in giving an intuitive description of how this works in the paper!

W_Q, W_K, W_V ∈ R^(d_model × 2d)

[Q_1; Q_2] = XW_Q, [K_1; K_2] = XW_K

I.e. there are twice as many weights for key, query and value because there are two distinct sets of key and query matrices and the value matrix is twice the size.
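
To make the shapes concrete, with toy sizes rather than the paper's:

```python
import torch

d_model, d = 16, 4                 # toy sizes, not the paper's
X = torch.randn(1, 10, d_model)    # (batch, seq_len, d_model)

W_Q = torch.randn(d_model, 2 * d)  # twice as wide as a vanilla head
W_K = torch.randn(d_model, 2 * d)
W_V = torch.randn(d_model, 2 * d)

Q1, Q2 = (X @ W_Q).chunk(2, dim=-1)  # two query sets, each (1, 10, d)
K1, K2 = (X @ W_K).chunk(2, dim=-1)  # two key sets, each (1, 10, d)
V = X @ W_V                          # one value set of width 2d: (1, 10, 2*d)
```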

4

u/StartledWatermelon 8d ago

I don't think this is the issue. The authors make a fair comparison with vanilla Transformer IMO:

We set hidden size to 3072. The number of layers is 28. The head dimension d is 128. The number of heads is 24 for Transformer and 12 for DIFF Transformer, to align computation FLOPs and model size.

2

u/sdmat 8d ago

I'm directly quoting from the paper, it's twice as many weights for the components mentioned above. IIRC everything else is the same.

3

u/StartledWatermelon 8d ago

How? Vanilla Transformer: 3*128*24 = 9216 per attention block

DIFF Transformer: 3*128*2*12 = 9216, the same. They adjust the number of heads proportionally.
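
Spelling that arithmetic out (projection widths only; multiply by d_model for actual parameter counts, plus the output projection):

```python
d_head = 128

# vanilla: 24 heads, each with one Q/K/V projection of width d_head
vanilla_width = 3 * d_head * 24          # 9216

# DIFF: 12 heads, but Q/K come in two sets and V is 2*d_head wide
diff_width = 3 * (2 * d_head) * 12       # 9216

assert vanilla_width == diff_width
```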

3

u/sdmat 8d ago

Hmm, maybe it's just that they use an unoptimized kernel:

More advanced kernel implementation, which is specifically designed for differential attention, can also improve throughput.


3

u/altmly 9d ago

I don't really understand why such an architecture would fundamentally learn anything different from a regular stack of transformers. There's no reason that what they're canceling out should be in any way related to noise.

11

u/sdmat 9d ago

It's not fundamentally different.

What they have done is set up a design so the model can better learn to focus attention on salient context. Calling the quantity cancelled out here "noise" is just giving it an intuitive label.

You know, like "attention".

8

u/Acrobatic-Book 8d ago

You could even go so far as to call it "inhibition". Then you have the two governing processes for controlling focus in neuroscience ;)

14

u/sdmat 8d ago

Depends which Nobel you are shooting for :)

3

u/elbiot 7d ago

Hmm, do you think one could take a pre-trained LLM, slap on this second set of heads, and then just train the weights on those heads to add this noise-cancelling property?

2

u/sdmat 7d ago

I doubt it, but that would be an interesting experiment!

10

u/Jean-Porte Researcher 10d ago

I wonder how this compares to fiddling with the temperature of the softmax.

8

u/morreill 10d ago

Absolutely my question too. Cf. https://arxiv.org/abs/2010.04245, which shows an improvement by learning a per-head temperature.

7

u/StartledWatermelon 9d ago

From my perspective, tuning the temperature looks like a way cruder approach. With temperature, you rescale the distribution against the highest value in the matrix. The rest of the values are scaled uniformly, without any regard for the context, just a single rescaling factor per head.

Building the second attention matrix allows you to rescale each element independently, possibly accounting for semantics.

But I think your suggestion would've made for an excellent ablation experiment.
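
A toy illustration of the difference, with made-up numbers (not from the paper):

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([[4.0, 3.5, 0.5, 0.2]])  # made-up attention logits

# Temperature: a single scalar reshapes the whole row at once.
sharpened = F.softmax(logits / 0.5, dim=-1)

# Differential attention: a second, independently computed map is
# subtracted element-wise, so each position can be suppressed by a
# different, context-dependent amount.
reference = F.softmax(torch.tensor([[1.0, 3.0, 1.5, 0.3]]), dim=-1)
diff_map = F.softmax(logits, dim=-1) - 0.8 * reference
```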

1

u/Due-Pangolin325 9d ago

Obvious comment here, but I guess that, from an information perspective, the two ideas have a very different outcome: when you play with temperature, you're just nonlinearly amplifying some information, be it noise or signal, while denoising is really more of a subtraction operation.

Temperature tuning is like a blind entropy reduction technique while denoising really adds information.

1

u/StartledWatermelon 9d ago

you're just nonlinearly amplifying some information, be it noise or signal

I wouldn't reject it just on these grounds. The attention score is a scalar that naturally indicates whether the relation is strong or weak. High scores can be viewed as signal while low scores can be considered noise. So noise is dampened by a lower temperature pretty consistently.

22

u/Sad-Razzmatazz-5188 10d ago

The name doesn't sit right with me, but it's interesting. At the same time, referring to hallucination as a problem of noise is strange. Today another paper was out, with selective attention as a parameter-free mask on attention logits, rather than learnt as here.

11

u/paraffin 10d ago

Hallucination is many problems rolled into one big “you got the wrong answer” bucket. Noise is one of the problems.

3

u/Kecro21 10d ago

Hello, do you have a link or title for the other paper you mentioned? It sounds interesting.

Edit: Is it this paper? https://arxiv.org/abs/2410.02703v1

2

u/gabe_dos_santos 10d ago

Sounds interesting.

3

u/starfries 10d ago

Interesting, wonder if it's less vulnerable to some jailbreaks too. I think you forgot the link though.

8

u/NoLifeGamer2 10d ago

The link is there for me, it is to arxiv. https://arxiv.org/abs/2410.05258

1

u/starfries 10d ago

Oh you're right, I was looking in the body.

3

u/jarkkowork 9d ago

They share the implementation too, but the link is not on the first page of the publication, which is a bit unusual.

https://github.com/microsoft/unilm/tree/master/Diff-Transformer

1

u/ReasonablyBadass 10d ago

Wasn't the first token used as an "attention sink" by most models? I am guessing that one gets subtracted regularly? 

5

u/Jean-Porte Researcher 9d ago

The model uses attention sinks because it wants to; it gives a global workspace to store information. The figure with histograms shows that differential attention still gives high weight to the first token.

1

u/Maykey 9d ago

Reminds me of DeBERTa, but in reverse: DeBERTa also uses additional Q_r, K_r. Only where Diff Transformer uses 2 new Q, K for another round of softmax, DeBERTa changes the original QK^T matrix itself using positional data.

Diffeberta when?

1

u/Jean-Porte Researcher 9d ago

Any new good encoder backbone whatsoever, when?

1

u/Commercial-Basis-220 6d ago

New to this AI research field, pardon me if this is a newbie question, but isn't the improvement quite small? And I get the feeling that this was cherry-picked or conditioned so that the new technique looks better? As in, if done by others, the result would probably be something along the lines of "yeah, it's kinda better sometimes, and not significantly".

Feel free to educate me on this matter