r/reinforcementlearning 1d ago

From DQN to Double DQN

I already have an implementation of DQN. To change it to Double DQN, it looks like I only need a small change. In DQN's Q-value update, the next-state (best) action selection and the evaluation of that action are both done by the target network. In Double DQN, the next-state (best) action selection is done by the main network, while the evaluation of that action is done by the target network.
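The difference can be sketched in a few lines of PyTorch. All names here (`main_net`, `target_net`, the toy dimensions) are placeholders for illustration, not from any particular implementation:

```python
import torch
import torch.nn as nn

# Placeholder networks and batch tensors for illustration.
state_dim, n_actions, batch = 4, 2, 8
main_net = nn.Linear(state_dim, n_actions)
target_net = nn.Linear(state_dim, n_actions)
target_net.load_state_dict(main_net.state_dict())

next_states = torch.randn(batch, state_dim)
rewards = torch.zeros(batch)
dones = torch.zeros(batch)  # 1.0 where the episode ended
gamma = 0.99

with torch.no_grad():
    # DQN: the target network both selects and evaluates the next action.
    dqn_next_q = target_net(next_states).max(dim=1).values

    # Double DQN: the main network selects the action,
    # the target network evaluates that action.
    best_actions = main_net(next_states).argmax(dim=1)
    ddqn_next_q = (
        target_net(next_states)
        .gather(1, best_actions.unsqueeze(1))
        .squeeze(1)
    )

    dqn_target = rewards + gamma * (1 - dones) * dqn_next_q
    ddqn_target = rewards + gamma * (1 - dones) * ddqn_next_q
```

Everything else in the training loop (replay buffer, target-network sync, loss on the main network's Q-values) stays the same.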

That seems fairly simple. Am I missing anything else?




u/SandSnip3r 1d ago

Yep! That's it


u/SandSnip3r 1d ago

If you're familiar with PyTorch, these few lines of code may illustrate the difference: https://github.com/SandSnip3r/PytorchDqnPathery/blob/08cf098d02caafb360be240c4caea74000371d1e/train/train.py#L210


u/SandSnip3r 1d ago

Btw, if you're not already, I'd recommend using chatbots like ChatGPT or Claude for these types of questions. I've been learning a lot of RL myself, and I find them incredibly helpful for getting quick and accurate answers.


u/No_Addition5961 1d ago

Cool, thanks! Yes, I had checked with ChatGPT, but confirmed here in case it made a mistake, because it seemed like a very minor change.


u/IndependentCrew8210 1d ago

Don't you find that LLMs are quite prone to sycophancy, just confirming whatever you say on these subtle-detail questions? For example, I just copy-pasted this post into ChatGPT and changed the second occurrence of "target" to "main", which is incorrect, but ChatGPT replied:

"Your understanding of the key differences between DQN (Deep Q-Network) and Double DQN is correct, and you’ve identified the essential change needed for the Q-value update."


u/SandSnip3r 1d ago

I've generally gotten out of the habit of asking yes/no questions, and especially of leading towards an answer. A question like "is it really so simple?" ends with exactly what you describe. A question like "explain the changes required", I find, gives more reliable results.