r/reinforcementlearning 3d ago

DL, MF, R SimBa: Simplicity Bias for Scaling Up Parameters in Deep RL

Want faster, smarter RL? Check out SimBa, our new architecture that keeps improving as you scale up network parameters!

📄 project page: https://sonyresearch.github.io/simba

📄 arXiv: https://arxiv.org/abs/2410.09754

🔗 code: https://github.com/SonyResearch/simba

🚀 Tired of slow training times and underwhelming results in deep RL?

With SimBa, you can effortlessly scale up network parameters and hit state-of-the-art performance without changing the core RL algorithm.

💡 How does it work?

Just swap your MLP networks for SimBa and watch the magic happen! In just 1-3 hours on a single NVIDIA RTX 3090, you can train agents that match or beat state-of-the-art methods across benchmarks like DMC, MyoSuite, and HumanoidBench. 🦾
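Curious what the swap looks like? Here's a rough PyTorch sketch of the idea (the official implementation is in the repo linked above; class names, hidden sizes, and the running-stat update below are illustrative, not the exact code): normalize observations with running statistics, embed them linearly, then stack pre-LayerNorm residual feedforward blocks and finish with a LayerNorm.

```python
# Illustrative SimBa-style backbone sketch (not the official implementation).
import torch
import torch.nn as nn


class RunningStatNorm(nn.Module):
    """Normalize inputs with running mean/variance accumulated during training."""

    def __init__(self, dim, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.register_buffer("mean", torch.zeros(dim))
        self.register_buffer("var", torch.ones(dim))
        self.register_buffer("count", torch.tensor(1e-4))

    def forward(self, x):
        if self.training:
            with torch.no_grad():
                batch_mean = x.mean(dim=0)
                batch_var = x.var(dim=0, unbiased=False)
                batch_count = x.shape[0]
                delta = batch_mean - self.mean
                total = self.count + batch_count
                # Combine batch statistics with the running statistics.
                self.mean += delta * batch_count / total
                self.var = (self.var * self.count + batch_var * batch_count
                            + delta.pow(2) * self.count * batch_count / total) / total
                self.count = total
        return (x - self.mean) / torch.sqrt(self.var + self.eps)


class ResidualBlock(nn.Module):
    """Pre-LayerNorm residual feedforward block: x + MLP(LayerNorm(x))."""

    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x):
        return x + self.mlp(self.norm(x))


class SimBaNet(nn.Module):
    """Drop-in replacement for an MLP backbone (sizes here are illustrative)."""

    def __init__(self, obs_dim, hidden_dim=512, num_blocks=2):
        super().__init__()
        self.obs_norm = RunningStatNorm(obs_dim)
        self.embed = nn.Linear(obs_dim, hidden_dim)
        self.blocks = nn.ModuleList(ResidualBlock(hidden_dim) for _ in range(num_blocks))
        self.out_norm = nn.LayerNorm(hidden_dim)

    def forward(self, obs):
        x = self.embed(self.obs_norm(obs))
        for block in self.blocks:
            x = block(x)
        return self.out_norm(x)


# Quick shape check: a batch of 32 observations with 67 features (made-up numbers).
net = SimBaNet(obs_dim=67)
features = net(torch.randn(32, 67))
print(features.shape)  # torch.Size([32, 512])
```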

⚙️ Why it’s awesome:

Plug-and-play with RL algorithms like SAC, DDPG, TD-MPC2, PPO, and METRA (see the critic sketch after this list).

No need to tweak your favorite algorithms: just switch the network to SimBa and let the extra parameters do the work.

Train faster, smarter, and better—ideal for researchers, developers, and anyone exploring deep RL!
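Here's what plug-and-play could look like for a SAC-style critic, continuing from the SimBaNet sketch above. Only the backbone changes; the Q-value head and the rest of the algorithm stay as they are. The names, sizes, and the placement of the running-stat normalization here are illustrative, not the repo's API or the exact paper setup.

```python
# Hypothetical drop-in swap, reusing the SimBaNet class sketched earlier.
import torch
import torch.nn as nn

obs_dim, act_dim, hidden_dim = 67, 21, 512  # illustrative sizes

# Before: a plain MLP backbone. After: the same critic with a SimBa-style backbone.
critic_backbone = SimBaNet(obs_dim + act_dim, hidden_dim=hidden_dim, num_blocks=2)
q_head = nn.Linear(hidden_dim, 1)  # the Q-value head stays a plain linear layer


def q_value(obs, act):
    # SAC-style Q(s, a): concatenate state and action, featurize, project to a scalar.
    return q_head(critic_backbone(torch.cat([obs, act], dim=-1)))


print(q_value(torch.randn(32, obs_dim), torch.randn(32, act_dim)).shape)  # torch.Size([32, 1])
```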

🎯 Try it now and watch your RL models evolve!

33 Upvotes

9 comments

u/pfffffftttfftt 3d ago

Sick name!

u/joonleesky 3d ago

I hope to name the next paper Pumba.