Donghu Kim


Hey! My name is Donghu Kim. I am a Master's student at KAIST (advised by Jaegul Choo), studying reinforcement learning with these splendid researchers: Byungkun Lee, Hojoon Lee, Dongyoon Hwang, Hyunseung Kim, and Kyungmin Lee.

My main interest at the moment is building an agent capable of adapting to multiple environments, both sequentially and simultaneously.

I still have a long, long way to go; if you want to discuss anything research-related, I'd be more than happy to chat!

Email  /  Google Scholar  /  Github


News


Publications

Reinforcement Learning Skill Discovery
Do’s and Don’ts: Learning Desirable Skills with Instruction Videos
Hyunseung Kim, Byungkun Lee, Hojoon Lee, Dongyoon Hwang, Donghu Kim, Jaegul Choo.
Preprint, Under Review.
project page / paper

We present DoDont, a skill discovery algorithm that learns diverse behaviors that follow the behaviors shown in "do" videos while avoiding those shown in "don't" videos.

Reinforcement Learning Pre-training
ATARI-PB: Investigating Pre-Training Objectives for Generalization in Pixel-Based RL
Donghu Kim*, Hojoon Lee*, Kyungmin Lee*, Dongyoon Hwang, Jaegul Choo.
ICML'24.
project page / paper

We investigate which pre-training objectives are beneficial for in-distribution, near-out-of-distribution, and far-out-of-distribution generalization in visual reinforcement learning.

Reinforcement Learning Plasticity
Slow and Steady Wins the Race: Maintaining Plasticity with Hare and Tortoise Networks
Hojoon Lee, Hyeonseo Cho, Hyunseung Kim, Donghu Kim, Dugki Min, Jaegul Choo, Clare Lyle.
ICML'24.
paper

To allow the network to continually adapt and generalize, we introduce the Hare and Tortoise architecture, inspired by the complementary learning system of the human brain.
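
For anyone curious about the mechanism, here is a minimal sketch of the general slow/fast two-network idea this builds on, not the paper's exact training recipe: a fast "hare" network learns via gradient descent, a slow "tortoise" copy tracks it with an exponential moving average, and the hare is periodically reinitialized from the tortoise. The class name and the `ema_tau` / `reset_interval` values below are illustrative assumptions.

```python
import copy
import torch

class HareAndTortoise:
    """Sketch of a slow/fast network pair (illustrative, not the paper's exact recipe)."""

    def __init__(self, network: torch.nn.Module, ema_tau: float = 0.005,
                 reset_interval: int = 10_000):
        self.hare = network                     # fast learner, updated by gradients
        self.tortoise = copy.deepcopy(network)  # slow copy, updated only by EMA
        for p in self.tortoise.parameters():
            p.requires_grad_(False)
        self.ema_tau = ema_tau
        self.reset_interval = reset_interval
        self.step_count = 0

    @torch.no_grad()
    def after_gradient_step(self):
        """Call once after each optimizer step on the hare network."""
        self.step_count += 1
        # The tortoise slowly tracks the hare via an exponential moving average.
        for slow, fast in zip(self.tortoise.parameters(), self.hare.parameters()):
            slow.lerp_(fast, self.ema_tau)
        # Periodically reinitialize the hare from the tortoise: a soft reset that
        # restores plasticity without discarding accumulated knowledge.
        if self.step_count % self.reset_interval == 0:
            for fast, slow in zip(self.hare.parameters(), self.tortoise.parameters()):
                fast.copy_(slow)
```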


Template based on Jon Barron's website.