Inwoo Hwang

I am a final-year Ph.D. candidate in the Department of Electrical and Computer Engineering at Seoul National University (SNU), working in the 3D Vision Lab advised by Prof. Young Min Kim. Previously, I was a Research Scientist Intern at Snap Research.

My research focuses on generative motion modeling for Embodied Motion Intelligence, with an emphasis on controllable and robust motion generation under sparse, noisy, and causal conditions, alongside text-driven motion synthesis.

News

Latest updates
  1. Award Received the Young Researcher Award from SNU INMC.
  2. NeurIPS 2025 SnapMoGen accepted to NeurIPS 2025.
  3. ICCV 2025 Four papers at ICCV 2025: SceneMI Highlight, SFControl, Less is More, and Event-Driven Storytelling.
  4. CVPRW 2025 Goal-Driven Human Motion Synthesis in Diverse Tasks presented at CVPR 2025 Workshop.
  5. Eurographics 2025 Versatile Physics-based Character Control with Hybrid Latent Representation presented at Eurographics 2025.
  6. Started a research internship at Snap Research in New York City.
  7. CVPR 2023 Text2Scene selected as Highlight at CVPR 2023.
  8. Eurographics 2023 Text2PointCloud: Text-Driven Stylization for Sparse PointCloud presented at Eurographics 2023 (Short).
  9. WACV 2023 Ev-NeRF: Event Based Neural Radiance Field presented at WACV 2023.
  10. Award Awarded the Hyundai Motor Chung Mong-Koo Scholarship (Ph.D., Chung Mong-Koo Foundation).

Publications

* equal contribution
ScaleMoGen: Autoregressive Next-Scale Prediction for Human Motion Generation

Inwoo Hwang*, Hojun Jang*, Bing Zhou, Jian Wang, Young Min Kim, Chuan Guo

arXiv preprint, 2026

A next-scale token map prediction framework with a multi-scale skeletal-temporal hierarchy for human motion generation, enabling zero-shot motion editing.

EgoForce: Robust Online Egocentric Motion Reconstruction via Diffusion Forcing

Inwoo Hwang, Donggeun Lim, Hojun Jang, Young Min Kim

arXiv preprint, 2026

An online causal framework for full-body motion reconstruction from sparse and noisy egocentric observations using diffusion forcing.

SnapMoGen: Human Motion Generation from Expressive Texts

Chuan Guo, Inwoo Hwang, Jian Wang, Bing Zhou

NeurIPS, 2025

A large-scale text-motion dataset featuring high-quality motion capture and expressive textual annotations, alongside a masked modeling framework with multi-scale tokens.

SceneMI: Motion In-betweening for Modeling Human-Scene Interaction

Inwoo Hwang, Bing Zhou, Young Min Kim, Jian Wang, Chuan Guo

ICCV, 2025 Highlight

Formulates human-scene interaction modeling as a motion in-betweening problem, remaining robust to inaccurate keyframes and supporting practical applications such as video-based HSI reconstruction.

Motion Synthesis with Sparse and Flexible Keyjoint Control

Inwoo Hwang, Jinseok Bae, Donggeun Lim, Young Min Kim

ICCV, 2025

A controllable motion synthesis pipeline for high-quality motion generation from sparse control signals, including time-agnostic motion control without explicit timing signals.

Less is More: Improving Motion Diffusion Models with Sparse Keyframes

Jinseok Bae, Inwoo Hwang, Young Yoon Lee, Ziyu Guo, Joseph Liu, Yizhak Ben-Shabat, Young Min Kim, Mubbasir Kapadia

ICCV, 2025

A sparse keyframe-based motion diffusion model that better captures text prompts and improves overall motion quality.

Versatile Physics-based Character Control with Hybrid Latent Representation

Jinseok Bae, Jungdam Won, Donggeun Lim, Inwoo Hwang, Young Min Kim

Eurographics, 2025

Integrates continuous and discrete latent representations so that physically simulated characters can efficiently use motion priors and adapt to diverse challenging control tasks.

Research Experience

Snap Research

Research Scientist Intern

New York, NY · May 2024 – Sep 2024

Mentors: Bing Zhou, Chuan Guo, Jian Wang

Worked on physically plausible reconstruction of human motion and scenes from real-world videos. This work resulted in SceneMI, published at ICCV 2025 as a Highlight.

Selected Honors & Awards

Education

Academic Activities

Teaching