About me

I am a second-year Ph.D. student in computer science at the University of Massachusetts Amherst, advised by Prof. Chuang Gan. My research focuses on Robotics and Embodied AI, with a particular emphasis on robot skill acquisition: equipping robots with versatile skills and improving their ability to generalize. The practical applications of my research span various domains, including domestic robots and autonomous manipulation. Prior to my graduate studies, I obtained my bachelor's degree from Peking University, where I worked as an undergraduate researcher at the Center on Frontiers of Computing Studies (CFCS) under the guidance of Prof. Hao Dong.

πŸ”₯ News

  • Sep 2024: Our paper Architect has been accepted to NeurIPS 2024!
  • May 2024: Started an internship at NVIDIA!

πŸ“ Publications

arXiv

Towards Generalist Robots: A Promising Paradigm via Generative Simulation

Zhou Xian, Theophile Gervet, Zhenjia Xu, Tsun-Hsuan Wang, Yian Wang, Yufei Wang

Paper

  • The purpose of this paper is to share our excitement with the community and highlight a promising research direction in robotics and AI. We believe the proposed paradigm is a feasible path towards the longstanding goal of robotics research: deploying robots, or embodied AI agents more broadly, in various non-factory real-world settings to perform diverse tasks.
ICML 2024

RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation

Yufei Wang*, Zhou Xian*, Feng Chen*, Tsun-Hsuan Wang, Yian Wang, Katerina Fragkiadaki, Zackory Erickson, David Held, Chuang Gan

Paper Project Code

  • RoboGen leverages the latest advancements in foundation and generative models. Instead of directly using or adapting these models to produce policies or low-level actions, we advocate for a generative scheme that uses these models to automatically generate diversified tasks, scenes, and training supervision, thereby scaling up robotic skill learning with minimal human supervision.
ICLR 2024 Spotlight

Thin-Shell Object Manipulations With Differentiable Physics Simulations

Yian Wang*, Juntian Zheng*, Zhehuan Chen, Zhou Xian, Gu Zhang, Chao Liu, Chuang Gan

Paper Project

  • We introduce ThinShellLab, a fully differentiable simulation platform tailored for robotic interaction with diverse thin-shell materials of varying properties, enabling flexible learning and evaluation of thin-shell manipulation skills.
ICRA 2024

Articulated Object Manipulation with Coarse-to-fine Affordance for Mitigating the Effect of Point Cloud Noise

Suhan Ling*, Yian Wang*, Shiguang Wu, Yuzheng Zhuang, Tianyi Xu, Yu Li, Chang Liu, Hao Dong

Paper Project

  • We leverage a property of real-world scanned point clouds: the closer the camera is to the object, the less noisy the point cloud becomes. Based on this, we propose a novel coarse-to-fine affordance learning pipeline that mitigates the effect of point cloud noise in two stages.
arXiv

Mixup-Augmented Meta-Learning for Sample-Efficient Fine-Tuning of Protein Simulators

Jingbang Chen*, Yian Wang*, Xingwei Qu, Shuangjia Zheng, Yaodong Yang, Hao Dong, Jie Fu

Paper Code

  • We apply soft prompt-based learning to molecular dynamics simulations, achieving strong generalization across various conditions with limited training data. The framework includes pre-training with data mixing and prompts, followed by meta-learning-based fine-tuning.
ECCV 2022

AdaAfford: Learning to Adapt Manipulation Affordance for 3D Articulated Objects via Few-shot Interactions

Yian Wang*, Ruihai Wu*, Kaichun Mo*, Jiaqi Ke, Qingnan Fan, Leonidas J. Guibas, Hao Dong

Paper Project Code

  • We propose a novel framework that learns to perform a small number of test-time interactions to quickly adapt affordance priors into more accurate, instance-specific posteriors by resolving dynamic and kinematic uncertainties.
ICLR 2022

VAT-Mart: Learning Visual Action Trajectory Proposals for Manipulating 3D ARTiculated Objects

Ruihai Wu*, Yan Zhao*, Kaichun Mo*, Zizheng Guo, Yian Wang, Tianhao Wu, Qingnan Fan, Xuelin Chen, Leonidas J. Guibas, Hao Dong

Paper Project Code

  • We propose an interaction-for-perception framework to predict dense geometry-aware, interaction-aware, and task-aware visual action affordance for 3D articulated objects.

πŸŽ– Honors and Awards

  • SenseTime Scholarship 2021-2022
  • Pacemaker to Merit Student in Peking University 2021-2022
  • Yanhong Li Scholarship in Peking University 2021-2022
  • Benz Scholarship in Peking University 2020-2021
  • Lee Wai Wing Scholarship in Peking University 2019-2020
  • Merit Student in Peking University 2019-2020 and 2020-2021
  • Second class prize in PKU-CPC 2021
  • Silver medal in National Olympiad in Informatics (NOI), China Computer Federation, 2018

πŸ“– Education

  • Sep 2023 - now, Ph.D. student in Computer Science, University of Massachusetts Amherst.
  • Aug 2019 - July 2023, undergraduate student at Peking University.

πŸ’» Internships