Biography

I am Ling Yang, a final-year Ph.D. student at Peking University, advised by Prof. Bin Cui and Prof. Luxia Zhang, and an incoming postdoctoral research fellow at Princeton University, where I am fortunate to work with Prof. Mengdi Wang. My research focuses on developing advanced algorithms and frameworks for LLMs and diffusion models. I have previously worked with Yang Song, Guohao Li, Shuicheng Yan, Ming-Hsuan Yang, Bernard Ghanem, Stefano Ermon, and Jure Leskovec. I serve as a program committee member or reviewer for international conferences and journals, including SIGGRAPH, TPAMI, ICML, ICLR, NeurIPS, CVPR, KDD, and AAAI. Feel free to contact me for potential collaborations or discussions.
Email | WeChat | Github | Google Scholar | Twitter

We have open positions for Ph.D. students, Master's students, and research interns (not limited to PKU and Princeton University; remote work is possible). I also lead a research team that has produced a series of works on diffusion models and LLMs, including RPG-DiffusionMaster, Buffer of Thoughts, SuperCorrect, ReasonFlux, VideoTetris, Consistency Flow Matching, and IterComp. If you are interested, please contact me directly!

Research Summary

My goal is to build powerful AI models capable of understanding, generating, and reasoning over high-dimensional data across diverse modalities. I currently focus on developing advanced generative models, including their training methodologies, architecture design, alignment, inference efficiency, and applications. I am also interested in generative modeling as a tool for scientific discovery.

Generative Model Foundations

Generative Applications

What's New

  • I release ReasonFlux, which beats OpenAI o1-preview and DeepSeek-V3 via hierarchical reinforcement learning on 8 GPUs.
  • 6 papers about LLMs and Diffusion Models are accepted by ICLR 2025.
  • I propose SuperCorrect, achieving new SOTA LLM reasoning performance among all 7B models.
  • I propose IterComp, leveraging iterative RLHF to achieve fast and realistic T2I generation.
  • 5 papers about Diffusion Models and LLMs are accepted by NeurIPS 2024.
  • I propose Consistency Flow Matching, converging 4.4x faster than Consistency Models and 1.7x faster than Rectified Flow while achieving better FID.
  • I propose a new RAG-based LLM reasoning framework, Buffer of Thoughts (NeurIPS 2024 Spotlight).
  • I release VideoTetris, the first compositional text-to-video generation framework.
  • 2 papers about Diffusion Models and AI for Science are accepted by ICML 2024.
  • One paper about general/molecular graph diffusion is accepted by TKDE 2024.
  • One paper about improved training algorithm of Diffusion Transformers (DiT), DDPMs and Score SDEs is accepted by CVPR 2024.
  • I release our SOTA LLM-controlled diffusion model, RPG-DiffusionMaster.
  • 3 papers about Diffusion Models, GNN, AI for Science are accepted by ICLR 2024.
  • Our paper about protein-aware 3D molecular diffusion models is accepted by AAAI 2024.
  • Our survey on Diffusion Models, written in collaboration with OpenAI, is accepted by ACM Computing Surveys 2023.
  • One paper about text-to-image diffusion is accepted by NeurIPS 2023.
  • I publish a book about Diffusion Models.
  • One paper is accepted by TNNLS 2023.
  • One paper is accepted by TKDE 2023.
  • 2 papers are accepted as ICML 2022 Spotlight.
  • One paper is accepted by CVPR 2020.

Selected Papers [Full List]

  • Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models
    Ling Yang, Zhaochen Yu, Tianjun Zhang, Shiyi Cao, Minkai Xu, Wentao Zhang, Joseph E Gonzalez, Bin Cui
    NeurIPS 2024 spotlight paper | repo | tweet


  • ReasonFlux: Hierarchical LLM Reasoning via Scaling Thought Templates
    Ling Yang, Zhaochen Yu, Bin Cui, Mengdi Wang
    paper | repo | tweet


  • Mastering Text-to-Image Diffusion: Recaptioning, Planning, and Generating with Multimodal LLMs
    Ling Yang, Zhaochen Yu, Chenlin Meng, Minkai Xu, Stefano Ermon, Bin Cui
    ICML 2024 paper | repo | tweet


  • IterComp: Iterative Composition-Aware Feedback Learning from Model Gallery for Text-to-Image Generation
    Xinchen Zhang*, Ling Yang*, Guohao Li, Yaqi Cai, Jiake Xie, Yong Tang, Yujiu Yang, Mengdi Wang, Bin Cui
    paper | repo | tweet


  • Consistency Flow Matching: Defining Straight Flows with Velocity Consistency
    Ling Yang, Zixiang Zhang, Zhilong Zhang, Xingchao Liu, Minkai Xu, Wentao Zhang, Chenlin Meng, Stefano Ermon, Bin Cui
    paper | repo | tweet
  • VideoTetris: Towards Compositional Text-to-Video Generation
    Ye Tian*, Ling Yang*, Haotian Yang, Yuan Gao, Yufan Deng, Jingmin Chen, Xintao Wang, Zhaochen Yu, Xin Tao, Pengfei Wan, Di Zhang, Bin Cui
    NeurIPS 2024 paper | repo | tweet


  • DPGN: Distribution Propagation Graph Network for Few-Shot Learning
    Ling Yang, Liangliang Li, Zilun Zhang, Xinyu Zhou, Erjin Zhou, Yu Liu
    CVPR 2020 paper | repo


Advising Experience

Zhaochen Yu (Master student at National University of Singapore)

Xinchen Zhang (Master student at Tsinghua University)

Ye Tian (incoming Ph.D. student at PKU)

Bohan Zeng (incoming Ph.D. student at PKU)

Zhilin Huang (Ph.D. student at Tsinghua University)

Zhilong Zhang (incoming Ph.D. student at Tsinghua University)

Yinjie Wang (Ph.D. student at The University of Chicago)

Awards