Chunhui Liu

Staff ML Engineer · TikTok Video Search

I'm a Staff Machine Learning Engineer on TikTok's Video Search team, where I serve as tech lead for relevance and pretraining. I work with a talented group of engineers to build the world's largest short-video search engine, indexing tens of billions of videos and serving billions of users daily. My team develops BERT- and LLM-based models at the core of the search engine, drawing on advanced techniques in CV, NLP, multimodal learning, and pretraining.

Previously, I was an Applied Scientist at Amazon AI, where I conducted research and built real-world applications for video and action understanding. I hold a Master's degree in Computer Vision from CMU and a Bachelor's degree in Computer Science, summa cum laude, from Peking University, where I was advised by Prof. Jiaying Liu.

We are hiring! Interested in working together? Reach out via email.

Research & Publications

My research interests span deep learning, computer vision, multi-modal learning, and video understanding. I am passionate about building models that understand the physical world the way humans do.

TubeR
CVPR 2022

TubeR: Tubelet Transformer for Video Action Detection

Jiaojiao Zhao, Yanyi Zhang, Xinyu Li, Hao Chen, Bing Shuai, Mingze Xu, Chunhui Liu, Kaustav Kundu, Yuanjun Xiong, Davide Modolo, Ivan Marsic, Cees G. M. Snoek, Joseph Tighe

The first state-of-the-art transformer model for action detection, using learnable queries as tubelet proposals.

SFC
ICCV 2021

Selective Feature Compression for Efficient Activity Recognition Inference

Chunhui Liu, Xinyu Li, Hao Chen, Davide Modolo, Joseph Tighe

Uses transformers as spatial feature samplers to achieve 6× faster inference with no drop in accuracy.

VidTr
ICCV 2021

VidTr: Video Transformer Without Convolutions

Yanyi Zhang, Xinyu Li, Chunhui Liu, Bing Shuai, Yi Zhu, Biagio Brattoli, Hao Chen, Ivan Marsic, Joseph Tighe

One of the earliest works to apply the transformer architecture for action recognition.

Survey
arXiv 2021

A Comprehensive Study of Deep Video Action Recognition

Yi Zhu, Xinyu Li, Chunhui Liu, Mohammadreza Zolfaghari, Yuanjun Xiong, Chongruo Wu, Zhi Zhang, Joseph Tighe, R. Manmatha, Mu Li

A survey covering 16 datasets and 200+ papers on action understanding, accompanied by a full codebase and tutorial workshops.

PKU-MMD
ACM MM Workshop 2017

PKU-MMD: A Large Scale Benchmark for Continuous Multi-Modal Human Action Understanding

Chunhui Liu, Yueyu Hu, Yanghao Li, Sijie Song, Jiaying Liu

A skeleton-based action detection dataset with continuous multi-modal recordings.

Blog

Occasional thoughts on machine learning, research, and engineering.

No posts yet; stay tuned.

View all posts →