László A. Jeni

I am a Systems Scientist (faculty) in the Robotics Institute at Carnegie Mellon University. I focus on advancing the state of the art in multimodal methods for computational behavior science, specifically the modeling, analysis, and synthesis of human behavior and emotion using diverse sensors. I currently direct the Computational Behavior Lab.


April 12, 2021 – Our paper on estimating 3D human pose, shape, and texture from low-resolution images and videos is now available as early access in IEEE TPAMI!

March 10, 2021 – Attending and presenting our work on A New Paradigm for Geometric Reasoning through Structure from Category at the 2021 NSF National Robotics Initiative Principal Investigators' Meeting!

January 5, 2021 – We are presenting our work on Synthetic Expressions are Better Than Real for Learning to Detect Facial Actions at the 2021 Winter Conference on Applications of Computer Vision! See the project page for more details.

September 1, 2020 – Late-Breaking Results paper accepted to the 22nd ACM International Conference on Multimodal Interaction, titled Automated Detection of Optimal Deep Brain Stimulation Device Settings!

August 23, 2020 – We are presenting our work on 3D Human Shape and Pose from a Single Low-Resolution Image with Self-Supervised Learning at the 2020 European Conference on Computer Vision! See the project page for more details.

June 1, 2020 – Our book chapter on automated facial action coding has been published in What the Face Reveals (3rd ed.)!

February 2, 2020 – Journal article accepted to IEEE Transactions on Biometrics, Behavior, and Identity Science, titled Crossing Domains for AU Coding: Perspectives, Approaches, and Measures!

November 2, 2019 – Our paper on the ICCV 2019 Dense 3D Face Reconstruction in the Wild from Video (3DFAW-Video) workshop & challenge is online: The 2nd 3D Face Alignment in the Wild Challenge (3DFAW-Video): Dense Reconstruction From Video!

May 1, 2019 – I am organizing a workshop & challenge on Dense 3D Face Reconstruction in the Wild from Video (3DFAW-Video), in conjunction with ICCV 2019, Seoul, Korea!

March 3, 2019 – Demo paper accepted to Face & Gesture 2019: AFAR: A deep learning based tool for automated facial affect recognition!

January 1, 2019 – Paper accepted for oral presentation at Face & Gesture 2019: Cross-domain AU Detection: Domains, Learning Approaches, and Measures!