The Computational Behavior Lab develops multi-modal methods for computational behavior science, with a focus on the modeling, analysis, and synthesis of human behavior and emotion using diverse sensors.


Members

PhD:

Mosamkumar Dabhi (CMU RI) with Simon Lucey
Rohan Choudhury (CMU RI) with Kris Kitani
Yaohan Ding (UPitt) with Jeff Cohn
Maneesh Bilalpur (UPitt) with Jeff Cohn

Masters:

Ambareesh Revanur (RI MSR)
Aarush Gupta (RI MSR)
Heng Yu (RI MSR)

Visitors:

Koichiro Niinuma (Fujitsu)

Lab Alumni:

Rahul Mysore Venkatesh (RI MSCV)
Dai Li (RI MSCV)
Zhuoqian Yang (RI MSCV)
Itir Onal (postdoc, with Jeff Cohn)
Xiangyu Xu (postdoc, with Fernando De la Torre)
Rohith Krishnan Pillai (RI MSR)
Bhavan Jasani (RI MSR'19, with Jeff Cohn)
Chenxi Xu (RI MSCV'19)
Neeraj Sajjan (RI MSCV'19)

Research Topics

Dense 3D Face Alignment

Real-time, dense 3D face alignment is a challenging problem in computer vision. To achieve real-time, person-independent 3D registration from 2D video, we developed a 3D cascade regression approach in which facial landmarks remain invariant across pose changes of up to approximately 60 degrees. From each 2D video frame of a person's face, a dense 3D shape is registered in real time.

Project: ZFace
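The cascaded-regression idea above can be illustrated with a toy sketch: each stage fits a regressor mapping features of the current shape estimate to a corrective update, and applying the stages in sequence drives the estimate toward the true shape parameters. This is a minimal illustration with made-up linear features and synthetic data, not the ZFace implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: recover low-dimensional "shape parameters" from noisy
# linear observations (standing in for image evidence at landmarks).
n_samples, n_params, n_obs = 200, 4, 8
true_shapes = rng.normal(size=(n_samples, n_params))

A = rng.normal(size=(n_params, n_obs))
observations = true_shapes @ A + 0.01 * rng.normal(size=(n_samples, n_obs))

def features(estimates):
    # In real alignment, features are sampled from the image at the current
    # landmark positions; here we just concatenate evidence and estimate.
    return np.concatenate([observations, estimates], axis=1)

estimates = np.zeros_like(true_shapes)   # mean-shape initialization
initial_err = np.mean(np.abs(true_shapes - estimates))

stages = []
for _ in range(3):
    X = features(estimates)
    residual = true_shapes - estimates
    # Per-stage linear regressor fitted to the remaining residual.
    R, *_ = np.linalg.lstsq(X, residual, rcond=None)
    stages.append(R)
    estimates = estimates + X @ R

final_err = np.mean(np.abs(true_shapes - estimates))
```

At test time, the learned per-stage regressors in `stages` are applied in the same order to a new initialization, which is what makes the cascade fast enough for real-time use.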


Dense Body Pose

Low-resolution 3D human shape and pose estimation is a challenging problem. We propose a resolution-aware neural network that handles images of different resolutions with a single model. To train the network, we propose a directional self-supervision loss that exploits the consistency of outputs across resolutions to remedy the lack of high-quality 3D labels. In addition, we introduce a contrastive feature loss, which is more effective than MSE for comparing high-dimensional vectors and helps learn better feature representations.

Project: Low-resolution dense pose estimation
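The two losses described above can be sketched as follows. This is an illustrative interpretation, not the paper's code: the directional loss treats the high-resolution prediction as a fixed target for the low-resolution one, and the contrastive loss is written here in a standard InfoNCE style, matching feature vectors by relative similarity rather than MSE.

```python
import numpy as np

def directional_consistency_loss(pred_low, pred_high):
    # One-way supervision: the high-resolution prediction acts as a fixed
    # target (the copy stands in for a stop-gradient in a real framework).
    target = pred_high.copy()
    return np.mean((pred_low - target) ** 2)

def contrastive_feature_loss(feats_a, feats_b, temperature=0.1):
    # InfoNCE-style loss: feature i in A should be most similar to
    # feature i in B among all candidates in the batch.
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature                  # (N, N) similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
f = rng.normal(size=(8, 16))

pred_high = rng.normal(size=(8, 4))
pred_low = pred_high + 0.1 * rng.normal(size=(8, 4))
cons = directional_consistency_loss(pred_low, pred_high)

loss_matched = contrastive_feature_loss(f, f)                    # aligned pairs
loss_mismatched = contrastive_feature_loss(f, np.roll(f, 1, axis=0))
```

The contrastive loss is low when corresponding features line up and high when they do not, which is the property that makes it a sharper training signal than MSE for high-dimensional features.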


Cognitive Assistant for the Visually Impaired

We developed a prototype mobile vision system for the visually impaired that performs both person and emotion recognition in diverse environments.

Project: ZFace

TED Talk: How New Technology Helps Blind People Explore the World


Automated Facial Action Unit Coding

This study examined how design choices influence the performance of deep-learning systems for facial AU coding by evaluating combinations of the components of such systems and their parameters.
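Evaluating combinations of components and parameters amounts to enumerating a design space and scoring each configuration under identical conditions. A minimal sketch with hypothetical component names (the actual choices and values in the study differ):

```python
import itertools

# Hypothetical design space for an AU-coding pipeline; names and values
# are illustrative, not the study's.
design_space = {
    "face_alignment": ["none", "similarity", "3d"],
    "backbone": ["vgg16", "resnet50"],
    "learning_rate": [1e-3, 1e-4],
}

# Cartesian product: one dict per configuration to train and evaluate.
configs = [dict(zip(design_space.keys(), values))
           for values in itertools.product(*design_space.values())]
n_configs = len(configs)   # 3 * 2 * 2 = 12 configurations
```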



Facial Expression Synthesis

This study proposed a generative approach that synthesizes facial expressions through 3D-geometry-based AU manipulation with an idiosyncratic loss. Through semantic resampling, the approach provides a balanced distribution of AU intensity labels, which is crucial for training AU intensity estimators. We showed that training on the balanced synthetic set outperforms training on the real dataset when evaluated on the same test set. The method generalizes to non-frontal views and to unseen domains.
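The balancing effect of resampling can be sketched as follows. In the study the extra samples come from synthesized faces; this toy version simply resamples indices with replacement from a skewed pool of AU intensity labels (0-5, heavily weighted toward 0, as is typical of AU data) to produce equal counts per intensity level.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# Skewed pool of AU intensity labels (0-5); the probabilities are
# illustrative of the typical imbalance, not measured from any dataset.
labels = rng.choice(6, size=1000, p=[0.6, 0.2, 0.1, 0.05, 0.03, 0.02])

per_level = 100
balanced_idx = np.concatenate([
    rng.choice(np.where(labels == level)[0], size=per_level, replace=True)
    for level in range(6)
])
balanced = labels[balanced_idx]
counts = Counter(balanced.tolist())   # every level now appears 100 times
```

An estimator trained on `balanced` sees every intensity level equally often, which is the property the synthetic set provides without duplicating real images.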



Smartphone-based physiology measurements