Shuangquan Feng


About

I am a Ph.D. student in the Neurosciences Graduate Program at the University of California San Diego (UCSD), advised by Virginia R. de Sa. I previously received my B.S. in Cognitive Science with a specialization in Machine Learning and Neural Computation, also from UCSD.

My research interests lie in computer vision, cognitive modeling, and computational neuroscience. My current research focuses on facial expression recognition and its applications. Active projects include: (1) improving automated facial expression recognition systems; (2) incorporating facial expression recognition into large language models (LLMs); (3) applying facial expression recognition in education; and (4) applying facial expression recognition to improve image generation algorithms.

Selected Publications & Preprints
One-Frame Calibration with Siamese Network in Facial Action Unit Recognition
arXiv, 2024
We propose one-frame calibration (OFC) with a novel Calibrating Siamese Network (CSN) architecture for AU recognition and show that it substantially improves the performance of the baseline model by mitigating facial attribute biases (e.g., biases due to wrinkles, eyebrow positions, and facial hair).
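A toy sketch of the calibration idea (hypothetical module and layer choices; the simple feature-subtraction step is only a simplification, not the actual CSN design described in the paper):

```python
import torch
import torch.nn as nn

class CalibratingSiameseSketch(nn.Module):
    """Toy illustration of one-frame calibration with a Siamese encoder.

    A shared encoder embeds both the target frame and a single calibration
    frame of the same person; the calibration embedding is subtracted to
    cancel person-specific facial attributes before AU prediction.
    (Hypothetical simplification, not the paper's exact CSN architecture.)
    """

    def __init__(self, feat_dim: int = 128, num_aus: int = 12):
        super().__init__()
        self.encoder = nn.Sequential(              # shared (Siamese) backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        self.head = nn.Linear(feat_dim, num_aus)   # AU intensity regressor

    def forward(self, target_img: torch.Tensor, calib_img: torch.Tensor):
        f_target = self.encoder(target_img)        # features of the frame to score
        f_calib = self.encoder(calib_img)          # features of the calibration frame
        return self.head(f_target - f_calib)       # calibrated AU predictions

if __name__ == "__main__":
    model = CalibratingSiameseSketch()
    target = torch.randn(1, 3, 112, 112)           # expressive frame
    calib = torch.randn(1, 3, 112, 112)            # neutral calibration frame, same person
    print(model(target, calib).shape)              # torch.Size([1, 12])
```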
FERGI: Automatic Scoring of User Preferences for Text-to-Image Generation from Spontaneous Facial Expression Reaction
Shuangquan Feng*, Junhua Ma*, Virginia R. de Sa (*equal contribution)
arXiv, 2023
Users' activations of multiple facial action units (AUs) are highly correlated with their evaluations of text-to-image generation and can be used to automatically score user preferences for images generated by text-to-image generative models.
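As a hedged illustration of the scoring idea, the sketch below reads a preference score off a weighted combination of detected AU intensities; the AU choices and weights are illustrative placeholders, not the model fitted in the paper:

```python
# Hypothetical example: score a user's preference for a generated image from
# detected facial action unit (AU) intensities. The AUs and weights below are
# illustrative only, not the coefficients reported in the paper.
AU_WEIGHTS = {
    "AU4":  -1.0,   # brow lowerer, assumed here to signal dislike
    "AU12": +0.5,   # lip corner puller (smile), assumed here to signal liking
}

def preference_score(au_intensities: dict[str, float]) -> float:
    """Weighted sum of AU activations as a proxy preference score."""
    return sum(w * au_intensities.get(au, 0.0) for au, w in AU_WEIGHTS.items())

# A reaction with strong brow lowering scores lower than one with a smile.
print(preference_score({"AU4": 2.0, "AU12": 0.1}))  # -1.95
print(preference_score({"AU4": 0.0, "AU12": 1.5}))  #  0.75
```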
Improving Robustness in Motor Imagery Brain-Computer Interfaces
NeurIPS DistShift Workshop, 2021
We propose a novel algorithm called Spectrally Adaptive Common Spatial Patterns (SACSP) that improves Common Spatial Patterns (CSP) by learning a temporal/spectral filter for each spatial filter, so that each spatial filter concentrates on the temporal frequencies most relevant for that user.
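For context, the sketch below shows the standard CSP step that SACSP builds on, using a single fixed band for all filters; the per-spatial-filter spectral adaptation that SACSP adds is omitted:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a: np.ndarray, trials_b: np.ndarray, n_pairs: int = 3):
    """Standard CSP: spatial filters that maximize the variance ratio between
    two classes of (trials, channels, samples) EEG data.

    SACSP additionally learns a temporal/spectral filter per spatial filter;
    that step is omitted in this simplified sketch.
    """
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)  # channel covariance

    cov_a, cov_b = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenvalue problem: cov_a w = lambda (cov_a + cov_b) w
    eigvals, eigvecs = eigh(cov_a, cov_a + cov_b)
    order = np.argsort(eigvals)                               # ascending eigenvalues
    # Take filters from both ends: most discriminative for each class.
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return eigvecs[:, picks].T                                # (2*n_pairs, channels)

# Example with synthetic data: 20 trials, 8 channels, 250 samples each.
rng = np.random.default_rng(0)
a = rng.standard_normal((20, 8, 250))
b = rng.standard_normal((20, 8, 250))
print(csp_filters(a, b).shape)  # (6, 8)
```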