Shuangquan Feng


About

I am a Ph.D. student in the Neurosciences Graduate Program at the University of California San Diego (UCSD), advised by Virginia R. de Sa. I previously received my B.S. in Cognitive Science with a specialization in Machine Learning and Neural Computation, also from UCSD.

My research interests lie in computer vision, cognitive modeling, and computational neuroscience. My current research focuses on facial expression recognition and its applications. Active projects include: (1) using facial expression recognition to help improve image generation algorithms; (2) applying facial expression recognition in education; (3) incorporating facial expression recognition into large language models (LLMs); (4) decoding perceived images/videos from brain activity.

Selected Publications & Preprints
FERGI: Automatic Annotation of User Preferences for Text-to-Image Generation from Spontaneous Facial Expression Reaction
Shuangquan Feng*, Junhua Ma*, Virginia R. de Sa (*equal contribution)
arXiv, 2023
Users' activation of action unit 4 (the brow lowerer) consistently reflects negative evaluations of text-to-image generation results, and this signal can be used to automatically annotate user preferences for images generated by text-to-image generative models.
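
The core idea is that stronger AU4 activation signals a more negative reaction, so the image that elicits the weaker brow-lowering response can be labeled as the preferred one. The Python sketch below illustrates that idea only; the function name, use of peak AU4 intensity, and thresholding rule are illustrative assumptions, not the paper's actual annotation pipeline.

import numpy as np

def annotate_preference(au4_trace_a, au4_trace_b, threshold=0.5):
    """Hypothetical sketch: label which of two generated images a user prefers
    from their spontaneous facial reactions.

    au4_trace_a / au4_trace_b: 1-D arrays of AU4 (brow lowerer) intensity over
    the seconds after each image is shown; `threshold` is an illustrative cutoff.
    Higher AU4 activation is taken as a stronger negative reaction, so the image
    with the weaker AU4 response is labeled as preferred.
    """
    score_a = np.max(au4_trace_a)   # peak AU4 activation while viewing image A
    score_b = np.max(au4_trace_b)   # peak AU4 activation while viewing image B
    if abs(score_a - score_b) < threshold:
        return None                  # reactions too similar to label confidently
    return "A" if score_a < score_b else "B"

# Example with synthetic AU4 traces (arbitrary intensity units)
rng = np.random.default_rng(0)
trace_a = rng.uniform(0.0, 1.0, size=60)   # mild reaction to image A
trace_b = rng.uniform(1.0, 3.0, size=60)   # strong brow lowering to image B
print(annotate_preference(trace_a, trace_b))  # -> "A"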
Improving Robustness in Motor Imagery Brain-Computer Interfaces
NeurIPS DistShift Workshop 2021
We propose a novel algorithm, Spectrally Adaptive Common Spatial Patterns (SACSP), that improves on Common Spatial Patterns (CSP) by learning a temporal/spectral filter for each spatial filter, so that each spatial filter concentrates on the temporal frequencies most relevant to the individual user.
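
The summary above pairs each spatial filter with its own spectral filter. Below is a minimal Python sketch of the underlying building blocks: standard CSP plus a fixed band-pass filter per spatial filter, to show how such a pairing produces features. The learning procedure for the spectral filters (the actual SACSP contribution) is not reproduced here, and the frequency bands, sampling rate, and synthetic data are illustrative assumptions.

import numpy as np
from scipy.linalg import eigh
from scipy.signal import butter, filtfilt

def csp_filters(trials_a, trials_b, n_filters=4):
    """Standard CSP. trials_* have shape (n_trials, n_channels, n_samples).
    Returns spatial filters (columns) from both ends of the eigenvalue spectrum,
    i.e. the most discriminative by variance for each class."""
    cov_a = np.mean([np.cov(t) for t in trials_a], axis=0)
    cov_b = np.mean([np.cov(t) for t in trials_b], axis=0)
    # Generalized eigenvalue problem: cov_a w = lambda (cov_a + cov_b) w
    eigvals, eigvecs = eigh(cov_a, cov_a + cov_b)
    order = np.argsort(eigvals)
    picks = np.concatenate([order[: n_filters // 2], order[-n_filters // 2:]])
    return eigvecs[:, picks]

def bandpass(trial, low, high, fs, order=4):
    """Zero-phase band-pass filter applied along the time axis of one trial."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, trial, axis=-1)

# Illustrative use: one frequency band per spatial filter
# (SACSP learns these spectral filters rather than fixing them in advance).
fs = 250  # sampling rate in Hz (assumed)
bands = [(8, 12), (12, 16), (18, 26), (26, 32)]

rng = np.random.default_rng(0)
trials_a = rng.standard_normal((20, 8, 500))  # synthetic class-A trials
trials_b = rng.standard_normal((20, 8, 500))  # synthetic class-B trials
W = csp_filters(trials_a, trials_b, n_filters=4)

trial = trials_a[0]
features = [np.log(np.var(w @ bandpass(trial, lo, hi, fs)))
            for w, (lo, hi) in zip(W.T, bands)]
print(features)  # log-variance features, one per (spatial filter, band) pair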