From a brief glimpse of a complex scene, we recognize people and objects, their relationships to each other, and the overall gist of the scene, all within a few hundred milliseconds and with no apparent effort. What are the computations underlying this remarkable ability, and how are they implemented in the brain? To address these questions, my research bridges recent advances in machine learning with human behavioral and neural data to provide a computationally precise account of how visual recognition works in humans.
I am currently a postdoctoral researcher at MIT, where I work with Nancy Kanwisher. I completed my PhD at the Max Planck Institute for Biological Cybernetics under the supervision of Isabelle Bülthoff and Johannes Schultz, investigating behavioral and neural correlates of dynamic face perception. During my first postdoc at CNRS-CerCo, working with Leila Reddy and Wei Ji Ma, I used a combination of Bayesian modelling, psychophysics, and neuroimaging to characterize the integration of facial form and motion information during face perception.