I am broadly interested in embedding fundamental physical laws, such as complex light transport and 3D geometry, into computational frameworks and generative algorithms to better understand and interact with the visual world. My work on novel programmable imaging and 3D display systems demonstrates how strong physical priors, combined with programmability, can overcome the limits of traditional sensing and perception. Specifically, my research introduces spatially adaptive cameras and displays, laying the foundation for next-generation machine vision, computational imaging, and immersive displays. My research draws on computer vision, 3D perception, signal processing, optics, and machine learning.
I obtained my B.S. in Computer Science from Columbia University, with a focus on Artificial Intelligence, and my B.A. in Physics from Colgate University. I was a research intern at Meta Reality Labs Display Systems Research (2024, 2025) and at Snap Research Computational Imaging (2020). I was also a software engineering intern at Google Search (2019).
[Aug. 2023] Our demo Split-Lohmann won the Best Demo Award at ICCP 2023!
[Jun. 2023] Our paper Split-Lohmann won the Best Paper Award at SIGGRAPH 2023!
[May 2023] We will showcase the binocular demo for Split-Lohmann at SIGGRAPH 2023 Emerging Technologies, Aug 6-10. The demo will include static, video, and interactive 3D experiences. Join us in Los Angeles!
[May 2023] Our demo Single-Shot VR is accepted to SIGGRAPH 2023 Emerging Technologies!
[May 2023] Our paper Split-Lohmann Multifocal Displays is accepted to SIGGRAPH 2023!
ACM Transactions on Graphics (TOG), IEEE Transactions on Visualization and Computer Graphics (TVCG), Optics Express, SIGGRAPH, International Conference on Computer Vision (ICCV), Computer Vision and Pattern Recognition (CVPR), Association for the Advancement of Artificial Intelligence (AAAI)