Pioneering research at the intersection of deep learning, computer vision, and natural language processing. Committed to advancing AI for social good.
I am an AI researcher with a passion for developing innovative machine learning solutions to complex real-world problems. My research focuses on advancing the frontiers of artificial intelligence through interdisciplinary approaches.
Currently, I lead the AI Research Lab at Stanford University, where my team explores novel architectures for multimodal learning, interpretable AI, and robust machine learning systems.
My work has been recognized with several awards including the NSF CAREER Award and the AAAI Outstanding Paper Award. I'm committed to mentoring the next generation of AI researchers and promoting ethical AI development.
Stanford University | 2015-2019
Thesis: "Advancing Multimodal Learning Through Neural Architecture Search"
Google Brain | 2019-2021
Led research on self-supervised learning for computer vision applications
Stanford University | 2021-Present
Director of the AI Research Lab, focusing on interpretable and robust AI systems
Developing novel architectures that effectively combine vision, language, and other modalities for more comprehensive AI understanding.
Creating methods to make complex neural networks more transparent and explainable without sacrificing performance.
Building machine learning systems that maintain performance under distribution shifts, adversarial attacks, and real-world noise.
Advancing neural language models with better contextual understanding, reasoning capabilities, and multilingual performance.
Developing AI systems that learn through interaction with physical environments, bridging the gap between virtual and real-world learning.
Applying cutting-edge AI techniques to address global challenges in healthcare, education, environmental sustainability, and more.
NeurIPS 2022
We propose a novel attention mechanism that dynamically learns to attend to relevant information across vision and language modalities, achieving state-of-the-art results on multiple benchmarks.
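The paper's mechanism itself is not reproduced here, but the core idea of cross-modal attention — queries from one modality attending over keys and values from the other — can be sketched generically in NumPy. The function name, random weight initialization, and tensor shapes below are illustrative assumptions, not the published architecture:

```python
import numpy as np

def cross_modal_attention(vision_feats, text_feats, d_k, seed=0):
    """Generic single-head cross-attention sketch: text tokens (queries)
    attend over vision regions (keys/values).
    vision_feats: (n_regions, d), text_feats: (n_tokens, d)."""
    rng = np.random.default_rng(seed)
    d = vision_feats.shape[1]
    # Random projections stand in for learned weights (illustration only).
    W_q = rng.standard_normal((d, d_k)) / np.sqrt(d)
    W_k = rng.standard_normal((d, d_k)) / np.sqrt(d)
    W_v = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Q = text_feats @ W_q          # (n_tokens, d_k)
    K = vision_feats @ W_k        # (n_regions, d_k)
    V = vision_feats @ W_v        # (n_regions, d_k)
    # Scaled dot-product scores, then row-wise softmax over vision regions.
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V            # (n_tokens, d_k) attended features
```

In a trained model the projections would be learned jointly across modalities; the sketch only shows the data flow that lets each text token pool information from the most relevant image regions.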
ICML 2021
This work introduces a novel NAS framework that not only discovers high-performing architectures but also provides interpretable insights into why certain architectural choices perform better.
CVPR 2020
We present a new framework for self-supervised learning that is robust to distribution shifts and adversarial perturbations, significantly outperforming previous approaches in challenging real-world scenarios.
AAAI 2019
This paper introduces a new class of dynamic neural networks that adapt their computation based on input complexity, achieving significant efficiency gains without sacrificing accuracy.
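As a rough illustration of input-adaptive computation (not the paper's actual model), an early-exit loop can check an intermediate classifier's confidence after each layer and stop once a threshold is met, so easy inputs use fewer layers. All names, shapes, and the threshold below are hypothetical:

```python
import numpy as np

def early_exit_forward(x, layers, classifiers, threshold=0.9):
    """Run layers sequentially; return class probabilities and the index of
    the layer at which the network exited. Illustrative sketch only."""
    h = x
    for i, (layer, clf) in enumerate(zip(layers, classifiers)):
        h = np.tanh(h @ layer)               # one hidden layer of compute
        logits = h @ clf                     # intermediate classifier head
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()                 # softmax over classes
        if probs.max() >= threshold:
            return probs, i                  # confident enough: exit early
    return probs, len(layers) - 1            # fell through to the last layer
```

The efficiency gain comes from hard inputs alone paying for the full depth, while confident predictions on easy inputs return after the first layer or two.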
Gates Computer Science Building, Room 392
353 Serra Mall, Stanford, CA 94305
(650) 725-1234