Welcome!

I currently work on machine learning research and engineering at Google, focusing on representation learning on structured data, automated feature engineering, and multimodal representation learning. Previously, I worked on similar problems at Microsoft Research AI, including building vision-language representation models to improve core relevance algorithms for Bing. Before that, I worked at Wolfram Research on large-scale natural language processing and data mining of technical literature.

My interests primarily lie in: 1) geometric representation learning, and 2) building deep models to learn language and multimodal representations.

Towards 1), I have developed algorithms for fast optimal transport, graph representation learning, efficient outlier detection in high dimensions, and neural locality-sensitive hashing. Towards 2), I have designed and built a variety of large language models and multimodal representation models. In general, I enjoy applying mathematical principles to improve model building. My papers contain more details on my research.

I studied math at Princeton (undergraduate) and the University of Wisconsin–Madison (graduate).

I am also broadly interested in current research developments across machine learning, and I enjoy designing and implementing new models and algorithms.