I graduated from Yale with a B.S. in Applied Mathematics and a concentration in Computer Science in May 2019. At Yale, I conducted NLP research under the supervision of Professor Dragomir Radev in the Language, Information, and Learning at Yale (LILY) Lab.
Since then, I’ve been doing technical AI safety and alignment research. I co-led a project at the AI Safety Camp that produced the first empirical demonstrations of a new kind of robustness failure: objective robustness failures, in which a system retains its capabilities out of distribution but pursues the wrong objective. I recently wrapped up a summer fellowship at the Center on Long-Term Risk, where I explored what Dennett’s intentional stance and the human brain can teach us about agency in the context of AI safety. Now, I’m applying to PhD programs, where I hope to do empirical ML alignment research.