About
At Stanford, I’m advised by Dan Jurafsky and Christopher Potts. My research is focused on interpretability.
I want to understand how neural networks (particularly language models) work. To that end, I’m especially drawn to causal approaches and insights from linguistics, though I’m not committed to any particular line of work and am always excited to try new things!
I completed my B.S. in Computer Science and Linguistics at Georgetown University, where I worked with Nathan Schneider. I interned at ETH Zürich with Ryan Cotterell working on information theory, as well as at Apple and Redwood Research. (See my CV for more.)
Hit me up on Twitter or at aryamana [at] stanford [dot] edu.
Greatest hits
News
2024-11-04
Just dropped the paper for my rotation project with Noah Goodman: Bayesian scaling laws for in-context learning.
2024-09-26
ReFT will be a spotlight paper at NeurIPS 2024 🇨🇦
2024-08-16
CausalGym won an outstanding paper award at ACL 2024 🇹🇭
2024-06-21
Presented pyvene and IruMozhi at NAACL 2024 🇲🇽
2024-04-05
New interp-inspired ultra-efficient finetuning method out: ReFT (repo, tweet).
2024-03-13
We released the paper for pyvene, a new library for intervening on the internal states of neural networks!
2024-02-19
My first lead-author project as a Ph.D. student is out: CausalGym: Benchmarking causal interpretability methods on linguistic tasks.
2023-09-14
Moved to the San Francisco Bay Area to start my Ph.D. 🫡
2023-07-31
Back from the Leiden University Summer School in Languages and Linguistics in the Netherlands!
2023-02-08
Accepted to the Ph.D. program at Stanford CS!