
Hi! I'm Aryaman.

email scholar twitter github

I am a first-year Ph.D. student at Stanford NLP, currently rotating with Dan Jurafsky and Chris Potts.

I desperately want to understand how language models work. What mechanisms are encoded in their weights that allow human-like command of natural language, recall of facts, and in-context learning? How are these and other skills acquired during training? To that end, I draw inspiration from work in NLP, ML, causality, information theory, and psycholinguistics.

Before coming to Stanford, I completed my B.S. in Computer Science and Linguistics at Georgetown University, where I was mentored by Nathan Schneider. I also work closely with Ryan Cotterell at ETH Zürich. In 2022, I spent the summer at Apple working with Robert Daland and the winter at Redwood Research.

In the old days, I worked largely on computational linguistics.

My résumé and blog should tell you a little more about me. If you want to chat about research (or really anything else), feel free to reach out!

# Selected papers

# News

  • 2023-09-14: Moved to the San Francisco Bay Area.
  • 2023-08-11: Workshop paper on causal tracing for vision-language models accepted at CLVL.
  • 2023-07-31: Back from the Leiden University Summer School in Languages and Linguistics!
  • 2023-05-24: Two workshop papers accepted:
    • Jambu: A historical linguistic database for South Asian languages (SIGMORPHON)
    • Unified syntactic annotation of English in the CGEL framework (LAW)
  • 2023-02-08: Accepted to the Ph.D. program at Stanford CS!
  • 2022-12-18: Headed to Berkeley, CA for a five-week internship at Redwood Research on mechanistic interpretability.