March 2026

I'm working on a research project exploring preference falsification in LLM multi-agent networks: simulating how AI agents navigate the gap between private beliefs and public expression under social pressure. We're building a dual-channel simulation framework in which agents maintain honest internal beliefs but decide what to say publicly based on social costs, testing whether LLM agents reproduce the cascade dynamics predicted by Kuran's theory of preference falsification. I'm building this with a team at USC for CSCI 544, with the goal of publishing.


If you're working on something interesting, I'd love to talk.


For everything else, see my essays and projects.