About
I’m a third-year Ph.D. student in the Department of Linguistics at New York University. I study how neural language models learn language, and how that learning is similar to or different from how humans acquire language. Before coming to NYU, I earned my bachelor’s degree in Linguistics from Yale University; my undergraduate thesis on algebraic generalization in neural networks was advised by Bob Frank as part of the CLAY Lab.
Research Interests
- Applications of pure mathematics to linguistics and machine learning. How can we precisely describe how difficult a given task is for a neural language model? What are the mathematical properties of natural language when viewed as an algebraic object? Can we quantify how difficult natural language is for a given neural architecture?
- Properties of neural network generalization. How do neural networks generalize to new inputs? What inductive biases do neural architectures have, and what biases induce human-like generalization? How can we build neural networks with data efficiency comparable to humans? Can we guarantee that neural networks will generalize in a consistent, interpretable, and safe manner? How do neural networks learn to generalize compositionally?
- NLP for Jewish languages. How can we build the best NLP systems for Hebrew, Yiddish, and Aramaic? How can we improve tokenization for non-concatenative morphology?
Contact Information
- NYU Email
petty@nyu.edu
- Permanent Email
research@jacksonpetty.org
- Office
Room 507, 10 Washington Place
New York, NY 10003
Elsewhere on the Internet
- GitHub
@jopetty
- arXiv
petty_j_1
- VSCO
@jowenpetty
- Mastodon
@jowenpetty@mastodon.social
- YouTube
@jacksonpetty
- Google Scholar
Jackson Petty
- Semantic Scholar
Jackson Petty
- ORCID
0000-0002-9492-0144
- LinkedIn
in/jackson-petty
Colophon
This site is built with Hugo and hosted on GitHub Pages. Type is set in Sebastian Kosch’s Cochineal, Matthew Butterick’s Heliotrope, and Neil Panchal’s Berkeley Mono.