Jackson Petty - Archive · Zola · https://jacksonpetty.org/feed/atom.xml

Nearer to G-d are We (2022-04-12) · https://jacksonpetty.org/feed/writing/nearer/
A brief reflection on the roots of sacrifice for Pesaḥ. Published in the “Passover 2022” issue of Shibboleth.
<hr />
<p>In observance of Pesaḥ, we read in the coming days of the deliverance from Egypt, the plagues sent upon Pharaoh, and, centrally, the Passover Sacrifice offered by our ancestors to G-d in the struggle for freedom and self-determination. Sacrifice—<span lang="he">קרבן</span> <em>korban</em> in Hebrew—is a recurring motif through Tanakh, though its practice is one that holds little salience for Jews today, save for the learning we read each morning of its practice and the prayers we offer in its place. Its ritual seems to many arcane and prosaic, distant indeed from the daily practice of contemporary Judaism, which substitutes prayer for offerings of animals and grain. What, then, can we learn from studying these rites whose direct relevance is contingent on the consecration of a new temple?</p>
<p>If you’ll permit a linguist his indulgences, I believe an answer—one among many—may be found in the word <span lang="he">קרבן</span> <em>korban</em> itself, and in its various senses. We most commonly translate <em>korban</em> as “offering” or its Latinate equivalent <em>sacrifice</em>, from the combination of <em>sacer</em> and <em>facere</em>, meaning literally “to make sacred.” This idea is not foreign to Judaism, and is found quite liberally throughout our prayers and liturgy, from the <em>asher kiddeshanu</em> of our daily blessings to the <em>mekadeish</em> of the Kiddush spoken every Shabbat. These holy phrases all share a root: <span lang="he">קד״ש</span>, meaning “holy,” found also in the name of the temple itself: <em>beit hamikdash</em>. Yet <em>korban</em> does not derive from this root, but from another: <span lang="he">קר״ב</span>. Students of Hebrew may recognize this root from the common adjective <span lang="he">קרוב</span> <em>karov</em>, meaning “near.”</p>
<p>The <em>korbanot</em> then are not only sacrifices offered to G-d in the hopes of making life more holy; they are instruments of intimacy which draw us nearer, as a people and as individuals. But nearer to what? Certainly, in one sense, to G-d. In offering sacrifices, we give back a small token of thanks for what G-d gives us and in doing so bind ourselves closer to our creator. But equally, we are drawn closer to one another. In action, in prayer, and in love, we become closer to each other. As a nation, as a people, and as a community.</p>
<p>We no longer have a temple at which to offer <em>korbanot</em>, and we will have to wait until redemption to offer at its altars once again. For the past several years, our own community has struggled without its home. But in the absence of place, we still have ourselves and our dedication to better the world in which we live. In our actions and prayers, we draw ourselves closer, to G-d and each other.</p>
Do Language Models Learn Position-Role Mappings? (2022-01-30) · https://jacksonpetty.org/feed/publications/petty-do-2022/
<strong>Abstract.</strong> How is knowledge of position-role mappings in natural language learned? We explore this question in a computational setting, testing whether a variety of well-performing pretrained language models (BERT, RoBERTa, and DistilBERT) exhibit knowledge of these mappings, and whether this knowledge persists across syntactic, structural, and lexical alternations. In Experiment 1, we show that these neural models do indeed recognize distinctions between theme and recipient roles in ditransitive constructions, and that these distinct patterns are shared across construction types. We strengthen this finding in Experiment 2 by showing that fine-tuning these language models on novel theme- and recipient-like tokens in one paradigm allows the models to make correct predictions about their placement in other paradigms, suggesting that the knowledge of these mappings is shared rather than independently learned. We do, however, observe some limitations of this generalization when tasks involve constructions with novel ditransitive verbs, hinting at a degree of lexical specificity which underlies model performance.
<hr />
<pre><code>@inproceedings{petty-2022-do,
title="Do Language Models Learn Position-Role Mappings?",
author="Petty, Jackson and Wilson, Michael and Frank, Robert",
booktitle="BUCLD 46: Proceedings of the 46th annual Boston University Conference on Language Development",
volume="2",
pages="657--671",
year="2022",
url="http://www.lingref.com/bucld/46/BUCLD46-50.pdf"
}
</code></pre>
The Optimal Double Bubble for Density $r^p$ (2021-12-28) · https://jacksonpetty.org/feed/publications/hirsch-optimal-2021/
<strong>Abstract.</strong> In 2008 Reichardt proved that the optimal Euclidean double bubble—the least-perimeter way to enclose and separate two given volumes—is three spherical caps meeting along a sphere at $120$ degrees. We consider $\mathbb{R}^n$ with density $r^p$, joining the surge of research on manifolds with density after their appearance in Perelman’s 2006 proof of the Poincaré Conjecture. Boyer et al. proved that the best single bubble is a sphere through the origin. We conjecture that the best double bubble is the Euclidean solution with the singular sphere passing through the origin, for which we have verified equilibrium (first variation or “first derivative” zero). To prove the exterior of the minimizer connected, it would suffice to show that least perimeter is increasing as a function of the prescribed areas. We give the first direct proof of such monotonicity in the Euclidean plane. Such arguments were important in the 2002 Annals proof of the double bubble in Euclidean $3$-space.
<hr />
<pre><code>@article{hirsch-2021-optimal,
title="The Optimal Double Bubble for Density $r^p$",
author="Hirsch, Jack and Li, Kevin and Petty, Jackson and Xue, Christopher",
journal="Rose-Hulman Undergraduate Mathematics Journal",
volume="22",
issue="2",
number="4",
year="2021",
url="https://scholar.rose-hulman.edu/rhumj/vol22/iss2/4/"
}
</code></pre>
Transformers Generalize Linearly (2021-09-24) · https://jacksonpetty.org/feed/publications/petty-transformers-2021/
<strong>Abstract.</strong> Natural language exhibits patterns of hierarchically governed dependencies, in which relations between words are sensitive to syntactic structure rather than linear ordering. While recurrent network models often fail to generalize in a hierarchically sensitive way (McCoy et al., 2020) when trained on ambiguous data, the improvement in performance of newer Transformer language models (Vaswani et al., 2017) on a range of syntactic benchmarks trained on large data sets (Goldberg, 2019; Warstadt et al., 2019) opens the question of whether these models might exhibit hierarchical generalization in the face of impoverished data. In this paper we examine patterns of structural generalization for Transformer sequence-to-sequence models and find that not only do Transformers fail to generalize hierarchically across a wide variety of grammatical mapping tasks, but they exhibit an even stronger preference for linear generalization than comparable recurrent networks.
<hr />
Certain hyperbolic regular polygonal tiles are isoperimetric (2021-02-20) · https://jacksonpetty.org/feed/publications/hirsch-certain-2021/
<hr />
<pre><code>@article{hirsch-2021-certain,
title="Certain hyperbolic regular polygonal tiles are isoperimetric",
author="Hirsch, Jack and Li, Kevin and Petty, Jackson and Xue, Christopher",
journal="Geometriae Dedicata",
volume="214",
number="1",
pages="65--77",
year="2021",
publisher="Springer"
}
</code></pre>
Sequence to sequence networks learn the meaning of reflexive anaphora (2020-11-02) · https://jacksonpetty.org/feed/publications/frank-sequence-2020/
<strong>Abstract.</strong> Reflexive anaphora present a challenge for semantic interpretation: their meaning varies depending on context in a way that appears to require abstract variables. Past work has raised doubts about the ability of recurrent networks to meet this challenge. In this paper, we explore this question in the context of a fragment of English that incorporates the relevant sort of contextual variability. We consider sequence-to-sequence architectures with recurrent units and show that such networks are capable of learning semantic interpretations for reflexive anaphora which generalize to novel antecedents. We explore the effect of attention mechanisms and different recurrent unit types on the type of training data that is needed for success as measured in two ways: how much lexical support is needed to induce an abstract reflexive meaning (i.e., how many distinct reflexive antecedents must occur during training) and what contexts must a noun phrase occur in to support generalization of reflexive interpretation to this noun phrase?
<hr />
<pre><code>@inproceedings{frank-petty-2020-sequence,
title = "Sequence-to-Sequence Networks Learn the Meaning of Reflexive Anaphora",
author = "Frank, Robert and Petty, Jackson",
booktitle = "Proceedings of the Third Workshop on Computational Models of Reference, Anaphora and Coreference",
month = dec,
year = "2020",
address = "Barcelona, Spain (online)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2020.crac-1.16",
pages = "154--164",
}
</code></pre>
Optimal monohedral tilings of hyperbolic surfaces (2019-11-11) · https://jacksonpetty.org/feed/publications/digiosa-optimal-2019/
<strong>Abstract.</strong> The hexagon is the least-perimeter tile in the Euclidean plane for any given area. On hyperbolic surfaces, this ‘isoperimetric’ problem differs for every given area, as solutions do not scale. Cox conjectured that a regular $k$-gonal tile with $120$-degree angles is isoperimetric. For area $\pi/3$, the regular heptagon has $120$-degree angles and therefore tiles many hyperbolic surfaces. For other areas, we show the existence of many tiles but provide no conjectured optima. On closed hyperbolic surfaces, we verify via a reduction argument using cutting and pasting transformations and convex hulls that the regular $7$-gon is the optimal $n$-gonal tile of area $\pi/3$ for $3\leq n\leq 10$. However, for $n>10$, it is difficult to rule out non-convex $n$-gons that tile irregularly.
<hr />