Transformers Generalize Linearly

Jackson Petty and Robert Frank

[arXiv]

Abstract

Natural language exhibits patterns of hierarchically governed dependencies, in which relations between words are sensitive to syntactic structure rather than linear ordering. While recurrent network models often fail to generalize in a hierarchically sensitive way (McCoy et al., 2020) when trained on ambiguous data, the improvement in performance of newer Transformer language models (Vaswani et al., 2017), trained on large data sets, on a range of syntactic benchmarks (Goldberg, 2019; Warstadt et al., 2019) opens the question of whether these models might exhibit hierarchical generalization in the face of impoverished data. In this paper we examine patterns of structural generalization for Transformer sequence-to-sequence models and find that not only do Transformers fail to generalize hierarchically across a wide variety of grammatical mapping tasks, but they exhibit an even stronger preference for linear generalization than comparable recurrent networks.
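The sketch below is a toy illustration of the hierarchical-versus-linear distinction, in the style of the question-formation task discussed by McCoy et al. (2020); it is not drawn from the paper's actual data, and the auxiliary list and main-clause index are hand-supplied assumptions standing in for real syntactic analysis.

```python
# Toy illustration of an ambiguous grammatical mapping task: English question
# formation. On simple sentences a linear rule and a hierarchical rule agree,
# so training data are ambiguous; sentences with relative clauses tease them apart.

AUXILIARIES = {"will", "can", "does"}

def linear_rule(tokens):
    """Front the FIRST auxiliary in the string (linear generalization)."""
    i = next(i for i, t in enumerate(tokens) if t in AUXILIARIES)
    return [tokens[i]] + tokens[:i] + tokens[i + 1:]

def hierarchical_rule(tokens, main_aux_index):
    """Front the MAIN-clause auxiliary (hierarchical generalization).
    The main-clause auxiliary index is given by hand here, standing in for
    the structural analysis a parser would provide."""
    i = main_aux_index
    return [tokens[i]] + tokens[:i] + tokens[i + 1:]

# Training-style example: both rules yield the same output.
simple = "the walrus will sleep".split()
assert linear_rule(simple) == hierarchical_rule(simple, 2)

# Test-style example with a relative clause: the rules diverge.
complex_ = "the walrus that can swim will sleep".split()
print(" ".join(linear_rule(complex_)))           # can the walrus that swim will sleep
print(" ".join(hierarchical_rule(complex_, 5)))  # will the walrus that can swim sleep
```

A model that has only seen the ambiguous, simple cases reveals its inductive bias on the complex cases: producing the first output reflects linear generalization, the second hierarchical generalization.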
