Since August 2022 I have been a Data Scientist/Language Engineer on the Transcribe team at AWS! I work on the NLU side of things, focusing on the NLP tasks of Question Answering and Summarization. My work involves acquiring high-quality language data and training conversational AI models based on LLMs.

I received my PhD in psycholinguistics from UMass Amherst in 2022, primarily under the supervision of Dr. Adrian Staub and Dr. Brian Dillon. Before that, I received my BA from the University of Maryland.

Check out my Research and/or Publications and Presentations pages to read about my work!

Research

I study sentence processing, drawing insights from linguistic theory, cognitive science, and natural language processing (NLP). My research can be broken down into the following interconnected streams:

Predictability in Humans and Language Models

We can often guess what words are and are not likely to show up next in a sentence. My dissertation is a deep dive into this phenomenon.

I study two aspects of this: what is being predicted, and what information is used to generate those predictions? I've used EEG, eye tracking while reading, and a battery of behavioral experiments to answer these questions.

I am also deeply interested in comparing predictions that people make to predictions that computers (Language Models) make.
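
To make that comparison concrete, here is a minimal sketch of querying a language model for its next-word predictions, which can then be set against human cloze responses or reading measures. It assumes Python with PyTorch and the Hugging Face transformers library, with GPT-2 as a stand-in model; none of these specifics come from the research itself.

  # Toy illustration: next-word predictions from a causal language model.
  # GPT-2 and Hugging Face transformers are assumptions chosen for example only.
  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("gpt2")
  model = AutoModelForCausalLM.from_pretrained("gpt2")
  model.eval()

  context = "The children went outside to"
  inputs = tokenizer(context, return_tensors="pt")

  with torch.no_grad():
      logits = model(**inputs).logits

  # Probability distribution over the vocabulary for the next token.
  next_token_probs = torch.softmax(logits[0, -1], dim=-1)

  # The top five candidate continuations, loosely analogous to the most
  # frequent completions people produce in a cloze task.
  top = torch.topk(next_token_probs, k=5)
  for prob, idx in zip(top.values, top.indices):
      print(f"{tokenizer.decode(idx)!r}: {prob.item():.3f}")

From the same distribution one can also read off the surprisal of the word that actually appeared, a quantity commonly compared against human reading times and EEG responses.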

Thematic Role Processing

Sentences are rich with information for us to extract and interpret. One core piece of that information is how the noun(s) in a sentence relate to the verb(s), i.e., who is doing what to whom. While these thematic relations are often ultimately unambiguous, they can be challenging to interpret incrementally.

A lot of my graduate research investigated which intermediate representations arise in clauses with non-canonical word orders, like passives and relative clauses, and how we ultimately understand them. In a passive like "The cat was chased by the dog," for instance, the first noun is not the doer of the action, even though first nouns usually are.

The Implementation of the Binding Principles

Syntacticians and semanticists have formulated various rules and constraints that ensure that when we encounter a sentence with a pronoun or a reflexive, we are guided toward particular interpretations. For example, we are dissuaded from interpreting "her" as referring back to "Callie" in sentence 1, but not in sentence 2:

  1. Callie loves her. (her ≠ Callie)
  2. Callie wonders if Nora loves her. (her = Callie)

"Principle B," which postulates which entities a pronoun such as "her" in the example above can refer back to, is one such descriptive rule linguists have come up with. In work with Dr. Brian Dillon, I am investigating how exactly this rule is implemented algorithmically by using the visual world paradigm.

Publications and Presentations

Clicking a citation will either download a PDF, take you to the journal, or take you to an OSF repository.

Doctoral Dissertation

Burnsky, Jon, "What Did You Expect? An Investigation of Lexical Preactivation in Sentence Processing" (2022)

Peer-Reviewed Publications

Conference Presentations

Download my CV here!

jburnsky{at}umass{dot}edu