LMSS @ Cornell Tech: Sam Bowman (New York University)
Task-Independent Language Understanding
This talk deals with the goal of task-independent language understanding: building machine learning models that can learn to do most of the hard work of language understanding before they see a single example of the task they're meant to solve, in service of making the best modern NLP systems both better and more data-efficient. I'll survey the (dramatic!) progress that the NLP research community has made toward this goal in the last year. In particular, I'll dwell on GLUE and SuperGLUE, two open-ended shared-task competitions that measure progress toward this goal on sentence understanding tasks, and I'll preview a few recent analysis papers that attempt to offer a bit of perspective on this progress.
Speaker Bio
Sam Bowman has been on the faculty at NYU since 2016, when he completed his PhD with Chris Manning and Chris Potts at Stanford. At NYU, Sam is jointly appointed between the new school-level Center for Data Science, which focuses on machine learning, and the Department of Linguistics, and he is also a co-PI of the CILVR machine learning lab and an affiliate member of the Courant Institute's Department of Computer Science. Sam's research focuses on data, evaluation techniques, and modeling techniques for sentence and paragraph understanding in natural language processing, and on applications of machine learning to scientific questions in linguistic syntax and semantics. Sam organized a twenty-three-person research team at JSALT 2018 and received a 2015 EMNLP Best Resource Paper Award and a 2017 Google Faculty Research Award.