The resurgence of deep neural networks has resulted in impressive advances in natural language processing (NLP). However, this success depends on access to large amounts of structured supervision, which is often manually constructed and unavailable for many applications and domains. In this talk, I will present novel computational models that integrate reinforcement learning with language understanding to induce grounded representations of semantics from unstructured feedback. These techniques not only enable task-optimized representations that reduce dependence on high-quality annotations, but also exploit language to adapt control policies across different environments. First, I will describe an approach for learning to play text-based games, where all interaction is through natural language and the only source of feedback is in-game rewards. Second, I will present a framework that utilizes textual descriptions to assist cross-domain policy transfer for reinforcement learning. Finally, I will demonstrate how reinforcement learning can enhance traditional NLP systems in low-resource scenarios. In particular, I will describe an autonomous agent that learns to acquire and integrate external information to improve information extraction.
Karthik Narasimhan is a PhD candidate working with Prof. Regina Barzilay at CSAIL, MIT. His research interests are in natural language understanding and deep reinforcement learning. His current focus is on developing autonomous systems that acquire language understanding through interaction with their environment while also utilizing textual knowledge to drive their decision making. His work received a best paper award at EMNLP 2016 and an honorable mention for best paper at EMNLP 2015. Karthik received a B.Tech. in Computer Science and Engineering from IIT Madras in 2012 and an S.M. in Computer Science from MIT in 2014.