Ushering Machines into the World of Human Knowledge

By Doug Hulette
Photos by David Kelly Crow

 
Danqi Chen stands at the intersection of machine learning and everyday language, and she is paving a fast lane toward a long-sought goal: opening the full expanse of human knowledge to computers that can think like people do — or maybe better.

Danqi Chen

Chen, who joined Princeton’s Computer Science department this fall as an assistant professor, works in natural language processing, or NLP, a fast-moving field that uses artificial intelligence to create machines that can not only read documents written by humans but also assimilate and manipulate the knowledge those documents contain.

With free rein to explore the internet, such machines will not only have access to vast stores of human knowledge but also the power to comprehend, reason, and make decisions and judgments with little or no outside guidance. She calls it “deep understanding.”

“There is a tremendous amount of information and knowledge stored in free text that has been generated by humans,” says Chen, who earned her doctorate in computer science from Stanford in 2018. “We hope to build machines that are able to search for information effectively, answer our questions, summarize their findings, and even converse with us when we seek information.”

Danqi Chen advising Zexuan Zhong, a computer science graduate student.

“Machines and humans have very different capabilities,” she says. “We humans are good at logical reasoning and identifying subtle nuances of language and its implications, while machines are good at processing huge amounts of data at scale.”

Combining the two is no simple task. “Teaching machines to understand human language documents is one of the most elusive and long-standing challenges in artificial intelligence,” Chen wrote in her Ph.D. thesis.

“Language is used to describe and represent the world, and we have to go deep into the complexity of the world as well as the complexity of languages,” she says. “Even if we build systems to understand a novel, for instance, those systems must be able to see the imaginary world inside.”

“This includes understanding factual knowledge, conceptual knowledge, and also procedural knowledge. I work on things like how to extract knowledge from the text, how to reason between text and structured knowledge, and how to use structured knowledge in our language representations.” She concedes, though, that the goal remains distant. “I think we are still very far away from encoding and representing knowledge of all these types to improve computers’ understanding abilities.”

Chen, who spent seven months visiting Facebook AI Research (FAIR) and the University of Washington before coming to Princeton, says she finds NLP fascinating because it lies at the intersection of concepts from mathematics and statistics and theories from linguistics, with close connections to the humanities and social sciences.

“When I was younger, I was actually very interested in humanities subjects, but I also excelled at mathematics. Later I learned about programming, and I spent lots of time coding algorithms and thinking about efficient solutions to logical or mathematical problems. When I started doing research in NLP in graduate school, I immediately realized it would really be the best of both worlds for me.”

Danqi Chen at a blackboard with Zexuan Zhong.

Her research is based on two watchwords — simplicity and practicality. “I am most excited about those simple yet fundamental approaches that actually work in practice. I deeply care about building practical systems (and always have fun with that), and I hope that my research results are not just a demonstration of nice ideas but can be useful and viable in real applications,” she says. “If we can build high-performing reading-comprehension systems, the technology will be crucial for applications such as question answering and dialogue systems.” 

She and Karthik Narasimhan, who came to Princeton in fall 2018, lead the Princeton Natural Language Processing group, which also includes Professor Sanjeev Arora. “This was also one reason that I was very excited to join Princeton — we are building a new group from scratch,” she says.

Arora describes Chen as “one of the young stars in natural language processing, especially in applying deep learning to central problems such as parsing, knowledge-base completion, and design of agents for conversation and question-answering. She is also a gifted teacher who has won awards for her teaching as a grad student at Stanford. Her arrival has proved a big boost to our research and teaching in natural language processing.”
