The event will take place at George Mason University, Van Metre Hall (Arlington Campus), Room 125/126. See Directions for more details. Video recordings of all invited talks can be found at the end of this page.

Schedule

Invited Speakers

Rachel Rudinger
University of Maryland, College Park

Title:
“Not so fast!”: Revisiting assumptions in (and about) Natural Language Reasoning
Abstract:
In recent years, the field of Natural Language Processing has seen a profusion of tasks, datasets, and systems that facilitate reasoning about real-world situations through language (e.g., RTE, MNLI, COMET). Such systems might, for example, be trained to consider a situation where “somebody dropped a glass on the floor,” and conclude it is likely that “the glass shattered” as a result. In this talk, I will discuss three pieces of work that revisit assumptions made by or about these systems. In the first work, I develop a Defeasible Inference task, which enables a system to recognize when a prior assumption it has made may no longer be true in light of new evidence it receives. The second work I will discuss revisits partial-input baselines, which have highlighted issues of spurious correlations in natural language reasoning datasets and led to unfavorable assumptions about models’ reasoning abilities. In particular, I will discuss experiments that show models may still learn to reason in the presence of spurious dataset artifacts. Finally, I will touch on work analyzing harmful assumptions made by reasoning models in the form of social stereotypes, and how these stereotypes may be uncovered automatically.
Brief Bio:
Rachel Rudinger is an Assistant Professor in the Department of Computer Science at the University of Maryland, College Park. She holds joint appointments in the Department of Linguistics and the Institute for Advanced Computer Studies (UMIACS). In 2019, Rachel completed her Ph.D. in Computer Science at Johns Hopkins University in the Center for Language and Speech Processing. From 2019 to 2020, she was a Young Investigator at the Allen Institute for AI in Seattle and a visiting researcher at the University of Washington. Her research interests include computational semantics, common-sense reasoning, and issues of social bias and fairness in NLP.

Graham Neubig
Carnegie Mellon University/Inspired Cognition

Title:
Is My NLP Model Working? The Answer is Harder Than You Think
Abstract:
As natural language processing now permeates many different applications, its practical use is unquestionable. At the same time, however, NLP is still imperfect, and errors cause everything from minor inconveniences to major PR disasters. Better understanding when our NLP models work and when they fail is critical to the efficient and reliable use of NLP in real-world scenarios. So how can we do so? In this talk I will discuss two issues: automatic evaluation of generated text, and automatic fine-grained analysis of NLP system results, which are first steps toward a science of NLP model evaluation.
Brief Bio:
Graham Neubig is an associate professor at the Language Technologies Institute of Carnegie Mellon University and CEO of Inspired Cognition. His research focuses on natural language processing, particularly multilingual NLP, natural language interfaces to computers, and machine learning methods for NLP system building and evaluation. His ultimate goal is that every person in the world should be able to communicate with each other, and with computers, in their own language. He also contributes to making NLP research more accessible through open publishing of research papers, advanced NLP course materials and video lectures, and open-source software, all of which are available on his website.

Nazli Goharian
Georgetown University

Title:
NLP Applications in Mental Health
Abstract:
With the ever-increasing use of social media to either explicitly seek help or simply share thoughts and feelings, we in the computational disciplines have the opportunity to use such data to build datasets and models and to perform analysis. I will share collaborative work done at the Information Retrieval Lab at Georgetown University on detecting and summarizing mental health concerns in social media posts. The first application concerns a dedicated mental health forum, with the goal of triaging the severity of users’ posts to detect the potential for self-harm early. On the second type of platform, i.e., non-dedicated platforms, we focus on whether we can detect if a user is suffering from any of nine mental health conditions using only the *general language* of the user; that is, the posts are not in mental health [sub]forums and contain no mental health related words. Detecting mental health conditions from a relatively small number of posts is generally not promising; hence it is important to adapt our approaches so that such low-resource users do not go undetected. We show that changing the sensitivity of ML models by adjusting the decision boundary threshold based on the volume of available data improves the detection rate. Identifying whether a user’s mental health condition is recent, and similarly whether the user currently suffers from it, are important yet challenging tasks, as our efforts have shown. Finally, to reduce the reading and processing load that users’ posts place on moderators and counselors, I will present our efforts to summarize the posts into short forms.
Brief Bio:
Nazli Goharian is Clinical Professor of Computer Science and Associate Director of the Information Retrieval Lab at Georgetown University, which she co-founded in 2010. She joined the Illinois Institute of Technology (IIT) from industry in 2000. Her research and doctoral student mentorship span the domains of information retrieval, text mining, and natural language processing. Specifically, her interest lies in humane-computing applications, such as those in the medical/health domain. Jointly with her doctoral students, she received the EMNLP 2017 Best Long Paper Award and a COLING 2018 Honorable Mention, both for papers on mental health and social media. For contributions to undergraduate and graduate curriculum development and teaching excellence, she was recognized with the IIT Julia Beveridge Award for faculty (university-wide female faculty of the year) in 2009; the College of Science and Letters Dean’s Excellence Award in Teaching in 2005; and the Computer Science Department Teacher of the Year Award in 2002, 2003, and 2007. She has served as a senior/area chair at multiple ACL conferences. She has chaired SIGIR Women in Information Retrieval (WIR) since 2019, focusing on gender pay inequity and women’s leadership.

Recording

Title:
Opening Remarks
Title:
“Not so fast!”: Revisiting assumptions in (and about) Natural Language Reasoning
Title:
Is My NLP Model Working? The Answer is Harder Than You Think
Title:
NLP Applications in Mental Health

For student talks, please check our YouTube channel.
Title:
Closing Remarks