17th Workshop on Innovative Use of NLP for Building Educational Applications


Quick Info
Co-located with NAACL 2022
Location Seattle/hybrid
Deadline Friday, April 8, 2022 (extended from Friday, April 1, 2022)
Date July 15, 2022
Organizers Ekaterina Kochmar, Jill Burstein, Andrea Horbach, Ronja Laarmann-Quante, Nitin Madnani, Anaïs Tack, Victoria Yaneva, Zheng Yuan, and Torsten Zesch
Contact bea.nlp.workshop@gmail.com

Workshop Description

The BEA Workshop is a leading venue for NLP innovation in the context of educational applications. It is one of the largest one-day workshops in the ACL community, with over 100 registered attendees in the past several years. The growing interest in educational applications and the diverse community of researchers involved led to the creation of the Special Interest Group in Educational Applications (SIGEDU) in 2017, which currently has 240 members.

The workshop’s continuing growth highlights the alignment between societal needs and technological advances: for instance, BEA16 in 2021 hosted a panel discussion on New Challenges for Educational Technology in the Time of the Pandemic, addressing the pressing issues around COVID-19. NLP capabilities can now support an array of learning domains, including writing, speaking, reading, science, and mathematics, as well as the related intra-personal (e.g., self-confidence) and inter-personal (e.g., peer collaboration) skills. Within these areas, the community continues to develop and deploy innovative NLP approaches for use in educational settings. Another indicator of the growing importance of educational applications within the CL community is the series of shared-task competitions organized by the BEA workshop over the past several years, including four shared tasks on grammatical error detection and correction alone. These shared tasks have also opened up new areas of research, such as the Automated Evaluation of Scientific Writing at BEA11, Native Language Identification at BEA12, Second Language Acquisition Modelling at BEA13, and Complex Word Identification at BEA13. These competitions have increased the visibility of, and interest in, our field.

The 17th BEA workshop will have keynotes by Klinton Bicknell (Duolingo) and Alexandra Cristea (University of Durham), an invited paper presentation by a member of one of the educational societies from the International Alliance to Advance Learning in the Digital Era (IAALDE), oral presentation sessions, a large poster session to maximize the amount of original work presented, and a panel discussion to encourage interaction between workshop participants. We expect that the workshop will continue to highlight novel technologies and opportunities for educational NLP in English as well as other languages. The workshop will solicit both full papers and short papers for either oral or poster presentation.

We will solicit papers that incorporate NLP methods, including, but not limited to:

  • automated scoring of open-ended textual and spoken responses;
  • automated scoring/evaluation for written student responses (across multiple genres);
  • game-based instruction and assessment;
  • educational data mining;
  • intelligent tutoring;
  • collaborative learning environments;
  • peer review;
  • grammatical error detection and correction;
  • learner cognition;
  • spoken dialog;
  • multimodal applications;
  • annotation standards and schemas;
  • tools and applications for classroom teachers, learners and/or test developers; and
  • use of corpora in educational tools.

Workshop Program

Invited Talks

The workshop will have two keynote presentations and one ambassador paper presentation from the 2021 Annual Meeting of the ISLS (International Society of the Learning Sciences), a member society of the IAALDE (International Alliance to Advance Learning in the Digital Era).


Keynote K. Bicknell (Duolingo)

Klinton Bicknell, Duolingo
ML and NLP for Language Learning at Scale

Abstract: As scalable learning technologies become ubiquitous, they generate large amounts of student data, which can be used with machine learning and NLP to develop new instructional technologies, such as personalized practice schedules and adaptive lessons. Additionally, machine learning and NLP are uniquely poised to solve the problems inherent in scaling language instruction to a large number of languages and courses. In this talk, I will describe several projects illustrating these two uses of ML and NLP in language learning at scale at Duolingo, the world’s largest language education platform with over 100 courses and around 40 million monthly active learners.

Bio: Klinton Bicknell is a staff research scientist at Duolingo, where he leads the Learning AI Lab. He works at the intersection of machine learning and cognitive science, and his research has been published in venues including ACL, PNAS, NAACL, Psychological Science, EDM, CogSci, and Cognition. Prior to Duolingo, he was an assistant professor at Northwestern University.


Keynote A. Cristea (Durham University)

Alexandra Cristea, Durham University
Aspects of Learning Analytics

Abstract: My favourite definition of Learning Analytics (LA) is Erik Duval’s: LA means “collecting traces that learners leave behind and using those traces to improve learning”, and I’ll tell you more about why during my talk. Whilst the term LA was coined relatively recently (2011), it is a growing area of interest with immediate practical application, as well as a growing research area, bringing together many classic and cutting-edge methodologies, such as statistics, data mining, machine learning (including deep learning), network analysis and visualisation. This talk will bring together an understanding of LA as an emerging discipline and research area, present new research directions in LA, such as applications in gamification, explainable AI, predicting certification of students, and urgent instructor intervention (where we do use a bit of NLP), and further predict the development and maturity of this area as a whole.

Bio: Alexandra I. Cristea is Professor, Deputy Head, Director of Research and Head of the Artificial Intelligence in Human Systems research group in the Department of Computer Science at Durham University. She is also an Advisory Board Member at Ustinov College, the N8 CIR Digital Humanities team lead for Durham, and an Honorary Professor in the Computer Science Department at Warwick University. Her research includes web science, learning analytics, user modelling and personalisation, the semantic web, the social web, and authoring. Her work on frameworks for adaptive systems has been especially influential and is highly cited.


Ambassador Paper J. Fiacco (CMU)

James Fiacco, Carnegie Mellon University
Taking Transactivity to the Next Level

Abstract: Transactivity is a valued collaborative process, which has been associated with elevated learning gains, collaborative product quality, and knowledge transfer within teams. Dynamic forms of collaboration support have made use of real-time monitoring of transactivity, and automation of its analysis has been affirmed as valuable to the field. Early models were able to achieve high reliability within restricted domains. More recent approaches have achieved a level of generality across learning domains. In this study, we investigate the generalizability of models developed primarily in computer science courses to a new student population, namely master’s students in a leadership course, where we observe strikingly different patterns of transactive exchange than in prior studies. This difference prompted both a reformulation of the coding standards and innovation in the modeling approach, both of which we report on here.


Accepted Papers

We received 66 submissions and accepted 31 papers (acceptance rate: 47%).

  • A Baseline Readability Model for Cebuano, Joseph Marvin Imperial, Lloyd Lois Antonie Reyes, Michael Antonio Ibanez, Ranz Sapinit and Mohammed Hussien [paper] [resources]
  • A Dependency Treebank of Spoken Second Language English, Kristopher Kyle, Masaki Eguchi, Aaron Miller and Theodore Sither [paper] [resources]
  • ALEN App: Argumentative Writing Support To Foster English Language Learning, Thiemo Wambsganss, Andrew Caines and Paula Buttery [paper]
  • Activity focused Speech Recognition of Preschool Children in Early Childhood Classrooms, Satwik Dutta, Dwight Irvin, Jay Buzhardt and John H.L. Hansen [paper]
  • An Evaluation of Binary Comparative Lexical Complexity Models, Kai North, Marcos Zampieri and Matthew Shardlow [paper]
  • Assessing sentence readability for German language learners with broad linguistic modeling or readability formulas: When do linguistic insights make a difference?, Zarah Weiss and Detmar Meurers [paper]
  • Automatic True/False Question Generation for Educational Purpose, Bowei Zou, Pengfei Li, Liangming Pan and Ai Ti Aw [paper]
  • Automatic scoring of short answers using justification cues estimated by BERT, Shunya Takano and Osamu Ichikawa [paper]
  • Automatically Detecting Reduced-formed English Pronunciations by Using Deep Learning, Lei Chen, Chenglin Jiang, Yiwei Gu, Yang Liu and Jiahong Yuan [paper]
  • Computationally Identifying Funneling and Focusing Questions in Classroom Discourse, Sterling Alic, Dorottya Demszky, Zid Mancenido, Jing Liu, Heather Hill and Dan Jurafsky [paper] [resources]
  • Cross-corpora experiments of automatic proficiency assessment and error detection for spoken English, Stefano Bannò and Marco Matassoni [paper]
  • Don’t Drop the Topic - The Role of the Prompt in Argument Identification in Student Writing, Yuning Ding, Marie Bexte and Andrea Horbach [paper] [resources]
  • Educational Multi-Question Generation for Reading Comprehension, Manav Rathod, Tony Tu and Katherine Stasaski [paper] [resources]
  • Educational Tools for Mapuzugun, Cristian Ahumada, Claudio Gutierrez and Antonios Anastasopoulos [paper] [resources]
  • Fine-tuning Transformers with Additional Context to Classify Discursive Moves in Mathematics Classrooms, Abhijit Suresh, Jennifer Jacobs, Margaret Perkoff, James H. Martin and Tamara Sumner [paper]
  • Generation of Synthetic Error Data of Verb Order Errors for Swedish, Judit Casademont Moner and Elena Volodina [paper] [resources]
  • Incremental Disfluency Detection for Spoken Learner English, Lucy Skidmore and Roger Moore [paper]
  • ‘Meet me at the ribary’ – Acceptability of spelling variants in free-text answers to listening comprehension prompts, Ronja Laarmann-Quante, Leska Schwarz, Andrea Horbach and Torsten Zesch [paper]
  • Mitigating Learnerese Effects for CEFR Classification, Rricha Jalota, Peter Bourgonje, Jan van Sas and Huiyan Huang [paper]
  • On Assessing and Developing Spoken ‘Grammatical Error Correction’ Systems, Yiting Lu, Stefano Bannò and Mark Gales [paper]
  • Parametrizable exercise generation from authentic texts: Effectively targeting the language means on the curriculum, Tanja Heck and Detmar Meurers [paper]
  • Response Construct Tagging: NLP-Aided Assessment for Engineering Education, Ananya Ganesh, Hugh Scribner, Jasdeep Singh, Katherine Goodman, Jean Hertzberg and Katharina Kann [paper] [resources]
  • Selecting Context Clozes for Lightweight Reading Compliance, Greg Keim and Michael Littman [paper]
  • Similarity-based Content Scoring - How to Make S-BERT Keep up with BERT, Marie Bexte, Andrea Horbach and Torsten Zesch [paper] [resources]
  • Starting from “Zero”: An Incremental Zero-shot Learning Approach for Assessing Peer Feedback Comments, Qinjin Jia, Yupeng Cao and Edward F. Gehringer [paper]
  • Structural information in mathematical formulas for exercise difficulty prediction: a comparison of NLP representations, Ekaterina Loginova and Dries F. Benoit [paper]
  • The Specificity and Helpfulness of Peer-to-Peer Feedback in Higher Education, Roman Rietsche, Andrew Caines, Cornelius Schramm, Dominik Pfütze and Paula Buttery [paper] [resources]
  • Toward Robust Discourse Parsing of Student Writing Motivated by Neural Interpretation, James Fiacco, Shiyan Jiang, David Adamson and Carolyn Rosé [paper]
  • Towards Automatic Short Answer Assessment for Finnish as a Paraphrase Retrieval, Li-Hsin Chang, Jenna Kanerva and Filip Ginter [paper]
  • Towards an open-domain chatbot for language practice, Gladys Tyen, Mark Brenchley, Andrew Caines and Paula Buttery [paper] [resources]
  • Using Item Response Theory to Measure Gender and Racial Bias of a BERT-based Automated English Speech Assessment System, Alexander Kwako, Yixin Wan, Jieyu Zhao, Kai-Wei Chang, Li Cai and Mark Hansen [paper]

BEA 2022 proceedings: https://aclanthology.org/volumes/2022.bea-1/

Dinner

All presenters and registered participants are cordially invited to attend our annual workshop dinner. This year it will take place at Thai Ginger, 600 Pine Street, level 4 (Cinema Level), Seattle, WA 98101. The restaurant is just 5 minutes away from the conference venue.

As is the tradition, the dinner will be free for student participants. Unlike in previous years, it will take place on the evening before the workshop, on Thursday, July 14, at 7pm. We hope that this will allow more of you to attend, and that it will be a great opportunity to network and meet each other before the workshop on Friday!

Schedule

  July 14, 2022 (all times in PDT, GMT-7)
Location: Thai Ginger at 600 Pine Street, level 4 (Cinema Level), Seattle, WA 98101
19:00–22:00 Workshop dinner
  July 15, 2022 (all times in PDT, GMT-7)
Location: 501 - Chiwawa (in person) or Underline.io (remote)
08:30–09:00 Loading of Oral Presentations
09:00–09:15 Opening remarks
09:15–10:00 Keynote by Alexandra Cristea
Aspects of Learning Analytics
10:00–10:30 Coffee Break
10:30–12:00 Paper Session
10:30–10:55 The Specificity and Helpfulness of Peer-to-Peer Feedback in Higher Education (Roman Rietsche, Andrew Caines, Cornelius Schramm, Dominik Pfütze and Paula Buttery) – Remote
10:55–11:20 On Assessing and Developing Spoken ‘Grammatical Error Correction’ Systems (Yiting Lu, Stefano Bannò and Mark Gales) – Remote
11:20–11:40 Automatically Detecting Reduced-formed English Pronunciations by Using Deep Learning (Lei Chen, Chenglin Jiang, Yiwei Gu, Yang Liu and Jiahong Yuan) – In person
11:40–12:00 Educational Multi-Question Generation for Reading Comprehension (Manav Rathod, Tony Tu and Katherine Stasaski) – In person
12:00–13:30 Lunch
13:30–15:00 Poster Sessions
Location: Regency ballroom / GatherTown
13:30–14:15 Poster Session A
  Using Item Response Theory to Measure Gender and Racial Bias of a BERT-based Automated English Speech Assessment System (Alexander Kwako, Yixin Wan, Jieyu Zhao, Kai-Wei Chang, Li Cai and Mark Hansen) – Remote
  Mitigating Learnerese Effects for CEFR Classification (Rricha Jalota, Peter Bourgonje, Jan van Sas and Huiyan Huang) – In person
  Generation of Synthetic Error Data of Verb Order Errors for Swedish (Judit Casademont Moner and Elena Volodina) – Remote
  Cross-corpora experiments of automatic proficiency assessment and error detection for spoken English (Stefano Bannò and Marco Matassoni) – Remote
  Activity focused Speech Recognition of Preschool Children in Early Childhood Classrooms (Satwik Dutta, Dwight Irvin, Jay Buzhardt and John H.L. Hansen) – In person
  Structural information in mathematical formulas for exercise difficulty prediction: a comparison of NLP representations (Ekaterina Loginova and Dries F. Benoit) – Remote
  Similarity-Based Content Scoring - How to Make S-BERT Keep Up With BERT (Marie Bexte, Andrea Horbach and Torsten Zesch) – Remote
  Don’t Drop the Topic - The Role of the Prompt in Argument Identification in Student Writing (Yuning Ding, Marie Bexte and Andrea Horbach) – In person
  ALEN App: Argumentative Writing Support To Foster English Language Learning (Thiemo Wambsganss, Andrew Caines and Paula Buttery) – Remote
  Assessing sentence readability for German language learners with broad linguistic modeling or readability formulas: When do linguistic insights make a difference? (Zarah Weiss and Detmar Meurers) – In person
  ‘Meet me at the ribary’ – Acceptability of spelling variants in free-text answers to listening comprehension prompts (Ronja Laarmann-Quante, Leska Schwarz, Andrea Horbach and Torsten Zesch) – Remote
  Educational Tools for Mapuzugun (Cristian Ahumada, Claudio Gutierrez and Antonios Anastasopoulos) – In person
  Response Construct Tagging: NLP-Aided Assessment for Engineering Education (Ananya Ganesh, Hugh Scribner, Jasdeep Singh, Katherine Goodman, Jean Hertzberg and Katharina Kann) – In person
  Incremental Disfluency Detection for Spoken Learner English (Lucy Skidmore and Roger Moore) – In person
14:15–15:00 Poster Session B
  Automatic scoring of short answers using justification cues estimated by BERT (Shunya Takano and Osamu Ichikawa) – Remote
  A Baseline Readability Model for Cebuano (Joseph Marvin Imperial, Lloyd Lois Antonie Reyes, Michael Antonio Ibanez, Ranz Sapinit and Mohammed Hussien) – Remote
  A Dependency Treebank of Spoken Second Language English (Kristopher Kyle, Masaki Eguchi, Aaron Miller and Theodore Sither) – In person
  Starting from “Zero”: An Incremental Zero-shot Learning Approach for Assessing Peer Feedback Comments (Qinjin Jia, Yupeng Cao and Edward F. Gehringer) – Remote
  Automatic True/False Question Generation for Educational Purpose (Bowei Zou, Pengfei Li, Liangming Pan and Ai Ti Aw) – Remote
  Fine-tuning Transformers with Additional Context to Classify Discursive Moves in Mathematics Classrooms (Abhijit Suresh, Jennifer Jacobs, Margaret Perkoff, James H. Martin and Tamara Sumner) – In person
  Parametrizable exercise generation from authentic texts: Effectively targeting the language means on the curriculum (Tanja Heck and Detmar Meurers) – In person
  Selecting Context Clozes for Lightweight Reading Compliance (Greg Keim and Michael Littman) – In person
  An Evaluation of Binary Comparative Lexical Complexity Models (Kai North, Marcos Zampieri and Matthew Shardlow) – Remote
  Toward Automatic Discourse Parsing of Student Writing Motivated by Neural Interpretation (James Fiacco, Shiyan Jiang, David Adamson and Carolyn Rosé) – In person
  Computationally Identifying Funneling and Focusing Questions in Classroom Discourse (Sterling Alic, Dorottya Demszky, Zid Mancenido, Jing Liu, Heather Hill and Dan Jurafsky) – In person
  Towards an open-domain chatbot for language practice (Gladys Tyen, Mark Brenchley, Andrew Caines and Paula Buttery) – In person
  Towards Automatic Short Answer Assessment for Finnish as a Paraphrase Retrieval Task (Li-Hsin Chang, Jenna Kanerva and Filip Ginter) – In person
15:00–15:30 Mid-Afternoon Snacks
15:30–16:00 Ambassador paper by James Fiacco
Taking Transactivity to the Next Level
16:00–16:45 Keynote by Klinton Bicknell
ML and NLP for Language Learning at Scale
16:45–17:00 Closing remarks

Anti-Harassment Policy

SIGEDU adheres to the ACL Anti-Harassment Policy for the BEA workshops. Any participant of the workshop who experiences harassment or hostile behavior may contact any current member of the ACL Executive Committee or contact Priscilla Rasmussen, who is usually available at the registration desk of the conference. Please be assured that if you approach us, your concerns will be kept in strict confidence, and we will consult with you on any actions taken.

Share Code & Data on GitHub

If you are interested in sharing your code and data with the BEA community, we have created the #bea-workshop topic on GitHub: simply add it to your repository’s topics so that others can find your work.

Important Dates

Note: All deadlines are 11:59pm UTC-12 (anywhere on earth).

  • Submission Deadline: Friday, April 8, 2022 (extended from Friday, April 1, 2022)
  • Notification of Acceptance: Tuesday, May 10, 2022 (updated from Friday, May 6, 2022)
  • Camera-ready Papers Due: Friday, May 20, 2022
  • Workshop: Friday, July 15, 2022

Registration

If you wish to attend the workshop, you must register for the NAACL conference and select BEA from the list of offered workshops. More information can be found here. You do not need to have an accepted paper: the workshop is open to anyone who wishes to attend. However, at least one author of each accepted paper must register.

Submission Information

We will be using the NAACL Submission Guidelines for the BEA workshop this year. Authors are invited to submit long papers of up to eight (8) pages of content, plus unlimited references; final versions of long papers will be given one additional page of content (up to nine pages) so that reviewers’ comments can be taken into account. We also invite short papers of up to four (4) pages of content, plus unlimited references. Upon acceptance, short papers will be given five (5) content pages in the proceedings; authors are encouraged to use this additional page to address reviewers’ comments in their final versions. Authors of papers that describe systems are also invited to demo their systems. If you would like to present a demo in addition to presenting the paper, please make sure to select either “long paper + demo” or “short paper + demo” under “Submission Category” on the START submission page.

Previously published papers cannot be accepted. The submissions will be reviewed by the program committee. As reviewing will be blind, please ensure that papers are anonymous. Self-references that reveal the author’s identity, e.g., “We previously showed (Smith, 1991) …”, should be avoided. Instead, use citations such as “Smith previously showed (Smith, 1991) …”.

We have also included a conflict-of-interest section in the submission form. You should mark all potential reviewers who have been co-authors of the paper, who are from the same research group or institution, or who have seen versions of the paper or discussed it with you.

We will be using the START conference system to manage submissions: https://www.softconf.com/naacl2022/BEA2022/

LaTeX and Word Templates

Paper submissions must use the official ACL style templates, which are available here. Please follow the general paper formatting guidelines for *ACL conferences, also available here. Authors may not modify these style files or use templates designed for other conferences. Submissions that do not conform to the required styles, including paper size, margin width, and font size restrictions, will be rejected without review.
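
For authors preparing their submission in LaTeX, the preamble below is a minimal sketch of a conforming document, assuming the unified acl.sty style file distributed with the official ACL templates (the title, author, and body text are placeholders); please refer to the template shipped with the style files for the authoritative version.

    % Minimal sketch of a BEA 2022 submission, assuming the unified ACL style file (acl.sty).
    \documentclass[11pt]{article}
    % The "review" option formats the paper for review (e.g., it adds line numbers);
    % remove it when preparing the camera-ready version.
    \usepackage[review]{acl}
    % Standard packages used by the official template.
    \usepackage{times}
    \usepackage{latexsym}
    \usepackage[T1]{fontenc}

    \title{Paper Title (Placeholder)}
    \author{Anonymous BEA 2022 Submission}

    \begin{document}
    \maketitle

    \begin{abstract}
    A short abstract goes here.
    \end{abstract}

    \section{Introduction}
    Body text goes here.

    \end{document}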

Double Submission Policy

We will follow the official ACL double-submission policy. Specifically, papers submitted both to BEA and to another conference or workshop must:

  • Note on the title page the other conference or workshop to which they are being submitted.
  • State on the title page that, if the paper is accepted and the authors choose to present it at BEA, it will be withdrawn from other conferences and workshops.

Sponsors

Gold Sponsors

Duolingo, ETS, iLexIR, NBME

Sponsoring Opportunities

We are extremely grateful to the sponsors of past workshops: in recent years, we have been supported by Duolingo, Grammarly, NBME, iLexIR, Educational Testing Service, and Newsela. This year, we again want to help students attend the workshop, both by covering the cost of the student workshop dinner (if the workshop runs in person) and by offering student grants covering the BEA registration fees. We are hoping to identify sponsors willing to contribute $100 (Bronze), $250 (Silver) or $500 (Gold) to subsidize some of the workshop costs. Perks of sponsorship include logos on the workshop website and in the proceedings. If you would like to sponsor the BEA workshop, please send us an email.

Organizing Committee

Workshop contact email address: bea.nlp.workshop@gmail.com

Program Committee

  • Tazin Afrin, University of Pittsburgh
  • David Alfter, Université catholique de Louvain
  • Jason Angel, IPN - Computing Research Center
  • Piper Armstrong, Brigham Young University
  • Timo Baumann, Universität Hamburg
  • Lee Becker, Educational Testing Service
  • Beata Beigman Klebanov, ETS
  • Lisa Beinborn, Vrije Universiteit Amsterdam
  • Kay Berkling, Cooperative State University, Germany
  • Marie Bexte, University of Duisburg-Essen
  • Daniel Brenner, ETS
  • Chris Bryant, University of Cambridge
  • Andrew Caines, University of Cambridge
  • Dumitru-Clementin Cercel, University Politehnica of Bucharest
  • Zhiyu Chen, University of California, Santa Barbara
  • Mei-Hua Chen, Department of Foreign Languages and Literature, Tunghai University
  • Guanliang Chen, Monash University
  • Leshem Choshen, Hebrew University of Jerusalem
  • Mark Core, University of Southern California
  • Scott Crossley, Georgia State University
  • Kordula De Kuthy, University of Tübingen
  • Yuning Ding, FernUniversität in Hagen
  • Rahul Divekar, Educational Testing Service
  • Yo Ehara, Tokyo Gakugei University
  • Mariano Felice, University of Cambridge
  • Michael Flor, Educational Testing Service
  • Thomas François, UCLouvain
  • Jennifer-Carmen Frey, Institute for Applied Linguistics, Eurac Research
  • Michael Gamon, Microsoft Research
  • Lingyu Gao, Toyota Technological Institute at Chicago
  • Samuel González-López, Technological University of Nogales
  • Cyril Goutte, National Research Council Canada
  • Na-Rae Han, University of Pittsburgh
  • Jiangang Hao, Educational Testing Service
  • Nicolas Hernandez, Université de Nantes
  • Chung-Chi Huang, Frostburg State University
  • Yi-Ting Huang, Academia Sinica
  • Joseph Marvin Imperial, National University Philippines
  • Radu Tudor Ionescu, University of Bucharest
  • Richard Johansson, University of Gothenburg
  • Lis Kanashiro Pereira, Ochanomizu University
  • Elma Kerz, RWTH Aachen University
  • Ekaterina Kochmar, University of Bath
  • Mamoru Komachi, Tokyo Metropolitan University
  • Ritesh Kumar, Dr. Bhimrao Ambedkar University
  • Kristopher Kyle, University of Oregon
  • Ji-Ung Lee, UKP, Technical University of Darmstadt
  • Yudong Liu, Western Washington University
  • Anastassia Loukina, Grammarly
  • Lieve Macken, Language and Translation Technology Team
  • Irina Maslowski, non-affiliated
  • Sandeep Mathias, Presidency University
  • Janet Mee, National Board of Medical Examiners
  • Detmar Meurers, Universität Tübingen
  • Alessio Miaschi, University of Pisa / Institute for Computational Linguistics (ILC-CNR), Pisa
  • Masato Mita, RIKEN AIP
  • Diane Napolitano, Associated Press
  • Kamel Nebhi, Education First
  • Hwee Tou Ng, National University of Singapore
  • Huy Nguyen, Data Scientist at Amazon
  • Robert Östling, Department of Linguistics, Stockholm University
  • Mengyang Qiu, University at Buffalo
  • Martí Quixal, University of Tübingen
  • Vipul Raheja, Grammarly
  • Lakshmi Ramachandran, Amazon
  • Hanumant Redkar, IIT Bombay, Mumbai
  • Frankie Robertson, University of Jyväskylä
  • Alla Rozovskaya, City University of New York
  • C. Anton Rytting, University of Maryland, College Park
  • Katherine Stasaski, UC Berkeley
  • Helmer Strik, Radboud University, Nijmegen, The Netherlands
  • Jan Švec, University of West Bohemia
  • Anaïs Tack, Stanford University
  • Shalaka Vaidya, NYU
  • Giulia Venturi, Institute of Computational Linguistics “A. Zampolli” (ILC-CNR), Pisa, Italy
  • Carl Vogel, Trinity College Dublin
  • Elena Volodina, University of Gothenburg, Sweden
  • Hongfei Wang, Tokyo Metropolitan University
  • Xinyu Wang, Riiid Labs
  • Zarah Weiss, University of Tübingen
  • Michael White, The Ohio State University
  • David Wible, National Central University, Taiwan
  • Alistair Willis, The Open University, UK
  • Yunkai Xiao, North Carolina State University
  • Yiqiao Xu, North Carolina State University
  • Zheng Yuan, University of Cambridge
  • Marcos Zampieri, Rochester Institute of Technology
  • Klaus Zechner, ETS
  • Fabian Zehner, DIPF / Leibniz Institute for Research and Information in Education
  • Torsten Zesch, Fernuniversität in Hagen