18th Workshop on Innovative Use of NLP for Building Educational Applications


Quick Info
Co-located with ACL 2023
Location Toronto, Canada
Deadline May 2, 2023 (extended from April 24)
Date July 13, 2023
Organizers Ekaterina Kochmar, Jill Burstein, Andrea Horbach, Ronja Laarmann-Quante, Nitin Madnani, Anaïs Tack, Victoria Yaneva, Zheng Yuan, and Torsten Zesch
Contact bea.nlp.workshop@gmail.com

Workshop Description

The BEA Workshop is a leading venue for NLP innovation in the context of educational applications. It is one of the largest one-day workshops in the ACL community, with over 100 registered attendees in each of the past several years. The growing interest in educational applications and the diverse community of researchers involved led to the creation of the Special Interest Group in Educational Applications (SIGEDU) in 2017, which currently has over 300 members.

The workshop’s continuing growth highlights the alignment between societal needs and technological advances: for instance, BEA16 in 2021 hosted a panel discussion on “New Challenges for Educational Technology in the Time of the Pandemic”, addressing the pressing issues around COVID-19. NLP capabilities can now support an array of learning domains, including writing, speaking, reading, science, and mathematics, as well as the related intra-personal (e.g., self-confidence) and inter-personal (e.g., peer collaboration) skills. Within these areas, the community continues to develop and deploy innovative NLP approaches for use in educational settings. Another sign of the field’s momentum within the CL community is the series of shared-task competitions organized by the BEA workshop over the past several years, including four shared tasks on grammatical error detection and correction alone. Shared tasks have also opened up new areas of research, such as the Automated Evaluation of Scientific Writing at BEA11, Native Language Identification at BEA12, Second Language Acquisition Modelling at BEA13, and Complex Word Identification at BEA13. These competitions have increased the visibility of, and interest in, our field.

The 18th BEA workshop will follow the format of BEA in 2022 and will be hybrid. We will have three invited talks, a shared task on the generation of teacher responses in educational dialogues, oral presentation sessions, and a large poster session to maximize the amount of original work presented. We expect the workshop to continue to highlight novel technologies and opportunities, including the use of state-of-the-art large language models in educational applications and the challenges around responsible AI for educational NLP, in English as well as other languages. The workshop will solicit both full papers and short papers for either oral or poster presentation. We will solicit papers that incorporate NLP methods, including, but not limited to:

  • automated scoring of open-ended textual and spoken responses;
  • automated scoring/evaluation for written student responses (across multiple genres);
  • game-based instruction and assessment;
  • educational data mining;
  • intelligent tutoring;
  • collaborative learning environments;
  • peer review;
  • grammatical error detection and correction;
  • learner cognition;
  • spoken dialog;
  • multimodal applications;
  • annotation standards and schemas;
  • tools and applications for classroom teachers, learners and/or test developers; and
  • use of corpora in educational tools.

Workshop Program

Invited Talks

The workshop will have two keynote presentations and one ambassador paper presentation from EDM 2022 (the 15th International Conference on Educational Data Mining), a member society of the IAALDE (International Alliance to Advance Learning in the Digital Era).


Keynote S. Lottridge (Cambium Assessment)

Susan Lottridge, Cambium Assessment
Building Educational Applications using NLP: A Measurement Perspective

Abstract: The domains of NLP, data science, software engineering, and educational measurement are becoming increasingly interdependent in the creation of NLP-based educational applications. Indeed, the domains themselves are merging in key ways, with each incorporating the others’ methods and tools into its work. For example, many software engineers regularly deploy machine learning models, and many linguists, data scientists, and measurement staff regularly develop software. Even so, each discipline approaches this complex task with the assumptions, priorities, and values of its field. The best educational applications are the result of multi-disciplinary teams that can leverage one another’s strengths and can recognize and honor the values of each disciplinary perspective.
This talk will describe the educational measurement perspective within this collaborative process. At a high level, educational measurement is the design, use, and analysis of assessments in order to make inferences about what students know and can do. Given this, the measurement experts on a team focus heavily on defining what students need to know and do, on what evidence supports inferences about what students know and can do, and on whether the data are accurate, reliable, and fair to all students. This perspective can shape the full life cycle of an educational application, from the core product focus through data collection, NLP modelling, and analysis of model outputs, to the information provided to students. It can also help ensure that educational applications produce information that is valuable to teachers and students. Because these perspectives can be opaque to those outside of measurement, the development process of various NLP educational tools will be used to illustrate key areas where measurement can contribute to product design.

Bio: Sue Lottridge is Chief Scientist of Natural Language Applications at Cambium Assessment, Inc., where she leads CAI’s machine learning and scoring team on the research, development, and operation of CAI’s automated scoring and feedback software. She has a Ph.D. in Assessment and Measurement from James Madison University and master’s degrees in Mathematics and Computer Science from the University of Wisconsin–Madison. Dr. Lottridge has worked in automated scoring for fifteen years and has contributed to the design, research, and use of multiple automated scoring engines, including engines for equation scoring, essay scoring, short answer scoring, speech scoring, crisis alert detection, and essay feedback.


Keynote J. Heller (Textio)

Jordana Heller, Textio
Interrupting Linguistic Bias in Written Communication with NLP tools

Abstract: Unconscious bias is hard to detect, but when we identify it in language usage, we can take steps to interrupt and reduce it. At Textio, we focus on using NLP to detect, interrupt, and educate writers about bias in written workforce communications. Unconscious bias affects many facets of the employee lifecycle. Exclusionary language in recruiting communications can deter candidates from diverse backgrounds from even applying to a position, hindering efforts to build inclusive workplaces. Once a candidate has accepted a position, the language used to provide them feedback on their performance affects how they develop professionally, and we have found stark inequities in the language of feedback to members of different demographic groups. This talk will discuss how Textio uses NLP to interrupt these patterns of bias by assessing these texts for bias and providing 1) real-time iterative, educational feedback to the writer on how to improve a specific document, including guidance toward less-biased language alternatives, and 2) an assessment at a workplace level of exclusionary and inclusive language, so that companies can set goals around language improvement and track their progress toward them.

Bio: Jordana Heller, PhD, is Director of Data Intelligence at Textio, a tech company focused on interrupting bias in performance feedback and recruiting. Textio identifies bias in written documents and provides data to writers in real time that helps them write more effectively and equitably. At Textio, Jordana applies her background as a computational psycholinguist and cognitive scientist to her leadership of R&D teams who are focused on using data and NLP to help employers reduce bias and accelerate professional growth equitably.


Ambassador Paper & BEA 2023 Shared Task A. Tack (KU Leuven)

Anaïs Tack, KU Leuven, imec
Generating Teacher Responses in Educational Dialogues: The AI Teacher Test & BEA 2023 Shared Task

Abstract: How can we test whether state-of-the-art generative models, such as Blender and GPT-3, are good AI teachers, capable of replying to a student in an educational dialogue? Designing an AI teacher test is challenging: although evaluation methods are much-needed, there is no off-the-shelf solution to measuring pedagogical ability.
In the first part of this talk, I will describe our paper The AI Teacher Test: Measuring the Pedagogical Ability of Blender and GPT-3 in Educational Dialogues presented at EDM 2022. The paper reported on a first attempt at an AI teacher test. We built a solution around the insight that you can run conversational agents in parallel to human teachers in real-world dialogues, simulate how different agents would respond to a student, and compare these counterpart responses in terms of three abilities: speak like a teacher, understand a student, help a student. Our method builds on the reliability of comparative judgments in education and uses a probabilistic model and Bayesian sampling to infer estimates of pedagogical ability. We find that, even though conversational agents (Blender in particular) perform well on conversational uptake, they are quantifiably worse than real teachers on several pedagogical dimensions, especially with regard to helpfulness.
In the second part of this talk, I will describe the results of the BEA 2023 Shared Task on Generating AI Teacher Responses in Educational Dialogues, which was a continuation of our EDM paper.
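
To make the comparative-judgment idea above concrete, here is a minimal sketch. It is illustrative only: it fits a Bradley–Terry-style pairwise model by gradient ascent on point estimates, whereas the paper itself uses a probabilistic model with Bayesian sampling, and the judgment data and agent names below are invented.

    import math

    # Hypothetical pairwise judgments: (winner, loser) means annotators
    # preferred the winner's response to the same student turn.
    judgments = [
        ("teacher", "blender"), ("teacher", "gpt3"), ("blender", "gpt3"),
        ("teacher", "blender"), ("gpt3", "blender"), ("teacher", "gpt3"),
    ]

    agents = sorted({a for pair in judgments for a in pair})
    scores = {a: 0.0 for a in agents}  # latent ability on a log-odds scale

    def p_win(s_i, s_j):
        # Bradley-Terry: P(i preferred over j) = 1 / (1 + exp(s_j - s_i))
        return 1.0 / (1.0 + math.exp(s_j - s_i))

    # Maximum a posteriori fit by gradient ascent; the weak Gaussian prior
    # (the -0.05 * score term) keeps the scores identifiable and bounded.
    lr = 0.1
    for _ in range(1000):
        grad = {a: -0.05 * scores[a] for a in agents}
        for winner, loser in judgments:
            p = p_win(scores[winner], scores[loser])
            grad[winner] += 1.0 - p
            grad[loser] -= 1.0 - p
        for a in agents:
            scores[a] += lr * grad[a]

    for agent, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{agent}: {score:+.2f}")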

Bio: Anaïs Tack is a postdoctoral researcher working on language technology for smart education at itec, an imec research group at KU Leuven, and is also a lecturer in NLP at UCLouvain. She holds a joint Ph.D. in linguistics from UCLouvain and KU Leuven, where she worked as an F.R.S.-FNRS doctoral research fellow. She was a BAEF postdoctoral scholar and research fellow at Stanford University, where she worked in Chris Piech’s lab and the Stanford HAI education team. Her research interests include the generation and evaluation of teacher language in educational dialogues, the prediction of lexical difficulty for non-native readers, the automated scoring of language proficiency for non-native writers, and the creation of machine-readable resources from educational materials. Anaïs participated in organizing the CWI shared task at BEA 2018 as well as the 27th International EUROCALL conference in 2019. She is an executive board member of the ACL SIGEDU and has been involved in organizing the BEA workshop since 2021.


Shared Task

Webpage: https://sig-edu.org/sharedtask/2023

The workshop will host a shared task on generation of teacher responses in educational dialogues. Participants will be provided with teacher–student dialogue samples from the Teacher Student Chatroom Corpus (Caines et al., 2020) of real-world teacher–student interactions and will be asked to generate teacher responses using NLP and AI methods. Submissions will be ranked according to automated evaluation metrics, with the top submissions selected for further human evaluation. Given active participation in the previous BEA-hosted shared tasks, we expect to attract around 20 participating teams.
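
As a concrete illustration of the ranking setup, the sketch below scores two hypothetical submissions against gold teacher responses with a standard reference-based metric (corpus BLEU via the sacrebleu package). The data and team names are invented, and the task’s actual evaluation metrics and data format are described on the task webpage.

    import sacrebleu

    # Hypothetical gold teacher turns and two teams' generated turns.
    references = [
        "Good try! Remember that 'went' is the past tense of 'go'.",
        "Can you say that again using the present perfect?",
    ]
    submissions = {
        "team_a": ["Good try, but 'went' is the past tense of 'go'.",
                   "Could you repeat that using the present perfect?"],
        "team_b": ["Nice.", "Try again."],
    }

    # Corpus-level BLEU per submission; rank submissions from best to worst.
    ranking = sorted(
        ((sacrebleu.corpus_bleu(hyps, [references]).score, team)
         for team, hyps in submissions.items()),
        reverse=True,
    )
    for score, team in ranking:
        print(f"{team}: BLEU = {score:.1f}")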

Organizers: Anaïs Tack, KU Leuven, imec; Ekaterina Kochmar, MBZUAI; Zheng Yuan, King’s College London; Serge Bibauw, Universidad Central del Ecuador; Chris Piech, Stanford University


Accepted Papers

We received 110 submissions and accepted 58 papers (acceptance rate: 53%).

The accepted papers are published in the BEA 2023 proceedings.

Oral Presentations

  • Improving Reading Comprehension Question Generation with Data Augmentation and Overgenerate-and-rank. Nischal Ashok Kumar, Nigel Steven Fernandez, Zichao Wang, and Andrew Lan
  • Grammatical Error Correction for Sentence-level Assessment in Language Learning. Anisia Katinskaia and Roman Yangarber

Poster Presentations

  • A Closer Look at k-Nearest Neighbors Grammatical Error Correction. Justin Vasselli and Taro Watanabe
  • Exploring Effectiveness of GPT-3 in Grammatical Error Correction: A Study on Performance and Controllability in Prompt-Based Methods. Mengsay Loem, Masahiro Kaneko, Sho Takase, and Naoaki Okazaki
  • Gender-Inclusive Grammatical Error Correction through Augmentation. Gunnar Lund, Kostiantyn Omelianchuk, and Igor Samokhin
  • Recognizing Learner Handwriting Retaining Orthographic Errors for Enabling Fine-Grained Error Feedback. Christian Gold, Ronja Laarmann-Quante, and Torsten Zesch
  • Towards automatically extracting morphosyntactical error patterns from L1-L2 parallel dependency treebanks. Arianna Masciolini, Elena Volodina, and Dana Dannélls
  • Training for Grammatical Error Correction Entirely Without Human-annotated L2 Learners’ Corpora. Mikio Oda
  • A dynamic model of lexical experience for tracking of oral reading fluency. Beata Beigman Klebanov, Michael Suhan, Zuowei Wang, and Tenaha O’Reilly
  • ALEXSIS+: Improving Substitute Generation and Selection for Lexical Simplification with Information Retrieval. Kai North, Alphaeus Dmonte, Tharindu Ranasinghe, Matthew Shardlow, and Marcos Zampieri
  • CEFR-based Contextual Lexical Complexity Classifier in English and French. Desislava Aleksandrova and Vincent Pouliot
  • “Geen makkie”: Interpretable Classification and Simplification of Dutch Text Complexity. Eliza Hobo, Charlotte Pouw, and Lisa Beinborn
  • Hybrid Models for Sentence Readability Assessment. Fengkai Liu and John Lee
  • Japanese Lexical Complexity for Non-Native Readers: A New Dataset. Yusuke Ide, Masato Mita, Adam Nohejl, Hiroki Ouchi, and Taro Watanabe
  • You’ve Got a Friend in … a Language Model? A Comparison of Explanations of Multiple-Choice Items of Reading Comprehension between ChatGPT and Humans. George Duenas, Sergio Jimenez, and Geral Eduardo Mateus Ferro
  • ACTA: Short-Answer Grading in High-Stakes Medical Exams. King Yiu Suen, Victoria Yaneva, Le An Ha, Janet Mee, Yiyun Zhou, and Polina Harik
  • Automated evaluation of written discourse coherence using GPT-4. Ben Naismith, Phoebe Mulcaire, and Jill Burstein
  • ExASAG: Explainable Framework for Automatic Short Answer Grading. Maximilian Tornqvist, Mosleh Mahamud, Erick Mendez Guzman, and Alexandra Farazouli
  • Predicting the Quality of Revisions in Argumentative Writing. Zhexiong Liu, Diane Litman, Elaine Lin Wang, Lindsay C. Matsumura, and Richard Correnti
  • Rating Short L2 Essays on the CEFR Scale with GPT-4. Kevin P. Yancey, Geoffrey T. LaFlair, Anthony R. Verardi, and Jill Burstein
  • Span Identification of Epistemic Stance-Taking in Academic Written English. Masaki Eguchi and Kristopher Kyle
  • Towards Extracting and Understanding the Implicit Rubrics of Transformer Based Automatic Essay Scoring Models. James Fiacco, David Adamson, and Carolyn Rosé
  • Transformer-based Hebrew NLP models for Short Answer Scoring in Biology. Abigail Gurin Schleifer, Beata Beigman Klebanov, Moriah Ariely, and Giora Alexandron
  • Analyzing Bias in Large Language Model Solutions for Assisted Writing Feedback Tools: Lessons from the Feedback Prize Competition Series. Perpetual Baffour, Tor Saxberg, and Scott Crossley
  • Labels are not necessary: Assessing peer-review helpfulness using domain adaptation based on self-training. Chengyuan Liu, Divyang Doshi, Muskaan Bhargava, Ruixuan Shang, Jialin Cui, Dongkuan Xu, and Edward Gehringer
  • Reviewriter: AI-Generated Instructions For Peer Review Writing. Xiaotian Su, Roman Rietsche, Seyed Parsa Neshaei, Thiemo Wambsganss, and Tanja Käser
  • Assisting Language Learners: Automated Trans-Lingual Definition Generation via Contrastive Prompt Learning. Hengyuan Zhang, Dawei Li, Yanran Li, Chenming Shang, Chufan Shi, and Yong Jiang
  • Comparing Neural Question Generation Architectures for Reading Comprehension. E. Margaret Perkoff, Abhidip Bhattacharyya, Jon Z. Cai, and Jie Cao
  • Difficulty-Controllable Neural Question Generation for Reading Comprehension using Item Response Theory. Masaki Uto, Yuto Tomikawa, and Ayaka Suzuki
  • Generating Better Items for Cognitive Assessments Using Large Language Models. Antonio Laverghetta Jr. and John Licato
  • Generating Dialog Responses with Specified Grammatical Items for Second Language Learning. Yuki Okano, Kotaro Funakoshi, Ryo Nagata, and Manabu Okumura
  • Learning from Partially Annotated Data: Example-aware Creation of Gap-filling Exercises for Language Learning. Semere Kiros Bitew, Johannes Deleu, A. Seza Doğruöz, Chris Develder, and Thomas Demeester
  • MultiQG-TI: Towards Question Generation from Multi-modal Sources. Zichao Wang and Richard Baraniuk
  • Using Learning Analytics for Adaptive Exercise Generation. Tanja Heck and Detmar Meurers
  • GrounDialog: A Dataset for Repair and Grounding in Task-oriented Spoken Dialogues for Language Learning. Xuanming Zhang, Rahul Divekar, Rutuja Ubale, and Zhou Yu
  • SIGHT: A Large Annotated Dataset on Student Insights Gathered from Higher Education Transcripts. Rose Wang, Pawan Wirawarn, Noah Goodman, and Dorottya Demszky
  • Socratic Questioning of Novice Debuggers: A Benchmark Dataset and Preliminary Evaluations. Erfan Al-Hossami, Razvan Bunescu, Ryan Teehan, Laurel Powell, Khyati Mahajan, and Mohsen Dorodchi
  • Towards L2-friendly pipelines for learner corpora: A case of written production by L2-Korean learners. Hakyung Sung and Gyu-Ho Shin
  • Improving Mathematics Tutoring With A Code Scratchpad. Shriyash Upadhyay, Etan Ginsberg, and Chris Callison-Burch
  • Scalable and Explainable Automated Scoring for Open-Ended Constructed Response Math Word Problems. Scott Hellman, Alejandro Andrade, and Kyle Habermehl
  • The NCTE Transcripts: A Dataset of Elementary Math Classroom Transcripts. Dorottya Demszky and Heather Hill
  • Does BERT Exacerbate Gender or L1 Biases in Automated English Speaking Assessment? Alexander Kwako, Yixin Wan, Jieyu Zhao, Mark Hansen, Kai-Wei Chang, and Li Cai
  • Inspecting Spoken Language Understanding from Kids for Basic Math Learning at Home. Eda Okur, Roddy Fuentes Alba, Saurav Sahay, and Lama Nachman
  • Exploring a New Grammatico-functional Type of Measure as Part of a Language Learning Expert System. Cyriel Mallart, Andrew Simpkin, Rémi Venant, Nicolas Ballier, Bernardo Stearns, Jen Yu Li, and Thomas Gaillat
  • Is ChatGPT a Good Teacher Coach? Measuring Zero-Shot Performance For Scoring and Providing Actionable Insights on Classroom Instruction. Rose Wang and Dorottya Demszky
  • Reconciling Adaptivity and Task Orientation in the Student Dashboard of an Intelligent Language Tutoring System. Leona Colling, Tanja Heck, and Detmar Meurers
  • A Transfer Learning Pipeline for Educational Resource Discovery with Application in Survey Generation. Irene Li, Thomas George, Alex Fabbri, Tammy Liao, Benjamin Chen, Rina Kawamura, Richard Zhou, Vanessa Yan, Swapnil Hingmire, and Dragomir Radev
  • Enhancing Human Summaries for Question-Answer Generation in Education. Hannah Gonzalez, Liam Dugan, Eleni Miltsakaki, Zhiqi Cui, Jiaxuan Ren, Bryan Li, Shriyash Upadhyay, Etan Ginsberg, and Chris Callison-Burch
  • Automatically Generated Summaries of Video Lectures Enhance Students’ Learning Experience. Hannah Gonzalez, Jiening Li, Helen Jin, Jiaxuan Ren, Hongyu Zhang, Ayotomiwa Akinyele, Adrian Wang, Eleni Miltsakaki, Ryan Baker, and Chris Callison-Burch
  • Handcrafted Features in Computational Linguistics. Bruce W. Lee and Jason Lee

Demonstrations

  • ChatBack: Investigating Methods of Providing Grammatical Error Feedback in a GUI-based Language Learning Chatbot. Kai-Hui Liang, Sam Davidson, Xun Yuan, Shehan Panditharatne, Chun-Yen Chen, Ryan Shea, Derek Pham, Yinghua Tan, Erik Voss, Luke Fryer, and Zhou Yu
  • Enhancing Video-based Learning Using Knowledge Tracing: Personalizing Students’ Learning Experience with ORBITS. Shady Shehata, David Santandreu Calonge, Philip Purnell, and Mark Thompson
  • Evaluating Classroom Potential for Card-it: Digital Flashcards for Studying and Learning Italian Morphology. Mariana Shimabukuro, Jessica Zipf, Shawn Yama, and Christopher Collins
  • ReadAlong-Studio Web Interface for Digital Interactive Storytelling. Aidan Pine, David Huggins-Daines, Eric Joanis, Patrick Littell, Marc Tessier, Delasie Torkornoo, Rebecca Knowles, Roland Kuhn, and Delaney Lothian
  • UKP-SQuARE: An Interactive Tool for Teaching Question Answering. Haishuo Fang, Haritz Puerto, and Iryna Gurevych
  • Auto-req: Automatic detection of pre-requisite dependencies between academic videos. Rushil Thareja, Ritik Garg, Shiva Baghel, Deep Dwivedi, Mukesh Mohania, and Ritvik Kulshrestha
  • Evaluating Reading Comprehension Exercises Generated by LLMs: A Showcase of ChatGPT in Education Applications. Changrong Xiao, Sean Xin Xu, Kunpeng Zhang, Yufang Wang, and Lei Xia
  • Beyond Black Box AI-Generated Plagiarism Detection: From Sentence to Document Level. Chunhui Li, Parijat Dube, and Ali Quidwai

Special Track: Shared Task Posters

  • The BEA 2023 Shared Task on Generating AI Teacher Responses in Educational Dialogues. Anaïs Tack, Ekaterina Kochmar, Zheng Yuan, Serge Bibauw, and Chris Piech
  • Enhancing Educational Dialogues: A Reinforcement Learning Approach for Generating AI Teacher Responses. Thomas Huber, Christina Niklaus, and Siegfried Handschuh
  • Assessing the efficacy of large language models in generating accurate teacher responses. Yann Hicke, Abhishek Masand, Wentao Guo, and Tushaar Gangavarapu
  • RETUYT-InCo at BEA 2023 Shared Task: Tuning Open-Source LLMs for Generating Teacher Responses. Alexis Baladón, Ignacio Sastre, Luis Chiruzzo, and Aiala Rosá
  • Empowering Conversational Agents using Semantic In-Context Learning. Amin Omidvar and Aijun An
  • NAISTeacher: A Prompt and Rerank Approach to Generating Teacher Utterances in Educational Dialogues. Justin Vasselli, Christopher Vasselli, Adam Nohejl, and Taro Watanabe
  • The ADAIO System at the BEA-2023 Shared Task: Shared Task Generating AI Teacher Responses in Educational Dialogues. Adaeze Adigwe and Zheng Yuan

Schedule

  July 13, 2023 (all times in EDT, GMT-4)
Location: Harbour A (in person) or Underline.io (remote)
09:00–09:05 Opening remarks
09:05–09:50 Keynote by Susan Lottridge
Building Educational Applications using NLP: A Measurement Perspective
09:50–10:30 Outstanding Papers Session
09:50–10:10 Improving Reading Comprehension Question Generation with Data Augmentation and Overgenerate-and-rank (Nischal Ashok Kumar, Nigel Steven Fernandez, Zichao Wang, and Andrew Lan) – In person
10:10–10:30 Grammatical Error Correction for Sentence-level Assessment in Language Learning (Anisia Katinskaia and Roman Yangarber) – In person
10:30–11:00 Morning Coffee Break
11:00–11:30 Spotlight talks for Poster / Demo Session A (In-person + Virtual)
Location: Harbour A (in person) or Underline.io (remote)
11:30–12:30 Poster / Demo Session A
Location: Poster Area / GatherTown
A Transfer Learning Pipeline for Educational Resource Discovery with Application in Survey Generation (Irene Li, Thomas George, Alex Fabbri, Tammy Liao, Benjamin Chen, Rina Kawamura, Richard Zhou, Vanessa Yan, Swapnil Hingmire and Dragomir Radev) – In-person poster
Using Learning Analytics for Adaptive Exercise Generation (Tanja Heck and Detmar Meurers) – In-person poster
14  Enhancing Human Summaries for Question-Answer Generation in Education (Hannah Gonzalez, Liam Dugan, Eleni Miltsakaki, Zhiqi Cui, Jiaxuan Ren, Bryan Li, Shriyash Upadhyay, Etan Ginsberg and Chris Callison-Burch) – In-person poster
16  Difficulty-Controllable Neural Question Generation for Reading Comprehension using Item Response Theory (Masaki Uto, Yuto Tomikawa and Ayaka Suzuki) – In-person poster
18  Evaluating Classroom Potential for Card-it: Digital Flashcards for Studying and Learning Italian Morphology (Mariana Shimabukuro, Jessica Zipf, Shawn Yama and Christopher Collins) – In-person poster
20  Gender-Inclusive Grammatical Error Correction through Augmentation (Gunnar Lund, Kostiantyn Omelianchuk and Igor Samokhin) – In-person poster
24  Labels are not necessary: Assessing peer-review helpfulness using domain adaptation based on self-training (Chengyuan Liu, Divyang Doshi, Muskaan Bhargava, Ruixuan Shang, Jialin Cui, Dongkuan Xu and Edward Gehringer) – In-person poster
25  Generating Dialog Responses with Specified Grammatical Items for Second Language Learning (Yuki Okano, Kotaro Funakoshi, Ryo Nagata and Manabu Okumura) – In-person poster
32  A Closer Look at k-Nearest Neighbors Grammatical Error Correction (Justin Vasselli and Taro Watanabe) – In-person poster
34  Analyzing Bias in Large Language Model Solutions for Assisted Writing Feedback Tools: Lessons from the Feedback Prize Competition Series (Perpetual Baffour, Tor Saxberg and Scott Crossley) – In-person poster
40  Predicting the Quality of Revisions in Argumentative Writing (Zhexiong Liu, Diane Litman, Elaine Lin Wang, Lindsay C. Matsumura and Richard Correnti) – In-person poster
42  Reconciling Adaptivity and Task Orientation in the Student Dashboard of an Intelligent Language Tutoring System (Leona Colling, Tanja Heck and Detmar Meurers) – In-person poster
45  SIGHT: A Large Annotated Dataset on Student Insights Gathered from Higher Education Transcripts (Rose Wang, Pawan Wirawarn, Noah Goodman and Dorottya Demszky) – In-person poster
50  Automatically Generated Summaries of Video Lectures May Enhance Students’ Learning Experience (Hannah Gonzalez, Jiening Li, Helen Jin, Jiaxuan Ren, Hongyu Zhang, Ayotomiwa Akinyele, Adrian Wang, Eleni Miltsakaki, Ryan Baker and Chris Callison-Burch) – In-person poster
10  ChatBack: Investigating Methods of Providing Grammatical Error Feedback in a GUI-based Language Learning Chatbot (Kai-Hui Liang, Sam Davidson, Xun Yuan, Shehan Panditharatne, Chun-Yen Chen, Ryan Shea, Derek Pham, Yinghua Tan, Erik Voss, Luke Fryer and Zhou Yu) – In-person demo
11  Enhancing Video-based Learning Using Knowledge Tracing: Personalizing Students’ Learning Experience with ORBITS (Shady Shehata, David Santandreu Calonge, Philip Purnell and Mark Thompson) – In-person demo
18  Evaluating Classroom Potential for Card-it: Digital Flashcards for Studying and Learning Italian Morphology (Mariana Shimabukuro, Jessica Zipf, Shawn Yama and Christopher Collins) – In-person demo
75  The NCTE Transcripts: A Dataset of Elementary Math Classroom Transcripts (Dorottya Demszky and Heather Hill) – Online poster
Reviewriter: AI-Generated Instructions For Peer Review Writing (Xiaotian Su, Thiemo Wambsganss, Roman Rietsche, Seyed Parsa Neshaei and Tanja Käser) – Online poster
9  Towards L2-friendly pipelines for learner corpora: A case of written production by L2-Korean learners (Hakyung Sung and Gyu-Ho Shin) – Online poster
30  Exploring Effectiveness of GPT-3 in Grammatical Error Correction: A Study on Performance and Controllability in Prompt-Based Methods (Mengsay Loem, Masahiro Kaneko, Sho Takase and Naoaki Okazaki) – Online poster
47  Recognizing Learner Handwriting Retaining Orthographic Errors for Enabling Fine-Grained Error Feedback (Christian Gold, Ronja Laarmann-Quante and Torsten Zesch) – Online poster
48  ExASAG: Explainable Framework for Automatic Short Answer Grading (Maximilian Tornqvist, Mosleh Mahamud, Erick Mendez Guzman and Alexandra Farazouli) – Online poster
49  You’ve Got a Friend in … a Language Model? A Comparison of Explanations of Multiple-Choice Items of Reading Comprehension between ChatGPT and Humans (George Duenas, Sergio Jimenez and Geral Eduardo Mateus Ferro) – Online poster
69  “Geen makkie”: Interpretable Classification and Simplification of Dutch Text Complexity (Eliza Hobo, Charlotte Pouw and Lisa Beinborn) – Online poster
88  Towards automatically extracting morphosyntactical error patterns from L1-L2 parallel dependency treebanks (Arianna Masciolini, Elena Volodina and Dana Dannélls) – Online poster
90  Learning from Partially Annotated Data: Example-aware Creation of Gap-filling Exercises for Language Learning (Semere Kiros Bitew, Johannes Deleu, A. Seza Doğruöz, Chris Develder and Thomas Demeester) – Online poster
111  Enhancing Educational Dialogues: A Reinforcement Learning Approach for Generating AI Teacher Responses (Thomas Huber, Christina Niklaus and Siegfried Handschuh) – Online poster
39  Assisting Language Learners: Automated Trans-Lingual Definition Generation via Contrastive Prompt Learning (Hengyuan Zhang, Dawei Li, Yanran Li, Chenming Shang, Chufan Shi and Yong Jiang) – Online poster
44  GrounDialog: A Dataset for Repair and Grounding in Task-oriented Spoken Dialogues for Language Learning (Xuanming Zhang, Rahul Divekar, Rutuja Ubale and Zhou Yu) – Online poster
62  Hybrid Models for Sentence Readability Assessment (Fengkai Liu and John Lee) – Online poster
Handcrafted Features in Computational Linguistics (Bruce W. Lee and Jason Lee) – Online poster
58  Span Identification of Epistemic Stance-Taking in Academic Written English (Masaki Eguchi and Kristopher Kyle) – Online poster
110  Beyond Black Box AI-Generated Plagiarism Detection: From Sentence to Document Level (Chunhui Li, Parijat Dube and Ali Quidwai) – Online demo
93  Evaluating Reading Comprehension Exercises Generated by LLMs: A Showcase of ChatGPT in Education Applications (Changrong Xiao, Sean Xin Xu, Kunpeng Zhang, Yufang Wang and Lei Xia) – Online demo
76  Auto-req: Automatic detection of pre-requisite dependencies between academic videos (Rushil Thareja, Ritik Garg, Shiva Baghel, Deep Dwivedi, Mukesh Mohania and Ritvik Kulshrestha) – Online demo
12:30–14:00 Lunch
14:00–14:30 Spotlight talks for Poster / Demo Session B (In-person + Virtual)
Location: Harbour A (in person) or Underline.io (remote)
14:30–15:30 Poster / Demo Session B
Location: Poster Area / GatherTown
55  Automated evaluation of written discourse coherence using GPT-4 (Ben Naismith, Phoebe Mulcaire and Jill Burstein) – In-person poster
56  ALEXSIS+: Improving Substitute Generation and Selection for Lexical Simplification with Information Retrieval (Kai North, Alphaeus Dmonte, Tharindu Ranasinghe, Matthew Shardlow and Marcos Zampieri) – In-person poster
61  ACTA: Short-Answer Grading in High-Stakes Medical Exams (King Yiu Suen, Victoria Yaneva, Le An Ha, Janet Mee, Yiyun Zhou and Polina Harik) – In-person poster
63  Training for Grammatical Error Correction Without Human-Annotated L2 Learners’ Corpora (Mikio Oda) – In-person poster
65  Exploring a New Grammatico-functional Type of Measure as Part of a Language Learning Expert System (Cyriel Mallart, Andrew Simpkin, Rémi Venant, Nicolas Ballier, Bernardo Stearns, Jen Yu Li and Thomas Gaillat) – In-person poster
66  Japanese Lexical Complexity for Non-Native Readers: A New Dataset (Yusuke Ide, Masato Mita, Adam Nohejl, Hiroki Ouchi and Taro Watanabe) – In-person poster
74  CEFR-based Contextual Lexical Complexity Classifier in English and French (Desislava Aleksandrova and Vincent Pouliot) – In-person poster
77  Transformer-based Hebrew NLP models for Short Answer Scoring in Biology (Abigail Gurin Schleifer, Beata Beigman Klebanov, Moriah Ariely and Giora Alexandron) – In-person poster
78  Comparing Neural Question Generation Architectures for Reading Comprehension (E. Margaret Perkoff, Abhidip Bhattacharyya, Jon Z. Cai and Jie Cao) – In-person poster
81  A dynamic model of lexical experience for tracking of oral reading fluency (Beata Beigman Klebanov, Michael Suhan, Zuowei Wang and Tenaha O’Reilly) – In-person poster
86  Rating Short L2 Essays on the CEFR Scale with GPT-4 (Kevin P. Yancey, Geoffrey T. LaFlair, Anthony R. Verardi and Jill Burstein) – In-person poster
90  Learning from Partially Annotated Data: Example-aware Creation of Gap-filling Exercises for Language Learning (Semere Kiros Bitew, Johannes Deleu, A. Seza Doğruöz, Chris Develder and Thomas Demeester) – In-person poster
97  Is ChatGPT a Good Teacher Coach? Measuring Zero-Shot Performance For Scoring and Providing Actionable Insights on Classroom Instruction (Rose Wang and Dorottya Demszky) – In-person poster
99  Does BERT Exacerbate Gender or L1 Biases in Automated English Speaking Assessment? (Alexander Kwako, Yixin Wan, Jieyu Zhao, Mark Hansen, Kai-Wei Chang and Li Cai) – In-person poster
100  MultiQG-TI: Towards Question Generation from Multi-modal Sources (Zichao Wang and Richard Baraniuk) – In-person poster
106  Socratic Questioning of Novice Debuggers: A Benchmark Dataset and Preliminary Evaluations (Erfan Al-Hossami, Razvan Bunescu, Ryan Teehan, Laurel Powell, Khyati Mahajan and Mohsen Dorodchi) – In-person poster
114  Empowering Conversational Agents using Semantic In-Context Learning (Amin Omidvar and Aijun An) – In-person poster
115  NAISTeacher: A Prompt and Rerank Approach to Generating Teacher Utterances in Educational Dialogues (Justin Vasselli, Christopher Vasselli, Adam Nohejl and Taro Watanabe) – In-person poster
21  ReadAlong Studio Web Interface for Digital Interactive Storytelling (Aidan Pine, David Huggins-Daines, Eric Joanis, Patrick Littell, Marc Tessier, Delasie Torkornoo, Rebecca Knowles, Roland Kuhn and Delaney Lothian) – In-person demo
28  UKP-SQuARE: An Interactive Tool for Teaching Question Answering (Haishuo Fang, Haritz Puerto and Iryna Gurevych) – In-person demo
57  Generating Better Items for Cognitive Assessments Using Large Language Models (Antonio Laverghetta Jr. and John Licato) – Online poster
99  Does BERT Exacerbate Gender or L1 Biases in Automated English Speaking Assessment? (Alexander Kwako, Yixin Wan, Jieyu Zhao, Mark Hansen, Kai-Wei Chang and Li Cai) – Online poster
102  Inspecting Spoken Language Understanding from Kids for Basic Math Learning at Home (Eda Okur, Roddy Fuentes Alba, Saurav Sahay and Lama Nachman) – Online poster
112  Assessing the efficacy of large language models in generating accurate teacher responses (Yann Hicke, Abhishek Masand, Wentao Guo and Tushaar Gangavarapu) – Online poster
19  Scalable and Explainable Automated Scoring for Open-Ended Constructed Response Math Word Problems (Scott Hellman, Alejandro Andrade and Kyle Habermehl) – Online poster
49  You’ve Got a Friend in … a Language Model? A Comparison of Explanations of Multiple-Choice Items of Reading Comprehension between ChatGPT and Humans (George Duenas, Sergio Jimenez and Geral Eduardo Mateus Ferro) – Online poster
24  Labels are not necessary: Assessing peer-review helpfulness using domain adaptation based on self-training (Chengyuan Liu, Divyang Doshi, Muskaan Bhargava, Ruixuan Shang, Jialin Cui, Dongkuan Xu and Edward Gehringer) – Online poster
56  ALEXSIS+: Improving Substitute Generation and Selection for Lexical Simplification with Information Retrieval (Kai North, Alphaeus Dmonte, Tharindu Ranasinghe, Matthew Shardlow and Marcos Zampieri) – Online poster
113  RETUYT-InCo at BEA 2023 Shared Task: Tuning Open-Source LLMs for Generating Teacher Responses (Alexis Baladón, Ignacio Sastre, Luis Chiruzzo and Aiala Rosá) – Online poster
21  ReadAlong Studio Web Interface for Digital Interactive Storytelling (Aidan Pine, David Huggins-Daines, Eric Joanis, Patrick Littell, Marc Tessier, Delasie Torkornoo, Rebecca Knowles, Roland Kuhn and Delaney Lothian) – Online demo
15:30–16:00 Afternoon Coffee Break
16:00–16:40 Ambassador paper by Anaïs Tack
Generating Teacher Responses in Educational Dialogues: The AI Teacher Test & BEA 2023 Shared Task
16:40–17:25 Keynote by Jordana Heller
Interrupting Linguistic Bias in Written Communication with NLP tools
17:25–17:30 Closing remarks
From 17:45 Dinner at NBA Courtside Restaurant

Instructions for presenters

  • In-person posters: Poster stands at the venue will accommodate A0 posters in portrait orientation. Your poster may be smaller than that, but make sure it fits on the stand provided. Apart from this, we do not have any specific requirements regarding font types, sizes, etc.; just keep in mind that, since you are presenting in person, the poster has to be readable from a distance, so the font should not be too small.
  • Online posters: We do not have any specific requirements regarding font types, sizes, etc. In line with the physical posters, it might be a good idea to have your digital poster in portrait orientation. During the workshop, you will be presenting your poster in a “virtual” room mimicking the in-person poster sessions (further details to be provided by ACL / Underline). If you have any further technical questions about virtual presentations, please contact Underline directly at acl2023@underline.io.
  • In-person and online demos: In addition to the poster, please demonstrate how your proposed system works on your laptop. Wi-Fi will be available at the venue for browser-based systems. If you have any further technical questions about virtual presentations, please contact Underline directly at acl2023@underline.io.

Please note that there are no dedicated templates for BEA posters, but for inspiration, you can take a look at the recorded poster talks from last year’s edition of BEA here.

Anti-Harassment Policy

SIGEDU adheres to the ACL Anti-Harassment Policy for the BEA workshops. Any participant of the workshop who experiences harassment or hostile behavior may contact any current member of the ACL Executive Committee or contact Priscilla Rasmussen, who is usually available at the registration desk of the conference. Please be assured that if you approach us, your concerns will be kept in strict confidence, and we will consult with you on any actions taken.

Share Code & Data on GitHub

If you are interested in sharing your code and data with the BEA community, please tag your repository with the #bea-workshop topic we created on GitHub.

Important Dates

Note: All deadlines are 11:59pm UTC-12 (anywhere on earth).

  • Anonymity Period Begins: Friday, March 24, 2023
  • Submission Deadline: Tuesday, May 2, 2023 (extended from Monday, April 24)
  • Notification of Acceptance: Tuesday, May 23, 2023 (extended from Monday, May 22)
  • Camera-ready Papers Due: Wednesday, May 31, 2023 (extended from Tuesday, May 30)
  • Pre-recorded Videos Due: Monday, June 12, 2023
  • Workshop: Thursday, July 13, 2023

Registration

If you wish to attend the workshop, you must register with the ACL conference here. Select BEA from the list of offered workshops. There is no need to have a paper accepted. The workshop is open to anyone who wishes to attend. Importantly, at least one author of each accepted paper must register.

Visa information

All visa requests are processed by the ACL’s dedicated visa assistance team. To apply for an invitation letter, please follow the information at https://2023.aclweb.org/blog/visa-info/. Specifically, for invitation letters and further questions, please reach out to acl2023_visa_help@googlegroups.com, stating your name, institution, address, role in the conference (author/reviewer/workshop organizer/invited speaker/other participant), and any additional reasons/justifications for attending. The visa assistance team will be in touch to help with the invitation letter request.

The ACL 2023 invitation letter form can be found here.

Submission Information

We will be using the ACL Submission Guidelines for the BEA Workshop this year. Authors are invited to submit a long paper of up to eight (8) pages of content, plus unlimited references; final versions of long papers will be given one additional page of content (up to 9 pages) so that reviewers’ comments can be taken into account. We also invite short papers of up to four (4) pages of content, plus unlimited references. Upon acceptance, short papers will be given five (5) content pages in the proceedings. Authors are encouraged to use this additional page to address reviewers’ comments in their final versions. Authors of papers that describe systems are also invited to give a demo of their system. If you would like to present a demo in addition to presenting the paper, please make sure to select either “long paper + demo” or “short paper + demo” under “Submission Category” on the START submission page.

Previously published papers cannot be accepted. The submissions will be reviewed by the program committee. As reviewing will be blind, please ensure that papers are anonymous. Self-references that reveal the author’s identity, e.g., “We previously showed (Smith, 1991) …”, should be avoided. Instead, use citations such as “Smith previously showed (Smith, 1991) …”.

We have also included a conflict-of-interest field in the submission form. You should mark all potential reviewers who have been authors on the paper, are from the same research group or institution, or have seen versions of this paper or discussed it with you.

Note on Ethics Statement and Limitations sections: We welcome submissions with Ethics Statement and Limitations sections included. These sections will not count towards the page limit in accordance with ACL guidelines. This year, such sections are not yet mandatory and papers missing them will not be penalized. Next year, we will include more detailed information on these sections in our submission guidelines.

We will be using the START conference system to manage submissions: https://www.softconf.com/acl2023/bea2023/

LaTeX and Word Templates

Paper submissions must use the official ACL style templates, which are available here. Please follow the general paper formatting guidelines for *ACL conferences, available here. Authors may not modify these style files or use templates designed for other conferences. Submissions that do not conform to the required styles, including paper size, margin width, and font size restrictions, will be rejected without review.
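
For orientation, a minimal submission skeleton might look as follows. This is a sketch only, assuming the acl.sty and acl_natbib.bst files shipped with the official templates linked above; it is not a substitute for the full template.

    % Minimal skeleton (illustration only), assuming the official ACL
    % style files; the [review] option produces the anonymized,
    % line-numbered version for submission.
    \documentclass[11pt]{article}
    \usepackage[review]{acl}
    \usepackage{times}
    \usepackage{latexsym}

    \title{Title of Your BEA 2023 Submission}
    \author{Anonymous BEA submission}

    \begin{document}
    \maketitle
    \begin{abstract}
    One paragraph summarizing the contribution.
    \end{abstract}

    \section{Introduction}
    Body text within the page limits described above.

    \bibliography{custom}
    \bibliographystyle{acl_natbib}
    \end{document}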

Double Submission Policy

We will follow the official ACL double-submission policy. Specifically, papers being submitted both to BEA and another conference or workshop must:

  • Note on the title page (as a footnote on the abstract) the other conference or workshop to which they are being submitted.
  • State on the title page that if the authors choose to present their paper at BEA (assuming it was accepted), then the paper will be withdrawn from other conferences and workshops.

Sponsors

Gold Sponsors

Cambridge University Press & Assessment, CATALPA, Duolingo, ETS, Grammarly, NBME

Silver Sponsors

Cognii

Sponsoring Opportunities

We are extremely grateful to the sponsors of past workshops: in recent years, we have been supported by Duolingo, Grammarly, NBME, iLexIR, Educational Testing Service, and Newsela. This year, we want to continue helping students attend the workshop, including covering the student post-workshop dinner (if the workshop runs in person) and offering student grants covering the BEA registration fees. We are hoping to identify sponsors who might be willing to contribute $100 (Bronze), $250 (Silver), or $500 (Gold) to subsidize some of the workshop costs. Perks of sponsorship include logos on the workshop website and in the proceedings. If you would like to sponsor the BEA, please send us an email.

Organizing Committee

Workshop contact email address: bea.nlp.workshop@gmail.com

Program Committee

  • Sihat Afnan, Bangladesh University of Engineering and Technology
  • Tazin Afrin, Educational Testing Service
  • Erfan Al-Hossami, University of North Carolina at Charlotte
  • Desislava Aleksandrova, CBC/Radio-Canada
  • Aderajew Alem, Wachemo University
  • Giora Alexandron, Weizmann Institute of Science
  • David Alfter, Université catholique de Louvain
  • Alejandro Andrade, Pearson Knowledge Technologies
  • Nischal Ashok Kumar, University of Massachusetts Amherst
  • Berk Atil, Pennsylvania State University
  • Rabin Banjade, University of Memphis
  • Michael Gringo Angelo Bayona, Trinity College Dublin
  • Lee Becker, Pearson
  • Beata Beigman Klebanov, Educational Testing Service
  • Marie Bexte, FernUniversität in Hagen
  • Abhidip Bhattacharyya, University of Colorado Boulder
  • Serge Bibauw, Universidad Central del Ecuador; KU Leuven
  • Shayekh Bin Islam, Bangladesh University of Engineering and Technology
  • Daniel Brenner, ETS
  • Ted Briscoe, MBZUAI
  • Dominique Brunato, Institute of Computational Linguistics / CNR (Pisa, Italy)
  • Chris Callison-Burch, University of Pennsylvania
  • Hannan Cao, National University of Singapore
  • Jie Cao, University of Colorado Boulder
  • Brian Carpenter, Indiana University of Pennsylvania
  • Dumitru-Clementin Cercel, University Politehnica of Bucharest
  • Chung-Chi Chen, National Institute of Advanced Industrial Science and Technology
  • Guanliang Chen, Monash University
  • Hyundong Cho, USC ISI
  • Martin Chodorow, The City University of New York
  • Aubrey Condor, University of California, Berkeley
  • Mark Core, University of Southern California
  • Steven Coyne, Tohoku University
  • Scott Crossley, Vanderbilt University
  • Sam Davidson, University of California, Davis
  • Kordula De Kuthy, University of Tübingen
  • Jasper Degraeuwe, Ghent University
  • Thomas Demeester, Internet Technology and Data Science Lab (IDLab), Ghent University
  • Carrie Demmans Epp, University of Alberta
  • Dorottya Demszky, Stanford University
  • Yuning Ding, FernUniversität in Hagen
  • Rahul Divekar, ETS
  • George Duenas, Universidad Pedagogica Nacional
  • Masaki Eguchi, University of Oregon
  • Yo Ehara, Tokyo Gakugei University
  • Mariano Felice, British Council
  • Wanyong Feng, UMass Amherst
  • Nigel Steven Fernandez, University of Massachusetts Amherst
  • James Fiacco, Carnegie Mellon University
  • Michael Flor, Educational Testing Service
  • Estibaliz Fraca, University College London
  • Kotaro Funakoshi, Tokyo Institute of Technology
  • Thomas Gaillat, University of Rennes 2
  • Ananya Ganesh, University of Colorado Boulder
  • Lingyu Gao, TTIC
  • Rujun Gao, Texas A&M University
  • Ritik Garg, Extramarks Education India
  • Christian Gold, FernUniversität in Hagen
  • Samuel González-López, Technological University of Nogales Sonora
  • Le An Ha, University of Wolverhampton
  • Ching Nam Hang, City University of Hong Kong
  • Nicolas Hernandez, Nantes University - LS2N
  • Chung-Chi Huang, Frostburg State University
  • Ping-Yu Huang, The General Education Center, Ming Chi University of Technology
  • Yi-Ting Huang, Academia Sinica
  • David Huggins-Daines, independent researcher
  • Yusuke Ide, NAIST
  • Joseph Marvin Imperial, University of Bath
  • Radu Tudor Ionescu, University of Bucharest
  • Qinjin Jia, North Carolina State University
  • Helen Jin, University of Pennsylvania
  • Richard Johansson, University of Gothenburg
  • Masahiro Kaneko, Tokyo Institute of Technology
  • Neha Kardam, University of Washington
  • Anisia Katinskaia, University of Helsinki
  • Elma Kerz, RWTH Aachen University
  • Mamoru Komachi, Hitotsubashi University
  • Roland Kuhn, National Research Council of Canada
  • Alexander Kwako, UCLA
  • Kristopher Kyle, University of Oregon
  • Geoffrey LaFlair, Duolingo
  • Antonio Laverghetta Jr., University of South Florida
  • Jaewook Lee, UMass Amherst
  • Ji-Ung Lee, UKP Lab, TU Darmstadt, Germany
  • Arun Balajiee Lekshmi Narayanan, University of Pittsburgh
  • Xu Li, Zhejiang University
  • Chengyuan Liu, North Carolina State University
  • Yudong Liu, Western Washington University
  • Zhexiong Liu, University of Pittsburgh
  • Zoey Liu, University of Florida
  • Susan Lottridge, Cambium Assessment
  • Anastassia Loukina, Grammarly Inc.
  • Jiaying Lu, Emory University
  • Jakub Macina, ETH Zurich
  • Lieve Macken, Ghent University, Belgium
  • James Martin, University of Colorado Boulder
  • Sandeep Mathias, Presidency University, Bangalore
  • Janet Mee, NBME
  • Detmar Meurers, University of Tübingen
  • Phoebe Mulcaire, Duolingo
  • Tsegay Mullu, Wachemo University
  • Faizan E Mustafa, QUIBIQ GmbH
  • Farah Nadeem, World Bank
  • Ben Naismith, Duolingo
  • Sungjin Nam, ACT Inc.
  • Diane Napolitano, The Associated Press
  • Kamel Nebhi, Education First
  • Seyed Parsa Neshaei, Sharif University of Technology
  • Hwee Tou Ng, National University of Singapore
  • Huy Nguyen, Amazon
  • Gebregziabihier Nigusie, Mizan-Tepi University
  • S Jaya Nirmala, National Institute of Technology, Tiruchirapalli
  • Kai North, George Mason University
  • Eda Okur, Intel Labs
  • Priti Oli, University of Memphis
  • Kostiantyn Omelianchuk, Grammarly
  • Brian Ondov, National Library of Medicine
  • Christopher Ormerod, Cambium Assessment
  • Simon Ostermann, Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI)
  • Ulrike Pado, Hochschule für Technik Stuttgart
  • Frank Palma Gomez, CUNY Queens College
  • Chanjun Park, Upstage AI
  • Rebecca Passonneau, Penn State
  • Fabio Perez, independent researcher
  • E. Margaret Perkoff, University of Colorado Boulder
  • Jakob Prange, Hong Kong Polytechnic University
  • Reinald Adrian Pugoy, University of the Philippines Open University
  • Long Qin, Alibaba Cloud
  • Mengyang Qiu, University at Buffalo
  • Muhammad Reza Qorib, National University of Singapore
  • Martí Quixal, University of Tübingen
  • Arjun Ramesh Rao, Microsoft
  • Vivi Rantung, Universitas Negeri Manado
  • Manav Rathod, Glean
  • Brian Riordan, ETS
  • Frankie Robertson, University of Jyväskylä
  • Aiala Rosá, Instituto de Computación, Facultad de Ingeniería, Udelar
  • Carolyn Rosé, Carnegie Mellon University
  • Alla Rozovskaya, Queens College, CUNY
  • Igor Samokhin, Grammarly
  • Alexander Scarlatos, UMass Amherst
  • Matthew Shardlow, Manchester Metropolitan University
  • Anchal Sharma, PES University
  • Shady Shehata, MBZUAI
  • Gyu-Ho Shin, Palacký University Olomouc
  • Shashank Sonkar, Rice University
  • Katherine Stasaski, Salesforce Research
  • Helmer Strik, Radboud University
  • Hakyung Sung, University of Oregon
  • Abhijit Suresh, Reddit Inc.
  • Jan Švec, University of West Bohemia
  • Xiangru Tang, Yale University
  • Zhongwei Teng, Vanderbilt University Nashville
  • Rushil Thareja, Extramarks Education
  • Naveen Thomas, Texas A&M University
  • Alexandra Uitdenbogerd, RMIT
  • Shriyash Upadhyay, Martian
  • Masaki Uto, The University of Electro-Communications
  • Sowmya Vajjala, National Research Council, Canada
  • Justin Vasselli, Nara Institute of Science and Technology
  • Giulia Venturi, Institute for Computational Linguistics “A. Zampolli” (CNR-ILC)
  • Anthony Verardi, Duolingo
  • Carl Vogel, Trinity College Dublin
  • Elena Volodina, University of Gothenburg, Sweden
  • Spencer von der Ohe, University of Alberta
  • Zichao Wang, Rice University
  • Taro Watanabe, Nara Institute of Science and Technology
  • Michael White, The Ohio State University
  • Alistair Willis, The Open University
  • Man Fai Wong, City University of Hong Kong
  • Menbere Worku, Wachemo University
  • Changrong Xiao, Tsinghua University
  • Yiqiao Xu, North Carolina State University
  • Kevin P. Yancey, Duolingo
  • Roman Yangarber, University of Helsinki
  • Su-Youn Yoon, EduLab
  • Kamyar Zeinalipour, University of Siena
  • Hengyuan Zhang, Tsinghua University
  • Jing Zhang, Emory University
  • Jessica Zipf, University of Konstanz
  • Michael Zock, CNRS-LIS