19th Workshop on Innovative Use of NLP for Building Educational Applications

Mexico City Hilton Reforma

Quick Info

Co-located with: NAACL 2024
Location: Mexico City, Mexico
Deadline: March 16, 2024 (extended from March 10)
Date: June 20, 2024
Organizers: Ekaterina Kochmar, Marie Bexte, Jill Burstein, Andrea Horbach, Ronja Laarmann-Quante, Anaïs Tack, Victoria Yaneva, Zheng Yuan
Contact: bea.nlp.workshop@gmail.com
Proceedings: https://aclanthology.org/volumes/2024.bea-1/

Workshop Description

The BEA Workshop is a leading venue for NLP innovation in the context of educational applications. It is one of the largest one-day workshops in the ACL community, with over 100 registered attendees in the past several years. The growing interest in educational applications and the diverse community of researchers involved led to the creation of the Special Interest Group in Educational Applications (SIGEDU) in 2017, which currently has over 300 members.

The workshop’s continuing growth reflects how technology is increasingly fulfilling societal demands. For instance, the BEA16 workshop in 2021 hosted a panel discussion on “New Challenges for Educational Technology in the Time of the Pandemic”, addressing the pressing issues around COVID-19. Additionally, NLP has evolved to aid diverse learning domains, including writing, speaking, reading, science, and mathematics, as well as the related intra-personal (e.g., self-confidence) and inter-personal (e.g., peer collaboration) skills. Within these areas, the community continues to develop and deploy innovative NLP approaches for use in educational settings.

Another significant advancement in educational applications within the Computational Linguistics (CL) community is the continuing series of shared-task competitions organized by and hosted at the BEA workshop. Over the years, this initiative has included four dedicated tasks focused solely on grammatical error detection and correction. Moreover, NLP/Education shared tasks have expanded into novel research areas, such as the Automated Evaluation of Scientific Writing at BEA11, Native Language Identification at BEA12, Second Language Acquisition Modeling at BEA13, Complex Word Identification at BEA13, and Generating AI Teacher Responses in Educational Dialogues at BEA18. These competitions have significantly bolstered the visibility of and interest in our field.

The 19th BEA workshop will adopt the same format as the 2023 edition and will be hybrid, integrating both in-person and virtual presentations and attendance. The workshop will feature a keynote talk and a main workshop track with oral presentation sessions and large poster sessions, facilitating the presentation of a wide array of original research. Moreover, there will be two shared task tracks, each comprising an oral overview presentation by the shared task organizers and several poster presentations by the shared task participants.

We expect that the workshop will continue to highlight novel technologies and opportunities, including the use of state-of-the-art large language models in educational applications, and challenges around responsible AI for educational NLP, in English as well as other languages.

Sponsors

Gold Sponsors
British Council, Cambridge University Press & Assessment, CATALPA, Duolingo English Test, ETS, NBME
Sponsoring Opportunities
We are extremely grateful to our sponsors for the past workshops: in recent years, we have been supported by British Council, Cambridge University Press & Assessment, CATALPA, Cognii, Duolingo, Duolingo English Test, Educational Testing Service, Grammarly, iLexIR, NBME, and Newsela. This year, we want to continue supporting students attending the workshop, including covering the student post-workshop dinner and offering grants for best paper presentations. We are hoping to identify sponsors willing to contribute $100 (Bronze), $250 (Silver), or $500 (Gold) to subsidize some of the workshop costs. Perks of sponsorship include logos on the workshop website and in the proceedings. If you would like to sponsor BEA, please send us an email.

Call for Papers

The workshop will accept submissions of both full papers and short papers, eligible for either oral or poster presentation. We solicit papers that incorporate NLP methods, including, but not limited to:

  • automated scoring of open-ended textual and spoken responses;
  • game-based instruction and assessment;
  • educational data mining;
  • intelligent tutoring;
  • collaborative learning environments;
  • peer review;
  • grammatical error detection and correction;
  • learner cognition;
  • spoken dialog;
  • multimodal applications;
  • annotation standards and schemas;
  • tools and applications for classroom teachers, learners, or test developers; and
  • use of corpora in educational tools.

Important Dates

All deadlines are 11:59pm UTC-12 (“anywhere on Earth”); the short sketch after the dates below shows what this convention means in UTC.

Submission Deadline: March 16, 2024 (extended from March 10)
Notification of Acceptance: April 15, 2024 (moved from April 14)
Camera-ready Papers Due: April 24, 2024
Pre-recorded Videos Due: May 19, 2024
Workshop: June 20, 2024
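
For illustration, here is a minimal Python sketch (using only the standard library) of what the “anywhere on Earth” convention means in practice: a deadline has not passed as long as the date still holds in the UTC-12 zone, so 11:59pm AoE on March 16 is 11:59am on March 17 in UTC.

    from datetime import datetime, timedelta, timezone

    # "Anywhere on Earth" (AoE) is UTC-12: a deadline holds as long as
    # the date has not yet ended in the westernmost time zone.
    AOE = timezone(timedelta(hours=-12), "AoE")

    # Submission deadline: March 16, 2024, 11:59pm AoE.
    deadline = datetime(2024, 3, 16, 23, 59, tzinfo=AOE)

    # The same instant expressed in UTC: 2024-03-17 11:59:00+00:00.
    print(deadline.astimezone(timezone.utc))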

Submission Guidelines

To streamline the submission process, we rely on the ACL submission guidelines and the START conference system, accessible at https://softconf.com/naacl2024/BEA2024. All submissions undergo review by the program committee.

Long, Short, and Demo Papers
Authors can choose to submit long papers (up to eight (8) pages) or short papers (up to four (4) pages), with unlimited pages for references. After peer review, all accepted papers will be allotted an additional page of content (up to nine for long papers, five for short papers) so that authors can address reviewer comments. Authors of papers that describe systems are strongly urged to present a live demonstration. If opting for this, authors should choose either “long paper + demo” or “short paper + demo” under “Submission Category” on the submission page.
LaTeX and Word Templates
Authors must ensure their paper submissions adhere to the general paper formatting guidelines for *ACL conferences and use the official ACL style templates. Do not modify these style files or use templates intended for other conferences. Submissions failing to meet the required styles, including paper size, margin width, and font size restrictions, will be rejected without review.
Limitations
Authors are required to discuss the limitations of their work in a dedicated section titled “Limitations”. This section should be included at the end of the paper, before the references, and it will not count toward the page limit. This applies to both long and short papers. Note that prior to the December 2023 cycle, this section was optional.
Ethics Policy
Authors are required to honour the ethical code set out in the ACL Code of Ethics. The ethical impact of our research, our use of data, and the potential applications of our work have always been important considerations, and as artificial intelligence becomes more mainstream, these issues are increasingly pertinent. We ask that all authors read the code and ensure that their work conforms to it. Authors are encouraged to devote a section of their paper to concerns about the ethical impact of the work and to a discussion of its broader impacts, which will be taken into account in the review process. This discussion may extend into a fifth page (short papers) or ninth page (long papers).
Anonymity
Given the blind review process, it is essential to ensure that papers remain anonymous. Authors should avoid self-references that disclose their identity (e.g., “We previously showed (Smith, 1991)”), opting instead for citations like “Smith previously showed (Smith, 1991)”.
Conflicts of Interest
Authors are required to mark potential reviewers who have co-authored the paper, belong to the same research group or institution, or have had prior exposure to the paper, ensuring transparency in the review process.
Double Submissions
We adhere to the official ACL double-submission policy. If papers are submitted to both BEA and another conference or workshop, authors must specify the other event on the title page (as a footnote on the abstract). Additionally, the title page should state that if the paper is accepted for presentation at BEA, it will be withdrawn from other conferences and workshops.
Republications
Previously published papers will not be accepted.

Presentation Guidelines

All accepted papers must be presented at the workshop to appear in the proceedings. The workshop will include both in-person and virtual presentation options. At least one author of each accepted paper must register for the conference by the early registration deadline.

Long and short papers will be presented orally or as posters as determined by the workshop organizers. While short papers will be distinguished from long papers in the proceedings, there will be no distinction in the proceedings between papers presented orally and papers presented as posters.

In-person posters
Poster stands at the venue will accommodate A0 posters in portrait orientation (A0 is 841 mm × 1189 mm). Your actual poster may be smaller than that, but make sure it fits on the stand provided. Apart from this, we do not have any specific requirements regarding font types, sizes, etc.; just keep in mind that since you are presenting in person, the poster has to be easily readable from a distance, so the font should not be too small.
Online posters
We do not have any specific requirements regarding font types, sizes, etc. In line with the physical posters, it might be a good idea to have your digital poster in portrait orientation. During the workshop, you will be presenting your poster in a “virtual” room mimicking the in-person poster sessions. Further details are to be provided by NAACL / Underline.
In-person and online demos
In addition to the poster, please be prepared to demonstrate how your proposed system works on your laptop. A Wi-Fi connection will be available at the venue for browser-based systems.

Please note that there are no dedicated templates for BEA posters, but for inspiration, you can take a look at the recorded poster talks from previous years’ editions of BEA.

Share Code & Data on GitHub

If you are interested in sharing your code and data with the BEA community, tag your repository with the bea-workshop topic we created on GitHub so that others can find it.
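
You can add the topic under your repository’s “About” settings on github.com. As an illustration only, here is a minimal Python sketch (using the third-party requests package) that sets the topic via GitHub’s REST API; the owner, repository, and token below are hypothetical placeholders:

    import requests

    # Hypothetical placeholders: substitute your own repository and a
    # personal access token that is allowed to administer it.
    OWNER, REPO, TOKEN = "your-user", "your-bea-project", "ghp_..."

    url = f"https://api.github.com/repos/{OWNER}/{REPO}/topics"
    headers = {
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {TOKEN}",
    }

    # The topics endpoint replaces the whole set, so fetch the current
    # topics first and add bea-workshop to them.
    current = requests.get(url, headers=headers).json().get("names", [])
    response = requests.put(
        url, headers=headers, json={"names": sorted(set(current) | {"bea-workshop"})}
    )
    response.raise_for_status()
    print("Topics now:", response.json()["names"])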

Shared Tasks

In addition to the main workshop track, the workshop hosts two shared task tracks. For more information on how to participate and for the latest updates, please refer to the shared task websites.

Task 1: Automated Prediction of Item Difficulty and Item Response Time (APIDIRT)

Task 2: Multilingual Lexical Simplification Pipeline (MLSP)

Workshop Program

  June 20, 2024 (all times in GMT-6)
Location: Don Alberto 4 (in person) or Underline.io (remote)
09:00 - 09:05 Opening Remarks
09:05 - 09:50 Keynote by Alla Rozovskaya (City University of New York): ‘Multilingual Low-Resource Natural Language Processing for Language Learning’
09:50 - 10:30 Shared Tasks Session
09:50 - 10:10 The BEA 2024 Shared Task on the Multilingual Lexical Simplification Pipeline (Matthew Shardlow, Fernando Alva-Manchego, Riza Batista-Navarro, Stefan Bott, Saul Calderon Ramirez, Rémi Cardon, Thomas François, Akio Hayakawa, Andrea Horbach, Anna Huelsing, Yusuke Ide, Joseph Marvin Imperial, Adam Nohejl, Kai North, Laura Occhipinti, Nelson Peréz Rojas, Nishat Raihan, Tharindu Ranasinghe, Martin Solis Salazar, Sanja Stajner, Marcos Zampieri, Horacio Saggion)
ORAL MLSP_SHAREDTASK
10:10 - 10:30 Findings from the First Shared Task on Automated Prediction of Difficulty and Response Time for Multiple-Choice Questions (Victoria Yaneva, Kai North, Peter Baldwin, Le An Ha, Saed Rezayi, Yiyun Zhou, Sagnik Ray Choudhury, Polina Harik, Brian Clauser)
ORAL APIDIRT_SHAREDTASK
10:30 - 11:00 Morning Coffee Break
11:00 - 11:30 Spotlight talks for Poster / Demo Session A (In-person + Virtual)
11:30 - 12:30 Poster / Demo Session A
92 Using Adaptive Empathetic Responses for Teaching English (Li Siyan, Teresa Shao, Julia Hirschberg, Zhou Yu)
POSTER MAIN
94 Beyond Flesch-Kincaid: Prompt-based Metrics Improve Difficulty Classification of Educational Texts (Donya Rooein, Paul Röttger, Anastassia Shaitarova, Dirk Hovy)
POSTER MAIN
99 Can Language Models Guess Your Identity? Analyzing Demographic Biases in AI Essay Scoring (Alexander Kwako, Christopher Ormerod)
POSTER MAIN
102 Automated Scoring of Clinical Patient Notes: Findings From the Kaggle Competition and Their Translation into Practice (Victoria Yaneva, King Yiu Suen, Le An Ha, Janet Mee, Milton Quranda, Polina Harik)
POSTER MAIN
104 A World CLASSE Student Summary Corpus (Scott Crossley, Perpetual Baffour, Mihai Dascalu, Stefan Ruseti)
POSTER MAIN
106 Improving Socratic Question Generation using Data Augmentation and Preference Optimization (Nischal Ashok Kumar, Andrew Lan)
POSTER MAIN
110 Scoring with Confidence? – Exploring High-confidence Scoring for Saving Manual Grading Effort (Marie Bexte, Andrea Horbach, Lena Schützler, Oliver Christ, Torsten Zesch)
POSTER MAIN
113 Improving Transfer Learning for Early Forecasting of Academic Performance by Contextualizing Language Models (Ahatsham Hayat, Bilal Khan, Mohammad Hasan)
POSTER MAIN
114 Can GPT-4 do L2 analytic assessment? (Stefano Banno, Hari Krishna Vydana, Kate Knill, Mark Gales)
POSTER MAIN
120 Automated Evaluation of Teacher Encouragement of Student-to-Student Interactions in a Simulated Classroom Discussion (Michael Ilagan, Beata Beigman Klebanov, Jamie Mikeska)
POSTER MAIN
121 Explainable AI in Language Learning: Linking Empirical Evidence and Theoretical Concepts in Proficiency and Readability Modeling of Portuguese (Luisa Ribeiro-Flucht, Xiaobin Chen, Detmar Meurers)
POSTER MAIN
5 How Good are Modern LLMs in Generating Relevant and High-Quality Questions at Different Bloom’s Skill Levels for Indian High School Social Science Curriculum? (Nicy Scaria, Suma Chenna, Deepak Subramani)
VIRTUAL POSTER MAIN
95 Large Language Models Are State-of-the-Art Evaluator for Grammatical Error Correction (Masamune Kobayashi, Masato Mita, Mamoru Komachi)
VIRTUAL POSTER MAIN
116 Using Program Repair as a Proxy for Language Models’ Feedback Ability in Programming Education (Charles Koutcheme, Nicola Dainese, Arto Hellas)
VIRTUAL POSTER MAIN
122 Fairness in Automated Essay Scoring: A Comparative Analysis of Algorithms on German Learner Essays from Secondary Education (Nils-Jonathan Schaller, Yuning Ding, Andrea Horbach, Jennifer Meyer, Thorben Jansen)
VIRTUAL POSTER MAIN
125 Identifying Fairness Issues in Automatically Generated Testing Content (Kevin Stowe)
VIRTUAL POSTER MAIN
105 Predicting Item Difficulty and Item Response Time with Scalar-mixed Transformer Encoder Models and Rational Network Regression Heads (Sebastian Gombert, Lukas Menzel, Daniele Di Mitri, Hendrik Drachsler)
VIRTUAL POSTER APIDIRT_SHAREDTASK
150 UnibucLLM: Harnessing LLMs for Automated Prediction of Item Difficulty and Response Time for Multiple-Choice Questions (Ana-Cristina Rogoz, Radu Tudor Ionescu)
VIRTUAL POSTER APIDIRT_SHAREDTASK
160 The British Council submission to the BEA 2024 shared task (Mariano Felice, Zeynep Duran Karaoz)
VIRTUAL POSTER APIDIRT_SHAREDTASK
168 ITEC at BEA 2024 Shared Task: Predicting Difficulty and Response Time of Medical Exam Questions with Statistical, Machine Learning, and Language Models (Anaïs Tack, Siem Buseyne, Changsheng Chen, Robbe D’hondt, Michiel De Vrindt, Alireza Gharahighehi, Sameh Metwaly, Felipe Kenji Nakano, Ann-Sophie Noreillie)
POSTER APIDIRT_SHAREDTASK
184 Item Difficulty and Response Time Prediction with Large Language Models: An Empirical Analysis of USMLE Items (Okan Bulut, Guher Gorgun, Bin Tan)
VIRTUAL POSTER APIDIRT_SHAREDTASK
185 Utilizing Machine Learning to Predict Question Difficulty and Response Time for Enhanced Test Construction (Rishikesh Fulari, Jonathan Rusert)
VIRTUAL POSTER APIDIRT_SHAREDTASK
186 Leveraging Physical and Semantic Features of text item for Difficulty and Response Time Prediction of USMLE Questions (Gummuluri Venkata Ravi Ram, Ashinee Kesanam, Anand Kumar M)
VIRTUAL POSTER APIDIRT_SHAREDTASK
187 UPN-ICC at BEA 2024 Shared Task: Leveraging LLMs for Multiple-Choice Questions Difficulty Prediction (George Duenas, Sergio Jimenez, Geral Mateus Ferro)
VIRTUAL POSTER APIDIRT_SHAREDTASK
188 Using Machine Learning to Predict Item Difficulty and Response Time in Medical Tests (Mehrdad Yousefpoori-Naeim, Shayan Zargari, Zahra Hatami)
VIRTUAL POSTER APIDIRT_SHAREDTASK
189 Large Language Model-based Pipeline for Item Difficulty and Response Time Estimation for Educational Assessments (Hariram Veeramani, Surendrabikram Thapa, Natarajan Balaji Shankar, Abeer Alwan)
VIRTUAL POSTER APIDIRT_SHAREDTASK
191 UNED team at BEA 2024 Shared Task: Testing different Input Formats for predicting Item Difficulty and Response Time in Medical Exams (Alvaro Rodrigo, Sergio Moreno-Álvarez, Anselmo Peñas)
VIRTUAL POSTER APIDIRT_SHAREDTASK
12:30 - 14:00 Lunch Break
14:00 - 14:30 Spotlight talks for Poster / Demo Session B (In-person + Virtual)
14:30 - 15:30 Poster / Demo Session B
124 Improving Automated Distractor Generation for Math Multiple-choice Questions with Overgenerate-and-rank (Alexander Scarlatos, Wanyong Feng, Andrew Lan, Simon Woodhead, Digory Smith)
POSTER MAIN
133 Evaluating Vocabulary Usage in LLMs (Matthew Durward, Christopher Thomson)
POSTER MAIN
135 Towards Fine-Grained Pedagogical Control over English Grammar Complexity in Educational Text Generation (Dominik Glandorf, Detmar Meurers)
POSTER MAIN
138 LLMs in Short Answer Scoring: Limitations and Promise of Zero-Shot and Few-Shot Approaches (Imran Chamieh, Torsten Zesch, Klaus Giebermann)
POSTER MAIN
139 Automated Essay Scoring Using Grammatical Variety and Errors with Multi-Task Learning and Item Response Theory (Kosuke Doi, Katsuhito Sudoh, Satoshi Nakamura)
POSTER MAIN
140 Error Tracing in Programming: A Path to Personalised Feedback (Martha Shaka, Diego Carraro, Kenneth Brown)
POSTER MAIN
151 Automatic Crossword Clues Extraction for Language Learning (Santiago Berruti, Arturo Collazo, Diego Sellanes, Aiala Rosá, Luis Chiruzzo)
POSTER MAIN
154 Anna Karenina Strikes Again: Pre-Trained LLM Embeddings May Favor High-Performing Learners (Abigail Gurin Schleifer, Beata Beigman Klebanov, Moriah Ariely, Giora Alexandron)
POSTER MAIN
155 Assessing Student Explanations with Large Language Models Using Fine-Tuning and Few-Shot Learning (Dan Carpenter, Wookhee Min, Seung Lee, Gamze Ozogul, Xiaoying Zheng, James Lester)
POSTER MAIN
158 Harnessing GPT to Study Second Language Learner Essays: Can We Use Perplexity to Determine Linguistic Competence? (Ricardo Muñoz Sánchez, Simon Dobnik, Elena Volodina)
POSTER MAIN
166 BERT-IRT: Accelerating Item Piloting with BERT Embeddings and Explainable IRT Models (Kevin P. Yancey, Andrew Runge, Geoffrey LaFlair, Phoebe Mulcaire)
POSTER MAIN
171 From Miscue to Evidence of Difficulty: Analysis of Automatically Detected Miscues in Oral Reading for Feedback Potential (Beata Beigman Klebanov, Michael Suhan, Tenaha O’Reilly, Zuowei Wang)
POSTER MAIN
132 Towards Automated Document Revision: Grammatical Error Correction, Fluency Edits, and Beyond (Masato Mita, Keisuke Sakaguchi, Masato Hagiwara, Tomoya Mizumoto, Jun Suzuki, Kentaro Inui)
VIRTUAL POSTER MAIN
141 Improving Readability Assessment with Ordinal Log-Loss (Ho Hung Lim, John Lee)
POSTER MAIN
145 Automated Sentence Generation for a Spaced Repetition Software (Benjamin Paddags, Daniel Hershcovich, Valkyrie Savage)
VIRTUAL POSTER MAIN
146 Using Large Language Models to Assess Young Students’ Writing Revisions (Tianwen Li, Zhexiong Liu, Lindsay Matsumura, Elaine Wang, Diane Litman, Richard Correnti)
VIRTUAL POSTER MAIN
167 Transfer Learning of Argument Mining in Student Essays (Yuning Ding, Julian Lohmann, Nils-Jonathan Schaller, Thorben Jansen, Andrea Horbach)
VIRTUAL POSTER MAIN
170 Building Robust Content Scoring Models for Student Explanations of Social Justice Science Issues (Allison Bradford, Kenneth Steimel, Brian Riordan, Marcia Linn)
VIRTUAL POSTER MAIN
174 TMU-HIT at MLSP 2024: How Well Can GPT-4 Tackle Multilingual Lexical Simplification? (Taisei Enomoto, Hwichan Kim, Tosho Hirasawa, Yoshinari Nagai, Ayako Sato, Kyotaro Nakajima, Mamoru Komachi)
POSTER MLSP_SHAREDTASK
175 ANU at MLSP-2024: Prompt-based Lexical Simplification for English and Sinhala (Sandaru Seneviratne, Hanna Suominen)
VIRTUAL POSTER MLSP_SHAREDTASK
176 ISEP_Presidency_University at MLSP 2024 Shared Task: Using GPT-3.5 to Generate Substitutes for Lexical Simplification (Benjamin Dutilleul, Mathis Debaillon, Sandeep Mathias)
VIRTUAL POSTER MLSP_SHAREDTASK
177 Archaeology at MLSP 2024: Machine Translation for Lexical Complexity Prediction and Lexical Simplification (Petru Cristea, Sergiu Nisioi)
VIRTUAL POSTER MLSP_SHAREDTASK
178 RETUYT-INCO at MLSP 2024: Experiments on Language Simplification using Embeddings, Classifiers and Large Language Models (Ignacio Sastre, Leandro Alfonso, Facundo Fleitas, Federico Gil, Andrés Lucas, Tomás Spoturno, Santiago Góngora, Aiala Rosá, Luis Chiruzzo)
POSTER MLSP_SHAREDTASK
181 GMU at MLSP 2024: Multilingual Lexical Simplification with Transformer Models (Dhiman Goswami, Kai North, Marcos Zampieri)
VIRTUAL POSTER MLSP_SHAREDTASK
182 ITEC at MLSP 2024: Transferring Predictions of Lexical Difficulty from Non-Native Readers (Anaïs Tack)
POSTER MLSP_SHAREDTASK
15:30 - 16:00 Afternoon Coffee Break
16:00 - 17:20 Oral Presentations
16:00 - 16:20 Synthetic Data Generation for Low-resource Grammatical Error Correction with Tagged Corruption Models (Felix Stahlberg, Shankar Kumar)
ORAL MAIN
16:20 - 16:40 Pillars of Grammatical Error Correction: Comprehensive Inspection Of Contemporary Approaches In The Era of Large Language Models (Kostiantyn Omelianchuk, Andrii Liubonko, Oleksandr Skurzhanskyi, Artem Chernodub, Oleksandr Korniienko, Igor Samokhin)
ORAL MAIN
16:40 - 17:00 Exploring LLM Prompting Strategies for Joint Essay Scoring and Feedback Generation (Maja Stahl, Leon Biermann, Andreas Nehring, Henning Wachsmuth)
ORAL MAIN
17:00 - 17:20 Predicting Initial Essay Quality Scores to Increase the Efficiency of Comparative Judgment Assessments (Michiel De Vrindt, Anaïs Tack, Renske Bouwer, Wim Van Den Noortgate, Marije Lesterhuis)
ORAL MAIN
17:20 - 17:30 Closing remarks & Best paper awards
from 18:00 Dinner

Invited Talks

Keynote: Alla Rozovskaya (CUNY)

Alla Rozovskaya, Queens College, City University of New York
Multilingual Low-Resource Natural Language Processing for Language Learning

Abstract: Recent studies on a wide range of NLP tasks have demonstrated the effectiveness of training paradigms that integrate large language models. However, such methods require large amounts of labeled and unlabeled data, limiting their success to a small set of well-resourced languages. This talk will discuss low-resource approaches for two language learning applications. We will begin with work on generating vocabulary exercises. We will describe an approach that does not require labeled training data and can be used to adapt the exercises to the linguistic profile of the learner. Next, we will discuss our recent work on multilingual grammatical error correction (GEC), addressing the issue of training GEC models for languages with little labeled training data, and the issue of evaluating system performance when high-quality benchmarks are lacking.

Bio: Alla Rozovskaya is an Assistant Professor in the Department of Computer Science at Queens College, City University of New York (CUNY), and a member of the Doctoral Faculty of the Computer Science and Linguistics programs at the CUNY Graduate Center. She earned her Ph.D. in Computational Linguistics at the University of Illinois at Urbana-Champaign, under the supervision of Prof. Dan Roth. Her research interests lie broadly in the area of low-resource and multilingual NLP and educational applications.

Accepted Papers

We received a total of 88 submissions to the main workshop track. After careful review, we accepted 38 papers, a 43% acceptance rate for the main workshop.

Shared Task Papers

In addition to the papers accepted to the main workshop track, the workshop will feature 20 shared task papers, including two shared task overview papers and 18 system description papers.

Task 1: Automated Prediction of Item Difficulty and Item Response Time (APIDIRT)
Task 2: Multilingual Lexical Simplification Pipeline (MLSP)

Participation

Registration

If you wish to attend the workshop, you must register for the NAACL conference and select BEA from the list of offered workshops. There is no need to have a paper accepted: the workshop is open to anyone who wishes to attend. Importantly, at least one author of each accepted paper must register.

Visa information

All visa requests are processed by the ACL’s dedicated visa assistance team. To apply for an invitation letter, please follow the information at https://2024.naacl.org/info-for-participants/#visas and fill in the visa request form provided there. The visa assistance team will then be in touch to help with your visa letter request.

For other travel information and advice, please check NAACL’s page.

Anti-Harassment Policy

SIGEDU adheres to the ACL Anti-Harassment Policy for the BEA workshops. Any participant of the workshop who experiences harassment or hostile behavior may contact any current member of the ACL Executive Committee or contact Priscilla Rasmussen, who is usually available at the registration desk of the conference. Please be assured that if you approach us, your concerns will be kept in strict confidence, and we will consult with you on any actions taken.

Workshop Committees

Organizing Committee

  • Ekaterina Kochmar
  • Marie Bexte
  • Jill Burstein
  • Andrea Horbach
  • Ronja Laarmann-Quante
  • Anaïs Tack
  • Victoria Yaneva
  • Zheng Yuan

Program Committee

  • Tazin Afrin (Educational Testing Service)
  • Prabhat Agarwal (Pinterest)
  • Erfan Al-Hossami (University of North Carolina at Charlotte)
  • Desislava Aleksandrova (CBC/Radio-Canada)
  • Giora Alexandron (Weizmann Institute of Science)
  • David Alfter (UCLouvain)
  • Fernando Alva-Manchego (Cardiff University)
  • Jatin Ambasana (Unitedworld Institute of Technology, Karnavati University)
  • Nico Andersen (DIPF | Leibniz Institute for Research and Information in Education)
  • Alejandro Andrade (Pearson)
  • Tesfa Tegegne Asfaw (Bahir Dar University)
  • Nischal Ashok Kumar (University of Massachusetts Amherst)
  • Berk Atil (Pennsylvania State University)
  • Shiva Baghel (Extramarks)
  • Rabin Banjade (University of Memphis)
  • Stefano Banno (University of Cambridge)
  • Michael Gringo Angelo Bayona (Trinity College Dublin)
  • Lee Becker (Pearson)
  • Beata Beigman Klebanov (Educational Testing Service)
  • Lisa Beinborn (Vrije Universiteit Amsterdam)
  • Enrico Benedetti (University of Bologna)
  • Luca Benedetto (University of Cambridge)
  • Jeanette Bewersdorff (FernUniversität in Hagen)
  • Ummugul Bezirhan (Boston College, TIMSS and PIRLS International Study Center)
  • Smita Bhattacharya (Saarland University)
  • Abhidip Bhattacharyya (University of Massachusetts, Amherst)
  • Serge Bibauw (UCLouvain)
  • Robert-Mihai Botarleanu (National University of Science and Technology POLITEHNICA Bucharest)
  • Allison Bradford (University of California, Berkeley)
  • Ted Briscoe (MBZUAI)
  • Jie Cao (University of Colorado)
  • Dan Carpenter (North Carolina State University)
  • Dumitru-Clementin Cercel (University Politehnica of Bucharest)
  • Imran Chamieh (Hochschule Ruhr West)
  • Jeevan Chapagain (University of Memphis)
  • Mei-Hua Chen (Department of Foreign Languages and Literature, Tunghai University)
  • Luis Chiruzzo (Universidad de la Republica)
  • Yan Cong (Purdue University)
  • Mark Core (University of Southern California)
  • Steven Coyne (Tohoku University / RIKEN)
  • Scott Crossley (Georgia State University)
  • Sam Davidson (University of California, Davis)
  • Orphee De Clercq (LT3, Ghent University)
  • Kordula De Kuthy (Universität Tübingen)
  • Michiel De Vrindt (KU Leuven)
  • Jasper Degraeuwe (Ghent University)
  • Dorottya Demszky (Stanford University)
  • Yang Deng (The Chinese University of Hong Kong)
  • Aniket Deroy (IIT Kharagpur)
  • Chris Develder (Ghent University)
  • Yuning Ding (FernUniversität in Hagen)
  • Rahul Divekar (Educational Testing Service)
  • George Duenas (Universidad Pedagogica Nacional)
  • Matthew Durward (University of Canterbury)
  • Yo Ehara (Tokyo Gakugei University)
  • Yao-Chung Fan (National Chung Hsing University)
  • Effat Farhana (Vanderbilt University)
  • Mariano Felice (University of Cambridge)
  • Nigel Fernandez (University of Massachusetts Amherst)
  • Michael Flor (Educational Testing Service)
  • Jennifer-Carmen Frey (EURAC Research)
  • Kotaro Funakoshi (Tokyo Institute of Technology)
  • Thomas Gaillat (Rennes 2 University)
  • Diana Galvan-Sosa (University of Cambridge)
  • Ashwinkumar Ganesan (Amazon Alexa AI)
  • Achyutarama Ganti (Oakland University)
  • Rujun Gao (Texas A&M University)
  • Ritik Garg (Extramarks Education Pvt. Ltd.)
  • Dominik Glandorf (University of Tübingen, Yale University)
  • Christian Gold (FernUniversität in Hagen)
  • Sebastian Gombert (DIPF | Leibniz Institute for Research and Information in Education)
  • Kiel Gonzales (University of the Philippines Diliman)
  • Cyril Goutte (National Research Council Canada)
  • Prasoon Goyal (The University of Texas at Austin)
  • Pranav Gupta (Cornell University)
  • Abigail Gurin Schleifer (Weizmann Institute of Science)
  • Handoko Handoko (Universitas Andalas)
  • Ching Nam Hang (Department of Computer Science, City University of Hong Kong)
  • Jiangang Hao (Educational Testing Service)
  • Ahatsham Hayat (University of Nebraska-Lincoln)
  • Nicolas Hernandez (Nantes University)
  • Nils Hjortnaes (Indiana University Bloomington)
  • Michael Holcomb (University of Texas Southwestern Medical Center)
  • Heiko Holz (Ludwigsburg University of Education)
  • Sukhyun Hong (Hyperconnect, Matchgroup)
  • Chung-Chi Huang (Frostburg State University)
  • Chieh-Yang Huang (MetaMetrics Inc)
  • Anna Huelsing (University of Hildesheim)
  • Syed-Amad Hussain (Ohio State University)
  • Catherine Ikae (Applied Machine Intelligence, Bern University of Applied Sciences, Switzerland)
  • Joseph Marvin Imperial (University of Bath)
  • Radu Tudor Ionescu (University of Bucharest)
  • Suriya Prakash Jambunathan (New York University)
  • Qinjin Jia (North Carolina State University)
  • Helen Jin (University of Pennsylvania)
  • Ioana Jivet (FernUniversität in Hagen)
  • Léane Jourdan (Nantes University)
  • Anisia Katinskaia (University of Helsinki)
  • Elma Kerz (RWTH Aachen University)
  • Fazel Keshtkar (St. John’s University)
  • Mamoru Komachi (Hitotsubashi University)
  • Charles Koutcheme (Aalto University)
  • Roland Kuhn (National Research Council of Canada)
  • Alexander Kwako (University of California, Los Angeles)
  • Kristopher Kyle (University of Oregon)
  • Antonio Laverghetta Jr. (Pennsylvania State University)
  • Celine Lee (Cornell University)
  • John Lee (City University of Hong Kong)
  • Seolhwa Lee (Technical University of Darmstadt)
  • Jaewook Lee (UMass Amherst)
  • Arun Balajiee Lekshmi Narayanan (University of Pittsburgh)
  • Yayun Li (City University of Hong Kong)
  • Yudong Liu (Western Washington University)
  • Zhexiong Liu (University of Pittsburgh)
  • Naiming Liu (Rice University)
  • Julian Lohmann (Christian Albrechts Universität Kiel)
  • Anastassia Loukina (Grammarly Inc)
  • Jiaying Lu (Emory University)
  • Crisron Rudolf Lucas (University College Dublin)
  • Collin Lynch (NCSU)
  • Sarah Löber (University of Tübingen)
  • Jakub Macina (ETH Zurich)
  • Nitin Madnani (Educational Testing Service)
  • Jazzmin Maranan (University of the Philippines Diliman)
  • Arianna Masciolini (University of Gothenburg)
  • Sandeep Mathias (Presidency University)
  • Hunter McNichols (University of Massachusetts Amherst)
  • Jose Marie Mendoza (University of the Philippines Diliman)
  • Amit Mishra (Amity University Madhya Pradesh)
  • Masato Mita (CyberAgent Inc.)
  • Daniel Mora Melanchthon (Pontificia Universidad Católica de Valparaíso)
  • Phoebe Mulcaire (Duolingo)
  • Laura Musto (Universidad de la Republica)
  • Ricardo Muñoz Sánchez (Språkbanken Text, Göteborgs Universitet)
  • Farah Nadeem (LUMS)
  • Sungjin Nam (ACT, Inc)
  • Diane Napolitano (The Washington Post)
  • Tanya Nazaretsky (EPFL)
  • Kamel Nebhi (Education First)
  • Seyed Parsa Neshaei (EPFL)
  • Huy Nguyen (Amazon)
  • Gebregziabihier Nigusie (Mizan-Tepi University)
  • Christina Niklaus (University of St. Gallen)
  • S Jaya Nirmala (National Institute of Technology Tiruchirappalli)
  • Adam Nohejl (Nara Institute of Science and Technology)
  • Kai North (George Mason University)
  • Eda Okur (Intel Labs)
  • Kostiantyn Omelianchuk (Grammarly)
  • Amin Omidvar (PhD student at the Department of Electrical Engineering and Computer Science, York University)
  • Benjamin Paddags (Department of Computer Science, University of Copenhagen)
  • Ulrike Pado (HFT Stuttgart)
  • Jeiyoon Park (Korea University)
  • Chanjun Park (Upstage)
  • Udita Patel (Amazon.com)
  • Long Qin (Alibaba)
  • Mengyang Qiu (University at Buffalo)
  • Martí Quixal (University of Tübingen)
  • Vatsal Raina (University of Cambridge)
  • Manav Rathod (University of California, Berkeley)
  • Hanumant Redkar (Goa University, Goa)
  • Edsel Jedd Renovalles (University of the Philippines Diliman)
  • Robert Reynolds (Brigham Young University)
  • Saed Rezayi (National Board of Medical Examiners)
  • Luisa Ribeiro-Flucht (University of Tuebingen)
  • Frankie Robertson (University of Jyväskylä)
  • Donya Rooein (Bocconi University)
  • Aiala Rosá (Instituto de Computación, Facultad de Ingeniería, Universidad de la República)
  • Allen Roush (University of Oregon)
  • Alla Rozovskaya (Queens College, City University of New York)
  • Josef Ruppenhofer (FernUniversität in Hagen)
  • Horacio Saggion (Universitat Pompeu Fabra)
  • Omer Salem (Cairo University)
  • Nicy Scaria (Indian Institute of Science)
  • Nils-Jonathan Schaller (Leibniz Institute for Science and Mathematics Education)
  • Martha Shaka (University College Cork)
  • Ashwath Shankarnarayan (New York University)
  • Matthew Shardlow (Manchester Metropolitan University)
  • Gyu-Ho Shin (University of Illinois Chicago)
  • Li Siyan (Columbia University)
  • Yixiao Song (University of Massachusetts Amherst)
  • Mayank Soni (ADAPT Centre, Trinity College Dublin)
  • Maja Stahl (Leibniz University Hannover)
  • Felix Stahlberg (Google Research)
  • Katherine Stasaski (Salesforce Research)
  • Kevin Stowe (Educational Testing Service (ETS))
  • Helmer Strik (Centre for Language and Speech Technology (CLST), Centre for Language Studies (CLS), Radboud University Nijmegen)
  • David Strohmaier (University of Cambridge)
  • Katsuhito Sudoh (Nara Women’s University)
  • Hakyung Sung (University of Oregon)
  • Abhijit Suresh (Graduate Student)
  • Chee Wei Tan (Nanyang Technological University)
  • Zhongwei Teng (Vanderbilt University)
  • Xiaoyi Tian (University of Florida)
  • Gladys Tyen (University of Cambridge)
  • Shriyash Upadhyay (University of Pennsylvania)
  • Felipe Urrutia (Center for Advanced Research in Education)
  • Masaki Uto (The University of Electro-Communications)
  • Sowmya Vajjala (National Research Council)
  • Justin Vasselli (Nara Institute of Science and Technology)
  • Giulia Venturi (Institute of Computational Linguistics “Antonio Zampolli” (ILC-CNR))
  • Anthony Verardi (Duolingo)
  • Elena Volodina (University of Gothenburg)
  • Jiani Wang (East China Normal University)
  • Taro Watanabe (Nara Institute of Science and Technology)
  • Michael White (The Ohio State University)
  • Alistair Willis (The Open University)
  • Anna Winklerova (Faculty of Informatics, Masaryk University)
  • Man Fai Wong (City University of Hong Kong)
  • Simon Woodhead (Eedi)
  • Changrong Xiao (Tsinghua University)
  • Kevin P. Yancey (Duolingo)
  • Roman Yangarber (University of Helsinki)
  • Su-Youn Yoon (EduLab)
  • Marcos Zampieri (George Mason University)
  • Fabian Zehner (DIPF | Leibniz Institute for Research and Information in Education)
  • Kamyar Zeinalipour (University of Siena)
  • Torsten Zesch (Computational Linguistics, FernUniversität in Hagen)
  • Jing Zhang (Emory University)
  • Yang Zhong (University of Pittsburgh)
  • Yiyun Zhou (NBME)
  • Jessica Zipf (University of Konstanz)
  • Michael Zock (CNRS-LIS)
  • Bowei Zou (Institute for Infocomm Research)