Educational NLP at AIED 2022

Proceedings: Part I and Part II

Topics: automated item generation (AIG), automated short answer grading (ASAG), assessment, automated writing evaluation (AWE), complexity, dialogue, discourse, feedback, information-extraction, language-learning, math, programming, readability, reading-comprehension, speech, vocabulary-acquisition

  • Towards Human-Like Educational Question Generation with Large Language Models. Zichao Wang, Jakob Valdez, Debshila Basu Mallick, and Richard G. Baraniuk [paper] automated item generation (AIG)
  • Evaluating AI-Generated Questions: A Mixed-Methods Analysis Using Question Data and Student Perceptions. Rachel Van Campenhout, Martha Hubertz, and Benny G. Johnson [paper] automated item generation (AIG)
  • Automatic Riddle Generation for Learning Resources. Niharika Sri Parasa, Chaitali Diwan, and Srinath Srinivasa [paper] automated item generation (AIG)
  • Obj2Sub: Unsupervised Conversion of Objective to Subjective Questions. Aarish Chhabra, Nandini Bansal, V. Venktesh, Mukesh Mohania, and Deep Dwivedi [paper] automated item generation (AIG)
  • First Steps Towards Automatic Question Generation and Assessment of LL(1) Grammars. Ricardo Conejo, José del Campo-Ávila, and Beatriz Barros [paper] automated item generation (AIG) assessment
  • Auxiliary Task Guided Interactive Attention Model for Question Difficulty Prediction. V. Venktesh, Md. Shad Akhtar, Mukesh Mohania, and Vikram Goyal [paper] automated item generation (AIG) complexity
  • Question Personalization in an Intelligent Tutoring System. Sabina Elkins, Ekaterina Kochmar, Robert Belfer, Iulian Serban, and Jackie C. K. Cheung [paper] automated item generation (AIG) complexity
  • An Automatic Self-explanation Sample Answer Generation with Knowledge Components in a Math Quiz. Ryosuke Nakamoto, Brendan Flanagan, Yiling Dai, Kyosuke Takami, and Hiroaki Ogata [paper] automated item generation (AIG) math
  • Automatic Question Generation for Scaffolding Self-explanations for Code Comprehension. Lasang J. Tamang, Rabin Banjade, Jeevan Chapagain, and Vasile Rus [paper] automated item generation (AIG) programming
  • Programming Question Generation by a Semantic Network: A Preliminary User Study with Experienced Instructors. Cheng-Yu Chung, and I-Han Hsiao [paper] automated item generation (AIG) programming
  • Assessing Readability by Filling Cloze Items with Transformers. Andrew M. Olney [paper] automated item generation (AIG) readability
  • Fully Automated Short Answer Scoring of the Trial Tests for Common Entrance Examinations for Japanese University. Haruki Oka, Hung Tuan Nguyen, Cuong Tuan Nguyen, Masaki Nakagawa, and Tsunenori Ishioka [paper] automated short answer grading (ASAG)
  • Plausibility and Faithfulness of Feature Attribution-Based Explanations in Automated Short Answer Scoring. Tasuku Sato, Hiroaki Funayama, Kazuaki Hanawa, and Kentaro Inui [paper] automated short answer grading (ASAG)
  • Representing Scoring Rubrics as Graphs for Automatic Short Answer Grading. Aubrey Condor, Zachary Pardos, and Marcia Linn [paper] automated short answer grading (ASAG)
  • Balancing Cost and Quality: An Exploration of Human-in-the-Loop Frameworks for Automated Short Answer Scoring. Hiroaki Funayama, Tasuku Sato, Yuichiroh Matsubayashi, Tomoya Mizumoto, Jun Suzuki, and Kentaro Inui [paper] automated short answer grading (ASAG)
  • Assessing the Practical Benefit of Automated Short-Answer Graders. Ulrike Padó [paper] automated short answer grading (ASAG)
  • Towards Generating Counterfactual Examples as Automatic Short Answer Feedback. Anna Filighera, Joel Tschesche, Tim Steuer, Thomas Tregel, and Lisa Wernet [paper] automated short answer grading (ASAG) feedback
  • Enhancing Auto-scoring of Student Open Responses in the Presence of Mathematical Terms and Expressions. Sami Baral, Karthik Seetharaman, Anthony F. Botelho, Anzhuo Wang, George Heineman, and Neil T. Heffernan [paper] assessment
  • Leveraging Natural Language Processing for Quality Assurance of a Situational Judgement Test. Okan Bulut, Alexander MacIntosh, and Cole Walsh [paper] assessment
  • Grading Programming Assignments with an Automated Grading and Feedback Assistant. Marcus Messer [paper] assessment feedback
  • Exploring Fairness in Automated Grading and Feedback Generation of Open-Response Math Problems. Ashish Gurung, and Neil T. Heffernan [paper] assessment feedback
  • Automated Support to Scaffold Students’ Written Explanations in Science. Purushartha Singh, Rebecca J. Passonneau, Mohammad Wasih, Xuesong Cang, ChanMin Kim, and Sadhana Puntambekar [paper] assessment information-extraction
  • Towards the Automated Evaluation of Legal Casenote Essays. Mladen Raković, Lele Sha, Gerry Nagtzaam, Nick Young, Patrick Stratmann, Dragan Gašević, and Guanliang Chen [paper] automated writing evaluation (AWE)
  • An Automated Writing Evaluation System for Supporting Self-monitored Revising. Diane Litman, Tazin Afrin, Omid Kashefi, Christopher Olshefski, Amanda Godley, and Rebecca Hwa [paper] automated writing evaluation (AWE)
  • Multitask Summary Scoring with Longformers. Robert-Mihai Botarleanu, Mihai Dascalu, Laura K. Allen, Scott Andrew Crossley, and Danielle S. McNamara [paper] automated writing evaluation (AWE)
  • Increasing Teachers’ Trust in Automatic Text Assessment Through Named-Entity Recognition. Candy Walter [paper] automated writing evaluation (AWE) information-extraction
  • Semantic Modeling of Programming Practices with Local Knowledge Graphs: The Effects of Question Complexity on Student Performance. Cheng-Yu Chung, and I-Han Hsiao [paper] complexity
  • The Impact of Conversational Agents’ Language on Self-efficacy and Summary Writing. Haiying Li, Fanshuo Cheng, Grace Wang, and Art Graesser [paper] dialogue
  • Developing an Inclusive Q&A Chatbot in Massive Open Online Courses. Songhee Han, and Min Liu [paper] dialogue
  • Using Open Source Technologies and Generalizable Procedures in Conversational and Affective Intelligent Tutoring Systems. Romina Soledad Albornoz-De Luise, Miguel Arevalillo-Herráez, and David Arnau [paper] dialogue
  • Toward Accessible Intelligent Tutoring Systems: Integrating Cognitive Tutors and Conversational Agents. Michael Smalenberger, and Kelly Smalenberger [paper] dialogue
  • Dynamic Conversational Chatbot for Assessing Primary Students. Esa Weerasinghe, Thamashi Kotuwegedara, Rangeena Amarasena, Prasadi Jayasinghe, and Kalpani Manathunga [paper] dialogue
  • A Conversational Recommender System for Exploring Pedagogical Design Patterns. Nasrin Dehbozorgi, and Dinesh Chowdary Attota [paper] dialogue
  • On Providing Natural Language Support for Intelligent Tutoring Systems. Romina Soledad Albornoz-De Luise, Pablo Arnau-González, and Miguel Arevalillo-Herráez [paper] dialogue
  • Human-in-the-Loop Data Collection and Evaluation for Improving Mathematical Conversations. Debajyoti Datta, Maria Phillips, James P. Bywater, Sarah Lilly, Jennifer Chiu, Ginger S. Watson, and Donald E. Brown [paper] dialogue math
  • Modeling Student Discourse in Online Discussion Forums Using Semantic Similarity Based Topic Chains. Harshita Chopra, Yiwen Lin, Mohammad Amin Samadi, Jacqueline Guadalupe Cavazos, Renzhe Yu, Spencer Jaquay, and Nia Nixon [paper] discourse
  • “Teacher, Can You Say It Again?” Improving Automatic Speech Recognition Performance over Classroom Environments with Limited Data. Danner Schlotterbeck, Abelino Jiménez, Roberto Araya, Daniela Caballero, Pablo Uribe, and Johan Van der Molen Moris [paper] discourse speech
  • AI-Enabled Personalized Interview Coach in Rural India. Shriniwas Nayak, Satish Kumar, Dolly Agarwal, and Paras Parikh [paper] discourse speech
  • Improving Automated Evaluation of Formative Assessments with Text Data Augmentation. Keith Cochran, Clayton Cohn, Nicole Hutchins, Gautam Biswas, and Peter Hastings [paper] feedback
  • Improving the Quality of Students’ Written Reflections Using Natural Language Processing: Model Design and Classroom Evaluation. Ahmed Magooda, Diane Litman, Ahmed Ashraf, and Muhsin Menekse [paper] feedback
  • Measuring Inconsistency in Written Feedback: A Case Study in Politeness. Wei Dai, Yi-Shan Tsai, Yizhou Fan, Dragan Gašević, and Guanliang Chen [paper] feedback
  • Machine Learning Techniques to Evaluate Lesson Objectives. Pei Hua Cher, Jason Wen Yau Lee, and Fernando Bello [paper] information-extraction
  • Fine-grained Main Ideas Extraction and Clustering of Online Course Reviews. Chenghao Xiao, Lei Shi, Alexandra Cristea, Zhaoxing Li, and Ziqi Pan [paper] information-extraction
  • Scaling Mixed-Methods Formative Assessments (mixFA) in Classrooms: A Clustering Pipeline to Identify Student Knowledge. Xinyue Chen, and Xu Wang [paper] information-extraction
  • What Is Relevant for Learning? Approximating Readers’ Intuition Using Neural Content Selection. Tim Steuer, Anna Filighera, Gianluca Zimmer, and Thomas Tregel [paper] information-extraction
  • Providing Insights for Open-Response Surveys via End-to-End Context-Aware Clustering. Soheil Esmaeilzadeh, Brian Williams, Davood Shamsi, and Onar Vikingstad [paper] information-extraction
  • Extraction of Useful Observational Features from Teacher Reports for Student Performance Prediction. Menna Fateen, and Tsunenori Mine [paper] information-extraction
  • Reflection as an Agile Course Evaluation Tool. Siaw Ling Lo, Pei Hua Cher, and Fernando Bello [paper] information-extraction
  • Prerequisite Graph Extraction from Lectures. Ilaria Torre, Luca Mirenda, Gianni Vercelli, and Fulvio Mastrogiovanni [paper] information-extraction
  • Education Theories and AI Affordances: Design and Implementation of an Intelligent Computer Assisted Language Learning System. Xiaobin Chen, Elizabeth Bear, Bronson Hui, Haemanth Santhi-Ponnusamy, and Detmar Meurers [paper] language-learning
  • Speech and Eye Tracking Features for L2 Acquisition: A Multimodal Experiment. Sofiya Kobylyanskaya [paper] language-learning speech
  • An AI-Based Feedback Visualisation System for Speech Training. Adam T. Wynn, Jingyun Wang, Kaoru Umezawa, and Alexandra I. Cristea [paper] language-learning speech
  • Automatic Identification of Non-native English Speaker’s Phoneme Mispronunciation Tendencies. Shi Pu, Lee Becker, and Misaki Kato [paper] language-learning speech
  • Assessing Readability of Learning Materials on Artificial Intelligence in English for Second Language Learners. Yo Ehara [paper] readability
  • On Applicability of Neural Language Models for Readability Assessment in Filipino. Michael Ibañez, Lloyd Lois Antonie Reyes, Ranz Sapinit, Mohammed Ahmed Hussien, and Joseph Marvin Imperial [paper] readability
  • Computer-Aided Response-to-Intervention for Reading Comprehension Based on Recommender System. Ming-Chi Liu, Wei-Yang Lin, and Chia-Ling Tsai [paper] reading-comprehension
  • Automated Scoring for Reading Comprehension via In-context BERT Tuning. Nigel Fernandez, Aritra Ghosh, Naiming Liu, Zichao Wang, Benoît Choffin, Richard Baraniuk, and Andrew Lan [paper] reading-comprehension
  • How Item and Learner Characteristics Matter in Intelligent Tutoring Systems Data. John Hollander, John Sabatini, and Art Graesser [paper] reading-comprehension
  • Real-Time Spoken Language Understanding for Orthopedic Clinical Training in Virtual Reality. Han Wei Ng, Aiden Koh, Anthea Foong, Jeremy Ong, Jun Hao Tan, Eng Tat Khoo, and Gabriel Liu [paper] speech
  • An Intelligent Interactive Support System for Word Usage Learning in Second Languages. Yo Ehara [paper] vocabulary-acquisition
  • Predicting Second Language Learners’ Actual Knowledge Using Self-perceived Knowledge. Yo Ehara [paper] vocabulary-acquisition
  • An Intelligent Multimodal Dictionary for Chinese Character Learning. Jinglei Yu, Jiachen Song, Penghe Chen, and Yu Lu [paper] vocabulary-acquisition
  • Using Metacognitive Information and Objective Features to Predict Word Pair Learning Success. Bledar Fazlija, and Mohamed Ibrahim [paper] vocabulary-acquisition
  • Distributional Estimation of Personalized Second-Language Vocabulary Sizes with Wordlists that the Learner is Likely to Know. Yo Ehara [paper] vocabulary-acquisition