Project LISTEN (Literacy Innovation that Speech Technology ENables) was a 25-year research project at Carnegie Mellon University to improve children's reading skills. The project created a computer-based Reading Tutor that listens to a child reading aloud, corrects errors, helps when the child is stuck or encounters a hard word, provides hints, assesses progress, and presents more advanced text when the child is ready. The Reading Tutor has been used daily by hundreds of children in field tests at schools in the United States, Canada, Ghana, and India. Thousands of hours of usage, logged at multiple levels of detail and including millions of words read aloud, have been stored in a database that has been mined to improve the Tutor's interactions with students. An extensive list of publications (with abstracts) can be found at Carnegie Mellon University.[1]

Project LISTEN’s Reading Tutor is now being transformed into "RoboTutor" by Carnegie Mellon’s team competing in the Global Learning XPRIZE.[2] The goal of the Global Learning XPRIZE is to develop open-source Android tablet apps, in both English and Swahili, that enable children in developing countries who have little or no access to schooling to teach themselves basic reading, writing, and arithmetic without adult assistance. RoboTutor is an integrated collection of intelligent tutors and educational games implemented on an Android tablet, and is now being field-tested in Tanzania.

History

Project LISTEN was led by (now Emeritus) Professor David 'Jack' Mostow,[3] who currently leads Carnegie Mellon's "RoboTutor" team in the Global Learning XPRIZE competition.[4] Project LISTEN was supported by the Defense Advanced Research Projects Agency under DARPA Order 5167, by the National Science Foundation under ITR/IERI Grant No. IEC-0326153,[5] by the U.S. Department of Education's Institute of Education Sciences[6] under Grant No. REC-0326153 and Grants R305B070458, R305A080157, and R305A080628, and by the Heinz Endowments.[7] Project LISTEN's purpose was to develop, evaluate, and refine an intelligent tutor that listens to children read aloud and helps them learn to read.

As part of the research and testing, Project LISTEN's Reading Tutor has been used, with positive results, by hundreds of children in the United States, Canada, and other countries.[8] (See Prototype Testing below.) Results indicated that the students whose initial proficiency was lowest often benefited most from the Reading Tutor. Of particular interest was the Reading Tutor's strong performance with English Language Learners.[9]

Project LISTEN fits well within Carnegie Mellon University's Simon Initiative, whose goal is to use learning-science research to improve educational practice. As noted in the History of the Simon Initiative, "The National Science Foundation included Project LISTEN’s speech recognition system as one of its top 50 innovations from 1950-2000."[10]

How it works

The goal of the Reading Tutor is to make learning to read with it as effective as, or more effective than, being tutored by a human coach, for example as described at the Intervention Central website.[11] A child selects an item from a menu listing texts from a source such as the Weekly Reader or authored stories. The Reading Tutor listens to the child read aloud, using Carnegie Mellon's Sphinx-II speech recognizer[12][13] to process and interpret the student's oral reading. When the Reading Tutor notices the student misread a word, skip a word, get stuck, hesitate, or click for help, it responds with assistance modeled in part on expert reading teachers, adapted to the capabilities and limitations of the technology.[14]
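As an illustration only, and not Project LISTEN's published algorithm, this kind of listening-based miscue detection can be sketched as aligning the recognizer's word hypothesis against the target sentence and flagging substitutions and omissions. The function below is a minimal, hypothetical example; the labels and alignment rule are assumptions.

```python
# Minimal sketch (hypothetical): align a recognizer's word hypothesis against
# the target sentence to flag possible miscues. Not Project LISTEN's actual
# miscue-detection algorithm; it only illustrates the idea.

def align_reading(target_words, heard_words):
    """Label each target word 'ok', 'misread', or 'skipped' via edit-distance alignment."""
    n, m = len(target_words), len(heard_words)
    # dp[i][j] = minimum edit cost aligning target[:i] with heard[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if target_words[i - 1] == heard_words[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j - 1] + sub,  # match / substitution
                           dp[i - 1][j] + 1,        # target word skipped
                           dp[i][j - 1] + 1)        # extra spoken word
    # Trace back through the table to label each target word.
    labels, i, j = [], n, m
    while i > 0:
        sub = 0 if j > 0 and target_words[i - 1] == heard_words[j - 1] else 1
        if j > 0 and dp[i][j] == dp[i - 1][j - 1] + sub:
            labels.append('ok' if sub == 0 else 'misread')
            i, j = i - 1, j - 1
        elif dp[i][j] == dp[i - 1][j] + 1:
            labels.append('skipped')
            i -= 1
        else:
            j -= 1  # insertion: extra word spoken, no target word consumed
    return list(zip(target_words, reversed(labels)))

print(align_reading("the quick brown fox".split(), "the quick down fox".split()))
# [('the', 'ok'), ('quick', 'ok'), ('brown', 'misread'), ('fox', 'ok')]
```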

The Reading Tutor dynamically updates its estimate of a student's reading level and picks stories a bit harder (or easier) according to that estimate. This approach lets the Reading Tutor aim for the zone of proximal development, that is, to stretch what a learner can currently do without help toward what he or she can do with help.
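The sketch below is a hypothetical illustration of this kind of adaptive text selection, not the Tutor's actual model; the accuracy thresholds, difficulty scale, and story list are assumptions.

```python
# Illustrative heuristic only: nudge story difficulty up or down from recent
# oral-reading accuracy, then pick a story slightly above the estimated level.

class LevelEstimator:
    def __init__(self, start_level=2.0, step=0.25):
        self.level = start_level  # assumed grade-level-like scale
        self.step = step

    def update(self, words_correct, words_total):
        accuracy = words_correct / max(words_total, 1)
        if accuracy > 0.95:        # reading fluently: try slightly harder text
            self.level += self.step
        elif accuracy < 0.90:      # struggling: back off to easier text
            self.level -= self.step
        return self.level

    def pick_story(self, stories):
        """Choose the story closest to (and preferably just above) the estimate."""
        harder = [s for s in stories if s["difficulty"] >= self.level]
        pool = harder or stories
        return min(pool, key=lambda s: abs(s["difficulty"] - self.level))

est = LevelEstimator()
est.update(words_correct=96, words_total=100)   # fluent reading raises the estimate
print(est.pick_story([{"title": "A", "difficulty": 1.5},
                      {"title": "B", "difficulty": 2.5},
                      {"title": "C", "difficulty": 3.5}]))   # picks story B
```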

The Tutor also scaffolds (provides support for) key processes in reading. It explains unfamiliar words and concepts by presenting short factoids (that is, comparisons to other words). It can provide both spoken and graphical assistance when the student has a problem. It represents visual speech using talking-mouth video clips of phonemes. It assists word identification by previewing new words, reading hard words aloud, and giving rhyming and other hints.
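A toy example of one such hint, a rhyming hint, is sketched below; the word list and the suffix-matching rule are hypothetical and do not reflect the Tutor's actual hint logic.

```python
# Toy illustration of a rhyming hint. KNOWN_WORDS and the suffix rule are
# assumptions, not Project LISTEN's actual hint-generation method.

KNOWN_WORDS = ["cat", "hat", "light", "night", "ball", "tall"]  # hypothetical list

def rhyming_hint(hard_word, known_words=KNOWN_WORDS, tail=3):
    """Return a familiar word that shares the hard word's ending, if any."""
    suffix = hard_word[-tail:]
    for w in known_words:
        if w != hard_word and w.endswith(suffix):
            return f"'{hard_word}' rhymes with '{w}'"
    return None

print(rhyming_hint("right"))   # "'right' rhymes with 'light'"
```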

Detailed data on these interactions are saved in a database, and data mining has been used to improve the Reading Tutor and to investigate research questions.
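The schema below is a hypothetical sketch of how per-word tutoring events might be logged and later mined; it is not Project LISTEN's actual database design.

```python
# Hypothetical logging schema and mining query, for illustration only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE word_events (
    student_id TEXT, story_id TEXT, word TEXT,
    outcome TEXT,          -- 'ok', 'misread', 'skipped', 'help_requested'
    latency_ms INTEGER)""")
conn.executemany(
    "INSERT INTO word_events VALUES (?, ?, ?, ?, ?)",
    [("s1", "story42", "through", "help_requested", 2300),
     ("s1", "story42", "cat", "ok", 450),
     ("s2", "story42", "through", "misread", 1800)])

# Example mining query: which words most often trigger trouble or help requests?
for row in conn.execute("""
        SELECT word, COUNT(*) AS trouble
        FROM word_events
        WHERE outcome IN ('misread', 'skipped', 'help_requested')
        GROUP BY word ORDER BY trouble DESC"""):
    print(row)   # ('through', 2)
```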

Prototype Testing

Project LISTEN trials demonstrated usability, user acceptance, effective assistance, and pre- to post-test gains. A number of controlled studies extended over several months, with students using the Tutor for 20 minutes per day. Use of the Reading Tutor produced higher comprehension gains than current methods. To rule out confounding variables, different treatments were compared within the same classrooms, with randomized assignment of children to treatments, stratified by pretest scores. Valid and reliable measures (Woodcock, 1998)[15] were used to assess gains from pre-test to post-test.[16] Data gathered during each trial were used to improve the efficacy of the Tutor.
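The snippet below sketches the kind of stratified random assignment the studies describe: within each classroom, children are ranked by pretest score and treatment is randomized within each stratum. The stratum size and treatment labels are assumptions, not the studies' exact procedure.

```python
# Sketch (hypothetical details): stratified random assignment by pretest score.
import random

def assign_treatments(students, treatments=("ReadingTutor", "Control")):
    """students: list of (name, pretest_score) tuples for one classroom."""
    ranked = sorted(students, key=lambda s: s[1], reverse=True)
    assignment = {}
    k = len(treatments)
    for i in range(0, len(ranked), k):          # one stratum per k adjacent students
        stratum = ranked[i:i + k]
        shuffled = random.sample(treatments, k=min(k, len(stratum)))
        for (name, _), treatment in zip(stratum, shuffled):
            assignment[name] = treatment
    return assignment

print(assign_treatments([("Ana", 92), ("Ben", 88), ("Cal", 75), ("Dee", 70)]))
```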

Various controlled studies were carried out as the Reading Tutor evolved, for example:

  1. Pilot study (1996–97)[17]
  2. Within-classroom comparison (1998)[18][19]
  3. Comparison to human tutors (1999–2000)[20][21][22]
  4. Equal-time comparison to Sustained Silent Reading (2000–2001)[23]

Since 2005, researchers both within and outside Project LISTEN have conducted and published controlled studies of the Reading Tutor (see the list in [24]).

Awards

Project LISTEN has received global recognition and many awards:

  • 1994: Outstanding Paper Award at the Twelfth National Conference on Artificial Intelligence[25]
  • 1998: Represented the Computing Research Association (CRA) at the Coalition for National Science Funding (CNSF) Exhibit for Congress
  • 2000: The National Science Foundation included Project LISTEN in its "Nifty Fifty" list of top 50 innovations from 1950–2000.[26][27]
  • 2002: Distinguished Finalist for the International Reading Association's Outstanding Dissertation of the Year Award[28]
  • 2002: Project LISTEN videos included in PBS series "Reading Rockets"[29]
  • 2008: Best Paper Award at the 9th International Conference on Intelligent Tutoring Systems.[30][31]
  • 2008: Best Paper Nominee at the 9th International Conference on Intelligent Tutoring Systems[32][33]
  • 2011: Best Paper Nominee at the 15th International Conference on Artificial Intelligence in Education.[34][35]
  • 2012: Best Paper Award at the 25th Florida Artificial Intelligence Research Society Conference (FLAIRS-25).[36][37]
  • 2012: Best Student Paper Award at the 5th International Conference on Educational Data Mining[38][39]
  • 2013: Finalist for Best Paper Award at the 16th International Conference on Artificial Intelligence in Education, Memphis, TN[40][41]

See also

References

  1. ^ Project LISTEN Publications
  2. ^ "Global Learning XPRIZE"
  3. ^ "Professor Jack Mostow"
  4. ^ "Carnegie Mellon University's RoboTutor XPRIZE Team"
  5. ^ "Integrating Speech and User Modeling in a Reading Tutor that Listens"
  6. ^ Institute of Education Sciences
  7. ^ Heinz Endowments
  8. ^ "Project LISTEN A summary". Project Listen summary. February 7, 2013.
  9. ^ "A Magic Reading Box: New literacy software delivers amazing results among Vancouver grade schoolers who speak English as a second language". UBC Reports. 51 (8). August 4, 2005.
  10. ^ "The Simon Initiative"
  11. ^ Assisted reading example
  12. ^ "CMU Sphinx Speech Recognition Toolkit". CMU Sphinx.
  13. ^ "CMU Sphinx-II User Guide". CMU Sphinx.
  14. ^ "Project Listen Description". HCI Project Listen.
  15. ^ Woodcock, R.W. (1998). Woodcock Reading Mastery Tests – Revised (WRMT-R/NU). American Guidance Service, Circle Pines, Minnesota.
  16. ^ "Project Listen Research Basis".
  17. ^ "When Speech Input is Not an Afterthought: A Reading Tutor that Listens, In Workshop on Perceptual User Interfaces, Banff, Canada, October, 1997" (PDF).
  18. ^ Within-classroom comparison(1998)
  19. ^ Mostow, Jack; Aist, Greg; Burkhead, Paul; Corbett, Albert; Cuneo, Andrew; Eitelman, Susan; Huang, Cathy; Junker, Brian; Sklar, Mary Beth; Tobin, Brian (22 July 2016). "Evaluation of an Automated Reading Tutor That Listens: Comparison to Human Tutoring and Classroom Instruction". Journal of Educational Computing Research. 29 (1): 61–117. doi:10.2190/06AX-QW99-EQ5G-RDCF. S2CID 46073812.
  20. ^ Jack Mostow; Greg Aist; Paul Burkhead; Albert Corbett; Andrew Cuneo; Susan Rossbach; Brian Tobin, Independent practice versus computer-guided oral reading: Equal-time comparison of sustained silent reading to an automated reading tutor that listens (PDF)
  21. ^ "A controlled evaluation of computer- versus human-assisted oral reading" (PDF).
  22. ^ A controlled evaluation of computer- versus human-assisted oral reading. Tenth Artificial Intelligence in Education (AI-ED) Conference, J.D. Moore, C.L. Redfield and W.L. Johnson, Eds. Amsterdam: IOS Press, San Antonio, Texas, pp. 586–588.
  23. ^ Mostow, Jack; Nelson-Taylor, Jessica; Beck, Joseph E. (2013). "Computer-Guided Oral Reading versus Independent Practice: Comparison of Sustained Silent Reading to an Automated Reading Tutor That Listens". Journal of Educational Computing Research. 49 (2): 249–276. doi:10.2190/EC.49.2.g. S2CID 62382102.
  24. ^ "Published Studies of Project Listen".
  25. ^ Jack Mostow; Steven F. Roth; Alexander G. Hauptmann; Matthew Kane. "(Outstanding Paper) A Prototype Reading Coach that Listens".
  26. ^ The Simon Initiative
  27. ^ "Nifty 50".
  28. ^ "Helping Children Learn Vocabulary during Computer-Assisted Oral Reading".
  29. ^ "Project Listen Videos on Reading Rockets PBS Series".
  30. ^ Beck, J. E., Chang, K.-m., Mostow, J., & Corbett, A. (2008). Introducing the Bayesian Evaluation and Assessment methodology. Programming and Software Engineering. Springer. pp. 383–394. ISBN 9783540691303.
  31. ^ "(Best Paper Award) Does help help? Introducing the Bayesian Evaluation and Assessment methodology" (PDF).
  32. ^ Beck, Joseph E.; Mostow, Jack (2008), "How Who Should Practice: Using Learning Decomposition to Evaluate the Efficacy of Different Types of Practice for Different Types of Students", Intelligent Tutoring Systems, Lecture Notes in Computer Science, vol. 5091, pp. 353–362, doi:10.1007/978-3-540-69132-7_39, ISBN 978-3-540-69130-3
  33. ^ "(Best Paper Nominee) How who should practice: Using learning decomposition to evaluate the efficacy of different types of practice for different types of students" (PDF).
  34. ^ Jack Mostow; Kai-min Chang; Jessica Nelson (January 2013). "Toward Exploiting EEG Input in a Reading Tutor". International Journal of Artificial Intelligence in Education. 22 (1–2): 230–237. doi:10.3233/JAI-130033.
  35. ^ "(Best Paper Award) Toward Exploiting EEG Input in a Reading Tutor, Mostow, J., Chang, K.-m., & Nelson, J., Proceedings of the 15th International Conference on Artificial Intelligence in Education, Auckland, NZ, 230-237, 2011" (PDF).
  36. ^ Sunayana Sitaram; Jack Mostow. "Mining Data from Project LISTEN's Reading Tutor to Analyze Development of Children's Oral Reading Prosody". Proceedings of the Twenty-Fifth International Florida Artificial Intelligence Research Society Conference, AAAI Digital Library.
  37. ^ "(Best Paper Award) Mining Data from Project LISTEN's Reading Tutor to Analyze Development of Children's Oral Reading Prosody" (PDF).
  38. ^ Xu, Y., Mostow, J. "Comparison of methods to trace multiple subskills: Is LR-DBN best?" (PDF). Proceedings of the Fifth International Conference on Educational Data Mining, Chania, Crete, Greece.
  39. ^ "(Best Student Paper Award) Comparison of methods to trace multiple subskills: Is LR-DBN best?" (PDF).
  40. ^ Lallé, S., Mostow, J., Luengo, V., & Guin, N. (22 June 2013). "Comparing Student Models in Different Formalisms by Predicting their Impact on Help Success". Proceedings of the 16th International Conference on Artificial Intelligence in Education, Memphis, TN, 2013. Springer. pp. 161–170. ISBN 9783642391125.
  41. ^ "(Finalist for Best Paper Award) Comparing Student Models in Different Formalisms by Predicting their Impact on Help Success" (PDF).

External links