Results 1 - 10 of 98
When and How Often Should Worked Examples be Given to Students? New Results and a Summary of the Current State of Research
Abstract - Cited by 31 (15 self)
Our work explores the assistance dilemma: when should instruction provide or withhold assistance? In three separate but very similar studies, we have investigated whether worked examples, a high-assistance approach, studied in conjunction with tutored problems to be solved, a mid-level assistance approach, can lead to better learning. Contrary to prior results with untutored problem solving, a low-assistance approach, we found that worked examples alternating with isomorphic tutored problems did not produce greater learning gains than tutored problems alone. On the other hand, the examples group across the three studies learned more efficiently than the tutored-alone group; the students spent 21% less time learning the same amount of material. Practically, if these results were to scale across a 20-week course, students could save 4 weeks of time, yet learn just as much. Scientifically, we provide an analysis of a key dimension of assistance: when and how often should problem solutions be given to students versus elicited from them? Our studies, in conjunction with past studies, suggest that on this example-problem dimension mid-level assistance may lead to better learning than either lower or higher levels of assistance. While representing a step toward resolving the assistance dilemma for this dimension, more studies are required to confirm that mid-level assistance is best, and further analysis is needed to develop predictive theory for what combinations of assistance yield the most effective and efficient learning.
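As a rough check on the practical claim above: 21% of a 20-week course is about 4.2 weeks, which is where the "save 4 weeks" figure comes from. The sketch below only reproduces that back-of-envelope arithmetic; the variable names are illustrative.

```python
# Back-of-envelope check of the efficiency claim: the examples group spent
# 21% less time learning the same material, scaled to a 20-week course.
course_weeks = 20           # hypothetical course length mentioned in the abstract
time_saved_fraction = 0.21  # reported reduction in learning time

weeks_saved = course_weeks * time_saved_fraction
print(f"Estimated time saved: {weeks_saved:.1f} weeks")  # ~4.2, i.e. about 4 weeks
```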
A new paradigm for intelligent tutoring systems: example-tracing tutors
International Journal of Artificial Intelligence in Education, 2009
Abstract - Cited by 30 (7 self)
The Cognitive Tutor Authoring Tools (CTAT) support creation of a novel type of tutor called example-tracing tutors. Unlike other types of ITSs (e.g., model-tracing tutors, constraint-based tutors), example-tracing tutors evaluate student behavior by flexibly comparing it against generalized examples of problem-solving behavior. Example-tracing tutors are capable of sophisticated tutoring behaviors; they provide step-by-step guidance on complex problems while recognizing multiple student strategies and (where needed) maintaining multiple interpretations of student behavior. They therefore go well beyond VanLehn’s (2006) minimum criterion for ITS status, namely, that the system has an inner loop (i.e., provides within-problem guidance, not just end-of-problem feedback). Using CTAT, example-tracing tutors can be created without programming. An author creates a tutor interface through drag-and-drop techniques, and then demonstrates the problem-solving behaviors to be tutored. These behaviors are recorded in a “behavior graph,” which can be easily edited and generalized. Compared to other approaches to programming by demonstration for ITS development, CTAT implements a simpler method (no machine learning is used) that is currently more pragmatic and proven for widespread, real-world use by non-programmers. Development time estimates from a large number of real-world ITS projects that have used CTAT suggest ...
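To make the example-tracing idea concrete, here is a minimal sketch of matching student steps against a behavior graph; it is not CTAT's actual implementation, and the strategy names, step labels, and trace_step helper are hypothetical. The point is that every solution path still consistent with the student's input is kept active, which is how multiple strategies and multiple interpretations can be maintained at once.

```python
# Minimal illustration (not CTAT) of example-tracing against a behavior graph:
# each strategy is an acceptable ordering of solution steps, and the tutor keeps
# every strategy that still matches what the student has done so far.
behavior_graph = {
    "strategy_A": ["distribute", "combine_terms", "divide_both_sides"],
    "strategy_B": ["divide_both_sides", "distribute", "combine_terms"],
}

def trace_step(active_paths, step_index, student_step):
    """Keep only the strategies whose next expected step matches the student's step."""
    return {
        name: steps
        for name, steps in active_paths.items()
        if step_index < len(steps) and steps[step_index] == student_step
    }

# The first student step matches strategy_A only, so strategy_B is dropped;
# had the step matched both, both interpretations would remain active.
active = trace_step(behavior_graph, 0, "distribute")
print(list(active))  # ['strategy_A']
```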
Worked Examples and Tutored Problem Solving: Redundant or Synergistic Forms of Support?
Abstract - Cited by 24 (4 self)
The current research investigates a combination of two instructional approaches: tutored problem solving and worked examples. Tutored problem solving with automated tutors has proven to be an effective instructional method. Worked-out examples have been shown to be an effective complement to untutored problem solving, but it is largely unknown whether they are an effective complement to tutored problem solving. Further, while computer-based learning environments offer the possibility of adaptively transitioning from examples to problems while tailoring to an individual learner, the effectiveness of such machine-adapted example fading is largely unstudied. To address these research questions, one lab and one classroom experiment were conducted. Both studies compared a standard Cognitive Tutor with two example-enhanced Cognitive Tutors, in which worked-out examples were faded either on a fixed schedule or adaptively. Results indicate that the adaptive fading of worked-out examples leads to higher transfer performance on delayed post-tests than the other two methods.
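A minimal sketch of the fading idea discussed above, assuming a per-skill mastery estimate drives the decision: worked-out steps whose underlying skill is judged mastered are replaced by problem-solving steps. The threshold, skill names, and fade_example helper are hypothetical and do not reproduce the example-enhanced Cognitive Tutor's actual adaptation logic.

```python
# Illustrative adaptive fading: each worked-example step is tied to a skill, and
# once the learner's estimated mastery of that skill crosses a threshold, the
# step is faded (the student must solve it) instead of being shown worked out.
MASTERY_THRESHOLD = 0.85  # hypothetical cut-off, not taken from the paper

def fade_example(example_steps, mastery):
    """Return a per-step plan: show the worked step or ask the student to solve it."""
    plan = []
    for step, skill in example_steps:
        mode = "solve" if mastery.get(skill, 0.0) >= MASTERY_THRESHOLD else "worked"
        plan.append((step, mode))
    return plan

example = [("convert to a common denominator", "common_denominator"),
           ("add the numerators", "add_numerators")]
print(fade_example(example, {"common_denominator": 0.92, "add_numerators": 0.40}))
# [('convert to a common denominator', 'solve'), ('add the numerators', 'worked')]
```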
The Knowledge-Learning-Instruction Framework: Bridging the Science-Practice Chasm to Enhance Robust Student Learning
2012
Abstract - Cited by 22 (2 self)
Despite the accumulation of substantial cognitive science research relevant to education, there remains confusion and controversy in the application of research to educational practice. In support of a more systematic approach, we describe the Knowledge-Learning-Instruction (KLI) framework. KLI promotes the emergence of instructional principles of high potential for generality, while explicitly identifying constraints of and opportunities for detailed analysis of the knowledge students may acquire in courses. Drawing on research across domains of science, math, and language learning, we illustrate the analyses of knowledge, learning, and instructional events that the KLI framework affords. We present a set of three coordinated taxonomies of knowledge, learning, and instruction. For example, we identify three broad classes of learning events (LEs): (a) memory and fluency processes, (b) induction and refinement processes, and (c) understanding and sense-making processes, and we show ...
Example-tracing tutors: A new paradigm for intelligent tutoring systems
International Journal of Artificial Intelligence in Education
Abstract - Cited by 18 (6 self)
Key success criteria for an ITS authoring tool are that (1) the tool supports the creation of effective tutoring systems, (2) the tool can be used to build tutors across a wide range of application domains, (3) authoring with the tool is cost-effective, (4) the tool supports easy deployment and delivery of tutors in a variety of technical contexts, (5) tutors created with the tool are maintainable, and (6) if tutors are used in a research context, the tool must support research-related functionality. The Cognitive Tutor Authoring Tools (CTAT) address all of these requirements to a substantial degree, fully meeting most of them. CTAT supports the creation of both Cognitive Tutors (Koedinger & Corbett, 2006) and a newer type of tutor called example-tracing tutors. This paper focuses on the latter. Example-tracing tutors evaluate student behavior by flexibly comparing it against examples of correct and incorrect problem-solving behaviors. Example-tracing tutors are capable of sophisticated tutoring behaviors: they provide step-by-step guidance on complex problems while recognizing multiple student strategies and maintaining multiple interpretations of student behavior. On that basis, they should be deemed intelligent tutoring systems. Example-tracing tutors can be built without programming, through drag-and-drop techniques and programming by demonstration. Example-tracing tutors have been built and used in real educational settings for a wide range of application areas. Development time estimates from a large number of projects that have used CTAT suggest that CTAT improves the cost-effectiveness of ITS development by a factor of 4-8, compared to “historical” estimates of tutor development time. Although there is a lot of variability in these kinds of estimates, they nonetheless support our hope that lowering the skill requirements for tutor creation is a key step toward widespread use of ...
Scaling up programming by demonstration for intelligent tutoring systems development: an open-access website for middle school mathematics learning
IEEE Transactions on Learning Technologies, 2009
Cited by 15 (3 self)
Automated Student Model Improvement
In Proceedings of the 5th International Conference on Educational Data Mining, 2012
Abstract - Cited by 14 (6 self)
Student modeling plays a critical role in developing and improving instruction and instructional technologies. We present a technique for automated improvement of student models that leverages the DataShop repository, crowdsourcing, and a version of the Learning Factors Analysis algorithm. We demonstrate this method on eleven educational technology data sets, ranging from intelligent tutors to games, in a variety of domains from math to second language learning. In at least ten of the eleven cases, the method discovers improved models based on better test-set prediction in cross validation. The improvements isolate flaws in the original student models, and we show how focused investigation of flawed parts of models leads to new insights into the student learning process and suggests specific improvements for tutor design. We also discuss the great potential for future work that substitutes alternative statistical models of learning from the EDM literature or alternative model search algorithms.
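In the spirit of the model-comparison step described above, the sketch below scores two candidate knowledge-component (KC) assignments by cross-validated prediction of step correctness. It is not the DataShop or Learning Factors Analysis implementation; the synthetic log data, the two KC splits, and the accuracy metric are purely illustrative, and scikit-learn is assumed to be available.

```python
# Illustrative comparison of two candidate student (KC) models: the model whose
# KC assignment predicts step correctness better under cross-validation wins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_obs = 400
steps = rng.integers(0, 4, size=n_obs)                           # four step types in a synthetic log
correct = (rng.random(n_obs) < 0.35 + 0.12 * steps).astype(int)  # some steps are harder than others

kc_model_a = np.zeros_like(steps)      # candidate A: one KC covers every step
kc_model_b = (steps >= 2).astype(int)  # candidate B: steps split into two KCs

def one_hot(labels):
    """Dummy-code KC labels so the regression gets one indicator column per KC."""
    classes = np.unique(labels)
    return (labels[:, None] == classes[None, :]).astype(float)

def cv_accuracy(kc_labels):
    """Mean cross-validated accuracy of predicting correctness from KC membership."""
    return cross_val_score(LogisticRegression(), one_hot(kc_labels), correct, cv=5).mean()

print("single-KC model:", round(cv_accuracy(kc_model_a), 3))
print("two-KC model:   ", round(cv_accuracy(kc_model_b), 3))  # the finer split should predict better
```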
The Knowledge-Learning-Instruction (KLI) Framework: Toward Bridging the Science-Practice Chasm to Enhance Robust Student Learning
2010
Abstract - Cited by 14 (8 self)
Keywords: computational modeling, cognitive modeling, instructional theory, machine learning, learning science, second language learning, mathematics learning, science learning, robust learning, learning theory, knowledge components.
Executive Summary: The volume of research on learning and instruction is enormous. Yet progress in improving educational outcomes has been slow at best. Many learning science results have not been translated into general practice, and it appears that most of those that have been fielded have not yielded significant results in randomized control trials. Addressing the chasm between learning science and educational practice will require massive efforts from many constituencies, but one of these efforts is to develop a theoretical framework that permits a more systematic accumulation of the relevant research base. A key piece in such a theoretical framework is the development of levels of analysis that are fine enough to be supported by cognitive science and cognitive neuroscience, but also at levels appropriate to guide the design of effective educational practices. An ideal scientific solution would be a small set of universal instructional principles that can be applied to produce efficient ...
Intelligent Tutoring Systems with Multiple Representations and Self-Explanation Prompts Support Learning of Fractions.
In Dimitrova et al. (Eds.), Proceedings of the 14th International Conference on AIED, 2009
Abstract - Cited by 9 (5 self)
Although a solid understanding of fractions is foundational in mathematics, the concept of fractions remains a challenging one. Previous research suggests that multiple graphical representations (MGRs) may promote learning of fractions. Specifically, we hypothesized that providing students with MGRs of fractions, in addition to the conventional symbolic notation, leads to better learning outcomes as compared to instruction incorporating only one graphical representation. We anticipated, however, that MGRs would make the students' task more challenging, since they must link the representations and distill from them a common concept or principle. Therefore, we hypothesized further that self-explanation prompts would help students benefit from working with MGRs. To investigate these hypotheses, we conducted a classroom study in which 112 sixth-grade students used intelligent tutors for fraction conversion and fraction addition. The results of the study show that students learned more with MGRs of fractions than with a single representation, but only when prompted to self-explain how the graphics relate to the symbolic fraction representations.