Results 1 - 10 of 19
Autonomously generating hints by inferring problem solving policies
In Proceedings of the Second (2015) ACM Conference on Learning @ Scale (L@S ’15), 2015
"... Exploring the whole sequence of steps a student takes to pro-duce work, and the patterns that emerge from thousands of such sequences is fertile ground for a richer understanding of learning. In this paper we autonomously generate hints for the Code.org ‘Hour of Code, ’ (which is to the best of our ..."
Abstract
-
Cited by 7 (3 self)
Exploring the whole sequence of steps a student takes to produce work, and the patterns that emerge from thousands of such sequences, is fertile ground for a richer understanding of learning. In this paper we autonomously generate hints for the Code.org ‘Hour of Code’ (which is, to the best of our knowledge, the largest online course to date) using historical student data. We first develop a family of algorithms that can predict the way an expert teacher would encourage a student to make forward progress. Such predictions can form the basis for effective hint generation systems. The algorithms are more accurate than current state-of-the-art methods at recreating expert suggestions, are easy to implement, and scale well. We then show that the same framework which motivated the hint-generating algorithms suggests a sequence-based statistic that can be measured for each learner. We discover that this statistic is highly predictive of a student’s future success.
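The abstract does not spell out the algorithms, so the following is only a minimal sketch of one plausible policy-inference baseline, assuming trajectories of partial-solution states plus success labels; it is not the paper's actual method.

from collections import defaultdict

def infer_hint_policy(trajectories, solved):
    """trajectories: list of partial-solution state sequences (hashable states);
    solved: parallel list of booleans, True if that student reached the solution."""
    next_counts = defaultdict(lambda: defaultdict(int))
    for states, ok in zip(trajectories, solved):
        if not ok:
            continue                      # learn only from trajectories that ended in success
        for cur, nxt in zip(states, states[1:]):
            next_counts[cur][nxt] += 1
    # The hint for a state is the most common next step among successful students.
    return {cur: max(nxts, key=nxts.get) for cur, nxts in next_counts.items()}

# Usage: policy = infer_hint_policy(histories, outcomes); hint = policy.get(current_state)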
Methods for ordinal peer grading
In KDD ’14, ACM, 2014
"... Massive Online Open Courses have become an accessible and affordable choice for education. This has led to new technical challenges for instructors such as student evaluation at scale. Recent work has found ordinal peer grading, where individ-ual grader orderings are aggregated into an overall order ..."
Abstract
-
Cited by 4 (1 self)
Massive Online Open Courses have become an accessible and affordable choice for education. This has led to new technical challenges for instructors such as student evaluation at scale. Recent work has found ordinal peer grading, where individ-ual grader orderings are aggregated into an overall ordering of assignments, to be a viable alternate to traditional instruc-tor/staff evaluation [23]. Existing techniques, which extend rank-aggregation methods, produce a single ordering as out-put. While these rankings have been found to be an accurate reflection of assignment quality on average, they do not com-municate any of the uncertainty inherent in the assessment process. In particular, they do not to provide instructors with an estimate of the uncertainty of each assignment’s position in the ranking. In this work, we tackle this problem by ap-plying Bayesian techniques to the ordinal peer grading prob-lem, using MCMC-based sampling techniques in conjunction with the Mallows model. Experiments are performed on real-world peer grading datasets, which demonstrate that the pro-posed method provides accurate uncertainty information via the estimated posterior distributions.
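To make the modeling concrete, here is a rough sketch of Metropolis sampling over a latent ordering under a Mallows model; the Kendall-tau energy, swap proposal, and the phi parameter are assumptions for illustration, not the authors' implementation.

import math
import random

def kendall_tau(order_a, order_b):
    """Number of pairwise inversions between two orderings of the same items."""
    pos = {item: i for i, item in enumerate(order_b)}
    a = [pos[x] for x in order_a]
    return sum(1 for i in range(len(a)) for j in range(i + 1, len(a)) if a[i] > a[j])

def sample_orderings(grader_orders, items, phi=1.0, n_samples=2000, seed=0):
    """Metropolis sampler for a latent order with probability proportional to
    exp(-phi * total Kendall-tau distance to the observed grader orderings)."""
    rng = random.Random(seed)
    current = list(items)
    energy = lambda order: phi * sum(kendall_tau(g, order) for g in grader_orders)
    cur_e = energy(current)
    samples = []
    for _ in range(n_samples):
        i, j = rng.sample(range(len(current)), 2)      # propose swapping two assignments
        proposal = current[:]
        proposal[i], proposal[j] = proposal[j], proposal[i]
        prop_e = energy(proposal)
        if math.log(rng.random()) < cur_e - prop_e:    # accept better orders, worse ones by chance
            current, cur_e = proposal, prop_e
        samples.append(tuple(current))
    return samples  # the spread of each assignment's rank across samples estimates its uncertainty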
Scaling Short-answer Grading by Combining Peer Assessment with Algorithmic Scoring
"... Peer assessment helps students reflect and exposes them to different ideas. It scales assessment and allows large online classes to use open-ended assignments. However, it requires students to spend significant time grading. How can we lower this grading burden while maintaining quality? This paper ..."
Abstract
-
Cited by 3 (0 self)
Peer assessment helps students reflect and exposes them to different ideas. It scales assessment and allows large online classes to use open-ended assignments. However, it requires students to spend significant time grading. How can we lower this grading burden while maintaining quality? This paper integrates peer and machine grading to preserve the robustness of peer assessment and lower grading burden. In the identify-verify pattern, a grading algorithm first predicts a student grade and estimates confidence, which is used to estimate the number of peer raters required. Peers then identify key features of the answer using a rubric. Finally, other peers verify whether these feature labels were accurately applied. This pattern adjusts the number of peers that evaluate an answer based on algorithmic confidence and peer agreement. We evaluated this pattern with 1370 students in a large, online design class. With only 54% of the student grading time, the identify-verify pattern yields 80-90% of the accuracy obtained by taking the median of three peer scores, and provides more detailed feedback. A second experiment found that verification dramatically improves accuracy with more raters, with a 20% gain over the peer-median with four raters. However, verification also leads to lower initial trust in the grading system. The identify-verify pattern provides an example of how peer work and machine learning can combine to improve the learning experience.
Author Keywords: assessment; online learning; automated assessment; peer learning
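As a loose illustration of the routing idea only (the thresholds, rater counts, and combination rule below are invented for this sketch and are not the paper's parameters):

def raters_needed(algorithmic_confidence, min_raters=1, max_raters=3):
    """Higher machine confidence means fewer peer identifications are requested."""
    if algorithmic_confidence >= 0.9:
        return min_raters
    if algorithmic_confidence >= 0.6:
        return 2
    return max_raters

def final_grade(machine_score, peer_scores, features_verified):
    """Blend machine and peer evidence; fall back to the peer median alone when
    verification rejects the peer-identified rubric features."""
    peers = sorted(peer_scores)
    median = peers[len(peers) // 2]
    return median if not features_verified else round((machine_score + median) / 2, 1)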
Strategyproof peer selection: Mechanisms, analyses, and experiments
In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI), 2016
"... We study an important crowdsourcing setting where agents evaluate one another and, based on these evaluations, a sub-set of agents are selected. This setting is ubiquitous when peer review is used for distributing awards in a team, allocat-ing funding to scientists, and selecting publications for co ..."
Abstract
-
Cited by 1 (1 self)
We study an important crowdsourcing setting where agents evaluate one another and, based on these evaluations, a subset of agents are selected. This setting is ubiquitous when peer review is used for distributing awards in a team, allocating funding to scientists, and selecting publications for conferences. The fundamental challenge when applying crowdsourcing in these settings is that agents may misreport their reviews of others to increase their chances of being selected. We propose a new strategyproof (impartial) mechanism called Dollar Partition that satisfies desirable axiomatic properties. We then show, using a detailed experiment with parameter values derived from target real-world domains, that our mechanism performs better on average, and in the worst case, than other strategyproof mechanisms in the literature.
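The abstract does not detail the mechanism, so the sketch below only illustrates the general shape of a partition-based impartial selection: agents score only agents outside their own cluster, clusters receive selection slots in proportion to the scores their members receive, and the top-scored members fill those slots. The quota rounding and tie handling are deliberately simplified and should not be read as the actual Dollar Partition rules.

def partition_select(scores, clusters, k):
    """scores[i][j]: score agent i gives agent j; clusters: list of lists of agent ids;
    k: number of agents to select. Only cross-cluster scores are counted."""
    cluster_of = {a: c for c, members in enumerate(clusters) for a in members}
    received = [0.0] * len(clusters)                     # total score flowing into each cluster
    for i, row in scores.items():
        for j, s in row.items():
            if cluster_of[i] != cluster_of[j]:
                received[cluster_of[j]] += s
    total = sum(received) or 1.0
    quotas = [round(k * r / total) for r in received]    # naive rounding; real mechanisms handle this carefully
    selected = []
    for c, members in enumerate(clusters):
        score_of = lambda a: sum(row.get(a, 0.0) for i, row in scores.items() if cluster_of[i] != c)
        selected.extend(sorted(members, key=score_of, reverse=True)[:quotas[c]])
    return selected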
Uncovering Trajectories of Informal Learning in Large Online Communities Of Creators
"... We analyzed informal learning in Scratch Online – an online community with over 4.3 million users and 6.7 million instances of user-generated content. Users develop projects, which are graphical interfaces consisting of interacting programming blocks. We investigated two fundamental questions of how ..."
Abstract
-
Cited by 1 (1 self)
We analyzed informal learning in Scratch Online – an online community with over 4.3 million users and 6.7 million instances of user-generated content. Users develop projects, which are graphical interfaces consisting of interacting programming blocks. We investigated two fundamental questions: how we can model informal learning, and which patterns of informal learning emerge. We proceeded in two phases. First, we modeled learning as a trajectory of cumulative programming block usage by long-term users who created at least 50 projects. Second, we applied K-means++ clustering to uncover patterns of learning and corresponding subpopulations. We found four groups of users manifesting four different patterns of learning, ranging from the smallest to the largest improvement. At one end of the spectrum, users learned more and faster. At the opposite end, users did not show much learning progress, even after creating dozens of projects. The modeling and clustering of trajectory patterns that enabled us to quantitatively analyze informal learning may be applicable to other similar communities. The results can also support administrators of online communities in implementing customized interventions for specific subpopulations.
Author Keywords: learning analytics; informal learning; modeling; clustering
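A hedged sketch of this two-phase pipeline follows; the feature construction (cumulative distinct-block counts over the first 50 projects) and the use of scikit-learn are assumptions made for illustration, not the study's exact implementation.

import numpy as np
from sklearn.cluster import KMeans

def trajectory(project_block_sets, length=50):
    """Cumulative count of distinct block types used after each of the first `length` projects."""
    seen, traj = set(), []
    for blocks in project_block_sets[:length]:
        seen |= set(blocks)
        traj.append(len(seen))
    return traj

def cluster_users(users_project_blocks, n_clusters=4, seed=0):
    """users_project_blocks: one list of per-project block sets per user (at least 50 projects each)."""
    X = np.array([trajectory(p) for p in users_project_blocks])
    km = KMeans(n_clusters=n_clusters, init="k-means++", n_init=10, random_state=seed)
    return km.fit_predict(X), km.cluster_centers_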
Creating Awareness and Reflection in a Large-Scale IS Lecture – The Application of a Peer Assessment in a Flipped Classroom Scenario
"... Abstract. Large-scale lectures are a typical way of teaching university students. However, these lectures often lack interaction elements and do not foster awareness and reflection in the learning process. This results in insufficient learning outcomes such as learning satisfaction and success. Ther ..."
Abstract
-
Cited by 1 (1 self)
Large-scale lectures are a typical way of teaching university students. However, these lectures often lack interactive elements and do not foster awareness and reflection in the learning process, which leads to poorer learning outcomes such as lower learning satisfaction and success. A newer approach to fostering interaction in such large-scale lectures is the flipped classroom concept, which seeks to overcome these challenges by stimulating self-regulated learning phases and improving interaction as well as awareness and reflection during the in-class phases of a lecture. However, it is still unclear how to actually increase reflection and awareness through interaction in such learning scenarios. For this purpose, we propose an application of a technology-enhanced peer assessment that is carried out in large-scale information systems lectures. Preliminary evaluation results suggest the potential of this approach. Thus, we are able to provide first theoretical and practical implications for the application of a technology-enhanced peer assessment in large-scale lectures.
University of Toronto Instructors’ Experiences with Developing MOOCs
2015
"... We interviewed eight University of Toronto (U of T) instructors who have offered MOOCs on Coursera or EdX between 2012 and 2014 to understand their motivation for MOOC instruction, their experience developing and teaching MOOCs, and their perceptions of the implications of MOOC instruction on their ..."
Abstract
- Add to MetaCart
We interviewed eight University of Toronto (U of T) instructors who had offered MOOCs on Coursera or edX between 2012 and 2014 to understand their motivation for MOOC instruction, their experience developing and teaching MOOCs, and their perceptions of the implications of MOOC instruction for their teaching and research practices. Through inductive analysis, we gleaned common motivations for MOOC development, including expanding public access to high quality learning resources, showcasing U of T teaching practices, and attempting to engage MOOC learners in applying the concepts learned, even in the face of constraints that may inhibit active learning in MOOC contexts. MOOC design and delivery was a team effort with ample emphasis on planning and clarity. Instructors valued U of T instructional support in promoting systematic MOOC design and in resolving technical issues related to MOOC platforms. MOOC support at U of T evolved from a focus on addressing technical issues to instructional design driven, first, by desired learning outcomes. Findings include changes in the teaching practices of the MOOC instructors as they revised pedagogical practices in their credit courses by increasing opportunities for active learning and using MOOC resources to subsequently flip their classrooms. This study addresses the paucity of research on faculty experiences with developing MOOCs, which can subsequently inform the design of new forms of MOOC-like initiatives to increase public access to high quality learning resources, including those available through U of T.
unknown title
"... Abstract While most MOOCs rely on world-famous experts to teach the masses, in many circumstances students may learn more from people who share their context such as local teachers or peers. Here, we describe an experiment to explore how the "source" of video content, the teacher, affects ..."
Abstract
- Add to MetaCart
(Show Context)
While most MOOCs rely on world-famous experts to teach the masses, in many circumstances students may learn more from people who share their context, such as local teachers or peers. Here, we describe an experiment to explore how the "source" of video content, the teacher, affects online learning, specifically in the context of higher education in Indian colleges. The proposed experiment will compare three content sources: a local lecturer (a teacher from an Indian engineering college), local peers (both male and female students similar to the targeted audience), and an internationally recognized expert (a Stanford lecturer). Students will watch videos by the various source authors, after which we will measure differences in their preference, engagement, and learning. In addition, we discuss our experiences with helping students prepare video lectures and describe the support and processes we used to curate interesting and clear peer-generated content.
A System for Scalable and Reliable Technical-Skill Testing in Online Labor Markets
"... Abstract The emergence of online labor platforms, online crowdsourcing sites, and even Massive Open Online Courses (MOOCs), has created an increasing need for reliably evaluating the skills of the participating users (e.g., "does a candidate know Java") in a scalable way. Many platforms a ..."
Abstract
- Add to MetaCart
(Show Context)
The emergence of online labor platforms, online crowdsourcing sites, and even Massive Open Online Courses (MOOCs) has created an increasing need for reliably evaluating the skills of the participating users (e.g., "does a candidate know Java?") in a scalable way. Many platforms already allow job candidates to take online tests to assess their competence in a variety of technical topics. However, the existing approaches face many problems. First, cheating is very common in online testing without supervision, as the test questions often "leak" and become easily available online along with the answers. Second, technical skills, such as programming, require the tests to be frequently updated in order to reflect the current state of the art. Third, there is very limited evaluation of the tests themselves and of how effectively they measure the skill that the users are tested for. In this article we present a platform that continuously generates test questions and evaluates their quality as predictors of the user skill level. Our platform leverages content that is already available on question answering sites such as Stack Overflow and re-purposes these questions to generate tests. This approach has some major benefits: we continuously generate new questions, decreasing the impact of cheating, and we also create questions that are closer to the real problems that the skill holder is expected to solve in real life. Our platform uses Item Response Theory to evaluate the quality of the questions. We also use external signals about the quality of the workers to examine the external validity of the generated test questions: questions that have external validity also have strong predictive ability for identifying early the workers that have the potential to succeed in online job marketplaces. Our experimental evaluation shows that our system generates questions of comparable or higher quality compared to existing tests, at a cost of approximately $3 to $5 per question, which is lower than the cost of licensing questions from existing test banks, and an order of magnitude lower than the cost of producing such questions from scratch using experts.
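For concreteness, here is a small sketch of the two-parameter logistic (2PL) item model commonly used in Item Response Theory; the abstract does not say which IRT variant or fitting procedure the platform uses, so this is an assumed illustration.

import numpy as np

def p_correct(theta, a, b):
    """Probability that a worker of ability theta answers an item with
    discrimination a and difficulty b correctly (2PL model)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_log_likelihood(responses, thetas, a, b):
    """responses: 0/1 answers from workers with estimated abilities thetas.
    Items whose fitted discrimination a is high separate strong from weak workers well."""
    p = p_correct(np.asarray(thetas), a, b)
    r = np.asarray(responses)
    return np.sum(r * np.log(p) + (1 - r) * np.log(1 - p))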
Analyzing MOOC Discussion Forum Messages to Identify Cognitive Learning Information Exchanges
"... ABSTRACT While discussion forums in online courses have been studied in the past, no one has proposed a model linking messages in discussion forums to a learning taxonomy, even though forums are widely used as educational tools in online courses. In this research, we view forums as information seek ..."
Abstract
- Add to MetaCart
(Show Context)
While discussion forums in online courses have been studied in the past, no one has proposed a model linking messages in discussion forums to a learning taxonomy, even though forums are widely used as educational tools in online courses. In this research, we view forums as information seeking events and use a keyword taxonomy approach to analyze a large amount of MOOC forum data to identify the types of learning interactions taking place in forum conversations. Using 51,761 forum messages from 8,169 forum threads from a MOOC with an enrollment of more than 50,000, messages are analyzed based on levels of Bloom's Taxonomy to categorize the scholarly discourse. The results of this research show that interactions within MOOC discussion forums are a learning process with unique characteristics specific to particular cognitive learning levels. Results also imply that different types of forum interactions have characteristics relevant to particular learning levels, and that the volume of higher-level cognitive learning incidents increases as the course progresses.
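To illustrate what a keyword-taxonomy classification could look like (the cue-word lists and the tie-breaking rule below are placeholders invented for this sketch, not the study's actual taxonomy):

BLOOM_KEYWORDS = {
    "remember":   {"define", "list", "what is", "recall"},
    "understand": {"explain", "summarize", "describe", "clarify"},
    "apply":      {"use", "implement", "solve", "demonstrate"},
    "analyze":    {"compare", "contrast", "why does", "break down"},
    "evaluate":   {"justify", "critique", "assess", "recommend"},
    "create":     {"design", "build", "propose", "develop"},
}
LEVELS = list(BLOOM_KEYWORDS)  # ordered lowest to highest cognitive level

def bloom_level(message):
    """Return the highest Bloom level whose cue words appear in the message, or None."""
    text = message.lower()
    matched = [lvl for lvl in LEVELS
               if any(kw in text for kw in BLOOM_KEYWORDS[lvl])]
    return matched[-1] if matched else None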