Results 1 - 5 of 5
Using Matching for Automatic Assessment in Computer Science Learning Environments, 2000
"... The traditional method of automatically assessing programming exercises in Computer Science uses a black-box approach where a set of test data is inputed to both students and teachers programs and their outputs compared. This approach is useful for grading but inadequate for detecting and correcting ..."
Cited by 2 (1 self)
The traditional method of automatically assessing programming exercises in Computer Science uses a black-box approach in which a set of test data is input to both students' and teachers' programs and their outputs are compared. This approach is useful for grading but inadequate for detecting and correcting students' errors. In this paper we present several cases in which we were able to develop matching algorithms to compare answers with solutions and pinpoint differences between them. In some cases the matching is based on the actual structure of answers and solutions. In other cases we use execution side-effects to collect a structure that can be compared using a matching algorithm. This approach is currently being used in Ganesh, a web environment for learning Computer Science.

1 Introduction

The web is the natural environment for learning Computer Science (CS) nowadays. As computers are both the subject and the medium, students are naturally inclined to explore all the potential of this form...
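The black-box approach the abstract criticizes can be sketched as a small harness that feeds the same test inputs to a student program and a reference program and counts matching outputs; all function and program names below are illustrative placeholders, not taken from the paper.

```python
import subprocess

def blackbox_grade(student_cmd, reference_cmd, test_inputs, timeout=5):
    """Run both programs on each test input and compare their outputs.

    student_cmd / reference_cmd are argv lists (e.g. ["python", "student.py"]);
    these names are placeholders for this sketch.
    """
    passed = 0
    for data in test_inputs:
        try:
            student = subprocess.run(student_cmd, input=data, capture_output=True,
                                     text=True, timeout=timeout)
        except subprocess.TimeoutExpired:
            continue  # a hung submission simply fails this test case
        reference = subprocess.run(reference_cmd, input=data, capture_output=True,
                                   text=True, timeout=timeout)
        if student.stdout.strip() == reference.stdout.strip():
            passed += 1
    return passed, len(test_inputs)
```

As the abstract notes, such a harness yields a grade but says nothing about *where* a wrong answer diverges from the solution, which is the gap the paper's matching algorithms aim to fill.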
Making Failure the Mother of Success (Session T1A)
"... Abstract- Do students really welcome failure, the mother of success? This paper reports the implementation of a trial-and-failure learning strategy, in both entry and senior level computer science classes. The idea is simple: given a sophisticated course project, let students try project submissions ..."
Do students really welcome failure, the mother of success? This paper reports the implementation of a trial-and-failure learning strategy in both entry-level and senior-level computer science classes. The idea is simple: given a sophisticated course project, let students try project submissions as many times as they want before the project deadline. For each submission, a thorough inspection is performed by an automated grading system called APOGEE. A student project has to satisfy not only the functional requirements but also many desired quality attributes, such as robustness and security. In 2009, the trial-and-failure strategy was adopted by four universities in five class sections. We report some interesting observations from the student survey results, e.g., whether factors such as students' positive prior experience with programming, choice of programming language, years of working experience, and instructor are predictive variables for positive attitudes toward the trial-and-failure learning experience as a whole.
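The two-axis inspection the abstract describes, functional requirements plus quality attributes such as robustness, can be illustrated with a minimal grader that runs functional cases alongside robustness probes; the structure and scoring here are my assumptions for the sketch, not APOGEE's actual design.

```python
def grade_submission(func, functional_cases, robustness_probes):
    """Grade one submission attempt on two axes.

    functional_cases: list of (args, expected) pairs.
    robustness_probes: malformed argument tuples the submission should reject
                       cleanly (by raising ValueError) rather than crash on.
    Returns a list of (axis, input, passed) entries.
    """
    report = []
    for args, expected in functional_cases:
        try:
            ok = func(*args) == expected
        except Exception:
            ok = False          # any crash fails the functional case
        report.append(("functional", args, ok))
    for bad in robustness_probes:
        try:
            func(*bad)
            ok = False          # silently accepting garbage is a failure
        except ValueError:
            ok = True           # clean, expected rejection
        except Exception:
            ok = False          # uncontrolled crash
        report.append(("robustness", bad, ok))
    return report
```

Because every submission gets a full report rather than a single score, students can resubmit before the deadline and iterate on exactly the cases they failed, which is the trial-and-failure loop the paper studies.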
Lim, Oon and Zhu, Towards Definitive Benchmarking of Algorithm Performance
"... One of the primary methods employed by researchers to judge the merits of new heuristics and algorithms is to run them on accepted benchmark test cases and comparing their performance against the existing approaches. Such test cases can be either generated or pre-defined, and both approaches have th ..."
One of the primary methods employed by researchers to judge the merits of new heuristics and algorithms is to run them on accepted benchmark test cases and to compare their performance against existing approaches. Such test cases can be either generated or pre-defined, and both approaches have their shortcomings. Generated data may be accidentally or deliberately skewed to favor the algorithm being tested, and the exact data is usually unavailable to other researchers; pre-defined benchmarks may become outdated. This paper describes a secure online benchmark facility called the Benchmark Server, which would store and run submitted programs in different languages on standard benchmark test cases for different problems and generate performance statistics. With carefully chosen and up-to-date test cases, the Benchmark Server could provide researchers with a definitive means to compare their new methods with the best existing methods using the latest data.

Keywords: Benchmarking of algorithms, Web-based Benchmark Server
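The core of such a facility, running a submitted program on standard test instances and collecting timing statistics, can be sketched as follows; the interface and scoring are guesses for illustration, not the paper's actual server design.

```python
import statistics
import subprocess
import time

def benchmark(cmd, test_cases, runs=3, timeout=60):
    """Time a submitted program (an argv list) on each benchmark instance.

    test_cases maps instance names to stdin payloads. Returns mean wall-clock
    seconds per instance, or None if the program times out or exits non-zero.
    """
    results = {}
    for name, data in test_cases.items():
        samples = []
        for _ in range(runs):
            start = time.perf_counter()
            try:
                proc = subprocess.run(cmd, input=data, capture_output=True,
                                      text=True, timeout=timeout)
            except subprocess.TimeoutExpired:
                samples = None
                break
            if proc.returncode != 0:
                samples = None
                break
            samples.append(time.perf_counter() - start)
        results[name] = statistics.mean(samples) if samples else None
    return results
```

Running all submissions on the same machine with the same instances is what would make the resulting statistics comparable across researchers, which is the point of centralizing the benchmark.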
"... Providing consistent, instant, and detailed feedback to students has been a big challenge in teaching Web based computing, given the complexity of project assignments and the comprehensive requirements on security, reliability, and robustness. We present a prototype automated grading system called P ..."
Providing consistent, instant, and detailed feedback to students has been a big challenge in teaching Web-based computing, given the complexity of project assignments and the comprehensive requirements on security, reliability, and robustness. We present a prototype automated grading system called ProtoAPOGEE for enriching students' learning experience and elevating faculty productivity. Unlike other automatic graders used in introductory programming classes, ProtoAPOGEE covers a large spectrum of system quality attributes. It is able to generate step-by-step playback guidance for failed test cases, hence providing informative feedback to help students make reflective and iterative improvements in learning.
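The playback idea, recording every action a test takes so a failed case can be replayed for the student, can be sketched generically; this simplified trace is a stand-in for what ProtoAPOGEE generates, not its actual output format.

```python
def run_with_playback(steps, apply_step, check):
    """Run a scripted test and, on failure, return a step-by-step trace.

    steps: list of (description, payload) actions.
    apply_step: callable advancing the system under test; returns new state.
    check: predicate over the final state.
    Returns (passed, trace); trace is empty when the test passes.
    """
    state, trace = None, []
    for desc, payload in steps:
        state = apply_step(state, payload)
        trace.append(f"step: {desc} -> state now {state!r}")
    if check(state):
        return True, []
    return False, trace  # feedback: replay exactly what the grader did
```

The trace lets a student reproduce the grader's failing interaction sequence by hand, which is what makes the feedback actionable rather than a bare pass/fail verdict.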
2009
"... large number of software projects exist and will continue to be developed that have textual requirements and textual design elements where the design elements should fully satisfy the requirements. Current techniques to assess the satisfaction of requirements by corresponding design elements are lar ..."
A large number of software projects exist, and will continue to be developed, that have textual requirements and textual design elements, where the design elements should fully satisfy the requirements. Current techniques to assess the satisfaction of requirements by corresponding design elements are largely manual processes that lack formal criteria and standard practices. Software projects that require satisfaction assessment are often very large systems containing several hundred requirements and design elements. Often these projects are in a high-assurance project domain, where human lives and millions of dollars of funding are at stake. Manual satisfaction assessment is expensive in terms of hours of human effort and project budget. Automated techniques are not currently applied to satisfaction assessment. This dissertation addresses the problem of automated satisfaction assessment for...
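One common way to automate a first pass over requirement-to-design satisfaction is textual similarity: flag any requirement whose best-matching design element scores below a threshold, and send only those to a human reviewer. The toy term-frequency cosine sketch below illustrates that idea under my own assumptions; it is not the dissertation's actual method, and the threshold is arbitrary.

```python
import re
from collections import Counter
from math import sqrt

def tokens(text):
    # crude tokenizer: lowercase alphabetic runs as term-frequency counts
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    # cosine similarity between two term-frequency vectors
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def flag_unsatisfied(requirements, design_elements, threshold=0.25):
    """Flag requirement ids whose best-matching design element falls below
    the similarity threshold, marking them for manual satisfaction review."""
    flagged = []
    for rid, rtext in requirements.items():
        best = max(cosine(tokens(rtext), tokens(d))
                   for d in design_elements.values())
        if best < threshold:
            flagged.append(rid)
    return flagged
```

On a system with hundreds of requirements, such a filter reduces the manual effort the abstract describes by concentrating reviewer attention on the low-similarity pairs, at the cost of false alarms when requirement and design use different vocabulary.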