Results 1 - 10 of 125
Estimating the Numbers of End Users and End User Programmers
- In IEEE Symp. on Visual Languages and Human-Centric Computing, 2005
"... In 1995, Boehm predicted that by 2005, there would be “55 million performers ” of “end user programming ” in the United States. The original context and method which generated this number had two weaknesses, both of which we address. First, it relies on undocumented, judgment based factors to estima ..."
Abstract
-
Cited by 121 (29 self)
- Add to MetaCart
(Show Context)
In 1995, Boehm predicted that by 2005, there would be “55 million performers” of “end user programming” in the United States. The original context and method which generated this number had two weaknesses, both of which we address. First, it relies on undocumented, judgment-based factors to estimate the number of end-user programmers based on the total number of end users; we address this weakness by identifying specific end-user subpopulations and then estimating their sizes. Second, Boehm's estimate relies on additional undocumented, judgment-based factors to adjust for rising computer usage rates; we address this weakness by integrating fresh Bureau of Labor Statistics (BLS) data and projections as well as a richer estimation method. With these improvements to Boehm’s method, we estimate that in 2012 there will be 90 million end users in American workplaces. Of these, we anticipate that over 55 million will use spreadsheets or databases (and therefore may potentially program), while over 13 million will describe themselves as programmers, compared to BLS projections of fewer than 3 million professional programmers. We have validated our improved method by generating estimates for 2001 and 2003, then verifying that our estimates are consistent with existing estimates from other sources.
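A minimal sketch of the decomposition idea described above: sum occupation-level subpopulations, scale by computer-usage rates and a growth projection. Every occupation name, headcount, usage rate, and growth factor below is a hypothetical placeholder, not a figure from the paper or from BLS.

```python
# Decomposition-style estimate: sum subpopulations, scale by usage rates and growth.
# All numbers are hypothetical placeholders for illustration only.

subpopulations = {
    # occupation: (headcount_millions, fraction_using_spreadsheets_or_databases)
    "management_and_financial": (20.0, 0.70),
    "office_and_admin_support": (24.0, 0.55),
    "engineering_and_science": (5.0, 0.80),
}

growth_to_target_year = 1.10  # hypothetical aggregate BLS-style growth factor

end_users = sum(n for n, _ in subpopulations.values()) * growth_to_target_year
potential_programmers = sum(n * f for n, f in subpopulations.values()) * growth_to_target_year

print(f"estimated end users (millions): {end_users:.1f}")
print(f"estimated spreadsheet/database users (millions): {potential_programmers:.1f}")
```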
Software development cost estimation approaches – A survey
- Annals of Software Engineering, 2000
"... This paper summarizes several classes of software cost estimation models and techniques: parametric models, expertise-based techniques, learning-oriented techniques, dynamics-based models, regression-based models, and composite-Bayesian techniques for integrating expertise-based and regression-based ..."
Abstract
-
Cited by 120 (7 self)
- Add to MetaCart
(Show Context)
This paper summarizes several classes of software cost estimation models and techniques: parametric models, expertise-based techniques, learning-oriented techniques, dynamics-based models, regression-based models, and composite-Bayesian techniques for integrating expertise-based and regression-based models. Experience to date indicates that neural-net and dynamics-based techniques are less mature than the other classes of techniques, but that all classes of techniques are challenged by the rapid pace of change in software technology. The primary conclusion is that no single technique is best for all situations, and that a careful comparison of the results of several approaches is most likely to produce realistic estimates.
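As a concrete instance of the parametric class surveyed, here is a minimal sketch of the Basic COCOMO effort equation, effort = a * KLOC^b, with the standard organic-mode coefficients. It is shown only to illustrate what a parametric model looks like, not as the survey's recommended technique.

```python
# Illustrative parametric cost model: Basic COCOMO, organic mode.
# effort (person-months) = a * KLOC^b, with a = 2.4 and b = 1.05.

def basic_cocomo_effort(kloc: float, a: float = 2.4, b: float = 1.05) -> float:
    """Return estimated effort in person-months for a project of `kloc` KSLOC."""
    return a * kloc ** b

if __name__ == "__main__":
    for size in (10, 50, 100):
        print(f"{size} KLOC -> {basic_cocomo_effort(size):.0f} person-months")
```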
Understanding and Predicting Effort in Software Projects
- In 2003 International Conference on Software Engineering, 2003
"... We set out to answer a question we were asked by software project management: how much effort remains to be spent on a specific software project and how will that effort be distributed over time? To answer this question we propose a model based on the concept that each modification to software may c ..."
Abstract
-
Cited by 56 (18 self)
- Add to MetaCart
(Show Context)
We set out to answer a question we were asked by software project management: how much effort remains to be spent on a specific software project, and how will that effort be distributed over time? To answer this question we propose a model based on the concept that each modification to software may cause repairs at some later time. We investigate its theoretical properties and its application to several projects at Avaya to predict and plan development resource allocation. Our model presents a novel unified framework to investigate and predict the effort, schedule, and defects of a software project. The results of applying the model confirm a fundamental relationship between new feature changes and defect repair changes and demonstrate its predictive properties.
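The core concept, that a change made now may trigger repair work later, can be sketched as past changes convolved with a delay kernel. The kernel shape, repair probability, and change counts below are assumptions made for illustration, not the model actually fitted in the paper.

```python
# Toy sketch: expected repair changes in month t are past changes convolved
# with a repair-delay kernel. All values are illustrative assumptions.

changes_per_month = [40, 55, 60, 50, 30, 10]   # hypothetical new-feature changes
repair_probability = 0.25                       # chance a change needs a later repair
delay_kernel = [0.1, 0.4, 0.3, 0.2]             # when that repair tends to arrive

horizon = len(changes_per_month) + len(delay_kernel)
expected_repairs = [0.0] * horizon
for t, n_changes in enumerate(changes_per_month):
    for lag, weight in enumerate(delay_kernel, start=1):
        expected_repairs[t + lag] += n_changes * repair_probability * weight

for month, repairs in enumerate(expected_repairs):
    print(f"month {month}: expected repair changes = {repairs:.1f}")
```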
Estimation of the COCOMO model parameters using genetic algorithms for NASA software projects
- Journal of Computer Science, USA, 2006
"... Abstract: Defining the project estimated cost, duration and maintenance effort early in the development life cycle is a valuable goal to be achieved for software projects. Many model structures evolved in the literature. These model structures consider modeling software effort as a function of the d ..."
Abstract
-
Cited by 29 (6 self)
- Add to MetaCart
(Show Context)
Defining a project's estimated cost, duration, and maintenance effort early in the development life cycle is a valuable goal for software projects. Many model structures have evolved in the literature; these model structures treat software effort as a function of the developed lines of code (DLOC). Building such a function helps project managers to accurately allocate the available resources for the project. In this study, we present two new model structures to estimate the effort required for the development of software projects using genetic algorithms (GAs). A modified version of the well-known COCOMO model is provided to explore the effect of the adopted software development methodology on effort computation. The performance of the developed models was tested on a NASA software project dataset [1]. The developed models were able to provide good estimation capabilities. Key words: COCOMO model, NASA software, genetic algorithms, genetic programming technique
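A rough sketch of the kind of fit described: estimating the parameters of effort = a * DLOC^b with a simple genetic algorithm. The project data points, GA settings, and operators below are synthetic placeholders, not the paper's NASA dataset or its exact model structures.

```python
# Fit COCOMO-style parameters (a, b) in effort = a * DLOC^b with a simple GA.
# Data and GA settings are synthetic placeholders for illustration.
import random

# (DLOC in KLOC, measured effort in person-months) -- synthetic example data
projects = [(10, 24), (25, 75), (46, 130), (90, 280), (150, 510)]

def mse(params):
    a, b = params
    return sum((a * kloc ** b - effort) ** 2 for kloc, effort in projects) / len(projects)

def random_individual():
    return (random.uniform(0.5, 5.0), random.uniform(0.8, 1.5))

def mutate(ind, sigma=0.05):
    a, b = ind
    return (max(0.1, a + random.gauss(0, sigma)), max(0.5, b + random.gauss(0, sigma)))

def crossover(p1, p2):
    w = random.random()
    return (w * p1[0] + (1 - w) * p2[0], w * p1[1] + (1 - w) * p2[1])

population = [random_individual() for _ in range(60)]
for generation in range(200):
    population.sort(key=mse)
    elite = population[:10]                      # keep the fittest candidates
    offspring = [mutate(crossover(random.choice(elite), random.choice(elite)))
                 for _ in range(len(population) - len(elite))]
    population = elite + offspring

best_a, best_b = min(population, key=mse)
print(f"fitted: effort = {best_a:.2f} * DLOC^{best_b:.2f}, MSE = {mse((best_a, best_b)):.1f}")
```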
Empirical evaluation of defect projection models for widely-deployed production software systems, 2004
"... Defect-occurrence projection is necessary for the development of methods to mitigate the risks of software defect occurrences. In this paper, we examine user-reported software defectoccurrence patterns across twenty-two releases of four widelydeployed, business-critical, production, software systems ..."
Abstract
-
Cited by 27 (6 self)
- Add to MetaCart
(Show Context)
Defect-occurrence projection is necessary for the development of methods to mitigate the risks of software defect occurrences. In this paper, we examine user-reported software defect-occurrence patterns across twenty-two releases of four widely-deployed, business-critical production software systems: a commercial operating system, a commercial middleware system, an open source operating system (OpenBSD), and an open source middleware system (Tomcat). We evaluate the suitability of common defect-occurrence models by first assessing the match between characteristics of widely-deployed production software systems and model structures. We then evaluate how well the models fit real-world data. We find that the Weibull model is flexible enough to capture defect-occurrence behavior across a wide range of systems. It provides the best model fit in 16 out of the 22 releases. We then evaluate the ability of the moving averages and exponential smoothing methods to extrapolate Weibull model parameters using fitted model parameters from historical releases. Our results show that in 50% of our forecasting experiments, these two naïve parameter-extrapolation methods produce projections that are worse than the projection obtained by reusing the model parameters of the most recent release. These findings establish the need for further research on parameter-extrapolation methods that take into account variations in the characteristics of widely-deployed production software systems across multiple releases.
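A minimal sketch of fitting a Weibull-shaped defect-occurrence curve to per-period defect counts, in the spirit of the models evaluated here. The counts and starting parameters are synthetic placeholders, not data from the studied releases; the last step mimics the naive projection the paper critiques, namely reusing the most recent release's fitted parameters.

```python
# Fit a Weibull-shaped defect-occurrence curve to per-period defect counts.
# Defect counts below are synthetic placeholders, not data from the study.
import numpy as np
from scipy.optimize import curve_fit

def weibull_rate(t, total, shape, scale):
    """Expected defects reported in period t under a Weibull occurrence model."""
    return total * (shape / scale) * (t / scale) ** (shape - 1) * np.exp(-(t / scale) ** shape)

periods = np.arange(1, 13)
defects = np.array([5, 14, 22, 30, 28, 24, 18, 12, 8, 5, 3, 2])  # synthetic counts

params, _ = curve_fit(weibull_rate, periods, defects, p0=(defects.sum(), 2.0, 5.0))
total, shape, scale = params
print(f"fitted Weibull: total = {total:.0f}, shape = {shape:.2f}, scale = {scale:.2f}")

# Naive extrapolation step (as critiqued in the paper): reuse the most recent
# release's fitted parameters as the projection for the next release.
next_release_projection = weibull_rate(periods, *params)
```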
The Effects of Software Process Maturity on Software Development Effort, 1997
"... A software product is often behind schedule, over budget, non-conforming to requirements and of poor quality. Controlling and improving the processes used to develop software has been proposed as a primary remedy to these problems. The Software Engineering Institute at Carnegie Mellon University has ..."
Abstract
-
Cited by 25 (2 self)
- Add to MetaCart
A software product is often behind schedule, over budget, non-conforming to requirements, and of poor quality. Controlling and improving the processes used to develop software has been proposed as a primary remedy to these problems. The Software Engineering Institute at Carnegie Mellon University has published the Software Capability Maturity Model (SW-CMM) for use as a set of criteria to evaluate an organization's Process Maturity. The model is also used as a roadmap to improve a software development process's maturity. The premise of the SW-CMM is that mature development processes deliver products on time, within budget, within requirements, and of high quality. This research examines the effects of Software Process Maturity, using the SW-CMM, on software development effort. Effort is the primary determinant of software development cost and schedule. The technical challenge in this research is determining how much change in effort is due solely to changing Process Maturity when this change generally occurs concurrently with changes to other factors that also influence software development effort. The six mathematical models used in this research support the following conclusion: for the one hundred twelve projects in this sample, Software Process Maturity was a significant factor (95% confidence level) affecting software development effort. After normalizing for the effects of other effort influences, a one-increment change in the rating of Process Maturity resulted in a 15% to 21% reduction in effort. The modeling approach used in this analysis can be used in other areas of Software Engineering as well.
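A back-of-the-envelope application of the reported effect size: assuming the 15% to 21% per-increment reduction holds after normalization, a hypothetical baseline effort can be scaled as follows. The baseline value is an example figure, not from the study, and compounding across multiple increments is an assumption.

```python
# Apply the reported 15-21% per-increment effort reduction to a hypothetical project.
# Baseline effort is an example value; compounding across increments is assumed.

baseline_effort_pm = 400          # person-months, hypothetical project
reduction_low, reduction_high = 0.15, 0.21
maturity_gain = 1                 # one SW-CMM level increment

low = baseline_effort_pm * (1 - reduction_high) ** maturity_gain
high = baseline_effort_pm * (1 - reduction_low) ** maturity_gain
print(f"projected effort after improvement: {low:.0f}-{high:.0f} person-months")
```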
System dynamics modeling of an inspection-based process
- In 18th International Conference on Software Engineering (ICSE’96), 1996
"... A dynamic simulation model of an inspection-based software lifecycle process has been developed to support quantitative process evaluation. The model serves to examine the effects of inspection practices on cost, schedule and quality throughout the lifecycle. It uses system dynamics to model the int ..."
Abstract
-
Cited by 25 (0 self)
- Add to MetaCart
A dynamic simulation model of an inspection-based software lifecycle process has been developed to support quantitative process evaluation. The model serves to examine the effects of inspection practices on cost, schedule and quality throughout the lifecycle. It uses system dynamics to model the interrelated flows of tasks, errors and personnel throughout different development phases and is calibrated to industrial data. It extends previous software project dynamics research by examining an inspection-based process with an original model, integrating it with the knowledge-based method for risk assessment and cost estimation, and using an alternative modeling platform. While specific enough to investigate inspection practices, it is sufficiently general to incorporate changes for other phenomena. It demonstrates the effects of performing inspections or not, the effectiveness of varied inspection policies, and the effects of other managerial policies such as manpower allocation. The results of testing indicate a valid model that can be used for process evaluation and project planning, and serve as a framework for incorporating other dynamic process factors.
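A toy stock-and-flow sketch of the general idea, not the paper's calibrated model: tasks are developed, errors are injected with the work, and an inspection step catches a fraction of them cheaply before they reach costlier testing. All rates and cost factors are illustrative assumptions.

```python
# Toy system-dynamics-style simulation of an inspection-based process.
# All rates and effort costs are illustrative assumptions, not the paper's calibration.

def simulate(inspection_effectiveness, weeks=40):
    remaining_tasks, errors_latent = 400.0, 0.0
    effort = 0.0
    dev_rate, error_injection = 12.0, 0.5          # tasks/week, errors per task
    inspect_cost, test_fix_cost = 0.2, 1.5         # effort units per error found

    for _ in range(weeks):
        done = min(dev_rate, remaining_tasks)
        remaining_tasks -= done
        injected = done * error_injection
        found_in_inspection = injected * inspection_effectiveness
        errors_latent += injected - found_in_inspection
        effort += done + found_in_inspection * inspect_cost
    effort += errors_latent * test_fix_cost        # remaining errors fixed in test
    return effort

print("effort without inspections:", round(simulate(0.0)))
print("effort with inspections   :", round(simulate(0.6)))
```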
Effort estimation of use cases for incremental large-scale software development
- In: Proc. of the 27th Int’l Conf. on Software Engineering, St. Louis, 2005
"... This paper describes an industrial study of an effort estimation method based on use cases, the Use Case Points method. The original method was adapted to incremental development and evaluated on a large industrial system with modification of software from the previous release. We modified the follo ..."
Abstract
-
Cited by 21 (0 self)
- Add to MetaCart
(Show Context)
This paper describes an industrial study of an effort estimation method based on use cases, the Use Case Points method. The original method was adapted to incremental development and evaluated on a large industrial system with modification of software from the previous release. We modified the following elements of the original method: a) complexity assessment of actors and use cases, and b) the handling of non-functional requirements and team factors that may affect effort. For incremental development, we added two elements to the method: c) counting both all and the modified actors and transactions of use cases, and d) effort estimation for secondary changes of software not reflected in use cases. We finally extended the method to: e) cover all development effort in a very large project. The method was calibrated using data from one release and it produced an estimate for the successive release that was only 17% lower than the actual effort. The study identified factors affecting effort on large projects with incremental development. It also showed how these factors can be calibrated for a specific context and produce relatively accurate estimates.
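For background, a minimal sketch of the unadapted Use Case Points calculation with Karner's commonly cited weights. The actor and use-case counts, factor values, and hours-per-UCP productivity figure are hypothetical examples, and the paper's incremental-development adaptations (items a-e above) are not reproduced.

```python
# Standard Use Case Points (UCP) calculation with commonly cited weights.
# All counts and factor values are hypothetical example inputs.

actor_weights = {"simple": 1, "average": 2, "complex": 3}
use_case_weights = {"simple": 5, "average": 10, "complex": 15}

actors = {"simple": 4, "average": 3, "complex": 2}          # hypothetical counts
use_cases = {"simple": 10, "average": 12, "complex": 5}

uaw = sum(actor_weights[k] * n for k, n in actors.items())      # unadjusted actor weight
uucw = sum(use_case_weights[k] * n for k, n in use_cases.items())  # unadjusted use case weight

tcf = 0.6 + 0.01 * 35          # technical complexity factor (example TFactor = 35)
ecf = 1.4 - 0.03 * 18          # environmental complexity factor (example EFactor = 18)

ucp = (uaw + uucw) * tcf * ecf
hours_per_ucp = 20             # commonly cited productivity assumption
print(f"UCP = {ucp:.1f}, effort = {ucp * hours_per_ucp:.0f} person-hours")
```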
Can Neural Networks be easily Interpreted in Software Cost Estimation?
- Proceedings of the 2002 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE'02), 2002
"... The use of the neural networks to estimate software development effort has been viewed with skepticism by the majority of the cost estimation community. Although, neural networks have shown their strengths in solving complex problems, their shortcoming of being `black boxes' has prevented them ..."
Abstract
-
Cited by 20 (0 self)
- Add to MetaCart
The use of neural networks to estimate software development effort has been viewed with skepticism by the majority of the cost estimation community. Although neural networks have shown their strengths in solving complex problems, their shortcoming of being 'black boxes' has prevented them from being accepted as a common practice for cost estimation. In this paper, we study the interpretation of cost estimation models based on a three-layer back-propagation multilayer perceptron network. Our idea consists of using a method that maps this neural network to a fuzzy rule-based system. Consequently, if the obtained fuzzy rules are easily interpreted, the neural network will also be easy to interpret. Our experiment is conducted using the COCOMO'81 dataset.
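A generic illustration of the kind of 'black box' under discussion: a small multilayer perceptron mapping project attributes to effort, trained on synthetic data. The paper's actual network over COCOMO'81 and its mapping to fuzzy rules are not reproduced here.

```python
# Small MLP regressor mapping project attributes to effort (synthetic data only).
# This is a generic illustration, not the paper's network or its fuzzy-rule mapping.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
kloc = rng.uniform(5, 150, size=200)
complexity = rng.uniform(0.8, 1.4, size=200)
effort = 2.8 * kloc ** 1.05 * complexity + rng.normal(0, 10, size=200)  # synthetic target

X = np.column_stack([kloc, complexity])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X, effort)

print("predicted effort for 60 KLOC, complexity 1.2:",
      round(float(model.predict([[60.0, 1.2]])[0])))
```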
Imagineering inauthentic legitimate peripheral participation: an instructional design approach for motivating computing education
- In ICER ’06: Proceedings of the 2nd International Workshop on Computing Education Research, 2006
"... ABSTRACT Since its publication, Lave and Wenger's concept of legitimate peripheral participation (LPP) ..."
Abstract
-
Cited by 17 (3 self)
- Add to MetaCart
(Show Context)
Since its publication, Lave and Wenger's concept of legitimate peripheral participation (LPP) ...