Results 1 - 4 of 4
Studying the Consistency of Star Ratings and the Complaints in 1 & 2-Star User Reviews for Top Free Cross-Platform Android and iOS Apps
Abstract
How users rate a mobile app via star ratings and user reviews is of utmost importance for the success of an app. Recent studies and surveys show that users rely heavily on the star ratings and user reviews that are provided by other users when deciding which app to download. However, understanding star ratings and user reviews is a complicated matter, since they are influenced by many factors, such as the actual quality of the app and how the user perceives that quality relative to their expectations, which are in turn influenced by their prior experiences and expectations relative to other apps on the platform (e.g., iOS versus Android). Nevertheless, star ratings and user reviews provide developers with valuable information for improving the software quality of their app. In an effort to expand their revenue and reach more users, app developers commonly build cross-platform apps, i.e., apps that are available on multiple platforms. As star ratings and user reviews are of such importance in the mobile app industry, it is essential for developers of cross-platform apps to maintain a consistent level of star ratings and user reviews for their apps across the various platforms on which they are available. In this paper, we investigate whether cross-platform apps achieve a consistent level of star ratings and user reviews. We manually identify 19 cross-platform apps and conduct an empirical study on their star ratings and user reviews. By manually tagging 9,902 1 & 2-star reviews of the studied cross-platform apps, we discover that the distribution of the frequency of complaint types varies across platforms. Finally, we study the negative impact ratio of complaint types and find that for some apps, users have higher expectations on one platform. All our proposed techniques and methodologies are generic and can be used for any app. Our findings show that at least 68% of the studied cross-platform apps do not have consistent star ratings, which suggests that different quality assurance efforts need to be considered by developers for the different platforms that they wish to support.
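The core of the tagging analysis described above can be sketched as computing, per platform, the relative frequency of each manually assigned complaint type. The snippet below is an illustrative sketch only; the review data and complaint-type labels are hypothetical placeholders, not the study's dataset.

```python
# Illustrative sketch (hypothetical data): comparing the frequency
# distribution of complaint types in 1 & 2-star reviews of a
# cross-platform app across iOS and Android.
from collections import Counter

# Each entry: (platform, complaint type manually tagged for a 1/2-star review)
tagged_reviews = [
    ("ios", "crash"), ("ios", "crash"), ("ios", "ui"),
    ("android", "battery"), ("android", "crash"), ("android", "feature"),
    ("android", "battery"),
]

def complaint_distribution(reviews, platform):
    """Relative frequency of each complaint type on one platform."""
    counts = Counter(t for p, t in reviews if p == platform)
    total = sum(counts.values())
    return {t: n / total for t, n in counts.items()}

for platform in ("ios", "android"):
    print(platform, complaint_distribution(tagged_reviews, platform))
```

Comparing the two resulting distributions (e.g., crash complaints dominating on one platform but not the other) is one way to surface the cross-platform inconsistencies the study reports.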
Investigating Functional and Code Size Measures for Mobile Applications: A Replicated Study
Abstract
In this paper, we apply a measurement procedure proposed by van Heeringen and van Gorp to approximate the COSMIC size of mobile applications. We compare this procedure with the one introduced by D’Avanzo et al. We also replicate a recently conducted empirical study that assessed whether the COSMIC functional size of mobile applications can be used to estimate the size of the final applications in terms of lines of code, number of bytes of source code, and bytecode. The results showed that the COSMIC functional size evaluated with van Heeringen and van Gorp’s method was well correlated with all the size measures taken into account. Nevertheless, the prediction accuracy did not satisfy the evaluation criteria and turned out to be slightly worse than that obtained in the original study, which was based on the approach proposed by D’Avanzo et al.
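The estimation step described in this abstract amounts to regressing code size on functional size and then checking prediction accuracy. The sketch below illustrates that general workflow; the data points are hypothetical, and MMRE with a 0.25 threshold is used here as one commonly cited acceptance criterion in size/effort estimation, not necessarily the exact criteria of the replicated study.

```python
# Illustrative sketch (not the authors' tooling): estimating lines of code
# (LOC) from COSMIC functional size via simple least-squares regression,
# then checking prediction accuracy with MMRE. All data points are
# hypothetical placeholders.

def fit_ols(xs, ys):
    """Ordinary least squares fit for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def mmre(actual, predicted):
    """Mean Magnitude of Relative Error; <= 0.25 is a common threshold."""
    return sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical apps: COSMIC Function Points vs. measured LOC
cfp = [12, 25, 40, 60, 85]
loc = [900, 1700, 2600, 4100, 5600]

a, b = fit_ols(cfp, loc)
pred = [a * x + b for x in cfp]
print(f"LOC ~= {a:.1f} * CFP + {b:.1f}, MMRE = {mmre(loc, pred):.2f}")
```

A study of this kind would additionally evaluate the model on held-out applications rather than the fitting set, which is where the reported accuracy shortfall would show up.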
Lifting Inter-App Data-Flow Analysis to Large App Sets, 2015
Abstract
This technical report is published as a means to ensure timely dissemination of our work with the same title, which is currently under review for a conference. Copyright and all rights in this technical report are maintained by the authors. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author’s copyright. This work may not be reposted without the explicit permission of the authors.
Feature Lifecycles as They Spread, Migrate, Remain, and Die in App Stores
Abstract
We introduce a theoretical characterisation of feature lifecycles in app stores, to help app developers to identify trends and to find undiscovered requirements. To illustrate and motivate app feature lifecycle analysis, we use our theory to empirically analyse the migratory and non-migratory behaviours of 4,053 non-free features from two app stores (Samsung and BlackBerry). The results reveal that, in both stores, intransitive features (those that neither migrate nor die out) exhibit significantly different behaviours with regard to important properties, such as their price. Further correlation analysis also highlights differences between trends relating price, rating, and popularity. Our results indicate that feature lifecycle analysis can yield insights that may also help developers to understand feature behaviours and attribute relationships.