Results 1 - 10 of 131
ACE: exploiting correlation for energy-efficient and continuous context sensing
- In Proceedings of the 10th International Conference on Mobile Systems, Applications, and Services, 2012
"... We propose ACE (Acquisitional Context Engine), a middle-ware that supports continuous context-aware applications while mitigating sensing costs for inferring contexts. ACE provides user’s current context to applications running on it. In addition, it dynamically learns relationships among var-ious c ..."
Abstract - Cited by 38 (1 self)
We propose ACE (Acquisitional Context Engine), a middleware that supports continuous context-aware applications while mitigating the sensing costs of inferring contexts. ACE provides the user's current context to applications running on it. In addition, it dynamically learns relationships among various context attributes (e.g., whenever the user is Driving, he is not AtHome). ACE exploits these automatically learned relationships for two powerful optimizations. The first is inference caching, which allows ACE to opportunistically infer one context attribute (AtHome) from another already-known attribute (Driving), without acquiring any sensor data. The second optimization is speculative sensing, which enables ACE to occasionally infer the value of an expensive attribute (e.g., AtHome) by sensing cheaper attributes (e.g., Driving). Our experiments with two real context traces of 105 people and a Windows Phone prototype show that ACE can reduce the sensing costs of three context-aware applications by about 4.2×, compared to a raw sensor data cache shared across applications, with a very small memory and processing overhead.
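The inference-caching idea lends itself to a compact illustration. Below is a minimal Python sketch of rule-based inference caching in the spirit of ACE; the `ContextEngine` class, its rule format, and the attribute names are illustrative assumptions rather than the paper's actual implementation.

```python
# Minimal sketch of rule-based inference caching in the spirit of ACE.
# The rule format, attribute names, and sensing functions are illustrative
# assumptions, not the paper's actual data structures.

class ContextEngine:
    def __init__(self, sensors, rules):
        self.sensors = sensors      # attribute -> zero-arg function that senses its value
        self.rules = rules          # list of ((attr, value), (implied_attr, implied_value))
        self.cache = {}             # attribute -> last known value

    def get(self, attribute):
        # 1. Return a cached value if we already know it.
        if attribute in self.cache:
            return self.cache[attribute]
        # 2. Inference caching: derive the value from an already-known attribute.
        for (known_attr, known_val), (implied_attr, implied_val) in self.rules:
            if implied_attr == attribute and self.cache.get(known_attr) == known_val:
                self.cache[attribute] = implied_val
                return implied_val
        # 3. Fall back to actually sensing (the expensive path).
        value = self.sensors[attribute]()
        self.cache[attribute] = value
        return value


# Example usage with hypothetical attributes and a single learned rule.
engine = ContextEngine(
    sensors={"Driving": lambda: True, "AtHome": lambda: False},
    rules=[(("Driving", True), ("AtHome", False))],
)
engine.get("Driving")          # senses Driving once and caches it
print(engine.get("AtHome"))    # inferred as False without touching any sensor
```

Speculative sensing would extend `get` to first sense a cheaper attribute whose value, combined with the learned rules, can resolve the expensive one.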
Cell Phone-Based Biometric Identification
"... Abstract — Mobile devices are becoming increasingly sophisticated and now incorporate many diverse and powerful sensors. The latest generation of smart phones is especially laden with sensors, including GPS sensors, vision sensors (cameras), audio sensors (microphones), light sensors, temperature se ..."
Abstract - Cited by 19 (4 self)
Mobile devices are becoming increasingly sophisticated and now incorporate many diverse and powerful sensors. The latest generation of smart phones is especially laden with sensors, including GPS sensors, vision sensors (cameras), audio sensors (microphones), light sensors, temperature sensors, direction sensors (compasses), and acceleration sensors. In this paper we describe and evaluate a system that uses phone-based acceleration sensors, called accelerometers, to identify and authenticate cell phone users. This form of behavioral biometric identification is possible because a person's movements form a unique signature, and this is reflected in the accelerometer data that they generate. To implement our system we collected accelerometer data from thirty-six users as they performed normal daily activities such as walking, jogging, and climbing stairs, aggregated this time series data into examples, and then applied standard classification algorithms to the resulting data to generate predictive models. These models either predict the identity of the individual from the set of thirty-six users, a task we call user identification, or predict whether or not the user is a specific user, a task we call user authentication. This work is notable because it enables identification and authentication to occur unobtrusively, without the users taking any extra actions: all they need to do is carry their cell phones. There are many uses for this work. For example, in environments where sharing may take place, our work can be used to automatically customize a mobile device to a user. It can also be used to provide device security by enabling usage for only specific users and can provide an extra level of identity verification.
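As a rough sketch of the pipeline described above, the following Python code windows raw accelerometer traces into fixed-length examples, computes a few aggregate features, and trains an off-the-shelf classifier to predict the user's identity. The window length, feature set, and choice of random forest are assumptions for illustration and are not the paper's exact configuration.

```python
# Illustrative sketch of accelerometer-based user identification with a
# scikit-learn style workflow. Window length, features, and classifier
# choice are assumptions, not the paper's feature set.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def window_features(samples, window=200):
    """Aggregate raw (x, y, z) accelerometer samples into per-window feature rows."""
    feats = []
    for start in range(0, len(samples) - window + 1, window):
        w = samples[start:start + window]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0),
                                     np.abs(np.diff(w, axis=0)).mean(axis=0)]))
    return np.array(feats)

def build_dataset(acc_by_user):
    """acc_by_user: hypothetical dict mapping user id -> (n_samples, 3) array."""
    X, y = [], []
    for user_id, samples in acc_by_user.items():
        f = window_features(samples)
        X.append(f)
        y.extend([user_id] * len(f))
    return np.vstack(X), np.array(y)

def train_identifier(acc_by_user):
    X, y = build_dataset(acc_by_user)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
    return clf, clf.score(X_te, y_te)   # accuracy on held-out windows
```

Authentication would instead train a binary model per target user (target vs. everyone else) over the same window features.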
Mobile Phone Sensing Systems: A Survey
- IEEE Communications Surveys &amp; Tutorials, 2013
"... Abstract—Mobile phone sensing is an emerging area of interest for researchers as smart phones are becoming the core commu-nication device in people’s everyday lives. Sensor enabled mobile phones or smart phones are hovering to be at the center of a next revolution in social networks, green applicati ..."
Abstract - Cited by 19 (0 self)
Mobile phone sensing is an emerging area of interest for researchers as smart phones are becoming the core communication device in people's everyday lives. Sensor-enabled mobile phones, or smart phones, are poised to be at the center of the next revolution in social networks, green applications, global environmental monitoring, personal and community healthcare, sensor-augmented gaming, virtual reality, and smart transportation systems. More and more organizations and people are discovering how mobile phones can be used for social impact, including how to use mobile technology for environmental protection and sensing, and how to leverage just-in-time information to make our movements and actions more environmentally friendly. In this paper we comprehensively describe the systems that use smart phones and mobile phone sensors for the good of people and for better human-phone interaction.
Walkie-Markie: Indoor Pathway Mapping Made Easy
"... We present Walkie-Markie – an indoor pathway mapping system that can automatically reconstruct internal pathway maps of buildings without any a-priori knowledge about the building, such as the floor plan or access point locations. Central to Walkie-Markie is a novel exploitation of the WiFi infrastr ..."
Abstract - Cited by 17 (2 self)
We present Walkie-Markie – an indoor pathway mapping system that can automatically reconstruct internal pathway maps of buildings without any a priori knowledge about the building, such as the floor plan or access point locations. Central to Walkie-Markie is a novel exploitation of the WiFi infrastructure to define landmarks (WiFi-Marks) to fuse crowdsourced user trajectories obtained from inertial sensors on users' mobile phones. WiFi-Marks are special pathway locations at which the trend of the received WiFi signal strength changes from increasing to decreasing when moving along the pathway. By embedding these WiFi-Marks in a 2D plane using a newly devised algorithm and connecting them with calibrated user trajectories, Walkie-Markie is able to infer pathway maps with high accuracy. Our experiments demonstrate that Walkie-Markie is able to reconstruct a high-quality pathway map for a real office-building floor after only 5-6 rounds of walks, with accuracy gradually improving as more user data becomes available. The maximum discrepancy between the inferred pathway map and the real one is within 3 m and 2.8 m for the anchor nodes and path segments, respectively.
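The WiFi-Mark idea can be sketched in a few lines of Python: along a walked trajectory, flag the points where an access point's smoothed signal strength switches from rising to falling. The smoothing window and slope threshold below are illustrative assumptions, not Walkie-Markie's actual parameters.

```python
# Sketch of WiFi-Mark detection: find points along a walked trajectory where
# the RSSI trend for one access point flips from increasing to decreasing.
# Smoothing window and slope threshold are illustrative assumptions.
import numpy as np

def wifi_marks(rssi, positions, window=5, min_slope=0.5):
    """rssi: 1D array of RSSI readings for one AP along the walk.
    positions: corresponding (x, y) coordinates from dead reckoning.
    Returns the positions at which the RSSI trend flips from rising to falling."""
    smoothed = np.convolve(rssi, np.ones(window) / window, mode="same")
    marks = []
    for i in range(window, len(smoothed) - window):
        before = smoothed[i] - smoothed[i - window]   # trend approaching the point
        after = smoothed[i + window] - smoothed[i]    # trend leaving the point
        if before > min_slope and after < -min_slope:
            marks.append(positions[i])
    return marks
```

In the full system, such marks from many users' walks would then be embedded in a 2D plane and connected by the calibrated inertial trajectories.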
Simple and Complex Activity Recognition Through Smart Phones
"... Abstract—Due to an increased popularity of assistive healthcare technologies activity recognition has become one of the most widely studied problems in technology-driven assistive healthcare domain. Current approaches for smart-phone based activity recognition focus only on simple activities such as ..."
Abstract - Cited by 13 (0 self)
Due to the increased popularity of assistive healthcare technologies, activity recognition has become one of the most widely studied problems in the technology-driven assistive healthcare domain. Current approaches for smart-phone based activity recognition focus only on simple activities such as locomotion. In this paper, in addition to recognizing simple activities, we investigate the ability to recognize complex activities, such as cooking, cleaning, etc., through a smart phone. Features extracted from the smart phone's raw inertial sensor data corresponding to the user's activities are used to train and test supervised machine learning algorithms. The results from the experiments conducted on ten participants indicate that, in isolation, while simple activities can be easily recognized, the performance of the prediction models on complex activities is poor. However, the prediction model is robust enough to recognize simple activities even in the presence of complex activities. Keywords: activity recognition; accelerometer; smart environments; smart phone
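A minimal sketch of the evaluation described above, using scikit-learn: one supervised model is trained on windowed inertial features, and accuracy is reported separately for simple and complex activity labels. The feature matrix, label names, and decision-tree classifier are hypothetical placeholders, not the paper's setup.

```python
# Sketch: train one model on windowed inertial features and compare how well
# simple versus complex activity labels are recognized. Labels, features, and
# the classifier are hypothetical placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

SIMPLE = {"walking", "sitting", "standing"}
COMPLEX = {"cooking", "cleaning", "sweeping"}

def evaluate(X, y):
    """X: (n_windows, n_features) inertial features; y: activity label per window."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = DecisionTreeClassifier().fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(classification_report(y_te, pred))
    # Per-group accuracy makes the simple-vs-complex gap explicit.
    for group, names in (("simple", SIMPLE), ("complex", COMPLEX)):
        mask = np.isin(y_te, list(names))
        if mask.any():
            print(group, "accuracy:", (pred[mask] == y_te[mask]).mean())
```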
myHealthAssistant: A Phone-based Body Sensor Network that Captures the Wearer's Exercises Throughout the Day
- The 6th International Conference on Body Area Networks, 2011
"... This paper presents a novel fitness and preventive health care system with a flexible and easy to deploy platform. By using embedded wearable sensors in combination with a smartphone as an aggregator, both daily activities as well as specific gym exercises and their counts are recognized and logged. ..."
Abstract - Cited by 11 (3 self)
This paper presents a novel fitness and preventive health care system with a flexible and easy-to-deploy platform. By using embedded wearable sensors in combination with a smartphone as an aggregator, both daily activities and specific gym exercises, along with their repetition counts, are recognized and logged. The detection is achieved with minimal impact on the system's resources through the use of customized 3D inertial sensors embedded in fitness accessories, with built-in pre-processing of the initial 100 Hz data. The system provides flexible re-training of the classifiers on the phone, which allows it to be deployed swiftly. A set of evaluations shows a classification performance comparable to that of state-of-the-art activity recognition, and that the whole setup is suitable for daily usage with minimal impact on the phone's resources.
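The abstract does not spell out how repetitions are counted, so the following Python snippet is only one plausible sketch: count peaks in the smoothed magnitude of the 100 Hz accelerometer stream, with a minimum spacing between repetitions. The thresholds and the peak-based approach itself are illustrative assumptions.

```python
# One plausible sketch of counting exercise repetitions from a 3D accelerometer:
# count peaks in the smoothed acceleration magnitude. Thresholds, sampling rate,
# and the peak-based approach are illustrative assumptions, not the paper's method.
import numpy as np
from scipy.signal import find_peaks

def count_repetitions(acc_xyz, fs=100, min_rep_interval_s=1.0, min_height=1.5):
    """acc_xyz: (n_samples, 3) accelerometer data at fs Hz (e.g. the 100 Hz stream).
    Returns an estimated repetition count for one exercise set."""
    magnitude = np.linalg.norm(acc_xyz, axis=1)
    # Light smoothing to suppress jitter between repetitions.
    k = int(0.2 * fs)
    smooth = np.convolve(magnitude, np.ones(k) / k, mode="same")
    peaks, _ = find_peaks(smooth, height=min_height,
                          distance=int(min_rep_interval_s * fs))
    return len(peaks)
```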
Luštrek,"Three-layer Activity Recognition Combining Domain Knowledge and Meta-classification
- JMBE
"... One of the essential tasks of healthcare and smart-living systems is to recognize the current activity of a particular user. Such activity recognition (AR) is demanding when only limited sensors are used, such as accelerometers. Given a small number of accelerometers, intelligent AR systems often us ..."
Abstract - Cited by 11 (7 self)
One of the essential tasks of healthcare and smart-living systems is to recognize the current activity of a particular user. Such activity recognition (AR) is demanding when only limited sensors are used, such as accelerometers. Given a small number of accelerometers, intelligent AR systems often use simple architectures, either general or specific for their AR. In this paper, a system for AR named TriLAR is presented. TriLAR has an AR-specific architecture consisting of three layers: (i) a bottom layer, where an arbitrary number of AR methods can be used to recognize the current activity; (ii) a middle layer, where the predictions from the bottom-layer methods are inputs for a hierarchical structure that combines domain knowledge and meta-classification; and (iii) a top layer, where a hidden Markov model is used to correct spurious transitions between the recognized activities from the middle layer. The middle layer has a hierarchical, three-level structure. First, a meta-classifier is used to make the initial separation between the most distinct activities. Second, domain knowledge in the form of rules is used to differentiate between the remaining activities, recognizing those of interest (i.e., static activities). Third, another meta-classifier deals with the remaining activities. In this way, each activity is recognized by the method best suited to it, leaving unrecognized activities to the next method. This architecture was tested on a dataset recorded using ten volunteers who acted out a complex, real-life scenario while wearing accelerometers placed on the chest, thigh, and ankle. The results show that TriLAR successfully recognized elementary activities using one or two sensors and significantly outperformed three standard, single-layer methods with all sensor placements.
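The three-layer structure can be sketched as a simple pipeline. In the Python sketch below, each layer is a function; the domain rule, the bottom-layer classifiers, and the sliding-window majority vote that stands in for the paper's hidden Markov model are all illustrative assumptions, not TriLAR's actual components.

```python
# Structural sketch of a three-layer pipeline in the spirit of TriLAR.
# Classifiers, the domain rule, and the smoothing step are illustrative.
from collections import Counter

def bottom_layer(window, classifiers):
    """Layer (i): every bottom-layer method predicts an activity for the window."""
    return [clf(window) for clf in classifiers]

def middle_layer(votes, meta_classifier, features):
    """Layer (ii): a meta-classifier combines the votes, then domain rules refine them."""
    candidate = meta_classifier(votes)
    # Hypothetical domain rule: very low acceleration variance implies a static activity.
    if features.get("acc_variance", 1.0) < 0.01 and candidate not in ("sitting", "lying"):
        candidate = "sitting"
    return candidate

def top_layer(sequence, window=5):
    """Layer (iii): sliding-window majority vote smooths spurious transitions
    (a crude stand-in for the hidden Markov model used in the paper)."""
    half = window // 2
    return [Counter(sequence[max(0, i - half):i + half + 1]).most_common(1)[0][0]
            for i in range(len(sequence))]

def recognize(windows, features_per_window, classifiers, meta_classifier):
    raw = [middle_layer(bottom_layer(w, classifiers), meta_classifier, f)
           for w, f in zip(windows, features_per_window)]
    return top_layer(raw)
```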
Activity Recognition on Streaming Sensor Data
"... Many real-world applications that focus on addressing needs of a human, require information about the activities being performed by the human in real-time. While advances in pervasive computing have lead to the development of wireless and non-intrusive sensors that can capture the necessary activity ..."
Abstract - Cited by 8 (1 self)
Many real-world applications that address the needs of a human require information about the activities being performed by that human in real time. While advances in pervasive computing have led to the development of wireless and non-intrusive sensors that can capture the necessary activity information, current activity recognition approaches have so far experimented on either scripted or pre-segmented sequences of sensor events related to activities. In this paper we propose and evaluate a sliding-window-based approach to perform activity recognition in an online or streaming fashion, recognizing activities as and when new sensor events are recorded. To account for the fact that different activities can be best characterized by different window lengths of sensor events, we incorporate time-decay and mutual-information-based weighting of sensor events within a window. Additional contextual information, in the form of the previous activity and the activity of the previous window, is also appended to the feature describing a sensor window. The experiments conducted to evaluate these techniques on real-world smart home datasets suggest that combining mutual-information-based weighting of sensor events and adding past contextual information into the feature leads to the best performance for streaming activity recognition.
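A minimal sketch of the streaming setup: keep a sliding window of the most recent sensor events, weight each event by an exponential time decay, and classify the resulting feature vector on every new event. The decay rate, window size, and feature layout are assumptions for illustration; the paper additionally applies mutual-information-based weights between sensors and activities.

```python
# Sketch of streaming activity recognition over a sliding window of sensor
# events with exponential time-decay weighting. Parameters are illustrative.
import math
from collections import deque

class StreamingRecognizer:
    def __init__(self, classifier, sensor_ids, window_size=30, decay=0.1):
        self.classifier = classifier             # maps a feature dict to an activity label
        self.sensor_ids = sensor_ids
        self.window = deque(maxlen=window_size)  # last N (timestamp, sensor_id) events
        self.decay = decay
        self.last_activity = None

    def on_event(self, timestamp, sensor_id):
        """Call on every new sensor event; returns the current activity estimate."""
        self.window.append((timestamp, sensor_id))
        latest = self.window[-1][0]
        # Time-decayed count per sensor: recent events contribute more than old ones.
        features = {s: 0.0 for s in self.sensor_ids}
        for t, s in self.window:
            features[s] += math.exp(-self.decay * (latest - t))
        # Contextual feature: the activity recognized for the previous window.
        features["prev_activity"] = self.last_activity
        self.last_activity = self.classifier(features)
        return self.last_activity
```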
Design Considerations for the WISDM Smart Phone-based Sensor Mining Architecture
"... Smart phones comprise a large and rapidly growing market. These devices provide unprecedented opportunities for sensor mining since they include a large variety of sensors, including an: acceleration sensor (accelerometer), location sensor (GPS), direction sensor (compass), audio sensor (microphone) ..."
Abstract - Cited by 8 (4 self)
Smart phones comprise a large and rapidly growing market. These devices provide unprecedented opportunities for sensor mining since they include a large variety of sensors, including an acceleration sensor (accelerometer), location sensor (GPS), direction sensor (compass), audio sensor (microphone), image sensor (camera), proximity sensor, light sensor, and temperature sensor. Combined with the ubiquity and portability of these devices, these sensors provide us with an unprecedented view into people's lives, and an excellent opportunity for data mining. But there are obstacles to sensor mining applications due to the severe resource limitations (e.g., power, memory, bandwidth) faced by mobile devices. In this paper we discuss these limitations and their impact, and propose a solution based on our WISDM (Wireless Sensor Data Mining) smart phone-based sensor mining architecture.