Context Awareness Model using the Improved Google Activity Recognition

개선된 Google Activity Recognition을 이용한 상황인지 모델

  • 백승은 (Dept. of Information Communications Engineering, Hankuk University of Foreign Studies)
  • 박상원 (Dept. of Information Communications Engineering, Hankuk University of Foreign Studies)
  • Received : 2014.11.05
  • Accepted : 2014.12.17
  • Published : 2015.01.31


Activity recognition technology is gaining attention because it can provide information tailored to a user's situation. Before smartphones became widespread, activity recognition research had to infer a user's activity from standalone sensors; today, the sensors built into smartphones make such inference possible, and research on activity recognition has accordingly become much more active. Activity recognition systems enable services such as recommending applications that match a user's preferences or providing route information. Some earlier activity recognition systems had the drawback of consuming too much energy because they relied on the GPS sensor. In contrast, the activity recognition system Google released recently (Google Activity Recognition) requires little power because it uses the network provider instead of GPS, which makes it well suited to smartphone applications. However, our performance tests of Google Activity Recognition showed that it is difficult to obtain the user's exact activity because of unnecessary activity elements and occasional misrecognition. Since new services based on activity recognition demand more accurate results, this paper describes the problems of Google Activity Recognition and proposes AGAR (Advanced Google Activity Recognition), which applies methods to improve recognition accuracy. To evaluate AGAR, we compare its performance with that of other activity recognition systems and demonstrate its applicability by developing an example program.
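The abstract notes that Google Activity Recognition reports "unnecessary activity elements" and occasional misrecognition. A minimal sketch of the kind of post-filtering such a system might apply is shown below, assuming the Google Play Services activity labels (e.g. ON_FOOT, IN_VEHICLE, TILTING) with per-activity confidence values; the class name, ignored-label set, and threshold are illustrative assumptions, not the paper's actual AGAR method.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

/**
 * Hypothetical sketch of post-filtering activity recognition output:
 * drop "unnecessary" activity types and require a minimum confidence
 * before switching away from the last stable activity. Names and the
 * threshold are assumptions for illustration only.
 */
public class AgarFilterSketch {
    // Activity types treated as unnecessary in this sketch.
    private static final Set<String> IGNORED = Set.of("TILTING", "UNKNOWN");

    /**
     * Picks the most confident non-ignored activity, but only if its
     * confidence clears the threshold; otherwise keeps the last stable one.
     */
    public static String pick(Map<String, Integer> confidences,
                              String lastStable, int threshold) {
        String best = null;
        int bestConf = -1;
        for (Map.Entry<String, Integer> e : confidences.entrySet()) {
            if (IGNORED.contains(e.getKey())) continue;   // skip noise labels
            if (e.getValue() > bestConf) {
                best = e.getKey();
                bestConf = e.getValue();
            }
        }
        return (best != null && bestConf >= threshold) ? best : lastStable;
    }

    public static void main(String[] args) {
        Map<String, Integer> sample = new LinkedHashMap<>();
        sample.put("TILTING", 80);     // ignored even though most confident
        sample.put("ON_FOOT", 62);
        sample.put("IN_VEHICLE", 10);
        System.out.println(pick(sample, "STILL", 50)); // prints ON_FOOT
    }
}
```

On an actual device, the confidence map would come from the `DetectedActivity` list in an `ActivityRecognitionResult` delivered by Google Play Services; here it is stubbed with literal values so the filtering step can be seen in isolation.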


Supported by: Hankuk University of Foreign Studies

