• Title/Summary/Keyword: Implicit Intent


Study on Security Vulnerabilities of Implicit Intents in Android (안드로이드 암시적 인텐트의 보안 취약점에 대한 연구)

  • Jo, Min Jae; Shin, Ji Sun
    • Journal of the Korea Institute of Information Security & Cryptology / v.24 no.6 / pp.1175-1184 / 2014
  • Android provides a message-passing mechanism called an intent. While intents simplify communication within and between applications, they can be vulnerable to attack. In particular, an implicit intent, unlike an explicit intent, does not specify a receiving component, and insecure use of implicit intents may allow malicious applications to intercept or forge them. In this paper, we focus on the security vulnerabilities of implicit intents and review published attacks and solutions. For implicit intents using a developer-created action, specific attacks and solutions have been published. However, for implicit intents using an Android standard action, no specific attack has been found and the case has been studied less. In this paper, we present a new attack on implicit intents that use an Android standard action and propose solutions to protect smartphones from this attack.
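The interception risk the abstract describes can be sketched with a minimal Python model of intent dispatch (this is illustrative only, not the Android API; the app and action names are invented):

```python
# Minimal model of Android-style intent dispatch. Illustrative only --
# not the real Android API; app and action names are hypothetical.

class App:
    def __init__(self, name, actions):
        self.name = name
        self.actions = set(actions)  # actions declared in an intent filter

def dispatch_implicit(apps, action):
    # An implicit intent names only an action: ANY app whose intent
    # filter matches can receive it -- including a malicious one.
    return [a.name for a in apps if action in a.actions]

def dispatch_explicit(apps, component):
    # An explicit intent names the receiving component, so only that
    # component gets the message.
    return [a.name for a in apps if a.name == component]

bank = App("com.example.bank", ["com.example.ACTION_PAY"])
mal = App("com.evil.sniffer", ["com.example.ACTION_PAY"])  # same action

print(dispatch_implicit([bank, mal], "com.example.ACTION_PAY"))
# both apps match, so the malicious app can intercept the intent
print(dispatch_explicit([bank, mal], "com.example.bank"))
# only the named component matches
```

The model shows why the paper's distinction matters: the vulnerability comes from matching on action strings rather than on a named component.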

Discriminant Analysis of Human's Implicit Intent based on Eyeball Movement (안구운동 기반의 사용자 묵시적 의도 판별 분석 모델)

  • Jang, Young-Min; Mallipeddi, Rammohan; Kim, Cheol-Su; Lee, Minho
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.6 / pp.212-220 / 2013
  • Recently, there has been a tremendous increase in human-computer/machine interaction systems, where the goal is to provide an appropriate service to the user at the right time with minimal human input, for human augmented cognition. To develop an efficient human augmented cognition system based on human-computer/machine interaction, it is important to interpret the user's implicit intention, which is vague, in addition to the explicit intention. According to cognitive visual-motor theory, human eye movements and pupillary responses are rich sources of information about human intention and behavior. In this paper, we propose a novel approach to identifying a human's implicit visual search intention based on eye-movement patterns and pupillary analysis, including pupil size, the gradient of pupil-size variation, and fixation length/count for the area of interest. The proposed model classifies the human's implicit intention into three types: navigational intent generation, informational intent generation, and informational intent disappearance. Navigational intent refers to searching an input scene for something interesting with no specific instructions, while informational intent refers to searching for a particular target object at a specific location in the input scene. In the present study, based on eye-movement patterns and pupillary analysis, we used a hierarchical support vector machine that can detect transitions between the different implicit intents: from navigational intent generation to informational intent generation, and to informational intent disappearance.
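The hierarchical (two-stage) decision structure can be sketched in Python. The feature names and thresholds below are invented for illustration, and simple threshold rules stand in for the paper's trained SVMs:

```python
# Sketch of a two-stage (hierarchical) intent classifier over pupillary
# features. Thresholds are invented stand-ins for trained SVM decision
# boundaries; feature values are assumed to be normalized to [0, 1].

def classify_intent(pupil_size, pupil_gradient, fixation_count):
    # Stage 1: navigational vs. informational search.
    # (Informational search: a known target -> more fixations on the
    # area of interest and a rising pupil-size gradient.)
    if pupil_gradient > 0.0 and fixation_count >= 3:
        # Stage 2: is the informational intent active or disappearing?
        if pupil_size > 0.5:
            return "informational intent generation"
        return "informational intent disappearance"
    return "navigational intent generation"

print(classify_intent(pupil_size=0.7, pupil_gradient=0.2, fixation_count=5))
# → informational intent generation
```

The hierarchy mirrors the transitions the paper detects: stage 1 separates navigational from informational states, and stage 2 decides whether the informational intent persists.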

Development of an EMG-Based Car Interface Using Artificial Neural Networks for the Physically Handicapped (신경망을 적용한 지체장애인을 위한 근전도 기반의 자동차 인터페이스 개발)

  • Kwak, Jae-Kyung; Jeon, Tae-Woong; Park, Hum-Yong; Kim, Sung-Jin; An, Kwang-Dek
    • Journal of Information Technology Services / v.7 no.2 / pp.149-164 / 2008
  • As the computing landscape shifts to ubiquitous computing environments, demand is growing for a variety of device controls that react to a user's implicit activities without excessively drawing the user's attention. We developed an EMG-based car interface that enables the physically handicapped to drive a car using their functioning peripheral nerves. Our method extracts electromyogram (EMG) signals caused by wrist movements from four sites on the user's forearm and infers the user's intent from these signals using multi-layered neural networks. This makes it possible for the user to control the operation of car equipment and thus to drive the car. It also allows the user to enter inputs into the embedded computer through a user interface such as an instrument LCD panel. We validated the effectiveness of our method through experimental use in a car built with the EMG-based interface.
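The signal-to-command mapping can be sketched as a small feed-forward network over the four EMG channels. The weights and the command set below are hand-picked for illustration, not the paper's trained network:

```python
import math

# Toy sketch of an EMG-to-command mapping: the forward pass of a small
# multi-layer network over 4 forearm EMG channel amplitudes. Weights
# and commands are illustrative, not the paper's trained network.

COMMANDS = ["left", "right", "accelerate", "brake"]

def forward(x, w1, w2):
    # hidden layer with sigmoid activation, then a linear output layer
    h = [1.0 / (1.0 + math.exp(-sum(w * xi for w, xi in zip(row, x))))
         for row in w1]
    return [sum(w * hi for w, hi in zip(row, h)) for row in w2]

def predict(x, w1, w2):
    scores = forward(x, w1, w2)
    return COMMANDS[scores.index(max(scores))]

# With identity weights each output simply tracks one channel, so the
# strongest channel's command wins.
identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
print(predict([0.9, 0.1, 0.2, 0.1], identity, identity))  # → left
```

In practice the weights would be learned from recorded wrist-movement EMG, so that overlapping muscle activations (not single channels) select the command.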

Ontology-based Semantic Assembly Modeling for Collaborative Product Design (협업적 제품 설계를 위한 온톨로지 기반 시맨틱 조립체 모델링)

  • Yang Hyung-Jeong; Kim Kyung-Yun; Kim Soo-Hyung
    • The KIPS Transactions: Part B / v.13B no.2 s.105 / pp.139-148 / 2006
  • In a collaborative product design environment, communication between designers is important for capturing design intent and for sharing a common view among different but semantically similar terms. The Semantic Web supports integrated and uniform access to information sources and services, as well as intelligent applications, through the explicit representation of semantics in ontologies. Ontologies provide a source of shared and precisely defined terms that can be used to describe web resources and improve their accessibility to automated processes. Therefore, employing ontologies in assembly modeling makes assembly knowledge accurate and machine-interpretable. In this paper, we propose a framework for semantic assembly modeling that uses ontologies to share design information. An assembly modeling ontology serves as a formal, explicit specification of a shared conceptualization of assembly design modeling. Implicit assembly constraints are explicitly represented using OWL (Web Ontology Language) and SWRL (Semantic Web Rule Language). The assembly ontology also captures design rationale, including joint intent and spatial relationships.
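The effect of making an implicit constraint explicit via a rule can be sketched in Python, in the spirit of an SWRL rule such as `Joint(?j) ^ hasType(?j, "revolute") -> hasConstraint(?j, "coaxial")`. The joint types and derived constraints below are illustrative, not the paper's actual ontology:

```python
# Sketch: deriving implicit spatial constraints from explicit joint
# specifications, mimicking SWRL-style rules. Joint types and the
# derived constraints are illustrative, not the paper's ontology.

RULES = {
    "revolute": "coaxial",   # parts rotate about a shared axis
    "prismatic": "aligned",  # parts slide along a shared axis
    "rigid": "fixed",        # no relative motion
}

def infer_constraints(joints):
    # joints: list of (joint_name, joint_type) pairs
    return {name: RULES.get(jtype, "unconstrained")
            for name, jtype in joints}

assembly = [("hinge_1", "revolute"), ("slider_2", "prismatic")]
print(infer_constraints(assembly))
# → {'hinge_1': 'coaxial', 'slider_2': 'aligned'}
```

Encoding such rules in OWL/SWRL rather than in code is what makes the constraints shareable and machine-interpretable across design tools.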

Kinetic Typography in Korean Film, 2012: A Study on Expression in Movie Opening Title Sequences Using Kinetic Typography (키네틱 타이포그래피를 활용한 영화 오프닝타이틀 시퀀스 표현연구(2012 흥행작 중심으로))

  • Bang, Yoon-Kyeong
    • Cartoon and Animation Studies / s.31 / pp.227-248 / 2013
  • With the advancement of computers, opening title sequences in movies have continuously improved. Initially, titles and opening credits were created using the so-called optical method, whereby text was photographed on separate film and then copied onto the movie's film negative. In contemporary movie making, however, the title sequence may be seamlessly integrated into the beginning of the movie by an insertion method that not only allows for more diverse technical expression, including the use of both 2D and 3D graphics, but also for its emergence as an independent art form. Such a title sequence, in as little as 50 seconds or up to 10 minutes, is able to convey the film's concept while also suggesting more implicit intricacies of plot, thereby eliciting greater interest in the movie. Moreover, according to the director's intent and for a variety of purposes, the title sequence, while maintaining its autonomy, is inseparable from the movie as an organic whole; therefore, it is possible to create works that are highly original in nature. The purpose of this study is to analyze the kinetic typography that appears in the title sequences of ten films produced by the Korean entertainment industry in 2012. Production techniques are analyzed in a variety of ways in order to predict the future direction of opening title sequences, as well as to present aesthetic and technical models for their creation.