• Title/Summary/Keywords: Complex scene

Search results: 134 (processing time: 0.02 s)

A Novel Image Dehazing Algorithm Based on Dual-tree Complex Wavelet Transform

  • Huang, Changxin;Li, Wei;Han, Songchen;Liang, Binbin;Cheng, Peng
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12, No. 10 / pp.5039-5055 / 2018
  • The quality of natural outdoor images captured by visible camera sensors is usually degraded by haze in the atmosphere. In this paper, a fast image dehazing method based on visible and near-infrared image fusion is proposed. In the proposed method, a visible and a near-infrared (NIR) image of the same scene are fused using the dual-tree complex wavelet transform (DT-CWT) to generate a dehazed color image. The color of the fused image is regulated according to the haze concentration estimated by the dark channel prior (DCP). The experimental results demonstrate that the proposed method outperforms conventional dehazing methods and effectively solves the color distortion problem in the dehazing process.
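
The color-regulation step above hinges on the dark channel prior (DCP). Below is a minimal, illustrative sketch of a DCP-style haze-concentration estimate in Python; it is not the authors' implementation, and the patch size and atmospheric-light heuristic are assumptions.

```python
import cv2
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over the color channels, then a local minimum filter."""
    min_channel = img.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_channel, kernel)

def haze_concentration(img_bgr, patch=15):
    """Rough DCP-based haze estimate; values near 1 indicate dense haze."""
    img = img_bgr.astype(np.float64) / 255.0
    dark = dark_channel(img, patch)
    # Atmospheric light: mean color of the brightest 0.1% of dark-channel pixels (assumed heuristic).
    n = max(1, int(dark.size * 0.001))
    idx = np.argpartition(dark.ravel(), -n)[-n:]
    A = img.reshape(-1, 3)[idx].mean(axis=0)
    # Haze concentration is high where the dark channel of img / A is high.
    return dark_channel(img / A, patch)
```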

Collecting the Information Needs of Skilled and Beginner Drivers Based on a User Mental Model for a Customized AR-HUD Interface

  • Zhang, Han;Lee, Seung Hee
    • 감성과학 / Vol. 24, No. 4 / pp.53-68 / 2021
  • The continuous development of in-vehicle information systems in recent years has greatly enriched the driving experience while occupying drivers' cognitive resources to varying degrees and causing driving distraction. Within such a complex information system, managing the complexity and priority of information while further improving driving safety has become a key issue that in-vehicle information systems urgently need to solve. New interaction methods that incorporate augmented reality (AR) and head-up display (HUD) technologies into in-vehicle information systems are currently receiving widespread attention. They superimpose various onboard information onto the actual driving scene, thereby meeting the needs of complex tasks and improving driving safety. Based on the qualitative research methods of surveys and telephone interviews, this study collects the information needs of the target user groups (i.e., beginner and skilled drivers) and constructs a three-mode information database to provide the basis for a customized AR-HUD interface design.

자연배경에서 여러 객체 윤곽선의 추출을 위한 스네이크의 자동화 (Automation of Snake for Extraction of Multi-Object Contours from a Natural Scene)

  • 최재혁;서경석;김복만;최흥문
    • 한국정보과학회논문지:컴퓨팅의 실제 및 레터 / Vol. 9, No. 6 / pp.712-717 / 2003
  • We propose a multi-snake algorithm that automatically extracts the contours of an unspecified number of objects from a natural background. First, a noise-robust context-free attention operator automatically detects the multiple objects present in the natural background, and an initial snake contour is set automatically for each object, which solves the problems of automatic initial-contour placement and simultaneous extraction of multiple object contours that were difficult for conventional snake algorithms. The initial contour of each snake is placed closer to the object's actual boundary than in previous methods, so that the contours of objects with pronounced concavities and convexities can also be extracted easily. Experiments on various synthetic images and real natural-scene images confirm that the proposed method effectively and automatically extracts the contours of unspecified multiple objects even from noisy, complex backgrounds.
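
To make the idea concrete, here is an illustrative sketch that initializes one snake per detected object and refines it with a standard active-contour step; the paper's context-free attention operator is replaced by a simple threshold-and-label stand-in, and all parameters are assumptions.

```python
import numpy as np
from skimage import filters, measure
from skimage.segmentation import active_contour

def extract_object_contours(gray):
    # Stand-in for the context-free attention operator: Otsu threshold + connected components.
    mask = gray > filters.threshold_otsu(gray)
    labels = measure.label(mask)
    contours = []
    for region in measure.regionprops(labels):
        r0, c0, r1, c1 = region.bbox
        cy, cx = (r0 + r1) / 2.0, (c0 + c1) / 2.0
        radius = 0.6 * max(r1 - r0, c1 - c0)          # initial circle close to the object
        t = np.linspace(0, 2 * np.pi, 200)
        init = np.column_stack([cy + radius * np.sin(t), cx + radius * np.cos(t)])
        contours.append(active_contour(gray, init, alpha=0.015, beta=10.0, gamma=0.001))
    return contours                                    # one refined contour per detected object
```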

Text Extraction from Complex Natural Images

  • Kumar, Manoj;Lee, Guee-Sang
    • International Journal of Contents / Vol. 6, No. 2 / pp.1-5 / 2010
  • The rapid growth in communication technology has led to the development of effective ways of sharing ideas and information in the form of speech and images. Understanding this information has become an important research issue and has drawn the attention of many researchers. Text in a digital image carries much important information about the scene. Detecting and extracting this text is a difficult task with many challenging issues. The main challenges in extracting text from natural scene images are variations in font size, text alignment, font color, illumination, and reflections in the images. In this paper, we propose a connected-component-based method to automatically detect text regions in natural images. Since text regions in images consist mostly of repeated vertical strokes, we look for patterns of closely packed vertical edges. Once such a group of edges is found, neighboring vertical edges are connected to each other. Connected regions whose geometric features lie outside the valid specifications are considered outliers and eliminated. The proposed method is more effective than existing methods for slanted or curved characters. Experimental results are given to validate the approach.
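
A rough sketch of the vertical-edge grouping described above might look like the following; the kernel size and geometric thresholds are illustrative assumptions, not the paper's exact specifications.

```python
import cv2
import numpy as np

def detect_text_regions(gray):
    # Vertical strokes respond strongly to a horizontal gradient.
    grad_x = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    edges = (np.abs(grad_x) > 3 * np.abs(grad_x).mean()).astype(np.uint8) * 255
    # Connect neighboring vertical edges into word/line blobs.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 3))
    blobs = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
    n, _, stats, _ = cv2.connectedComponentsWithStats(blobs, connectivity=8)
    boxes = []
    for i in range(1, n):                              # label 0 is the background
        x, y, w, h, area = stats[i]
        aspect, fill = w / float(h), area / float(w * h)
        if 1.0 < aspect < 20.0 and h > 8 and fill > 0.2:   # illustrative validity rules
            boxes.append((x, y, w, h))
    return boxes
```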

무인 자동차의 2차원 레이저 거리 센서를 이용한 도시 환경에서의 빠른 주변 환경 인식 방법 (Fast Scene Understanding in Urban Environments for an Autonomous Vehicle equipped with 2D Laser Scanners)

  • 안승욱;최윤근;정명진
    • 로봇학회논문지 / Vol. 7, No. 2 / pp.92-100 / 2012
  • A map of a complex environment can be generated by a robot carrying sensors. However, directly representing the environment by integrating raw sensor data conveys only spatial occupancy. To execute high-level applications, robots need semantic knowledge of their environments. This research investigates the design of a system for recognizing objects in 3D point clouds of urban environments. The proposed system is decomposed into five steps: sequential LIDAR scanning, point classification, ground detection and elimination, segmentation, and object classification. The method can classify the various objects found in urban environments, such as cars, trees, buildings, and posts. Simple methods that minimize time-consuming processing are developed to guarantee real-time performance and to classify data on the fly as it is being acquired. To evaluate the performance of the proposed methods, computation time and recognition rate are analyzed. Experimental results demonstrate that the proposed algorithm efficiently and quickly acquires semantic knowledge of a dynamic urban environment.
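
A simplified sketch of the ground-elimination, segmentation, and classification chain is given below; the height thresholds, clustering parameters, and class rules are illustrative assumptions rather than the paper's actual design.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def understand_scene(points):
    """points: (N, 3) array of x, y, z coordinates in metres accumulated from the 2D laser scans."""
    # Ground detection and elimination: assume the lowest near-horizontal layer is ground.
    ground = points[:, 2] < np.percentile(points[:, 2], 5) + 0.2
    objects = points[~ground]
    # Segmentation: Euclidean clustering of the remaining points.
    labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(objects)
    classified = []
    for k in set(labels) - {-1}:                       # -1 marks noise points
        cluster = objects[labels == k]
        dims = cluster.max(axis=0) - cluster.min(axis=0)
        if dims[2] > 4.0:
            label = "building"
        elif dims[2] > 2.0 and max(dims[0], dims[1]) < 1.5:
            label = "tree/post"
        elif max(dims[0], dims[1]) > 1.5:
            label = "car"
        else:
            label = "other"
        classified.append((label, cluster))
    return classified
```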

3차원 손 특징을 이용한 손 동작 인식에 관한 연구 (A study on hand gesture recognition using 3D hand feature)

  • 배철수
    • 한국정보통신학회논문지 / Vol. 10, No. 4 / pp.674-679 / 2006
  • In this paper, we propose a gesture recognition system that uses 3D hand feature data. The proposed system generates a dense range image with a 3D sensor, extracts 3D features of the hand motion, and classifies the gesture. Because the hand is segmented robustly under various illumination and background conditions and the method does not rely on color information, it can reliably recognize even complex hand gestures such as sign language. The overall pipeline of the proposed method consists of 3D image acquisition, arm segmentation, hand and wrist segmentation, hand pose estimation, 3D feature extraction, and gesture classification, and recognition experiments on sign-language postures demonstrate the effectiveness of the proposed system.
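
The pipeline stages listed above can be sketched roughly as follows; the depth band, the shape features, and the classifier choice are assumptions made only for illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def segment_hand(depth, near=0.3, far=0.8):
    """Keep range-image pixels within an assumed hand-distance band (metres)."""
    return (depth > near) & (depth < far)

def hand_features_3d(points):
    """points: (N, 3) hand points; simple shape statistics serve as 3D features."""
    centered = points - points.mean(axis=0)
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(centered.T)))[::-1]  # elongation/flatness cues
    extent = points.max(axis=0) - points.min(axis=0)
    return np.concatenate([eigvals, extent])

# Gesture classification: any off-the-shelf classifier over the 3D features, e.g.
# clf = KNeighborsClassifier(n_neighbors=3); clf.fit(train_feats, train_labels)
```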

The Natural Way of Gestures for Interacting with Smart TV

  • Choi, Jin-Hae;Hong, Ji-Young
    • 대한인간공학회지 / Vol. 31, No. 4 / pp.567-575 / 2012
  • Objective: The aim of this study is to derive an optimal mental model by investigating users' natural behavior when controlling a smart TV with mid-air gestures and to identify which factor is most important for controlling behavior. Background: Many TV companies are trying to find a simple control method for complex smart TVs. Although plenty of gesture studies propose possible alternatives to resolve this pain point, there is as yet no gesture work fitted to the smart TV market, so optimal gestures for it need to be found. Method: (1) Elicit core control scenes through an in-house study. (2) Observe and analyze 20 users' natural behavior across types of hand-held devices and control scenes. We also built taxonomies for the gestures. Results: Users perform more manipulative gestures than symbolic gestures when attempting continuous control. Conclusion: The most natural way to control a smart TV remotely with gestures is to give the user a mental model of grabbing and manipulating virtual objects in mid-air. Application: The results of this work can help establish gesture interaction guidelines for smart TVs.

Collective Interaction Filtering Approach for Detection of Group in Diverse Crowded Scenes

  • Wong, Pei Voon;Mustapha, Norwati;Affendey, Lilly Suriani;Khalid, Fatimah
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 2 / pp.912-928 / 2019
  • Crowd behavior analysis plays a central role in helping people detect safety hazards and anticipate crime, and it is therefore significant for future video surveillance systems. Recently, the growing demand for safety monitoring has shifted the focus of video surveillance studies from the analysis of individual behavior to group behavior. Group detection is the step before crowd behavior analysis; it separates the individuals in a crowded scene into their respective groups by understanding their complex relations. Most existing studies on group detection are scene-specific. Crowds with various densities, structures, and mutual occlusion are the main challenges for group detection in diverse crowded scenes. Therefore, we propose a group detection approach called Collective Interaction Filtering to discover people's motion interactions from trajectories. The approach infers people's interactions with the Expectation-Maximization algorithm. Collective Interaction Filtering accurately identifies groups by clustering trajectories in crowds with various densities, structures, and mutual occlusion, and it also maintains grouping consistency between frames. Experiments on the CUHK Crowd Dataset demonstrate that the approach outperforms previous methods and achieves state-of-the-art results.
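
As a loose illustration of inferring groups from trajectories with an EM-style model, the sketch below clusters simple motion features with a Gaussian mixture; this stands in for, and is not equivalent to, the Collective Interaction Filtering formulation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def group_trajectories(trajectories, max_groups=5):
    """trajectories: list of (T, 2) arrays of image-plane positions, one per person."""
    feats = []
    for traj in trajectories:
        vel = np.diff(traj, axis=0)
        feats.append(np.concatenate([traj.mean(axis=0),                      # where the person is
                                     vel.mean(axis=0),                       # mean motion direction
                                     [np.linalg.norm(vel, axis=1).mean()]])) # mean speed
    feats = np.asarray(feats)
    # EM (via a Gaussian mixture) assigns each trajectory to a latent group; pick k by BIC.
    best, best_bic = None, np.inf
    for k in range(1, min(max_groups, len(feats)) + 1):
        gmm = GaussianMixture(n_components=k, covariance_type="diag").fit(feats)
        if gmm.bic(feats) < best_bic:
            best, best_bic = gmm, gmm.bic(feats)
    return best.predict(feats)                         # group label per trajectory
```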

Voxel-wise UV parameterization and view-dependent texture synthesis for immersive rendering of truncated signed distance field scene model

  • Kim, Soowoong;Kang, Jungwon
    • ETRI Journal / Vol. 44, No. 1 / pp.51-61 / 2022
  • In this paper, we introduce a novel voxel-wise UV parameterization and a view-dependent texture synthesis method for immersive rendering of a truncated signed distance field (TSDF) scene model. The proposed UV parameterization assigns a precomputed UV map to each voxel using a UV-map lookup table, thereby enabling efficient, high-quality texture mapping without a complex process. Leveraging this convenient UV parameterization, our view-dependent texture synthesis method extracts a set of local texture maps for each voxel from the multiview color images and separates them into a single view-independent diffuse map and a set of weight coefficients over an orthogonal specular map basis. Furthermore, the view-dependent specular maps for an arbitrary view are estimated by combining the specular weights of each source view according to the locations of the arbitrary and source viewpoints, generating view-dependent textures for arbitrary views. The experimental results demonstrate that the proposed method effectively synthesizes textures for an arbitrary view, thereby enabling the visualization of view-dependent effects such as specularity and mirror reflection.
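
A heavily simplified sketch of the view-dependent synthesis idea is shown below, a view-independent diffuse map plus specular-basis coefficients blended by viewpoint proximity; the array shapes and the blending weight are assumptions, not the paper's formulation.

```python
import numpy as np

def synthesize_texture(diffuse, spec_basis, spec_weights, src_dirs, query_dir):
    """
    diffuse:      (H, W, 3) view-independent diffuse map for one voxel's UV patch
    spec_basis:   (K, H, W, 3) orthogonal specular map basis
    spec_weights: (V, K) per-source-view coefficients over the basis
    src_dirs:     (V, 3) unit viewing directions of the source views
    query_dir:    (3,) unit viewing direction of the arbitrary view
    """
    # Blend source-view coefficients by angular closeness to the query view (assumed falloff).
    sim = np.clip(src_dirs @ query_dir, 0.0, None) ** 8
    w = sim / (sim.sum() + 1e-8)
    coeffs = w @ spec_weights                                   # (K,) blended specular coefficients
    specular = np.tensordot(coeffs, spec_basis, axes=(0, 0))    # (H, W, 3) specular map
    return np.clip(diffuse + specular, 0.0, 1.0)
```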

텍스트의 은유적 구조 (The Metaphorical Structure of the Text)

  • 박찬부
    • 영어영문학 / Vol. 57, No. 5 / pp.871-887 / 2011
  • In Lacanian terms, the real, which is a non-representative Ding an sich, is indirectly approachable only in and through language. This 'speaking of the real' is made possible through a restoration of the missing link between one signifier, S1, and another signifier, S2, as manifested in the Lacanian formula of metaphor. In Freudian terms of textual metaphor, the missing link is restored by substituting a new edition for an old edition of one's historical text of life. This is what this essay means by the metaphorical/dualistic structure of the analytic/literary text, and it is a way of talking about an intertextuality between literature and psychoanalysis in the sense of the 'text as psyche' and the 'psyche as text.' Applying the 'signifying substitution' to the Oedipus complex, the Oedipal child can find a meaning, "my erotic indulgence with my Mom is wrong," by metaphorically substituting S2, the Name of the Father, for S1, the Desire of the Mother. This meaning leads to the constitution of the human subject and the formation of the incest taboo, one of the most significant features distinguishing human beings from animals. We can see a similar metaphorical structure of S1-S2 at work in literary texts such as Macbeth and "Dover Beach": the stage of life is substituted for the primal scene in the former, and the plain of Thucydides for a bed scene in the latter.