• Title/Abstract/Keyword: Complex scene

Search results: 136 (processing time 0.03 s)

Text Region Extraction Using Pattern Histogram of Character-Edge Map in Natural Images

  • 박종천;황동국;이우람;전병민
    • 한국산학기술학회논문지
    • /
    • Vol. 7, No. 6
    • /
    • pp.1167-1174
    • /
    • 2006
  • Text region extraction from natural images is useful in many applications, such as license plate recognition. This paper therefore proposes a method for extracting text regions using the pattern histogram of character-edge maps. Sixteen types of edge map are generated and combined into eight types of character-edge map that capture character features. Text candidate regions are extracted using these character-edge map features, and the candidates are verified using the pattern histogram of the character-edge maps together with the structural features of text regions. Experimental results show that the proposed method effectively extracts text regions from natural images with complex backgrounds, various fonts, and various text colors.
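The verification step can be pictured as a histogram test over pattern labels. Below is a minimal sketch, not the authors' code: it assumes eight character-edge pattern labels with 0 meaning "no character edge", and the diversity and coverage thresholds are illustrative.

```python
import numpy as np

def edge_pattern_histogram(edge_map, num_patterns=8):
    """Normalized histogram of character-edge pattern labels in a region."""
    hist = np.bincount(edge_map.ravel(), minlength=num_patterns)
    return hist / max(hist.sum(), 1)

def is_text_candidate(hist, min_diversity=3, min_coverage=0.2):
    """A region passes if enough distinct character-edge patterns occur
    (diversity) and they cover enough pixels (coverage). Label 0 is
    treated as 'no character edge'; both thresholds are illustrative."""
    nonzero = np.count_nonzero(hist[1:])
    coverage = hist[1:].sum()
    return nonzero >= min_diversity and coverage >= min_coverage

# toy candidate region labeled with pattern ids 0..7
region = np.array([[0, 1, 2, 0],
                   [3, 4, 0, 1],
                   [0, 2, 3, 0],
                   [1, 0, 0, 4]])
h = edge_pattern_histogram(region)
ok = is_text_candidate(h)
```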


A Study on the Scythian Buckle

  • Kim, Moon-Ja
    • 패션비즈니스
    • /
    • Vol. 10, No. 6
    • /
    • pp.38-51
    • /
    • 2006
  • In Scythian art, the multitude of animal representations illustrates this nomadic people's preoccupation with the animals in their environment. Usually only wild animals are represented. As for the purpose and meaning of the animal motifs used in Scythian ornaments, in some cases the work was intended to be purely ornamental, while in many cases the motifs had symbolic meaning (such as the aggressor's successful dominance over the victim portrayed in attack scenes). Following the earlier Scythian migrations, Sarmatian animal-style art is distinguished by complex compositions in which stylized animals are depicted twisted or turned back upon themselves, or in combat with other animals. Without copying nature, the artists accurately conveyed the essence of every beast depicted. The Scythians bound their upper garments with leather belts that ended in hooks of various shapes. Based on antique records and tomb relics, Scythian buckles can be divided into six groups: animal-shaped, animal's-head-shaped, animal-fight-shaped, rectangle-shaped, rectangle-openwork-shaped, and genre-scene-shaped buckles. In Korea, antique records and tomb relics show horse-shaped and tiger-shaped buckles that were influenced by the Scythian style.

2.5D human pose estimation for shadow puppet animation

  • Liu, Shiguang;Hua, Guoguang;Li, Yang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 13, No. 4
    • /
    • pp.2042-2059
    • /
    • 2019
  • Digital shadow puppetry has traditionally relied on expensive motion capture equipment and complex design. In this paper, a low-cost driving technique is presented that captures human pose estimation data with a simple camera in real scenarios and uses the data to drive a virtual Chinese shadow play in a 2.5D scene. We propose a dedicated method, called 2.5D human pose estimation, for extracting human pose data to drive the virtual shadow play. Firstly, we use a 3D human pose estimation method to obtain the initial data. In the subsequent transformation, we treat the depth feature as an implicit feature and map the body joints into a constrained range; we call the resulting pose data 2.5D pose data. However, the 2.5D pose data cannot directly control the shadow puppet well, owing to the differences in motion pattern and compositional structure between real poses and shadow puppets. To this end, the 2.5D pose data are transformed in an implicit pose mapping space based on a self-network, and the final 2.5D pose expression data are produced for animating the shadow puppets. Experimental results demonstrate the effectiveness of the new method.
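The depth-squashing step can be sketched as a min-max mapping of the z coordinate into a fixed constraint range, keeping x and y untouched. The range and the normalization scheme here are assumptions for illustration, not the paper's exact mapping.

```python
import numpy as np

def to_2_5d(joints_3d, depth_range=(-1.0, 1.0)):
    """Map 3D joints (N, 3) to '2.5D': keep x, y as-is and min-max
    normalize the depth z into a bounded constraint range (assumed scheme)."""
    xy = joints_3d[:, :2]
    z = joints_3d[:, 2]
    lo, hi = depth_range
    z_min, z_max = z.min(), z.max()
    if z_max > z_min:
        z_n = lo + (z - z_min) * (hi - lo) / (z_max - z_min)
    else:
        z_n = np.zeros_like(z)   # degenerate pose: all joints at one depth
    return np.column_stack([xy, z_n])

# toy 3D pose: three joints at depths 3, 5, and 4
joints = np.array([[0.0, 1.0, 3.0],
                   [0.5, 0.8, 5.0],
                   [0.2, 0.1, 4.0]])
p = to_2_5d(joints)
```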

A Novel Text Sample Selection Model for Scene Text Detection via Bootstrap Learning

  • Kong, Jun;Sun, Jinhua;Jiang, Min;Hou, Jian
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 13, No. 2
    • /
    • pp.771-789
    • /
    • 2019
  • Text detection has been a popular research topic in the field of computer vision. It is difficult for prevalent text detection algorithms to avoid dependence on datasets. To overcome this problem, we propose a novel unsupervised text detection algorithm inspired by bootstrap learning. Firstly, a text candidate in a novel superpixel form is proposed to improve the text recall rate through image segmentation. Secondly, we propose a unique text sample selection model (TSSM) to extract text samples from the current image and eliminate database dependency. Specifically, to improve the precision of the samples, we combine maximally stable extremal regions (MSERs) and the saliency map to generate sample reference maps with a double-threshold scheme. Finally, a multiple kernel boosting method is developed to generate a strong text classifier by combining multiple single-kernel SVMs based on the samples selected by the TSSM. Experimental results demonstrate that our text detection method is robust to complex backgrounds and multilingual text and shows stable performance across different standard datasets.
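The double-threshold sample selection can be sketched as two masks built from the MSER and saliency maps: confidently text-like pixels become positive samples, confidently background pixels become negative samples, and the rest stay unlabeled. The threshold values below are illustrative, not the paper's.

```python
import numpy as np

def sample_reference_maps(mser_mask, saliency, t_high=0.7, t_low=0.3):
    """Double-threshold scheme (threshold values are illustrative):
    pixels supported by an MSER and highly salient become positive text
    samples; pixels with neither MSER support nor saliency become
    negative samples; everything else stays unlabeled."""
    positive = mser_mask & (saliency >= t_high)
    negative = (~mser_mask) & (saliency <= t_low)
    return positive, negative

# toy 2x2 maps
mser = np.array([[True, False],
                 [True, True]])
sal = np.array([[0.9, 0.2],
                [0.5, 0.8]])
pos, neg = sample_reference_maps(mser, sal)
```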

A Bit Allocation Method Based on Proportional-Integral-Derivative Algorithm for 3DTV

  • Yan, Tao;Ra, In-Ho;Liu, Deyang;Zhang, Qian
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 15, No. 5
    • /
    • pp.1728-1743
    • /
    • 2021
  • Three-dimensional (3D) video scenes are complex and difficult to control, especially when scene switching occurs. In this paper, we propose two algorithms, based on an incremental proportional-integral-derivative (PID) algorithm and a similarity analysis between views, to improve bit allocation for multi-view high efficiency video coding (MV-HEVC). Firstly, an incremental PID algorithm is introduced to control the buffer "liquid level" and reduce the negative impact of its fluctuation on the target bit allocation at the view and frame layers. Then, using the image similarity between views, a bit allocation model for the main and non-main viewpoints of multi-view video is established, and a bit allocation method based on hierarchical B frames is proposed. Experimental results verify that the algorithm ensures a smooth transition of image quality while increasing coding efficiency; the PSNR increases by 0.03 to 0.82 dB without a significant increase in computational complexity.
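The buffer-control idea can be illustrated with a plain incremental PID, whose output is the change in the control signal, du_k = Kp(e_k - e_{k-1}) + Ki*e_k + Kd(e_k - 2e_{k-1} + e_{k-2}). The gains and buffer numbers below are illustrative only, not values from the paper.

```python
class IncrementalPID:
    """Incremental PID controller: step() returns the *change* in the
    control signal,
    du_k = Kp*(e_k - e_{k-1}) + Ki*e_k + Kd*(e_k - 2*e_{k-1} + e_{k-2}).
    Gains here are illustrative."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = 0.0   # e_{k-1}
        self.e2 = 0.0   # e_{k-2}

    def step(self, error):
        delta = (self.kp * (error - self.e1)
                 + self.ki * error
                 + self.kd * (error - 2 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, error
        return delta

# steer a buffer fullness level toward its target: a positive error
# (buffer too full) yields a positive increment that drains bits
pid = IncrementalPID(kp=0.5, ki=0.1, kd=0.05)
level, target = 0.8, 0.5
for _ in range(50):
    level -= pid.step(level - target)
```

Because only the increment is emitted, a single large error cannot cause integral wind-up in the way a positional PID can, which suits smoothing the "liquid level" of a coded-picture buffer.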

Lane Detection Based on Inverse Perspective Transformation and Machine Learning in Lightweight Embedded System

  • 홍성훈;박대진
    • 대한임베디드공학회논문지
    • /
    • Vol. 17, No. 1
    • /
    • pp.41-49
    • /
    • 2022
  • This paper proposes a novel lane detection algorithm based on inverse perspective transformation and machine learning for a lightweight embedded system. The inverse perspective transformation produces a bird's-eye view of the scene from a perspective image, removing perspective effects. The method requires only the internal and external parameters of the camera, without an 8-degree-of-freedom (DoF) homography matrix mapping points in one image to corresponding points in another. To improve the accuracy and speed of lane detection in complex road environments, machine learning is applied only to regions that pass a first classifier. The first classifier operates in the bird's-eye-view image to determine candidate lane regions quickly; a lane region that passes it is then detected more accurately through machine learning. The system has been tested on driving videos of a vehicle in an embedded system. The experimental results show that the proposed method works well in various road environments and meets real-time requirements: lane detection is about 3.85 times faster than edge-based lane detection, with better detection accuracy.
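The back-projection that replaces the 8-DoF homography can be sketched directly from the camera model: a pixel ray is rotated into the world frame and scaled until it hits the ground plane z = 0. The intrinsics and camera pose below are illustrative, not taken from the paper.

```python
import numpy as np

def ipm_point(u, v, K, R, t):
    """Back-project pixel (u, v) onto the ground plane z = 0, using only
    the intrinsic matrix K and extrinsics (R, t) with x_cam = R x_world + t."""
    ray = R.T @ np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray, world frame
    center = -R.T @ t                                     # camera position, world frame
    s = -center[2] / ray[2]                               # scale at which ray meets z = 0
    return center + s * ray

# illustrative camera: focal length 100 px, principal point (50, 50),
# looking straight down from 2 m above the ground
K = np.array([[100.0, 0.0, 50.0],
              [0.0, 100.0, 50.0],
              [0.0,   0.0,  1.0]])
R = np.array([[1.0,  0.0,  0.0],
              [0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0]])
t = np.array([0.0, 0.0, 2.0])
ground = ipm_point(150, 50, K, R, t)   # 100 px right of the principal point
```

Mapping every pixel this way (or precomputing the equivalent lookup table) yields the bird's-eye view used by the first classifier.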

Multimodal Image Fusion with Human Pose for Illumination-Robust Detection of Human Abnormal Behaviors

  • ;공성곤
    • 한국정보처리학회:학술대회논문집
    • /
    • 2023 Fall Conference of the Korea Information Processing Society
    • /
    • pp.637-640
    • /
    • 2023
  • This paper presents multimodal image fusion with human pose for detecting abnormal human behaviors in low-illumination conditions. Detecting human behaviors in low illumination is challenging because of the limited visibility of the objects of interest in the scene. Multimodal image fusion combines visual information in the visible spectrum with thermal radiation information in the long-wave infrared spectrum. We propose an abnormal event detection scheme based on the multimodal fused image and on human poses, using keypoints to characterize the action of the human body. Our method assumes that human behaviors are well correlated with body keypoints such as the shoulders, elbows, wrists, and hips. In detail, we extract the human keypoint coordinates from human targets in multimodal fused videos. The coordinate values are used as inputs to train a multilayer perceptron network that classifies human behaviors as normal or abnormal. Our experiments demonstrate significant results on a multimodal imaging dataset; the proposed model can capture the complex distribution patterns of both normal and abnormal behaviors.
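A rough sketch of the keypoints-to-classifier pipeline follows. The keypoint set, the hip-centered normalization, and the fixed network weights are all assumptions for illustration, not the authors' configuration; in practice the weights would be learned from labeled clips.

```python
import numpy as np

def pose_features(keypoints_xy):
    """Flatten (8, 2) keypoint coordinates (shoulders, elbows, wrists,
    hips) into a 16-d vector, centered on the hip midpoint so the
    features are translation-invariant (assumed normalization)."""
    kp = np.asarray(keypoints_xy, dtype=float)
    hip_mid = (kp[6] + kp[7]) / 2.0          # left hip, right hip
    return (kp - hip_mid).ravel()

def mlp_forward(x, W1, b1, W2, b2):
    """One hidden ReLU layer and a sigmoid output giving p(abnormal)."""
    h = np.maximum(0.0, W1 @ x + b1)
    z = W2 @ h + b2
    return 1.0 / (1.0 + np.exp(-z))

# toy pose: shoulders, elbows, wrists, hips as (x, y)
pose = np.array([[0, 4], [2, 4], [-1, 3], [3, 3],
                 [-1, 2], [3, 2], [0, 0], [2, 0]], dtype=float)
x = pose_features(pose)

# illustrative fixed weights standing in for a trained network
W1 = np.full((4, 16), 0.05); b1 = np.zeros(4)
W2 = np.full((1, 4), 0.1);   b2 = np.zeros(1)
p = mlp_forward(x, W1, b1, W2, b2)    # p[0] in (0, 1)
```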

Perceptions about the Professional Ethics of EMT

  • 윤형완;이재민
    • 한국화재소방학회논문지
    • /
    • Vol. 28, No. 1
    • /
    • pp.71-78
    • /
    • 2014
  • Emergency medical technicians (EMTs) face complex ethical problems in emergency care, both at accident scenes outside the hospital and in the emergency room. We surveyed 500 EMTs working at fire stations and general hospitals about their professional ethics and attitudes, about discussion of and follow-up on transported patients, and about ethics concerning the end of life. Professional ethical awareness and the attitudes expected of EMTs scored highly. Whether respondents discussed outcomes or followed up on the prognosis of patients they had treated or transported in the field differed significantly by qualification. For inappropriate emergency care or transport, more than 90% reported discussing the case and devising countermeasures, but some wished simply to pass over past work, and some reported to superiors only out of liability concerns, which raises moral and ethical problems. EMTs, who cannot pronounce death, experience severe ethical conflict over end-of-life DNAR decisions, and weak institutional support leads to unnecessary treatment. For ethical problems at accident scenes, especially DNAR, the provision of education and guidelines varied greatly by region and affiliation relative to the need. Professional ethics education and guidelines for EMTs are therefore essential, and their use in emergency settings would reduce many moral errors.

A Method for Reconstructing Original Images for Caption Areas in Videos Using a Block Matching Algorithm

  • 전병태;이재연;배영래
    • 방송공학회논문지
    • /
    • Vol. 5, No. 1
    • /
    • pp.113-122
    • /
    • 2000
  • It is often necessary to remove caption regions from previously broadcast video and restore the original images. When only a small amount of imagery must be restored, manual restoration is possible, but when there are many images to restore, as in video, manual restoration is impractical, so an automatic method for restoring caption regions to the original images is needed. Previous research on image restoration has focused mainly on sharpening blurred images with frequency-domain filters or on video coding for visual communication. This paper proposes a method for restoring caption regions using a block matching algorithm. As prior information for caption restoration, caption region information and scene change information are extracted. From the extracted caption information we obtain the start frame, the end frame, and the constituent components of the caption characters. The caption information (start and end frames) and the scene change information determine the direction of restoration and its end point. Following the restoration direction, block matching is performed on the character components in each frame to restore the original image. Experimental results show that restoration works well for images with relatively little motion, and that images with complex backgrounds can also be restored.
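The core of the block matching step can be sketched as a sum-of-absolute-differences (SAD) search: around a caption-character component, find the best-matching block in a neighboring frame and use it to fill in the occluded pixels. The setup below is purely illustrative.

```python
import numpy as np

def best_match(ref, block, top_left, search=2):
    """Search within +/-`search` pixels of `top_left` in the reference
    frame `ref` for the position whose block minimizes the SAD against
    `block` (an illustrative search range)."""
    h, w = block.shape
    y0, x0 = top_left
    best, best_pos = float("inf"), top_left
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                continue
            sad = np.abs(ref[y:y + h, x:x + w] - block).sum()
            if sad < best:
                best, best_pos = sad, (y, x)
    return best_pos

# toy setup: the background patch hidden by a caption in the current
# frame is visible at (2, 3) in a neighboring reference frame
ref = np.zeros((8, 8))
ref[2:4, 3:5] = 5.0
block = np.full((2, 2), 5.0)     # known pixels around a character component
pos = best_match(ref, block, (1, 2))
```

Copying `ref[pos[0]:pos[0]+2, pos[1]:pos[1]+2]` over the caption block would then restore the original pixels, frame by frame along the chosen restoration direction.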


D4AR - A 4-DIMENSIONAL AUGMENTED REALITY - MODEL FOR AUTOMATION AND VISUALIZATION OF CONSTRUCTION PROGRESS MONITORING

  • Mani Golparvar-Fard;Feniosky Pena-Mora
    • 국제학술발표논문집
    • /
    • The 3rd International Conference on Construction Engineering and Project Management
    • /
    • pp.30-31
    • /
    • 2009
  • Early detection of schedule delays in field construction activities is vital to project management. It provides the opportunity to initiate remedial actions and increases the chance of controlling such overruns or minimizing their impacts. This requires project managers to design, implement, and maintain a systematic approach to progress monitoring that promptly identifies, processes, and communicates discrepancies between actual and as-planned performance as early as possible. Despite its importance, systematic implementation of progress monitoring is challenging: (1) current progress monitoring is time-consuming, as it needs extensive as-planned and as-built data collection; (2) the excessive amount of work required may cause human error, reduce the quality of manually collected data, and, since only an approximate visual inspection is usually performed, make the collected data subjective; (3) existing methods of progress monitoring are non-systematic and may create a time lag between when progress is reported and when it is actually accomplished; (4) progress reports are visually complex and do not reflect the spatial aspects of construction; and (5) current reporting methods increase the time required to describe and explain progress in coordination meetings, which in turn can delay decision making. In summary, with current methods it may not be easy to understand the progress situation clearly and quickly. To overcome such inefficiencies, this research explores the application of unsorted daily progress photograph logs, available on any construction site, together with IFC-based 4D models, for progress monitoring. Our approach is based on computing, from the images themselves, the photographers' locations and orientations, along with a sparse 3D geometric representation of the as-built scene, and superimposing the reconstructed scene over the as-planned 4D model.
Within such an environment, progress photographs are registered in the virtual as-planned environment, allowing a large unstructured collection of daily construction images to be interactively explored. In addition, sparse reconstructed scenes superimposed over 4D models allow site images to be geo-registered with the as-planned components; consequently, a location-based image processing technique can be implemented and progress data extracted automatically. The results of the progress comparison between as-planned and as-built performance can then be visualized in the D4AR (4D Augmented Reality) environment using a traffic-light metaphor. In such an environment, project participants would be able to: 1) use the 4D as-planned model as a baseline for progress monitoring, compare it to daily construction photographs, and study workspace logistics; 2) interactively and remotely explore registered construction photographs in a 3D environment; 3) analyze registered images and quantify as-built progress; 4) measure discrepancies between as-planned and as-built performance; and 5) visually represent progress discrepancies by superimposing 4D as-planned models over progress photographs, make control decisions, and effectively communicate them to project participants. We present preliminary results on two ongoing construction projects and discuss the implementation, perceived benefits, and potential future enhancement of this new technology in construction, across automatic data collection, processing, and communication.
