• Title/Abstract/Keyword: 3-D feature extraction

Robust Detection of Body Areas Using an Adaboost Algorithm (에이다부스트 알고리즘을 이용한 인체 영역의 강인한 검출)

  • Jang, Seok-Woo;Byun, Siwoo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.17 no.11
    • /
    • pp.403-409
    • /
    • 2016
  • Recently, harmful content (such as nude images and photos) has been widely distributed. Therefore, there have been various studies to detect and filter out such harmful image content. In this paper, we propose a new method using Haar-like features and an AdaBoost algorithm for robustly extracting navel areas in a color image. The suggested algorithm first detects the human nipples through color information and obtains candidate navel areas from the positional information of the extracted nipple areas. The method then selects real navel regions by filtering with Haar-like features and an AdaBoost algorithm. Experimental results show that the suggested algorithm detects navel areas in color images 1.6 percent better than an existing method. We expect that the suggested navel detection algorithm will be useful in many application areas related to 2D or 3D harmful content detection and filtering.
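As a rough illustration of the filtering stage described in this abstract, the sketch below computes simple Haar-like features from an integral image and feeds them to a boosted classifier. The patch size, feature layout, and training data are placeholder assumptions, not the authors' implementation.

```python
# Minimal sketch: Haar-like features over an integral image + AdaBoost scoring
# of candidate navel patches. Feature layout and training data are hypothetical.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def integral_image(gray):
    """Summed-area table so any rectangle sum costs four lookups."""
    return gray.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle (x, y, w, h) from an integral image."""
    a = ii[y + h - 1, x + w - 1]
    b = ii[y - 1, x + w - 1] if y > 0 else 0
    c = ii[y + h - 1, x - 1] if x > 0 else 0
    d = ii[y - 1, x - 1] if x > 0 and y > 0 else 0
    return a - b - c + d

def haar_two_rect(ii, x, y, w, h):
    """One simple two-rectangle (left vs. right) Haar-like feature."""
    left = rect_sum(ii, x, y, w // 2, h)
    right = rect_sum(ii, x + w // 2, y, w // 2, h)
    return left - right

def features(patch):
    """Two hand-picked Haar-like responses for an assumed 24x24 patch."""
    ii = integral_image(patch.astype(np.float64))
    return [haar_two_rect(ii, 0, 0, 24, 24), haar_two_rect(ii, 6, 6, 12, 12)]

# Placeholder candidate patches (e.g., cropped around color-detected nipples)
# and placeholder labels, only to show the training call.
X_train = np.random.rand(100, 24, 24)
y_train = np.random.randint(0, 2, 100)
clf = AdaBoostClassifier(n_estimators=50).fit(
    [features(p) for p in X_train], y_train)
```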

Recognition method using stereo images-based 3D information for improvement of face recognition (얼굴인식의 향상을 위한 스테레오 영상기반의 3차원 정보를 이용한 인식)

  • Park Chang-Han;Paik Joon-Ki
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.43 no.3 s.309
    • /
    • pp.30-38
    • /
    • 2006
  • In this paper, we improve the recognition rate, which drops with distance, by using distance and depth information obtained from stereo face images. A monocular face image lowers the recognition rate because of uncertainty about an object's distance, size, motion, rotation, and depth, and recognition also suffers when changes in rotation, illumination, and pose are not accounted for. We aim to solve these problems. The proposed method consists of an eye detection algorithm, face pose analysis, and principal component analysis (PCA). We convert the RGB color space to YCbCr for fast face detection in a limited region, create a multi-layered relative intensity map in the face candidate region, and decide whether it is a face based on facial geometry. The depth information of distance, eyes, and mouth can then be acquired from the stereo face images. The proposed method detects faces under scale change, translation, and rotation by using distance and depth, and trains PCA on the detected left face and the estimated direction difference. Simulation results show a face recognition rate of 95.83% for frontal faces at 100 cm and 98.3% under pose change. Therefore, the proposed method can achieve a high recognition rate with appropriate scaling and pose change according to distance.
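To make two of the named ingredients concrete, the sketch below converts an image to YCbCr with a commonly used skin-chrominance range and builds a PCA (eigenface-style) basis with NumPy. The thresholds, input file, and training set are placeholders, not the paper's values.

```python
# Rough sketch under assumed data: (1) YCbCr conversion for fast face-candidate
# search, (2) PCA on vectorized face images for recognition features.
import numpy as np
import cv2

bgr = cv2.imread("face.png")                       # hypothetical input image
ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)     # OpenCV channel order: Y, Cr, Cb
cr, cb = ycrcb[..., 1], ycrcb[..., 2]
# Widely used chrominance range for skin candidates (illustrative, not the paper's).
skin_mask = (cr > 133) & (cr < 173) & (cb > 77) & (cb < 127)

# PCA ("eigenfaces") on vectorized training faces; faces is (n_samples, n_pixels).
faces = np.random.rand(40, 64 * 64)                # placeholder training set
mean = faces.mean(axis=0)
_, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
basis = vt[:20]                                    # top 20 principal components
project = lambda f: basis @ (f - mean)             # feature vector for matching
```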

Accurate Camera Calibration Method for Multiview Stereoscopic Image Acquisition (다중 입체 영상 획득을 위한 정밀 카메라 캘리브레이션 기법)

  • Kim, Jung Hee;Yun, Yeohun;Kim, Junsu;Yun, Kugjin;Cheong, Won-Sik;Kang, Suk-Ju
    • Journal of Broadcast Engineering
    • /
    • v.24 no.6
    • /
    • pp.919-927
    • /
    • 2019
  • In this paper, we propose an accurate camera calibration method for acquiring multiview stereoscopic images. Generally, camera calibration is performed using checkerboard patterns. The checkerboard pattern simplifies the feature point extraction process and exploits the known lattice structure, which results in accurate estimation of the relation between points on the 2-dimensional image and points in 3-dimensional space. Since the estimation accuracy of the camera parameters depends on feature matching, accurate detection of checkerboard corners is crucial. Therefore, we propose a method that performs accurate camera calibration through accurate detection of checkerboard corners. The proposed method detects checkerboard corner candidates using 1-dimensional Gaussian filters, followed by a corner refinement process that removes outliers from the candidates and detects checkerboard corners accurately at the sub-pixel level. To verify the proposed method, we check reprojection errors and camera location estimation results to confirm the estimation accuracy of the camera intrinsic and extrinsic parameters.
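A baseline version of this calibration pipeline can be written with OpenCV's standard checkerboard functions. The sketch below assumes a 9x6 inner-corner pattern and hypothetical image files, uses cv2.cornerSubPix for sub-pixel refinement, and reports the RMS reprojection error; it is not the paper's 1-D Gaussian corner detector.

```python
# Standard OpenCV checkerboard calibration with sub-pixel corner refinement.
import cv2
import numpy as np

pattern = (9, 6)                                   # inner corners (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for fname in ["view0.png", "view1.png"]:           # hypothetical image list
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_pts.append(objp)
    img_pts.append(corners)

# Intrinsics K, distortion, and per-view extrinsics (rvecs, tvecs).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)              # the accuracy check mentioned above
```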

On Motion Planning for Human-Following of Mobile Robot in a Predictable Intelligent Space

  • Jin, Tae-Seok;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.4 no.1
    • /
    • pp.101-110
    • /
    • 2004
  • The robots that will be needed in the near future are human-friendly robots that are able to coexist with humans and support them effectively. To realize this, humans and robots need to be in close proximity to each other as much as possible, and their interactions need to occur naturally. Following a human is one of the desirable human-affinitive behaviors for a robot. A human-following robot requires several techniques: recognition of moving objects, feature extraction and visual tracking, and trajectory generation for following a human stably. In this research, a predictable intelligent space is used to achieve these goals. An intelligent space is a 3-D environment in which many sensors and intelligent devices are distributed, and mobile robots exist in this space as physical agents providing humans with services. A mobile robot is controlled to follow a walking human using distributed intelligent sensors as stably and precisely as possible. The moving object is assumed to be a point object and is projected onto an image plane to form a geometrical constraint equation that provides position data of the object based on the kinematics of the intelligent space. Uncertainties in the position estimation caused by the point-object assumption are compensated using a Kalman filter. To generate the shortest-time trajectory for following the walking human, the linear and angular velocities are estimated and utilized. Computer simulation and experimental results of estimating and following a walking human with the mobile robot are presented.
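The Kalman-filter compensation mentioned in this abstract can be sketched as a constant-velocity filter on the human's planar position. The state layout, sampling period, and noise covariances below are illustrative assumptions, not the values used in the paper.

```python
# Minimal constant-velocity Kalman filter for smoothing the walking human's
# (x, y) position estimated by the distributed sensors.
import numpy as np

dt = 0.1                                           # assumed sampling period [s]
F = np.block([[np.eye(2), dt * np.eye(2)],         # state: [x, y, vx, vy]
              [np.zeros((2, 2)), np.eye(2)]])
H = np.hstack([np.eye(2), np.zeros((2, 2))])       # only position is observed
Q = 0.01 * np.eye(4)                               # process noise (tuned by hand)
R = 0.05 * np.eye(2)                               # measurement noise (tuned by hand)

def kalman_step(x, P, z):
    """One predict/update cycle for measurement z = [zx, zy]."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)                      # initial state and covariance
x, P = kalman_step(x, P, np.array([1.0, 0.5]))     # example measurement
```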

A Real Time Lane Detection Algorithm Using LRF for Autonomous Navigation of a Mobile Robot (LRF 를 이용한 이동로봇의 실시간 차선 인식 및 자율주행)

  • Kim, Hyun Woo;Hawng, Yo-Seup;Kim, Yun-Ki;Lee, Dong-Hyuk;Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.19 no.11
    • /
    • pp.1029-1035
    • /
    • 2013
  • This paper proposes a real time lane detection algorithm using an LRF (Laser Range Finder) for the autonomous navigation of a mobile robot. There are many technologies for vehicle safety, such as airbags, ABS, and EPS. Real time lane detection is a fundamental requirement for an automobile system that uses information from outside the vehicle. Representative lane recognition methods are vision-based and LRF-based systems. A vision-based system recognizes the three-dimensional environment well only under good image-capturing conditions; unexpected factors such as poor illumination, occlusions, and vibrations prevent vision alone from satisfying this fundamental requirement. In this paper, we introduce a three-dimensional lane detection algorithm using an LRF, which is very robust against illumination. For three-dimensional lane detection, the difference in laser reflection between the asphalt and the lane marking, which depends on color and distance, is used to extract feature points. A stable tracking algorithm, tuned empirically, is also introduced. The performance of the proposed lane detection and tracking algorithm has been verified through real experiments.
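A simplified version of this intensity-based extraction might look like the sketch below: one LRF scan is thresholded on reflection intensity and range, the surviving returns become candidate lane points, and a line fit with a residual gate stands in for the tracking step. The scan format, thresholds, and gate are assumptions, not the paper's constants.

```python
# Illustrative LRF lane extraction: intensity/range thresholding + gated line fit.
import numpy as np

def extract_lane_points(ranges, intensities, angles,
                        min_intensity=180.0, max_range=15.0):
    """ranges [m], intensities, angles [rad]: one LRF scan as parallel arrays."""
    mask = (intensities > min_intensity) & (ranges < max_range)
    x = ranges[mask] * np.cos(angles[mask])
    y = ranges[mask] * np.sin(angles[mask])
    return np.stack([x, y], axis=1)                # candidate lane feature points

def fit_lane(points, prev_coeffs=None, gate=0.3):
    """Fit y = a*x + b; if a previous fit exists, reject points far from it."""
    if prev_coeffs is not None:
        a, b = prev_coeffs
        resid = np.abs(points[:, 1] - (a * points[:, 0] + b))
        points = points[resid < gate]
    a, b = np.polyfit(points[:, 0], points[:, 1], 1)
    return a, b
```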

A semi-automated method for integrating textural and material data into as-built BIM using TIS

  • Zabin, Asem;Khalil, Baha;Ali, Tarig;Abdalla, Jamal A.;Elaksher, Ahmed
    • Advances in Computational Design
    • /
    • v.5 no.2
    • /
    • pp.127-146
    • /
    • 2020
  • Building Information Modeling (BIM) is increasingly used throughout a facility's life cycle for various applications, such as design, construction, facility management, and maintenance. For existing buildings, the geometry of as-built BIM is often constructed using dense three-dimensional (3D) point cloud data obtained with laser scanners. Traditionally, as-built BIM systems do not contain the material and textural information of the buildings' elements. This paper presents a semi-automatic method for the generation of material- and texture-rich as-built BIM. The method captures and integrates material and textural information of building elements into as-built BIM using thermal infrared sensing (TIS). The proposed method uses TIS to capture thermal images of the interior walls of an existing building. These images are then processed to extract the interior walls using a segmentation algorithm. The digital numbers in the resulting images are transformed into radiance values that represent the emitted thermal infrared radiation. Machine learning techniques are then applied to build a correlation between the radiance values and the material type in each image, and the radiance values are used to extract textural information from the images. The extracted textural and material information is then robustly integrated into the as-built BIM, providing the data needed for assessing building conditions, including energy efficiency.
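To make the radiometric step concrete, the sketch below converts digital numbers to radiance with a hypothetical linear calibration and trains a generic classifier on simple per-wall radiance statistics; the random forest, the calibration constants, and the labels are stand-ins for the unspecified machine-learning model and data, not the paper's choices.

```python
# Sketch with made-up calibration constants: DN -> radiance, then a classifier
# mapping simple radiance/texture statistics of a segmented wall region to a material.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

GAIN, OFFSET = 0.04, 1.2                 # hypothetical sensor calibration values

def dn_to_radiance(dn):
    """Linear radiometric conversion of thermal-image digital numbers."""
    return GAIN * dn.astype(np.float64) + OFFSET

def wall_descriptor(radiance_patch):
    """Tiny radiance/texture feature vector for one segmented wall region."""
    return [radiance_patch.mean(), radiance_patch.std(),
            np.percentile(radiance_patch, 90)]

# Placeholder training data: wall patches with known material labels.
patches = [np.random.randint(0, 4096, (64, 64)) for _ in range(50)]
labels = np.random.choice(["concrete", "gypsum"], 50)
X = [wall_descriptor(dn_to_radiance(p)) for p in patches]
clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
```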

Implementation of Constructor-Oriented Visualization System for Occluded Construction via Mobile Augmented-Reality (모바일 증강현실을 이용한 작업자 중심의 폐색된 건축물 시각화 시스템 개발)

  • Kim, Tae-Ho;Kim, Kyung-Ho;Han, Yunsang;Lee, Seok-Han;Choi, Jong-Soo
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.2
    • /
    • pp.55-68
    • /
    • 2014
  • Infrastructure is nowadays often constructed underground so that it does not interfere with pedestrian traffic, which makes it difficult to visually confirm the exact location where facilities are buried. This difficulty magnifies the problems that can arise from relying only on the worker's experience or on a blueprint, such as exposure to flooding or collapse. This paper proposes a constructor-oriented visualization system, running on mobile devices, for general construction sites with occluded structures. The proposal consists of three stages. First, the "manhole detection and feature extraction" stage detects the unoccluded manhole that serves as the reference point for the occluded structures and extracts its features. Next, the "feature tracking" stage tracks the features extracted in the previous stage. Lastly, the "occluded construction visualization" stage analyzes and combines the GPS data obtained from the mobile device with 3D objects built in the previous stages. We implemented the method through a parallel analysis of manhole detection, feature extraction, and tracking techniques in an indoor environment, and confirmed its feasibility by augmenting an occluded water pipe in a real environment. The system also offers a practical constructor-oriented environment derived from the augmented 3D views of occluded water pipes.
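The "feature tracking" stage can be approximated with standard OpenCV corner detection and pyramidal Lucas-Kanade optical flow, as in the sketch below; the frame filenames are placeholders, and the manhole detector and GPS/3-D overlay stages are not shown.

```python
# Generic "detect features, then track them" sketch with OpenCV.
import cv2
import numpy as np

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)   # hypothetical frames
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Corner features around the detected reference structure (e.g., the manhole).
pts = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                              qualityLevel=0.01, minDistance=7)

# Pyramidal Lucas-Kanade optical flow to the next frame.
next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts, None)

tracked = next_pts[status.ravel() == 1]    # feature positions in the new frame
```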

Automated Satellite Image Co-Registration using Pre-Qualified Area Matching and Studentized Outlier Detection (사전검수영역기반정합법과 't-분포 과대오차검출법'을 이용한 위성영상의 '자동 영상좌표 상호등록')

  • Kim, Jong Hong;Heo, Joon;Sohn, Hong Gyoo
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.26 no.4D
    • /
    • pp.687-693
    • /
    • 2006
  • Image co-registration is the process of overlaying two images of the same scene, one of which serves as a reference image while the other is geometrically transformed to match it. In order to improve the efficiency and effectiveness of co-registration, the authors proposed a pre-qualified area matching algorithm composed of feature extraction with the Canny operator and area matching with the cross-correlation coefficient. For refining the matching points, outlier detection using studentized residuals is applied, iteratively removing outliers at the three-standard-deviation level. Through the pre-qualification and refining processes, the computation time is significantly reduced and the registration accuracy is enhanced. A prototype of the proposed algorithm was implemented and tested on 3 Landsat images of Korea, showing that: (1) the average RMSE of the approach was 0.435 pixel; (2) the average number of matching points was over 25,573; (3) the average processing time was 4.2 min per image on a regular workstation equipped with a 3 GHz Intel Pentium 4 CPU and 1 GB of RAM. The proposed approach achieved robustness, full automation, and time efficiency.
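The refinement loop described here can be sketched as an iterative affine fit that removes the match with the largest studentized residual until all residuals fall below three. The code below is an illustration of that idea under a simplified residual formula (leverage terms ignored), not the authors' implementation.

```python
# Iterative outlier rejection of matched points via an affine fit and
# approximately studentized residuals, thresholded at 3.
import numpy as np

def refine_matches(src, dst, thresh=3.0, min_pts=6):
    """src, dst: (N, 2) arrays of matched points (reference, target)."""
    keep = np.ones(len(src), dtype=bool)
    while True:
        A = np.hstack([src[keep], np.ones((keep.sum(), 1))])    # rows [x, y, 1]
        params, *_ = np.linalg.lstsq(A, dst[keep], rcond=None)  # affine fit
        resid = np.linalg.norm(dst[keep] - A @ params, axis=1)
        # Approximate studentized residuals (leverage terms ignored).
        student = resid / resid.std(ddof=A.shape[1])
        worst = int(student.argmax())
        if student[worst] <= thresh or keep.sum() <= min_pts:
            return params, keep
        keep[np.flatnonzero(keep)[worst]] = False                # drop outlier
```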

Development of Multimedia Annotation and Retrieval System using MPEG-7 based Semantic Metadata Model (MPEG-7 기반 의미적 메타데이터 모델을 이용한 멀티미디어 주석 및 검색 시스템의 개발)

  • An, Hyoung-Geun;Koh, Jae-Jin
    • The KIPS Transactions:PartD
    • /
    • v.14D no.6
    • /
    • pp.573-584
    • /
    • 2007
  • As multimedia information increases rapidly, various types of multimedia data retrieval are becoming issues of great importance. Efficient multimedia data processing requires semantics-based retrieval techniques that can extract the meaningful content of multimedia data. Existing retrieval methods for multimedia data are annotation-based retrieval, feature-based retrieval, and retrieval based on the integration of annotations and features. These systems demand much effort and time from the annotator and require complicated computations for feature extraction; moreover, the created data only support static searches that do not change, and user-friendly semantic search techniques are not supported. This paper proposes S-MARS (Semantic Metadata-based Multimedia Annotation and Retrieval System), which can represent and retrieve multimedia data efficiently using MPEG-7. The system provides a graphical user interface for annotating, searching, and browsing multimedia data, and is implemented on the basis of a semantic metadata model for representing multimedia information. The semantic metadata about multimedia data is organized according to a multimedia description scheme, defined with XML Schema, that basically complies with the MPEG-7 standard. Consequently, the proposed scheme can be easily implemented on any multimedia platform supporting XML technology; it can be utilized for efficient sharing of semantic metadata between systems, and it contributes to improving retrieval correctness and the user's satisfaction with the embedding-based multimedia retrieval method.
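As a toy illustration of MPEG-7-style annotation, the snippet below writes a heavily simplified description with Python's ElementTree. The element names are abbreviated for readability and are not validated against the MPEG-7 schema that the paper's description scheme actually follows.

```python
# Toy, simplified MPEG-7-style annotation of one video segment (not schema-valid).
import xml.etree.ElementTree as ET

root = ET.Element("Mpeg7")
desc = ET.SubElement(root, "Description")
video = ET.SubElement(ET.SubElement(desc, "MultimediaContent"), "Video")
seg = ET.SubElement(video, "VideoSegment", id="shot_01")
ann = ET.SubElement(seg, "TextAnnotation")
ET.SubElement(ann, "FreeTextAnnotation").text = "goal scene, player A"

ET.ElementTree(root).write("annotation.xml", encoding="utf-8",
                           xml_declaration=True)
```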

Multi-modality Medical Image Registration based on Moment Information and Surface Distance (모멘트 정보와 표면거리 기반 다중 모달리티 의료영상 정합)

  • 최유주;김민정;박지영;윤현주;정명진;홍승봉;김명희
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.31 no.3_4
    • /
    • pp.224-238
    • /
    • 2004
  • Multi-modality image registration is a widely used image processing technique for obtaining composite information from two different kinds of image sources. This study proposes an image registration method based on moment information and surface distance, which improves on previous surface-based registration methods. The proposed method ensures stable registration results with low registration error, regardless of the initial position and orientation of the object. In the preprocessing step, the surface points of the object are extracted, and moment information is computed from these surface points. The moment information is matched prior to the fine registration based on surface distance, so that stable registration results are obtained even when the initial positions and orientations of the objects differ greatly. Moreover, a surface corner sampling algorithm is used to extract representative surface points of the image, overcoming the limits of existing random sampling or systematic sampling methods. The proposed method has been applied to brain MRI (Magnetic Resonance Imaging) and PET (Positron Emission Tomography), and its accuracy and stability were verified through the registration error ratio and visual inspection of the 2D/3D registration result images.
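The coarse moment-matching step can be sketched as aligning the centroids and principal axes (covariance eigenvectors) of the two surface point sets before the fine surface-distance registration. The code below illustrates that idea only; it ignores the sign and ordering ambiguities of the eigenvectors that a full implementation would have to resolve.

```python
# Coarse alignment of two 3-D surface point sets by centroid and principal axes.
import numpy as np

def principal_frame(points):
    """Centroid and principal axes (eigenvectors of the covariance matrix)."""
    c = points.mean(axis=0)
    cov = np.cov((points - c).T)
    _, vecs = np.linalg.eigh(cov)        # columns = axes, ascending eigenvalues
    return c, vecs

def coarse_align(moving, fixed):
    """Map the moving surface's principal frame onto the fixed surface's frame."""
    cm, vm = principal_frame(moving)
    cf, vf = principal_frame(fixed)
    R = vf @ vm.T                        # rotate moving axes onto fixed axes
    return (moving - cm) @ R.T + cf      # rough initial registration
```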