• Title/Summary/Keyword: contour extraction


Automatic Extraction of Ascending Aorta and Ostium in Cardiac CT Angiography Images (심장 CT 혈관 조영 영상에서 대동맥 및 심문 자동 검출)

  • Kim, Hye-Ryun;Kang, Mi-Sun;Kim, Myoung-Hee
    • Journal of the Korea Computer Graphics Society
    • /
    • v.23 no.1
    • /
    • pp.49-55
    • /
    • 2017
  • Computed tomographic angiography (CTA) is widely used in the diagnosis and treatment of coronary artery disease because it not only shows the whole anatomical structure of the cardiovascular system in three dimensions but also provides information on lesions and plaque type. However, because of the large size of the images, manual extraction of the coronary arteries is impractical, and related research aims to extract them automatically and accurately. As the coronary arteries originate from the ascending aorta, the ascending aorta and ostia must be detected to extract the coronary tree accurately. In this paper, we propose an automatic segmentation method for the ostium, the starting structure of the coronary artery, in CTA. First, the region of the ascending aorta is initially detected by using a Hough circle transform based on the relative position and size of the ascending aorta. Second, a volume of interest is defined to reduce the search range based on the initial area. Third, the refined ascending aorta is segmented by using a two-dimensional geodesic active contour. Finally, the two ostia are detected within the region of the refined ascending aorta. To evaluate our method, we measured the Euclidean distance between the results and ground truths annotated manually by medical experts in 20 CTA images. The experimental results showed that the ostia were accurately detected.
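The Hough circle transform used in the first step votes, for each edge point and candidate radius, for every circle centre that could explain that point; the centre with the most votes wins. A minimal pure-Python sketch of that voting scheme on a synthetic circular cross-section (a toy illustration, not the authors' implementation):

```python
import math

def hough_circle(edge_points, radii, shape):
    """Minimal Hough circle voting: each edge point votes for every
    centre that would place it on a circle of radius r; the centre/radius
    pair with the most votes is returned."""
    h, w = shape
    best_votes, best = 0, None
    for r in radii:
        acc = {}
        for (x, y) in edge_points:
            for t in range(0, 360, 10):  # coarse angular sampling
                cx = round(x - r * math.cos(math.radians(t)))
                cy = round(y - r * math.sin(math.radians(t)))
                if 0 <= cx < w and 0 <= cy < h:
                    acc[(cx, cy)] = acc.get((cx, cy), 0) + 1
        (cx, cy), votes = max(acc.items(), key=lambda kv: kv[1])
        if votes > best_votes:
            best_votes, best = votes, (cx, cy, r)
    return best

# Synthetic "aorta" cross-section: edge points on a circle of radius 5
# centred at (20, 20) in a 40x40 slice
pts = [(20 + round(5 * math.cos(math.radians(a))),
        20 + round(5 * math.sin(math.radians(a)))) for a in range(0, 360, 5)]
cx, cy, r = hough_circle(pts, radii=[4, 5, 6], shape=(40, 40))
```

In practice an edge detector supplies the points and the radius range comes from the expected aortic diameter at the given slice resolution.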

Face and Facial Element Extraction in CCD-Camera Images by using Snake Algorithm (스네이크 알고리즘에 의한 CCD 카메라 영상에서의 얼굴 및 얼굴 요소 추출)

  • 판데홍;김영원;김정연;전병환
    • Proceedings of the Korea Intelligent Information Systems Society Conference
    • /
    • 2002.11a
    • /
    • pp.535-542
    • /
    • 2002
  • With the recent rapid growth of the IT industry, natural interface techniques for avatar control in video conferencing, games, chatting, and similar applications are in demand. This paper proposes a method that uses active contour models (snakes) to extract the contours, or locate the positions, of the face and of facial elements such as the eyes, mouth, eyebrows, and nose in color CCD-camera images with complex backgrounds. In general, the snake algorithm is sensitive to noise, and its extraction performance depends heavily on how the initial model is set, so it has mainly been used to extract frontal faces in images with simple backgrounds. To address these drawbacks, we first find the minimum enclosing rectangle (MER) of the face using color information from the I component of the YIQ color model together with difference-image information, and then set the MERs of the eyes, mouth, eyebrows, and nose within this face region using geometric position information and edge information. Next, within each element's MER, we apply a snake algorithm that uses internal energy based on the first and second derivatives and image energy based on edges. To remove the complex noise around the face in the edge image, we use as a filter a binary image obtained by applying morphological dilation to the color-information image and to the difference image, AND-combining the two results, and applying dilation once more to the combined image. In experiments on a total of 140 near-frontal images with both eyes visible, 20 from each of 7 subjects, the MER error rates were 6.2%, 11.2%, and 9.4% for the face, eyes, and mouth, respectively. In addition, with 44 initial snake control points for the face, 16 for the eyes, and 24 for the mouth, running the snake algorithm on the images where MER extraction succeeded yielded error rates of 2.2%, 2.6%, and 2.5% for the extracted regions, respectively.
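The noise-filter construction described above (dilate the color-cue image and the difference image, AND them, then dilate the combination again) can be sketched in plain Python. The toy binary masks and the 3x3 structuring element below are illustrative assumptions, not the paper's data:

```python
def dilate(img, k=1):
    """Binary dilation with a (2k+1) x (2k+1) square structuring element."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if any(img[ny][nx]
                   for ny in range(max(0, y - k), min(h, y + k + 1))
                   for nx in range(max(0, x - k), min(w, x + k + 1))):
                out[y][x] = 1
    return out

def noise_filter(color_mask, diff_mask):
    """Dilate each cue, AND the results, then dilate the combination
    again, mirroring the filter construction in the abstract."""
    a, b = dilate(color_mask), dilate(diff_mask)
    anded = [[a[y][x] & b[y][x] for x in range(len(a[0]))]
             for y in range(len(a))]
    return dilate(anded)

# Toy cue masks: one "skin-color" pixel and one nearby "frame-difference" pixel
color = [[0] * 6 for _ in range(6)]
color[2][2] = 1
diff = [[0] * 6 for _ in range(6)]
diff[2][3] = 1
mask = noise_filter(color, diff)
```

The AND step keeps only regions supported by both cues, so isolated background edges that fire on only one cue are removed before the snake runs.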

  • PDF

Development of Facial Expression Recognition System based on Bayesian Network using FACS and AAM (FACS와 AAM을 이용한 Bayesian Network 기반 얼굴 표정 인식 시스템 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.4
    • /
    • pp.562-567
    • /
    • 2009
  • As a key mechanism of human emotional interaction, facial expression is a powerful tool in HRI (Human-Robot Interaction) as well as HCI (Human-Computer Interaction). By using facial expressions, we can produce reactions corresponding to the emotional state of the user in HCI, and a service agent such as an intelligent robot can infer which services are suitable to supply to the user. In this article, we address the issue of expressive face modeling using an advanced active appearance model for facial emotion recognition. We consider the six universal emotional categories defined by Ekman. In the human face, emotions are most widely expressed through the eyes and mouth. To recognize a person's emotion from a facial image, we need to extract feature points such as Ekman's Action Units (AUs). The Active Appearance Model (AAM) is one of the commonly used methods for facial feature extraction and can be applied to construct AUs. Because the traditional AAM depends on the setting of the initial model parameters, this paper introduces a facial emotion recognition method that combines an advanced AAM with a Bayesian network. First, we obtain the reconstruction parameters of a new gray-scale image by sample-based learning, use them to reconstruct the shape and texture of the new image, and calculate the initial parameters of the AAM from the reconstructed facial model. We then reduce the distance error between the model and the target contour by adjusting the parameters of the model. Finally, after several iterations, we obtain a model matched to the facial feature outline and use it to recognize the facial emotion with a Bayesian network.
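The final inference step (classifying the six Ekman categories from extracted Action Units with a Bayesian network) can be illustrated with a toy naive-Bayes model, the simplest Bayesian-network topology. The Action Units chosen (AU4, AU6, AU12) and every conditional probability below are hypothetical stand-ins, not values from the paper:

```python
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]

# Hypothetical P(AU active | emotion) for three Action Units; these
# numbers are illustrative only.
P_AU = {
    "happiness": {"AU6": 0.9, "AU12": 0.95, "AU4": 0.05},
    "anger":     {"AU6": 0.1, "AU12": 0.05, "AU4": 0.90},
    "disgust":   {"AU6": 0.3, "AU12": 0.10, "AU4": 0.60},
    "fear":      {"AU6": 0.2, "AU12": 0.10, "AU4": 0.70},
    "sadness":   {"AU6": 0.1, "AU12": 0.05, "AU4": 0.60},
    "surprise":  {"AU6": 0.3, "AU12": 0.20, "AU4": 0.10},
}

def classify(observed):
    """Posterior P(e | AUs) proportional to P(e) * product_i P(AU_i | e),
    with a uniform prior over the six Ekman categories."""
    scores = {}
    for e in EMOTIONS:
        p = 1.0 / len(EMOTIONS)
        for au, active in observed.items():
            q = P_AU[e][au]
            p *= q if active else (1.0 - q)
        scores[e] = p
    total = sum(scores.values())
    return {e: p / total for e, p in scores.items()}

# Cheek raiser (AU6) and lip-corner puller (AU12) active, brow lowerer off
post = classify({"AU6": True, "AU12": True, "AU4": False})
best = max(post, key=post.get)
```

A full Bayesian network would add dependencies between AUs; the posterior computation per category follows the same pattern.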

A STUDY ON THE COLOR CHANGES ACCORDING TO THE AMOUNT OF REMAINING TOOTH MATERIAL (치질(齒質) 잔존량(殘存量)에 따른 색조변화(色調變化)에 관(關)한 연구(硏究))

  • Hoh, Sung-Yun;Min, Byung-Soon;Choi, Ho-Young;Park, Sang-Jin
    • Restorative Dentistry and Endodontics
    • /
    • v.12 no.1
    • /
    • pp.131-147
    • /
    • 1986
  • The purpose of this study was to observe the color matching of lining or filling materials according to the amount of remaining tooth material. Twenty-seven freshly extracted human central incisors were used in this experiment. The teeth were stored in saline solution at room temperature after extraction. All teeth were cut from the lingual surface, parallel to the tangent to the height of contour on the labial surface, until the pulp was completely removed. The 27 teeth were then divided into 0.5 mm, 1.0 mm, and 1.5 mm reduction groups according to the thickness removed from the lingual surfaces. The control group consisted of three of the 27 teeth, with the lingual surfaces cut in the same manner as described above. In the experimental groups, 8 kinds of lining and filling materials were applied to the backs of the 24 teeth with 0.5 mm, 1.0 mm, and 1.5 mm of lingual reduction: FUJI IONOMER TYPE II (G-C Co., Japan), LINING CEMENT (G-C Co., Japan), Dycal (Caulk, U.S.A.), CLEARFIL F II (Kuraray Co., Japan), Crown Bridge & Inlay Cement (G-C Co., Japan), Copalite (Harry J. Bosworth Co., U.S.A.), HY-BOND (G-C Co., Japan), and LIV-CENERA (G-C Co., Japan). No lining or filling materials were applied to the three control teeth with the three different cut thicknesses. The absorbances of all 27 specimens were obtained with a reflection spectrophotometer (Cary 17D, Varian Co., U.S.A.). The following conclusions were drawn from the results: 1. The absorbance patterns in both the experimental and control groups gradually decreased with increasing wavelength. 2. The absorbance patterns depended not on the kind of lining or filling material but on the amount of remaining tooth material. 3. In the 0.5 mm reduction group, FUJI IONOMER TYPE II, LINING CEMENT, LIV-CENERA, and Copalite applied to the back of the cut lingual surface showed absorbance patterns similar to the control group. 4. The specimens reduced to 1.0 mm thickness and lined with FUJI IONOMER TYPE II or LINING CEMENT showed absorbance patterns comparable to the control group. 5. HY-BOND applied after 1.5 mm reduction showed an absorbance pattern similar to the control group. 6. When Dycal, CLEARFIL, and Crown Bridge & Inlay Cement were applied to the cut tooth surfaces, there were large differences in absorbance between the control and experimental groups.


Intertidal DEM Generation Using Waterline Extracted from Remotely Sensed Data (원격탐사 자료로부터 해안선 추출에 의한 조간대 DEM 생성)

  • 류주형;조원진;원중선;이인태;전승수
    • Korean Journal of Remote Sensing
    • /
    • v.16 no.3
    • /
    • pp.221-233
    • /
    • 2000
  • Intertidal topography changes continuously due to morphodynamic processes. Detecting and measuring topographic change on a tidal flat is important both for integrated coastal-area management planning and for sedimentologic study. The objective of this study is to generate an intertidal DEM using leveling data and waterlines extracted from optical and microwave remotely sensed data acquired over a relatively short period. A waterline is defined as the border between the exposed tidal flat and the water body; a contour of terrain height on the tidal flat is equivalent to a waterline, so satellite images can be used to generate intertidal DEMs over large areas. Extracting the waterline in a SAR image is difficult, partly because of speckle and partly because of the similarity between the signal returned from the sea surface and that from the exposed tidal flat or land. Waterlines in SAR intensity images and coherence maps can be extracted effectively with the MSP-RoA edge detector. From multiple images obtained over a range of tide elevations, it is possible to build up a set of heighted waterlines within the intertidal zone, from which a gridded DEM can be interpolated. We tested the proposed method over Gomso Bay and succeeded in generating an intertidal DEM with relatively high accuracy.
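The last step, interpolating a gridded DEM from heighted waterline points, can be sketched with inverse-distance weighting. The abstract does not name its interpolator, so IDW, the power parameter, and the toy sample points are all assumptions:

```python
def idw_dem(samples, width, height, power=2.0):
    """Interpolate a gridded DEM from heighted waterline points (x, y, z)
    by inverse-distance weighting: each grid node gets a weighted mean of
    the sample heights, with weight 1 / distance**power."""
    dem = [[0.0] * width for _ in range(height)]
    for gy in range(height):
        for gx in range(width):
            num = den = 0.0
            exact = None
            for (x, y, z) in samples:
                d2 = (gx - x) ** 2 + (gy - y) ** 2
                if d2 == 0:          # grid node coincides with a sample
                    exact = z
                    break
                w = 1.0 / d2 ** (power / 2.0)
                num += w * z
                den += w
            dem[gy][gx] = exact if exact is not None else num / den
    return dem

# Two toy waterlines: tide height 0 m along x = 0, 4 m along x = 4
samples = [(0, 0, 0.0), (0, 4, 0.0), (4, 0, 4.0), (4, 4, 4.0)]
dem = idw_dem(samples, width=5, height=5)
```

Real waterlines contribute many points at the same tide height, so the grid is densely constrained between the lowest and highest observed tide levels.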

Recognition of Resident Registration Card using ART2-based RBF Network and Face Verification (ART2 기반 RBF 네트워크와 얼굴 인증을 이용한 주민등록증 인식)

  • Kim Kwang-Baek;Kim Young-Ju
    • Journal of Intelligence and Information Systems
    • /
    • v.12 no.1
    • /
    • pp.1-15
    • /
    • 2006
  • In Korea, a resident registration card contains various personal information such as the holder's current address, resident registration number, face photograph, and fingerprint. The plastic-type resident card currently in use is easy to forge or alter, and forgery techniques become more sophisticated over time, so it is difficult to judge whether a card is forged by naked-eye examination alone. This paper proposed an automatic recognition method for the resident card which recognizes the resident registration number by using a newly proposed refined ART2-based RBF network and authenticates the face photograph by a template image matching method. The proposed method first extracts the areas containing the resident registration number and the date of issue from a resident card image by applying Sobel masking, median filtering, and horizontal smearing operations to the image in turn. To improve the extraction of individual codes from the extracted areas, the original image is binarized by using a high-frequency passing filter, and CDM masking is applied to the binarized image to sharpen the image information of the individual codes. Lastly, the individual codes, which are the targets of recognition, are extracted by applying a 4-directional contour tracking algorithm to the extracted areas in the binarized image. This paper also proposed a refined ART2-based RBF network to recognize the individual codes, which applies ART2 as the learning structure of the middle layer and dynamically adjusts the learning rate in the training of the middle and output layers by using a fuzzy control method to improve learning performance. In addition, for precise judgement of forgery of a resident card, the proposed method supports face authentication by using a face template database and a template image matching method.
For performance evaluation of the proposed method, this paper made modified versions of original resident card images, such as forgery of the face photograph, addition of noise, variations of contrast, variations of intensity, and image blurring, and applied these images together with the original images in experiments. The experimental results showed that the proposed method is excellent in the recognition of individual codes and in face authentication for the automatic recognition of a resident card.
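The horizontal smearing operation used to merge the digit regions is a run-length smoothing step: runs of background pixels shorter than a threshold are filled when they lie between foreground pixels, so nearby characters fuse into one connected text block. A minimal sketch on a single binary row (the threshold and data are illustrative):

```python
def horizontal_smear(row, max_gap):
    """Horizontal smearing (run-length smoothing): fill interior runs of
    0s shorter than max_gap, merging nearby foreground runs into one
    connected region."""
    out = list(row)
    i, n = 0, len(row)
    while i < n:
        if row[i] == 0:
            j = i
            while j < n and row[j] == 0:
                j += 1
            # fill only interior gaps (a 1 on both sides) below the threshold
            if i > 0 and j < n and (j - i) < max_gap:
                for k in range(i, j):
                    out[k] = 1
            i = j
        else:
            i += 1
    return out

# Two close character strokes merge; the wide gap to the third is preserved
row = [1, 0, 0, 1, 0, 0, 0, 0, 0, 1]
smeared = horizontal_smear(row, max_gap=3)
```

Applied row by row after Sobel masking and median filtering, this turns individual digit strokes into solid horizontal bands whose bounding boxes locate the registration-number area.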
