• Title/Summary/Keyword: Local Alignment (지역 정렬)


Shrink-Wrapped Boundary Face Algorithm for Surface Reconstruction from Unorganized 3D Points (비정렬 3차원 측정점으로부터의 표면 재구성을 위한 경계면 축소포장 알고리즘)

  • 최영규;구본기;진성일
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.31 no.10
    • /
    • pp.593-602
    • /
    • 2004
  • A new surface reconstruction scheme for approximating a surface from a set of unorganized 3D points is proposed. Our method, called the shrink-wrapped boundary face (SWBF) algorithm, produces the final surface by iteratively shrinking an initial mesh generated from the definition of the boundary faces. The proposed method surmounts the genus-0 spherical topology restriction of previous shrink-wrapping-based mesh generation techniques and is applicable to any kind of surface topology. Furthermore, SWBF is much faster than the previous approach, since it requires only a local nearest-point search in the shrinking process. According to experiments, it proves to be very robust and efficient for mesh generation from unorganized point clouds.
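
The shrinking step described above reduces to repeated local nearest-point searches. A minimal Python sketch of one such iteration, assuming a k-d tree over the measured points and a hypothetical `shrink_step` helper, is shown below; it illustrates the idea and is not the authors' implementation.

```python
# Hypothetical sketch of one shrink-wrapping iteration: each mesh vertex is
# pulled toward its nearest sample point, found with a k-d tree so that only
# a local nearest-point search is needed (not the authors' actual code).
import numpy as np
from scipy.spatial import cKDTree

def shrink_step(vertices, points, step=0.5):
    """Move each vertex a fraction of the way toward its nearest sample point."""
    tree = cKDTree(points)                 # build once per point cloud
    _, idx = tree.query(vertices, k=1)     # local nearest-point search
    return vertices + step * (points[idx] - vertices)

# Usage: iterate until the vertex displacement falls below a tolerance.
# vertices = shrink_step(vertices, scanned_points)
```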

Mesh Geometry Compression for Mobile Graphics (모바일 그래픽스를 위한 메쉬 위치정보 압축)

  • Lee, Jong-Seok;Choe, Sung-Yul;Lee, Seung-Yong
    • 한국HCI학회:학술대회논문집
    • /
    • 2008.02a
    • /
    • pp.403-408
    • /
    • 2008
  • This paper presents a mesh geometry compression technique suitable for mobile graphics applications. The proposed technique consists of a mesh partitioning method that minimizes the reconstruction error and a local quantization method that resolves the visual artifacts arising in previous approaches. In previous methods, a visual artifact occurs in which the boundaries between partitioned mesh chunks crack apart; this problem is solved by giving every chunk a local quantization cell of the same size and aligned local coordinate axes. When a mesh is rendered, the compressed geometry is transferred from memory to the graphics hardware and decompressed in real time, which saves resources on mobile devices. Because the decompression is designed to fit into the standard rendering pipeline, each chunk can be restored with a single matrix multiplication. In experiments, geometry represented as 32-bit floating-point values was locally quantized to 8-bit integers, achieving visual quality comparable to 11-bit global quantization at a 70% compression ratio.
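
As a rough illustration of the local quantization idea, the sketch below quantizes a chunk's vertices to 8-bit integers inside a shared axis-aligned cell and builds the single dequantization matrix applied per chunk; the function names and the uniform cell size are assumptions, not the paper's exact scheme.

```python
# Hypothetical sketch of per-chunk local quantization: vertex positions are
# quantized to 8-bit integers inside an axis-aligned local cell, and restored
# with one affine transform (a single matrix multiply) per chunk.
import numpy as np

def quantize_chunk(vertices, cell_origin, cell_size, bits=8):
    """Map positions into [0, 2^bits - 1] within a shared, axis-aligned cell."""
    levels = (1 << bits) - 1
    q = np.round((vertices - cell_origin) / cell_size * levels)
    return q.astype(np.uint8)

def dequant_matrix(cell_origin, cell_size, bits=8):
    """4x4 matrix that restores quantized positions (one multiply per chunk)."""
    scale = cell_size / ((1 << bits) - 1)
    m = np.diag([scale, scale, scale, 1.0])
    m[:3, 3] = cell_origin
    return m
```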


Anterior Cruciate Ligament Segmentation in Knee MRI with Locally-aligned Probabilistic Atlas and Iterative Graph Cuts (무릎 자기공명영상에서 지역적 확률 아틀라스 정렬 및 반복적 그래프 컷을 이용한 전방십자인대 분할)

  • Lee, Han Sang;Hong, Helen
    • Journal of KIISE
    • /
    • v.42 no.10
    • /
    • pp.1222-1230
    • /
    • 2015
  • Segmentation of the anterior cruciate ligament (ACL) in knee MRI remains a challenging task due to its inhomogeneous signal intensity and low contrast with surrounding soft tissues. In this paper, we propose a multi-atlas-based segmentation of the ACL in knee MRI with a locally-aligned probabilistic atlas (PA) in an iterative graph cuts framework. First, a novel PA generation method is proposed with global and local multi-atlas alignment by means of rigid registration. Second, with the generated PA, segmentation of the ACL is performed by maximum a posteriori (MAP) estimation and then by graph cuts. Third, the ACL segmentation is refined by improving the shape prior through mask-based PA generation and iterative graph cuts. In experiments, the proposed method achieved a Dice similarity coefficient of 75.0%, an average surface distance of 1.7 pixels, and a root-mean-squared distance of 2.7 pixels, improving accuracy by 12.8%, 22.7%, and 22.9%, respectively, over graph cuts with patient-specific shape constraints.
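
A minimal sketch of the per-voxel MAP labeling with the probabilistic atlas as the spatial prior is given below; the Gaussian intensity likelihood is an assumption, and the paper further refines this initial labeling with iterative graph cuts.

```python
# Hypothetical sketch of per-voxel MAP labeling with a probabilistic atlas (PA)
# as the spatial prior and a Gaussian intensity likelihood (the likelihood
# model is an assumption, not the paper's exact formulation).
import numpy as np

def map_label(intensity, atlas_prob, mu_fg, sigma_fg, mu_bg, sigma_bg):
    """Return a binary ACL mask by comparing posterior scores for fg vs. bg."""
    def gauss(x, mu, sigma):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    post_fg = gauss(intensity, mu_fg, sigma_fg) * atlas_prob
    post_bg = gauss(intensity, mu_bg, sigma_bg) * (1.0 - atlas_prob)
    return post_fg > post_bg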

A Method for Region-Specific Anomaly Detection on Patch-wise Segmented PA Chest Radiograph (PA 흉부 X-선 영상 패치 분할에 의한 지역 특수성 이상 탐지 방법)

  • Hyun-bin Kim;Jun-Chul Chun
    • Journal of Internet Computing and Services
    • /
    • v.24 no.1
    • /
    • pp.49-59
    • /
    • 2023
  • The pandemic situation represented by COVID-19 has recently highlighted problems caused by an unexpected shortage of medical personnel. In this paper, we present a method for diagnosing the presence or absence of lesional signs in PA chest X-ray images as a computer vision solution to support diagnostic tasks. Visual anomaly detection based on feature modeling can also be applied to X-ray images: by extracting feature vectors from PA chest X-ray images and dividing them into patch units, region-specific abnormality can be detected. As a preliminary experiment, we created a simulation data set containing multiple objects and present the results of comparative experiments in this paper. We also present a method that improves both the efficiency and the performance of the process through hard masking of patch features on aligned images. By aggregating region-specific and global anomaly detection results, the method improves performance by 0.069 AUROC compared to our previous study.
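
The feature-modeling approach mentioned above can be sketched as per-patch anomaly scoring on aligned images; the Gaussian/Mahalanobis model below is an assumed, commonly used choice for feature-based anomaly detection, not the paper's exact pipeline.

```python
# Hypothetical sketch of patch-wise anomaly scoring on aligned chest X-rays:
# a per-patch Gaussian is fit over features from normal images, and a test
# patch is scored by its Mahalanobis distance from that patch's distribution.
import numpy as np

def fit_patch_stats(features):
    """features: (n_images, n_patches, dim) extracted from aligned normal images."""
    mean = features.mean(axis=0)                          # (n_patches, dim)
    cov = [np.cov(features[:, p].T) + 1e-6 * np.eye(features.shape[2])
           for p in range(features.shape[1])]
    return mean, [np.linalg.inv(c) for c in cov]

def patch_scores(test_feat, mean, inv_cov):
    """test_feat: (n_patches, dim); returns one anomaly score per patch."""
    diffs = test_feat - mean
    return np.array([np.sqrt(d @ ic @ d) for d, ic in zip(diffs, inv_cov)])
```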

Incremental Generation of A Decision Tree Using Global Discretization For Large Data (대용량 데이터를 위한 전역적 범주화를 이용한 결정 트리의 순차적 생성)

  • Han, Kyong-Sik;Lee, Soo-Won
    • The KIPS Transactions:PartB
    • /
    • v.12B no.4 s.100
    • /
    • pp.487-498
    • /
    • 2005
  • Recently, attention has focused on decision tree algorithms that can handle large datasets. However, because most of these algorithms process data in batch mode, they have to rebuild the tree from scratch whenever new data is added. A more efficient approach to reducing this rebuilding cost is to build the tree incrementally. Representative incremental tree construction algorithms are BOAT and ITI, and most of them use local discretization to handle numeric attributes. However, because discretization requires sorted numeric data, when processing large datasets a global discretization method that sorts all data only once is more suitable than a local discretization method that sorts at every node. This paper proposes an incremental tree construction method that efficiently rebuilds the tree using global discretization to handle numeric attributes. When new data is added, the categories influenced by that data must be recreated, and the tree structure must then be changed in accordance with the category changes. This paper proposes a method that extracts sample points and performs discretization on these sample points to recreate categories efficiently, and that uses confidence intervals and a tree restructuring method to adjust the tree structure to the category changes. In this study, an experiment using the people database was conducted to compare the proposed method with an existing one that uses local discretization.
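
A minimal sketch of global discretization, assuming an equal-frequency binning criterion, is shown below: the numeric attribute is sorted once over the whole dataset, and the resulting cut points are reused by every node instead of re-sorting per node.

```python
# Hypothetical sketch of global discretization: a numeric attribute is sorted
# once over the whole dataset and split into equal-frequency categories, so no
# per-node sorting is needed while the tree is built or restructured. The
# equal-frequency rule is an assumption, not necessarily the paper's criterion.
import numpy as np

def global_cut_points(values, n_bins=10):
    """Sort all values once and return n_bins - 1 equal-frequency cut points."""
    ordered = np.sort(values)
    idx = (np.arange(1, n_bins) * len(ordered)) // n_bins
    return ordered[idx]

def discretize(values, cuts):
    """Map each numeric value to the index of its global category."""
    return np.searchsorted(cuts, values)
```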

Pre-arrangement Based Task Scheduling Scheme for Reducing MapReduce Job Processing Time (MapReduce 작업처리시간 단축을 위한 선 정렬 기반 태스크 스케줄링 기법)

  • Park, Jung Hyo;Kim, Jun Sang;Kim, Chang Hyeon;Lee, Won Joo;Jeon, Chang Ho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.18 no.11
    • /
    • pp.23-30
    • /
    • 2013
  • In this paper, we propose a pre-arrangement based task scheduling scheme to reduce MapReduce job processing time. If a task and the data it processes are not located on the same node, the data must be transmitted to the node where the task is allocated, and the job processing time increases by the data transmission time. To avoid this, we schedule tasks in two steps. In the first step, tasks are sorted by data locality. In the second step, tasks are exchanged to improve their data locality based on the location information of the data. In the performance evaluation, we compare Hadoop with the proposed method against default Hadoop on a small Hadoop cluster, in terms of job processing time and the number of tasks allocated to nodes that do not hold the data they process. The results show that the proposed method lowers job processing time by around 18%. We also confirm that the number of tasks allocated to nodes without their data decreases by around 25%.
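
The two scheduling steps can be sketched as below, assuming a simple task-to-node assignment map and per-task block locations; this is an illustration of the pre-arrangement idea, not the scheme's actual implementation.

```python
# Hypothetical sketch of the two-step pre-arrangement: (1) sort tasks by data
# locality, (2) swap assignments so that tasks land on nodes that already hold
# their input blocks whenever possible.
def pre_arrange(tasks, assignment, block_locations):
    """tasks: list of task ids; assignment: task -> node;
    block_locations: task -> set of nodes holding that task's input block."""
    # Step 1: tasks whose assigned node already holds their data come first.
    ordered = sorted(tasks,
                     key=lambda t: assignment[t] not in block_locations[t])

    # Step 2: try to fix each remote task by swapping with another task whose
    # node holds the remote task's data (and whose data is held by this node).
    for t in ordered:
        if assignment[t] in block_locations[t]:
            continue
        for u in ordered:
            if (assignment[u] in block_locations[t]
                    and assignment[t] in block_locations[u]):
                assignment[t], assignment[u] = assignment[u], assignment[t]
                break
    return assignment
```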

Mining Sequential Patterns Using Multi-level Linear Location Tree (다단계 선형 배치 트리를 이용한 순차 패턴 추출)

  • 최현화;이동하;이전영
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2003.10b
    • /
    • pp.70-72
    • /
    • 2003
  • Discovering sequential patterns from a large database is a major pattern-extraction problem in knowledge discovery and data mining. Sequential pattern mining uses an approach similar to the Apriori algorithm for association rules, and in this process sequences are handled through a hash-tree structure. The hash-tree is a storage structure that ignores the ordering of items and the locality of data sequences, and it relies on many complex pointer operations performed through simple traversal. This paper proposes the Multi-level Linear Location Tree (MLLT), which remedies these drawbacks of the hash-tree structure, and introduces an efficient mining method (MLLT-Join) based on it.


Comparison of Readability by Text Attributes of Self-Guided Interpretive Signs (자기안내식(自己案內式) 해설판(解說板) 글자 속성(屬性)에 따른 가독성(可讀性) 비교(比較)에 관한 연구(硏究))

  • Kim, Sang-Oh
    • Journal of Korean Society of Forest Science
    • /
    • v.95 no.1
    • /
    • pp.12-22
    • /
    • 2006
  • Understanding the readability of texts in signs is necessary to enhance the communication effectiveness of self-guided interpretive signs. This study compared signs' readability under different text attributes. A total of 1391 respondents participated in a questionnaire survey at the 'Neodeolgeong' area in Mudeung-Mountain Provincial Park during September-November of 2004. The study found that 'Hy Gyunmyungjo' in letter style, 'both-side' in letter justification, 190% (HWP 2002) in space between lines, 10 (HWP 2002) in space between letters, and 25 letters per line showed the highest readability at a text size of 58 points, respectively. The study illustrates an example of an interpretive sign made up by combining the five text attributes that showed the highest readability, and discusses interpretive sign text design and future research questions.

An Adaptive Algorithm for Plagiarism Detection in a Controlled Program Source Set (제한된 프로그램 소스 집합에서 표절 탐색을 위한 적응적 알고리즘)

  • Ji, Jeong-Hoon;Woo, Gyun;Cho, Hwan-Gue
    • Journal of KIISE:Software and Applications
    • /
    • v.33 no.12
    • /
    • pp.1090-1102
    • /
    • 2006
  • This paper suggests a new algorithm for detecting plagiarism among a set of source codes constrained to be functionally equivalent, such as those submitted for a programming assignment or a programming contest problem. The typical algorithms largely exploited up to now are based on greedy string tiling, which seeks a perfect match of substrings, and on analysis of string similarity based on the local alignment of two strings. This paper introduces a new method for detecting similar intervals of the given programs based on an adaptive similarity matrix, each entry of which is the logarithm of a keyword's probability derived from its frequency in the given set of programs. We tested this method using sets of programs submitted for more than 10 real programming contests. According to the experimental results, this method has several advantages over the previous one, which uses a fixed similarity matrix (+1 for a match, -1 for a mismatch, -2 for a gap), and the adaptive similarity matrix can be used for detecting various plagiarism cases.
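
Since the method builds on local alignment with an adaptive similarity matrix, the sketch below runs Smith-Waterman over keyword sequences with match scores taken from negative log keyword frequencies (rare keywords matching counts for more); the fixed mismatch and gap penalties are assumptions, not the paper's parameters.

```python
# Hypothetical sketch of local alignment (Smith-Waterman) over keyword
# sequences with an adaptive score: matching a rare keyword contributes more
# (via -log probability) than matching a common one.
import math
from collections import Counter

def adaptive_scores(programs):
    """Log-probability based match score for each keyword in the corpus."""
    counts = Counter(tok for prog in programs for tok in prog)
    total = sum(counts.values())
    return {tok: -math.log(c / total) for tok, c in counts.items()}

def local_align(a, b, score, gap=-2.0, mismatch=-1.0):
    """Return the best local alignment score between keyword sequences a, b."""
    h = [[0.0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = score.get(a[i - 1], 1.0) if a[i - 1] == b[j - 1] else mismatch
            h[i][j] = max(0.0, h[i - 1][j - 1] + s,
                          h[i - 1][j] + gap, h[i][j - 1] + gap)
            best = max(best, h[i][j])
    return best
```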

Automatic Segmentation of the Prostate in MR Images using Image Intensity and Gradient Information (영상의 밝기값과 기울기 정보를 이용한 MR영상에서 전립선 자동분할)

  • Jang, Yj-Jin;Jo, Hyun-Hee;Hong, Helen
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.15 no.9
    • /
    • pp.695-699
    • /
    • 2009
  • In this paper, we propose an automatic prostate segmentation technique using image intensity and gradient information. Our method is composed of four steps. First, rays are generated at regular intervals, and the start and end positions of each ray are calculated so as to minimize the effect of noise. Second, the profile on each ray is sorted by gradient, and priorities are assigned to the sorted gradients in the profile. Third, boundary points are extracted using the gradient priority and the intensity distribution. Finally, to reduce errors, the extracted boundary points are corrected using B-spline interpolation. For accuracy evaluation, the average distance difference and the overlapping region ratio between the results of manual and automatic segmentation were calculated. In the experiments, the average distance difference was 1.09 mm ± 0.20 mm, and the overlapping region ratio was 92%.
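
The gradient-priority step can be sketched as follows: along each ray, samples are ranked by gradient magnitude and the first one consistent with an assumed intensity range is taken as the boundary point; the intensity-range test is an assumption used for illustration, not the paper's exact criterion.

```python
# Hypothetical sketch of per-ray boundary extraction: samples along a ray are
# ranked by gradient magnitude, and the highest-ranked sample whose intensity
# falls inside an expected prostate intensity range is taken as the boundary
# point.
import numpy as np

def boundary_point_on_ray(positions, intensities, lo, hi):
    """positions: (n, 2) sample coordinates along one ray; intensities: (n,)."""
    gradient = np.abs(np.gradient(intensities))
    for idx in np.argsort(gradient)[::-1]:        # highest gradient first
        if lo <= intensities[idx] <= hi:          # intensity-distribution check
            return positions[idx]
    return positions[np.argmax(gradient)]         # fallback: strongest edge
```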