• Title/Summary/Keyword: 중복수 추출 (duplicate/multiplicity extraction)

Search Results: 216

A Study on the Multivariate Stratified Random Sampling with Multiplicity (중복수가 있는 다변량 층화임의추출에 관한 연구(층별로 독립인 경우의 배분문제))

  • Kim, Ho-Il
    • Journal of the Korean Data and Information Science Society
    • /
    • v.10 no.1
    • /
    • pp.79-89
    • /
    • 1999
  • A counting rule that allows an element to be linked to more than one enumeration unit is called a multiplicity counting rule, and sample designs that use multiplicity counting rules are called network samples. Defining a network as a set of observation units with a given linkage pattern, a network may be linked to more than one selection unit, and a single selection unit may be linked to more than one network. This paper considers the allocation problem for multivariate stratified random sampling with multiplicity.

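The multiplicity counting rule described above can be sketched as a simple estimator: when an enumeration unit is sampled, every element linked to it is counted, but each element's value is down-weighted by its multiplicity (the number of units it is linked to). This is a minimal illustration under simple random sampling of enumeration units; all names are illustrative, not the paper's notation.

```python
# Minimal sketch of a multiplicity (network sampling) estimator of a
# population total, assuming simple random sampling of enumeration units.

def multiplicity_estimate(sampled_units, links, values, total_units):
    """Estimate a population total under a multiplicity counting rule.

    sampled_units: enumeration units drawn by simple random sampling
    links: dict element -> set of enumeration units linked to it
    values: dict element -> its y-value
    total_units: N, number of enumeration units in the frame
    """
    n = len(sampled_units)
    sample_total = 0.0
    for element, unit_set in links.items():
        multiplicity = len(unit_set)  # units linked to this element
        hits = sum(1 for u in sampled_units if u in unit_set)
        # each hit contributes the element's value divided by its multiplicity
        sample_total += hits * values[element] / multiplicity
    return total_units / n * sample_total  # expand to the population
```

Dividing by the multiplicity keeps the estimator unbiased even though an element can be counted via several selection units.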

System Optimization Technique using Crosscutting Concern (크로스커팅 개념을 이용한 시스템 최적화 기법)

  • Lee, Seunghyung;Yoo, Hyun
    • Journal of Digital Convergence
    • /
    • v.15 no.3
    • /
    • pp.181-186
    • /
    • 2017
  • System optimization is a technique that changes the structure of a program to extract duplicated modules without changing the source code, so that the extracted modules can be reused. Structure-oriented and object-oriented development are efficient at ordinary modularization, but they cannot modularize crosscutting concerns. To apply the crosscutting concept to an existing system, a technique is needed for extracting the optimization modules distributed across the system. This paper proposes a method for extracting redundant modules from a completed system. The proposed method analyzes the source code to extract overlapping elements through data-dependency and control-dependency analysis. The extracted redundant elements are then used in program-dependency analysis for system optimization. The result of the duplicated-dependency analysis is converted into a control-flow graph, from which a minimal crosscutting module can be produced. By organizing the elements extracted through dependency analysis into a crosscutting-concern module, the proposed method minimizes duplicated code within the system.
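The core idea of locating duplicated elements across a system can be illustrated with a toy sketch (not the paper's exact dependency-analysis algorithm): index statement sequences per function and report sequences that recur in more than one function as candidates for a crosscutting module.

```python
# Toy duplicate-sequence finder: statement sequences that appear in more
# than one function are candidates for extraction into a shared module.

from collections import defaultdict

def duplicated_sequences(functions, min_len=2):
    """functions: dict name -> list of statements.
    Returns sequences of length min_len that appear in multiple functions."""
    index = defaultdict(set)
    for name, stmts in functions.items():
        for i in range(len(stmts) - min_len + 1):
            seq = tuple(stmts[i:i + min_len])
            index[seq].add(name)
    return {seq: owners for seq, owners in index.items() if len(owners) > 1}
```

In the paper's setting the duplicated elements would additionally be filtered by data- and control-dependency analysis before forming the crosscutting module.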

Comparison of Match Candidate Pair Constitution Methods for UAV Images Without Orientation Parameters (표정요소 없는 다중 UAV영상의 대응점 추출 후보군 구성방법 비교)

  • Jung, Jongwon;Kim, Taejung;Kim, Jaein;Rhee, Sooahm
    • Korean Journal of Remote Sensing
    • /
    • v.32 no.6
    • /
    • pp.647-656
    • /
    • 2016
  • The growth of UAV technology has expanded the range of UAV image applications. Many UAV image-based applications use a method called incremental bundle adjustment. However, incremental bundle adjustment incurs large computational overhead because it attempts feature matching over all image pairs. For an efficient feature-matching process, matching must be confined to overlapping pairs, which are normally determined using exterior orientation parameters. When exterior orientation parameters are unavailable, overlapping pairs cannot be determined, and other methods are needed to constitute the feature-matching candidates. In this paper we compare matching-candidate constitution methods that do not require exterior orientation parameters, including partial feature matching, Bag-of-keypoints, and an image-intensity method. As a reference we use overlapping-pair determination based on exterior orientation parameters. Experimental results showed that the partial feature matching method was the most efficient.
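One of the compared approaches, Bag-of-keypoints, can be sketched as follows: each image is summarized by a histogram over quantized feature descriptors ("visual words"), and image pairs whose histograms are similar become match candidates. The threshold and data layout here are illustrative assumptions.

```python
# Toy Bag-of-keypoints candidate selection: pairs of images with similar
# visual-word histograms are retained as feature-matching candidates.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def match_candidates(histograms, threshold=0.8):
    """histograms: dict image -> visual-word histogram.
    Returns candidate image pairs above the similarity threshold."""
    names = sorted(histograms)
    pairs = []
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if cosine(histograms[a], histograms[b]) >= threshold:
                pairs.append((a, b))
    return pairs
```

This avoids running expensive feature matching on all pairs, which is exactly the overhead the abstract attributes to plain incremental bundle adjustment.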

Extracting Duplication for panoramic mosaics (파노라믹 모자이크를 위한 중복 정보 추출)

  • Lee, Ji-Hyun;Song, Bok-Deuk;Yun, Tae-Soo;Yang, Hwang-Kyu
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2003.05a
    • /
    • pp.571-574
    • /
    • 2003
  • This paper proposes a method for mosaicking images from video containing translation and rotation using the Mellin Transform. Using the translation and rotation information obtained after the Mellin Transform, a projection matrix for stitching each pair of images is computed. To reduce the accumulation of inter-image errors that can occur during mosaic generation, a projection matrix is extracted and applied through a global registration step, reducing the accumulated error and producing an accurate mosaic. Mosaicking techniques proposed so far spend much time computing overlap and handle only images obtained by translation, so they cannot produce accurate mosaics when an image is rotated. This paper therefore proposes a method that, using a projection matrix based on the Mellin Transform, quickly finds the overlapping information between images and generates an accurate mosaic even when the images are translated or rotated.

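The registration step underlying Fourier-Mellin-style mosaicking can be sketched with phase correlation: in log-polar coordinates rotation and scale become translations, after which the same phase-correlation machinery recovers them. The sketch below, assuming numpy, shows only the translation-recovery core, not the authors' full pipeline.

```python
# Phase correlation: recover the integer translation between two images
# from the normalized cross-power spectrum. In a Fourier-Mellin pipeline
# the same routine runs on log-polar resampled magnitude spectra.

import numpy as np

def phase_correlation(a, b):
    """Return the (row, col) shift such that rolling b by it aligns b to a."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12  # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image size to negative offsets
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)
```

Because the cross-power spectrum is normalized, the correlation surface approaches a delta function at the true offset, which is what makes the overlap computation fast.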

Non Duplicated Extract Method of Heterogeneous Data Sources for Efficient Spatial Data Load in Spatial Data Warehouse (공간 데이터웨어하우스에서 효율적인 공간 데이터 적재를 위한 이기종 데이터 소스의 비중복 추출기법)

  • Lee, Dong-Wook;Baek, Sung-Ha;Kim, Gyoung-Bae;Bae, Hae-Young
    • Journal of Korea Spatial Information System Society
    • /
    • v.11 no.2
    • /
    • pp.143-150
    • /
    • 2009
  • A spatial data warehouse is a system that manages data produced through the ETL step from spatial data extracted from spatial DBMSs or various other data sources. During loading, unlike aspatial data, spatial data duplicated within the same subject are not useful, and owing to the nature of spatial data they waste storage space. Moreover, when source data are extracted from heterogeneous systems, they have different spatial types and schemas, so a dedicated spatial extraction method is required. Existing methods load a formalized data set by matching the addresses of the extracted spatial data against a standard geocoding DB. However, these methods require comparison operations between the extracted data and the geocoding DB, and when spatial data are integrated by subject they do not account for data duplicated across heterogeneous spatial DBMSs. This paper proposes an efficient extraction method that integrates the update queries extracted from heterogeneous source systems when building the data warehouse. The method eliminates unnecessary extraction cost by selecting only the related update queries, such as insertions and deletions, from the queries generated up to the current load point. Extracted spatial data are also deduplicated and integrated using the update queries in the source spatial DBMS. The proposed method reduces the storage space wasted by duplicate storage and supports rapid spatial analysis by loading integrated data at each load point.

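The update-query consolidation idea can be sketched simply: rather than extracting every change since the last load, later operations on the same object cancel or supersede earlier ones, so only the net change per object is extracted. The operation names and semantics below are illustrative assumptions, not the paper's exact rules.

```python
# Consolidate an ordered log of update queries into the net operation per
# object, so redundant extraction work is skipped at load time.

def consolidate(queries):
    """queries: ordered list of (op, object_id), op in
    {'insert', 'update', 'delete'}. Returns dict id -> net operation."""
    net = {}
    for op, oid in queries:
        prev = net.get(oid)
        if op == 'delete' and prev == 'insert':
            net.pop(oid)         # inserted then deleted: nothing to load
        elif op == 'update' and prev == 'insert':
            net[oid] = 'insert'  # still new to the warehouse
        else:
            net[oid] = op        # later operation supersedes earlier ones
    return net
```

An object inserted and then deleted between loads never reaches the warehouse at all, which is where the saving over naive extraction comes from.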

Automatic Segmentation of the Catheter in X-ray Angiography Images using Gradient Information and Mode (X-선 혈관조영영상에서 기울기 정보와 최대 빈도수를 이용한 카테터 자동 분할)

  • Baek, Jung-A;Lee, Min-Jin;Hong, He-Len
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2010.06c
    • /
    • pp.458-462
    • /
    • 2010
  • This paper proposes an automatic catheter segmentation method for X-ray angiography images using gradient information and the mode (the most frequent value). The proposed method consists of three steps. First, a region of interest containing the catheter is set, and intensity stretching is performed to increase the image contrast. Second, an edge-enhancement mask oriented along the catheter direction is applied to the image to extract candidate catheter boundary points. Third, catheter boundary points with large gradients and the most frequent diameter are selected from the candidates, and the final catheter boundary is obtained by linear interpolation between them. To evaluate the proposed method, visual assessment and an accuracy evaluation were performed by measuring the overlap ratio and the mean distance difference between expert manual segmentations and the results of the proposed method, and the execution time was measured. In the experiments, the overlap ratio was 93.9% ± 2.7%, the mean distance difference was 0.116 pixels, and the average execution time was 0.011 seconds.

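The third step can be sketched as a simple filter: keep candidate boundary points whose gradient magnitude is high and whose local diameter equals the mode of the diameters. The data layout and threshold below are illustrative assumptions.

```python
# Select catheter boundary points: high gradient magnitude plus a local
# diameter equal to the most frequent (mode) diameter among candidates.

from collections import Counter

def select_boundary_points(candidates, grad_threshold=10.0):
    """candidates: list of (position, gradient_magnitude, diameter).
    Returns the positions kept for the final boundary."""
    strong = [c for c in candidates if c[1] >= grad_threshold]
    if not strong:
        return []
    # the mode diameter is assumed to be the true catheter width
    mode_diameter = Counter(d for _, _, d in strong).most_common(1)[0][0]
    return [pos for pos, _, d in strong if d == mode_diameter]
```

Using the mode rather than the mean makes the width estimate robust to the occasional vessel edge that slips into the candidate set.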

Image Mosaic using Multiresolution Wavelet Analysis (다해상도 웨이블렛 분석 기법을 이용한 영상 모자이크)

  • Yang, In-Tae;Oh, Myung-Jin;Lee, In-Yeub
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.12 no.2 s.29
    • /
    • pp.61-66
    • /
    • 2004
  • With the advent of high-resolution satellite imagery, there is an increasing need for image mosaicking technology applicable to various fields such as GIS (Geographic Information Systems). Mosaicking images requires several methods, such as image matching and histogram modification. In this study, automated image mosaicking is performed using an image matching method based on multi-resolution wavelet analysis (MWA). Specifically, both area-based and feature-based matching are embedded in the multi-resolution wavelet analysis to construct the seam line: seam points are extracted, and a polygon clipping method is applied to define the overlapping area of two adjoining images. Before mosaicking, radiometric correction is performed using histogram matching. As a result, the mosaicking area is automatically extracted by the polygon clipping method, and a seamless image is obtained using multi-resolution wavelet analysis.

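The building block of multi-resolution wavelet analysis is a single decomposition level that splits an image into an approximation band and three detail bands; matching can then proceed coarse-to-fine on the approximation band. Below is a minimal one-level 2-D Haar decomposition in numpy, a sketch of the principle rather than the authors' pipeline.

```python
# One-level 2-D Haar decomposition: average and difference 2x2 blocks to
# obtain approximation (LL) and horizontal/vertical/diagonal detail bands.

import numpy as np

def haar2d(image):
    """Split an even-sized 2-D array into LL, LH, HL, HH bands."""
    a = image[0::2, 0::2]
    b = image[0::2, 1::2]
    c = image[1::2, 0::2]
    d = image[1::2, 1::2]
    ll = (a + b + c + d) / 4.0  # approximation (low-low)
    lh = (a - b + c - d) / 4.0  # horizontal detail
    hl = (a + b - c - d) / 4.0  # vertical detail
    hh = (a - b - c + d) / 4.0  # diagonal detail
    return ll, lh, hl, hh
```

Applying `haar2d` recursively to the LL band yields the resolution pyramid on which coarse matches constrain the finer-level search.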

A Study on Iris Recognition by Iris Feature Extraction from Polar Coordinate Circular Iris Region (극 좌표계 원형 홍채영상에서의 특징 검출에 의한 홍채인식 연구)

  • Jeong, Dae-Sik;Park, Kang-Ryoung
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.3
    • /
    • pp.48-60
    • /
    • 2007
  • Previous research on iris feature extraction transforms the original iris image into a rectangular one by stretching and interpolation, which distorts the iris patterns and consequently reduces iris recognition accuracy. We therefore propose a method that extracts iris features in polar coordinates without distorting the iris patterns. Our proposed method has three strengths over previous research. First, we extract iris features directly from the polar-coordinate circular iris image. Although this requires a little more processing time, there is no loss of recognition accuracy, and we compare the recognition performance of the polar-coordinate representation against the rectangular one using Hamming distance, cosine distance, and Euclidean distance. Second, the center of the pupil generally differs from that of the iris because of the camera angle, head position, and gaze direction of the user; we therefore propose an iris feature detection method based on the polar-coordinate circular iris region that uses the pupil and iris positions and radii simultaneously. Third, we address the overlapped points in the iris patterns that arise with the polar-coordinate circular method, where overlapped points are extracted from the same position of the iris region. To overcome this problem, we modify the Gabor filter's size and frequency on the first track to account for the low-frequency iris patterns caused by the overlapped points. Experimental results showed an EER of 0.29% and d' of 5.9 with the conventional rectangular image, and an EER of 0.16% and d' of 6.4 with the proposed method.
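Sampling the iris directly in polar coordinates, with separate pupil and iris centers and radii as the abstract describes, can be sketched as follows. Nearest-neighbour sampling is used for brevity; the track/angle counts are illustrative assumptions.

```python
# Sample iris texture along circular tracks in polar coordinates, without
# unwrapping to a rectangle. Each ray is interpolated between the pupil
# boundary and the iris boundary, which may have different centers.

import math

def polar_iris_samples(image, pupil_center, pupil_r, iris_center, iris_r,
                       n_tracks=4, n_angles=32):
    """image: 2-D list of intensities, indexed image[y][x].
    Returns an n_tracks x n_angles grid of sampled intensities."""
    samples = []
    for t in range(n_tracks):
        frac = (t + 0.5) / n_tracks  # radial position between boundaries
        track = []
        for k in range(n_angles):
            theta = 2.0 * math.pi * k / n_angles
            # boundary points on the pupil and iris circles for this angle
            px = pupil_center[0] + pupil_r * math.cos(theta)
            py = pupil_center[1] + pupil_r * math.sin(theta)
            ix = iris_center[0] + iris_r * math.cos(theta)
            iy = iris_center[1] + iris_r * math.sin(theta)
            x = px + frac * (ix - px)
            y = py + frac * (iy - py)
            track.append(image[int(round(y))][int(round(x))])
        samples.append(track)
    return samples
```

Because each sample is taken at its true circular position, no stretching toward a rectangle (and hence no pattern distortion) is introduced.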

Aspect Mining Process Design Using Abstract Syntax Tree (추상구문트리를 이용한 어스팩트 마이닝 프로세스 설계)

  • Lee, Seung-Hyung;Song, Young-Jae
    • The Journal of the Korea Contents Association
    • /
    • v.11 no.5
    • /
    • pp.75-83
    • /
    • 2011
  • Aspect-oriented programming is a paradigm that extracts crosscutting concerns from a system and resolves the scattering of functions and tangling of code through software modularization. Existing aspect development methods have difficulty extracting the target area, so it is not easy to apply aspect mining. Aspect mining requires a technique that converts the refactoring elements of an existing program into crosscutting areas. This paper suggests an aspect mining technique for extracting crosscutting concerns from a system. Using an abstract syntax structure specification, functionally duplicated relation elements are extracted. Through the Apriori algorithm, a duplicated syntax tree can be created, and the duplicated source modules that are targets of the crosscutting area can be automatically generated and optimized. Applying the mining process to a module of Berkeley Yacc (berbose.c) confirmed that the length and volume of the program decreased by 9.47% compared with the original module, and by 4.92% in length and 5.11% in volume compared with CCFinder.

A Study of Method to Restore Deduplicated Files in Windows Server 2012 (윈도우 서버 2012에서 데이터 중복 제거 기능이 적용된 파일의 복원 방법에 관한 연구)

  • Son, Gwancheol;Han, Jaehyeok;Lee, Sangjin
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.27 no.6
    • /
    • pp.1373-1383
    • /
    • 2017
  • Deduplication is a function for managing data effectively and improving the efficiency of storage space. When deduplication is applied, the system divides stored files into chunks and stores only the unique chunks, making it possible to use the storage space efficiently. However, commercial digital forensic tools do not support analysis of such file systems, and the original files extracted by those tools cannot be executed or opened. In this paper, we therefore analyze the chunk-generation process of a Windows Server 2012 system with deduplication enabled, along with the structure of the resulting files (the Chunk Store). We also analyze the case, not covered in previous studies, where the chunks are compressed. Based on these results, we propose a method for collecting deduplicated data and reconstructing the original file for digital forensic investigation.
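Conceptually, reconstructing a deduplicated file means walking the file's ordered list of chunk references and concatenating each chunk's body from the chunk store, decompressing where flagged. The sketch below illustrates that idea only; the field names and layout are assumptions, not the on-disk Windows Server 2012 format the paper reverse-engineers.

```python
# Rebuild a deduplicated file from an ordered chunk-reference list and a
# chunk store in which each unique chunk body is stored once, possibly
# compressed (zlib stands in for the real compression format here).

import zlib

def rebuild_file(stream_map, chunk_store):
    """stream_map: list of chunk ids in file order.
    chunk_store: dict id -> (compressed_flag, body bytes).
    Returns the reconstructed file contents."""
    parts = []
    for chunk_id in stream_map:
        compressed, body = chunk_store[chunk_id]
        parts.append(zlib.decompress(body) if compressed else body)
    return b"".join(parts)
```

Note that the same chunk id may appear several times in the stream map; that repetition is exactly the duplication the on-disk format avoids storing.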