Title/Summary/Keyword: data processing technique


A Study on the Design and Development of Automatic Optical Fiber Aligner (자동 광섬유 정렬 장치의 설계 및 제작에 관한 연구)

  • Kim, Byung-Hee; Uhm, Chul; Choi, Young-Suk
    • Journal of Industrial Technology / v.22 no.B / pp.241-249 / 2002
  • Optical fiber is indispensable for optical communication systems that transmit large volumes of data at high speed, but sub-micron precision technology is required to align the optical axes. We developed an automatic optical fiber aligner based on image processing and an automatic loading system. The work comprises a 6-axis micro-stage system for the input/output optical fiber arrays, initial automatic alignment software for the input fiber array based on an image processing technique, a fast I/O-synchronous alignment strategy, an automatic loading/unloading system, and an automatic UV bonding mechanism. Alignment is performed with a PC-based motion controller, and the automatic loading system, with a repeatability of about 10 μm, carries out the primary (coarse) line-up ahead of the high-precision alignment. The image processing system and algorithm replace the previous manual primary line-up, so the fiber input array and the waveguide chip are brought into line automatically. A hedged sketch of image-based coarse alignment is given after this entry.

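As a rough illustration of image-based coarse alignment (not the authors' implementation: the pixel scale, target position, and the functions spot_centroid and coarse_alignment_step below are hypothetical assumptions), one can estimate the fiber-core spot position in a camera frame from an intensity centroid and convert the pixel offset into a stage correction:

```python
import numpy as np

def spot_centroid(frame: np.ndarray) -> tuple[float, float]:
    """Estimate the (x, y) centroid of the bright fiber-core spot in a grayscale frame."""
    f = np.clip(frame.astype(float) - frame.mean(), 0, None)   # suppress background
    ys, xs = np.indices(f.shape)
    total = f.sum()
    return (xs * f).sum() / total, (ys * f).sum() / total

def coarse_alignment_step(frame, target_xy, um_per_pixel=2.0):
    """Return a (dx, dy) stage correction in micrometers toward the target position."""
    cx, cy = spot_centroid(frame)
    dx_px, dy_px = target_xy[0] - cx, target_xy[1] - cy
    return dx_px * um_per_pixel, dy_px * um_per_pixel           # pixels -> micrometers

# Example with a synthetic frame: a Gaussian spot offset from the image center.
h, w = 240, 320
ys, xs = np.indices((h, w))
frame = np.exp(-(((xs - 180) ** 2 + (ys - 100) ** 2) / (2 * 8.0 ** 2)))
dx_um, dy_um = coarse_alignment_step(frame, target_xy=(w / 2, h / 2))
print(f"move stage by ({dx_um:+.1f}, {dy_um:+.1f}) um")         # fed to the motion controller
```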

A Parallel Kalman Filter for Discrete Linear Time-invariant System (이산 선형 시불변시스템에 대한 병렬칼만필터)

  • Lee, Jang-Gyu; Kim, Yong-Joon; Kim, Hyoung-Joong
    • Proceedings of the KIEE Conference / 1990.07a / pp.64-67 / 1990
  • A parallel processing algorithm is proposed for the discrete Kalman filter, one of the most commonly used filtering techniques in modern control, signal processing, and communication. Previously proposed parallel algorithms for reducing the number of computations in the Kalman filter are either hierarchical structures based on distributed processing of measurements or systolic structures that disperse the computational burden. In this paper, a new parallel Kalman filter employing a structure similar to recursive doubling is proposed. The state estimates of the new algorithm converge with a data processing speed twice that of the conventional Kalman filter, while the optimality of the conventional Kalman filter is maintained. A sketch of the conventional sequential filter that the scheme parallelizes is given after this entry.

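For reference, a minimal sketch of the conventional (sequential) discrete Kalman filter for a linear time-invariant system, the baseline that the recursive-doubling scheme parallelizes; the model matrices and noise levels below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def kalman_filter(F, H, Q, R, x0, P0, measurements):
    """Sequential Kalman filter for x[k+1] = F x[k] + w,  z[k] = H x[k] + v."""
    x, P = x0, P0
    estimates = []
    for z in measurements:
        # Time update (predict)
        x = F @ x
        P = F @ P @ F.T + Q
        # Measurement update (correct)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        estimates.append(x.copy())
    return estimates

# Illustrative 1-D constant-velocity model (assumed, not from the paper).
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])                # state: [position, velocity]
H = np.array([[1.0, 0.0]])                           # only position is measured
Q = 1e-4 * np.eye(2)
R = np.array([[0.05]])
rng = np.random.default_rng(0)
truth, zs = np.array([0.0, 1.0]), []
for _ in range(50):
    truth = F @ truth
    zs.append(H @ truth + rng.normal(0.0, 0.2, size=1))
est = kalman_filter(F, H, Q, R, x0=np.zeros(2), P0=np.eye(2), measurements=zs)
print("final estimate:", est[-1])
```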

Design and Verification of Algorithms for the Motion Detection of Vehicles using Hierarchical Motion Estimation and Parallel Processing (계층화 모션 추정법과 병렬처리 기반의 차량 움직임 측정 알고리즘 개발 및 검증)

  • 강경훈; 심현진; 이은숙; 정성태; 남궁문; 금기정; 이상설
    • Proceedings of the IEEK Conference / 2002.06d / pp.21-24 / 2002
  • This paper presents a new method for detecting the motion of vehicles using hierarchical motion estimation and parallel processing. The road image is captured with a CMOS sensor and divided into small blocks, and the motion of each block is detected with a block-matching method that relies on hierarchical motion estimation and parallel processing to achieve real-time operation. The parallelism is obtained through pipelining and a data-flow technique. The proposed method has been implemented on an embedded system, and experimental results show that it detects the motion of vehicles in real time. A minimal block-matching sketch is given after this entry.

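The core ingredient of the approach is block matching; below is a minimal full-search SAD matcher for a single block (illustrative only; the hierarchy, pipelining, and embedded implementation from the paper are not reproduced, and the block size and search range are assumed values):

```python
import numpy as np

def block_motion(prev: np.ndarray, curr: np.ndarray, y: int, x: int,
                 block: int = 8, search: int = 4) -> tuple[int, int]:
    """Full-search block matching: return the (dy, dx) of the best SAD match
    for the block at (y, x) of `curr` within a +/- `search` window of `prev`."""
    ref = curr[y:y + block, x:x + block].astype(int)
    best, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > prev.shape[0] or xx + block > prev.shape[1]:
                continue
            cand = prev[yy:yy + block, xx:xx + block].astype(int)
            sad = np.abs(ref - cand).sum()            # sum of absolute differences
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv

# Synthetic test: shift a random frame by (2, 3) pixels and recover the motion.
rng = np.random.default_rng(1)
prev = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
curr = np.roll(prev, shift=(2, 3), axis=(0, 1))
print(block_motion(prev, curr, y=24, x=24))           # expected (-2, -3), back toward prev
```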

Novel Parallel Approach for SIFT Algorithm Implementation

  • Le, Tran Su; Lee, Jong-Soo
    • Journal of information and communication convergence engineering / v.11 no.4 / pp.298-306 / 2013
  • The scale-invariant feature transform (SIFT) is an effective algorithm used in object recognition, panorama stitching, and image matching. However, due to its complexity, real-time processing is difficult to achieve with current software approaches. The increasing availability of parallel computers makes parallelizing these tasks an attractive approach. This paper proposes a novel parallel implementation of the SIFT algorithm that uses a block filtering technique in the Gaussian convolution process on the SIMD Pixel Processor. The implementation fully exposes the parallelism available in the SIFT algorithm and exploits the processing and input/output capabilities of the processor, resulting in a system that can perform real-time image and video compression. We apply the implementation to images and measure the effectiveness of the approach. Experimental simulation results indicate that the proposed method is capable of real-time operation and that the parallel approach delivers outstanding processing performance. A sketch of the block (tiled) Gaussian filtering idea is given after this entry.
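
A hedged sketch of the block-filtering idea: tiles are filtered independently (so parallel hardware can process them concurrently) after being padded with a halo wide enough to cover the Gaussian support. This illustrates the general tiling technique only, not the authors' SIMD Pixel Processor code; scipy's gaussian_filter stands in for the hardware convolution, and the tile size is an assumed value.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tiled_gaussian(image: np.ndarray, sigma: float, tile_rows: int = 64) -> np.ndarray:
    """Filter an image in independent horizontal tiles, each padded with a halo
    so the interior of every tile matches the full-frame Gaussian result."""
    halo = int(4 * sigma) + 1                      # covers scipy's default 4-sigma truncation
    out = np.empty_like(image, dtype=float)
    for r0 in range(0, image.shape[0], tile_rows):
        r1 = min(r0 + tile_rows, image.shape[0])
        p0, p1 = max(r0 - halo, 0), min(r1 + halo, image.shape[0])
        filtered = gaussian_filter(image[p0:p1].astype(float), sigma)
        out[r0:r1] = filtered[r0 - p0:r0 - p0 + (r1 - r0)]
    return out

img = np.random.default_rng(2).random((256, 256))
full = gaussian_filter(img, sigma=1.6)             # sigma typical of SIFT's first octave
tiled = tiled_gaussian(img, sigma=1.6)
print(np.max(np.abs(full - tiled)))                # agrees to floating-point precision
```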

A fast high-resolution vibration measurement method based on vision technology for structures

  • Son, Ki-Sung; Jeon, Hyeong-Seop; Chae, Gyung-Sun; Park, Jae-Seok; Kim, Se-Oh
    • Nuclear Engineering and Technology / v.53 no.1 / pp.294-303 / 2021
  • Various types of sensors are used at industrial sites to measure vibration. As vibration measurement methods have diversified, vibration monitoring with camera equipment has recently been introduced. However, owing to the physical limitations of the hardware, the measurement resolution is lower than that of conventional sensors, and real-time processing is difficult because of the extensive image processing involved; as a result, most such methods in practice only monitor status trends. To address these disadvantages, a high-resolution vibration measurement method based on image analysis of the edge region of the structure has been reported. While this method offers higher resolution than existing camera-based vibration measurement techniques, it requires a significant amount of computation. In this study, a method is proposed for rapidly processing the large volume of image data acquired from vision equipment and measuring the vibration of structures with high resolution, and it is verified through experiments. The results show that the proposed method can quickly measure the vibration of structures remotely. A sketch of sub-pixel edge-displacement estimation is given after this entry.
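
As a hedged sketch of one common ingredient of such vision-based methods, the snippet below estimates the sub-pixel position of an edge in a 1-D intensity profile as the centroid of the gradient magnitude and tracks its displacement between frames. It illustrates the general idea only, not the authors' algorithm; the profiles and the 0.3-pixel shift are synthetic.

```python
import numpy as np

def edge_position(profile: np.ndarray) -> float:
    """Locate an edge in a 1-D intensity profile with sub-pixel resolution:
    the centroid of the gradient magnitude."""
    g = np.abs(np.gradient(profile.astype(float)))
    idx = np.arange(len(profile))
    return float((idx * g).sum() / g.sum())

def edge_displacement(ref: np.ndarray, cur: np.ndarray) -> float:
    """Displacement (in pixels) of the edge between two frames' profiles."""
    return edge_position(cur) - edge_position(ref)

# Synthetic edge profiles: the second frame's edge is shifted by 0.3 pixels.
px = np.arange(200.0)
ref = 1.0 / (1.0 + np.exp(-(px - 100.0) / 3.0))     # smooth edge centered at pixel 100
cur = 1.0 / (1.0 + np.exp(-(px - 100.3) / 3.0))     # same edge shifted by 0.3 px
print(edge_displacement(ref, cur))                   # close to 0.3
```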

Privacy-Preserving Kth Element Score over Vertically Partitioned Data on Multi-Party (다자 간 환경에서 수직 분할된 데이터에서 프라이버시 보존 k번째 항목의 score 계산)

  • Hong, Jun Hee; Jung, Jay Yeol; Jeong, Ik Rae
    • Journal of the Korea Institute of Information Security & Cryptology / v.24 no.6 / pp.1079-1090 / 2014
  • Data mining is a technique for extracting useful information for marketing and pattern analysis from the data we hold. When this technique is used, however, the personal data of data providers can be leaked by accident, and several techniques have been studied to preserve privacy and protect the data from leakage. Vertically partitioned data refers to data whose attributes are held separately by a number of different parties. For such vertically partitioned data, methods have been developed that distinguish the kth element from the (k+1)th element by using a score. However, the previous method works only in the two-party case, so in this paper we propose an extended technique, based on the Paillier cryptosystem, that can be used in the multi-party case. A toy illustration of the Paillier additive homomorphism is given after this entry.
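
As a toy illustration of the Paillier additive homomorphism that such multi-party score protocols build on (a didactic sketch with tiny hard-coded primes chosen for this example; it is not the paper's protocol and the parameters are nowhere near secure):

```python
import math
import random

# Toy Paillier keypair with small primes (illustration only; never use such sizes).
p, q = 293, 433
n = p * q
n2 = n * n
lam = math.lcm(p - 1, q - 1)            # lambda = lcm(p-1, q-1)
g = n + 1                               # standard simplified choice of generator
mu = pow(lam, -1, n)                    # modular inverse of lambda mod n

def encrypt(m: int) -> int:
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n      # L(x) = (x - 1) / n

# Additive homomorphism: E(a) * E(b) mod n^2 decrypts to a + b mod n.
a, b = 1234, 5678
print(decrypt((encrypt(a) * encrypt(b)) % n2))      # 6912 == a + b
```

This property lets each party contribute its share of a score under encryption while only the final sum is revealed, which is the building block the multi-party extension relies on.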

Data Mining for Uncertain Data Based on Difference Degree of Concept Lattice

  • Qian Wang; Shi Dong; Hamad Naeem
    • Journal of Information Processing Systems / v.20 no.3 / pp.317-327 / 2024
  • With the rapid development of database technology, database management systems have come into ever wider use, and data mining technology is now widely applied in scientific research, financial investment, marketing, insurance, medical health, and other fields. We discuss data mining technology and analyze its open questions; research on new data mining methods therefore has important significance. Some previous work did not consider the differences between attributes, leading to redundancy when constructing concept lattices. This paper proposes a new method of uncertain data mining based on a concept lattice with a connotation difference degree (c_diff). The method defines two rules, and the construction of the concept lattice is accelerated by excluding attributes with poor discriminative power from the process. A new technique for calculating c_diff is also introduced that does not scan the full database on each layer, thereby reducing the number of database scans. The experimental outcomes show that the proposed method saves considerable time and improves the accuracy of data mining compared with the U-Apriori algorithm. A toy sketch of formal-concept construction is given after this entry.
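
A toy sketch of the concept-lattice structure the method builds on: naive enumeration of formal concepts for a tiny binary object-attribute context. The objects and attributes are made up for this example, and the connotation difference degree c_diff itself is defined in the paper and is not reproduced here.

```python
from itertools import combinations

# Tiny formal context: object -> set of attributes it possesses (illustrative data).
context = {
    "o1": {"a", "b"},
    "o2": {"a", "c"},
    "o3": {"a", "b", "c"},
    "o4": {"b", "c"},
}
attributes = sorted(set.union(*context.values()))

def extent(intent_set: set) -> set:
    """Objects that possess every attribute in the intent."""
    return {o for o, attrs in context.items() if intent_set <= attrs}

def intent(objs: set) -> set:
    """Attributes shared by every object in the set (all attributes if the set is empty)."""
    return set(attributes) if not objs else set.intersection(*(context[o] for o in objs))

# A formal concept is a pair (extent, intent) closed under the two derivations.
concepts = set()
for r in range(len(attributes) + 1):
    for combo in combinations(attributes, r):
        ext = extent(set(combo))
        concepts.add((frozenset(ext), frozenset(intent(ext))))

for ext, itt in sorted(concepts, key=lambda c: (-len(c[0]), sorted(c[1]))):
    print(sorted(ext), sorted(itt))
```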

Development of a Computation Program for Automatic Processing of Calibration Data of Radiation Instrument (방사선 측정기 교정 데이터의 자동처리를 위한 전산프로그램 개발)

  • Jang, Ji-Woon; Shin, Hee-Sung; Youn, Cheung; Lee, Yun-Hee; Kim, Ho-Dong; Jung, Ki-Jung
    • Journal of the Korean Society for Nondestructive Testing / v.26 no.4 / pp.246-254 / 2006
  • A computation program has been developed for automatic data processing in the calibration of gamma survey meters. The program was developed in Visual Basic and coded to follow the steps of the calibration procedure. For the automatic data processing it uses OLE (object linking and embedding) Excel automation, a programming technique for controlling Excel. A performance test based on reference data was carried out with the developed program: the calibration factors and uncertainties computed by the program were equal to those obtained from the reference data. In addition, the efficiency and precision of the calibration work increased significantly when the program was used. A sketch of the kind of calculation automated here is given after this entry.
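
A hedged sketch of the kind of calculation such a program automates: the calibration factor as the ratio of the conventional true value to the instrument reading, with independent relative uncertainties combined in quadrature. The numbers and uncertainty components are illustrative assumptions, not the paper's reference data, and Python stands in here for the original Visual Basic/Excel automation.

```python
import math

def calibration_factor(reference_value: float, instrument_reading: float) -> float:
    """Calibration factor N = conventional true value / instrument indication."""
    return reference_value / instrument_reading

def combined_relative_uncertainty(*relative_components: float) -> float:
    """Combine independent relative standard uncertainties in quadrature."""
    return math.sqrt(sum(u ** 2 for u in relative_components))

# Illustrative calibration point for a gamma survey meter (assumed numbers).
reference_dose_rate = 10.0      # uSv/h, conventional true value at the calibration point
mean_reading = 9.6              # uSv/h, mean of the meter's indications
u_reference = 0.015             # relative std. uncertainty of the reference field
u_repeatability = 0.008         # relative std. uncertainty of repeated readings
u_positioning = 0.005           # relative std. uncertainty from source-detector distance

N = calibration_factor(reference_dose_rate, mean_reading)
u_rel = combined_relative_uncertainty(u_reference, u_repeatability, u_positioning)
print(f"calibration factor = {N:.3f}, expanded uncertainty (k=2) = {2 * u_rel * 100:.1f} %")
```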

High Speed Kernel Data Collection method for Analysis of Memory Workload (메모리 워크로드 분석을 위한 고속 커널 데이터 수집 기법)

  • Yoon, Jun Young; Jung, Seung Wan; Park, Jong Woo; Kim, Jung-Joon; Seo, Dae-Wha
    • KIPS Transactions on Computer and Communication Systems / v.2 no.11 / pp.461-470 / 2013
  • This paper proposes a high-speed kernel data collection method for analyzing memory workloads, based on a technique that directly accesses a process's memory management structures. Conventional analysis tools collect data more slowly and lack scalability because they gather only standardized memory information. The proposed method collects kernel data much faster than conventional methods by directly reading the process's memory information, page table, and page structures within the memory management structure, and it can collect exactly the data the user wants. We collect the memory management data of a running process and analyze its memory workload. A sketch of the conventional user-space collection path, for contrast, is given after this entry.
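
For contrast with direct in-kernel access, here is a minimal sketch of the conventional user-space collection path that such tools typically replace: parsing a process's memory-map counters from Linux procfs. The paths and fields are standard /proc/<pid>/smaps entries, not the paper's collector, and the script is Linux-only.

```python
import os
from collections import defaultdict

def smaps_rollup(pid=None) -> dict:
    """Sum per-mapping memory counters (kB) from /proc/<pid>/smaps.
    This is the slow, text-parsing path that in-kernel collectors avoid."""
    pid = pid if pid is not None else os.getpid()
    totals = defaultdict(int)
    with open(f"/proc/{pid}/smaps") as f:
        for line in f:
            key, _, rest = line.partition(":")
            fields = rest.split()
            if len(fields) == 2 and fields[1] == "kB":   # lines like "Rss:  8 kB"
                totals[key] += int(fields[0])
    return dict(totals)

stats = smaps_rollup()                                   # current process as an example
for key in ("Rss", "Pss", "Shared_Clean", "Private_Dirty", "Swap"):
    print(f"{key:>15}: {stats.get(key, 0)} kB")
```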

Applying Meta-Heuristic Algorithm based on Slicing Input Variables to Support Automated Test Data Generation (테스트 데이터 자동 생성을 위한 입력 변수 슬라이싱 기반 메타-휴리스틱 알고리즘 적용 방법)

  • Choi, Hyorin; Lee, Byungjeong
    • KIPS Transactions on Software and Data Engineering / v.7 no.1 / pp.1-8 / 2018
  • Software testing is important for determining the reliability of a system, but it demands a great deal of effort and cost. Model-based testing has been proposed as a way to reduce these costs by automating test design from models that formally represent the system requirements. To generate input values that exercise each path of the model, a meta-heuristic technique is used to search for the test data. In this paper, we propose an automatic test data generation method that uses a slicing method and a priority policy, suppressing unnecessary computation by excluding variables not related to the target path. Experimental results show that the proposed method generates test data more effectively than the conventional method. A sketch of the underlying search loop is given after this entry.
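
A hedged sketch of the underlying search idea: hill climbing on a branch-distance fitness for a target branch, mutating only the input variables that the slice says the branch depends on. The program under test, the fitness function, and the slice are all illustrative assumptions, not the authors' benchmark or algorithm.

```python
import random

def program_under_test(a: int, b: int, c: int) -> str:
    """Toy system under test; `c` does not influence the target branch."""
    if a * 2 == b + 10:                   # target branch to cover
        return "target"
    return "other"

def branch_distance(a: int, b: int) -> int:
    """0 when the target branch condition holds; grows with how far off it is."""
    return abs(a * 2 - (b + 10))

def search_test_data(sliced_vars=("a", "b"), iterations=10_000, seed=0):
    """Hill-climb over the inputs, mutating only variables in the slice of the target path."""
    rng = random.Random(seed)
    inputs = {"a": rng.randint(-100, 100), "b": rng.randint(-100, 100), "c": 0}
    best = branch_distance(inputs["a"], inputs["b"])
    for _ in range(iterations):
        if best == 0:
            break
        candidate = dict(inputs)
        var = rng.choice(sliced_vars)                  # slicing: skip unrelated variables
        candidate[var] += rng.choice((-1, 1)) * rng.randint(1, 5)
        dist = branch_distance(candidate["a"], candidate["b"])
        if dist < best:                                # accept only improving moves
            inputs, best = candidate, dist
    return inputs, best

inputs, dist = search_test_data()
print(inputs, dist, program_under_test(**inputs))      # expect dist == 0 and "target"
```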