• Title/Summary/Keyword: DATA PRE-PROCESSING


A New Semi-Random Interleaver Algorithm for the Noise Removal in Image Communication (영상통신에서 잡음 제거를 위한 새로운 세미 랜덤 인터리버 알고리즘)

  • Hong, Sung-Won;Park, Jin-Soo
    • The Transactions of the Korea Information Processing Society / v.7 no.8 / pp.2473-2483 / 2000
  • In this paper, the turbo code is used to effectively remove noise generated on the image communication channel. Turbo codes have excellent decoding performance, but their system complexity and decoding delay limit their use in real-time communication. To overcome this problem, this paper proposes a new SRI (Semi-Random Interleaver) algorithm that reduces the interleaver size of the turbo-code encoder and decoder and thereby decreases the time delay when image data are transmitted. The SRI uses an interleaver of half the input frame size. Data are written into the interleaver row by row, as in a block interleaver; on readout, however, data are read out in random order, and the next input datum is placed into the just-freed address simultaneously. The SRI therefore halves the complexity compared with pre-existing methods such as the block, helical, and random interleavers. Image data can be processed in real time when the SRI is applied to the turbo code.

  • PDF
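
The half-size, write-by-row/read-at-random behavior described in the abstract can be sketched roughly as follows. This is a generic illustration of the idea, not the authors' exact SRI algorithm; `semi_random_interleave` is a hypothetical helper name:

```python
import random

def semi_random_interleave(frame, seed=0):
    """Sketch of a half-size semi-random interleaver: a buffer of
    len(frame)//2 symbols is filled in order; symbols are read out at
    random positions, and each freed slot is immediately refilled with
    the next input symbol."""
    rng = random.Random(seed)
    half = len(frame) // 2
    buf = list(frame[:half])          # fill the half-size buffer
    out = []
    nxt = half                        # index of the next input symbol
    while buf:
        i = rng.randrange(len(buf))   # read at a random address
        out.append(buf[i])
        if nxt < len(frame):          # refill the just-freed slot
            buf[i] = frame[nxt]
            nxt += 1
        else:
            buf.pop(i)                # drain once the input is exhausted
    return out
```

Because each freed slot is refilled immediately, the working buffer never exceeds half the frame size, which is where the claimed complexity reduction comes from.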

Attribute-based Approach for Multiple Continuous Queries over Data Streams (데이터 스트림 상에서 다중 연속 질의 처리를 위한 속성기반 접근 기법)

  • Lee, Hyun-Ho;Lee, Won-Suk
    • The KIPS Transactions:PartD / v.14D no.5 / pp.459-470 / 2007
  • A data stream is a massive, unbounded sequence of data elements continuously generated at a rapid rate. Query processing for such a data stream must likewise be continuous and rapid, which imposes strict time and space constraints. In most DSMSs (Data Stream Management Systems), the selection predicates of continuous queries are grouped or indexed to guarantee these constraints. This paper proposes a new scheme called an ASC (Attribute Selection Construct) that collectively evaluates selection predicates containing the same attribute in multiple continuous queries. An ASC contains valuable information, such as attribute usage status, partially pre-calculated matching results, and selectivity statistics for its selection predicates. The processing order of the ASCs corresponding to the attributes of a base data stream can significantly influence the overall performance of multiple-query evaluation; consequently, a method of establishing an efficient evaluation order of multiple ASCs is also proposed. Finally, the performance of the proposed method is analyzed through a series of experiments to identify its various characteristics.
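
The core idea of grouping same-attribute predicates so that each attribute of a stream element is inspected once for all queries can be sketched as below. This is a simplified illustration under assumed names (`build_asc`, `evaluate`), not the paper's actual ASC data structure:

```python
from collections import defaultdict

def build_asc(queries):
    """Group the selection predicates of multiple continuous queries by
    attribute (a simplified stand-in for an Attribute Selection
    Construct): maps attribute -> list of (query_id, predicate)."""
    asc = defaultdict(list)
    for qid, predicates in queries.items():
        for attr, pred in predicates:
            asc[attr].append((qid, pred))
    return asc

def evaluate(asc, element):
    """Evaluate each attribute's predicate group once per stream element
    and return the ids of queries whose predicates all matched."""
    failed = set()
    for attr, entries in asc.items():
        value = element[attr]          # read the attribute once
        for qid, pred in entries:
            if not pred(value):
                failed.add(qid)
    all_qids = {qid for entries in asc.values() for qid, _ in entries}
    return all_qids - failed
```

Grouping by attribute means each attribute value is fetched once per element regardless of how many queries reference it, which is the constraint-driven saving the abstract describes.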

DEVELOPMENT OF ATMOSPHERIC CORRECTION ALGORITHM FOR HYPERSPECTRAL DATA USING MODTRAN MODEL

  • Kim, Sun-Hwa;Kang, Sung-Jin;Ji, Jun-Hwa;Lee, Kyu-Sung
    • Proceedings of the KSRS Conference / v.2 / pp.619-622 / 2006
  • Atmospheric correction is one of the critical procedures for extracting quantitative information related to biophysical variables from hyperspectral data. In this study, we attempted to generate a water-vapor content image from the hyperspectral data itself and developed an atmospheric correction algorithm for EO-1 Hyperion data that uses a pre-calculated atmospheric look-up table (LUT) for fast processing. To test the new atmospheric correction algorithm, Hyperion data acquired on June 3, 2001 over the Seoul area were used. Reflectance spectra of various targets in the atmospherically corrected Hyperion images showed the expected general spectral patterns, although further development is needed to reduce the spectral noise.

  • PDF
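
A minimal sketch of the LUT idea: coefficients pre-computed offline (e.g. by MODTRAN runs over a grid of water-vapor values) are interpolated at the retrieved water-vapor content instead of invoking the radiative-transfer model per pixel. The gain/offset form and the parameter names are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def correct_radiance(radiance, wv, lut_wv, lut_gain, lut_offset):
    """Sketch of LUT-based atmospheric correction: hypothetical
    gain/offset coefficients, pre-computed at the water-vapor grid
    points lut_wv, are linearly interpolated at the retrieved
    water-vapor content wv and applied to the at-sensor radiance."""
    gain = np.interp(wv, lut_wv, lut_gain)      # interpolate in the LUT
    offset = np.interp(wv, lut_wv, lut_offset)
    return gain * radiance + offset             # per-pixel correction
```

The speed-up comes from replacing a full radiative-transfer run per pixel with two table interpolations and a linear transform.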

Description of Range Control System in Space Center

  • Yun, Sek-Young;Choi,Yong-Tae;Lee, Hyo-Keun
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings / 2002.10a / pp.53.2-53 / 2002
  • The NARO Space Center is being developed as a national project under the Korea Space Development Program. Among the major missions of the Space Center, the Range Control System is the focal point for all command and control operations. Data acquired from the tracking stations and the on-site facilities are processed and distributed in the Control Center. Data processing, or data fusion, is needed for exact tracking of the launch vehicle from several tracking systems. In the first phase, the best telemetry source is selected among the data streams received from each telemetry station using pre-defined criteria. Trajectory data and major telemetry parameters...

  • PDF
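
The first-phase selection of a best telemetry source against pre-defined criteria might look like the following sketch; the criterion fields (`frame_error_rate`, `signal_level`) are hypothetical, since the paper does not state which criteria are used:

```python
def select_best_source(streams):
    """Pick the best telemetry source among several tracking stations
    using a simple pre-defined criterion: lowest frame-error rate
    first, then highest signal level as a tie-breaker (both fields
    are illustrative assumptions)."""
    return min(streams, key=lambda s: (s["frame_error_rate"], -s["signal_level"]))
```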

Video data output system design for CEU (camera electronic unit) of satellite

  • Park, Jong-Euk;Kong, Jong-Pil;Yong, Sang-Soon;Heo, Haeng-Pal;Kim, Young-Sun;Paik, Hong-Yul
    • Proceedings of the KSRS Conference / 2003.11a / pp.1118-1120 / 2003
  • In the MSC (Multi-Spectral Camera), the incoming light is converted to electronic analog signals by the CCD (charge-coupled device) detectors. The analog signals are amplified, biased, and converted into digital signals (a pixel data stream) in the FPE (Focal Plane Electronics). The digital data are transmitted to the PMU for pre-processing: correcting for nonuniformity, partially reordering the pixel stream, and adding header data for identification and synchronization. In this paper, the video data streams are described in terms of hardware.

  • PDF
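
The pre-processing chain described above (nonuniformity correction, then header addition for identification and synchronization) can be sketched as below. The gain/offset correction form and the header layout are illustrative assumptions, not the actual MSC/PMU format:

```python
import numpy as np

def preprocess_pixels(raw, gain, offset, frame_id):
    """Sketch of a video pre-processing stage: per-detector-element
    nonuniformity (flat-field) correction followed by packing with a
    simple header. Header fields are hypothetical: a sync word for
    synchronization plus a frame id for identification."""
    corrected = (raw - offset) * gain                        # flat-field correction
    header = np.array([0xEB90, frame_id], dtype=np.uint16)   # sync word + frame id
    return np.concatenate([header, corrected.astype(np.uint16)])
```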

Big Data Analytics of Construction Safety Incidents Using Text Mining (텍스트 마이닝을 활용한 건설안전사고 빅데이터 분석)

  • Jeong Uk Seo;Chie Hoon Song
    • Journal of the Korean Society of Industry Convergence / v.27 no.3 / pp.581-590 / 2024
  • This study aims to extract key topics through text mining of incident records (incident history, post-incident measures, preventive measures) from the construction safety accident case data available on the public data portal. It also seeks to provide fundamental insights for establishing disaster-prevention manuals by identifying correlations between these topics. After pre-processing the input data, we used LDA-based topic modeling to derive the main topics. Consequently, we obtained five topics related to incident history and four topics each related to post-incident measures and preventive measures. Although no dominant patterns emerged from the topic-pattern analysis, the study is significant in that it provides quantitative information on the follow-up actions related to the incident history, suggesting practical implications for a preventive decision-making system that links accident history with subsequent measures for recurrence prevention.
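
The pre-processing stage that feeds an LDA topic model can be sketched as follows: tokenize each incident record, drop stopwords, and build the document-term counts that a library implementation (e.g. scikit-learn's `LatentDirichletAllocation`) consumes. The tokenization rules and stopword list here are illustrative, not the study's actual pipeline:

```python
import re
from collections import Counter

def build_doc_term_matrix(docs, stopwords=frozenset({"the", "a", "of"})):
    """Tokenize each record, remove stopwords, and return the sorted
    vocabulary plus a document-term count matrix (rows = documents,
    columns = vocabulary terms), ready for LDA topic modeling."""
    tokenized = [
        [w for w in re.findall(r"[a-z]+", d.lower()) if w not in stopwords]
        for d in docs
    ]
    vocab = sorted({w for doc in tokenized for w in doc})
    matrix = []
    for doc in tokenized:
        counts = Counter(doc)
        matrix.append([counts.get(w, 0) for w in vocab])
    return vocab, matrix
```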

Development of the KnowledgeMatrix as an Informetric Analysis System (계량정보분석시스템으로서의 KnowledgeMatrix 개발)

  • Lee, Bang-Rae;Yeo, Woon-Dong;Lee, June-Young;Lee, Chang-Hoan;Kwon, Oh-Jin;Moon, Yeong-Ho
    • The Journal of the Korea Contents Association / v.8 no.1 / pp.68-74 / 2008
  • Application areas of Knowledge Discovery in Databases (KDD) have expanded to many R&D management processes, including technology trend analysis, forecasting, and evaluation. Established research fields such as informetrics (or scientometrics) have long utilized KDD techniques and methods, and various systems have been developed to support the analysis of large-scale R&D-related databases such as patent or bibliographic DBs. However, extant systems pose problems for Korean users: their prices are high, Korean-language processing is impossible, and users' demands are not reflected. To solve these problems, the Korea Institute of Science and Technology Information (KISTI) developed a stand-alone information analysis system named KnowledgeMatrix. KnowledgeMatrix offers various functions for analyzing data sets retrieved from databases; its main operation units are user-defined lists, matrix generation, cluster analysis, visualization, and data pre-processing. The matrix-generation unit helps extract the information items to be analyzed and calculates occurrence, co-occurrence, and proximity of the items. The cluster-analysis unit clusters matrix data by hierarchical or non-hierarchical methods and presents a tree-type structure of the clustered data. The visualization unit offers various methods such as charts, FDP, strategic diagrams, and PFNet. The data pre-processing unit consists of a data import editor, a string editor, a thesaurus editor, grouping methods, field-refining methods, and sub-dataset generation methods. KnowledgeMatrix shows better performance and offers more varied functions than extant systems.
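
The matrix-generation step (co-occurrence counting) at the heart of such informetric tools can be sketched as below; this is a generic illustration, not KnowledgeMatrix's implementation:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(records):
    """Count how often each pair of items (e.g. keywords or authors)
    occurs together in the same bibliographic record. Returns a Counter
    keyed by sorted (item_a, item_b) pairs."""
    pairs = Counter()
    for items in records:
        # sorted(set(...)) deduplicates within a record and gives each
        # pair a canonical orientation
        for a, b in combinations(sorted(set(items)), 2):
            pairs[(a, b)] += 1
    return pairs
```

The resulting pair counts are exactly what cluster analysis and network visualizations (e.g. PFNet) take as input.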

Malignant and Benign Classification of Liver Tumors in CT according to Data Pre-processing and Deep Learning Model (CT영상에서의 AlexNet과 VggNet을 이용한 간암 병변 분류 연구)

  • Choi, Bo Hye;Kim, Young Jae;Choi, Seung Jun;Kim, Kwang Gi
    • Journal of Biomedical Engineering Research / v.39 no.6 / pp.229-236 / 2018
  • Liver cancer has one of the highest incidence rates in the world, and its mortality rate is the second highest after lung cancer. The purpose of this study is to evaluate the diagnostic ability of deep learning in classifying malignant and benign tumors in CT images of patients with liver tumors. We also tried to identify the best data-processing methods and deep learning models for this classification task. CT data were collected from 92 patients (benign liver tumors: 44, malignant liver tumors: 48) at the Gil Medical Center, yielding 3,024 cross-sectional liver tumor images. For AlexNet and VggNet, the average overall accuracy at each image size was: 69.58% (AlexNet) and 69.4% (VggNet) at 200×200, 71.54% and 67% at 150×150, and 68.79% and 66.2% at 100×100. In conclusion, neither model exceeds 80% overall accuracy, which is not a high level. In addition, the average accuracy for benign tumors was 90.3% while that for malignant tumors was 46.2%, a significant gap between the two classes. AlexNet also trains about 1.6 times faster than VggNet, but the difference is not statistically significant (p > 0.05). Since both models remain below 90% overall accuracy, further research and development are needed, such as training on the liver tumor data with a new model or pre-processing the images with other methods. In the future, it will be useful to involve specialists in image reading using deep learning.
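
The benign/malignant gap reported above (90.3% vs. 46.2%) is exactly what per-class accuracy exposes and overall accuracy hides; a minimal sketch:

```python
def class_accuracies(y_true, y_pred):
    """Per-class accuracy: for each class, the fraction of its samples
    that were predicted correctly. A large spread between classes can
    hide behind a moderate overall accuracy."""
    acc = {}
    for c in set(y_true):
        idx = [i for i, t in enumerate(y_true) if t == c]
        acc[c] = sum(y_pred[i] == c for i in idx) / len(idx)
    return acc
```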

Development of Intelligent Database Program for PSI/ISI Data Management of Nuclear Power Plant (Part II) (원자력발전소 PSI/ISI 데이터 관리를 위한 지능형 데이터베이스 프로그램 개발 (제 2보))

  • Park, Un-Su;Park, Ik-Keun;Um, Byong-Guk;Lee, Jong-Po;Han, Chi-Hyun
    • Journal of the Korean Society for Nondestructive Testing / v.20 no.3 / pp.200-205 / 2000
  • In a previous paper, we discussed the intelligent Windows 95-based data management program (IDPIN), developed for effective and efficient management of large amounts of pre-/in-service inspection (PSI/ISI) data from the Kori nuclear power plants. The IDPIN program enables prompt extraction of previously conducted PSI/ISI conditions and results, avoiding the time-consuming data management and painstaking data processing and analysis of the past. In this study, an intelligent Windows-based data management program (WS-IDPIN) has been developed for effective management of PSI/ISI data for the Wolsong nuclear power plants. The WS-IDPIN program includes modules for comprehensive management and analysis of PSI/ISI results, statistical reliability assessment of PSI/ISI results (depth- and length-sizing performance, etc.), standardization of the UT report form, and computerization of UT results. In addition, the program can be further developed into a PSI/ISI data management expert system as part of a total PSI/ISI support system for Korean nuclear power plants.

  • PDF
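
A statistical reliability assessment of sizing performance, as mentioned for the WS-IDPIN modules, typically reduces to error statistics between measured and actual flaw sizes. The sketch below uses assumed metric names and is not the program's actual output:

```python
import math

def sizing_performance(measured, actual):
    """Sketch of a sizing-performance statistic for PSI/ISI results:
    mean error (bias) and RMSE between UT-measured and actual flaw
    depths or lengths. Metric names are illustrative."""
    errors = [m - a for m, a in zip(measured, actual)]
    n = len(errors)
    mean_err = sum(errors) / n                          # systematic bias
    rmse = math.sqrt(sum(e * e for e in errors) / n)    # overall scatter
    return {"mean_error": mean_err, "rmse": rmse}
```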

Accelerating Numerical Analysis of Reynolds Equation Using Graphic Processing Units (그래픽처리장치를 이용한 레이놀즈 방정식의 수치 해석 가속화)

  • Myung, Hun-Joo;Kang, Ji-Hoon;Oh, Kwang-Jin
    • Tribology and Lubricants / v.28 no.4 / pp.160-166 / 2012
  • This paper presents a Reynolds equation solver for hydrostatic gas bearings, implemented to run on graphics processing units (GPUs). The original analysis code for the central processing unit (CPU) was ported to the GPU using the Compute Unified Device Architecture (CUDA). The red-black Gauss-Seidel (RBGS) algorithm was employed instead of the original Gauss-Seidel algorithm for the iterative pressure solver, because the latter has data dependencies between neighboring nodes. The GPU program was tested on an nVidia GTX580 system and compared to the original CPU program on an AMD Llano system. In the iterative pressure calculation, the GPU program was 20-100 times faster than the original CPU code. A comparison of wall-clock times including all pre-/post-processing showed that the GPU code still delivered 4-12 times faster performance than the CPU code for our target problem.
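
The red-black ordering removes the neighbor dependency of plain Gauss-Seidel: nodes of one colour depend only on nodes of the other colour, so each half-sweep can be updated fully in parallel on a GPU. A minimal CPU sketch of one sweep for a 5-point stencil (a simplified stand-in for the discretized Reynolds equation, not the paper's solver):

```python
import numpy as np

def rbgs_step(p, rhs):
    """One red-black Gauss-Seidel sweep for a 5-point Laplacian on a
    uniform grid: all 'red' nodes ((i + j) even) are updated first,
    then all 'black' nodes, eliminating the data dependency between
    neighbors within each half-sweep."""
    for colour in (0, 1):
        for i in range(1, p.shape[0] - 1):
            for j in range(1, p.shape[1] - 1):
                if (i + j) % 2 == colour:
                    p[i, j] = 0.25 * (p[i-1, j] + p[i+1, j] +
                                      p[i, j-1] + p[i, j+1] - rhs[i, j])
    return p
```

On a GPU, each colour's update becomes one kernel launch in which every same-colour node is handled by an independent thread.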