• Title/Summary/Keyword: 자동변환기 (automatic converter)

Search Results: 294, Processing Time: 0.03 seconds

Design of Translator for generating Secure Java Bytecode from Thread code of Multithreaded Models (다중스레드 모델의 스레드 코드를 안전한 자바 바이트코드로 변환하기 위한 번역기 설계)

  • 김기태;유원희
    • Proceedings of the Korea Society for Industrial Systems Conference
    • /
    • 2002.06a
    • /
    • pp.148-155
    • /
    • 2002
  • Multithreaded models improve the efficiency of parallel systems by combining internal parallelism, asynchronous data availability, and the locality of the von Neumann model. Such a model executes thread code that is generated by a compiler, and the quality of that code depends on the generation method. However, multithreaded models have the drawback that their execution model is restricted to a specific platform. Java, by contrast, is platform independent, so if thread code can be translated into Java bytecode, the advantages of multithreaded models become available on many platforms. Java executes Java bytecode, the intermediate language of the Java virtual machine; in the proposed translator, Java bytecode serves as the intermediate representation and the Java virtual machine acts as the back end. However, Java bytecode translated from multithreaded models is not guaranteed to be secure. In this paper, the platform-independent thread code of multithreaded models is made executable on the Java virtual machine: we design and implement a translator that converts thread code into Java bytecode and checks the resulting bytecode for security problems. (A minimal illustrative sketch of such a bytecode check follows this entry.)

  • PDF
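
The security check described in the abstract centres on verifying properties of the generated bytecode. As a purely illustrative, hypothetical sketch (the paper's actual verifier and instruction set are not reproduced here), the following Python snippet checks one such property, operand-stack balance, for a toy instruction set:

```python
# Hypothetical toy verifier: checks that a simplified instruction sequence
# never pops an empty operand stack and ends with an empty stack.
# The instruction set and stack effects below are illustrative only.
STACK_EFFECT = {
    "iconst": +1,   # push an int constant
    "iload":  +1,   # push a local variable
    "iadd":   -1,   # pop two ints, push their sum
    "istore": -1,   # pop into a local variable
    "ireturn": -1,  # pop the return value
}

def check_stack_balance(instructions, max_stack=8):
    depth = 0
    for pc, op in enumerate(instructions):
        effect = STACK_EFFECT.get(op)
        if effect is None:
            return False, f"unknown opcode {op!r} at {pc}"
        # Opcodes that pop must find enough operands on the stack.
        if op == "iadd" and depth < 2:
            return False, f"stack underflow at {pc} ({op})"
        if op in ("istore", "ireturn") and depth < 1:
            return False, f"stack underflow at {pc} ({op})"
        depth += effect
        if depth > max_stack:
            return False, f"stack overflow at {pc}"
    return depth == 0, "ok" if depth == 0 else f"non-empty stack ({depth}) at end"

if __name__ == "__main__":
    good = ["iconst", "iload", "iadd", "ireturn"]
    bad = ["iadd", "ireturn"]          # pops an empty stack
    print(check_stack_balance(good))   # (True, 'ok')
    print(check_stack_balance(bad))    # (False, 'stack underflow at 0 (iadd)')
```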

Implementation of Form-based XML Document Editor (Form 기반의 XML 문서 편집기 구현)

  • Go, Tak-Hyeon;Hwang, In-Jun
    • The KIPS Transactions:PartD
    • /
    • v.9D no.2
    • /
    • pp.267-276
    • /
    • 2002
  • Existing XML editors, which are usually tree-based, require users to have knowledge of XML. This requirement should be removed so that any user can create XML documents easily. In this paper, we developed a new XML editor that provides both the usual tree-based interface and a form-based interface derived from the original document. Editing XML documents through forms is especially effective in places such as enterprises or municipal offices, where large numbers of documents with the same format need to be generated. The forms, which are HTML documents, are generated automatically through XSLT from a template XML document and an XSL stylesheet, and are displayed in the built-in HTML browser. When a form is filled out by the user, it is transformed into the corresponding XML document and stored in the XML repository.
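
As a rough illustration of the form-generation step described above (template XML plus XSL stylesheet transformed into an HTML form), the following sketch uses Python's lxml; the template document and stylesheet are hypothetical stand-ins, not the paper's actual documents:

```python
# Minimal sketch of generating an HTML form from a template XML document via
# XSLT. The template and stylesheet are hypothetical placeholders.
from lxml import etree

template_xml = etree.XML("<report><title/><author/><summary/></report>")

# Render each empty element of the template as a labelled text input.
form_xsl = etree.XML("""
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/*">
    <html><body><form>
      <xsl:for-each select="*">
        <p>
          <xsl:value-of select="name()"/>:
          <input type="text" name="{name()}"/>
        </p>
      </xsl:for-each>
      <input type="submit"/>
    </form></body></html>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(form_xsl)
html_form = transform(template_xml)
print(str(html_form))  # HTML form to be shown in the built-in browser
```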

A Design of Ultra-sonic Range Meter Front-end IC (초음파 거리 측정회로용 프론트-엔드 IC의 설계)

  • Lee, Jun-Sung
    • 전자공학회논문지 IE
    • /
    • v.47 no.4
    • /
    • pp.1-9
    • /
    • 2010
  • This paper describes an ultrasonic signal-processing front-end IC for distance range meters and body detectors. The burst-shaped ultrasonic signal is generated by a self-oscillator with a frequency range of about 40[kHz]-300[kHz] and is transmitted through a piezo resonator; another piezo device transduces the received ultrasonic signal into electrical signals. The front-end IC contains a low-noise amplifier, band-pass filter, burst detector, time-pulse generator, and so on. The IC introduces two new schemes to improve function and performance: self frequency control (SFC) and a variable gain control (VGC) amplifier. The dimensions and number of external parts are minimized to reduce hardware size. The device has been fabricated in a 0.6[um] double-poly, double-metal 40[V] high-voltage CMOS process.
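
The range-meter application described above ultimately converts the measured echo delay into a distance. A minimal sketch of that host-side computation is shown below; the speed-of-sound approximation and the example numbers are generic assumptions, not taken from the paper:

```python
# Minimal sketch of the range computation a host MCU would perform from the
# echo delay reported by an ultrasonic front-end. The speed-of-sound formula
# and the example delay are generic assumptions.
def speed_of_sound(temp_c: float) -> float:
    """Approximate speed of sound in air [m/s] at temperature temp_c [deg C]."""
    return 331.3 + 0.606 * temp_c

def echo_delay_to_distance(delay_s: float, temp_c: float = 20.0) -> float:
    """Round-trip echo delay [s] -> one-way distance [m]."""
    return speed_of_sound(temp_c) * delay_s / 2.0

if __name__ == "__main__":
    # A 5.83 ms round trip at 20 degC corresponds to roughly 1 m.
    print(f"{echo_delay_to_distance(5.83e-3):.3f} m")
```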

Ship Detection by Satellite Data: Radiometric and Geometric Calibrations of RADARSAT Data (위성 데이터에 의한 선박 탐지: RADARSAT의 대기보정과 기하보정)

  • Yang, Chan-Su
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.10 no.1 s.20
    • /
    • pp.1-7
    • /
    • 2004
  • RADARSAT is one of many possible data sources that can play an important role in marine surveillance, including ship detection, because radar sensors have two primary advantages: all-weather and day-or-night imaging. However, atmospheric effects on SAR imaging cannot be ignored, and any remote sensing image contains various geometric distortions. In this study, radiometric and geometric calibrations of RADARSAT data are attempted using SGX products georeferenced at level 1; even comparison of the near- and far-range sections of the same image requires such calibration. Radiometric calibration is performed by compensating for the effects of the local illuminated area and the incidence angle on the local backscatter, and the method of converting pixel DNs to beta nought and sigma nought is investigated. Finally, automatic geometric calibration based on the four pixels given in the header file is compared with a marine chart. The errors in the latitude and longitude directions are 300 m and 260 m, respectively. It can be concluded that this error level is acceptable for open-sea applications and can be further corrected using a ground control point. (A hedged sketch of the DN conversion follows this entry.)

  • PDF
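
As a hedged illustration of the DN-to-beta-nought and sigma-nought conversion investigated above, the following sketch applies the generic form of that conversion; the gain, offset, and incidence-angle values are placeholders, and real products supply them via lookup tables and header metadata:

```python
# Hedged sketch of the DN -> beta nought -> sigma nought conversion.
# Gain/offset values and the incidence angle below are placeholders.
import numpy as np

def dn_to_beta0_db(dn, gain, offset=0.0):
    """Radar brightness (beta nought) in dB from pixel digital numbers."""
    return 10.0 * np.log10((dn.astype(float) ** 2 + offset) / gain)

def beta0_to_sigma0_db(beta0_db, incidence_deg):
    """Backscatter coefficient (sigma nought) in dB, corrected for the
    local incidence angle."""
    return beta0_db + 10.0 * np.log10(np.sin(np.radians(incidence_deg)))

if __name__ == "__main__":
    dn = np.array([[1200, 4500], [300, 9000]], dtype=np.uint16)
    beta0 = dn_to_beta0_db(dn, gain=8.5e6)        # placeholder gain
    sigma0 = beta0_to_sigma0_db(beta0, incidence_deg=35.0)
    print(sigma0)
```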

Research on the modified algorithm for improving accuracy of Random Forest classifier which identifies automatically arrhythmia (부정맥 증상을 자동으로 판별하는 Random Forest 분류기의 정확도 향상을 위한 수정 알고리즘에 대한 연구)

  • Lee, Hyun-Ju;Shin, Dong-Kyoo;Park, Hee-Won;Kim, Soo-Han;Shin, Dong-Il
    • The KIPS Transactions:PartB
    • /
    • v.18B no.6
    • /
    • pp.341-348
    • /
    • 2011
  • ECG (electrocardiogram), a type of bio-signal, is usually studied with classification algorithms, most commonly SVM (Support Vector Machine) and MLP (Multilayer Perceptron). This study instead modified the Random Forest algorithm on the basis of the signal characteristics and compared the accuracy of the modified algorithm with that of SVM and MLP to demonstrate its ability. The R-R intervals extracted from the ECG are used as features, and the results of previous studies that used the same data are also compared. As a result, the modified RF classifier showed higher accuracy than the SVM classifier, the MLP classifier, and the results of other studies. A band-pass filter is used to extract the R-R intervals in the pre-processing stage; however, the wavelet transform, median filter, and finite impulse response filter are also often used in ECG experiments. Future work should address the selection of filters that efficiently remove baseline wander in the pre-processing stage and methods that correctly extract the R-R intervals.
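
A minimal sketch of the kind of pipeline described above (band-pass pre-processing, R-R interval features, Random Forest classification) is given below; the signal, labels, and filter settings are synthetic placeholders, and the paper's specific modification of the Random Forest algorithm is not reproduced:

```python
# Minimal sketch: band-pass filter the ECG, detect R peaks, form R-R interval
# features, and train a Random Forest. All data here are synthetic placeholders.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks
from sklearn.ensemble import RandomForestClassifier

def bandpass(ecg, fs, low=0.5, high=40.0, order=3):
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, ecg)

def rr_features(ecg, fs):
    peaks, _ = find_peaks(ecg, distance=int(0.4 * fs), prominence=0.5)
    rr = np.diff(peaks) / fs                      # R-R intervals in seconds
    return np.column_stack([rr[:-1], rr[1:]])     # (previous RR, current RR)

if __name__ == "__main__":
    fs = 360
    t = np.arange(0, 60, 1 / fs)
    ecg = np.sin(2 * np.pi * 1.2 * t) ** 15       # crude R-peak-like signal
    X = rr_features(bandpass(ecg, fs), fs)
    y = np.random.randint(0, 2, len(X))           # placeholder beat labels
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    print(clf.score(X, y))
```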

Adaptive Digital Predistorter Using the NLMS Algorithm for the Nonlinear Compensation of the OFDM Communication System (OFDM통신시스템의 비선형 왜곡 보상을 위한 NLMS 알고리즘 방식의 디지털 적응 전치 왜곡기)

  • Kim Sang-Woo;Hieu Nguyen Thanh;Kang Byoung-Moo;Ryu Heung-Gyoon
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.16 no.4 s.95
    • /
    • pp.389-396
    • /
    • 2005
  • In this paper, we propose a pre-distortion method using the NLMS (Normalized Least Mean Square) algorithm to cope with the high PAPR (Peak-to-Average Power Ratio) problem in OFDM communication systems. The proposed scheme estimates the distortion characteristic of the HPA and inverts that characteristic to compensate for the distortion. The adaptive NLMS pre-distorter can therefore track the varying nonlinear characteristic of the HPA, even when that characteristic changes with temperature variation or aging. Performance analysis shows that the SNR efficiency of the NLMS pre-distorter is about 0.5 dB lower than that of a conventional numerical non-adaptive pre-distorter when the IBO (Input Back-Off) is 0 dB. However, the NLMS pre-distorter is preferable overall, because the two pre-distorters perform similarly at IBO above 3 dB and the NLMS pre-distorter maintains constant performance even when the HPA characteristic changes.
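
As an illustration of NLMS adaptation for pre-distortion, the sketch below trains a memoryless polynomial post-inverse of a hypothetical HPA model (Saleh AM/AM) and then copies it in front as a pre-distorter; the HPA model, polynomial order, and step size are assumptions, not the paper's configuration:

```python
# Hedged sketch of NLMS adaptation of a memoryless polynomial pre-distorter
# via the indirect-learning structure. All model parameters are illustrative.
import numpy as np

def hpa_saleh(x, alpha_a=2.1587, beta_a=1.1517):
    """Saleh AM/AM model of an HPA (placeholder nonlinearity)."""
    r = np.abs(x)
    return (alpha_a * r / (1 + beta_a * r ** 2)) * np.exp(1j * np.angle(x))

def nlms_postinverse(x, y, order=5, mu=0.5, eps=1e-6):
    """Fit w so that sum_k w_k * y*|y|^(2k) approximates x (NLMS updates)."""
    n_terms = (order + 1) // 2            # odd-order terms: 1, 3, 5, ...
    w = np.zeros(n_terms, dtype=complex)
    for xn, yn in zip(x, y):
        u = np.array([yn * np.abs(yn) ** (2 * k) for k in range(n_terms)])
        e = xn - w @ u                    # a-priori error
        # np.vdot(u, u).real == ||u||^2, the NLMS normalisation term
        w += mu * np.conj(u) * e / (np.vdot(u, u).real + eps)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = (rng.normal(size=4096) + 1j * rng.normal(size=4096)) * 0.2  # OFDM-like
    y = hpa_saleh(x)
    w = nlms_postinverse(x, y)
    x_pd = np.array([w @ np.array([s * abs(s) ** (2 * k) for k in range(len(w))])
                     for s in x])          # pre-distorted input (copied inverse)
    print(np.mean(np.abs(hpa_saleh(x_pd) - x) ** 2))  # residual distortion power
```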

Hemispheric Characteristics of Processing Hangul and Color (대뇌반구간 한글 단어처리와 색채처리 특성)

  • Han, Kwang-Hee;Kham, Kee-Taek
    • Annual Conference on Human and Language Technology
    • /
    • 1994.11a
    • /
    • pp.57-63
    • /
    • 1994
  • To investigate the characteristics of human information processing, the processing of color and of words was analyzed separately for each cerebral hemisphere. For a single stimulus carrying two attributes, a word and a color, judgments of each attribute were examined by hemisphere using response keys. Analysis of word judgments and color judgments by hemisphere showed no interhemispheric asymmetry in either color or word processing, but color judgments were made faster than word judgments, confirming that color is a more basic stimulus attribute than words. When both attributes were present, processing one attribute was automatically influenced by the other, but the degree of this influence did not differ between hemispheres. However, color interfered with word processing more than words interfered with color processing, a result opposite to previous Stroop findings, which was attributed to the characteristics of the task. The absence of a hemispheric difference in word processing was discussed in relation to the visual characteristics of Hangul. The finding that one stimulus attribute automatically influences the other, with no hemispheric difference in the size of this effect, also contradicts earlier studies of hemispheric Stroop effects. It is therefore argued that in more general situations where stimulus attributes can influence each other, no hemispheric asymmetry appears in the automatic influence of one attribute on the processing of another, and that the Stroop effect is a hemispheric-asymmetry effect that arises only in the special case where the two stimulus attributes are closely related.

  • PDF

Corpus-based Korean Text-to-speech Conversion System (콜퍼스에 기반한 한국어 문장/음성변환 시스템)

  • Kim, Sang-hun;Park, Jun;Lee, Young-jik
    • The Journal of the Acoustical Society of Korea
    • /
    • v.20 no.3
    • /
    • pp.24-33
    • /
    • 2001
  • This paper describes a baseline implementation of a corpus-based Korean TTS system. Conventional TTS systems that use small speech corpora still generate machine-like synthetic speech. To overcome this problem, we introduce a corpus-based TTS system that can generate natural synthetic speech without prosodic modification. The corpus should contain the natural prosody of the source speech and multiple instances of each synthesis unit. To obtain phone-level synthesis units, we train a speech recognizer on the target speech and then perform automatic phoneme segmentation. We also detect the fine pitch period using laryngograph signals, which is used for prosodic feature extraction. For break-strength allocation, four levels of break indices are defined according to pause length and attached to phones to reflect prosodic variation at phrase boundaries; break strength in text is predicted using statistical information from POS (part-of-speech) sequences. The best triphone sequences are selected by a Viterbi search that minimizes the accumulated Euclidean distance of the concatenation distortion. To obtain high-quality synthetic speech suitable for commercial use, we introduce a domain-specific database; adding it to the general-domain database greatly improves the quality of synthetic speech in that domain. In subjective evaluation, the new Korean corpus-based TTS system shows better naturalness than the conventional demisyllable-based one. (A minimal sketch of Viterbi-style unit selection follows this entry.)

  • PDF
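
A minimal sketch of Viterbi-style unit selection that minimizes the accumulated Euclidean concatenation distortion, as described above, follows; the candidate units and feature vectors are random placeholders:

```python
# Minimal sketch of unit selection: for each target phone there are several
# candidate units with acoustic feature vectors, and we pick the path that
# minimises the accumulated Euclidean concatenation distortion.
import numpy as np

def select_units(candidates):
    """candidates: list over target phones; each element is an (n_i, d) array
    of feature vectors for that phone's candidate units. Returns the chosen
    candidate index per phone."""
    n_phones = len(candidates)
    cost = [np.zeros(len(candidates[0]))]
    back = []
    for t in range(1, n_phones):
        prev, cur = candidates[t - 1], candidates[t]
        # Pairwise Euclidean concatenation cost between all prev/cur units.
        join = np.linalg.norm(prev[:, None, :] - cur[None, :, :], axis=-1)
        total = cost[-1][:, None] + join          # accumulate along the path
        back.append(np.argmin(total, axis=0))     # best predecessor per unit
        cost.append(np.min(total, axis=0))
    # Backtrack the cheapest path.
    path = [int(np.argmin(cost[-1]))]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cands = [rng.normal(size=(rng.integers(2, 6), 12)) for _ in range(8)]
    print(select_units(cands))   # one candidate index per target phone
```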

IMToon: Image-based Cartoon Authoring System using Image Processing (IMToon: 영상처리를 활용한 영상기반 카툰 저작 시스템)

  • Seo, Banseok;Kim, Jinmo
    • Journal of the Korea Computer Graphics Society
    • /
    • v.23 no.2
    • /
    • pp.11-22
    • /
    • 2017
  • This study proposes IMToon (IMage-based carToon), an image-based cartoon authoring system built on image-processing algorithms. IMToon allows general users to easily and efficiently produce the frames that make up a cartoon from images. The authoring system is designed around two main functions: a cartoon effector and an interactive story editor. The cartoon effector automatically converts input images into cartoon-style images through image-based cartoon shading and outline drawing steps. In the shading step, the user supplies images of the desired scenes; the brightness information is separated from the color model of the input images, simplified into a desired number of shading steps, and recomposed as a cartoon-style image. The final cartoon-style image is then created in the outline drawing step, in which outlines obtained by edge detection are applied to the shaded image. The interactive story editor is used to enter speech balloons and captions in a dialog structure, creating a complete cartoon scene that delivers a story, as in webtoons or comic books. In addition, the cartoon effector is extended to video, so that it can be applied to videos as well as still images. Finally, various experiments verify that users can easily and efficiently produce the cartoons they want from images with the proposed IMToon system.
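
A rough sketch of the two cartoon-effector steps (brightness quantization for shading, then outlines from edge detection) is shown below using OpenCV; the function name, band count, and Canny thresholds are illustrative choices, not the paper's tuned parameters:

```python
# Hedged sketch: quantise the brightness channel into a few shading bands,
# then overlay outlines from edge detection. Parameters are illustrative.
import cv2
import numpy as np

def cartoonize(bgr, shade_levels=4, canny_lo=80, canny_hi=160):
    # Separate brightness from colour (HSV value channel).
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    # Simplify brightness into a small number of shading steps.
    step = 256 // shade_levels
    v_quant = np.clip((v // step) * step + step // 2, 0, 255).astype(np.uint8)
    shaded = cv2.cvtColor(cv2.merge([h, s, v_quant]), cv2.COLOR_HSV2BGR)
    # Outline drawing: edges from the smoothed grey image, drawn in black.
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.medianBlur(gray, 5), canny_lo, canny_hi)
    shaded[edges > 0] = (0, 0, 0)
    return shaded

if __name__ == "__main__":
    img = cv2.imread("scene.jpg")         # placeholder input frame
    if img is not None:
        cv2.imwrite("scene_toon.jpg", cartoonize(img))
```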

Automated Analyses of Ground-Penetrating Radar Images to Determine Spatial Distribution of Buried Cultural Heritage (매장 문화재 공간 분포 결정을 위한 지하투과레이더 영상 분석 자동화 기법 탐색)

  • Kwon, Moonhee;Kim, Seung-Sep
    • Economic and Environmental Geology
    • /
    • v.55 no.5
    • /
    • pp.551-561
    • /
    • 2022
  • Geophysical exploration methods are very useful for generating high-resolution images of underground structures, and they can be applied to the investigation of buried cultural properties and the determination of their exact locations. In this study, image feature extraction and image segmentation methods were applied to automatically distinguish the structures of buried relics in high-resolution ground-penetrating radar (GPR) images obtained at the center of the Silla Kingdom in Gyeongju, South Korea. The main purpose of the feature extraction analysis is to identify the circular features of building remains and the linear features of ancient roads and fences. Feature extraction is implemented with the Canny edge detection and Hough transform algorithms: the Hough transform is applied to the edge image produced by the Canny algorithm to determine the locations of the target features. However, the Hough transform requires different parameter settings for each survey sector. For image segmentation, we applied a connected-component labeling algorithm and object-based image analysis using the Orfeo Toolbox (OTB) in QGIS. The connected-component labeled image shows that the signals associated with the target buried relics are effectively connected and labeled; however, multiple labels are often assigned to a single structure in the given GPR data. Object-based image analysis was conducted using Large-Scale Mean-Shift (LSMS) image segmentation: a vector layer containing the pixel values for each segmented polygon was estimated first and then used to build a training-validation dataset by assigning each polygon either to a class associated with the buried relics or to a background class. With a Random Forest classifier, the polygons of the LSMS segmentation layer can be successfully classified into those of the buried relics and those of the background. We therefore propose that the automatic classification methods applied here to GPR images of buried cultural heritage can provide consistent analysis results for planning excavation processes.
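
As a hedged sketch of the feature-extraction and labeling steps described above (Canny edges, Hough transform for linear features, connected-component labeling), the following OpenCV snippet could serve as a starting point; the file name, thresholds, and kernel sizes are placeholders that would need the per-sector tuning noted in the abstract:

```python
# Hedged sketch: Canny edges + probabilistic Hough transform for linear
# features, and connected-component labelling of the thresholded amplitude map.
# File name and parameter values are placeholders.
import cv2
import numpy as np

def extract_features(gpr_slice_path):
    img = cv2.imread(gpr_slice_path, cv2.IMREAD_GRAYSCALE)
    blur = cv2.GaussianBlur(img, (5, 5), 0)

    # Linear features (e.g. road/fence remains) via Canny + Hough transform.
    edges = cv2.Canny(blur, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=40, maxLineGap=10)

    # Group strong reflections into labelled objects (possible building remains).
    _, binary = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n_labels, labels = cv2.connectedComponents(binary)

    return edges, lines, n_labels, labels

if __name__ == "__main__":
    edges, lines, n_labels, labels = extract_features("gpr_depth_slice.png")
    print(f"{0 if lines is None else len(lines)} line segments, "
          f"{n_labels - 1} connected components")
```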