• Title/Summary/Keyword: Parsing Algorithm


Gate Locations Optimization of an Automotive Instrument Panel for Minimizing Cavity Pressure (금형 내부 압력 최소화를 위한 자동차 인스트루먼트 패널의 게이트 위치 최적화)

  • Cho, Sung-Bin;Park, Chang-Hyun;Pyo, Byung-Gi;Cho, Dong-Hoon
    • Journal of the Korean Society for Precision Engineering / v.29 no.6 / pp.648-653 / 2012
  • Cavity pressure, an important factor in the injection molding process, should be minimized to enhance injection molding quality. In this study, we determined the locations of the valve gates that minimize the maximum cavity pressure. To solve this problem, we integrated MAPS-3D (Mold Analysis and Plastic Solution-3Dimension), a commercial injection molding analysis CAE tool, into PIAnO (Process Integration, Automation and Optimization), a commercial process integration and design optimization tool, using its file parsing method. In order to reduce the computational time for obtaining the optimal design solution, we performed an approximate optimization using a meta-model that replaced the expensive computer simulations. To generate the meta-model, computer simulations were performed at design points selected using an optimal Latin hypercube design as the experimental design. Then, we used the micro genetic algorithm provided in PIAnO to obtain the optimal design solution. Using the proposed design approach, the maximum cavity pressure was reduced by 17.3% compared to the initial design, which clearly shows the validity of the proposed approach.
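
A minimal sketch of the approximate-optimization workflow this abstract describes, assuming a hypothetical simulate_max_cavity_pressure() in place of the MAPS-3D analysis launched through PIAnO's file-parsing integration, and a simplified micro-GA loop rather than PIAnO's own implementation:

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor

def simulate_max_cavity_pressure(x):           # placeholder for one MAPS-3D run
    return float(np.sum((x - 0.3) ** 2))       # dummy response, illustration only

dim, n_samples = 4, 20                         # e.g., two valve gates, (x, y) each
X = qmc.LatinHypercube(d=dim, seed=0).random(n_samples)    # Latin hypercube design points
y = np.array([simulate_max_cavity_pressure(x) for x in X])

meta_model = GaussianProcessRegressor().fit(X, y)          # meta-model of cavity pressure
predict = lambda x: meta_model.predict(x[None])[0]

# Micro GA on the meta-model: tiny population, elitism, restart when converged.
rng = np.random.default_rng(0)
pop = rng.random((5, dim))
best = min(pop, key=predict)
for _ in range(200):
    if np.ptp(pop, axis=0).max() < 1e-3:                   # population collapsed -> restart
        pop = np.clip(best + 0.1 * rng.standard_normal((5, dim)), 0.0, 1.0)
    parents = pop[np.argsort([predict(x) for x in pop])[:2]]
    alpha = rng.random((5, dim))                           # blend crossover
    pop = np.clip(alpha * parents[0] + (1.0 - alpha) * parents[1], 0.0, 1.0)
    pop[0] = best                                          # elitism
    best = min(pop, key=predict)

print("approximate optimal gate locations:", best, "predicted pressure:", predict(best))
```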

Korean Syntactic Rules using Composite Labels (복합 레이블을 적용한 한국어 구문 규칙)

  • 김성용;이공주;최기선
    • Journal of KIISE: Software and Applications / v.31 no.2 / pp.235-244 / 2004
  • We propose a format of a binary phrase structure grammar with composite labels. The grammar adopts binary rules so that the dependency between two sub-trees can be represented in the label of the tree. The label of a tree is composed of two attributes, each extracted from one of the sub-trees, so that it can represent the compositional information of the tree. The composite label is generated from part-of-speech tags using an automatic labeling algorithm. Since the proposed rule description scheme is binary and uses only part-of-speech information, it can readily be used in dependency grammar and be applied to other languages as well. In best-1 context-free cross validation on a tree-tagged corpus of 31,080 trees, the labeled precision is 79.30%, which outperforms phrase structure grammar and dependency grammar by 5% and 4%, respectively. This shows that the proposed rule description scheme is effective for parsing Korean.
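
The composite-label idea can be illustrated with a toy sketch. The attribute-extraction rule below (take the POS tag of a subtree's rightmost leaf) is an assumption for illustration only; the paper's automatic labeling algorithm may extract a different attribute from each sub-tree:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tree:
    label: str                      # POS tag for leaves, "left+right" composite otherwise
    left: Optional["Tree"] = None
    right: Optional["Tree"] = None

def attribute(t: Tree) -> str:
    # Attribute a subtree hands to its parent: leaf -> its POS tag,
    # internal node -> the right-hand attribute of its composite label.
    return t.label if t.left is None else t.label.split("+")[1]

def combine(left: Tree, right: Tree) -> Tree:
    # Binary rule: the new label records one attribute from each subtree,
    # so the dependency between the two subtrees stays visible in the label.
    return Tree(f"{attribute(left)}+{attribute(right)}", left, right)

# 나는/NP 밥을/NP 먹었다/VV  ->  ((NP NP) VV)
np1, np2, vv = Tree("NP"), Tree("NP"), Tree("VV")
print(combine(combine(np1, np2), vv).label)   # NP+VV
```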

Vision Based Position Detection System of Used Oil Filter using Line Laser (라인형 레이저를 이용한 비전기반 차량용 폐오일필터 검출 시스템)

  • Xing, Xiong;Song, Un-Ji;Choi, Byung-Jae
    • Journal of the Korean Institute of Intelligent Systems / v.20 no.3 / pp.332-336 / 2010
  • There are many successful applications of image processing systems in industry. In this study, we propose a position detection system for used oil filters that uses a line laser. We have worked on the development of line lasers as interaction devices. A camera captures images of the surface of a used oil filter, the laser beam location is extracted from the captured image, and this location is then processed and used as a cursor position. We also discuss an algorithm that can distinguish the front part from the rear part. In particular, we present a robust and efficient linear detection algorithm that allows our system to be used under a variety of lighting conditions and reduces the amount of image parsing required to find the laser position by an order of magnitude.
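
A rough sketch of the laser-position extraction step, assuming a red line laser that appears brighter than the background; the threshold and the column-wise argmax strategy are illustrative, not the detection algorithm of the paper:

```python
import numpy as np

def laser_positions(frame_bgr: np.ndarray, min_intensity: int = 200) -> np.ndarray:
    """Return, for each image column, the row of the brightest red pixel (or -1)."""
    red = frame_bgr[:, :, 2].astype(np.int32)            # OpenCV-style BGR channel order
    rows = np.argmax(red, axis=0)                        # brightest row per column
    valid = red[rows, np.arange(red.shape[1])] >= min_intensity
    return np.where(valid, rows, -1)

# Parsing only the pixels near the laser stripe, instead of the whole image,
# is what cuts the image-parsing cost the abstract mentions.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[240, :, 2] = 255                                   # synthetic laser stripe
print(laser_positions(frame)[:5])                        # [240 240 240 240 240]
```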

News Video Shot Boundary Detection using Singular Value Decomposition and Incremental Clustering (특이값 분해와 점증적 클러스터링을 이용한 뉴스 비디오 샷 경계 탐지)

  • Lee, Han-Sung;Im, Young-Hee;Park, Dai-Hee;Lee, Seong-Whan
    • Journal of KIISE: Software and Applications / v.36 no.2 / pp.169-177 / 2009
  • In this paper, we propose a new shot boundary detection method that is optimized for news video story parsing. The method was designed to satisfy all of the following requirements: 1) minimizing the incorrect data in the data set for anchor shot detection by improving the recall ratio; 2) detecting abrupt cuts and gradual transitions with a single algorithm, so that the news video is divided into shots in one scan of the data set; and 3) classifying shots as static or dynamic, thereby reducing the search space for the subsequent anchor shot detection stage. The proposed method, based on singular value decomposition with incremental clustering and a Mercer kernel, has additional desirable features. Applying singular value decomposition removes noise and trivial variations in the video sequence, which improves separability. The Mercer kernel improves the detectability of shots that are not separable in the input space by mapping the data to a high-dimensional feature space. The experimental results illustrate the superiority of the proposed method with respect to recall and search space reduction for anchor shot detection.
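
The following sketch captures only the flavor of this pipeline: SVD-based noise reduction of frame histograms followed by a one-scan, leader-style incremental clustering whose new clusters mark shot boundaries. The kernel mapping and the paper's actual algorithm are not reproduced:

```python
import numpy as np

def shot_boundaries(histograms: np.ndarray, k: int = 10, threshold: float = 0.5):
    # Truncated SVD removes noise and trivial variation before clustering.
    _, _, Vt = np.linalg.svd(histograms - histograms.mean(0), full_matrices=False)
    feats = histograms @ Vt[:k].T
    boundaries, centroid, count = [], feats[0], 1
    for i in range(1, len(feats)):
        if np.linalg.norm(feats[i] - centroid) > threshold:   # far from current shot
            boundaries.append(i)                              # -> shot boundary
            centroid, count = feats[i], 1
        else:                                                 # update running centroid
            centroid = (centroid * count + feats[i]) / (count + 1)
            count += 1
    return boundaries

# Two synthetic "shots" of 30 identical colour histograms each.
hists = np.vstack([np.tile([1, 0, 0], (30, 1)), np.tile([0, 1, 0], (30, 1))]).astype(float)
print(shot_boundaries(hists, k=2))   # [30]
```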

Postprocessing of Inter-Frame Coded Images Based on Convex Projection and Regularization (POCS와 정규화를 기반으로 한 프레임간 압축 영상의 후처리)

  • Kim, Seong-Jin;Jeong, Si-Chang;Hwang, In-Gyeong;Baek, Jun-Gi
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.3 / pp.58-65 / 2002
  • In order to reduce blocking artifacts in inter-frame coded images, we propose a new image restoration algorithm that directly processes differential images before reconstruction. We note that the blocking artifact in inter-frame coded images is caused by both the 8×8 DCT and the 16×16 macroblock-based motion compensation, while that of intra-coded images is caused by the 8×8 DCT only. Based on this observation, we propose a new degradation model for differential images and the corresponding restoration algorithm, which utilizes additional constraints and convex sets for discontinuities inside blocks. The proposed restoration algorithm is a modified version of standard regularization that incorporates spatially adaptive lowpass filtering with consideration of edge directions, using a subset of the DCT coefficients. Most video coding standards adopt a hybrid structure of block-based motion compensation and the block discrete cosine transform (BDCT). For this reason, blocking artifacts occur both on block boundaries and in block interiors. For more complete removal of both kinds of blocking artifacts, the restored differential image must satisfy two constraints, namely directional discontinuities on block boundaries and in block interiors. These constraints are used to define the convex sets for restoring differential images.
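
A very loose sketch of a POCS-style iteration, alternating a smoothing constraint across 8×8 block boundaries with projection back onto a fidelity set around the decoded data; the paper's regularization, edge-direction adaptivity, and DCT-domain constraints are not reproduced here:

```python
import numpy as np

def smooth_block_boundaries(img: np.ndarray, block: int = 8) -> np.ndarray:
    out = img.astype(float).copy()
    for c in range(block, img.shape[1], block):           # vertical block boundaries
        out[:, c - 1:c + 1] = out[:, c - 2:c + 2].mean(axis=1, keepdims=True)
    for r in range(block, img.shape[0], block):           # horizontal block boundaries
        out[r - 1:r + 1, :] = out[r - 2:r + 2, :].mean(axis=0, keepdims=True)
    return out

def pocs_deblock(decoded: np.ndarray, delta: float = 4.0, iters: int = 10) -> np.ndarray:
    x = decoded.astype(float)
    for _ in range(iters):
        x = smooth_block_boundaries(x)                    # smoothness constraint set
        x = np.clip(x, decoded - delta, decoded + delta)  # fidelity constraint set
    return x
```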

A Study to Improve Recovery Ratio of Deleted File Using the Parsing Algorithm of the HFS+ Journal File (HFS+ 저널 파일 파싱 알고리즘을 이용한 삭제된 파일 복구 기법 향상 방안)

  • Bang, Seung Gyu;Jeon, Sang Jun;Kim, Do Hyun;Lee, Sang Jin
    • KIPS Transactions on Computer and Communication Systems / v.5 no.12 / pp.463-470 / 2016
  • With the growing use of Mac-based systems, the need for digital forensic techniques for these systems has been increasing. In the digital forensic analysis process, analysts sometimes have to recover deleted files to prove allegations when a system user has deliberately tried to remove evidence. Research on recovering deleted files from file systems has been conducted continuously, and HFS+, the file system of Mac-based systems, has also been studied. Carving techniques have primarily been used to recover deleted files from the HFS+ file system because, owing to a characteristic of HFS+, the metadata of a deleted file is overwritten by the metadata of other folders or files once the file is deleted. However, if the file content is stored in a fragmented state in the file system, carving techniques cannot recover the whole deleted file or even a part of it. In this paper, we describe a deleted-file recovery technique that uses the HFS+ journal. The technique suggested by existing research recovers deleted files from the metadata maintained in the HFS+ journal, but it misses certain files, and this limitation needs to be addressed. We therefore propose an algorithm that parses the HFS+ journal in detail, and we demonstrate that deleted files, including those previously excluded, can be recovered from the metadata extracted by this algorithm.
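
A small sketch of the first step such a parser needs: reading the HFS+ journal header. The field layout (magic 'JNLx', endianness marker, start/end/size of the circular buffer, block-list header size, checksum, header size) follows Apple's open-source journal code, but treat these offsets as an assumption rather than the paper's exact algorithm:

```python
import struct

def parse_journal_header(journal_bytes: bytes) -> dict:
    _, endian = struct.unpack_from("<II", journal_bytes, 0)
    fmt = "<IIQQQIII" if endian == 0x12345678 else ">IIQQQIII"
    keys = ("magic", "endian", "start", "end", "size",
            "blhdr_size", "checksum", "jhdr_size")
    header = dict(zip(keys, struct.unpack_from(fmt, journal_bytes, 0)))
    if header["magic"] != 0x4A4E4C78:          # 'JNLx'
        raise ValueError("not an HFS+ journal")
    return header

# hdr = parse_journal_header(open("/path/to/.journal", "rb").read(44))
# Transactions live in the circular buffer between hdr["start"] and hdr["end"];
# walking their block lists yields the catalog blocks holding deleted-file metadata.
```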

Determination of Valve Gate Open Timing for Minimizing Injection Pressure of an Automotive Instrument Panel (자동차용 인스트루먼트 패널의 사출압력 최소화를 위한 밸브 게이트 열림 시점 결정)

  • Cho, Sung-Bin;Park, Chang-Hyun;Pyo, Byung-Gi;Choi, Dong-Hoon
    • Transactions of the Korean Society of Automotive Engineers / v.20 no.4 / pp.46-51 / 2012
  • Injection pressure, an important factor in the filling process, should be minimized to enhance injection molding quality. Injection pressure can be controlled by the valve gate open timing. In this work, we determined the valve gate open timing that minimizes the injection pressure. To solve this design problem, we integrated MAPS-3D (Mold Analysis and Plastic Solution-3Dimension), a commercial injection molding CAE tool, into PIAnO (Process Integration, Automation and Optimization), a commercial PIDO (Process Integration and Design Optimization) tool, using the file parsing method. In order to reduce the computational cost, we performed an approximate optimization using meta-models that replaced expensive computer simulations. First, we carried out DOE (Design of Experiments) using the OLHD (Optimal Latin Hypercube Design) available in PIAnO. Then, we built Kriging models using the simulation results at the sampling points. Finally, we used the micro GA (Genetic Algorithm) available in PIAnO. Using the proposed design approach, the injection pressure was reduced by 13.7% compared to the initial design, which clearly shows the validity of the proposed design approach.
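
A hedged sketch of the Kriging meta-model step, with a hypothetical run_filling_simulation() standing in for the MAPS-3D filling analysis driven through PIAnO; the OLHD sampling and micro-GA search would wrap around this in the same way as in the cavity-pressure study above:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def run_filling_simulation(open_times):        # placeholder: returns injection pressure
    return float(np.sum((open_times - 0.5) ** 2))

X = np.random.default_rng(1).random((15, 3))   # normalized open timings of three valve gates
y = np.array([run_filling_simulation(x) for x in X])

kriging = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, y)
candidates = np.random.default_rng(2).random((2000, 3))
pred = kriging.predict(candidates)             # cheap surrogate replaces CAE runs
print("predicted best open timing:", candidates[np.argmin(pred)])
```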

An Area-efficient Design of SHA-256 Hash Processor for IoT Security (IoT 보안을 위한 SHA-256 해시 프로세서의 면적 효율적인 설계)

  • Lee, Sang-Hyun;Shin, Kyung-Wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.1 / pp.109-116 / 2018
  • This paper describes an area-efficient design of the SHA-256 hash function, which is widely used in various security protocols, including digital signatures, authentication codes, and key generation. The SHA-256 hash processor includes a padder block for padding and parsing the input message, so that it can operate without software preprocessing. The round function was designed with a 16-bit data-path that processes the 64 round computations in 128 clock cycles, resulting in optimized area-per-throughput (APT) performance as well as a small-area implementation. The SHA-256 hash processor was verified by FPGA implementation on a Virtex5 device, and the estimated throughput was 337 Mbps at a maximum clock frequency of 116 MHz. Synthesis for ASIC implementation using a 0.18-μm CMOS cell library shows that the design requires 13,251 gate equivalents (GEs) and can operate at clock frequencies up to 200 MHz.
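
The padder block's job corresponds to the standard SHA-256 padding and parsing rule (append 0x80, zero-fill to 56 bytes mod 64, then append the 64-bit big-endian bit length), sketched here in software for reference:

```python
import struct

def pad_and_parse(message: bytes):
    """Pad a message per SHA-256 and split it into 512-bit (64-byte) blocks."""
    bit_len = 8 * len(message)
    padded = message + b"\x80"
    padded += b"\x00" * ((56 - len(padded)) % 64)     # zero-fill up to the length field
    padded += struct.pack(">Q", bit_len)              # 64-bit big-endian bit length
    return [padded[i:i + 64] for i in range(0, len(padded), 64)]

blocks = pad_and_parse(b"abc")
print(len(blocks), len(blocks[0]))   # 1 64
```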

The Research on Data Concealing and Detection of SQLite Database (SQLite 데이터베이스 파일에 대한 데이터 은닉 및 탐지 기법 연구)

  • Lee, Jae-hyoung;Cho, Jaehyung;Hong, Kiwon;Kim, Jongsung
    • Journal of the Korea Institute of Information Security & Cryptology / v.27 no.6 / pp.1347-1359 / 2017
  • SQLite is a file-based DBMS (Database Management System) that provides transactions, and it is deployed on smartphones because it is appropriate for lightweight platforms. As smartphone usage increases, SQLite-related crimes can occur. In this paper, we propose a new concealment method for SQLite DB files and a detection method against it. Our concealment experiments show that it is possible to intentionally conceal 70 bytes in the DB file header and to conceal original data by inserting artificial pages. However, such concealment can be detected by parsing those 70 bytes according to the SQLite structure or by using the number of records and indexes. Based on these findings, we propose a detection algorithm for the concealed data.
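
One of the detection ideas can be sketched against the published SQLite file-format layout: the 100-byte header reserves the 20 bytes at offset 72 (they must be zero), and the in-header page count should agree with the file size, so violations of either hint at concealed data. The exact 70-byte parsing of the paper is not reproduced here:

```python
import struct

def check_sqlite_header(db_bytes: bytes, file_size: int) -> list:
    findings = []
    if db_bytes[:16] != b"SQLite format 3\x00":
        findings.append("bad magic string")
    page_size = struct.unpack_from(">H", db_bytes, 16)[0]
    page_size = 65536 if page_size == 1 else page_size        # 1 encodes 65536
    page_count = struct.unpack_from(">I", db_bytes, 28)[0]    # in-header database size
    if any(db_bytes[72:92]):                                  # reserved region, must be zero
        findings.append("non-zero bytes in reserved header area")
    if page_size * page_count != file_size:                   # extra (possibly hidden) pages
        findings.append("page count inconsistent with file size")
    return findings

# findings = check_sqlite_header(open("suspect.db", "rb").read(100), os.path.getsize("suspect.db"))
```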

Competition Relation Extraction based on Combining Machine Learning and Filtering (기계학습 및 필터링 방법을 결합한 경쟁관계 인식)

  • Lee, ChungHee;Seo, YoungHoon;Kim, HyunKi
    • Journal of KIISE / v.42 no.3 / pp.367-378 / 2015
  • This study was directed at the design of a hybrid algorithm for competition relation extraction. Previous works on relation extraction have relied on various lexical and deep parsing indicators and mostly utilize machine learning alone. We present a new algorithm that integrates machine learning with various filtering methods. Some simple but useful features for competition relation extraction are also introduced, and an optimal feature set is proposed. The goal of this paper is to increase the precision of competition relation extraction by combining supervised learning with various filtering methods. The filtering methods are employed to classify whether a competition relation occurs, to filter feature pairs using a distance restriction, and to classify whether a candidate entity pair is spam. For evaluation, a test set consisting of 2,565 sentences was examined. The proposed method was compared with a rule-based method and a general relation extraction method. The rule-based method achieved a positive precision of 0.812 and an accuracy of 0.568, while the general relation extraction method achieved 0.612 and 0.563, respectively. The proposed system obtained a positive precision of 0.922 and an accuracy of 0.713. These results demonstrate that the developed method is effective for competition relation extraction.
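
A schematic sketch of the hybrid idea, using a placeholder bag-of-words classifier and hypothetical filter parameters (MAX_TOKEN_DISTANCE, SPAM_ENTITIES); the paper's feature set and filters are not reproduced:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

MAX_TOKEN_DISTANCE = 10                            # hypothetical distance restriction
SPAM_ENTITIES = {"example.com"}                    # hypothetical spam-entity list

def is_competition(model, sentence: str, e1: str, e2: str) -> bool:
    tokens = sentence.split()
    if abs(tokens.index(e1) - tokens.index(e2)) > MAX_TOKEN_DISTANCE:
        return False                               # distance-restriction filter
    if e1 in SPAM_ENTITIES or e2 in SPAM_ENTITIES:
        return False                               # spam-pair filter
    return bool(model.predict([sentence])[0])      # machine-learning decision

train_sents = ["Samsung competes with Apple in smartphones", "Seoul is the capital of Korea"]
labels = [1, 0]
model = make_pipeline(CountVectorizer(), LogisticRegression()).fit(train_sents, labels)
print(is_competition(model, "Samsung competes with Apple in smartphones", "Samsung", "Apple"))
```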