• Title/Summary/Keyword: Automated Detection


An automated memory error detection technique using source code analysis in C programs (C언어 기반 프로그램의 소스코드 분석을 이용한 메모리 접근오류 자동검출 기법)

  • Cho, Dae-Wan;Oh, Seung-Uk;Kim, Hyeon-Soo
    • The KIPS Transactions:PartD / v.14D no.6 / pp.675-688 / 2007
  • Memory access errors occur frequently in C programs, and a number of tools and research efforts have attempted to detect them automatically. However, existing approaches suffer from one or more of the following problems: inability to detect all memory errors, changes to the memory allocation mechanism, incompatibility with libraries, and excessive performance overhead. In this paper, we propose a new method that addresses these problems and present an experimental comparison with previous work. Our approach consists of two phases. The first transforms the source code at compile time by inserting instrumentation; the second detects memory errors at run time using a bitmap that maintains information about memory allocation. By analyzing source code instead of binary code, our approach improves error-detection ability over binary-analysis-based techniques and enhances performance in both space and time. In addition, it is fully compatible with shared libraries and requires no modification of the memory allocation mechanism.
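
The bitmap idea in this abstract can be sketched in a few lines. The sketch below is a hypothetical illustration (the class and method names are ours, not the paper's): one bit per byte of a simulated heap records whether that byte is currently allocated, and an instrumented access consults the bitmap before touching memory.

```python
class AllocationBitmap:
    """One bit per heap byte: 1 = allocated, 0 = free."""

    def __init__(self, heap_size):
        self.bits = bytearray((heap_size + 7) // 8)

    def _set(self, addr, value):
        byte, bit = divmod(addr, 8)
        if value:
            self.bits[byte] |= 1 << bit
        else:
            self.bits[byte] &= ~(1 << bit)

    def allocate(self, addr, size):
        for a in range(addr, addr + size):
            self._set(a, True)

    def free(self, addr, size):
        for a in range(addr, addr + size):
            self._set(a, False)

    def is_valid(self, addr):
        # An instrumented load/store would call this before accessing addr.
        byte, bit = divmod(addr, 8)
        return bool(self.bits[byte] >> bit & 1)

bm = AllocationBitmap(1024)
bm.allocate(100, 16)              # block occupies bytes 100..115
print(bm.is_valid(100))           # True: inside the allocated block
print(bm.is_valid(116))           # False: one past the end, an out-of-bounds access
```

Because the bitmap costs only one bit per heap byte, the space overhead stays low, which matches the space/time claims in the abstract.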

Automated Detecting and Tracing for Plagiarized Programs using Gumbel Distribution Model (굼벨 분포 모델을 이용한 표절 프로그램 자동 탐색 및 추적)

  • Ji, Jeong-Hoon;Woo, Gyun;Cho, Hwan-Gue
    • The KIPS Transactions:PartA / v.16A no.6 / pp.453-462 / 2009
  • Studies on software plagiarism detection, prevention, and judgement have become widespread due to growing interest in the protection and authentication of software intellectual property. Many previous studies focus on comparing all pairs of submitted codes using attribute counting, token patterns, program parse trees, and similarity-measuring algorithms. It is important to provide a clear-cut model for distinguishing plagiarism from collaboration. This paper proposes a source code clustering algorithm using a probability model on an extreme value distribution. First, we propose an asymmetric distance measure pdist($P_a$, $P_b$) to measure the similarity of $P_a$ and $P_b$. We then construct the Plagiarism Direction Graph (PDG) for a given program set using pdist($P_a$, $P_b$) as edge weights, and transform the PDG into a Gumbel Distance Graph (GDG) model, since we found that the pdist($P_a$, $P_b$) score distribution closely follows the well-known Gumbel distribution. Second, we newly define pseudo-plagiarism, a kind of virtual plagiarism forced by a very strong functional requirement in the specification. We conducted experiments with 18 groups of programs (more than 700 source codes) collected from the ICPC (International Collegiate Programming Contest) and KOI (Korean Olympiad for Informatics) programming contests. The experiments showed that most plagiarized codes could be detected with high sensitivity and that our algorithm successfully separates real plagiarism from pseudo-plagiarism.
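
The two ingredients named above can be illustrated together. In the sketch below, the asymmetric pdist is a hypothetical stand-in based on longest-common-subsequence coverage (our assumption; the paper does not reproduce its definition here), and the Gumbel CDF is the standard closed form used to judge how extreme a similarity score is.

```python
import math

def pdist(pa, pb):
    # Hypothetical asymmetric measure: the fraction of pa's tokens NOT
    # covered by the longest common subsequence with pb. pdist(pa, pb) is
    # near 0 when pb contains essentially all of pa, so the measure is
    # directional, like the paper's edge weights.
    m, n = len(pa), len(pb)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if pa[i] == pb[j] \
                else max(dp[i][j + 1], dp[i + 1][j])
    return 1.0 - dp[m][n] / m

def gumbel_cdf(x, mu, beta):
    # Standard Gumbel CDF: probability a score this small arises by chance.
    return math.exp(-math.exp(-(x - mu) / beta))

a = "int x = read(); print(x * 2);".split()
b = "int x = read(); int y = 1; print(x * 2);".split()
print(round(pdist(a, b), 2))   # 0.0 : b covers all of a (suspicious direction)
print(round(pdist(b, a), 2))   # 0.36: a does not cover all of b
```

The asymmetry is what lets a Plagiarism Direction Graph point from the likely copy toward the likely original.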

Research on text mining based malware analysis technology using string information (문자열 정보를 활용한 텍스트 마이닝 기반 악성코드 분석 기술 연구)

  • Ha, Ji-hee;Lee, Tae-jin
    • Journal of Internet Computing and Services / v.21 no.1 / pp.45-55 / 2020
  • Due to the development of information and communication technology, the number of new and variant malicious codes is increasing rapidly every year, and various types of malware are spreading with the growth of Internet of Things and cloud computing technology. In this paper, we propose a malware analysis method based on string information that can be used regardless of the operating system environment and that captures library-call information related to malicious behavior. Attackers can easily create malware from existing code or with automated authoring tools, and the generated malware operates similarly to existing malware. Since most strings extractable from malicious code are closely related to malicious behavior, we weight these string features using a text-mining-based method to extract effective features for malware analysis. Based on the processed data, models are built with various machine learning algorithms and evaluated on detecting malicious status and classifying malicious groups. The approach was compared and verified against files used on both Windows and Linux operating systems. The accuracy of malicious detection is about 93.5%, and the accuracy of group classification is about 90%. The proposed technique has a wide range of applications because it is relatively simple, fast, and operating-system independent; a single model suffices, since no per-group model needs to be built when classifying malicious groups. In addition, since the string information is extracted through static analysis, it can be processed faster than analysis methods that execute the code directly.
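
The string-weighting step can be illustrated with a minimal TF-IDF computation. The function and the sample strings below are hypothetical (the abstract does not give the exact weighting formula); the point is that ubiquitous strings are driven toward zero weight while rare, behavior-linked strings dominate.

```python
import math
from collections import Counter

def tfidf(docs):
    # docs: one list of extracted strings per sample. Weight each string by
    # term frequency times inverse document frequency, so strings common to
    # all samples (boilerplate DLL names) carry no discriminative weight.
    df = Counter()
    for d in docs:
        df.update(set(d))
    n = len(docs)
    out = []
    for d in docs:
        tf = Counter(d)
        out.append({t: tf[t] / len(d) * math.log(n / df[t]) for t in tf})
    return out

samples = [  # hypothetical strings pulled from three binaries
    ["CreateRemoteThread", "kernel32.dll", "http://evil.example/c2"],
    ["kernel32.dll", "printf", "msvcrt.dll"],
    ["kernel32.dll", "CreateRemoteThread", "VirtualAllocEx"],
]
weights = tfidf(samples)
print(weights[0]["kernel32.dll"])  # 0.0 - present in every sample, log(3/3) = 0
```

Such weight vectors are then what a downstream classifier consumes for detection and group classification.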

Sensor Fault Detection Scheme based on Deep Learning and Support Vector Machine (딥 러닝 및 서포트 벡터 머신기반 센서 고장 검출 기법)

  • Yang, Jae-Wan;Lee, Young-Doo;Koo, In-Soo
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.18 no.2 / pp.185-195 / 2018
  • As industrial machinery has become increasingly automated in recent years, managing and maintaining it is of paramount importance. When a fault occurs in sensors attached to a machine, the machine may malfunction and, further, cause severe damage along the process line. To prevent this, sensor faults should be monitored, diagnosed, and classified properly. In this paper, we propose a sensor fault detection scheme based on SVM and CNN to detect and classify typical sensor faults such as erratic, drift, hard-over, spike, and stuck faults. Time-domain statistical features are used for training and testing, and a genetic algorithm selects the subset of optimal features. To classify multiple sensor faults, a multi-layer SVM is used, and an ensemble technique is used for the CNN. As a result, the SVM using the feature subset selected by the genetic algorithm outperforms the SVM using all features; however, the performance of the CNN is superior to that of the SVM.
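
A minimal sketch of the time-domain feature extraction step, assuming a typical feature set (the abstract does not enumerate the exact features used). The crest factor illustrates why such features separate fault types: a spike fault inflates the peak far more than the RMS.

```python
import math
import statistics

def time_domain_features(window):
    # Statistical features of one sensor window; a hypothetical but common
    # choice for fault classification.
    mu = statistics.fmean(window)
    sd = statistics.pstdev(window)
    rms = math.sqrt(sum(x * x for x in window) / len(window))
    peak = max(abs(x) for x in window)
    return {
        "mean": mu,
        "std": sd,
        "rms": rms,
        "peak": peak,
        "crest": peak / rms if rms else 0.0,  # spike faults inflate this
    }

normal = [math.sin(i / 5) for i in range(100)]  # healthy sensor trace
spiky = normal[:]
spiky[50] += 10.0                               # inject a spike fault
print(time_domain_features(spiky)["crest"] >
      time_domain_features(normal)["crest"])    # True
```

A genetic algorithm would then search over bitmasks of such features, keeping the subset that maximizes classifier accuracy.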

Current status and future plans of KMTNet microlensing experiments

  • Chung, Sun-Ju;Gould, Andrew;Jung, Youn Kil;Hwang, Kyu-Ha;Ryu, Yoon-Hyun;Shin, In-Gu;Yee, Jennifer C.;Zhu, Wei;Han, Cheongho;Cha, Sang-Mok;Kim, Dong-Jin;Kim, Hyun-Woo;Kim, Seung-Lee;Lee, Chung-Uk;Lee, Yongseok
    • The Bulletin of The Korean Astronomical Society / v.43 no.1 / pp.41.1-41.1 / 2018
  • We introduce the current status and future plans of the Korea Microlensing Telescope Network (KMTNet) microlensing experiments, including the observational strategy, pipeline, event-finder, and collaborations with Spitzer. The KMTNet experiments were initiated in 2015. Since 2016, KMTNet has observed 27 fields, comprising 6 main fields and 21 subfields. In 2017, we finished the DIA photometry for all 2016 and 2017 data, making real-time DIA photometry possible from 2018. The DIA photometric data are used for finding events with the KMTNet event-finder, which has been improved relative to the previous version that found 857 events in the 4 main fields of 2015. Applying the improved version to all 2016 data yields 2597 events, 265 of which lie in the KMTNet-K2C9 overlapping fields. To increase the detection efficiency of the event-finder, we are working on filtering out false events with machine-learning methods. In 2018, we plan to measure the event detection efficiency of KMTNet by injecting fake events into the pipeline near the image level. Thanks to high-cadence observations, KMTNet has found many interesting events, including exoplanets and brown dwarfs that were not found by other groups. The masses of these exoplanets and brown dwarfs are measured through collaborations with Spitzer and other groups. In particular, KMTNet has cooperated closely with Spitzer since 2015 and observes the Spitzer fields; as a result, we could measure microlens parallaxes for many events. The automated KMTNet PySIS pipeline, developed before the 2017 Spitzer season, played a very important role in selecting Spitzer targets. For the 2018 Spitzer season, we will improve the PySIS pipeline to obtain better photometric results.


Determination of N-nitrosamines in Water by Gas Chromatography Coupled with Electron Impact Ionization Tandem Mass Spectrometry (EI-GC/MS/MS를 이용한 니트로사민류의 수질분석)

  • Lee, Ki-Chang;Park, Jae-Hyung;Lee, Wontae
    • Journal of Korean Society of Environmental Engineers / v.36 no.11 / pp.764-770 / 2014
  • This study assessed the separation, identification, and quantification of N-nitrosamines using a gas chromatograph (GC) coupled with a mass spectrometer (MS) in electron impact (EI) mode. Samples were pretreated by automated solid phase extraction (SPE) and a nitrogen concentration technique to detect low concentration ranges. EI-GC/MS (SIM) and EI-GC/MS/MS (MRM) analyses of standard samples without pretreatment yielded similar results. However, analyses of pretreated samples at low concentrations (i.e., ng/L levels) were not reliable with EI-GC/MS due to interference from impurity peaks. The method detection limits of eight N-nitrosamines by EI-GC/MS/MS analysis ranged from 0.76 to 2.09 ng/L, and the limits of quantification ranged from 2.41 to 6.65 ng/L. The precision and accuracy of the method were evaluated using samples spiked at 10, 20, and 100 ng/L: the precision was 1.2~13.6% and the accuracy 80.4~121.8%. The $R^2$ values of the calibration curves were greater than 0.999. Recovery rates for various environmental samples, evaluated with a surrogate material (NDPA-$d_{14}$), ranged from 86.2 to 122.3%. Thus, this method can determine low (ng/L) levels of N-nitrosamines in water samples.
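
Method detection limits like those quoted above are conventionally computed from replicate low-level spikes. The sketch below follows the common EPA-style procedure (the paper does not state its exact formula, and the replicate values are hypothetical): the MDL is the replicate standard deviation times the one-tailed 99% Student's t value.

```python
import statistics

def method_detection_limit(replicates, t_value=3.143):
    # EPA-style MDL: sample standard deviation of low-level spike
    # replicates times the one-tailed 99% Student's t (3.143 for n = 7).
    return t_value * statistics.stdev(replicates)

# hypothetical seven replicate measurements of a spike near 2 ng/L
spikes = [1.95, 2.10, 1.88, 2.05, 2.22, 1.91, 2.03]
print(round(method_detection_limit(spikes), 2))  # 0.37 ng/L
```

An MDL of roughly 0.4 ng/L from such replicates would sit comfortably inside the 0.76-2.09 ng/L range the study reports.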

The Development of Image Processing System Using Area Camera for Feeding Lumber (영역카메라를 이용한 이송중인 제재목의 화상처리시스템 개발)

  • Kim, Byung Nam;Lee, Hyoung Woo;Kim, Kwang Mo
    • Journal of the Korean Wood Science and Technology / v.37 no.1 / pp.37-47 / 2009
  • Machine vision is currently the most common automated method for wood inspection, which must sort wood products by grade and locate surface defects prior to cut-up. Many sensing methods have been applied to wood inspection, including optical, ultrasonic, and X-ray sensing. Scanning systems nowadays mainly employ CCD line-scan cameras to meet the need for accurate detection of lumber defects and real-time image processing, but such systems require a precise feeding system and low variation in lumber thickness. In this study, a low-cost CCD area sensor was used to develop an image processing system for lumber being fed. When domestic red pine was fed on the conveyor belt, the captured images covered irregular areas because the belt slipped on the rollers. To overcome incorrect image merging caused by the unstable feeding speed, a template-matching algorithm was applied that measures the similarity between the pattern of the current image and the next one. At feed speeds above 13.8 m/min, a general area sensor produces unreadable image patterns due to motion blur. The red channel of the RGB filter performed well in removing the background of the green conveyor belt from the merged image. A threshold-value reduction method, an image-based thresholding algorithm, performed well for knot detection.
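
The template-matching step can be sketched with normalized cross-correlation on one-dimensional pixel strips. This is a simplified stand-in (real frames are 2D and the signal below is synthetic): the trailing edge of one frame is slid along the next frame, and the offset with the highest similarity gives the overlap for merging.

```python
def ncc(a, b):
    # Normalized cross-correlation of two equal-length pixel rows.
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def best_overlap(template, frame, max_shift):
    # Slide the template over the start of the next frame and keep the
    # offset whose similarity score is highest.
    scores = [(ncc(template, frame[s:s + len(template)]), s)
              for s in range(max_shift)]
    return max(scores)[1]

sig = [(i * 37) % 11 for i in range(90)]  # synthetic pixel strip
frame1, frame2 = sig[:50], sig[30:90]     # frame2 overlaps frame1 by 20 samples
template = frame1[-10:]                   # trailing edge of the first frame
print(best_overlap(template, frame2, 20)) # 10: frame2[10:20] matches the template
```

Merging at the recovered offset compensates for the irregular capture intervals caused by belt slip.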

A Study on the Automatic Detection of Railroad Power Lines Using LiDAR Data and RANSAC Algorithm (LiDAR 데이터와 RANSAC 알고리즘을 이용한 철도 전력선 자동탐지에 관한 연구)

  • Jeon, Wang Gyu;Choi, Byoung Gil
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.31 no.4 / pp.331-339 / 2013
  • LiDAR has become one of the most widely used and important technologies for 3D modeling of the ground surface and objects because of its ability to provide dense and accurate range measurements. The objective of this research is to develop a method for automatic detection and modeling of railroad power lines using high-density LiDAR data and RANSAC algorithms. To detect railroad power lines, the multi-echo properties of the laser data and shape knowledge of the power lines are employed; cuboid analysis for detecting seed line segments, line tracking, connecting, and labeling are the main processes. To model the power lines, iterative RANSAC and least-squares adjustment are carried out to estimate the line parameters. Validating the result is challenging because actual references on the ground surface are difficult to determine; still, standard deviations of 8 cm and 5 cm for the x-y and z coordinates, respectively, are satisfactory outcomes. Regarding completeness, visual inspection shows that all lines are detected and modeled well compared with the original point clouds. The overall process is fully automated, and the method handles any configuration of railroad wires efficiently.
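
The RANSAC core can be illustrated in 2D (the paper works on 3D point clouds; the data and tolerance below are purely illustrative). Two points are sampled repeatedly, a candidate line is fit, and the model with the largest inlier set wins, so isolated noise points cannot corrupt the line estimate.

```python
import random

def ransac_line(points, iters=200, tol=0.1, seed=0):
    # Fit y = a*x + b by repeatedly sampling two points and keeping the
    # model with the largest inlier set. Power lines are near-straight,
    # so a small residual tolerance works well.
    rng = random.Random(seed)
    best = (0, None, [])
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # skip degenerate (vertical) samples
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = [p for p in points if abs(p[1] - (a * p[0] + b)) < tol]
        if len(inliers) > best[0]:
            best = (len(inliers), (a, b), inliers)
    return best[1], best[2]

pts = [(x / 10, 0.5 * x / 10 + 2.0) for x in range(20)]  # points on y = 0.5x + 2
pts += [(3.0, 9.0), (5.0, -4.0)]                         # two noise points
model, inliers = ransac_line(pts)
print(len(inliers))  # 20: both noise points rejected
```

In the paper's setting, the surviving inliers would then feed a least-squares adjustment for the final line parameters.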

Application of Terrestrial LiDAR for Displacement Detecting on Risk Slope (위험 경사면의 변위 검출을 위한 지상 라이다의 활용)

  • Lee, Keun-Wang;Park, Joon-Kyu
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.1 / pp.323-328 / 2019
  • To construct 3D geospatial information about terrain, total-station surveying, remote sensing, and GNSS (Global Navigation Satellite System) surveying have been used. However, ground surveys and GNSS surveys have time and economic disadvantages because measurements must be taken directly in the field, while aerial photographs and satellite images make it difficult to obtain the three-dimensional shape of the terrain. Terrestrial LiDAR can acquire 3D coordinate and shape information by scanning innumerable laser pulses at densely spaced intervals over the surface of the observed object, and the processing can be automated. In this study, terrestrial LiDAR was used to analyze slope displacement. Slopes in the study area were selected, and data were acquired with LiDAR in 2016 and 2017. Slope cross sections and slope data were generated, and overlay analysis of the two data sets identified slope displacements within 0.1 m, suggesting that terrestrial LiDAR can be used to manage slopes. If periodic data acquisition and analysis are performed in the future, the terrestrial LiDAR method will contribute to effective management of risk slopes.
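
The overlay analysis can be sketched as a per-cell difference between two gridded survey epochs. The cell keys and elevations below are hypothetical; the idea is simply that cells whose elevation change exceeds a threshold are flagged as displacement.

```python
def slope_displacement(epoch_a, epoch_b):
    # Overlay two gridded slope surfaces (same cell keys) and report the
    # per-cell elevation change between survey epochs, in metres.
    return {cell: round(epoch_b[cell] - epoch_a[cell], 3)
            for cell in epoch_a if cell in epoch_b}

# hypothetical gridded elevations (metres) from two scan epochs
z_2016 = {(0, 0): 12.40, (0, 1): 12.55, (1, 0): 13.10}
z_2017 = {(0, 0): 12.41, (0, 1): 12.48, (1, 0): 13.10}

# flag cells that moved by at least 5 cm between epochs
moved = {c: d for c, d in slope_displacement(z_2016, z_2017).items()
         if abs(d) >= 0.05}
print(moved)  # {(0, 1): -0.07}
```

Repeating this comparison over periodic scans is what turns a one-off survey into ongoing risk-slope monitoring.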

Application of Integrated Security Control of Artificial Intelligence Technology and Improvement of Cyber-Threat Response Process (인공지능 기술의 통합보안관제 적용 및 사이버침해대응 절차 개선)

  • Ko, Kwang-Soo;Jo, In-June
    • The Journal of the Korea Contents Association / v.21 no.10 / pp.59-66 / 2021
  • In this paper, an improved integrated security control procedure is proposed by applying artificial intelligence technology to integrated security control and unifying the existing security control and AI security control response procedures. Current cyber security control depends heavily on human ability: it is practically infeasible for analysts to examine the various logs generated by different types of equipment and to process all of the rapidly increasing security events. Moreover, signature-based security equipment, which detects threats by string and pattern matching, cannot accurately detect advanced cyberattacks such as APTs (Advanced Persistent Threats). As one way to solve these problems, supervised and unsupervised learning are applied to the detection and analysis of cyberattacks, automating and adding intelligence to the analysis of the innumerable logs and events. This raises the overall level of response by making it possible to predict and block continuing cyberattacks. After applying AI security control technology, an improved integrated security control service model is proposed that resolves the overlapping detection of AI and SIEM within a unified breach-response process (procedure).