• Title/Summary/Keyword: detection technique

Search results: 4,102 items (processing time: 0.034 s)

Anomaly behavior detection using Negative Selection algorithm based anomaly detector (Negative Selection 알고리즘 기반 이상탐지기를 이용한 이상행위 탐지)

  • 김미선;서재현
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2004 Spring Conference of the Korea Institute of Maritime Information and Communication Sciences
    • /
    • pp.391-394
    • /
    • 2004
  • The rapid expansion of the Internet has changed the paradigm of network attacks, and new attack forms keep appearing. However, most intrusion detection systems are based on misuse detection and can only detect known attack types, so they respond poorly to new attacks. To raise the detection rate for new attack patterns, approaches that apply the human immune mechanism are attracting attention. In this paper, we build a self-file from normal behavior profiles of network packets and implement a self-recognition algorithm that uses the self-nonself discrimination of the human immune system to detect anomalous behavior. An anomaly detector generated with the Negative Selection Algorithm, one such self-recognition algorithm, monitors the self-file, senses changes, and detects anomalous behavior. We run simulations on the DARPA network dataset and verify the effectiveness of the algorithm through its anomaly detection rate.

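As a rough illustration of the negative-selection scheme summarized in the abstract above, the sketch below generates random candidate detectors, discards any that match a set of "self" strings, and flags new samples matched by a surviving detector. This is not the paper's implementation; the binary-string representation, the Hamming-distance matching rule, and all parameter values are assumptions chosen for clarity.

```python
import random

# Minimal negative-selection sketch: fixed-length binary strings with Hamming matching.
STRING_LEN = 16      # assumed length of each behavior-profile string
MATCH_THRESHOLD = 3  # a detector "matches" a string if their Hamming distance <= threshold

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def random_string(rng):
    return tuple(rng.randint(0, 1) for _ in range(STRING_LEN))

def generate_detectors(self_set, n_detectors, rng):
    """Negative selection: keep only candidates that do NOT match any self string."""
    detectors = []
    while len(detectors) < n_detectors:
        candidate = random_string(rng)
        if all(hamming(candidate, s) > MATCH_THRESHOLD for s in self_set):
            detectors.append(candidate)
    return detectors

def is_anomalous(sample, detectors):
    """A sample is flagged as non-self (anomalous) if any detector matches it."""
    return any(hamming(sample, d) <= MATCH_THRESHOLD for d in detectors)

if __name__ == "__main__":
    rng = random.Random(0)
    self_set = [random_string(rng) for _ in range(50)]      # toy "normal traffic" profile
    detectors = generate_detectors(self_set, n_detectors=200, rng=rng)
    probe = random_string(rng)
    print("anomalous" if is_anomalous(probe, detectors) else "normal")
```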

A Study on Smoke Detection using LBP and GLCM in Engine Room (선박의 기관실에서의 연기 검출을 위한 LBP-GLCM 알고리즘에 관한 연구)

  • Park, Kyung-Min
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • Vol. 25, No. 1
    • /
    • pp.111-116
    • /
    • 2019
  • Fire detectors in ship engine rooms respond slowly to emergencies because smoke or heat must reach detectors installed on the ceiling, while the air flow in an engine room varies greatly with the equipment in use. To overcome these disadvantages, much research on video-based fire detection has been conducted in recent years. Video-based fire detection is effective for early detection because it is not affected by air flow and its transmission speed is fast. In this paper, experiments were performed using images of smoke from a smoke generator in an engine room. Features extracted with the LBP and GLCM operators, which capture the textural properties of smoke, were classified with an SVM, a machine-learning classifier. Even when smoke did not rise to the ceiling where the detectors were installed, smoke detection was confirmed using the image-based technique.
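A rough sketch of the LBP + GLCM + SVM pipeline described above is shown below, using scikit-image and scikit-learn. It is illustrative only: the LBP radius, the GLCM distances and angles, the chosen texture properties, and the dummy patches are assumptions, not the paper's settings.

```python
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops
from sklearn.svm import SVC

def texture_features(gray):
    """Concatenate an LBP histogram with a few GLCM properties (illustrative choices)."""
    # LBP: uniform patterns with 8 neighbors at radius 1 -> values in 0..9.
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    # GLCM at distance 1 over four angles; average each property across angles.
    glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)
    props = [graycoprops(glcm, p).mean()
             for p in ("contrast", "homogeneity", "energy", "correlation")]
    return np.concatenate([hist, props])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Dummy 8-bit patches stand in for real smoke / non-smoke engine-room frames.
    patches = rng.integers(0, 256, size=(20, 64, 64), dtype=np.uint8)
    labels = np.array([0] * 10 + [1] * 10)
    X = np.array([texture_features(p) for p in patches])
    clf = SVC(kernel="rbf").fit(X, labels)
    print(clf.predict(X[:3]))
```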

Real-time Phishing Site Detection Method (피싱사이트 실시간 탐지 기법)

  • Sa, Joon-Ho;Lee, Sang-Jin
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • Vol. 22, No. 4
    • /
    • pp.819-825
    • /
    • 2012
  • Many phishing sites today contain HTTP links to the victim website's content, such as images and bulletin boards, to make the phishing site look more real and similar to the victim site. We introduce a real-time phishing-site detection system that exploits the fact that, when a phishing site is visited, its URL flows into the victim website through the HTTP Referer header field. The system adopts an out-of-path network configuration to minimize the effect on the running service, and a phishing-site source-code analysis technique to alert administrators in real time when a phishing site is detected. The system was installed on a company website that had been targeted by phishing and detected 40 phishing sites during a 6-day test period.
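To make the Referer-based signal concrete, here is a small, hypothetical sketch that scans web-server access logs for requests to the victim site's static resources whose Referer points at an unknown external host; such referring URLs are reported as candidate phishing pages. The log format (combined log format), the trusted-host whitelist, the /images/ path filter, and the domains are all placeholder assumptions, not the paper's system.

```python
import re
from collections import Counter
from urllib.parse import urlparse

# Requests for the victim site's resources whose Referer is an unknown external
# host are treated as candidate phishing pages.
LOG_PATTERN = re.compile(r'"(?:GET|POST) (?P<path>\S+) [^"]*" \d{3} \d+ "(?P<referer>[^"]*)"')
TRUSTED_HOSTS = {"www.example-victim.com", "example-victim.com"}  # hypothetical victim domains

def suspicious_referers(log_lines):
    hits = Counter()
    for line in log_lines:
        m = LOG_PATTERN.search(line)
        if not m:
            continue
        referer = m.group("referer")
        host = urlparse(referer).netloc.lower()
        # External referer requesting our static content is a phishing signal.
        if host and host not in TRUSTED_HOSTS and m.group("path").startswith("/images/"):
            hits[referer] += 1
    return hits

if __name__ == "__main__":
    sample = [
        '1.2.3.4 - - [01/Jan/2012:00:00:01 +0900] "GET /images/logo.png HTTP/1.1" 200 512 "http://phish.example.net/login.html"',
        '5.6.7.8 - - [01/Jan/2012:00:00:02 +0900] "GET /images/logo.png HTTP/1.1" 200 512 "http://www.example-victim.com/index.html"',
    ]
    for url, count in suspicious_referers(sample).items():
        print(f"candidate phishing page: {url} ({count} hits)")
```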

Study on Detection Technique for Coastal Debris by using Unmanned Aerial Vehicle Remote Sensing and Object Detection Algorithm based on Deep Learning (무인항공기 영상 및 딥러닝 기반 객체인식 알고리즘을 활용한 해안표착 폐기물 탐지 기법 연구)

  • Bak, Su-Ho;Kim, Na-Kyeong;Jeong, Min-Ji;Hwang, Do-Hyun;Enkhjargal, Unuzaya;Kim, Bo-Ram;Park, Mi-So;Yoon, Hong-Joo;Seo, Won-Chan
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • Vol. 15, No. 6
    • /
    • pp.1209-1216
    • /
    • 2020
  • In this study, we propose a method for detecting coastal surface debris using UAV (Unmanned Aerial Vehicle) remote sensing and a deep-learning-based object detection algorithm. A deep neural network model was trained on image datasets of three classes: PET, Styrofoam, and plastics, and the detection accuracy of each class was compared with Darknet-53. This makes it possible to monitor debris washed up on the shore by type using unmanned aerial vehicles. If the proposed method is applied, a complete survey of an entire beach becomes feasible, which is expected to improve the efficiency of marine environment monitoring.
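As a trivial illustration of the monitoring-by-type idea above, the snippet below tallies detector output by class over a batch of UAV frames. The detector interface (a list of (class, confidence) pairs per frame) and the confidence cut-off are hypothetical; any trained detector that reports class labels could feed it.

```python
from collections import Counter

def count_debris_by_type(frame_detections, min_confidence=0.5):
    """Tally detections per class (e.g. PET, Styrofoam, plastics) across UAV frames."""
    counts = Counter()
    for detections in frame_detections:
        for class_name, confidence in detections:
            if confidence >= min_confidence:
                counts[class_name] += 1
    return counts

if __name__ == "__main__":
    frames = [
        [("PET", 0.92), ("Styrofoam", 0.81)],
        [("plastics", 0.40), ("PET", 0.77)],   # low-confidence detection is dropped
    ]
    print(count_debris_by_type(frames))   # Counter({'PET': 2, 'Styrofoam': 1})
```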

A deep learning-based approach for feeding behavior recognition of weanling pigs

  • Kim, MinJu;Choi, YoHan;Lee, Jeong-nam;Sa, SooJin;Cho, Hyun-chong
    • Journal of Animal Science and Technology
    • /
    • Vol. 63, No. 6
    • /
    • pp.1453-1463
    • /
    • 2021
  • Feeding is the most important behavior representing the health and welfare of weanling pigs. Early detection of feed refusal is crucial for controlling disease in its initial stages, and detecting empty feeders allows feed to be added in a timely manner. This paper proposes a real-time technique for the detection and recognition of small pigs using a deep-learning-based method. The proposed model focuses on detecting pigs at a feeder in a feeding position. Conventional methods detect pigs and then classify them into different behavior gestures; in contrast, the proposed method combines these two tasks into a single process that detects only feeding behavior, increasing the speed of detection. Considering the significant differences between pig behaviors at different sizes, adaptive adjustments are introduced into a you-only-look-once (YOLO) model, including an angle optimization strategy between the head and body for detecting a head at a feeder. According to the experimental results, this method detects the feeding behavior of pigs and screens out non-feeding positions with 95.66%, 94.22%, and 96.56% average precision (AP) at an intersection over union (IoU) threshold of 0.5 for YOLOv3, YOLOv4, and a variant with an additional layer and the proposed activation function, respectively. Drinking behavior was detected with 86.86%, 89.16%, and 86.41% AP at a 0.5 IoU threshold for YOLOv3, YOLOv4, and the proposed activation function, respectively. In terms of detection and classification, the results demonstrate that the proposed method yields higher precision and recall than conventional methods.
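The AP figures above are evaluated at an IoU threshold of 0.5. The short sketch below shows how a predicted feeding-behavior box is matched against ground truth at that threshold; the box format and the matching rule are generic assumptions rather than the paper's evaluation code.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_true_positive(pred_box, gt_boxes, threshold=0.5):
    """A predicted box counts as correct if it overlaps some ground-truth box at IoU >= threshold."""
    return any(iou(pred_box, gt) >= threshold for gt in gt_boxes)

if __name__ == "__main__":
    gt = [(10, 10, 60, 60)]
    print(is_true_positive((12, 8, 58, 55), gt))       # True: large overlap
    print(is_true_positive((100, 100, 140, 140), gt))  # False: no overlap
```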

Spherical Point Tracing for Synthetic Vehicle Data Generation with 3D LiDAR Point Cloud Data (3차원 LiDAR 점군 데이터에서의 가상 차량 데이터 생성을 위한 구면 점 추적 기법)

  • Sangjun Lee;Hakil Kim
    • Journal of Broadcast Engineering
    • /
    • Vol. 28, No. 3
    • /
    • pp.329-332
    • /
    • 2023
  • 3D object detection using deep neural networks has been developed extensively for obstacle detection in autonomous vehicles because it can recognize not only the class of a target object but also the distance to it. However, 3D object detection models perform worse on distant objects than on nearby ones, which is a critical issue for autonomous vehicles. In this paper, we introduce a technique that improves the performance of 3D object detection models, particularly in recognizing distant objects, by generating virtual 3D vehicle data and adding it to the dataset used for model training. We use a spherical point tracing method that leverages the characteristics of 3D LiDAR sensor data to create virtual vehicles that closely resemble real ones, and we demonstrate the validity of the virtual data by using it during model training to improve recognition performance for objects at all distances.
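One way to read "spherical point tracing" is sketched below: a template vehicle's points are placed at a new, farther position, expressed in spherical coordinates about the sensor, and re-quantized to an assumed beam resolution so the synthetic copy thins out the way a real distant vehicle would. This is an illustrative interpretation, not the paper's method; the angular resolutions and the template cloud are made up.

```python
import numpy as np

def cartesian_to_spherical(points):
    """Convert Nx3 points (x, y, z) to (range, azimuth, elevation) about the sensor origin."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x ** 2 + y ** 2 + z ** 2)
    azimuth = np.arctan2(y, x)
    elevation = np.arcsin(z / np.maximum(r, 1e-9))
    return np.stack([r, azimuth, elevation], axis=1)

def spherical_to_cartesian(sph):
    r, az, el = sph[:, 0], sph[:, 1], sph[:, 2]
    return np.stack([r * np.cos(el) * np.cos(az),
                     r * np.cos(el) * np.sin(az),
                     r * np.sin(el)], axis=1)

def place_at(vehicle_points, new_center):
    """Translate a template vehicle to new_center, then keep one return per
    quantized beam direction (assumed LiDAR angular resolution)."""
    moved = vehicle_points - vehicle_points.mean(axis=0) + np.asarray(new_center)
    sph = cartesian_to_spherical(moved)
    az_res, el_res = np.radians(0.2), np.radians(0.4)     # assumed sensor resolution
    sph[:, 1] = np.round(sph[:, 1] / az_res) * az_res
    sph[:, 2] = np.round(sph[:, 2] / el_res) * el_res
    beams = np.round(sph[:, 1:] * 1e6).astype(np.int64)   # integer beam ids for dedup
    _, keep = np.unique(beams, axis=0, return_index=True)
    return spherical_to_cartesian(sph[np.sort(keep)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy near-range "vehicle": a box of points roughly 10 m ahead of the sensor.
    template = rng.uniform([8.0, -1.0, -1.0], [12.0, 1.0, 1.0], size=(2000, 3))
    far_copy = place_at(template, new_center=(60.0, 0.0, 0.0))
    print(template.shape, far_copy.shape)   # the distant copy keeps far fewer points
```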

A deep and multiscale network for pavement crack detection based on function-specific modules

  • Guolong Wang;Kelvin C.P. Wang;Allen A. Zhang;Guangwei Yang
    • Smart Structures and Systems
    • /
    • Vol. 32, No. 3
    • /
    • pp.135-151
    • /
    • 2023
  • Using 3D asphalt pavement surface data, a deep and multiscale network named CrackNet-M is proposed in this paper for pixel-level crack detection, with improvements in both accuracy and robustness. CrackNet-M consists of four function-specific architectural modules: a central branch net (CBN), a crack map enhancement (CME) module, three pooling feature pyramids (PFP), and an output layer. The CBN maintains crack boundaries by using no pooling reductions throughout its convolutional layers. The CME applies a pooling layer to enhance potential thin cracks for better continuity, incurring no data loss or attenuation when working jointly with the CBN. The PFP modules implement direct down-sampling and pyramidal up-sampling with multiscale contexts, specifically for detecting thick cracks and excluding non-crack patterns. Finally, the output layer is optimized with a proposed skip-layer supervision technique to further improve network performance. Compared with traditional supervision, skip-layer supervision brings not only significant gains in accuracy and robustness but also a faster convergence rate. CrackNet-M was trained on a total of 2,500 pixel-wise annotated 3D pavement images and finely scaled with another 200 images, with full consideration of accuracy and efficiency. CrackNet-M can potentially achieve crack detection in real time with a processing speed of 40 ms/image. The experimental results on 500 testing images demonstrate that CrackNet-M can effectively detect both thick and thin cracks on various pavement surfaces with a high level of Precision (94.28%), Recall (93.89%), and F-measure (94.04%). In addition, the proposed CrackNet-M compares favorably to other well-developed networks with respect to the detection of thin cracks as well as the removal of shoulder drop-offs.
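Skip-layer supervision of the kind mentioned above attaches prediction heads to intermediate layers and adds their losses to the final loss. The toy PyTorch sketch below shows only that general pattern; the layer sizes, the auxiliary-loss weight, and the loss function are assumptions and bear no relation to the actual CrackNet-M architecture.

```python
import torch
import torch.nn as nn

class TinySkipSupervisedNet(nn.Module):
    """Generic skip-layer (deep) supervision: an intermediate feature map gets its
    own 1x1 prediction head, and its loss is added to the final head's loss."""
    def __init__(self):
        super().__init__()
        self.block1 = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.block2 = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
        self.head1 = nn.Conv2d(16, 1, 1)   # auxiliary head on block1
        self.head2 = nn.Conv2d(16, 1, 1)   # final head on block2

    def forward(self, x):
        f1 = self.block1(x)
        f2 = self.block2(f1)
        return self.head1(f1), self.head2(f2)

def supervised_loss(outputs, target, aux_weight=0.4):
    """Sum of per-head pixel-wise losses; the auxiliary term speeds convergence."""
    bce = nn.BCEWithLogitsLoss()
    aux_out, final_out = outputs
    return bce(final_out, target) + aux_weight * bce(aux_out, target)

if __name__ == "__main__":
    model = TinySkipSupervisedNet()
    img = torch.rand(2, 1, 64, 64)                      # stand-in for 3D pavement range images
    mask = (torch.rand(2, 1, 64, 64) > 0.95).float()    # sparse "crack" pixels
    loss = supervised_loss(model(img), mask)
    loss.backward()
    print(float(loss))
```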

Enhanced Deep Feature Reconstruction: Texture Defect Detection and Segmentation through Preservation of Multi-scale Features (개선된 Deep Feature Reconstruction : 다중 스케일 특징의 보존을 통한 텍스쳐 결함 감지 및 분할)

  • Jongwook Si;Sungyoung Kim
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • Vol. 16, No. 6
    • /
    • pp.369-377
    • /
    • 2023
  • In the industrial manufacturing sector, quality control is pivotal for minimizing defect rates; inadequate management can result in additional costs and production delays. This study underscores the importance of detecting texture defects in manufactured goods and proposes a more precise defect detection technique. While the DFR (Deep Feature Reconstruction) model adopted an approach based on combining and reconstructing feature maps, it had inherent limitations. We therefore incorporated a new loss function based on statistical methodology, integrated a skip-connection structure, and conducted parameter tuning to overcome these limitations. When the enhanced model was applied to the texture category of the MVTec-AD dataset, it recorded a 2.3% higher defect segmentation AUC than previous methods, and overall defect detection performance was also improved. These findings attest to the significant contribution of the proposed method to defect detection through the reconstruction of combined feature maps.
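DFR-style models score defects by how poorly a compact model reconstructs backbone feature maps at each position. The PyTorch sketch below illustrates only that reconstruction-error idea; it is not the enhanced model described above, and the channel sizes, the threshold, and the random stand-in features are assumptions.

```python
import torch
import torch.nn as nn

class FeatureAutoencoder(nn.Module):
    """Compress and reconstruct backbone feature maps; positions that
    reconstruct poorly are scored as defects."""
    def __init__(self, channels=64, latent=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(channels, latent, 1), nn.ReLU())
        self.decoder = nn.Conv2d(latent, channels, 1)

    def forward(self, feats):
        return self.decoder(self.encoder(feats))

def defect_map(feats, recon, threshold=0.1):
    """Per-position mean squared error and a thresholded binary segmentation."""
    err = ((feats - recon) ** 2).mean(dim=1)
    return err, (err > threshold).float()

if __name__ == "__main__":
    model = FeatureAutoencoder()
    feats = torch.rand(1, 64, 28, 28)    # stand-in for multi-scale backbone features
    err, seg = defect_map(feats, model(feats))
    print(err.shape, seg.shape)          # torch.Size([1, 28, 28]) twice
```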

Marine-Life-Detection and Density-Estimation Algorithms Based on Underwater Images and Scientific Sonar Systems (수중영상과 과학어탐 시스템 기반 해양생물 탐지 밀도추정 알고리즘 연구)

  • Young-Tae Son;Sang-yeup Jin;Jongchan Lee;Mookun Kim;Ju Young Byon;Hyung Tae Moo;Choong Hun Shin
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • Vol. 30, No. 5
    • /
    • pp.373-386
    • /
    • 2024
  • The aim of this study is to establish a system for the early detection of high-density harmful marine organisms. Considering its accuracy and processing speed, YOLOv8m (You Only Look Once version 8 medium) was selected as a suitable model for real-time underwater image-based object detection. Applying the detection algorithm makes it possible to detect numerous fish and the occasional occurrence of jellyfish. The average precision, recall, and mAP (mean Average Precision) of the trained model on the validation data are 0.931, 0.881, and 0.948, respectively. The mAP for each class is 0.97 for fish, 0.97 for jellyfish, and 0.91 for salpa, all exceeding 0.9 (90%) and demonstrating the excellent performance of the model. A scientific sonar system is used to address the object-detection range and to validate the detection results. Additionally, integrating and grid-averaging the echo strength allows the detection results to be smoothed in space and time. Mean volume backscattering strength values are obtained to reflect the detection variability within the analysis domain. Furthermore, an underwater image-based object (marine life) detection algorithm, an image-correction technique based on underwater environmental conditions (including at night), and quantified detection results based on a scientific sonar system are presented, demonstrating the utility of the detection system in various applications.
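The grid averaging of echo strength mentioned above can be illustrated with a small MVBS-style computation: Sv values in dB are converted to the linear domain, averaged over (ping, depth) cells, and converted back to dB. The cell size and the synthetic echogram below are assumptions for illustration, not the study's processing chain.

```python
import numpy as np

def mean_volume_backscatter(sv_db, cell=(10, 5)):
    """Grid-average volume backscattering strength: dB -> linear, average over
    (ping, depth-bin) cells, then back to dB (MVBS-style smoothing)."""
    rows, cols = cell
    n_ping, n_depth = sv_db.shape
    n_ping -= n_ping % rows             # trim edges so the grid divides evenly
    n_depth -= n_depth % cols
    linear = 10.0 ** (sv_db[:n_ping, :n_depth] / 10.0)
    blocks = linear.reshape(n_ping // rows, rows, n_depth // cols, cols)
    return 10.0 * np.log10(blocks.mean(axis=(1, 3)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sv = rng.uniform(-90.0, -50.0, size=(120, 200))   # synthetic Sv echogram in dB
    mvbs = mean_volume_backscatter(sv, cell=(10, 5))
    print(mvbs.shape)   # (12, 40): echo strength smoothed in space and time
```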

Detection of transgene in early developmental stage by GFP monitoring enhances the efficiency of genetic transformation of pepper

  • Jung, Min;Shin, Sun-Hee;Park, Jeong-Mi;Lee, Sung-Nam;Lee, Mi-Yeon;Ryu, Ki-Hyun;Paek, Kee-Yoeup;Harn, Chee-Hark
    • Plant Biotechnology Reports
    • /
    • Vol. 5, No. 2
    • /
    • pp.157-167
    • /
    • 2011
  • In order to establish a reliable and highly efficient method for the genetic transformation of pepper, a monitoring system featuring GFP (green fluorescent protein) as a reporter marker was applied to Agrobacterium-mediated transformation. A callus-induced transformation (CIT) system was used to transform the GFP gene. GFP expression was observed in all tissues of T0, T1, and T2 peppers, constituting the first instance in which a whole pepper plant has exhibited GFP fluorescence. A total of 38 T0 peppers were obtained from 4,200 explants. The transformation rate ranged from 0.47 to 1.83% depending on the genotype, higher than that obtained by CIT without the GFP monitoring system. This technique can enhance selection power by monitoring GFP expression at the early callus stage in vitro. Detecting GFP expression in the callus led to successful identification of shoots that contained the transgene, so the technique saves considerable time and cost in the genetic transformation of pepper. In addition, a co-transformation technique was applied to the target transgene CaCS (encoding the capsaicinoid synthetase of Capsicum) along with GFP. Paprika varieties were transformed with the CaCS::GFP construct, and GFP expression in callus tissues of paprika was monitored to select the correct transformants.