• Title/Summary/Keyword: Deep Features


Case Study of Building a Malicious Domain Detection Model Considering Human Habitual Characteristics: Focusing on LSTM-based Deep Learning Model (인간의 습관적 특성을 고려한 악성 도메인 탐지 모델 구축 사례: LSTM 기반 Deep Learning 모델 중심)

  • Jung Ju Won
    • Convergence Security Journal
    • /
    • v.23 no.5
    • /
    • pp.65-72
    • /
    • 2023
  • This paper proposes a method for detecting malicious domains that considers human habitual characteristics, built on an LSTM (Long Short-Term Memory) Deep Learning model. DGA (Domain Generation Algorithm) malicious domains exploit habitual human errors, resulting in severe security threats. The objective is to respond swiftly and accurately to changes in malicious domains and their typosquatting-based evasion techniques, minimizing security threats. The LSTM-based Deep Learning model automatically analyzes generated domains and categorizes them as malicious or benign based on malware-specific features. Evaluated with the ROC curve and AUC, the model demonstrated a high detection accuracy of 99.21%. Beyond detecting malicious domains in real time, the model holds potential applications across various cyber security domains. This paper proposes and explores a novel approach aimed at safeguarding users and fostering a secure cyber environment against cyber attacks.
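The character-level preprocessing such an LSTM classifier typically relies on can be sketched as follows; the vocabulary, padding scheme, and maximum length here are illustrative assumptions, not details taken from the paper:

```python
# Sketch: encoding domain names as fixed-length integer sequences,
# the usual preprocessing step before a character-level LSTM.
# VOCAB and MAX_LEN are assumed values for illustration only.
VOCAB = "abcdefghijklmnopqrstuvwxyz0123456789-."
CHAR_TO_ID = {c: i + 1 for i, c in enumerate(VOCAB)}  # 0 reserved for padding
MAX_LEN = 32

def encode_domain(domain: str) -> list:
    """Map a domain string to a zero-padded list of character IDs."""
    ids = [CHAR_TO_ID.get(c, 0) for c in domain.lower()[:MAX_LEN]]
    return ids + [0] * (MAX_LEN - len(ids))

# A DGA-generated domain tends to look random; a benign one is word-like.
benign = encode_domain("google.com")
dga = encode_domain("xjw3kqpz9fh2.net")
print(len(benign), benign[:6])
```

The padded sequences would then be fed to an embedding layer followed by the LSTM, which learns to separate random-looking DGA strings from dictionary-like benign names.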

A Personal Video Event Classification Method based on Multi-Modalities by DNN-Learning (DNN 학습을 이용한 퍼스널 비디오 시퀀스의 멀티 모달 기반 이벤트 분류 방법)

  • Lee, Yu Jin;Nang, Jongho
    • Journal of KIISE
    • /
    • v.43 no.11
    • /
    • pp.1281-1297
    • /
    • 2016
  • In recent years, personal videos have grown tremendously due to the substantial increase in smart devices and networking services, which let users create and share video content easily and with few restrictions. Videos generally contain multiple modalities, and the frame data in a video varies across time points; taking both properties into account can significantly improve event detection performance. This paper proposes an event detection method in which high-level features are first extracted from the multiple modalities in a video and rearranged in time sequence, and the association between the modalities is then learned by a DNN to produce a personal video event detector. In the proposed method, audio and image data are first synchronized and extracted, then fed into GoogLeNet and a Multi-Layer Perceptron (MLP) respectively to extract high-level features. The results are rearranged in time sequence, and each video is processed into a single feature for DNN training.
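The synchronize-then-fuse step described above can be sketched minimally as pairing per-timestep feature vectors from the two modalities and concatenating them; the feature values and dimensions below are made up for illustration:

```python
# Sketch of per-timestep multimodal fusion: image and audio feature
# vectors are aligned in time and concatenated before a joint DNN.
def fuse_modalities(image_feats, audio_feats):
    """Pair up per-timestep feature vectors and concatenate them.

    Truncates to the shorter modality so every fused vector has both
    sources, mirroring the synchronization step.
    """
    n = min(len(image_feats), len(audio_feats))
    return [image_feats[t] + audio_feats[t] for t in range(n)]

image_feats = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]  # e.g., CNN outputs
audio_feats = [[0.7], [0.8]]                        # e.g., MLP outputs
fused = fuse_modalities(image_feats, audio_feats)
print(fused)
```

Truncating to the shorter stream is one possible synchronization policy; interpolation or padding would be equally valid choices.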

Environmental Sound Classification for Selective Noise Cancellation in Industrial Sites (산업현장에서의 선택적 소음 제거를 위한 환경 사운드 분류 기술)

  • Choi, Hyunkook;Kim, Sangmin;Park, Hochong
    • Journal of Broadcast Engineering
    • /
    • v.25 no.6
    • /
    • pp.845-853
    • /
    • 2020
  • In this paper, we propose a method for classifying environmental sound for selective noise cancellation in industrial sites. Noise in industrial sites causes hearing loss in workers, and research on noise cancellation has been widely conducted. However, conventional methods block all sounds and cannot operate optimally per noise type because they apply a common cancellation scheme to every kind of noise. To enable selective noise cancellation, we therefore propose a deep-learning-based method for environmental sound classification. The proposed method uses new sets of acoustic features consisting of temporal and statistical properties of the Mel-spectrogram, which overcome the limitations of raw Mel-spectrogram features, and uses a convolutional neural network as the classifier. We apply the proposed method to five-class sound classification with three noise classes and two non-noise classes, and confirm that it improves classification accuracy by 6.6 percentage points compared with conventional Mel-spectrogram features.
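One plausible instance of the "statistical properties of the Mel-spectrogram" idea is computing per-band statistics across time; the paper's exact feature set is not specified here, so mean and standard deviation are illustrative choices:

```python
# Sketch: per-band statistical features derived from a Mel-spectrogram,
# represented as a list of frames (each a list of per-band energies).
import math

def band_statistics(mel_spec):
    """Return (means, stds) computed per Mel band across all frames."""
    n_frames = len(mel_spec)
    n_bands = len(mel_spec[0])
    means, stds = [], []
    for b in range(n_bands):
        column = [frame[b] for frame in mel_spec]
        mu = sum(column) / n_frames
        var = sum((x - mu) ** 2 for x in column) / n_frames
        means.append(mu)
        stds.append(math.sqrt(var))
    return means, stds

spec = [[1.0, 4.0], [3.0, 4.0]]  # two frames, two Mel bands (toy values)
means, stds = band_statistics(spec)
print(means, stds)
```

Collapsing the time axis into fixed-size statistics like this is what lets a CNN classifier see clip-level texture rather than raw frame sequences.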

Fall Detection Based on 2-Stacked Bi-LSTM and Human-Skeleton Keypoints of RGBD Camera (RGBD 카메라 기반의 Human-Skeleton Keypoints와 2-Stacked Bi-LSTM 모델을 이용한 낙상 탐지)

  • Shin, Byung Geun;Kim, Uung Ho;Lee, Sang Woo;Yang, Jae Young;Kim, Wongyum
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.11
    • /
    • pp.491-500
    • /
    • 2021
  • In this study, we propose a method for detecting fall behavior using Human-Skeleton Keypoints from an MS Kinect v2 RGBD camera and a 2-Stacked Bi-LSTM model. In previous studies, skeletal information was extracted from RGB images using a deep learning model such as OpenPose, and recognition was then performed with a recurrent neural network model such as an LSTM or GRU. The proposed method instead receives skeletal information directly from the camera, extracts two time-series features, acceleration and distance, and recognizes fall behavior using the 2-Stacked Bi-LSTM model. A central joint was computed from the major skeleton joints such as the shoulder, spine, and pelvis, and its movement acceleration and distance from the floor are proposed as features. The extracted features were compared across models such as Stacked LSTM and Bi-LSTM, and experiments demonstrated improved detection performance over existing studies based on GRU and LSTM models.
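The two time-series features can be sketched as follows; the joint coordinates, frame rate, and the second-difference form of the acceleration are illustrative assumptions, not the paper's exact definitions:

```python
# Sketch: a central joint averaged from major skeleton keypoints, plus
# a crude movement acceleration from its height-above-floor series.
def central_joint(shoulder, spine, pelvis):
    """Average three (x, y, z) keypoints into one central joint."""
    return tuple(sum(c) / 3.0 for c in zip(shoulder, spine, pelvis))

def acceleration(heights, dt=1.0):
    """Second difference of the height series as a discrete acceleration."""
    return [(heights[i + 1] - 2 * heights[i] + heights[i - 1]) / dt ** 2
            for i in range(1, len(heights) - 1)]

# Height (y, meters) of the central joint over four frames of a toy fall.
heights = [1.2, 1.1, 0.7, 0.2]
cj = central_joint((0, 1.3, 0), (0, 1.2, 0), (0, 1.0, 0))
acc = acceleration(heights)
print(cj, acc)
```

A fall shows up as a sharp, sustained drop in the distance-from-floor series together with a large-magnitude acceleration, which is the kind of temporal pattern the Bi-LSTM is trained to recognize.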

Adversarial Example Detection Based on Symbolic Representation of Image (이미지의 Symbolic Representation 기반 적대적 예제 탐지 방법)

  • Park, Sohee;Kim, Seungjoo;Yoon, Hayeon;Choi, Daeseon
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.32 no.5
    • /
    • pp.975-986
    • /
    • 2022
  • Deep learning has attracted great attention for its excellent performance in image processing, but it is vulnerable to adversarial attacks that cause a model to misclassify through perturbations of the input data. Adversarial examples generated by such attacks are minimally perturbed, making the perturbation difficult to identify, so the visual features of the images are generally unchanged. Unlike deep learning models, people are not fooled by adversarial examples, because they classify images based on these visual features. This paper proposes an adversarial attack detection method using Symbolic Representation, which captures visual and symbolic features of an image such as its color and shape. We detect adversarial examples by comparing the Symbolic Representation converted from the model's classification result for an input image with the Symbolic Representation extracted directly from that image. Measuring performance against adversarial examples generated by various attack methods, detection rates differed by attack target and method, but reached up to 99.02% for a specific targeted attack.
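The comparison logic can be sketched as a table lookup plus a mismatch check; the class labels, attribute tables, and the idea that attributes arrive pre-extracted are stand-ins for the paper's actual Symbolic Representation pipeline:

```python
# Sketch: each class label maps to expected symbolic attributes; a
# mismatch with attributes extracted from the image flags a possible
# adversarial example. Tables and labels are hypothetical examples.
EXPECTED = {
    "stop_sign": {"color": "red", "shape": "octagon"},
    "speed_limit": {"color": "white", "shape": "circle"},
}

def is_adversarial(predicted_label, extracted_attrs):
    """Flag the input if the model's label disagrees with the visual
    attributes actually present in the image (unknown labels pass)."""
    expected = EXPECTED.get(predicted_label, {})
    return any(extracted_attrs.get(k) != v for k, v in expected.items())

# A red octagon classified as a speed-limit sign is suspicious.
flag_mismatch = is_adversarial("speed_limit", {"color": "red", "shape": "octagon"})
flag_match = is_adversarial("stop_sign", {"color": "red", "shape": "octagon"})
print(flag_mismatch, flag_match)
```

The strength of this scheme is that the attacker must now fool both the classifier and the attribute extractor at once, which is much harder with a minimal perturbation.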

A study on age estimation of facial images using various CNNs (Convolutional Neural Networks) (다양한 CNN 모델을 이용한 얼굴 영상의 나이 인식 연구)

  • Sung Eun Choi
    • Journal of Platform Technology
    • /
    • v.11 no.5
    • /
    • pp.16-22
    • /
    • 2023
  • There is growing interest in facial age estimation because many applications require age estimation from facial images. Estimating the exact age of a face requires extracting aging features from a face image and classifying the age according to the extracted features. Recently, the performance of various CNN-based deep learning models has greatly improved in the image recognition field, and such models are being used to improve performance in facial age estimation. In this paper, age estimation performance was compared by learning facial features with various CNN-based models: AlexNet, VGG-16, VGG-19, ResNet-18, ResNet-34, ResNet-50, ResNet-101, and ResNet-152. The experiments confirmed that the model using ResNet-34 performed best.
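The per-model comparison above could be scored, for instance, with mean absolute error (MAE) between predicted and true ages; the predictions below are fabricated purely to illustrate the metric, not results from the paper:

```python
# Sketch: ranking age-estimation models by mean absolute error.
def mae(y_true, y_pred):
    """Average absolute deviation between true and predicted ages."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

true_ages = [25, 40, 33, 58]
predictions = {                      # hypothetical model outputs
    "ResNet-34": [26, 38, 34, 56],
    "AlexNet": [30, 35, 28, 50],
}
scores = {name: mae(true_ages, p) for name, p in predictions.items()}
best = min(scores, key=scores.get)
print(scores, best)
```

Lower MAE means tighter age predictions; a per-model table of such scores is the usual way results like the ResNet-34 finding are reported.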


A Novel Approach to COVID-19 Diagnosis Based on Mel Spectrogram Features and Artificial Intelligence Techniques

  • Alfaidi, Aseel;Alshahrani, Abdullah;Aljohani, Maha
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.9
    • /
    • pp.195-207
    • /
    • 2022
  • COVID-19 has remained one of the most serious health crises in recent history, resulting in the tragic loss of lives and significant economic impacts across the world. The difficulty of controlling COVID-19 poses a threat to the global health sector. Given that Artificial Intelligence (AI) has contributed to improving research methods and solving problems in diverse fields of study, AI algorithms have also proven effective in disease detection and early diagnosis. Specifically, acoustic features offer a promising prospect for the early detection of respiratory diseases. Motivated by these observations, this study conceptualized a speech-based diagnostic model to aid COVID-19 diagnosis. The proposed methodology uses speech signals from confirmed positive and negative cases of COVID-19 and extracts features with the pre-trained Visual Geometry Group (VGG-16) model applied to Mel spectrogram images. The K-means algorithm then determines the effective features, and a Genetic Algorithm-Support Vector Machine (GA-SVM) classifier classifies the cases. The experimental findings indicate that the proposed methodology can classify COVID-19 and non-COVID-19 cases across varying ages and different spoken languages, as demonstrated in the simulations. Because the methodology relies on deep features followed by a dimension-reduction step, it produces better and more consistent performance than the handcrafted features used in previous studies.
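One way K-means can thin a deep feature vector, as in the pipeline above, is to keep only the features nearest each cluster centroid; the tiny 1-D k-means below and its cluster count are illustrative assumptions, not the paper's algorithm:

```python
# Sketch: K-means-based selection of representative deep features.
def kmeans_1d(values, k, iters=10):
    """Tiny 1-D k-means; assumes 2 <= k <= len(values)."""
    svals = sorted(values)
    # spread the initial centroids across the sorted value range
    centroids = [svals[i * (len(svals) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            groups[idx].append(v)
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return centroids

def representative_features(values, k):
    """Indices of the features nearest to each cluster centroid."""
    picked = {min(range(len(values)), key=lambda i: abs(values[i] - c))
              for c in kmeans_1d(values, k)}
    return sorted(picked)

deep_feats = [0.1, 0.2, 0.3, 1.0]  # stand-in for VGG-16 deep features
selected = representative_features(deep_feats, 2)
print(selected)
```

The reduced feature subset would then go to the GA-SVM stage, where the genetic algorithm tunes the SVM hyperparameters.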

Preparing for low-surface-brightness science with the Rubin Observatory: characterisation of LSB tidal features from mock images

  • Martin, Garreth W.
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.46 no.2
    • /
    • pp.40.3-41
    • /
    • 2021
  • Minor mergers leave behind long-lived but extremely faint and extended tidal features, including tails, streams, loops and plumes. These act as a fossil record of the host galaxy's past interactions, allowing us to infer recent accretion histories and place constraints on the properties and nature of a galaxy's dark matter halo. However, the shallow imaging or small homogeneous samples of past surveys have resulted in weak observational constraints on the role of galaxy mergers and interactions in galaxy assembly. The Rubin Observatory, which is optimised to deliver fast, wide field-of-view imaging, will enable deep and unbiased observations over the 18,000 square degrees of the Legacy Survey of Space and Time (LSST), yielding samples of potentially millions of objects undergoing tidal interactions. Using realistic mock images produced with state-of-the-art cosmological simulations, we perform a comprehensive theoretical investigation of the extended diffuse light around galaxies and galaxy groups down to low stellar mass densities. We consider the nature, frequency and visibility of tidal features and debris across a range of environments and stellar masses, as well as their reliability as an indicator of galaxy accretion histories. We consider how observational biases such as projection effects, the point spread function and survey depth may affect the proper characterisation and measurement of tidal features, finding that LSST will be capable of recovering much of the flux found in the outskirts of L* galaxies at redshifts beyond the local volume. In our simulated sample, tidal features are ubiquitous in L* galaxies and remain common even at significantly lower masses (M*>10^10 Msun). The fraction of stellar mass found in tidal features increases towards higher masses, rising to 5-10% for the most massive objects in our sample (M*~10^11.5 Msun). Such objects frequently exhibit many distinct tidal features, often with complex morphologies, which become increasingly numerous with increased depth. The interpretation and characterisation of such features can vary significantly with orientation and imaging depth. Our findings demonstrate the importance of accounting for the biases that arise from projection effects and surface-brightness limits, and suggest that, even after the LSST is complete, much of the discovery space in the low-surface-brightness Universe will remain to be explored.
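The surface-brightness limits discussed above are conventionally quoted in mag/arcsec²; a minimal sketch of the standard conversion from flux per unit solid angle follows, with the zero point being an assumed value, not one from this work:

```python
# Sketch: converting flux per square arcsecond to surface brightness
# in mag/arcsec^2 via mu = ZP - 2.5 * log10(F). ZP is an assumption.
import math

def surface_brightness(flux_per_arcsec2, zero_point=27.0):
    """Standard magnitude formula applied per unit area."""
    return zero_point - 2.5 * math.log10(flux_per_arcsec2)

# Each factor of 100 in flux corresponds to 5 magnitudes.
mu_bright = surface_brightness(1.0)
mu_faint = surface_brightness(0.01)
print(mu_bright, mu_faint)
```

The logarithmic scale is why faint tidal debris, at several magnitudes below the sky level, demands the very deep, well-characterised imaging that LSST is designed to provide.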


Injection Molding of High Aspect Ratio Nano Features Using Stamper Heating/Cooling Process (스탬퍼 가열/냉각을 이용한 고세장비 나노 구조물 성형)

  • Yoo, Y.E.;Choi, S.J.;Kim, S.K.;Choi, D.S.;Whang, K.H.
    • Transactions of Materials Processing
    • /
    • v.16 no.1 s.91
    • /
    • pp.20-24
    • /
    • 2007
  • A polypropylene substrate with hair-like nano features (aspect ratio ~10) on its surface is fabricated by an injection molding process. A pure aluminum plate is anodized to form a nano pore array on its surface and is used as a stamper for molding the nano features. The stamper measures 30 mm × 30 mm and is 1 mm thick; the fabricated pores are about 120 nm in diameter and 1.5 µm deep. For molding a substrate with nano-hair surface features, the stamper is heated above 150 °C before the filling stage and cooled below 70 °C after filling to release the molded part. For heating, the stamper itself is used as the heating element by applying electrical power directly across its ends; it then cools down without circulation of a coolant such as water or oil. With this new stamper heating method, nano hairs with an aspect ratio of about 10 were successfully injection molded. We also found that the stamper heating and cooling process aids the release of the molded nano hairs.
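A back-of-the-envelope view of the direct (Joule) stamper heating described above can be sketched as follows; the power level, heating time, and material constants are all assumed for illustration and are not values from the paper:

```python
# Sketch: adiabatic estimate of the stamper's temperature rise when
# the aluminum plate itself is the resistive heating element.
# dT = P * t / (m * c); all numeric inputs below are assumptions.
def temperature_rise(power_w, seconds, mass_kg, specific_heat_j_per_kg_k):
    """Temperature increase ignoring heat losses to mold and air."""
    return power_w * seconds / (mass_kg * specific_heat_j_per_kg_k)

# 30 mm x 30 mm x 1 mm aluminum plate: density ~2700 kg/m^3, c ~900 J/(kg*K)
mass = 2700 * 0.03 * 0.03 * 0.001   # about 2.4 g
dT = temperature_rise(power_w=200, seconds=1.0,
                      mass_kg=mass, specific_heat_j_per_kg_k=900)
print(round(dT, 1))
```

The small thermal mass of a thin stamper is what makes rapid heating above 150 °C and passive cooling below 70 °C practical within one molding cycle.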

Newly discovered Footprints of Galaxy Interaction around Seyfert 2 galaxy NGC 7743

  • Kim, Yongjung;Im, Myungshin;Choi, Changsu;Hyun, Minhee;Yoon, Yongmin;Taak, Yoonchan
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.39 no.1
    • /
    • pp.43.1-43.1
    • /
    • 2014
  • It has been suggested that only the most luminous AGNs ($L \geq 10^{45} L_{\odot}$) are triggered by galaxy mergers, while less luminous AGNs ($L \sim 10^{43} L_{\odot}$) are driven by other internal processes. The lack of merging features in the host galaxies of low-luminosity AGNs has been a main argument against merger triggering of low-luminosity AGNs, but merging, especially a rather minor one, might still have played an important role, since minor merging features are more difficult to identify than major ones. Using SNUCAM on the 1.5m telescope at Maidanak Observatory, we obtained deep images of NGC 7743, a barred spiral galaxy classified as a Seyfert 2 AGN with a low bolometric luminosity of $5 \times 10^{42} L_{\odot}$. Surprisingly, we discovered new merging features around the galaxy, indicating past merging activity. This example suggests that the merging fraction of low-luminosity AGNs may be much higher than previously thought, hinting at the importance of galaxy mergers even in low-luminosity AGNs.
