• Title/Summary/Keyword: String Detection (문자열 탐지)

42 search results

The Influence of perceptual load on target identification and negative repetition effect in post-cueing forced choice task (순간 노출되는 표적의 식별과 부적 반복효과에 지각부하가 미치는 영향)

  • Kim, Inik; Park, ChangHo
    • Korean Journal of Cognitive Science / v.33 no.1 / pp.1-22 / 2022
  • Lavie's perceptual load theory (Lavie, 1995) proposes that the influence of distractors is blocked as the load gets higher. Studies of perceptual load have usually adopted the flanker task developed by Eriksen and Eriksen (1974), which measures reaction time to a target flanked by distractors. In the post-cueing forced choice task, participants must report the identity of a target cued after the display, and a negative repetition effect (NRE) has often been observed: identification accuracy is worse when the target is flanked by identical nontargets than when it is flanked by different nontargets. This study examined whether perceptual load affects identification rate and the NRE. Experiment 1 manipulated the similarity between targets and a distractor and observed a tendency toward an NRE, but no effect of perceptual load. Experiment 2 presented four, two (in two kinds of diagonal arrangement), or no distractors of the same identity to impose a heavier perceptual load; the NRE was significant, and perceptual load reached significance but showed no linear trend. Experiment 3 checked again whether the NRE would vary across two levels of perceptual load strengthened by positional variability of the load stimuli, but found no effect of perceptual load. It is concluded that perceptual load may have only a limited effect on the early stage of perceptual processing because the briefly exposed targets are processed under divided attention. Implications of these findings are discussed.

The Method for Real-time Complex Event Detection of Unstructured Big data (비정형 빅데이터의 실시간 복합 이벤트 탐지를 위한 기법)

  • Lee, Jun Heui; Baek, Sung Ha; Lee, Soon Jo; Bae, Hae Young
    • Spatial Information Research / v.20 no.5 / pp.99-109 / 2012
  • Recently, with the growth of social media and the spread of smartphones, the amount of data generated through SNS (Social Network Service) has increased considerably. Accordingly, the concept of big data has emerged, and many researchers are seeking ways to make the best use of it. To maximize the creative value of the big data held by companies, it must be combined with existing data, but the physical and logical storage structures of the data sources differ so much that a system capable of integrating and managing them is needed. MapReduce was developed to process big data quickly through distributed processing, but it is difficult to build and store indexes for every keyword, and the store-then-search workflow makes real-time processing difficult; processing complex events over heterogeneous data without a dedicated processing structure also incurs extra cost. To solve this problem, an existing Complex Event Processing (CEP) system can be used: it receives data from different sources and combines them, enabling complex event processing that is especially useful for real-time processing of stream data. Nevertheless, unstructured data such as SNS posts and internet articles is managed as plain text, so strings must be compared at every query, which degrades performance. We therefore extend the CEP system so that unstructured data can be managed and queries processed quickly, extending its data-combination function to give strings a logical schema. This is achieved by converting string keywords into integer values through filtering with a keyword set. In addition, by processing stream data in memory in real time within the CEP system, we reduce the time spent reading data back from disk during query processing.
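
As a rough illustration of the keyword-to-integer conversion described in this abstract, the following Python sketch filters incoming text against a registered keyword set and matches events on integer IDs instead of strings. The names and data layout are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch: replace string comparison with integer comparison by
# filtering event text against a registered keyword set.
KEYWORD_IDS: dict[str, int] = {}            # keyword string -> integer id

def register_keyword(word: str) -> int:
    """Assign a stable integer id to a keyword of interest."""
    return KEYWORD_IDS.setdefault(word, len(KEYWORD_IDS))

def encode_event(text: str) -> set[int]:
    """Keep only registered keywords and map them to integer ids."""
    return {KEYWORD_IDS[t] for t in text.lower().split() if t in KEYWORD_IDS}

# Register keywords once, e.g. when the continuous query is compiled.
for w in ("earthquake", "flood", "outage"):
    register_keyword(w)

# Stream operators can now match on small integer sets instead of raw strings.
event = encode_event("Power outage reported after the earthquake")
query = {KEYWORD_IDS["earthquake"], KEYWORD_IDS["outage"]}
print(query.issubset(event))                # True: the complex-event condition fires
```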

Modified File Title Normalization Techniques for Copyright Protection (저작권 보호를 위한 변형된 파일 제목 정규화 기법)

  • Hwang, Chan Woong; Ha, Ji Hee; Lee, Tea Jin
    • Convergence Security Journal / v.19 no.4 / pp.133-142 / 2019
  • Torrents, P2P sites, and web-hard services are frequently used simply because content can be downloaded easily, freely or at low prices, so domestic torrent, P2P, and web-hard services are very sensitive to copyright issues, and filtering techniques have been researched and applied to them. Among these, title- and string-comparison filtering blocks known file titles and combinations of keywords, but it is easily bypassed by changing the title or its spacing. To detect and block illegal copies for copyright protection, a technique for normalizing modified file titles is therefore essential. In this paper, we compared detection rates by searching for illegal copies before and after normalizing their modified file titles. The detection rate was 77.72% before normalization, while it reached 90.23% after normalization. In the future, better handling of meaningless tokens, such as common date and quality markers, is expected to yield even better results.
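
The abstract does not give the exact normalization rules, so the sketch below only illustrates the general idea: stripping quality/date tokens, spacing, and punctuation so that trivially modified titles collapse to the same search key. The token list and rules are illustrative assumptions.

```python
import re

# Assumed noise tokens: quality tags and long digit runs often appended to titles.
NOISE = re.compile(r"(1080p|720p|4k|x264|x265|hdrip|web-?dl|\d{6,8})", re.I)

def normalize_title(title: str) -> str:
    """Collapse a (possibly modified) file title into a canonical search key."""
    t = title.lower()
    t = NOISE.sub("", t)                    # remove quality/date tokens
    t = re.sub(r"[\W_]+", "", t)            # drop spaces, punctuation, underscores
    return t

# Two differently spaced/decorated titles map to the same key after normalization.
print(normalize_title("Some.Movie 2023 1080p WEB-DL") ==
      normalize_title("some_movie-2023"))  # True
```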

Function partitioning methods for malware variant similarity comparison (변종 악성코드 유사도 비교를 위한 코드영역의 함수 분할 방법)

  • Park, Chan-Kyu; Kim, Hyong-Shik; Lee, Tae Jin; Ryou, Jae-Cheol
    • Journal of the Korea Institute of Information Security & Cryptology / v.25 no.2 / pp.321-330 / 2015
  • Many modified malware samples have been found that avoid detection simply by replacing a sequence of characters or a part of the code. Since existing anti-virus programs perform signature-based analysis, it is difficult to detect malware that differs only slightly from well-known malware. This paper suggests a method of detecting modified malware by extending hash-value-based code comparison. We generate hash values for individual functions and individual code blocks as well as for the whole code, and use those values to determine whether a pair of code samples are similar to a certain degree. We also eliminate numeric data such as constants and addresses before generating the hash values, to avoid the inaccuracies they would introduce. We found that the suggested method could effectively find the inherent similarity between an original malware sample and its derived variants.
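
A minimal Python sketch of the per-function hashing idea, assuming function boundaries and disassembled instructions are already available from an external tool; constants and addresses are masked before hashing, as the abstract describes, and similarity is taken here as the fraction of shared function hashes.

```python
import hashlib
import re

# Immediate values and addresses are replaced with a placeholder before hashing.
ADDR_OR_CONST = re.compile(r"0x[0-9a-f]+|\b\d+\b", re.I)

def function_hash(instructions: list[str]) -> str:
    """Hash one function's instructions with addresses/constants masked out."""
    masked = [ADDR_OR_CONST.sub("IMM", ins) for ins in instructions]
    return hashlib.sha256("\n".join(masked).encode()).hexdigest()

def similarity(funcs_a: list[list[str]], funcs_b: list[list[str]]) -> float:
    """Jaccard similarity over the two samples' function-hash sets."""
    ha = {function_hash(f) for f in funcs_a}
    hb = {function_hash(f) for f in funcs_b}
    return len(ha & hb) / max(len(ha | hb), 1)

sample_a = [["push ebp", "mov ebp, esp", "mov eax, 0x401000", "ret"]]
sample_b = [["push ebp", "mov ebp, esp", "mov eax, 0x402abc", "ret"]]
print(similarity(sample_a, sample_b))   # 1.0: identical once constants are masked
```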

Research on Malicious code hidden website detection method through WhiteList-based Malicious code Behavior Analysis (WhiteList 기반의 악성코드 행위분석을 통한 악성코드 은닉 웹사이트 탐지 방안 연구)

  • Ha, Jung-Woo; Kim, Huy-Kang; Lim, Jong-In
    • Journal of the Korea Institute of Information Security & Cryptology / v.21 no.4 / pp.61-75 / 2011
  • Recently, there has been a significant increase in massive attacks that try to infect PCs visiting websites containing pre-implanted malicious code. Through these hidden malicious codes, attackers can gain monetary profit or launch various cyber attacks such as building botnets for DDoS attacks and stealing personal information. This kind of malicious activity is continuously increasing, and its evasion techniques are becoming professional and sophisticated. So far, signature-based detection of websites containing malicious code has been limited in preventing internet users from being exposed to it, since a blacklist alone cannot detect an attack when the attacker proactively changes strings in the malicious code. In this paper, we propose a novel approach that can detect unknown malicious code that is not well detected by signature-based detection. Our method can detect new malicious codes even when their signatures are not in the pattern database of an anti-virus program, and it can overcome various obfuscation techniques such as frequent changes of the redirection URL included in the malicious code. Finally, we confirm that the proposed system shows better detection performance than MC-Finder, which adopts pattern matching, Google's crawling-based malware site detection, and McAfee.
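
A conceptual sketch of whitelist-based behavior analysis: behaviors observed while a page is rendered are compared against a whitelist of expected behaviors, and anything unknown marks the site as suspicious. The behavior log format and whitelist entries below are illustrative assumptions, not the paper's profiles.

```python
# Whitelist of (behavior type, detail) pairs considered normal while browsing.
WHITELIST = {
    ("process", "iexplore.exe"),
    ("file_write", r"C:\Users\victim\AppData\Local\Temp\index.html"),
}

def suspicious_behaviors(behaviors: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Return every observed behavior that is not on the whitelist."""
    return [b for b in behaviors if b not in WHITELIST]

observed = [
    ("process", "iexplore.exe"),
    ("process", "svchost_fake.exe"),                 # dropped and executed binary
    ("file_write", r"C:\Windows\System32\evil.dll"), # write outside expected paths
]
# Any non-whitelisted behavior flags the visited site as hosting malicious code.
print(suspicious_behaviors(observed))
```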

A File Name Identification Method for P2P and Web Hard Applications through Traffic Monitoring (트래픽 모니터링을 통한 P2P 및 웹 하드 다운로드 응용의 파일이름 식별 방법)

  • Son, Hyeon-Gu; Kim, Ki-Su; Lee, Young-Seok
    • Journal of KIISE: Information Networking / v.37 no.6 / pp.477-482 / 2010
  • Recently, advanced Internet applications such as Internet telephony, multimedia streaming, and file sharing have appeared. In particular, P2P and web-based file-sharing applications have been notorious for illegal use of content and for massive traffic consumption by a few users. This paper presents a novel method to identify the file names exchanged by P2P and web-based file-sharing applications through traffic monitoring. For this purpose, we apply Korean character decoding to the IP packet payload. Experiments show that the file names requested by the BitTorrent, Clubbox, and Tple applications could be correctly identified.
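
The abstract does not specify which payload fields are decoded, so the following sketch merely illustrates recovering a %-encoded Korean file name from a payload field by trying the encodings commonly used in Korean applications (EUC-KR/CP949, then UTF-8); the field selection is an assumption.

```python
from urllib.parse import quote_from_bytes, unquote_to_bytes

def decode_filename(raw: bytes) -> str | None:
    """Try to recover a human-readable Korean file name from payload bytes."""
    raw = unquote_to_bytes(raw)             # undo %-escaping used in HTTP requests
    for enc in ("euc-kr", "cp949", "utf-8"):
        try:
            return raw.decode(enc)
        except UnicodeDecodeError:
            continue
    return None

# A Korean file name as it might appear %-encoded inside a captured request line.
payload_field = quote_from_bytes("영화.avi".encode("euc-kr")).encode()
print(decode_filename(payload_field))       # "영화.avi"
```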

Efficient Buffer-Overflow Prevention Technique Using Binary Rewriting (이진 코드 변환을 이용한 효과적인 버퍼 오버플로우 방지기법)

  • Kim Yun-Sam; Cho Eun-Sun
    • The KIPS Transactions: Part C / v.12C no.3 s.99 / pp.323-330 / 2005
  • Buffer overflow is one of the most prevalent and critical internet security vulnerabilities. Recently, various methods to prevent buffer overflow attacks have been investigated, but they are still difficult to apply to real applications because of their run-time overhead. This paper suggests an efficient binary-rewriting method that prevents buffer-overflow attacks at low cost by keeping a redundant copy of the return address in the stack frame and comparing the return address against the copy before returning. So that it cannot be overwritten by attack data, the copy is placed at a lower address than the local buffers. In addition, for a safer execution environment, every vulnerable function call is transformed during the rewriting procedure.
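
The scheme operates on binaries, but its core check can be illustrated with a toy Python model of a stack frame in which the copied return address sits below the local buffer; the layout and names here are purely illustrative, not the paper's instrumentation.

```python
# Toy model: lower list indices stand for lower addresses, and a stack-smashing
# overflow writes from the buffer toward higher addresses.
def run_frame(user_input: bytes, buf_size: int = 8) -> None:
    ret_copy = ["RETURN_ADDR"]               # copy placed below the local buffer
    buf = [0] * buf_size                     # local buffer the attacker can overflow
    saved_ret = ["RETURN_ADDR"]              # original slot above the buffer
    frame = ret_copy + buf + saved_ret

    # The overflow starts at buf and grows upward, so it can reach the saved
    # return address but never the copy stored below the buffer.
    for i, byte in enumerate(user_input):
        frame[1 + i] = byte                  # index 1 is the start of buf

    if frame[-1] != frame[0]:                # epilogue check inserted by the rewriter
        raise RuntimeError("return address corrupted - attack detected")

run_frame(b"A" * 8)                          # fits in the buffer: check passes
try:
    run_frame(b"A" * 9)                      # one byte too many: clobbers the return slot
except RuntimeError as err:
    print(err)
```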

Application of Integrated Security Control of Artificial Intelligence Technology and Improvement of Cyber-Threat Response Process (인공지능 기술의 통합보안관제 적용 및 사이버침해대응 절차 개선 )

  • Ko, Kwang-Soo; Jo, In-June
    • The Journal of the Korea Contents Association / v.21 no.10 / pp.59-66 / 2021
  • In this paper, an improved integrated security control procedure is proposed by applying artificial intelligence technology to integrated security control and unifying the existing security control and AI security control response procedures. Current cyber security control depends heavily on the ability of human analysts: it is practically unreasonable to expect people to analyze the various logs generated by different types of equipment and to process all of the rapidly increasing security events. Moreover, signature-based security equipment that detects attacks by matching strings and patterns is insufficient for accurately detecting advanced attacks such as APT (Advanced Persistent Threat). As one way to solve these problems, supervised and unsupervised learning are applied to the detection and analysis of cyber attacks; this automates and adds intelligence to the analysis of the countless logs and events that occur, and makes it possible to predict and block continuing cyberattacks, raising the overall level of response. After applying AI security control technology, an improved integrated security control service model is proposed that resolves the overlapping detections of the AI and the SIEM by integrating them into a unified breach response process.
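
As a hedged sketch of the unsupervised-learning component mentioned above, the snippet below trains scikit-learn's IsolationForest on numeric features derived from events and flags outliers; the feature set, values, and contamination rate are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [events per minute, distinct destination ports, bytes out (KB)].
normal_traffic = np.random.default_rng(0).normal([40, 5, 200], [5, 1, 30], (500, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

new_events = np.array([
    [42, 5, 210],        # looks like routine traffic
    [400, 120, 9000],    # burst across many ports: likely an attack
])
print(model.predict(new_events))   # 1 = normal, -1 = anomaly to escalate to analysts
```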

A Study on Generation Quality Comparison of Concrete Damage Image Using Stable Diffusion Base Models (Stable diffusion의 기저 모델에 따른 콘크리트 손상 영상의 생성 품질 비교 연구)

  • Seung-Bo Shim
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.28 no.4 / pp.55-61 / 2024
  • Recently, the number of aging concrete structures has been steadily increasing, as many of these structures are reaching their expected lifespan. Such structures require accurate inspections and persistent maintenance; otherwise, their original functions and performance may degrade, potentially leading to safety accidents. Therefore, research on objective inspection technologies using deep learning and computer vision is actively being conducted. High-resolution images allow not only micro cracks but also spalling and exposed rebar to be observed accurately, and deep learning enables automated detection. However, high detection performance in deep learning is guaranteed only with diverse and large training datasets, and surface damage to concrete is not commonly captured in images, resulting in a lack of training data. To overcome this limitation, this study proposed a method for generating concrete surface damage images, including cracks, spalling, and exposed rebar, using Stable Diffusion, which synthesizes new damage images from paired text and image data. For this purpose, a training dataset of 678 images was secured, and fine-tuning was performed through low-rank adaptation. The quality of the generated images was compared across three Stable Diffusion base models, and a method to synthesize the most diverse and high-quality concrete damage images was developed. This research is expected to address the issue of data scarcity and contribute to improving the accuracy of deep learning-based damage detection algorithms in the future.
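
A brief sketch of how a LoRA fine-tuned on damage images might be applied on top of a Stable Diffusion base model with the diffusers library; the base-model ID, LoRA path, and prompt are placeholders rather than the checkpoints compared in the paper.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load one possible base model (placeholder ID) and attach LoRA weights
# produced by low-rank adaptation fine-tuning on the damage dataset.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("./concrete-damage-lora")   # placeholder path

prompt = "close-up photo of a concrete surface with cracks and exposed rebar"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("generated_damage.png")                 # synthetic training sample
```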

Design and Implementation of Advanced Web Log Preprocess Algorithm for Rule based Web IDS (룰 기반 웹 IDS 시스템을 위한 효율적인 웹 로그 전처리 기법 설계 및 구현)

  • Lee, Hyung-Woo
    • Journal of Internet Computing and Services / v.9 no.5 / pp.23-34 / 2008
  • The number of web service users is increasing steadily as web-based services are offered in various forms, but web services are exposed to vulnerabilities such as SQL injection, parameter injection, and DoS attacks. It is therefore necessary to develop a Web IDS and to provide a rule-based intrusion detection/response mechanism against these attacks. However, existing Web IDS systems do not respond properly to recent web attacks because they lack a suitable pre-processing procedure for huge volumes of web log data. We therefore propose an efficient web log pre-processing mechanism that enhances rule-based detection and improves the performance of a Web-IDS-based attack response system. The proposed algorithm provides both field-unit parsing and duplicate-string elimination on web log data, which also makes it possible to construct an improved Web IDS system.
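
A small Python sketch of the two pre-processing steps named in the abstract, field-unit parsing and duplicate-string elimination, applied to combined-format access log lines; the kept fields and the deduplication key are illustrative assumptions, not the paper's algorithm.

```python
import re

# Field-unit parsing of a common/combined access-log line.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) (?P<size>\S+)'
)

def parse_fields(line: str) -> dict | None:
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

def preprocess(lines: list[str]) -> list[dict]:
    """Parse each line into fields and drop duplicated request strings."""
    seen, out = set(), []
    for line in lines:
        rec = parse_fields(line)
        if rec and rec["request"] not in seen:
            seen.add(rec["request"])
            out.append(rec)
    return out

logs = [
    '10.0.0.1 - - [05/Oct/2008:13:55:36 +0900] "GET /index.php?id=1 HTTP/1.1" 200 2326',
    '10.0.0.1 - - [05/Oct/2008:13:55:37 +0900] "GET /index.php?id=1 HTTP/1.1" 200 2326',
]
print(len(preprocess(logs)))   # 1: the duplicate request is eliminated before rule matching
```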
