Title/Summary/Keyword: machine learning techniques


Reliable Image-Text Fusion CAPTCHA to Improve User-Friendliness and Efficiency (사용자 편의성과 효율성을 증진하기 위한 신뢰도 높은 이미지-텍스트 융합 CAPTCHA)

  • Moon, Kwang-Ho; Kim, Yoo-Sung
    • The KIPS Transactions: Part C, v.17C no.1, pp.27-36, 2010
  • In Web registration pages and online polling applications, CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is used to distinguish human users from automated programs. Text-based CAPTCHAs, in which distorted text must be transcribed, have been widely used on many popular Web sites. However, because advanced optical character recognition techniques can now recognize distorted text, their reliability has become low. Image-based CAPTCHAs have been proposed to improve on the reliability of text-based CAPTCHAs, but these systems are also known to have drawbacks. First, image-based CAPTCHA systems with only a small number of files in their image dictionary are not reliable, since an attacker can learn to recognize the images by repeatedly running machine learning programs. Second, users may feel uncomfortable having to retry CAPTCHA tests when they fail to enter the correct keyword. Third, some image-based CAPTCHAs incur high communication costs because several image files must be sent for one CAPTCHA. To solve these problems, this paper proposes a new CAPTCHA based on both an image and text. In this system, an image and keywords are integrated into one CAPTCHA image that gives the user a hint for the answer keyword, helping users enter the answer easily. The proposed system also reduces communication costs, since it uses only one fused image file per CAPTCHA. To improve the reliability of the image-text fusion CAPTCHA, we also propose a method for dynamically building a large image dictionary by gathering a huge number of images from the Internet, with a filtering phase to preserve the correctness of the CAPTCHA images. Experiments show that the proposed image-text fusion CAPTCHA provides users with more convenience and higher reliability than image-based CAPTCHAs.
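As a rough illustration of the fusion idea (not the authors' implementation), the following Python sketch overlays a jittered hint keyword onto a photo with Pillow, so that a single fused image carries both the visual hint and the text challenge; the file names, font choice, and jitter parameters are hypothetical.

```python
# Minimal sketch of image-text fusion: draw a distorted keyword onto a photo
# so one image file serves as the whole CAPTCHA (hint + text challenge).
from PIL import Image, ImageDraw, ImageFont
import random

def fuse_captcha(photo_path: str, keyword: str, out_path: str) -> None:
    img = Image.open(photo_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()  # a larger TTF font would be used in practice
    x, y = 10, 10
    for ch in keyword:
        # jitter each character's position to resist OCR-style attacks
        draw.text((x + random.randint(-3, 3), y + random.randint(-5, 5)),
                  ch, fill=(255, 255, 255), font=font)
        x += 14
    img.save(out_path)

# fuse_captcha("cat.jpg", "kitten", "captcha.png")  # hypothetical usage
```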

Development of CCTV Cooperation Tracking System for Real-Time Crime Monitoring (실시간 범죄 모니터링을 위한 CCTV 협업 추적시스템 개발 연구)

  • Choi, Woo-Chul; Na, Joon-Yeop
    • Journal of the Korea Academia-Industrial cooperation Society, v.20 no.12, pp.546-554, 2019
  • Typically, closed-circuit television (CCTV) monitoring is used mainly for post-incident processes (i.e., to provide evidence after an incident has occurred), but with streaming video feeds, machine learning, and advanced image recognition techniques, current technology can be extended to respond to crimes or reports of missing persons in real time. The multi-CCTV cooperation technique developed in this study is a program model that extracts similarity information about a suspect (or moving object) from CCTV at one location and sends it to a monitoring agent, so that the selected suspect or object can continue to be tracked when it moves out of one camera's range and into another's. To improve the operating efficiency of local government CCTV control centers, we describe the partial automation of a CCTV control system that currently relies on monitoring by human agents. We envisage an integrated crime prevention service that incorporates the cooperative CCTV network suggested in this study and that citizens can easily experience, for example through precise real-time individual localization and a crime prevention service linked to smartphones and/or crime prevention/safety information.
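The cross-camera handoff step could look roughly like the sketch below, which assumes appearance feature vectors (e.g., from a re-identification network) have already been extracted per camera; the `best_match` helper and the similarity threshold are illustrative stand-ins, not the authors' code.

```python
# Hedged sketch of cross-camera handoff: compare the suspect's appearance
# feature vector against each camera's best candidate and hand tracking over
# to the camera whose detection is most similar.
from typing import Optional
import numpy as np

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_match(suspect_feat: np.ndarray,
               detections: dict,  # camera_id -> candidate feature vector
               threshold: float = 0.7) -> Optional[str]:
    cam, score = max(((c, cosine_sim(suspect_feat, f))
                      for c, f in detections.items()), key=lambda t: t[1])
    return cam if score >= threshold else None  # None: suspect not re-acquired
```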

Data Mining Tool for Stock Investors' Decision Support (주식 투자자의 의사결정 지원을 위한 데이터마이닝 도구)

  • Kim, Sung-Dong
    • The Journal of the Korea Contents Association, v.12 no.2, pp.472-482, 2012
  • There are many investors in the stock market, and more and more people are becoming interested in stock investment. To avoid risk and make a profit, investors must make several decisions using various information: they have to select profitable stocks and determine appropriate buying/selling prices and a holding period. This paper proposes a data mining tool to support investors' decision making. The tool lets stock investors apply machine learning techniques to generate stock price prediction models, helps determine buying/selling prices and a holding period, and supports an individual investor's own decision making with past data. Using the proposed tool, users can manage stock data, generate their own stock price prediction models, and establish a trading policy via investment simulation. Users select the technical indicators that they think affect future stock prices, generate stock price prediction models from those indicators, and test the models. They then perform investment simulation with suitable models to find an appropriate trading policy consisting of buying/selling prices and a holding period. With the proposed tool, stock investors can expect more profit with the help of a stock price prediction model and a trading policy validated on past data, instead of relying on emotional decisions.
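A minimal sketch of the kind of workflow such a tool supports, using scikit-learn on synthetic data (the indicator columns `ma5` and `rsi14` and all values are hypothetical): fit a next-day direction model on user-chosen technical indicators and score it on held-out past data.

```python
# Sketch: predict next-day price direction from technical indicators,
# keeping time order in the train/test split to mimic backtesting on past data.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
df = pd.DataFrame({"ma5": rng.normal(size=300),          # hypothetical indicators
                   "rsi14": rng.uniform(0, 100, 300),
                   "close": rng.normal(100, 5, 300)})
df["up_next"] = (df["close"].shift(-1) > df["close"]).astype(int)

X, y = df[["ma5", "rsi14"]], df["up_next"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)  # no look-ahead
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("hold-out accuracy:", model.score(X_te, y_te))
```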

Performance Optimization Strategies for Fully Utilizing Apache Spark (아파치 스파크 활용 극대화를 위한 성능 최적화 기법)

  • Myung, Rohyoung; Yu, Heonchang; Choi, Sukyong
    • KIPS Transactions on Computer and Communication Systems, v.7 no.1, pp.9-18, 2018
  • Enhancing the performance of big data analytics in distributed environments has become an important issue, because most big data applications, such as machine learning techniques and streaming services, run on distributed computing frameworks; accordingly, optimizing the performance of such applications on Spark has been actively researched. Optimizing application performance in a distributed environment is challenging because it requires not only optimizing the applications themselves but also tuning the configuration parameters of the distributed system. Although prior research has made great efforts to improve execution performance, most of it focused on only one of three optimization aspects (application design, system tuning, or hardware utilization) and therefore could not orchestrate these aspects together. In this paper, we analyze and model Spark's application processing procedure in depth. From the results of this analysis, we propose performance optimization schemes for each step of the procedure: the inner stage and the outer stage. We also propose an appropriate partitioning mechanism based on an analysis of the relationship between partitioning parallelism and application performance. We applied these three optimization schemes to WordCount, PageRank, and K-means, which are basic big data analytics workloads, and observed nearly 50% performance improvement when all of the schemes were applied.
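As a hedged illustration of the partition-tuning aspect on one of the tested workloads (WordCount), the PySpark sketch below sets parallelism and memory parameters explicitly; the specific values are placeholders chosen for a small local cluster, not the paper's optimized settings.

```python
# Sketch: a WordCount job with explicit partitioning and system parameters,
# showing where application-level and configuration-level tuning both apply.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("wordcount-tuning")
         .config("spark.default.parallelism", "8")   # match available cores
         .config("spark.executor.memory", "2g")      # illustrative value
         .getOrCreate())

lines = spark.sparkContext.textFile("input.txt", minPartitions=8)
counts = (lines.flatMap(lambda l: l.split())
               .map(lambda w: (w, 1))
               .reduceByKey(lambda a, b: a + b, numPartitions=8))
counts.saveAsTextFile("counts_out")
spark.stop()
```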

Load Fidelity Improvement of Piecewise Integrated Composite Beam by Construction Training Data of k-NN Classification Model (k-NN 분류 모델의 학습 데이터 구성에 따른 PIC 보의 하중 충실도 향상에 관한 연구)

  • Ham, Seok Woo; Cheon, Seong S.
    • Composites Research, v.33 no.3, pp.108-114, 2020
  • A Piecewise Integrated Composite (PIC) beam is composed of different stacking sequences, assigned by loading type, depending on location. The aim of the current study is to assign stacking sequences that are robust against external loading to every corresponding part of the PIC beam, based on the value of stress triaxiality at generated reference points, using k-NN (k-Nearest Neighbor) classification, one of the representative machine learning techniques, in order to achieve superior bending characteristics. The stress triaxiality at the reference points is obtained from a three-point bending analysis of an aluminum beam, with the training data categorizing the type of external loading, i.e., tension, compression, or shear. The loading type of each plane of the beam was classified by an independent-plane scheme as well as a total-beam scheme. Load fidelities were also calibrated for each case by varying the hyperparameters. The most effective stacking sequences were mapped onto the PIC beam based on the k-NN classification model with the highest load fidelity. FE analysis results show that the PIC beam has superior resistance to external loading and better energy absorption than a conventional beam.
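The classification step might be sketched as follows with scikit-learn, using stress triaxiality at reference points as the feature; the training values and the choice of k are made up for illustration, not the paper's calibrated setup.

```python
# Sketch: label reference points as tension / compression / shear from
# stress triaxiality with a k-NN classifier.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# illustrative training data: stress triaxiality -> loading type
X_train = np.array([[0.40], [0.35], [-0.38], [-0.33], [0.02], [-0.01]])
y_train = np.array(["tension", "tension", "compression", "compression",
                    "shear", "shear"])

knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
print(knn.predict(np.array([[0.31], [-0.05]])))  # e.g. ['tension' 'shear']
```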

K-means clustering analysis and differential protection policy according to 3D NAND flash memory error rate to improve SSD reliability

  • Son, Seung-Woo; Kim, Jae-Ho
    • Journal of the Korea Society of Computer and Information, v.26 no.11, pp.1-9, 2021
  • 3D NAND flash memory provides high capacity per unit area by stacking 2D NAND cells, which have a planar structure. However, due to the nature of the stacking process, the frequency of errors can vary depending on the layer and the physical cell location, and this phenomenon becomes more pronounced as the number of program/erase (P/E) operations increases. Most flash-based storage devices such as SSDs use ECC for error correction. Since this method provides a fixed data protection strength for all flash memory pages, it has limitations in 3D NAND flash memory, where the error rate varies with physical location. Therefore, in this paper, pages and layers with different error rates are grouped into clusters with the K-means machine learning algorithm, and a differentiated data protection strength is applied to each cluster. We classify pages and layers based on the number of errors measured after an endurance test, in which the error rate varies significantly across pages and layers, and we add parity data to stripes in error-prone areas to provide differentiated protection. We show that this differentiated data protection policy can contribute to improving the reliability and lifespan of 3D NAND flash memory compared with protection techniques that use RAID-like parity or ECC alone.
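A minimal sketch of the clustering step with scikit-learn, using synthetic per-page error counts in place of measured endurance-test data; the number of clusters and error regimes are illustrative.

```python
# Sketch: cluster page locations by observed error count so each cluster
# can be assigned its own protection strength (e.g., extra stripe parity).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# synthetic per-page error counts drawn from three rough error regimes
errors = rng.poisson(lam=[5, 20, 60], size=(100, 3)).reshape(-1, 1)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(errors)
for c in range(3):
    # clusters with a higher mean error count would get stronger protection
    print("cluster", c, "mean errors:", errors[kmeans.labels_ == c].mean())
```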

Study on Anomaly Detection Method of Improper Foods using Import Food Big data (수입식품 빅데이터를 이용한 부적합식품 탐지 시스템에 관한 연구)

  • Cho, Sanggoo; Choi, Gyunghyun
    • The Journal of Bigdata, v.3 no.2, pp.19-33, 2018
  • Owing to the growth of FTAs and food trade and the versatile preferences of consumers, food imports have increased at a tremendous rate every year. While inspection checks of imported food account for about 20% of total food imports, the budget and manpower available for the government's import inspection control are reaching their limits. Sudden imported-food incidents can cause enormous social and economic losses, so a predictive system that forecasts the compliance of food imports and enables preemptive measures would greatly improve the efficiency and effectiveness of import safety management. A huge amount of data has already accumulated from past inspections, and processed foods account for 75% of total food imports. Big data analysis techniques can extract meaningful information from such large amounts of data; unfortunately, few studies have analyzed imported food using this big data. In this context, this study applied a variety of machine learning classification algorithms and suggested a data preprocessing method based on generating new derived variables to improve model accuracy. The study also compared the performance of the predictive classification algorithms against general base classifiers. Among the various base classifiers, the Gaussian Naïve Bayes prediction model showed the best performance in detecting and predicting the nonconformity of imported food. The application of this Gaussian Naïve Bayes anomaly detection model is expected to reduce the burden of import food inspection and raise the detected nonconformity rate, which will greatly improve the efficiency of food import safety management and the speed of import customs clearance.
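A hedged sketch of the best-performing base classifier, Gaussian Naïve Bayes, on hypothetical derived variables; the features and labels below are synthetic stand-ins for the import declaration data, not the study's dataset.

```python
# Sketch: Gaussian Naive Bayes predicting conformity vs. nonconformity
# from numeric derived variables, evaluated with cross-validation.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # e.g., exporter history, product risk scores...
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 1).astype(int)

gnb = GaussianNB()
print("mean CV accuracy:", cross_val_score(gnb, X, y, cv=5).mean())
```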

Domain Knowledge Incorporated Counterfactual Example-Based Explanation for Bankruptcy Prediction Model (부도예측모형에서 도메인 지식을 통합한 반사실적 예시 기반 설명력 증진 방법)

  • Cho, Soo Hyun; Shin, Kyung-shik
    • Journal of Intelligence and Information Systems, v.28 no.2, pp.307-332, 2022
  • One of the most intensively studied areas in business applications is the bankruptcy prediction model, a representative classification problem related to loan lending, investment decision making, and profitability for financial institutions. Much research has demonstrated outstanding performance for bankruptcy prediction models using artificial intelligence techniques. However, since most machine learning algorithms are "black boxes," explainable AI has been identified as a prominent research topic for providing users with explanations. Although there are many different approaches to explanation, this study focuses on explaining a bankruptcy prediction model using counterfactual examples. A counterfactual-based explanation provides an alternative case showing how users could obtain the desired output from the model. This study introduces a counterfactual generation technique based on a genetic algorithm (GA) that leverages both domain knowledge (i.e., causal feasibility) and feature importance from a black-box model, along with other critical counterfactual properties, including proximity, distribution, and sparsity. The proposed method was evaluated quantitatively and qualitatively to measure its quality and validity.
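A toy, mutation-only sketch of GA-based counterfactual search follows: it evolves perturbations of an instance until a stand-in black-box model flips its prediction, penalizing distance (proximity) and the number of changed features (sparsity). The weights, the absence of crossover, and the omission of the paper's causal-feasibility constraints make this an illustration only.

```python
# Sketch: evolutionary search for a counterfactual example near x0.
import numpy as np

def fitness(x, x0, predict, w_prox=1.0, w_sparse=0.1):
    flip = 0.0 if predict(x) != predict(x0) else 10.0   # must flip the outcome
    prox = np.linalg.norm(x - x0)                       # proximity term
    sparse = np.count_nonzero(np.abs(x - x0) > 1e-6)    # sparsity term
    return flip + w_prox * prox + w_sparse * sparse

def ga_counterfactual(x0, predict, pop=50, gens=100, sigma=0.3, seed=0):
    rng = np.random.default_rng(seed)
    P = x0 + rng.normal(scale=sigma, size=(pop, x0.size))     # initial population
    for _ in range(gens):
        scores = np.array([fitness(x, x0, predict) for x in P])
        parents = P[np.argsort(scores)[: pop // 2]]           # truncation selection
        children = parents + rng.normal(scale=sigma, size=parents.shape)  # mutation
        P = np.vstack([parents, children])
    return min(P, key=lambda x: fitness(x, x0, predict))

# stand-in "black-box" model: classify by the sign of the feature sum
predict = lambda x: int(x.sum() > 0)
x0 = np.array([-1.0, -0.5, -0.2])            # predicted class 0
cf = ga_counterfactual(x0, predict)
print(cf, predict(cf))                       # nearby instance with class 1
```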

A Study on the Performance Degradation Pattern of Caisson-type Quay Wall Port Facilities (케이슨식 안벽 항만시설의 성능저하패턴 연구)

  • Na, Yong Hyoun; Park, Mi Yeon; Jang, Shinwoo
    • Journal of the Society of Disaster Information, v.18 no.1, pp.146-153, 2022
  • Purpose: Domestic port structures that have been in use for a long time have many problems in terms of safety, performance, and functionality owing to the enlargement of ships, increased frequency of use, and the effects of natural disasters driven by climate change. A big data analysis method was studied to develop an approximate model that can predict the aging pattern of a port facility based on its maintenance history data. Method: In this study, member-level maintenance history data for caisson-type quay walls were collected and defined as big data, and based on these data a predictive approximation model was derived to estimate the aging pattern and deterioration of a facility at the project level. A state-based aging pattern prediction model generated with Gaussian process (GP) and linear interpolation (SLPT) techniques was proposed, and the models best suited to big data utilization were compared and selected through validation. Result: In examining the suitability of the proposed methods, the SLPT technique showed RMSE values of 0.9215 and 0.0648, so the predictive model based on the SLPT technique is considered suitable. Conclusion: This study suggests that big data-based prediction of facility performance degradation can become an important component of maintenance decision-making.
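The Gaussian-process component of such a state-based aging model might be sketched as follows with scikit-learn; the (age, condition grade) points, kernel, and grading scale are synthetic placeholders, not the collected maintenance data.

```python
# Sketch: fit facility condition grade against service age with a GP and
# predict the degradation curve with uncertainty bands.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

age = np.array([[2], [7], [12], [18], [25], [32]], dtype=float)   # years in service
grade = np.array([4.8, 4.5, 4.0, 3.4, 2.7, 2.1])                  # condition index

gp = GaussianProcessRegressor(kernel=RBF(10.0) + WhiteKernel(0.05),
                              normalize_y=True).fit(age, grade)
t = np.linspace(0, 40, 9).reshape(-1, 1)
mean, std = gp.predict(t, return_std=True)
for ti, m, s in zip(t.ravel(), mean, std):
    print(f"age {ti:4.1f}y -> grade {m:.2f} +/- {s:.2f}")
```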

A study on time series linkage in the Household Income and Expenditure Survey (가계동향조사 지출부문 시계열 연계 방안에 관한 연구)

  • Kim, Sihyeon; Seong, Byeongchan; Choi, Young-Geun; Yeo, In-kwon
    • The Korean Journal of Applied Statistics, v.35 no.4, pp.553-568, 2022
  • The Household Income and Expenditure Survey is a representative survey of Statistics Korea that aims to measure and analyze national income and consumption levels and their changes by assessing the current state of household balances. Recently, the break in these time series caused by the large-scale reorganization of the survey methods in 2017 and 2019 has become an issue. In this study, we model the characteristics of the Household Income and Expenditure Survey time series up to 2016 and use the models to compute forecasts for linking the expenditure series in 2017 and 2018. In order to reflect the characteristics of all expenditure item series evenly and to reduce the impact of any single forecast model, we combine a total of eight models, including regression models, time series models, and machine learning techniques. A noteworthy aspect of this study is that it improves the forecasts by using the optimal combination technique, which reflects the hierarchical structure of the survey exactly, without the loss of information inherent in top-down or bottom-up methods. Applying the proposed method to forecast the expenditure series from 2017 to 2019 contributed to restoring the time series linkage and improved the forecasts. In addition, we confirmed that the hierarchical time series forecasts produced by the optimal combination method bring the linkage results closer to the actual survey series.
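For intuition, the optimal-combination (OLS reconciliation) step can be sketched on a toy two-level hierarchy where total expenditure = item A + item B; the base forecasts below are made-up numbers, and the real survey hierarchy is far larger.

```python
# Sketch: OLS optimal combination y_tilde = S (S'S)^{-1} S' y_hat, which
# adjusts incoherent base forecasts so aggregates equal the sum of their parts.
import numpy as np

# summing matrix S: rows = [total, A, B], columns = bottom series [A, B]
S = np.array([[1, 1],
              [1, 0],
              [0, 1]], dtype=float)

y_hat = np.array([105.0, 58.0, 43.0])   # incoherent base forecasts (58+43 != 105)

P = np.linalg.solve(S.T @ S, S.T)       # (S'S)^{-1} S'
y_tilde = S @ (P @ y_hat)
print(y_tilde)                          # reconciled: total now equals A + B
```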