• Title/Summary/Keyword: MachineLearning

The study of Defense Artificial Intelligence and Block-chain Convergence (국방분야 인공지능과 블록체인 융합방안 연구)

  • Kim, Seyong;Kwon, Hyukjin;Choi, Minwoo
    • Journal of Internet Computing and Services / v.21 no.2 / pp.81-90 / 2020
  • The purpose of this study is to examine how block-chain technology can be applied to prevent data forgery and alteration in defense applications of AI (artificial intelligence). AI is a technology that makes predictions from big data by clustering or classifying it with various machine learning methodologies, and military powers including the U.S. have brought the technology close to maturity. If the data behind a data-driven AI is forged or altered, even a flawless processing pipeline becomes a major operational risk, because falsifying or modifying the data can be as easy as a single hack. Unexpected attacks could occur if the data used by weaponized AI were hacked and manipulated by North Korea. Therefore, a technology that prevents data from being falsified or altered is essential for the military use of AI. Data forgery prevention is expected to be achieved by applying block-chain, a technology in which data cannot be damaged unless more than half of the connected computers agree, even if a single computer is hacked, because the data is encrypted with hash functions and stored in a distributed manner.
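
The tamper-evidence property described above can be illustrated with a toy hash chain. The following Python sketch is only a minimal illustration of the general idea, not the paper's implementation; it omits distribution and consensus, and the record contents and field names are assumptions.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents (including the previous hash)."""
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def make_chain(records):
    """Build a toy hash chain: each block stores the hash of the previous block."""
    chain, prev = [], "0" * 64
    for data in records:
        block = {"data": data, "prev_hash": prev}
        prev = block_hash(block)
        chain.append({**block, "hash": prev})
    return chain

def verify(chain) -> bool:
    """Any alteration of an earlier block breaks every later hash link."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        if block_hash({"data": block["data"], "prev_hash": block["prev_hash"]}) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = make_chain(["sensor frame 1", "sensor frame 2", "sensor frame 3"])
print(verify(chain))            # True
chain[1]["data"] = "tampered"   # simulate forgery of one stored record
print(verify(chain))            # False: the tampering is detected
```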

Clustering of Smart Meter Big Data Based on KNIME Analytic Platform (KNIME 분석 플랫폼 기반 스마트 미터 빅 데이터 클러스터링)

  • Kim, Yong-Gil;Moon, Kyung-Il
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.20 no.2 / pp.13-20 / 2020
  • One of the major issues surrounding big data is the availability of massive time-based or telemetry data. The advent of low-cost capture and storage devices has made it possible to collect very detailed time data for further analysis, and these data can be used to gain more knowledge about the underlying system or to predict future events with higher accuracy. In particular, for the many households and businesses with smart meter records, it is very important to define custom-tailored contract offers and to predict future electricity usage, so that electricity companies are protected from power shortages or surpluses. A few groups with common electricity behavior must be identified to make the creation of customized contract offers worthwhile. This study presents the necessary big data transformations and a clustering technique for understanding electricity usage patterns, using open smart meter data and KNIME, an open-source platform for data analytics that provides a user-friendly graphical workbench for the entire analysis process. Although the big data components are not open source, they are available for trial if required. After importing, cleaning, and transforming the smart meter big data, each meter's data can be interpreted in terms of electricity usage behavior through a dynamic time warping method.
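
As a rough illustration of the dynamic time warping step mentioned above, the Python sketch below computes DTW distances between load profiles and feeds the resulting distance matrix to hierarchical clustering. It is a minimal stand-in for the KNIME workflow; the toy profiles and parameters are assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Toy daily load profiles (kWh per hour) standing in for real smart meter records.
profiles = np.array([
    [0.2, 0.2, 0.3, 1.5, 2.0, 0.8],   # evening-peak household
    [0.2, 0.3, 0.3, 1.4, 2.1, 0.7],   # similar household, slightly shifted
    [1.8, 1.9, 1.7, 0.4, 0.3, 0.2],   # morning-peak business
])

n = len(profiles)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = dtw_distance(profiles[i], profiles[j])

# Group meters with similar usage behavior from the pairwise DTW distances.
labels = fcluster(linkage(squareform(dist), method="average"), t=2, criterion="maxclust")
print(labels)  # e.g. [1 1 2]: the two households group together
```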

Current status and future plans of KMTNet microlensing experiments

  • Chung, Sun-Ju;Gould, Andrew;Jung, Youn Kil;Hwang, Kyu-Ha;Ryu, Yoon-Hyun;Shin, In-Gu;Yee, Jennifer C.;Zhu, Wei;Han, Cheongho;Cha, Sang-Mok;Kim, Dong-Jin;Kim, Hyun-Woo;Kim, Seung-Lee;Lee, Chung-Uk;Lee, Yongseok
    • The Bulletin of The Korean Astronomical Society / v.43 no.1 / pp.41.1-41.1 / 2018
  • We introduce the current status and future plans of the Korea Microlensing Telescope Network (KMTNet) microlensing experiments, including the observational strategy, pipeline, event-finder, and collaborations with Spitzer. The KMTNet experiments were initiated in 2015. Since 2016, KMTNet has observed 27 fields, consisting of 6 main fields and 21 subfields. In 2017, we finished the DIA photometry for all 2016 and 2017 data, which makes real-time DIA photometry possible from 2018. The DIA photometric data are used to find events with the KMTNet event-finder, which has been improved relative to the previous version that had already found 857 events in the 4 main fields of 2015. Applying the improved version to all 2016 data yields 2597 events, 265 of which lie in the KMTNet-K2C9 overlapping fields. To increase the detection efficiency of the event-finder, we are working on filtering out false events with a machine-learning method. In 2018, we plan to measure the event detection efficiency of KMTNet by injecting fake events into the pipeline near the image level. Thanks to high-cadence observations, KMTNet has found many interesting events, including exoplanets and brown dwarfs, that were not found by other groups. The masses of these exoplanets and brown dwarfs are measured through collaborations with Spitzer and other groups. In particular, KMTNet has been cooperating closely with Spitzer since 2015 and observes the Spitzer fields; as a result, we could measure the microlens parallaxes for many events. The automated KMTNet PySIS pipeline was developed before the 2017 Spitzer season and played a very important role in selecting Spitzer targets. For the 2018 Spitzer season, we will improve the PySIS pipeline to obtain better photometric results.
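
The false-event filtering mentioned above is, in general terms, a binary classification problem over features of candidate events. The sketch below is only an assumed illustration with a scikit-learn random forest; the features and training data are hypothetical and are not from the KMTNet pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-candidate features, e.g. fit chi^2 improvement, event duration,
# peak magnification, and fraction of outlier data points (columns of X).
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)  # 1 = real event

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# Candidates classified as false events would be dropped before human review.
is_real = clf.predict(X_test)
```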

The Implementable Functions of the CoreNet of a Multi-Valued Single Neuron Network (단층 코어넷 다단입력 인공신경망회로의 함수에 관한 구현가능 연구)

  • Park, Jong Joon
    • Journal of IKEEE / v.18 no.4 / pp.593-602 / 2014
  • One of the purposes of an artificial neural network (ANNet) is to implement the largest possible number of functions with the smallest number of nodes and layers. This paper presents the CoreNet, which has a multi-leveled input value and a multi-leveled output value in a 2-layered ANNet, the basic structure of an ANNet. I suggest an equation for the capacity of a CoreNet with a p-leveled input and a q-leveled output, $a_{p,q}=\frac{1}{2}p(p-1)q^2-\frac{1}{2}(p-2)(3p-1)q+(p-1)(p-2)$. I apply this CoreNet to the simulation model 1(5)-1(6), which has 5 input levels and 6 output levels with no hidden layers. The simulation of this model converges for at most 219 implementable functions when the $\cot(\sqrt{x})$ input-leveling method is used. I also show that a further 27 functions, which diverge in the simulation results, are implementable by calculating the weight values (w, $\theta$) with multi-threshold lines in the weight space. Therefore 246 functions are implementable in the 1(5)-1(6) model, which coincides with the value $a_{5,6}=246$ from the above equation. I also show a method for numbering the implementable functions in the weight space.
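
As a quick check of the capacity equation quoted above, the following snippet evaluates $a_{p,q}$ and reproduces $a_{5,6}=246=219+27$; it only restates the abstract's formula and numbers.

```python
def corenet_capacity(p: int, q: int) -> float:
    """Capacity a_{p,q} of a CoreNet with p input levels and q output levels,
    as given in the abstract."""
    return 0.5 * p * (p - 1) * q**2 - 0.5 * (p - 2) * (3 * p - 1) * q + (p - 1) * (p - 2)

print(corenet_capacity(5, 6))   # 246.0
print(219 + 27)                 # 246, matching the 219 converged + 27 analytic functions
```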

Implementation of a Spam Message Filtering System using Sentence Similarity Measurements (문장유사도 측정 기법을 통한 스팸 필터링 시스템 구현)

  • Ou, SooBin;Lee, Jongwoo
    • KIISE Transactions on Computing Practices / v.23 no.1 / pp.57-64 / 2017
  • Short message service (SMS) is one of the most important communication methods for mobile phone users. However, it is easily exploited for illegal advertising spam, because messages can be sent without any friend registration. Spam message filtering systems that use machine learning have recently been developed, but they have disadvantages such as requiring heavy computation. In this paper, we implement a spam message filtering system that uses the set-based POI search algorithm and sentence similarity measurements without any server. The algorithm judges whether an input message is spam using only its letter composition, with no server-side computing, so spam messages can be filtered even when the text has been intentionally modified. We also add a preprocessing option specific to spam filtering. The experimental results show that our spam message filtering system performs better than the original set-based POI search algorithm. We evaluate the proposed system through extensive simulation; according to the results, it filters text messages with high accuracy, including messages that the three major telecom companies cannot filter.
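
The set-based POI search algorithm itself is not spelled out in the abstract. As a rough, assumed illustration of letter-composition matching, the sketch below scores an incoming message against known spam with a Jaccard similarity over character 2-grams, which still matches when a spammer inserts or swaps a few characters; the spam corpus and threshold are hypothetical.

```python
def char_ngrams(text: str, n: int = 2) -> set:
    """Set of character n-grams, ignoring spaces (letter composition only)."""
    t = text.replace(" ", "")
    return {t[i:i + n] for i in range(len(t) - n + 1)}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

KNOWN_SPAM = ["limited time loan approval call now", "free casino bonus click link"]
THRESHOLD = 0.45  # assumed cut-off; would be tuned on labelled messages

def is_spam(message: str) -> bool:
    grams = char_ngrams(message)
    return any(jaccard(grams, char_ngrams(s)) >= THRESHOLD for s in KNOWN_SPAM)

# An intentionally modified spam message still overlaps heavily in 2-grams.
print(is_spam("l1mited time loan approva1 call now!!"))  # True
print(is_spam("see you at lunch tomorrow"))              # False
```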

Analysis on the Determinants of Land Compensation Cost: The Use of the Construction CALS Data (토지 보상비 결정 요인 분석 - 건설CALS 데이터 중심으로)

  • Lee, Sang-Gyu;Seo, Myoung-Bae;Kim, Jin-Uk
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.10 / pp.461-470 / 2020
  • This study analyzed the determinants of land compensation costs using data from the CALS (Continuous Acquisition & Life-Cycle Support) system, which generates data for the construction process (planning, design, building, management). The analysis used eight variables drawn from related research on land costs: Land Area, Individual Public Land Price, Appraisal & Assessment, Land Category, Use District 1, Terrain Elevation, Terrain Shape, and Road. The variables were analyzed with the machine learning-based XGBoost algorithm, and Individual Public Land Price was identified as the most important variable in determining land cost. A linear multiple regression analysis was then used to verify the determinants of land compensation, with Individual Public Land Price as the dependent variable, Land Area as the numeric independent variable, and Land Category, Use District 1, Terrain Elevation, Terrain Shape, and Road as factor independent variables. This study found that the significant variables were Land Category, Use District 1, and Road.
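
To make the XGBoost step concrete, here is a minimal sketch of fitting a gradient-boosted regressor and ranking feature importances. It assumes a pandas DataFrame with the eight columns named in the abstract, a hypothetical file name, and a hypothetical compensation-cost target column; it is not the actual CALS data or the paper's exact model settings.

```python
import pandas as pd
from xgboost import XGBRegressor

# Assumed CALS-derived records; column names follow the abstract.
df = pd.read_csv("land_compensation.csv")  # hypothetical file
features = ["Land Area", "Individual Public Land Price", "Appraisal & Assessment",
            "Land Category", "Use District 1", "Terrain Elevation", "Terrain Shape", "Road"]

# One-hot encode the factor (categorical) variables before boosting.
X = pd.get_dummies(df[features],
                   columns=["Land Category", "Use District 1", "Terrain Elevation",
                            "Terrain Shape", "Road"])
y = df["Compensation Cost"]  # hypothetical target column

model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.1)
model.fit(X, y)

# Rank features by importance; the abstract reports the land price on top.
importance = pd.Series(model.feature_importances_, index=X.columns).sort_values(ascending=False)
print(importance.head(10))
```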

Studies of Automatic Dental Cavity Detection System as an Auxiliary Tool for Diagnosis of Dental Caries in Digital X-ray Image (디지털 X-선 영상을 통한 치아우식증 진단 보조 시스템으로써 치아 와동 자동 검출 프로그램 연구)

  • Huh, Jangyong;Nam, Haewon;Kim, Juhae;Park, Jiman;Shin, Sukyoung;Lee, Rena
    • Progress in Medical Physics / v.26 no.1 / pp.52-58 / 2015
  • An automated dental cavity detection program was developed for a new-concept intra-oral dental X-ray imaging device, as an auxiliary diagnosis system to help a dentist identify dental caries at an early stage and make an accurate diagnosis. The program rests on two algorithms: an image segmentation method that discriminates a dental cavity from normal tooth tissue, and a computational method that analyzes features of the tooth image and exploits them to detect cavities. In the present study, we first evaluated how accurately the DRLSE (Distance Regularized Level Set Evolution) method extracts the demarcation surrounding a dental cavity. To evaluate the ability of the developed algorithm to detect cavities automatically, 7 tooth phantoms, from incisor to molar, were fabricated with cavities of various shapes, and the cavities in the phantom images were analyzed with the developed algorithm. Except for two cavities whose contours were only partially identified, the contours of 12 cavities were correctly discriminated, which demonstrates the practical feasibility of the automatic dental lesion detection algorithm. However, a more efficient and robust algorithm is required for application to actual dental diagnosis, since the shapes and conditions of dental caries are complicated and differ between individuals. In the future, the automatic dental cavity detection system will be improved by adding pattern recognition or machine learning-based algorithms that can incorporate information on tooth status.
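
DRLSE itself is an iterative level-set scheme and is not reproduced here. As a much simpler stand-in for the "extract the demarcation around a cavity" step, the sketch below thresholds a radiograph and pulls closed contours out with scikit-image; the file name, threshold choice, and size filter are assumptions.

```python
import numpy as np
from skimage import io, filters, measure

# Hypothetical grayscale intra-oral radiograph of a tooth phantom.
image = io.imread("tooth_phantom.png", as_gray=True)

# Cavities appear darker than sound enamel/dentin; Otsu gives a rough split.
threshold = filters.threshold_otsu(image)
cavity_mask = image < threshold

# Extract iso-level contours of the mask; each closed contour is a candidate
# cavity demarcation, analogous to the boundary a level-set method converges to.
contours = measure.find_contours(cavity_mask.astype(float), level=0.5)

# Keep only reasonably large contours to suppress noise specks.
candidates = [c for c in contours if len(c) > 50]
print(f"{len(candidates)} candidate cavity contours found")
for c in candidates:
    print("bounding box:", c[:, 0].min(), c[:, 1].min(), c[:, 0].max(), c[:, 1].max())
```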

Reliable Image-Text Fusion CAPTCHA to Improve User-Friendliness and Efficiency (사용자 편의성과 효율성을 증진하기 위한 신뢰도 높은 이미지-텍스트 융합 CAPTCHA)

  • Moon, Kwang-Ho;Kim, Yoo-Sung
    • The KIPS Transactions:PartC / v.17C no.1 / pp.27-36 / 2010
  • In Web registration pages and online polling applications, CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is used to distinguish human users from automated programs. Text-based CAPTCHAs that present distorted text have been widely used on many popular Web sites, but because advanced optical character recognition techniques can now read the distorted text, their reliability has become low. Image-based CAPTCHAs have been proposed to improve on text-based CAPTCHAs, but they also have drawbacks. First, image-based systems with only a small number of image files in their dictionary are not reliable, since an attacker can learn to recognize the images by repeatedly running machine learning programs. Second, users may feel uncomfortable having to retry the CAPTCHA whenever they fail to enter the correct keyword. Third, some image-based CAPTCHAs incur high communication costs because several image files must be sent for a single CAPTCHA. To solve these problems, this paper proposes a new CAPTCHA based on both image and text: an image and keywords are integrated into one CAPTCHA image that gives the user a hint for the answer keyword. The fused image helps users enter the answer keyword easily, and the system reduces communication costs because only one fused image file is used per CAPTCHA. To improve the reliability of the image-text fusion CAPTCHA, we also propose a method for dynamically building a large image dictionary by gathering a huge number of images from the Internet, with a filtering phase that preserves the correctness of the CAPTCHA images. Our experiments show that the proposed image-text fusion CAPTCHA provides users with more convenience and higher reliability than image-based CAPTCHAs.
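
As a rough illustration of fusing a hint image and keyword text into a single CAPTCHA image, the following Pillow sketch composites a short keyword list onto a base picture so that only one fused file needs to be sent. The file names, keywords, and placement are assumptions, not the paper's actual fusion method.

```python
from PIL import Image, ImageDraw, ImageFont

# Hypothetical base picture drawn from the image dictionary.
base = Image.open("dictionary/cat_on_sofa.jpg").convert("RGB")
draw = ImageDraw.Draw(base)
font = ImageFont.load_default()

# Overlay candidate keywords; the picture itself is the hint that lets a
# human pick the matching answer keyword.
keywords = ["cat", "car", "cap"]
for i, word in enumerate(keywords):
    draw.text((10, 10 + 20 * i), f"{i + 1}. {word}", fill=(255, 255, 0), font=font)

base.save("captcha_fused.png")  # a single fused file is sent per CAPTCHA challenge
```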

Development of CCTV Cooperation Tracking System for Real-Time Crime Monitoring (실시간 범죄 모니터링을 위한 CCTV 협업 추적시스템 개발 연구)

  • Choi, Woo-Chul;Na, Joon-Yeop
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.12 / pp.546-554 / 2019
  • Typically, closed-circuit television (CCTV) monitoring is used after the fact, for example to provide evidence once an incident has occurred, but with streaming video feeds, machine learning, and advanced image recognition techniques, the technology can be extended to respond to crimes or reports of missing persons in real time. The multi-CCTV cooperation technique developed in this study is a program model that extracts similarity information about a suspect (or moving object) from the CCTV camera at one location and passes it on so that a monitoring agent can keep tracking the selected suspect or object on another CCTV camera when it moves out of the first camera's range. To improve the operating efficiency of local-government CCTV control centers, we describe the partial automation of a CCTV control system that currently relies on monitoring by human agents. We envisage an integrated crime prevention service that incorporates the cooperative CCTV network suggested in this study and that citizens can experience directly, for example through precise real-time localization of an individual and a crime prevention service linked to smartphones and/or crime prevention and safety information.
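
The cross-camera hand-off described above ultimately reduces to comparing an appearance descriptor of the selected target against candidates seen by the next camera. The sketch below is a minimal, assumed illustration using cosine similarity over precomputed feature vectors; the embedding source (e.g. a person re-identification network), the vectors, and the threshold are hypothetical.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Appearance embedding of the suspect selected on camera A
# (in practice produced by a re-identification model on the streaming feed).
target = np.array([0.8, 0.1, 0.6, 0.2])

# Embeddings of people currently detected by neighbouring camera B.
candidates = {
    "track_01": np.array([0.1, 0.9, 0.2, 0.7]),
    "track_02": np.array([0.7, 0.2, 0.5, 0.3]),   # visually similar to the target
    "track_03": np.array([0.4, 0.4, 0.4, 0.4]),
}

MATCH_THRESHOLD = 0.9  # assumed; would be tuned on labelled hand-off examples
scores = {tid: cosine_similarity(target, emb) for tid, emb in candidates.items()}
best_id, best_score = max(scores.items(), key=lambda kv: kv[1])

if best_score >= MATCH_THRESHOLD:
    print(f"hand off tracking to camera B, {best_id} (similarity {best_score:.2f})")
else:
    print("no confident match; keep alerting the human monitoring agent")
```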

Development of Artificial Neural Network Model for Estimation of Cable Tension of Cable-Stayed Bridge (사장교 케이블의 장력 추정을 위한 인공신경망 모델 개발)

  • Kim, Ki-Jung;Park, Yoo-Sin;Park, Sung-Woo
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.3 / pp.414-419 / 2020
  • An artificial intelligence-based cable tension estimation model was developed to expand the utilization of data obtained from the cable accelerometers of cable-stayed bridges. The model combines the vibration method of tension estimation with an artificial neural network (ANN) algorithm for selecting the natural frequency. The ANN training data were composed by converting the cable acceleration data into the frequency domain, and the network was trained on the patterned characteristics of the natural frequencies. When building the training data, frequencies with various amplitudes can be included to represent spectra of multiple shapes and thereby improve the selection performance for natural frequencies. The performance of the model was assessed against reference tensions estimated by an expert. In the verification, using 139 spectra obtained from the cable accelerometer as input, the selected natural frequencies were close to the reference values, and the cable tension estimated from the natural frequency was 96.4% of the reference.
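
The vibration method underlying the model relates measured natural frequencies to cable tension. The sketch below uses the common taut-string approximation T = 4 m L^2 (f_n / n)^2 together with an FFT peak pick as a minimal, assumed illustration: the cable properties, sampling rate, and signal are hypothetical, and the paper's ANN-based frequency selection is replaced by a simple peak search.

```python
import numpy as np

# Hypothetical cable properties and a synthetic acceleration record.
m = 60.0        # mass per unit length (kg/m)
L = 100.0       # cable length (m)
fs = 100.0      # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
f1_true = 1.2   # first natural frequency (Hz) used to synthesise the signal
accel = np.sin(2 * np.pi * f1_true * t) + 0.3 * np.random.randn(t.size)

# Convert the acceleration record to the frequency domain.
spectrum = np.abs(np.fft.rfft(accel))
freqs = np.fft.rfftfreq(accel.size, d=1 / fs)

# Naive stand-in for the paper's ANN-based selection: pick the dominant peak
# (the DC bin is excluded).
f1 = freqs[1:][np.argmax(spectrum[1:])]

# Taut-string approximation: T = 4 * m * L^2 * (f_n / n)^2 with n = 1.
tension = 4.0 * m * L**2 * (f1 / 1) ** 2
print(f"selected natural frequency: {f1:.2f} Hz, estimated tension: {tension / 1000:.0f} kN")
```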