• Title/Summary/Keyword: Software Learning


A Quantitative Evaluation Method of the Quality of Natural Language Sentences based on Genetic Algorithm (유전자 알고리즘에 기반한 자연언어 문장의 정량적 질 평가 방법)

  • Yang, Seung-Hyeon;Kim, Yeong-Seom
    • Journal of KIISE: Software and Applications / v.26 no.11 / pp.1372-1380 / 1999
  • This paper describes a method of building a quantitative measure of the quality of natural language sentences, particularly those produced by document revision systems. Evaluating the quality of natural language sentences is intrinsically subjective, so the central question is whether quality can be measured objectively. To address this problem of objective measurability, a genetic algorithm, an evolutionary learning method, is employed. The underlying idea of this approach is that building the quality measure amounts to constructing a formula whose results are as close as possible to the qualitative judgments made by humans. To do this, the measurability problem is reduced to an optimization problem over the values of simple linguistic features found in sentences, and the reduced problem is solved effectively by the genetic algorithm. Experimental results show that the optimization satisfied 99.84% and 99.88% of the given objectives for the training and validation samples, respectively, which means the method is quite effective for constructing a quantitative measure of sentence quality. An actual evaluation of a revision system shows that the measure is useful for quantifying sentence quality. Another important contribution of this measure is that it provides objective performance-evaluation data for natural language systems, on the basis of which end users and developers can make decisions that fit their own needs.
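
As a rough illustration of the evolutionary approach described in this abstract, the sketch below evolves weights for a linear scoring formula over simple sentence features so that the scores agree with human good/bad judgments. The feature names, the fitness definition, and all parameters are illustrative assumptions, not the authors' actual formulation.

```python
import random

# Illustrative linguistic features; the paper's actual feature set is not specified here.
FEATURES = ["sentence_length", "clause_count", "word_repetition", "connective_count"]

def score(weights, feature_values):
    """Quantitative quality score: weighted sum of linguistic feature values."""
    return sum(w * feature_values[name] for w, name in zip(weights, FEATURES))

def fitness(weights, samples):
    """Fraction of samples whose score agrees with the human judgment (1 = good, 0 = bad)."""
    hits = 0
    for feature_values, human_label in samples:
        predicted = 1 if score(weights, feature_values) >= 0.0 else 0
        hits += int(predicted == human_label)
    return hits / len(samples)

def evolve(samples, pop_size=50, generations=200, mutation_rate=0.1):
    """Genetic algorithm: truncation selection, one-point crossover, Gaussian mutation."""
    population = [[random.uniform(-1, 1) for _ in FEATURES] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda w: fitness(w, samples), reverse=True)
        parents = population[: pop_size // 2]            # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(FEATURES))     # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation_rate:          # occasional Gaussian mutation
                child[random.randrange(len(FEATURES))] += random.gauss(0, 0.3)
            children.append(child)
        population = parents + children
    return max(population, key=lambda w: fitness(w, samples))
```

Here `samples` would be a list of `(feature_values, human_label)` pairs, where `feature_values` maps each feature name to a number; the returned weight vector plays the role of the derived quantitative quality formula.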

Design of an Arm Gesture Recognition System Using Feature Transformation and Hidden Markov Models (특징 변환과 은닉 마코프 모델을 이용한 팔 제스처 인식 시스템의 설계)

  • Heo, Se-Kyeong;Shin, Ye-Seul;Kim, Hye-Suk;Kim, In-Cheol
    • KIPS Transactions on Software and Data Engineering / v.2 no.10 / pp.723-730 / 2013
  • This paper presents the design of an arm gesture recognition system using a Kinect sensor. A variety of methods have been proposed for gesture recognition, ranging from Dynamic Time Warping (DTW) to Hidden Markov Models (HMMs). Our system learns a unique HMM for each arm gesture from a set of sequential skeleton data. Whenever the same gesture is performed, the trajectory of each joint captured by the Kinect sensor may differ greatly from previous ones, depending on the length and orientation of the subject's arm. To obtain robust performance independent of these conditions, the proposed system performs a feature transformation in which the feature vectors of joint positions are converted into vectors of angles between joints. To improve the computational efficiency of learning and using HMMs, our system also performs k-means clustering to turn the high-dimensional real-valued observation vectors into one-dimensional integer sequences that serve as inputs for discrete HMMs. This dimension reduction and discretization help the system use HMMs efficiently to recognize gestures in real-time environments. Finally, we demonstrate the recognition performance of our system through experiments on two different datasets.
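
The two preprocessing steps in this abstract, the position-to-angle feature transformation and the k-means discretization for discrete HMMs, might look roughly like the sketch below. The joint names, the choice of angles, and the number of symbols are assumptions for illustration, not the paper's exact design.

```python
import numpy as np
from sklearn.cluster import KMeans

def joint_angle(a, b, c):
    """Angle (radians) at joint b formed by the segments b->a and b->c."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def to_angle_features(frame):
    """frame: dict of joint name -> 3D position (illustrative joint names)."""
    return np.array([
        joint_angle(frame["shoulder"], frame["elbow"], frame["wrist"]),
        joint_angle(frame["spine"], frame["shoulder"], frame["elbow"]),
    ])

def discretize(sequences, n_symbols=16):
    """Fit k-means on all angle-feature frames and map each frame to its cluster index."""
    all_frames = np.vstack([np.vstack(seq) for seq in sequences])
    km = KMeans(n_clusters=n_symbols, n_init=10, random_state=0).fit(all_frames)
    return [km.predict(np.vstack(seq)) for seq in sequences], km

# The resulting integer sequences can then train one discrete HMM per gesture class,
# and a new sequence is labeled by the HMM that assigns it the highest likelihood.
```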

The Effect of the Integrative Education Using a 3D Printer on the Computational Thinking Ability of Elementary School Students (3D프린터를 활용한 융합교육이 초등학생의 컴퓨팅 사고력에 미치는 영향)

  • Lim, Donghun;Kim, Taeyoung
    • Journal of The Korean Association of Information Education / v.23 no.5 / pp.469-480 / 2019
  • One of the goals of the new 2015 revised curriculum is to cultivate the creativity of students who will live in the era of the Fourth Industrial Revolution, so that they can create new things through diverse ideas and challenges built on basic learning skills. Accordingly, the convergent problem-solving ability to process and utilize knowledge and information from various areas is becoming important for solving given problems rationally. Therefore, in this study, we designed an integrative education program using a 3D printer based on Tinkercad modeling and applied it in class to investigate its effect on the computational thinking ability of elementary school students. To verify the study, two classes of 25 sixth-grade elementary school students were divided into an experimental group and a control group. The experimental group received 12 sessions of the convergence education program using a 3D printer over about three months, while the control group received the same amount of the general curriculum. After that, t-tests were carried out on the pre- and post-test results to measure the effect on computational thinking ability. After the program, the experimental group showed a statistically significant improvement in computational thinking ability, whereas the control group showed no statistically significant difference. The results show that convergence education using Tinkercad modeling-based 3D printing has a positive effect on the computational thinking ability of elementary school students.
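
For readers unfamiliar with the pre/post comparison used in this study, a minimal sketch of a paired t-test on hypothetical pre- and post-test scores (not the study's data) is shown below.

```python
from scipy import stats

pre_scores  = [21, 18, 25, 19, 22, 20, 23, 17, 24, 21]   # illustrative pre-test scores
post_scores = [26, 22, 27, 24, 25, 23, 28, 21, 27, 26]   # illustrative post-test scores

# Paired t-test on the same students measured before and after the program.
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# A p-value below 0.05 would indicate a statistically significant improvement.
```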

Seq2Seq model-based Prognostics and Health Management of Robot Arm (Seq2Seq 모델 기반의 로봇팔 고장예지 기술)

  • Lee, Yeong-Hyeon;Kim, Kyung-Jun;Lee, Seung-Ik;Kim, Dong-Ju
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.12 no.3 / pp.242-250 / 2019
  • In this paper, we propose a method to predict failures of an industrial robot using a Seq2Seq (Sequence-to-Sequence) model, an artificial neural network model for transforming time-series data. The proposed method uses joint current and joint angle data, which the robot can measure by itself, without any additional sensor for fault diagnosis. After preprocessing the measured data, the Seq2Seq model was trained to convert current into angle. The abnormal degree used for fault diagnosis is the RMSE (Root Mean Squared Error) between the predicted angle and the actual angle over a unit time window. The proposed method was evaluated using test data measured under both normal and defective conditions of the robot. When the abnormal degree exceeded a threshold, the window was classified as a fault, and the fault-diagnosis accuracy in the experiment was 96.67%. The proposed method has the merit of performing fault prediction without an additional sensor, and the experiment confirmed that high diagnostic performance and efficiency can be achieved without deep expert knowledge of the robot.
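
The fault-detection step described above (RMSE between predicted and measured angles over a unit time window, compared against a threshold) could be sketched as below; `predict_angles` stands in for the trained Seq2Seq model and the threshold value is an illustrative assumption.

```python
import numpy as np

THRESHOLD = 0.05  # illustrative value; in practice chosen from normal-condition data

def abnormal_degree(predicted_angles, measured_angles):
    """RMSE between predicted and measured joint angles over one window."""
    err = np.asarray(predicted_angles) - np.asarray(measured_angles)
    return float(np.sqrt(np.mean(err ** 2)))

def diagnose(current_windows, angle_windows, predict_angles):
    """Flag each window as a fault if its abnormal degree exceeds the threshold.

    predict_angles is a placeholder for the trained Seq2Seq current-to-angle model.
    """
    flags = []
    for current, angle in zip(current_windows, angle_windows):
        degree = abnormal_degree(predict_angles(current), angle)
        flags.append(degree > THRESHOLD)
    return flags
```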

Outlier Detection By Clustering-Based Ensemble Model Construction (클러스터링 기반 앙상블 모델 구성을 이용한 이상치 탐지)

  • Park, Cheong Hee;Kim, Taegong;Kim, Jiil;Choi, Semok;Lee, Gyeong-Hoon
    • KIPS Transactions on Software and Data Engineering / v.7 no.11 / pp.435-442 / 2018
  • Outlier detection means detecting data samples that deviate significantly from the distribution of normal data. Most outlier detection methods compute an outlier score indicating how far a data sample is from the normal state and declare a sample an outlier when its score exceeds a given threshold. However, since the range of outlier scores differs across datasets and outliers occur at a much smaller ratio than normal data, it is very difficult to determine the threshold for an outlier score. Further, in practice it is not easy to acquire data containing a sufficient number of outliers for learning. In this paper, we propose a clustering-based outlier detection method that constructs a model representing the normal data region using only normal data and performs binary classification of new data samples into outliers and normal data. Then, by dividing the given normal data into chunks and constructing a clustering model for each chunk, we extend the method to an ensemble that combines the decisions of the individual models and apply it to streaming data with dynamic changes. Experimental results on real and artificial data show the high performance of the proposed method.
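
A minimal sketch of the chunk-wise clustering ensemble idea is given below, under the assumption that each per-chunk model flags a sample whose distance to its nearest centroid exceeds a quantile-based threshold and that the ensemble combines the flags by majority vote; the paper's exact scoring and combination rules may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_ensemble(normal_data, n_chunks=5, n_clusters=3, quantile=0.95):
    """Split normal data into chunks and fit one k-means 'normal region' model per chunk."""
    models = []
    for chunk in np.array_split(np.asarray(normal_data), n_chunks):
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(chunk)
        dists = np.min(km.transform(chunk), axis=1)     # distance to nearest centroid
        threshold = np.quantile(dists, quantile)        # per-model normal-region boundary
        models.append((km, threshold))
    return models

def is_outlier(ensemble, x):
    """Majority vote of the per-chunk models for a single sample x."""
    x = np.asarray(x).reshape(1, -1)
    votes = [np.min(km.transform(x)) > thr for km, thr in ensemble]
    return sum(votes) > len(votes) / 2
```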

Spatial Conservation Prioritization Considering Development Impacts and Habitat Suitability of Endangered Species (개발영향과 멸종위기종의 서식적합성을 고려한 보전 우선순위 선정)

  • Mo, Yongwon
    • Korean Journal of Environment and Ecology / v.35 no.2 / pp.193-203 / 2021
  • As the number of endangered species gradually increases due to human land development, it is essential to proactively secure sufficient protected areas (PAs). Therefore, this study identified priority conservation areas for selecting candidate PAs while considering the impact of land development. We determined conservation priorities by analyzing four scenarios based on existing conservation areas and reflecting the development impact, using MARXAN, a decision-support software for conservation planning. The development impact was derived from the developed-area ratio, population density, road network, and traffic volume. The conservation areas of endangered species were derived from occurrence-point data for birds, mammals, and herptiles in the 3rd National Ecosystem Survey. These two factors were used as input data to map conservation priority areas with the machine learning-based optimization methodology. The results identified many areas outside the current PAs that are expected to play an important role in conserving endangered species. When the land development impact was considered, the priority conservation areas were found to be fragmented. Even when both the development impact and the existing PAs were considered, priority was higher in areas away from the current PAs, because many road developments had already been completed around them. Therefore, it is necessary to consider areas other than the current PAs to protect endangered species and to seek alternative measures for the fragmented conservation priority areas.

Clustering Performance Analysis of Autoencoder with Skip Connection (스킵연결이 적용된 오토인코더 모델의 클러스터링 성능 분석)

  • Jo, In-su;Kang, Yunhee;Choi, Dong-bin;Park, Young B.
    • KIPS Transactions on Software and Data Engineering / v.9 no.12 / pp.403-410 / 2020
  • In addition to research on noise removal and super-resolution using the data restoration (output) function of autoencoders, research on improving clustering performance using their dimension reduction function is actively being conducted. The clustering function and the data restoration function of an autoencoder have in common that both are improved through the same training. Based on this, this study conducted an experiment to see whether an autoencoder designed for excellent data restoration performance is also superior in clustering performance. A skip connection technique was used to design an autoencoder with excellent data restoration performance. The restoration performance and clustering performance of the autoencoder with a skip connection and the model without one were presented as graphs and visualizations. The restoration performance increased, but the clustering performance decreased. This result indicates that, for neural network models such as autoencoders, a good output does not guarantee that each layer has learned the characteristics of the data well. Lastly, the degradation in clustering performance was compensated for by using both the latent code and the skip connection. This study is a preliminary study toward solving the Hanja Unicode problem by clustering.
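
An illustrative autoencoder with a skip connection (not the paper's exact model) is sketched below in PyTorch: the latent code is what would be clustered, while the skip connection feeds an early encoder feature directly into the decoder to boost reconstruction quality. The layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class SkipAutoencoder(nn.Module):
    def __init__(self, in_dim=784, hidden_dim=128, latent_dim=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Linear(hidden_dim, latent_dim), nn.ReLU())
        self.dec1 = nn.Sequential(nn.Linear(latent_dim, hidden_dim), nn.ReLU())
        self.dec2 = nn.Linear(hidden_dim, in_dim)

    def forward(self, x):
        h = self.enc1(x)        # early encoder features
        z = self.enc2(h)        # latent code (used for clustering)
        d = self.dec1(z) + h    # skip connection: add encoder features to the decoder path
        recon = self.dec2(d)    # reconstruction (used for restoration quality)
        return recon, z

# Training minimizes a reconstruction loss, e.g. nn.MSELoss()(recon, x);
# clustering (e.g. k-means) is then run on the latent codes z, optionally
# combined with the skip features as discussed in the abstract.
```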

Development of Noise and AI-based Pavement Condition Rating Evaluation System (소음도·인공지능 기반 포장상태등급 평가시스템 개발)

  • Han, Dae-Seok;Kim, Young-Rok
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.1 / pp.1-8 / 2021
  • This study developed a low-cost, high-efficiency pavement condition monitoring technology to produce the key information required for pavement management. A noise- and artificial intelligence-based monitoring system was devised to compensate for the shortcomings of existing high-end equipment that relies on visual information and high-end sensors. The work proceeded from idea establishment through functional definition, information-flow and architecture design, and system development to on-site field evaluations. As a result, confidence in the high level of the AI-based evaluation was secured. In addition, hardware and software elements and well-organized guidelines for system utilization were developed. The on-site evaluation process confirmed that non-experts could easily and quickly investigate and visualize the data, and the evaluation results can support the management work of road managers. Furthermore, the completeness of the technology could be improved through elements such as techniques for pre-screening external conditions not considered in the AI training, system simplification, and techniques for handling variable driving speeds. This paper presents a new paradigm for pavement monitoring technology, a field that has changed little since the 1960s.

Line-Segment Feature Analysis Algorithm for Handwritten-Digits Data Reduction (필기체 숫자 데이터 차원 감소를 위한 선분 특징 분석 알고리즘)

  • Kim, Chang-Min;Lee, Woo-Beom
    • KIPS Transactions on Software and Data Engineering / v.10 no.4 / pp.125-132 / 2021
  • As the layers of an artificial neural network deepen and the dimension of the input data increases, learning and recognition in the neural network (NN) require a very large number of arithmetic operations at high speed. Thus, this study proposes a dimensionality reduction method for the NN's input data. The proposed Line-segment Feature Analysis (LFA) algorithm applies gradient-based edge detection with median filters to analyze the line-segment features of the objects in an image. From the extracted edge image, feature values corresponding to eight kinds of line segments are calculated using 3×3 or 5×5 detection filters whose coefficients are taken from [0, 1, 2, 4, 8, 16, 32, 64, 128]. Two one-dimensional 256-sized vectors are produced by accumulating identical response values of the features calculated with each detection filter, and the two vectors are added together. Two such LFA256 vectors are merged to produce 512-sized LFA512 data. In a comparative performance evaluation of the proposed LFA algorithm for handwritten-digit recognition, using the PCA technique and the AlexNet model, LFA256 and LFA512 achieved recognition rates of 98.7% and 99%, respectively.
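
One possible reading of the LFA computation is sketched below: after median filtering and gradient-based edge detection, a 3×3 filter with powers-of-two coefficients encodes each edge pixel's neighbourhood as an integer in 0-255, and these codes are accumulated into a 256-bin histogram (an LFA256-style vector). The filter layout, the edge threshold, and the accumulation details are my assumptions, not the authors' exact algorithm.

```python
import numpy as np
from scipy.ndimage import median_filter

# Powers-of-two coefficients for the 8 neighbours of a 3x3 window (centre = 0).
PATTERN_FILTER = np.array([[  1,   2,   4],
                           [  8,   0,  16],
                           [ 32,  64, 128]])

def lfa256(image, edge_threshold=30):
    """Compute a 256-bin line-segment feature histogram from a grayscale image."""
    smoothed = median_filter(image.astype(float), size=3)      # noise suppression
    gy, gx = np.gradient(smoothed)                              # simple gradient-based edges
    edges = (np.hypot(gx, gy) > edge_threshold).astype(int)     # binary edge map

    hist = np.zeros(256, dtype=int)
    h, w = edges.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if edges[y, x]:
                code = int(np.sum(edges[y-1:y+2, x-1:x+2] * PATTERN_FILTER))
                hist[code] += 1                                  # accumulate identical responses
    return hist
```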

Comparison of Artificial Intelligence Multitask Performance using Object Detection and Foreground Image (물체탐색과 전경영상을 이용한 인공지능 멀티태스크 성능 비교)

  • Jeong, Min Hyuk;Kim, Sang-Kyun;Lee, Jin Young;Choo, Hyon-Gon;Lee, HeeKyung;Cheong, Won-Sik
    • Journal of Broadcast Engineering / v.27 no.3 / pp.308-317 / 2022
  • Research is underway to efficiently reduce the size of video data transmitted and stored in the image analysis process using deep learning-based machine vision technology. MPEG (Moving Picture Experts Group) has established a new standardization project called VCM (Video Coding for Machines) and is conducting research on video coding for machines rather than for humans. We study a multitask setup that performs various tasks from a single image input. Instead of running the object detection that should precede each task separately for every task, the proposed pipeline runs it only once and uses the result as the input for each task. In this paper, we propose a pipeline for efficient multitasking and perform comparative experiments on compression efficiency, execution time, and result accuracy to verify its efficiency. In the experiments, the size of the input image decreased by more than 97.5% while the accuracy decreased only slightly, confirming the feasibility of efficient multitasking.
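
The pipeline structure described above, a single detection pass whose foreground result feeds every downstream task, might be organized roughly as in the sketch below; the detector, task models, and foreground construction are placeholders rather than the paper's implementation.

```python
import numpy as np

def build_foreground(frame, detections):
    """Keep only detected regions; zero out the background to shrink the data."""
    foreground = np.zeros_like(frame)
    for x1, y1, x2, y2 in detections:                 # detections: list of bounding boxes
        foreground[y1:y2, x1:x2] = frame[y1:y2, x1:x2]
    return foreground

def run_multitask(frame, detector, task_models):
    """detector and task_models are placeholders for trained networks."""
    detections = detector(frame)                      # single detection pass
    foreground = build_foreground(frame, detections)  # shared compact input
    return {name: model(foreground) for name, model in task_models.items()}
```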