• Title/Summary/Keyword: Algorithm of problem-solving


Analyzing and Solving GuessWhat?! (GuessWhat?! 문제에 대한 분석과 파훼)

  • Lee, Sang-Woo; Han, Cheolho; Heo, Yujung; Kang, Wooyoung; Jun, Jaehyun; Zhang, Byoung-Tak
    • Journal of KIISE, v.45 no.1, pp.30-35, 2018
  • GuessWhat?! is a game in which two machine players, a questioner and an answerer, exchange yes-no-N/A questions and answers about an object in an image that is known only to the answerer, after which the questioner must identify that object. GuessWhat?! has received much attention in the fields of deep learning and artificial intelligence as a testbed for cutting-edge research on the interplay of computer vision and dialogue systems. In this study, we discuss the objective function and characteristics of the GuessWhat?! game. In addition, we propose a simple rule-based solver for GuessWhat?!. Although a human needs four or five questions on average to solve this problem, the proposed method outperforms state-of-the-art deep learning methods using only two questions, and exceeds human performance using five questions.
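
To make the rule-based idea concrete, here is a minimal sketch of a yes/no questioner that repeatedly splits the remaining candidate objects with simple attribute questions. The attribute names, the oracle interface, and the toy objects are hypothetical illustrations, not the questioning rules actually used in the paper.

```python
# A minimal sketch of a rule-based questioner: candidate objects are repeatedly
# split by simple attribute questions (hypothetical attributes, not the paper's rules).

def rule_based_guesser(objects, oracle):
    """objects: list of dicts with boolean attribute keys; oracle(question) -> bool."""
    candidates = list(objects)
    questions_asked = 0
    attributes = ["is_person", "left_half", "top_half", "is_large"]
    for attr in attributes:
        if len(candidates) <= 1:
            break
        yes_set = [o for o in candidates if o[attr]]
        no_set = [o for o in candidates if not o[attr]]
        if not yes_set or not no_set:
            continue  # question would not discriminate among the candidates
        answer = oracle(attr)
        candidates = yes_set if answer else no_set
        questions_asked += 1
    return (candidates[0] if candidates else None), questions_asked


# Toy usage: the hidden object is the large person in the left half of the image.
objects = [
    {"is_person": True,  "left_half": True,  "top_half": False, "is_large": True},
    {"is_person": True,  "left_half": False, "top_half": True,  "is_large": False},
    {"is_person": False, "left_half": True,  "top_half": True,  "is_large": False},
]
hidden = objects[0]
guess, n_questions = rule_based_guesser(objects, oracle=lambda attr: hidden[attr])
print(guess is hidden, n_questions)
```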

An Optimal Investment Planning Model for Improving the Reliability of Layered Air Defense System based on a Network Model (다층 대공방어 체계의 신뢰도 향상을 위한 네트워크 모델 기반의 최적 투자 계획 모델)

  • Lee, Jinho; Chung, Suk-Moon
    • Journal of the Korea Society for Simulation, v.26 no.3, pp.105-113, 2017
  • This study considers optimal investment planning for improving survivability against an air threat in a layered air defense system. To establish an optimization model, we first represent the layered air defense system as a network model and then present two optimization models that minimize the probability of failing to counteract an air threat subject to a budget limitation: one treats investment in each node as a binary decision, while the other allows continuous investment in a subset of nodes. The nonlinear objective functions are linearized using the log function, and we suggest a dynamic programming algorithm and linear programming for solving the proposed models. After designing a layered air defense system based on a virtual scenario, we solve the two optimization problems and analyze the corresponding optimal solutions. This demonstrates the need for, and provides an approach to, effective investment planning for the layered air defense system.
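
The log-linearization and budgeted selection described above can be illustrated with a small knapsack-style dynamic program. It assumes, purely for illustration, a serial engagement path whose overall failure probability is the product of node failure probabilities; the node costs and probabilities are invented, and this is not the paper's exact network formulation.

```python
import math

def optimal_investment(nodes, budget):
    """nodes: list of (cost, p_fail_base, p_fail_invested) with integer costs.
    Returns the minimal overall failure probability and the invested node indices,
    assuming a serial path (overall failure prob = product over nodes)."""
    base = sum(math.log(p0) for _, p0, _ in nodes)      # log-prob if we invest nowhere
    dp = [(base, frozenset())] * (budget + 1)           # dp[b]: best (log prob, chosen) within budget b
    for i, (cost, p0, p1) in enumerate(nodes):
        gain = math.log(p0) - math.log(p1)              # log-reduction from investing in node i
        new_dp = list(dp)
        for b in range(cost, budget + 1):
            cand = dp[b - cost][0] - gain
            if cand < new_dp[b][0]:
                new_dp[b] = (cand, dp[b - cost][1] | {i})
        dp = new_dp
    log_p, chosen = dp[budget]
    return math.exp(log_p), sorted(chosen)

# Toy usage: three defense nodes with (cost, baseline failure prob, improved prob).
nodes = [(3, 0.30, 0.10), (2, 0.25, 0.12), (4, 0.40, 0.15)]
p_fail, invested = optimal_investment(nodes, budget=6)
print(round(p_fail, 4), invested)
```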

A Study on Estimation of Regularizing Parameters for Energy-Based Stereo Matching (에너지 기반 스테레오 매칭에서의 정합 파라미터 추정에 관한 연구)

  • Hahn, Hee-Il; Ryu, Dae-Hyun
    • Journal of Korea Multimedia Society, v.14 no.2, pp.288-294, 2011
  • In this paper we define probability models for determining the disparity map of a given stereo image pair and derive methods for solving the problem, which is shown to be equivalent to energy-based stereo matching. Under the assumption that the difference between a pixel in the left image and its corresponding pixel in the right image, and the difference between the disparities of neighboring pixels, are both exponentially distributed, a recursive approach for estimating the MRF regularizing parameter is proposed. Starting from a disparity map computed with random initial parameters, the proposed method alternates between estimating the parameters from the intermediate disparity map and re-estimating the disparity map with the estimated parameters. Our algorithm is applied to stereo matching algorithms based on dynamic programming and belief propagation to verify its operation and measure its performance.
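
As a rough illustration of the alternating scheme, the sketch below pairs an L1 (exponential-model) matching cost with a per-scanline dynamic-programming stand-in for the energy minimizer, and re-estimates the regularization weight as the ratio of the mean data residual to the mean disparity difference of neighboring pixels. The cost volume, the optimizer, and the toy image pair are simplifying assumptions, not the paper's implementation.

```python
import numpy as np

def cost_volume(left, right, max_disp):
    """L1 matching cost for a rectified grayscale pair (border filled with a large cost)."""
    h, w = left.shape
    cost = np.full((h, w, max_disp + 1), 255.0)
    for d in range(max_disp + 1):
        cost[:, d:, d] = np.abs(left[:, d:] - right[:, : w - d])
    return cost

def scanline_dp(cost, lam):
    """Per-scanline DP with L1 smoothness penalty lam*|d - d_prev| (stand-in for the energy minimizer)."""
    h, w, D = cost.shape
    disp = np.zeros((h, w), dtype=int)
    d_range = np.arange(D)
    pen = lam * np.abs(d_range[:, None] - d_range[None, :])   # [new_d, prev_d]
    for y in range(h):
        acc = cost[y, 0].copy()
        back = np.zeros((w, D), dtype=int)
        for x in range(1, w):
            trans = acc[None, :] + pen
            back[x] = trans.argmin(axis=1)
            acc = cost[y, x] + trans.min(axis=1)
        d = int(acc.argmin())
        for x in range(w - 1, -1, -1):
            disp[y, x] = d
            d = back[x, d]
    return disp

def alternate_estimation(left, right, max_disp=16, n_iter=5, lam=1.0):
    """Alternate between solving for the disparity map and re-estimating the
    regularization weight as the ratio of the two exponential means."""
    cost = cost_volume(left.astype(float), right.astype(float), max_disp)
    h, w = left.shape
    yy, xx = np.mgrid[0:h, 0:w]
    for _ in range(n_iter):
        disp = scanline_dp(cost, lam)
        data_res = cost[yy, xx, disp].mean()                       # mean |I_L - I_R| at chosen disparity
        smooth_res = np.abs(np.diff(disp, axis=1)).mean() + 1e-6   # mean |d_x - d_{x-1}|
        lam = data_res / smooth_res                                # rate ratio = mean_data / mean_smooth
    return disp, lam

# Toy usage: a random texture shifted by a constant disparity of 4 pixels.
rng = np.random.default_rng(0)
right = rng.integers(0, 255, size=(20, 60)).astype(float)
left = np.roll(right, 4, axis=1)
disp, lam = alternate_estimation(left, right, max_disp=8)
print(int(np.median(disp)), round(lam, 2))
```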

An Intelligence Support System Research on KTX Rolling Stock Failure Using Case-based Reasoning and Text Mining (사례기반추론과 텍스트마이닝 기법을 활용한 KTX 차량고장 지능형 조치지원시스템 연구)

  • Lee, Hyung Il; Kim, Jong Woo
    • Journal of Intelligence and Information Systems, v.26 no.1, pp.47-73, 2020
  • A KTX rolling stock is a system consisting of many machines, electrical devices, and components, and its maintenance requires considerable expertise and experience. In the event of a failure, the knowledge and experience of the maintainer determine how quickly and how well the problem is solved, and therefore how available the vehicle remains. Although problem solving is generally based on fault manuals, experienced and skilled professionals can diagnose faults quickly and take action by applying personal know-how. Because this knowledge exists in tacit form, it is difficult to pass on completely to successors, and previous studies have developed case-based rolling stock expert systems to turn it into a data-driven resource. Nonetheless, research on the KTX rolling stock most commonly used on main lines, and on systems that extract the meaning of text and search for similar cases, is still lacking. Therefore, this study proposes an intelligent support system that provides action guides for newly occurring failures by using the know-how of rolling stock maintenance experts as problem-solving examples. For this purpose, a case base was constructed from the rolling stock failure data generated from 2015 to 2017, and an integrated dictionary containing the essential terminology and failure codes was built separately to reflect the specialized vocabulary of the railway rolling stock sector. Given the deployed case base, a new failure is matched against past cases, the three most similar failure cases are retrieved, and the actual actions taken in those cases are proposed as a diagnostic guide. To overcome the limitations of keyword-matching case retrieval used in earlier case-based expert system studies on rolling stock failures, this study applied several dimensionality reduction techniques that account for the semantic relationships among failure descriptions when computing similarity, and verified their usefulness through experiments. Three algorithms, Non-negative Matrix Factorization (NMF), Latent Semantic Analysis (LSA), and Doc2Vec, were applied to extract the characteristics of each failure, and similar cases were retrieved by measuring the cosine distance between the resulting vectors. Precision, recall, and the F-measure were used to assess the quality of the proposed actions. To compare the dimensionality reduction techniques, they were evaluated against an algorithm that randomly extracts failure cases with identical failure codes and an algorithm that applies cosine similarity directly to word vectors, and analysis of variance confirmed that the performance differences among the five algorithms were statistically significant. In addition, the effect of the number of dimensions used for dimensionality reduction was examined to derive settings suitable for practical application. The analysis showed that direct cosine similarity outperformed the NMF- and LSA-based reductions, and that the Doc2Vec-based algorithm performed best. Furthermore, for the dimensionality reduction techniques, performance improved as the number of dimensions increased up to an appropriate level. This study confirms the usefulness of methods for extracting the characteristics of data and converting unstructured text when applying case-based reasoning in the specialized KTX rolling stock domain, where most case attributes are text. Text mining is being studied for use in many areas, but studies using such text data are still scarce in environments with many specialized terms and limited access to data, such as the one addressed here. In this regard, it is significant that this study is the first to present an intelligent diagnostic system that recommends actions by retrieving cases with text mining techniques that extract failure characteristics, complementing keyword-based case search. The study is expected to serve as a basis for developing diagnostic systems that can be used immediately in the field.
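
The retrieval step can be sketched with off-the-shelf components: TF-IDF features, a dimensionality reduction (LSA or NMF here; the Doc2Vec variant is omitted), and cosine similarity to return the three most similar cases together with their recorded actions. The failure reports and actions below are invented toy data, not the KTX case base or its integrated dictionary.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF, TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy case base: failure descriptions and the actions that resolved them.
case_texts = [
    "brake pressure low during departure, valve leakage suspected",
    "air conditioning unit not cooling, compressor fault code",
    "door does not close, obstacle sensor triggered repeatedly",
    "traction motor over-temperature alarm on trailing unit",
]
case_actions = [
    "replace brake valve seal and retest pressure",
    "reset compressor and check refrigerant level",
    "clean door sensor and recalibrate closing force",
    "inspect motor cooling fan and ventilation duct",
]

def build_retriever(texts, actions, method="lsa", n_components=2):
    """Embed the case base with TF-IDF plus LSA or NMF; return a top-k retriever."""
    tfidf = TfidfVectorizer()
    X = tfidf.fit_transform(texts)
    reducer = (TruncatedSVD(n_components=n_components, random_state=0) if method == "lsa"
               else NMF(n_components=n_components, random_state=0, max_iter=500))
    Z = reducer.fit_transform(X)

    def top_k(query, k=3):
        q = reducer.transform(tfidf.transform([query]))
        sims = cosine_similarity(q, Z)[0]
        order = sims.argsort()[::-1][:k]
        return [(texts[i], actions[i], float(sims[i])) for i in order]

    return top_k

retrieve = build_retriever(case_texts, case_actions, method="lsa")
for text, action, sim in retrieve("brake valve leaking and pressure drops on departure"):
    print(f"{sim:.2f}  {text}  ->  {action}")
```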

Performance Improvement of Simple Bacteria Cooperative Optimization through Rank-based Perturbation (등급기준 교란을 통한 단순 박테리아협동 최적화의 성능향상)

  • Jung, Sung-Hoon
    • Journal of the Korea Society of Computer and Information, v.16 no.12, pp.23-31, 2011
  • The simple bacteria cooperative optimization (sBCO) algorithm that we previously developed has shown relatively good performance, but its performance was limited because individuals move only one step at a time. To address this, we earlier proposed assigning a speed to each individual according to its rank, which was confirmed to improve the performance of sBCO to some degree. In addition to the rank-based speeds, in this paper we employ a mutation operation, as used in most existing evolutionary algorithms, to further enhance the performance of sBCO. In the mutation operation, a fixed percentage of poorly performing individuals are mutated within an area proportional to their rank; that is, Gaussian noise with a larger standard deviation is added to individuals with lower fitness. This increases the probability that low-ranked individuals are placed far from their parents, which decreases the probability of falling into local optima and increases the probability of quickly escaping them. Experimental results on four function optimization problems show that sBCO with the mutation operation and individual speeds performs better. When the optimization function is quite complex, however, the performance is not always better; devising a method for this case remains future work.
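
A minimal sketch of the rank-based perturbation in isolation (not the full sBCO algorithm) might look as follows: the worst fraction of the population receives Gaussian noise whose standard deviation grows with the individual's rank from the bottom, so the worst individuals can jump farthest. The fraction and base standard deviation are illustrative parameters.

```python
import numpy as np

def rank_based_perturbation(pop, fitness, worst_fraction=0.3, base_sigma=0.1, rng=None):
    """pop: (n, dim) array; fitness: (n,) array (higher is better).
    Mutates the worst fraction of individuals with rank-dependent Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(pop)
    order = np.argsort(fitness)                 # ascending fitness: worst individuals first
    n_mutate = int(worst_fraction * n)
    mutated = pop.copy()
    for rank_from_worst, idx in enumerate(order[:n_mutate]):
        # the worst-ranked individual gets the largest standard deviation
        sigma = base_sigma * (n_mutate - rank_from_worst)
        mutated[idx] += rng.normal(0.0, sigma, size=pop.shape[1])
    return mutated

# Toy usage on a sphere function (maximize -||x||^2).
rng = np.random.default_rng(0)
pop = rng.uniform(-5, 5, size=(20, 2))
fitness = -np.sum(pop**2, axis=1)
new_pop = rank_based_perturbation(pop, fitness, rng=rng)
print(new_pop.shape)
```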

Development of a Robot Programming Instructional Model based on Cognitive Apprenticeship for the Enhancement of Metacognition (메타인지 발달을 위한 인지적 도제 기반의 로봇 프로그래밍 교수.학습 모형 개발)

  • Yeon, Hyejin; Jo, Miheon
    • Journal of The Korean Association of Information Education, v.18 no.2, pp.225-234, 2014
  • Robot programming allows students to plan an algorithm to solve a task, implement the algorithm, easily confirm the results of the implementation with a robot, and correct errors. Thus, robot programming is a problem-solving process based on reflective thinking and is closely related to students' metacognition. On this point, this research was conducted to develop a robot programming instructional model for the enhancement of students' metacognition. The instructional process of robot programming is divided into five stages ('exploration of learning tasks', 'a teacher's modeling', 'preparation of a plan for task performance along with the visualization of the plan', 'task performance', and 'self-evaluation and self-reinforcement'), and core metacognitive strategies (planning, monitoring, regulating, and evaluating) are suggested for students' activities in each stage. Also, in order to support students' programming activities and use of metacognition, instructional strategies based on cognitive apprenticeship (modeling, coaching, and scaffolding) are suggested in relation to the instructional model. In addition, to support students' metacognitive activities, the model is designed to use self-questioning, and questions that students can use at each stage of the model are presented.

Analysis of Optimal Thinning Prescriptions for a Cryptomeria japonica Stand Using Dynamic Programming (동적계획법 적용에 의한 삼나무 임분의 간벌시업체계 분석)

  • Han, Hee; Kwon, Kibeom; Chung, Hyejean; Seol, Ara; Chung, Joosang
    • Journal of Korean Society of Forest Science, v.104 no.4, pp.649-656, 2015
  • The objective of this study was to analyze optimal thinning regimes for timber or carbon management in Cryptomeria japonica stands of the Hannam Experimental Forest, Korea Forest Research Institute. To solve the problem, the PATH algorithm developed by Paderes and Brodie was used as the decision-making tool, and the individual-tree/distance-free stand growth simulator for the species developed by Kwon et al. was used to predict stand growth under the density control imposed by the thinning regimes and under mortality. The results indicate that timber management for maximum net present value (NPV) requires fewer but more intensive thinnings than carbon management for maximum carbon absorption. Under carbon management, carbon absorption is about 6% greater than under timber management, but NPV is reduced by about 3.2%. On the other hand, intensive forest management with thinning increases net income and carbon absorption by about 60% compared with the do-nothing option.
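
The stage-wise structure of such an analysis can be conveyed with a toy dynamic program over thinning decisions, sketched below. The periods, growth response, price, and discount rate are invented illustrative figures; this is neither the PATH algorithm nor the paper's growth simulator.

```python
# Toy stage-wise DP: the state is (period, standing volume), the decision is the
# thinning intensity, and the objective is maximum net present value (NPV).

PERIODS = 6                      # five-year periods until final harvest (toy horizon)
PRICE = 50.0                     # revenue per unit volume (toy value)
RATE = 0.05                      # annual discount rate (toy value)
INTENSITIES = (0.0, 0.2, 0.4)    # fraction of standing volume removed by a thinning

def discount(period):
    return 1.0 / (1.0 + RATE) ** (5 * period)

def growth_factor(intensity):
    # toy density effect: heavier thinning lets the residual stand grow faster
    return 1.10 + 0.5 * intensity

def best_npv(period, volume):
    """Return (maximum NPV, thinning plan) from this state onward."""
    if period == PERIODS:                              # final harvest: clear-cut
        return PRICE * volume * discount(period), ()
    best_value, best_plan = float("-inf"), ()
    for x in INTENSITIES:
        revenue = PRICE * volume * x * discount(period)
        residual = volume * (1 - x) * growth_factor(x)
        future, plan = best_npv(period + 1, residual)
        if revenue + future > best_value:
            best_value, best_plan = revenue + future, ((period, x),) + plan
    return best_value, best_plan

npv, plan = best_npv(0, volume=100.0)
print(round(npv, 1), [step for step in plan if step[1] > 0])
```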

Electromagnetic Traveltime Tomography with Wavefield Transformation (파동장 변환을 이용한 전자탐사 주시 토모그래피)

  • Lee, Tae-Jong; Suh, Jung-Hee; Shin, Chang-Soo
    • Geophysics and Geophysical Exploration, v.2 no.1, pp.17-25, 1999
  • Traveltime tomography was carried out by transforming frequency-domain electromagnetic data into a wave-like domain. The transform uniquely relates a field satisfying a diffusion equation to an integral of the corresponding wavefield, but the direct transform of frequency-domain magnetic fields to the wavefield domain is an ill-posed problem because the kernel of the integral transform is highly damped. In this study, instead of solving such an unstable problem, it is assumed that the wavefield in the transformed domain can be approximated by a sum of ray series and, for further simplicity, that reflected and refracted energy is weak enough compared with the direct wave to be neglected, so that the first arrival can be approximated by the traveltime of the direct wave only. These assumptions hold only when the conductivity contrast between the background medium and the target anomalous body is low, so the approach applies only to models with low conductivity contrast. To verify the algorithm, the traveltimes calculated by this approach were compared with those of the direct transform method and with exact traveltimes calculated analytically for a homogeneous whole space. The error in the first arrivals picked by this method was smaller than that of the direct transformation method, especially when the number of frequency samples was less than 10 or when the data were noisy. A layered-earth model with varying conductivity contrasts and an inclined dyke model were successfully imaged by applying nonlinear traveltime tomography in 30 iterations within three CPU minutes on an IBM Pentium Pro 200 MHz.
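
The direct-wave approximation can be sketched under the commonly used assumption that the fictitious velocity in the transformed wave-like domain is 1/sqrt(mu*sigma), so the pseudo-traveltime is the integral of sqrt(mu*sigma) along the straight source-receiver ray (consistent with the low-contrast assumption above). The grid, geometry, and sampling below are illustrative, not the paper's tomography code.

```python
import numpy as np

MU0 = 4e-7 * np.pi   # magnetic permeability of free space [H/m]

def direct_traveltime(sigma_grid, cell_size, src, rcv, n_samples=200):
    """Pseudo-traveltime: integral of sqrt(mu0*sigma) along the straight src->rcv ray.
    sigma_grid: 2D conductivity model [S/m], indexed [z, x]; src, rcv: (x, z) in metres."""
    src, rcv = np.asarray(src, float), np.asarray(rcv, float)
    pts = src + np.linspace(0.0, 1.0, n_samples)[:, None] * (rcv - src)
    ix = np.clip((pts[:, 0] / cell_size).astype(int), 0, sigma_grid.shape[1] - 1)
    iz = np.clip((pts[:, 1] / cell_size).astype(int), 0, sigma_grid.shape[0] - 1)
    slowness = np.sqrt(MU0 * sigma_grid[iz, ix])          # pseudo-slowness per sample
    return slowness.mean() * np.linalg.norm(rcv - src)    # midpoint-rule line integral

# Toy usage: a 0.01 S/m background with a more conductive block between two boreholes.
sigma = np.full((50, 50), 0.01)
sigma[20:30, 20:30] = 0.05
t = direct_traveltime(sigma, cell_size=2.0, src=(0.0, 50.0), rcv=(100.0, 50.0))
print(f"pseudo-traveltime along the ray: {t:.4f}")
```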


The cancellation performance of loop-back signal in wireless USN multihop relay node (무선 USN 멀티홉 중계 노드에서 루프백 신호의 제거 성능)

  • Lim, Seung-Gag; Kang, Dae-Soo
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.9 no.4, pp.17-24, 2009
  • This paper deals with the cancellation of the loop-back interference signal that arises when a received 16-QAM signal is relayed over multiple hops in a wireless USN. In a USN environment, information must be exchanged with distant stations through the relay function of intermediate nodes. At a relay node, a loop-back interference signal occurs when the retransmitted signal is fed back into the receiver because the transmit and receive antennas are shared or located very close together, or because of nonlinear devices. This signal degrades the performance of the USN system, which must operate with limited frequency and power resources. To mitigate it, an adaptive signal processing algorithm is applied at the receiver front end of the relay node to cancel the unwanted loop-back interference, which improves both system and multihop performance. For the adaptive processing, we first consider a 16-QAM signal, which has good spectral efficiency, and then use the QR-array RLS algorithm, which has a fairly good convergence property and avoids the finite-word-length problem in hardware implementation. Finally, computer simulations of the learning curve and the received signal constellation confirm better cancellation performance than the conventional RLS.
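
The echo-canceller configuration described above can be sketched with a standard complex RLS filter, used here for brevity in place of the QR-array RLS of the paper: the relay's own retransmitted 16-QAM stream is the reference input, an adaptive filter estimates the loop-back channel, and its output is subtracted from the received signal. The channel taps, noise level, and filter settings are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def qam16(n):
    """Unit-power 16-QAM symbols."""
    levels = np.array([-3, -1, 1, 3])
    return (rng.choice(levels, n) + 1j * rng.choice(levels, n)) / np.sqrt(10)

N, taps, lam = 2000, 4, 0.999
remote = qam16(N)                                    # desired signal from the far node
retx = qam16(N)                                      # relay's own retransmitted signal (known reference)
h_loop = np.array([0.8, -0.3 + 0.2j, 0.1, 0.05j])    # unknown loop-back channel (toy taps)
loopback = np.convolve(retx, h_loop)[:N]
noise = 0.01 * (rng.normal(size=N) + 1j * rng.normal(size=N))
received = remote + loopback + noise

w = np.zeros(taps, complex)
P = np.eye(taps) / 1e-2                              # inverse correlation matrix, delta = 0.01
cleaned = np.zeros(N, complex)
for n in range(taps, N):
    u = retx[n - taps + 1:n + 1][::-1]               # regressor of past reference samples
    pi = P @ u
    k = pi / (lam + np.vdot(u, pi))                  # RLS gain vector
    e = received[n] - np.vdot(w, u)                  # a-priori error = cleaned sample
    w = w + k * np.conj(e)
    P = (P - np.outer(k, np.conj(u)) @ P) / lam
    cleaned[n] = e

residual = cleaned[500:] - remote[500:]              # leftover interference + noise after convergence
print("residual power:", round(float(np.mean(np.abs(residual) ** 2)), 5))
```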


3D Modeling and Inversion of Magnetic Anomalies (자력이상 3차원 모델링 및 역산)

  • Cho, In-Ky; Kang, Hye-Jin; Lee, Keun-Soo; Ko, Kwang-Beom; Kim, Jong-Nam; You, Young-June; Han, Kyeong-Soo; Shin, Hong-Jun
    • Geophysics and Geophysical Exploration, v.16 no.3, pp.119-130, 2013
  • We developed a method for inverting magnetic data to recover 3D susceptibility models. The major difficulties in the inversion of potential-field data are non-uniqueness and the vast computing time. The insufficient number of data relative to the number of inversion blocks intensifies the non-uniqueness problem, and magnetic data have inherently poor depth resolution. To overcome the non-uniqueness problem, we propose a resolution-based model constraint that imposes a large penalty on model parameters with good resolution and a small penalty on model parameters with poor resolution. Using this constraint, poorly resolved model parameters can be effectively recovered. Moreover, the wavelet transform and parallel solving were introduced to reduce the computing time: through the wavelet transform, the large system matrix is converted into a sparse matrix and solved with a parallel linear equation solver, which greatly reduces the computing time of the 3D inversion of magnetic data. The developed inversion algorithm is applied to synthetic data for typical magnetic anomaly models and to real airborne data obtained in the Geumsan area of Korea.
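
A generic linear inversion with sensitivity-based model weighting, sketched below, illustrates the resolution-constraint idea: parameters with large sensitivity (good resolution) receive a larger penalty, so poorly resolved parameters are not driven to zero. The kernel, its depth decay, and the toy model are assumptions, not the paper's 3D magnetic forward operator or its wavelet-compressed parallel solver.

```python
import numpy as np

def weighted_inversion(G, d, beta=1e-2):
    """Solve min ||G m - d||^2 + beta ||W m||^2 with W_ii = norm of the i-th column of G."""
    w = np.linalg.norm(G, axis=0)
    return np.linalg.solve(G.T @ G + beta * np.diag(w ** 2), G.T @ d)

# Toy usage: a kernel whose sensitivity decays with "depth" (column index),
# mimicking the poor depth resolution of magnetic data.
rng = np.random.default_rng(1)
n_data, n_model = 20, 40
depth = np.arange(1, n_model + 1)
G = rng.normal(size=(n_data, n_model)) / depth[None, :] ** 1.5
m_true = np.zeros(n_model)
m_true[25] = 1.0                                     # a deep anomalous block
d = G @ m_true + 0.001 * rng.normal(size=n_data)

m_plain = np.linalg.solve(G.T @ G + 1e-2 * np.eye(n_model), G.T @ d)   # ordinary Tikhonov
m_weighted = weighted_inversion(G, d)                                  # resolution-weighted
print("peak index (plain, resolution-weighted):",
      int(np.argmax(np.abs(m_plain))), int(np.argmax(np.abs(m_weighted))))
```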