• Title/Summary/Keyword: artificial intelligence algorithms


Performance Evaluation of Pixel Clustering Approaches for Automatic Detection of Small Bowel Obstruction from Abdominal Radiographs

  • Kim, Kwang Baek
    • Journal of information and communication convergence engineering
    • /
    • v.20 no.3
    • /
    • pp.153-159
    • /
    • 2022
  • Plain radiographic analysis is the initial imaging modality for suspected small bowel obstruction (SBO). Among the many features that affect the diagnosis of SBO, the presence of gas-filled or fluid-filled small bowel loops is the most salient feature that can be automated by computer vision algorithms. In this study, we compare three frequently applied pixel-clustering algorithms for extracting gas-filled areas without human intervention. In a comparison involving 40 suspected SBO cases, the Possibilistic C-Means and Fuzzy C-Means algorithms exhibited initialization-sensitivity problems and difficulty coping with low intensity contrast, achieving low extraction success rates of 72.5% and 85%, respectively. The Adaptive Resonance Theory 2 (ART2) algorithm proved the most suitable for gas-filled region detection, achieving a 100% success rate on the 40 tested images, largely owing to its dynamic control of the number of clusters.
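The abstract does not describe the implementation; as a rough sketch of why dynamic control of the cluster count matters, the following is a minimal vigilance-based clustering loop in the spirit of ART2 (the function name, vigilance threshold, and sample intensities are illustrative, not taken from the paper):

```python
def vigilance_cluster(intensities, vigilance=30.0):
    """Greedy ART-like clustering of pixel intensities.

    A sample joins the nearest existing cluster if its distance to that
    cluster's mean is within the vigilance threshold; otherwise a new
    cluster is created, so the number of clusters is not fixed in advance.
    """
    centers, members = [], []
    for x in intensities:
        if centers:
            i = min(range(len(centers)), key=lambda k: abs(x - centers[k]))
            if abs(x - centers[i]) <= vigilance:
                members[i].append(x)
                centers[i] = sum(members[i]) / len(members[i])  # update mean
                continue
        centers.append(float(x))
        members.append([x])
    return centers, members

# Bright (gas-filled) vs. dark pixels separate into two clusters
# without specifying the cluster count up front.
centers, members = vigilance_cluster([12, 15, 10, 200, 210, 205, 14])
```

Because the cluster count emerges from the data, this kind of scheme avoids the initialization sensitivity that the abstract reports for the C-Means variants.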

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • Development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. Thanks to recent interest in the technology and research on various algorithms, the field of artificial intelligence has advanced more than ever. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable, processable knowledge constructed from the complex, informal human knowledge and rules of various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical artificial intelligence such as machine learning. More recently, knowledge bases have aimed to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases support intelligent processing in many areas of artificial intelligence, such as the question-answering systems of smart speakers. However, building a useful knowledge base is time-consuming and still requires considerable expert effort. In recent years, much research in knowledge-based artificial intelligence has relied on DBpedia, one of the largest knowledge bases, which aims to extract structured content from the various information of Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but its most useful knowledge comes from Wikipedia infoboxes, user-created summaries of an article's key facts.
This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. By generating knowledge from semi-structured, user-created infobox data, DBpedia can expect high reliability in terms of accuracy. However, since only about 50% of all pages in the Korean Wikipedia contain an infobox, DBpedia is limited in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classifying documents into ontology classes, classifying the sentences appropriate for triple extraction, and selecting values and transforming them into RDF triple structures. The structures of Wikipedia infoboxes are defined as infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document into infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that class. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, and trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments of CRF and Bi-LSTM-CRF models for the knowledge extraction process.
Through the proposed process, structured knowledge can be extracted from text documents according to the ontology schema. In addition, the methodology can significantly reduce the effort required of experts to construct instances according to the ontology schema.
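The paper's three-step model is not reproduced here, but its final step, turning BIO-tagged sentences into RDF-style triples, can be sketched roughly as follows (the bare `B`/`I`/`O` tag names, the `dbo:country` predicate, and the sample sentence are illustrative only):

```python
def bio_to_triples(subject, predicate, tokens, tags):
    """Collect contiguous B-/I- tagged spans and emit
    (subject, predicate, value) triples, one per span."""
    triples, span = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":             # beginning of a value span
            if span:
                triples.append((subject, predicate, " ".join(span)))
            span = [token]
        elif tag == "I" and span:  # continuation of the current span
            span.append(token)
        else:                      # "O": outside any value span
            if span:
                triples.append((subject, predicate, " ".join(span)))
            span = []
    if span:
        triples.append((subject, predicate, " ".join(span)))
    return triples

tokens = ["Seoul", "is", "the", "capital", "of", "South", "Korea"]
tags   = ["O", "O", "O", "O", "O", "B", "I"]
triples = bio_to_triples("Seoul", "dbo:country", tokens, tags)
# triples == [("Seoul", "dbo:country", "South Korea")]
```

In the paper's pipeline the tags would come from the trained CRF or Bi-LSTM-CRF model rather than being given by hand as here.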

An Empirical Study on Defense Future Technology in Artificial Intelligence (인공지능 분야 국방 미래기술에 관한 실증연구)

  • Ahn, Jin-Woo;Noh, Sang-Woo;Kim, Tae-Hwan;Yun, Il-Woong
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.5
    • /
    • pp.409-416
    • /
    • 2020
  • Artificial intelligence, in the spotlight as the core driving force of the 4th industrial revolution, is expanding into various industrial fields such as smart factories and autonomous driving along with the development of high-performance hardware, big data, data processing technology, and learning methods and algorithms. In the field of defense, as the security environment has changed due to decreasing defense budgets, shrinking military service resources, and the spread of unmanned combat systems, advanced countries are conducting technical and policy research to incorporate artificial intelligence into areas including recognition systems, decision support, simplification of work processes, and efficient resource utilization. For this reason, technology-driven planning and investigation are becoming increasingly important for discovering and researching potential future defense technologies. In this study, based on research data collected to derive future defense technologies, we analyzed the characteristic evaluation indicators for future technologies in the field of artificial intelligence and conducted an empirical study. The results confirmed that, for future technologies in the defense AI field, applicability to weapon systems and the economic ripple effect show a significant relationship with a technology's prospects.

Guidelines for big data projects in artificial intelligence mathematics education (인공지능 수학 교육을 위한 빅데이터 프로젝트 과제 가이드라인)

  • Lee, Junghwa;Han, Chaereen;Lim, Woong
    • The Mathematical Education
    • /
    • v.62 no.2
    • /
    • pp.289-302
    • /
    • 2023
  • In today's digital information society, students' knowledge and skills in analyzing big data and making informed decisions have become an important goal of school mathematics. Integrating big data statistical projects with digital technologies in the high school <Artificial Intelligence> mathematics course has the potential to provide students with a high-impact learning experience that develops these essential skills. This paper proposes a set of guidelines for designing effective big data statistical project-based tasks and evaluates the tasks in artificial intelligence mathematics textbooks against these criteria. The proposed guidelines recommend that projects should: (1) align knowledge and skills with the national school mathematics curriculum; (2) use preprocessed massive datasets; (3) employ data scientists' problem-solving methods; (4) encourage decision-making; (5) leverage technological tools; and (6) promote collaborative learning. The findings indicate that few textbooks fully align with these guidelines; most fail to incorporate elements corresponding to Guideline 2 in their project tasks. In addition, most tasks in the textbooks overlook or omit data preprocessing, either by using smaller datasets or by using big data without any form of preprocessing, which can lead students to misconceptions about the nature of big data. Furthermore, this paper discusses the mathematical knowledge and skills relevant to artificial intelligence, as well as the potential benefits and pedagogical considerations of integrating technology into big data tasks. This research sheds light on teaching mathematical concepts with machine learning algorithms and on the effective use of technology tools in big data education.

Optimum design of geometrically non-linear steel frames using artificial bee colony algorithm

  • Degertekin, S.O.
    • Steel and Composite Structures
    • /
    • v.12 no.6
    • /
    • pp.505-522
    • /
    • 2012
  • An artificial bee colony (ABC) algorithm is developed for the optimum design of geometrically non-linear steel frames. The ABC is a new swarm intelligence method that simulates the intelligent foraging behaviour of a honeybee swarm to solve optimization problems. The aim is minimum-weight design of steel frames under strength, displacement, and size constraints. The geometric non-linearity of the frame members is taken into account in the optimum design algorithm. The performance of the ABC algorithm is tested on three steel frames taken from the literature. The results of the design examples demonstrate that the ABC algorithm finds better designs than other meta-heuristic optimization algorithms in a shorter time.
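The paper's design variables and constraints are not reproduced here; a generic, simplified ABC loop on a stand-in objective looks roughly like the following (all parameters are illustrative, and the sphere function replaces the frame-weight evaluation):

```python
import random

def abc_minimize(f, dim, bounds, n_food=10, limit=20, iters=200, seed=0):
    """Simplified artificial bee colony: employed bees perturb each food
    source, onlooker bees favor good sources, scouts replace stale ones."""
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    trials = [0] * n_food

    def neighbor(i):
        k = rng.randrange(n_food - 1)
        k = k if k < i else k + 1          # a partner source != i
        j = rng.randrange(dim)
        x = foods[i][:]
        x[j] += rng.uniform(-1, 1) * (x[j] - foods[k][j])
        x[j] = min(hi, max(lo, x[j]))
        return x

    def try_improve(i):
        cand = neighbor(i)
        if f(cand) < f(foods[i]):          # greedy replacement
            foods[i], trials[i] = cand, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):            # employed bee phase
            try_improve(i)
        fits = [1.0 / (1.0 + f(x)) for x in foods]
        total = sum(fits)
        for _ in range(n_food):            # onlooker bee phase
            r, acc, i = rng.uniform(0, total), 0.0, 0
            for i, w in enumerate(fits):
                acc += w
                if acc >= r:
                    break
            try_improve(i)
        for i in range(n_food):            # scout bee phase
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                trials[i] = 0
    return min(foods, key=f)

# Sphere function as a stand-in objective for the frame-weight evaluation.
best = abc_minimize(lambda x: sum(v * v for v in x), dim=3, bounds=(-5.0, 5.0))
```

In the actual design problem, `f` would return the frame weight, with penalty terms for violated strength, displacement, and size constraints.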

Performance Comparison of Deep Learning Model Loss Function for Scaffold Defect Detection (인공지지체 불량 검출을 위한 딥러닝 모델 손실 함수의 성능 비교)

  • Song Yeon Lee;Yong Jeong Huh
    • Journal of the Semiconductor & Display Technology
    • /
    • v.22 no.2
    • /
    • pp.40-44
    • /
    • 2023
  • Deep-learning-based defect detection requires minimal loss and high accuracy to pinpoint product defects. In this paper, we examine the training loss of deep learning models on disc-shaped artificial scaffold images, comparing the performance of the cross-entropy loss functions used in object detection algorithms. Models were built from normal and defective artificial scaffold images using categorical cross-entropy and sparse categorical cross-entropy. The data were trained five times with each loss function, and the average loss, average accuracy, final loss, and final accuracy for each loss function were recorded.
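The two losses compared in the paper differ only in label encoding: categorical cross-entropy takes one-hot labels, while the sparse variant takes integer class indices, and for the same prediction they compute the same value. A minimal pure-Python sketch (not the paper's training code; class names are illustrative):

```python
import math

def categorical_crossentropy(one_hot, probs):
    """Cross-entropy with a one-hot label: -sum_i y_i * log(p_i)."""
    return -sum(y * math.log(p) for y, p in zip(one_hot, probs) if y > 0)

def sparse_categorical_crossentropy(class_index, probs):
    """Cross-entropy with an integer label: -log(p[class])."""
    return -math.log(probs[class_index])

# Predicted probabilities for classes (normal, defective).
probs = [0.9, 0.1]
dense  = categorical_crossentropy([1, 0], probs)
sparse = sparse_categorical_crossentropy(0, probs)
# dense and sparse are equal: both are -log(0.9)
```

Since the two losses are mathematically equivalent, any performance difference reported in such a comparison would stem from implementation details rather than the loss definition itself.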


A Study on Fine Dust Prediction Based on Internal Factors Using Machine Learning (머신러닝을 활용한 내부 발생 요인 기반의 미세먼지 예측에 관한 연구)

  • Yong-Joon KIM;Min-Soo KANG
    • Journal of Korea Artificial Intelligence Association
    • /
    • v.1 no.2
    • /
    • pp.15-20
    • /
    • 2023
  • This study aims to enhance the accuracy of fine dust predictions by analyzing various factors within the local environment, in addition to atmospheric conditions. In the atmospheric environment, meteorological and air pollution data were utilized, and additional factors contributing to fine dust generation within the region, such as traffic volume and electricity transaction data, were sequentially incorporated for analysis. XGBoost, Random Forest, and ANN (Artificial Neural Network) were employed for the analysis. As variables were added, all algorithms demonstrated improved performance. Particularly noteworthy was the Artificial Neural Network, which, when using atmospheric conditions as a variable, resulted in an MAE of 6.25. Upon the addition of traffic volume, the MAE decreased to 5.49, and further inclusion of power transaction data led to a notable improvement, resulting in an MAE of 4.61. This research provides valuable insights for proactive measures against air pollution by predicting future fine dust levels.
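The reported MAE figures (6.25, 5.49, 4.61) come from the paper's experiments; the metric itself is simply the mean absolute error, sketched here on invented toy numbers rather than the study's data:

```python
def mae(actual, predicted):
    """Mean absolute error between observed and predicted fine dust levels."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

observed  = [30.0, 45.0, 28.0, 60.0]
predicted = [33.0, 40.0, 30.0, 55.0]
error = mae(observed, predicted)   # (3 + 5 + 2 + 5) / 4 = 3.75
```

A lower MAE after adding traffic-volume or electricity-transaction features, as in the paper, means the added variables reduced the average prediction error.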

Fuzzy Neural Network Pattern Classifier

  • Kim, Dae-Su;Hun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.1 no.3
    • /
    • pp.4-19
    • /
    • 1991
  • In this paper, we propose a fuzzy neural network pattern classifier utilizing fuzzy information. The system works without any a priori information about the number of clusters or the cluster centers. It classifies each input according to the distance between the weights and the normalized input using Bezdek's [1] fuzzy membership value equation. The model returns the correct membership value for each input vector and finds several cluster centers. Experimental comparisons with other algorithms are presented for sample data sets.
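The membership equation referenced here is the standard fuzzy c-means membership from Bezdek; a minimal sketch (the fuzzifier value and sample distances are illustrative, and nonzero distances are assumed):

```python
def fuzzy_memberships(distances, m=2.0):
    """Bezdek's fuzzy membership: u_i = 1 / sum_j (d_i / d_j)^(2/(m-1)).

    `distances` are the (nonzero) distances from one input vector to each
    cluster center; the returned memberships sum to 1, with closer centers
    receiving larger values. m > 1 is the fuzzifier.
    """
    exp = 2.0 / (m - 1.0)
    return [1.0 / sum((d_i / d_j) ** exp for d_j in distances)
            for d_i in distances]

# Input three times closer to the first center than the second.
u = fuzzy_memberships([1.0, 3.0])
# u ≈ [0.9, 0.1]
```

In the classifier described above, the distances would be those between the normalized input and the network's weight vectors.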


Development and Application of Scheduling System in Cold Rolling Mills (냉연 일정계획 시스템의 개발과 적용)

  • Kim, Chang-Hyun;Park, Sang-Hyuck
    • IE interfaces
    • /
    • v.16 no.2
    • /
    • pp.201-210
    • /
    • 2003
  • The purpose of this research is to develop a scheduling system for the CAL (Continuous Annealing Line) in a cold rolling mill. Based on the CSP (Constraint Satisfaction Problem) technique from artificial intelligence, we design and develop algorithms that produce schedules satisfying all the constraints imposed on the CAL. Performance tests show that the proposed scheduling system outperforms human operators in grouping coils with the same attributes and in minimizing the thickness differences between adjacent coils.
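The paper's full constraint set is not listed in the abstract; as a toy illustration of the CSP approach, the following backtracking search sequences coils so that adjacent thickness differences stay within a limit (the coil thicknesses and the limit are invented):

```python
def sequence_coils(thicknesses, max_step):
    """Backtracking CSP search: find an ordering of the coils in which
    consecutive thickness differences never exceed max_step."""
    n = len(thicknesses)

    def extend(order, used):
        if len(order) == n:
            return order
        for i in range(n):
            if i in used:
                continue
            if order and abs(thicknesses[i] - thicknesses[order[-1]]) > max_step:
                continue                    # constraint violated: prune
            result = extend(order + [i], used | {i})
            if result:
                return result
        return None

    order = extend([], frozenset())
    return [thicknesses[i] for i in order] if order else None

# Coils of 1.2, 0.8, 1.0, 0.9 mm with at most 0.2 mm between neighbors.
plan = sequence_coils([1.2, 0.8, 1.0, 0.9], max_step=0.2)
```

A production scheduler would add further constraints (width, coating, due dates) as additional pruning conditions inside the same search.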

A Study On The Text Recognition Using Artificial Intelligence Technique (인공지능 기법을 이용한 텍스트 인식에 관한 연구)

  • 이행세;최태영;김영길;김정우
    • Journal of the Korean Institute of Telematics and Electronics
    • /
    • v.26 no.11
    • /
    • pp.1782-1793
    • /
    • 1989
  • Stroke crossing number, a syntactic pattern recognition procedure, a top-down recognition structure, and a heuristic approach are studied for Korean text recognition. We propose new algorithms for: 1) Korean vowel separation using a limited scanning method over Korean characters; 2) stroke extraction using a stroke-width method; 3) the stroke crossing number and its properties; 4) the average, standard deviation, and mode of the stroke crossing number; and 5) classification and recognition methods for a limited set of Chinese characters. These are studied with computer simulations and experiments.
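The paper's precise definition is not given in the abstract; in the usual sense, the stroke crossing number of a scanline is the number of background-to-stroke transitions it encounters, sketched below on an invented binary row:

```python
def crossing_number(row):
    """Count 0->1 transitions along a binary scanline: the number of
    distinct stroke segments the line crosses."""
    return sum(1 for prev, cur in zip([0] + list(row), row)
               if prev == 0 and cur == 1)

# A scanline crossing two strokes (e.g. two vertical bars of a glyph).
row = [0, 1, 1, 0, 0, 1, 0]
n = crossing_number(row)
# n == 2
```

Statistics of this count over many scanlines (average, standard deviation, mode, as listed in the abstract) then serve as simple shape features for classification.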
