• Title/Summary/Keyword: 일반인공지능 (artificial general intelligence)

Design and Implementation of a Plan Knowledge Modeler (계획 지식 모델링 도구의 설계 및 구현)

  • Choi, Jae-Hyuk; Kim, In-Cheol
    • Proceedings of the Korean Information Science Society Conference / 2006.10b / pp.254-259 / 2006
  • Traditional AI planning takes the approach of automatically generating a task plan from scratch, based on a complete world state model and a system action model. However, in real-world application domains with high uncertainty and variability, such as intelligent robot control, this traditional planning approach is hard to apply effectively. On the other hand, many real-world domains already have well-known task domain knowledge and control knowledge, and exploiting them effectively is very important. One widely used method is to have a domain expert directly edit and enter highly complex task plans. Unlike basic action models, a task plan representation language generally expresses a plan as a single task process that includes complex control structures. Editing and verifying such complex procedural knowledge therefore requires a convenient modeling tool. This study describes the design and implementation of the PKM system, which allows PRS-style task plans to be edited in a visual environment and provides virtual simulation and integration with a task planner.
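
  As a rough, hypothetical illustration of the kind of plan representation the abstract describes (a task process with control structures), the sketch below models a PRS-style plan in Python; the node types, fields, and the example plan are illustrative assumptions, not the PKM system's actual format.

```python
# Minimal sketch of a plan with sequential actions and a conditional branch.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Action:
    name: str                                   # primitive action, e.g. "move_to"
    args: List[str] = field(default_factory=list)


@dataclass
class If:
    condition: str                              # condition tested against the world state
    then_branch: List[object] = field(default_factory=list)
    else_branch: List[object] = field(default_factory=list)


@dataclass
class Plan:
    goal: str
    body: List[object] = field(default_factory=list)   # sequence of Actions / Ifs


# Example: a plan whose body mixes sequential actions with a conditional branch.
patrol = Plan(
    goal="patrol_room",
    body=[
        Action("move_to", ["door"]),
        If("door_open",
           then_branch=[Action("enter")],
           else_branch=[Action("open_door"), Action("enter")]),
    ],
)
```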

Design of Artificial Intelligence Water Level Prediction System for Prediction of River Flood (하천 범람 예측을 위한 인공지능 수위 예측 시스템 설계)

  • Park, Se-Hyun; Kim, Hyun-Jae
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.2 / pp.198-203 / 2020
  • In this paper, we propose an artificial intelligence water level prediction system for small-river flood prediction. River level prediction can be a measure to reduce flood damage. However, it is difficult to build a flood model for a river because of the inherent characteristics of the river and of the rainfall that drives flooding. In general, the downstream water level is affected by the water levels at adjacent upstream points. Therefore, in this study, we constructed an artificial intelligence model using a recurrent neural network (LSTM) that predicts the downstream water level from the water levels of two upstream points. For the proposed system, we designed a water level meter and built a server using Node.js. The proposed neural network hardware system can predict the water level every 6 hours on a real river.
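
  A hedged sketch of the modeling idea in this abstract (two upstream gauges feeding an LSTM that predicts the downstream level) is shown below; the window length, column names, and the file "levels.csv" are assumptions for illustration, not details from the paper.

```python
# Sketch: predict downstream level from a sliding window of two upstream gauges.
import numpy as np
import pandas as pd
from tensorflow import keras

WINDOW = 24  # assumed number of past readings used per prediction

df = pd.read_csv("levels.csv")                      # assumed columns: upstream1, upstream2, downstream
X_raw = df[["upstream1", "upstream2"]].to_numpy("float32")
y_raw = df["downstream"].to_numpy("float32")

# Build sliding windows over the two upstream series.
X = np.stack([X_raw[i:i + WINDOW] for i in range(len(X_raw) - WINDOW)])
y = y_raw[WINDOW:]

model = keras.Sequential([
    keras.layers.Input(shape=(WINDOW, 2)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),                          # predicted downstream water level
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, validation_split=0.2)
```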

Development of multi-depth and artificial intelligence smart measuring device for analyzing surface water-groundwater correlation characteristics (지표수-지하수 연계 특성 분석용 다심도 및 인공지능 스마트 계측장치 개발)

  • Lim, Woo-Seok; Hwang, Chan-Ik; Choi, Myoung-Rak; Kim, Gyoo-Bum
    • Proceedings of the Korea Water Resources Association Conference / 2020.06a / pp.380-380 / 2020
  • 가뭄 피해 극복을 위한 인공 함양지 통합관리시스템의 일부로써 지표수-지하수 연계 특성 분석용 의사결정을 전달하는 인공지능 스마트 계측기의 필요성이 꾸준히 제기되어 왔으나 실용성과 효율성을 동시에 갖춘 계측기는 시장에 출시되지 않았다. 기존의 계측기는 단순 측정이 목적이었으며 분석을 위해서는 일정 기간 직접 계측하여 분석하거나, 계측데이터를 원격 망을 통하여 서버로 전송하고 관리자가 데이터를 해석하는 방식을 취하였다. 또한, 수질 계측과 수질의 미소 변동성을 동시에 계측하여 수질 변화상태를 판단 할 수 있는 수질 계측기는 상품화되지 않아 다목적 수질 분석에 한계점을 갖고 있다. 이러한 한계점이 기존의 지하수 수질 계측기로는 불가능한 수중 라돈을 채수 없이 계측 가능하도록 하고, 순간 수질 변화 및 수질 변화 요인분석이 가능한 계측을 위하여 라돈, 전도도, 수위, 수온 및 필름형 pH 센서를 개발하여 적용한 다항목 계측기로 통합하는 연구가 필요한 이유이다. 개발한 계측기는 빅데이터 기반의 지능형 수질 변동성 분석 알고리즘을 내장하고 수직 깊이 방향의 다중심도 계측이 가능하도록 핵심적인 통신 연결성을 확보하였고 다양한 수질에서 견딜 수 있으며 특히 인공함양에서 발생하는 철, 망간에 부식되지 않는 재질을 이용하여 설계한 '지표수-지하수 연계 특성 분석용 다심도 및 인공지능 스마트 계측장치'이다. 본 장치는 기존 지하수 수질 계측기에서는 불가능하였던 순간 수위변화 및 수위변화 요인분석이 가능한 계측을 위하여 초당 측정 샘플링 주파수(10Hz)를 높인 계측회로를 개발하여 적용하였다.
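
  As a purely illustrative sketch (not the device firmware), the snippet below shows what 10 Hz sampling with a simple check for abrupt changes could look like; read_sensor() is a hypothetical stand-in for the hardware driver, and the thresholds are assumptions.

```python
# Sketch: sample at 10 Hz and flag readings that deviate sharply from the recent mean.
import time
from collections import deque
from statistics import mean, stdev


def read_sensor() -> float:
    """Hypothetical driver call returning one reading, e.g. water level in metres."""
    return 12.3


window = deque(maxlen=600)                 # last minute of readings at 10 Hz
while True:
    level = read_sensor()
    if len(window) > 30 and abs(level - mean(window)) > 3 * (stdev(window) or 1e-6):
        print("abrupt change flagged:", level)   # candidate for cause analysis
    window.append(level)
    time.sleep(0.1)                        # 10 Hz sampling period
```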

A Transdisciplinary and Humanistic Approach on the Impacts by Artificial Intelligence Technology (인공지능과 디지털 기술 발달에 따른 트랜스/포스트휴머니즘에 관한 학제적 연구)

  • Kim, Dong-Yoon; Bae, Sang-Joon
    • Journal of Broadcast Engineering / v.24 no.3 / pp.411-419 / 2019
  • Nowadays we are not able to consider or imagine anything without taking into account what is called Artificial Intelligence. Even broadcasting media technologies can no longer be thought of outside this newly emerging technology of A.I. Since the late 20th century, this technology has seemingly been accelerating its development thanks to an unbelievably enormous computational capacity for processing data. In conjunction with firmly established worldwide platform companies like GAFA (Google, Amazon, Facebook, Apple), the key cutting-edge technologies dubbed NBIC (Nanotech, Biotech, Information Technology, Cognitive Science) converge to change the map of the current civilization by affecting the human relationship with the world and hence modifying what is essential in humans. Under the sign of these converging technologies, relatively recently coined concepts such as 'trans(post)humanism' are emerging in the academic spheres of North America and major European regions. Even though the so-called trans(post)human movements are prevailing in the major technological hubs, these terms have not yet reached a unanimous acceptance among experts from diverse fields. Indeed, trans(post)humanism, as a somewhat obscure term, has remained a largely controversial trend, because opinions differ widely among scientific, philosophical, medical, and engineering scholars such as Peter Sloterdijk, N. K. Hayles, Neil Badmington, Raymond Kurzweil, Hans Moravec, Laurent Alexandre, and Gilbert Hottois, to name a few. However, considering the dazzling development of artificial intelligence technology, which basically functions in conjunction with the cybernetic communication system first conceived by the MIT mathematician Norbert Wiener, we cannot avoid questioning what A.I. signifies and how it will affect the current media communication environment.

The Influence of AI Technology Acceptance and Ethical Awareness towards Intention to Use (인공지능 기술수용과 윤리성 인식이 이용의도에 미치는 영향)

  • Ko, Young-Hwa; Leem, Choon-Seong
    • Journal of Digital Convergence / v.19 no.3 / pp.217-225 / 2021
  • This study analyzed the perceptions formed by artificial intelligence users by combining the technology readiness index and the technology acceptance model and extending them into a model that also considers artificial intelligence ethics, in order to examine the impact of technology acceptance and ethical awareness. The independent variables were optimism, transparency, ethical awareness, and user-centeredness; perceived usefulness and perceived ease of use were latent variables affected by the independent variables; and intention to use was defined as the dependent latent variable. Responses collected online and offline from men and women aged 17 or older across the country (N=260) between September 5 and October 12, 2020 were used in the analysis. The findings showed, first, that optimism had a significant positive effect on perceived usefulness and ease of use. Second, ethical awareness (transparency, ethical awareness, user-centeredness) did not have a significant effect on perceived usefulness and ease of use. Third, perceived usefulness and ease of use were found to have a significant positive effect on intention to use. Fourth, perceived usefulness had a relatively higher influence than ease of use.
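
  A hedged sketch of a structural model along the lines described in this abstract is shown below using semopy; the column names, file name, and exact path structure are assumptions for illustration, not the authors' specification.

```python
# Sketch: path model from acceptance/ethics constructs to usefulness, ease of use, and intention.
import pandas as pd
import semopy

desc = """
usefulness ~ optimism + transparency + ethics + user_centered
ease_of_use ~ optimism + transparency + ethics + user_centered
intention ~ usefulness + ease_of_use
"""

df = pd.read_csv("survey_n260.csv")    # assumed file with one column per construct score
model = semopy.Model(desc)
model.fit(df)
print(model.inspect())                 # path coefficients and p-values
```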

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon; Kim, Dongsung; Lee, Hong Joo; Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.1-19 / 2019
  • Artificial intelligence (AI) is one of the main driving forces leading the Fourth Industrial Revolution. The technologies associated with AI have already shown abilities equal to or better than humans in many fields, including image and speech recognition. In particular, many efforts have been made to identify current technology trends and analyze development directions, because AI technologies can be utilized in a wide range of fields including medicine, finance, manufacturing, service, and education. Major platforms on which complex AI algorithms for learning, reasoning, and recognition can be developed have been opened to the public as open source projects, and technologies and services that utilize them have increased rapidly; this has been confirmed as one of the major reasons for the fast development of AI technologies. Additionally, the spread of the technology owes much to open source software, developed by major global companies, supporting natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing open source software (OSS) projects associated with AI, which have been developed through the online collaboration of many parties. This study searched and collected a list of major projects related to AI generated from 2000 to July 2018 on Github, and confirmed the development trends of major technologies in detail by applying text mining techniques to topic information, which indicates the characteristics and technical fields of the collected projects. The results of the analysis showed that the number of software development projects per year was less than 100 until 2013. It increased to 229 projects in 2014 and 597 projects in 2015, and the number of open source projects related to AI increased rapidly in 2016 (2,559 OSS projects). The number of projects initiated in 2017 was 14,213, almost four times the total number of projects generated from 2009 to 2016 (3,555 projects), and the number of projects initiated from January to July 2018 was 8,737. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of AI-related OSS projects. The results showed that natural language processing technology remained at the top in all years, implying that such OSS had been developed continuously. Until 2015, the programming languages Python, C++, and Java were listed among the top ten most frequent topics. After 2016, however, programming languages other than Python disappeared from the top ten topics; instead, platforms supporting the development of AI algorithms, such as TensorFlow and Keras, showed high appearance frequency. Additionally, reinforcement learning algorithms and convolutional neural networks, which have been used in various fields, appeared frequently as topics. The results of the topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency. The main difference was that visualization and medical imaging topics were found near the top of the list, although they were not at the top from 2009 to 2012, indicating that OSS was developed in the medical field in order to utilize AI technology. Moreover, although computer vision was in the top 10 of the appearance frequency list from 2013 to 2015, it was not in the top 10 of the degree centrality list. The topics at the top of the degree centrality list were otherwise similar to those at the top of the appearance frequency list, with the ranks of convolutional neural networks and reinforcement learning changed slightly. The trend of technology development was examined using the appearance frequency of topics and degree centrality. The results showed that machine learning had the highest frequency and the highest degree centrality in all years. It is also noteworthy that, although the deep learning topic showed a low frequency and a low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015, and in recent years both machine learning and deep learning have had high appearance frequency and degree centrality. TensorFlow first appeared during the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it near the top of the lists after deep learning and Python. Computer vision and reinforcement learning did not show an abrupt increase or decrease and had relatively low appearance frequency and degree centrality compared with the above-mentioned topics. Based on these analysis results, it is possible to identify the fields in which AI technologies are actively developed. The results of this study can be used as a baseline dataset for more empirical analysis of future technology trends.
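
  As an illustrative sketch (not the authors' code) of the frequency and degree centrality analysis described above, the snippet below counts topic appearances and computes degree centrality over a topic co-occurrence network; the example topic lists are assumptions standing in for the collected Github project data.

```python
# Sketch: topic appearance frequency and degree centrality over co-occurring project topics.
from collections import Counter
from itertools import combinations
import networkx as nx

# Each inner list stands for the topics attached to one Github project (assumed examples).
projects = [
    ["machine-learning", "deep-learning", "tensorflow"],
    ["machine-learning", "natural-language-processing", "python"],
    ["deep-learning", "computer-vision", "tensorflow"],
]

# Appearance frequency of each topic.
freq = Counter(t for topics in projects for t in topics)

# Co-occurrence network: topics appearing in the same project are linked.
G = nx.Graph()
for topics in projects:
    G.add_edges_from(combinations(sorted(set(topics)), 2))

centrality = nx.degree_centrality(G)
print(freq.most_common(5))
print(sorted(centrality.items(), key=lambda kv: -kv[1])[:5])
```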

Using GA based Input Selection Method for Artificial Neural Network Modeling Application to Bankruptcy Prediction (유전자 알고리즘을 활용한 인공신경망 모형 최적입력변수의 선정: 부도예측 모형을 중심으로)

  • Hong, Seung-Hyun; Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems / v.9 no.1 / pp.227-249 / 2003
  • Prediction of corporate failure using past financial data is a well-documented topic. Early studies of bankruptcy prediction used statistical techniques such as multiple discriminant analysis, logit, and probit. Recently, however, numerous studies have demonstrated that artificial intelligence techniques such as neural networks can be an alternative methodology for classification problems to which traditional statistical methods have long been applied. In building a neural network model, the selection of independent and dependent variables should be approached with great care and treated as part of the model construction process. Irrespective of the efficiency of a learning procedure in terms of convergence, generalization, and stability, the ultimate performance of the estimator will depend on the relevance of the selected input variables and the quality of the data used. Approaches developed in statistics, such as correlation analysis and stepwise selection, are often very useful; these methods, however, may not be optimal for developing a neural network model. In this paper, we propose a genetic algorithm approach to find an optimal or near-optimal set of input variables for neural network modeling. The proposed approach is demonstrated by application to bankruptcy prediction modeling. Our experimental results show that this approach increases the overall classification accuracy rate significantly.
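
  The sketch below is a hedged, minimal illustration of GA-based input selection for a neural network classifier, in the spirit of the abstract; the synthetic dataset, population size, crossover/mutation rates, and network size are assumptions, not the paper's settings.

```python
# Sketch: a simple genetic algorithm that evolves a binary mask over input variables.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=20, n_informative=6, random_state=0)


def fitness(mask):
    """Cross-validated accuracy of a small neural network on the selected inputs."""
    if mask.sum() == 0:
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()


pop = rng.integers(0, 2, size=(20, X.shape[1]))           # random chromosomes
for gen in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]               # keep the fittest half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])        # one-point crossover
        flip = rng.random(X.shape[1]) < 0.05              # mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected inputs:", np.flatnonzero(best))
```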

A Study on the Quantitative Evaluation Method of Quality Control using Ultrasound Phantom in Ultrasound Imaging System based on Artificial Intelligence (인공지능을 활용한 초음파영상진단장치에서 초음파 팬텀 영상을 이용한 정도관리의 정량적 평가방법 연구)

  • Im, Yeon Jin; Hwang, Ho Seong; Kim, Dong Hyun; Kim, Ho Chul
    • Journal of Biomedical Engineering Research / v.43 no.6 / pp.390-398 / 2022
  • Ultrasound examination uses equipment that images human organs with sound waves and is used in various areas such as diagnosis, follow-up, and treatment of diseases. However, if the quality of ultrasound equipment is not guaranteed, the possibility of misdiagnosis increases and the diagnosis rate decreases. Accordingly, the Korean Society of Radiology and the Korean Society of Ultrasound in Medicine presented guidelines for quality management of ultrasound equipment using the ATS-539 phantom. The DenseNet201 classification algorithm shows 99.25% accuracy and 5.17% loss for the Dead Zone, 97.52% loss for Axial/Lateral Resolution, 96.98% accuracy and 20.64% loss for Sensitivity, and 93.44% accuracy and 22.07% loss for the Gray Scale and Dynamic Range. As a result, it performed best and is judged to be an algorithm that can be used for quantitative evaluation. This study suggests that performing quantitative evaluation with artificial intelligence on the qualitative evaluation items of ultrasound equipment can increase the reliability of ultrasound equipment with high accuracy.
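
  A hedged sketch of a DenseNet201 classifier for phantom-image quality control classes follows; the image size, class count, and directory layout are assumptions for illustration, not the study's actual training setup.

```python
# Sketch: transfer learning with DenseNet201 on phantom images organized by class folder.
from tensorflow import keras

train_ds = keras.utils.image_dataset_from_directory(
    "phantom_images/train", image_size=(224, 224), batch_size=16)   # assumed folder layout

base = keras.applications.DenseNet201(include_top=False, weights="imagenet",
                                      input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                          # freeze the ImageNet backbone

model = keras.Sequential([
    keras.layers.Rescaling(1.0 / 255),
    base,
    keras.layers.Dense(4, activation="softmax"),  # assumed number of QC grades
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```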

A Study on the Dataset Construction and Model Application for Detecting Surgical Gauze in C-Arm Imaging Using Artificial Intelligence (인공지능을 활용한 C-Arm에서 수술용 거즈 검출을 위한 데이터셋 구축 및 검출모델 적용에 관한 연구)

  • Kim, Jin Yeop; Hwang, Ho Seong; Lee, Joo Byung; Choi, Yong Jin; Lee, Kang Seok; Kim, Ho Chul
    • Journal of Biomedical Engineering Research / v.43 no.4 / pp.290-297 / 2022
  • During surgery, surgical instruments are sometimes left behind in the body by accident. Most of these are surgical gauze, so radiopaque gauze (X-ray gauze) is used to prevent gauze from being left in the body. This gauze is divided into wire and pad types. If gauze is suspected to remain in the body, it must be imaged with a mobile X-ray device and detected by a radiologist's reading. However, most operating rooms are equipped not with a mobile X-ray device but with C-Arm equipment, which produces lower-quality images than mobile X-ray equipment, and reading the images also takes time. In this study, C-Arm equipment was used to acquire gauze images, a dataset was built, and an artificial intelligence detection model was selected to compensate for the relatively low image quality and to assist the reading by radiology specialists. mAP@50 and detection time were used as performance indicators. The result is that the two-class gauze detection dataset is more accurate, and the YOLOv5 model achieves an mAP@50 of 93.4% with a detection time of 11.7 ms.
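
  As a hedged sketch of running a YOLOv5 detector on a C-Arm frame, as in this abstract, the snippet below loads custom weights via the Ultralytics hub; the weights file "gauze.pt" and the image path are assumptions, not artifacts from the study.

```python
# Sketch: single-image gauze detection with a custom-trained YOLOv5 model.
import torch

# Load YOLOv5 from the Ultralytics hub with custom-trained gauze weights (assumed file).
model = torch.hub.load("ultralytics/yolov5", "custom", path="gauze.pt")
model.conf = 0.25                       # confidence threshold

results = model("c_arm_frame.png")      # run detection on one C-Arm image (assumed path)
results.print()                         # class, confidence, and box summary
detections = results.pandas().xyxy[0]   # bounding boxes as a DataFrame
print(detections[["name", "confidence", "xmin", "ymin", "xmax", "ymax"]])
```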

Applicability of Artificial Intelligence Techniques to Forecast Rainfall and Flood Damage in Future (미래 강우량 및 홍수피해 전망을 위한 인공지능 기법의 적용성 검토)

  • Lee, Hoyong; Kim, Jongsung; Seo, Jaeseung; Kim, Sameun; Kim, Soojun
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.184-184 / 2021
  • In 2020, as the upper-level jet stream strengthened markedly, the development of small-scale low-pressure systems more than doubled compared with normal years; as a result, the monsoon lasted up to about 54 days and caused large-scale inundation damage of roughly 1.0371 trillion won. Extreme weather driven by recent climate change is occurring frequently, and the intensity of disasters such as floods and typhoons and the resulting property damage are steadily increasing. Therefore, this study aims to identify the trend of rainfall change over the next 30 years in consideration of climate change and to examine the corresponding increase in the scale of disaster damage. Among the scenarios presented in the IPCC AR6 (Intergovernmental Panel on Climate Change Sixth Assessment Report), the extreme scenario SSP5-8.5 and the stabilization scenario SSP2-4.5 were used. Because GCM (General Circulation Model) data are global and have low spatial resolution, a downscaling technique must be applied for use in Korea. In this study, an artificial intelligence technique, one of the statistical downscaling methods, was applied for spatial downscaling, and the accuracy of the trained model was verified against reference data and observed rainfall from synoptic weather stations (ASOS) for 1905-2014. In addition, annual precipitation and the scale and frequency of flood damage by year were examined to derive a relationship between the increase in annual precipitation and the increase in damage. Finally, using the downscaling model, the rainfall of Busan Metropolitan City was projected up to 2050 to estimate the increase in annual precipitation and in damage scale. The results of this study can be used as basic data for establishing adaptive climate change measures as part of preventive disaster management for Busan Metropolitan City.
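
  A hedged sketch of the statistical downscaling idea in this abstract (an AI model mapping coarse GCM grid values to station-observed rainfall) is shown below; the CSV file, column names, and network size are assumptions for illustration, not the study's configuration.

```python
# Sketch: neural-network downscaling from GCM grid cells to an ASOS station value.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

df = pd.read_csv("gcm_vs_asos.csv")     # assumed: GCM grid cells g1..g4 and ASOS station rain
X = df[["g1", "g2", "g3", "g4"]]        # coarse-resolution predictors (assumed columns)
y = df["asos_rain"]                     # observed station rainfall (assumed column)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out years:", model.score(X_te, y_te))

# The fitted model could then be applied to SSP2-4.5 / SSP5-8.5 GCM output to 2050.
```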
