• Title/Summary/Keyword: 인공지능 확산 (diffusion of artificial intelligence)


Development of Web System for Predicting Divorce Probability (이혼 확률 예측 웹 시스템 개발)

  • Cho Kyu Cheol;Lee Saem Mi
    • Proceedings of the Korean Society of Computer Information Conference / 2023.01a / pp.247-248 / 2023
  • According to the 한인가정상담소 (Korean Family Counseling Center), conflict between spouses and partners accounted for the largest share of counseling cases in 2020, when the spread of COVID-19 began in earnest. Against this background, this paper implements a web-based divorce probability prediction service to help couples gauge the seriousness of their relationship before counseling. To build the service, divorce prediction data published on Kaggle were used to train an artificial intelligence model that predicts the probability of divorce from questionnaire responses, and the model was deployed on the web (a modeling sketch follows below).

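As a rough, non-authoritative illustration of the kind of model described above, the sketch below trains a scikit-learn classifier on questionnaire-style data and returns a probability, as a web backend might. The paper uses the public Kaggle divorce predictors dataset (54 items scored 0-4); the synthetic stand-in data, item count, and label rule here are assumptions so the example runs on its own.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the Kaggle divorce predictors data
# (54 questionnaire items scored 0-4 plus a divorce label).
rng = np.random.default_rng(0)
n_items, n_couples = 54, 300
X = rng.integers(0, 5, size=(n_couples, n_items))
y = (X.mean(axis=1) + rng.normal(0, 0.4, n_couples) > 2.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

# A web endpoint would collect one questionnaire response and return the probability.
answers = rng.integers(0, 5, size=(1, n_items))
print(f"estimated divorce probability: {model.predict_proba(answers)[0, 1]:.1%}")
```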

A Study on the Users Intention to Adopt an Intelligent Service: Focusing on the Factors Affecting the Perceived Necessity of Conversational A.I. Service (인공지능 서비스의 사용자 수용 의도에 관한 연구 : 대화형 AI서비스 필요성에 대한 인식에 영향을 주는 요인을 중심으로)

  • Jeon, Sowon;Lee, Jihee;Lee, Jongtae
    • Journal of Korea Technology Innovation Society / v.22 no.2 / pp.242-264 / 2019
  • This study examines the factors affecting user intention to adopt an intelligent service, focusing on A.I. speaker services. At present, there is a considerable gap between the expectations for IT-based intelligent services and their realized diffusion. Building on diverse previous research, including TAM and UTAUT studies, this study aims to explain this gap and to identify the direct and indirect effects of factors such as security concerns, perceived time pressure, service innovativeness, and prior experience with IT-based intelligent services. In particular, it considers the expected impact of perceived time pressure on user acceptance of A.I. speaker services. The analysis shows that, beyond traditional factors such as perceived usefulness and hedonic/utilitarian motives, perceived time pressure, perceived security concerns, and prior experience with the services are also meaningful factors affecting users' adoption of A.I. speaker services.
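As a toy illustration of how direct effects of such survey factors might be estimated, the sketch below fits an ordinary least squares model on simulated 5-point-scale responses. All variable names and the simulated data are assumptions; the paper's actual TAM/UTAUT-based measurement and path model is more elaborate and also covers indirect effects.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200

# Hypothetical survey responses on 5-point scales (not the paper's data).
df = pd.DataFrame({
    "perceived_usefulness": rng.integers(1, 6, n),
    "hedonic_motive":       rng.integers(1, 6, n),
    "time_pressure":        rng.integers(1, 6, n),
    "security_concern":     rng.integers(1, 6, n),
    "prior_experience":     rng.integers(0, 2, n),
})
# Simulated outcome with assumed effect directions for illustration only.
df["adoption_intention"] = (
    0.5 * df["perceived_usefulness"] + 0.3 * df["hedonic_motive"]
    + 0.2 * df["time_pressure"] - 0.3 * df["security_concern"]
    + 0.4 * df["prior_experience"] + rng.normal(0, 1, n)
)

# Ordinary least squares on the direct effects.
X = sm.add_constant(df.drop(columns=["adoption_intention"]))
result = sm.OLS(df["adoption_intention"], X).fit()
print(result.summary())
```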

The Education Model of Liberal Arts to Improve the Artificial Intelligence Literacy Competency of Undergraduate Students (대학생의 AI 리터러시 역량 신장을 위한 교양 교육 모델)

  • Park, Youn-Soo;Yi, Yumi
    • Journal of The Korean Association of Information Education / v.25 no.2 / pp.423-436 / 2021
  • In the future, artificial intelligence (AI) technology is expected to become a general-purpose technology (GPT), and AI competency is predicted to become an essential competency. Nations around the world are fostering experts in the field of AI to achieve technological proficiency while developing the necessary infrastructure and educational environment. In this study, we investigated the status of software education at the liberal arts level at 31 universities in Seoul, along with precedents from domestic and foreign AI education research. Based on this, we concluded that an AI literacy education model is needed to link liberal arts software education with professional AI education. We also classified 20 AI-related lectures released on KOCW according to the AI literacy competencies they address; based on the results of this classification, we propose a model for AI literacy education in the liberal arts for undergraduate students. The proposed model takes an AI·SW convergence approach in which students experience AI together with humanities literacy, departing from the existing theoretical, computer-science-centered approach. We expect that our proposed AI literacy education model can contribute to the proliferation of AI.

Analysis Study on the Detection and Classification of COVID-19 in Chest X-ray Images using Artificial Intelligence (인공지능을 활용한 흉부 엑스선 영상의 코로나19 검출 및 분류에 대한 분석 연구)

  • Yoon, Myeong-Seong;Kwon, Chae-Rim;Kim, Sung-Min;Kim, Su-In;Jo, Sung-Jun;Choi, Yu-Chan;Kim, Sang-Hyun
    • Journal of the Korean Society of Radiology / v.16 no.5 / pp.661-672 / 2022
  • After the outbreak of SARS-CoV-2, the virus that causes COVID-19, the disease spread around the world, and the rapidly rising numbers of infections and deaths caused a shortage of medical resources. As one way to address this problem, chest X-ray diagnosis using artificial intelligence (AI) received attention as a primary diagnostic method. The purpose of this study is to comprehensively analyze AI-based detection of COVID-19. To this end, 292 studies were collected through a series of classification steps. Based on these data, performance measurements including accuracy, precision, area under the curve (AUC), sensitivity, specificity, F1-score, recall, K-fold settings, architecture, and number of classes were analyzed. The average accuracy, precision, AUC, sensitivity, and specificity were 95.2%, 94.81%, 94.01%, 93.5%, and 93.92%, respectively. Reported performance gradually increased year over year; in addition, we examined the rate of change according to the number of classes and the amount of image data, the usage ratio of each architecture, and the K-fold settings. Currently, AI-based diagnosis of COVID-19 still has several problems that prevent it from being used on its own, but it is expected to be sufficient as an assistant to physicians.
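The review above aggregates standard classification metrics. The sketch below shows, on assumed toy labels and scores, how those metrics (accuracy, precision, sensitivity, specificity, F1-score, AUC) are typically computed with scikit-learn; it uses none of the paper's data.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, confusion_matrix)

# Hypothetical ground-truth labels and model scores (1 = COVID-19 positive).
y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_score = np.array([0.92, 0.10, 0.85, 0.40, 0.25, 0.05, 0.78, 0.55, 0.90, 0.15])
y_pred  = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("Accuracy   :", accuracy_score(y_true, y_pred))
print("Precision  :", precision_score(y_true, y_pred))
print("Sensitivity:", recall_score(y_true, y_pred))   # recall on the positive class
print("Specificity:", tn / (tn + fp))                 # recall on the negative class
print("F1-score   :", f1_score(y_true, y_pred))
print("AUC        :", roc_auc_score(y_true, y_score))
```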

Intelligent Abnormal Situation Event Detections for Smart Home Users Using Lidar, Vision, and Audio Sensors (스마트 홈 사용자를 위한 라이다, 영상, 오디오 센서를 이용한 인공지능 이상징후 탐지 알고리즘)

  • Kim, Da-hyeon;Ahn, Jun-ho
    • Journal of Internet Computing and Services / v.22 no.3 / pp.17-26 / 2021
  • Recently, COVID-19 has spread, and time spent at home has been increasing in accordance with government quarantine guidelines such as recommendations to refrain from going out. As a result, the number of single-person households staying at home is also increasing, and single-person households are less likely than multi-person households to be able to notify the outside world in an emergency. This study collects various situations occurring in the home with lidar, image, and audio sensors and analyzes the data from each sensor with its own algorithm. Using this approach, we analyzed abnormal patterns such as emergency situations and studied how to detect abnormal signs in people. Artificial intelligence algorithms that detect abnormalities for each sensor were developed, and the accuracy of anomaly detection was measured per sensor. Furthermore, this work proposes a fusion method that offsets the strengths and weaknesses of the individual sensors, based on experiments on how well each sensor can detect various situations.
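As a minimal sketch of late fusion across heterogeneous sensors, the snippet below combines hypothetical per-sensor anomaly scores with assumed weights and a threshold. The paper derives its fusion rule from per-situation detection experiments, so every number here is a placeholder.

```python
# Hypothetical per-sensor anomaly scores in [0, 1] produced by separate models
# (e.g., a lidar trajectory model, a vision pose model, an audio event model).
scores = {"lidar": 0.35, "vision": 0.82, "audio": 0.64}

# Assumed reliability weights per sensor; the paper's weights come from its experiments.
weights = {"lidar": 0.3, "vision": 0.4, "audio": 0.3}

fused = sum(weights[s] * scores[s] for s in scores)

# Escalate when the fused score, or any single very confident sensor, crosses a threshold.
ALERT_THRESHOLD = 0.6
if fused >= ALERT_THRESHOLD or max(scores.values()) >= 0.9:
    print(f"abnormal event suspected (fused score {fused:.2f}) -> notify caregiver")
else:
    print(f"no anomaly (fused score {fused:.2f})")
```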

Issues and Trends Related to Artificial Intelligence in Research Ethics (연구윤리에서 인공지능 관련 이슈와 동향)

  • Sun-Hee Lee
    • Health Policy and Management / v.34 no.2 / pp.103-105 / 2024
  • Artificial intelligence (AI) technology is rapidly spreading across various industries. Accordingly, interest in ethical issues arising from the use of AI is also increasing. This is particularly true in the healthcare sector, where AI-related ethical issues are a significant topic due to its focus on health and life. Hence, this issue aims to examine the ethical concerns when using AI tools during research and publication processes. One of the key concerns is the potential for unintended plagiarism when researchers use AI tools for tasks such as translation, citation, and editing. Currently, as AI is not given authorship, the researcher is held accountable for any ethical problems arising from using AI tools. Researchers are advised to specify which AI tools were used and how they were employed in their research papers. As more cases of ethical issues related to AI tools accumulate, it is expected that various guidelines will be developed. Therefore, researchers should stay informed about global consensus and guidelines regarding the use of AI tools in the research and publication process.

An Exploratory Study on Educational instruments of Physical Computing for Maker Education (메이커 교육을 위한 피지컬 컴퓨팅 교구에 관한 탐색적 연구)

  • Lee, Chang Youn;Ahn, Jae-Hyun;Seo, Tae-Kyun
    • Proceedings of The KACE / 2018.01a / pp.157-160 / 2018
  • With the development of artificial intelligence technology and the rise of social discussion about the Fourth Industrial Revolution, calls for change are also growing in the education community. As computer utilization competencies are emphasized in the knowledge-based society, technology-based teaching and learning such as coding and software education is drawing attention regardless of subject area. Recently, as positive perceptions of people who create open source have spread, such people have come to be called makers, and a movement to cultivate makers has begun to diffuse. Academia has also tried to embrace this movement, but related studies have not clearly shown what distinguishes it from existing technology-based teaching and learning. As an exploratory study on maker education, this work surveys physical computing instruments that can develop the making competencies proposed by 이재호 and 장준형 (2017) and proposes how they might be used. The initially proposed usage plans were reviewed by three in-service teachers and revised based on their advice. The revised plans are presented according to the three components of making competency: analysis, design, and implementation. In this way, the study aims to contribute to the theoretical development and diffusion of maker education.


A Study on the Worm Detection in the IP Packet based on Self-Organizing Feature Maps (Self-Organizing Feature Maps 기반 IP 패킷의 웜 탐지에 관한 연구)

  • 민동옥;손태식;문종섭
    • Proceedings of the Korean Information Science Society Conference / 2004.10a / pp.346-348 / 2004
  • In the rapidly growing Internet environment, information security is one of the most important considerations. In particular, worm viruses, which spread quickly owing to the growth of the Internet, account for the majority of today's viruses and propagate various kinds of viruses and malicious code across networks. Even now worm viruses are spreading through networks, but because worm detection relies on rule matching at the application level, new or variant worms are difficult to detect, and detection is only possible after infection. This study proposes a worm detection method that works at the network level and can detect new and variant worms, using SOFM (Self-Organizing Feature Maps), an artificial intelligence neural network model (a SOFM sketch follows below).

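A minimal, from-scratch illustration of SOFM-based anomaly detection follows: a small self-organizing map is trained on assumed "normal" packet-feature vectors, and packets with a large quantization error are flagged. The feature set, grid size, and threshold rule are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical normalized packet features (the paper's actual feature set is not shown here).
normal_traffic = rng.normal(0.0, 0.3, size=(500, 8))

# Small SOM grid with randomly initialized weight vectors.
grid_h, grid_w, dim = 6, 6, 8
weights = rng.normal(0.0, 0.3, size=(grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1)

def bmu(x, w):
    """Grid coordinates of the best-matching unit for sample x."""
    d = np.linalg.norm(w - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

# Train with a shrinking learning rate and neighborhood radius.
n_iter = 3000
for t in range(n_iter):
    x = normal_traffic[rng.integers(len(normal_traffic))]
    lr = 0.5 * (1 - t / n_iter)
    sigma = 2.0 * (1 - t / n_iter) + 0.5
    bi, bj = bmu(x, weights)
    grid_dist2 = ((coords - np.array([bi, bj])) ** 2).sum(axis=-1)
    h = np.exp(-grid_dist2 / (2 * sigma ** 2))[..., None]
    weights += lr * h * (x - weights)

def quantization_error(x, w):
    """Distance from a sample to its best-matching unit's weight vector."""
    return np.linalg.norm(w[bmu(x, w)] - x)

# Threshold learned on normal traffic; larger errors suggest worm-like packets.
errors = np.array([quantization_error(x, weights) for x in normal_traffic])
threshold = errors.mean() + 3 * errors.std()

suspect = rng.normal(1.5, 0.3, size=8)   # hypothetical anomalous packet features
print("worm-like" if quantization_error(suspect, weights) > threshold else "normal")
```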

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.1-19 / 2019
  • Artificial intelligence (AI) is one of the main driving forces of the Fourth Industrial Revolution. Technologies associated with AI have already shown abilities equal to or better than humans in many fields, including image and speech recognition. Because AI technologies can be applied across a wide range of fields, including medicine, finance, manufacturing, services, and education, much effort has been devoted to identifying current technology trends and analyzing their development directions. Major platforms for developing complex AI algorithms for learning, reasoning, and recognition have been released to the public as open source projects, and the technologies and services that use them have increased rapidly, which is regarded as one of the major reasons for the fast development of AI technologies. The spread of the technology also owes a great deal to open source software developed by major global companies to support natural language processing, speech recognition, and image recognition. Therefore, this study aimed to identify practical trends in AI technology development by analyzing open source software (OSS) projects associated with AI that have been developed through the online collaboration of many parties. We searched and collected major AI-related projects created on Github from 2000 to July 2018 and examined the development trends of major technologies in detail by applying text mining to the topic information that characterizes the collected projects and their technical fields. The results show that the number of projects per year was below 100 until 2013, then rose to 229 projects in 2014 and 597 projects in 2015. The number of AI-related open source projects increased sharply in 2016 (2,559 projects), and 14,213 projects were initiated in 2017, almost four times the total number of projects created from 2009 to 2016 (3,555 projects). From January to July 2018, 8,737 projects were initiated. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, using the appearance frequency of topics to indicate the technology trends of AI-related OSS projects. Natural language processing remained the top topic in all years, implying that such OSS has been developed continuously. Until 2015, the programming languages Python, C++, and Java appeared among the ten most frequent topics; after 2016, programming languages other than Python disappeared from the top ten and were replaced by platforms that support the development of AI algorithms, such as TensorFlow and Keras. Reinforcement learning algorithms and convolutional neural networks, which are used in various fields, also appeared frequently. Topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency; the main difference was that visualization and medical imaging appeared at the top of the list, although they had not been at the top from 2009 to 2012, indicating that OSS was being developed in the medical field to utilize AI technology. Moreover, although computer vision was in the top ten of the appearance frequency list from 2013 to 2015, it was not in the top ten by degree centrality. The topics at the top of the degree centrality list were otherwise similar to those at the top of the appearance frequency list, with the ranks of convolutional neural networks and reinforcement learning changing slightly. Examining appearance frequency and degree centrality together, machine learning showed the highest frequency and the highest degree centrality in all years. Notably, although the deep learning topic showed low frequency and low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015, and in recent years both machine learning and deep learning have shown high appearance frequency and degree centrality. TensorFlow first appeared in the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it at the top of the lists after deep learning and Python. Computer vision and reinforcement learning did not show abrupt increases or decreases and had relatively low appearance frequency and degree centrality compared with the topics above. Based on these results, it is possible to identify the fields in which AI technologies are actively developed, and the results can serve as a baseline dataset for more empirical analysis of future technology trends.
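As a small illustration of the two measures used in the analysis above, topic appearance frequency and degree centrality in a topic co-occurrence network, the sketch below computes them with networkx on a handful of made-up project topic lists; the project data are placeholders, not the Github corpus analyzed in the paper.

```python
from collections import Counter
from itertools import combinations
import networkx as nx

# Hypothetical topic lists per project (the paper mines these from real Github repos).
projects = [
    ["machine-learning", "python", "tensorflow"],
    ["deep-learning", "tensorflow", "keras", "computer-vision"],
    ["machine-learning", "natural-language-processing", "python"],
    ["reinforcement-learning", "deep-learning", "python"],
]

# Appearance frequency of each topic across projects.
freq = Counter(topic for topics in projects for topic in topics)

# Co-occurrence network: topics are nodes, topics sharing a project get an edge.
G = nx.Graph()
for topics in projects:
    G.add_edges_from(combinations(sorted(set(topics)), 2))

centrality = nx.degree_centrality(G)
for topic, c in sorted(centrality.items(), key=lambda kv: -kv[1])[:5]:
    print(f"{topic}: frequency={freq[topic]}, degree_centrality={c:.2f}")
```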

Forecasting the Growth of Smartphone Market in Mongolia Using Bass Diffusion Model (Bass Diffusion 모델을 활용한 스마트폰 시장의 성장 규모 예측: 몽골 사례)

  • Anar Bataa;KwangSup Shin
    • The Journal of Bigdata / v.7 no.1 / pp.193-212 / 2022
  • The Bass diffusion model is one of the most successful models in marketing research and in management science in general; since its publication in 1969, it has guided marketing research on diffusion. This paper illustrates the use of the Bass diffusion model in the context of mobile cellular subscription diffusion. We fit the model to three large developed markets, South Korea, Japan, and China, and to the emerging markets of Vietnam, Thailand, Kazakhstan, and Mongolia, estimating its parameters with the nonlinear least squares method. The diffusion of mobile cellular subscriptions follows an S-curve in every case. After estimating the m, p, and q parameters, we use k-means cluster analysis to group the countries into three groups. The clustering suggests that diffusion rates and patterns are similar within groups, so that countries with emerging markets can follow in the footsteps of countries with developed markets. The purpose was to predict the timing and the magnitude of market maturity and to determine whether the data follow the typical innovation diffusion curve of the Bass model.
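A minimal sketch of the estimation step described above: the cumulative Bass model is fitted by nonlinear least squares with scipy's curve_fit on hypothetical yearly subscription counts (the paper's real data and starting values differ), and the implied peak-adoption time is derived from the fitted p and q.

```python
import numpy as np
from scipy.optimize import curve_fit

def bass_cumulative(t, m, p, q):
    """Cumulative adopters N(t) = m * F(t) under the Bass diffusion model."""
    e = np.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

# Hypothetical yearly cumulative subscriptions in millions (not the paper's data).
t = np.arange(1, 16)
observed = np.array([0.3, 0.8, 1.8, 3.5, 6.0, 9.2, 12.8, 16.0, 18.5,
                     20.2, 21.3, 22.0, 22.4, 22.6, 22.7])

# Nonlinear least squares; the initial guesses are assumptions.
(m, p, q), _ = curve_fit(bass_cumulative, t, observed, p0=[25.0, 0.01, 0.4], maxfev=10000)
print(f"m={m:.1f}M, p={p:.4f}, q={q:.4f}")

# Peak of the non-cumulative adoption curve implied by the fitted parameters.
t_peak = np.log(q / p) / (p + q)
print(f"predicted adoption peak around t={t_peak:.1f}")
```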