• Title/Summary/Keyword: artificial intelligence tool

Corporate Bankruptcy Prediction Model using Explainable AI-based Feature Selection (설명가능 AI 기반의 변수선정을 이용한 기업부실예측모형)

  • Gundoo Moon;Kyoung-jae Kim
    • Journal of Intelligence and Information Systems / v.29 no.2 / pp.241-265 / 2023
  • A corporate insolvency prediction model serves as a vital tool for objectively monitoring the financial condition of companies. It enables timely warnings, facilitates responsive actions, and supports the formulation of effective management strategies to mitigate bankruptcy risks and enhance performance. Investors and financial institutions utilize default prediction models to minimize financial losses. As interest in utilizing artificial intelligence (AI) technology for corporate insolvency prediction grows, extensive research has been conducted in this domain. However, there is an increasing demand for explainable AI models in corporate insolvency prediction, emphasizing interpretability and reliability. The SHAP (SHapley Additive exPlanations) technique has gained significant popularity and has demonstrated strong performance in various applications. Nonetheless, it has limitations such as computational cost, processing time, and scalability concerns that grow with the number of variables. This study introduces a novel approach to variable selection that reduces the number of variables by averaging SHAP values computed from bootstrapped data subsets instead of from the entire dataset. This technique aims to improve computational efficiency while maintaining excellent predictive performance. To obtain classification results, we train random forest, XGBoost, and C5.0 models using the carefully selected, highly interpretable variables. The classification accuracy of an ensemble model built through soft voting, designed for high predictive performance, is compared with that of the individual models. The study leverages data from 1,698 Korean light industrial companies and employs bootstrapping to create distinct data groups. Logistic regression is employed to calculate SHAP values for each data group, and their averages are computed to derive the final SHAP values. The proposed model enhances interpretability and aims to achieve superior predictive performance.
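
As a rough illustration of the bootstrapped SHAP averaging described above, the following Python sketch selects variables by averaging mean absolute SHAP values of a logistic regression model over bootstrap subsets and then trains a soft-voting ensemble. It assumes NumPy arrays and the scikit-learn, xgboost, and shap packages; the C5.0 learner used in the paper has no standard Python implementation and is omitted, and all names, sample sizes, and hyperparameters are illustrative rather than the authors' actual settings.

```python
# Minimal sketch (not the authors' code): bootstrapped SHAP averaging for variable
# selection, followed by a soft-voting ensemble. X, y are assumed NumPy arrays.
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from xgboost import XGBClassifier

def bootstrap_shap_importance(X, y, n_boot=10, sample_frac=0.5, seed=0):
    """Average mean |SHAP| values of a logistic model over bootstrap subsets."""
    rng = np.random.default_rng(seed)
    importances = np.zeros(X.shape[1])
    for _ in range(n_boot):
        idx = rng.choice(len(X), size=int(len(X) * sample_frac), replace=True)
        Xb, yb = X[idx], y[idx]
        model = LogisticRegression(max_iter=1000).fit(Xb, yb)
        shap_values = shap.LinearExplainer(model, Xb).shap_values(Xb)
        importances += np.abs(shap_values).mean(axis=0)
    return importances / n_boot

def train_soft_voting(X, y, k=10):
    """Keep the top-k variables by averaged SHAP importance, then fit the ensemble.
    (C5.0 has no standard Python implementation and is left out of this sketch.)"""
    top = np.argsort(bootstrap_shap_importance(X, y))[::-1][:k]
    ensemble = VotingClassifier(
        estimators=[("rf", RandomForestClassifier()),
                    ("xgb", XGBClassifier(eval_metric="logloss"))],
        voting="soft")
    ensemble.fit(X[:, top], y)
    return ensemble, top
```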

Users' Attachment Styles and ChatGPT Interaction: Revealing Insights into User Experiences

  • I-Tsen Hsieh;Chang-Hoon Oh
    • Journal of the Korea Society of Computer and Information / v.29 no.3 / pp.21-41 / 2024
  • This study explores the relationship between users' attachment styles and their interactions with ChatGPT (Chat Generative Pre-trained Transformer), an advanced language model developed by OpenAI. As artificial intelligence (AI) becomes increasingly integrated into everyday life, it is essential to understand how individuals with different attachment styles engage with AI chatbots in order to build a better user experience that meets specific user needs and interacts with users in the most suitable way. Grounded in attachment theory from psychology, we explore the influence of attachment style on users' interaction with ChatGPT, bridging a significant gap in the understanding of human-AI interaction. Contrary to expectations, attachment styles did not have a significant impact on ChatGPT usage or reasons for engagement. Regardless of their attachment styles, users hesitated to fully trust ChatGPT with critical information, emphasizing the need to address trust issues in AI systems. Additionally, this study uncovers complex patterns of attachment styles, demonstrating their influence on interaction patterns between users and ChatGPT. By focusing on the distinctive dynamics between users and ChatGPT, our aim is to uncover how attachment styles influence these interactions, guiding the development of AI chatbots for personalized user experiences. The introduction of the Perceived Partner Responsiveness Scale serves as a valuable tool to evaluate users' perceptions of ChatGPT's role, shedding light on the anthropomorphism of AI. This study contributes to the wider discussion on human-AI relationships, emphasizing the significance of incorporating emotional intelligence into AI systems for a user-centered future.

An Empirical Study for Performance Evaluation of Web Personalization Assistant Systems (웹 기반 개인화 보조시스템 성능 평가를 위한 실험적 연구)

  • Kim, Ki-Bum;Kim, Seon-Ho;Weon, Sung-Hyun
    • The Journal of Society for e-Business Studies / v.9 no.3 / pp.155-167 / 2004
  • At this time, the two main techniques for building web personalization assistant systems are direct manipulation and software agents. While both direct manipulation and software agents are intended to permit users to complete tasks rapidly, efficiently, and easily, their methodologies are different. The central debate involving these web personalization techniques originates from the amount of control that each allows to, or holds back from, the users. Direct manipulation can provide users with comprehensible, predictable, and controllable user interfaces that give them a feeling of accomplishment and responsibility. On the other hand, the intelligent software components, the agents, can assist users with artificial intelligence by monitoring or retrieving personal histories or behaviors. In this empirical study, two web personalization assistant systems are evaluated. One of them, WebPersonalizer, is an agent-based user personalization tool; the other, AntWorld, is a collaborative recommendation tool that provides direct manipulation interfaces. Through this empirical study, we have focused on two different paradigms for web personalization assistant systems: direct manipulation and software agents. Each approach has its own advantages and disadvantages. We also provide experimental results that are worth referencing for developers of electronic commerce systems and suggest methodologies for conveniently retrieving necessary information based on users' personal needs.

The Possibilities in Craft Creation through Convergence (융합에 의한 공예 창작의 가능성)

  • Park, Jungwon;Xie, Wenqian;Ro, Hae-Sin;Kim, Won-Seok
    • Journal of the Korea Convergence Society / v.9 no.1 / pp.51-58 / 2018
  • The late 20th century saw the industrial period end and transform into the digital era, in which people have begun to pay attention to craft because it is a field that respects emotion as an essential value, offering an alternative for overcoming the side effects that people have created. Today a new world, where the virtual and the real co-exist through artificial intelligence (AI), has suddenly approached us, and the future of craft faces a new situation: it needs to present a new creative solution as a tool necessary for the human way of life, a tool that has been a necessity throughout history and the evolution of life. As a result, to continue developing, craft attempts to establish a new paradigm through a trend represented in modern society: the emergence of creative development through convergence. This study presents creative experiments attempted through the convergence of craft with other heterogeneous tendencies connected to the field. The objective of the study is to enable makers to acquire a more creative way of thinking while inspiring them and suggesting new creative possibilities in order to develop their work through creative convergence. Chapter 2 investigates the current status of craft in general and compares it with what is taking place in Korea; Chapter 3 addresses the significance of convergence in craft and the process of creation through case studies. Lastly, Chapter 4, on the basis of the case study analysis, examines the attributes and potential of convergence in the field of craft. By analyzing the different phenomena presented through attempts at convergence in contemporary craft, it has been possible to view the future of 21st-century craft through an assessment of what is active and what remains hidden potential.

Color Analysis of Disney Animation Villain Characters (디즈니 애니메이션 악당 캐릭터의 색채분석)

  • Sung, Rea;Kim, Hyesung
    • Journal of Information Technology Applications and Management / v.28 no.6 / pp.69-85 / 2021
  • In the era of the 4th Industrial Revolution, not only artificial intelligence, big data, robots, and biotechnology but also cultural industries that require human creativity will take the lead. Among the cultural industries, the animation industry has high industrial utilization value due to its strong connections with other industries. Animation characters play the most important role as the subjects leading an animation's story. In particular, the villain character not only serves as a medium for the main character to drive the story, but also captivates the audience with a presence distinct from the main character, adding to the fun and completeness of the animation. These characters consist of visual elements such as form and color, of which color is a tool that effectively conveys the character's personality and role to the audience, and it is the first visual element to be considered in delicately describing the character's emotions and the relationships between characters. Therefore, this study analyzes the colors of villain characters. To this end, we select eight Disney animations and derive the characteristics of villain character color by analyzing the hue, value, chroma, and color associations of the colors used in the Disney villain characters. As a result of the analysis, the hues mainly used by Disney to convey the villain's image were red (R) and orange (YR), with no difference depending on the era or animation production method. Second, the value (brightness) of Disney villain characters was predominantly in the medium range regardless of the era or production method, and high-value colors were used very infrequently. In terms of chroma, both high and low chroma were used frequently. Third, blackish (Bk), strong (S), dull (Dl), and deep (Dp) tones were mainly used. In particular, recent 3D animations used more low-chroma colors and higher black mixing rates than earlier 2D animations. Fourth, Disney uses color association as a visual means to express the psychology of the villain character more clearly. In conclusion, the color selection of animation characters should be carefully considered as a tool to convey the character's personality, role, and emotion, beyond simply applying color, and selecting character colors using color associations and symbols strengthens the narrative structure. It is hoped that this study will help in analyzing and selecting character colors for animation.
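
As a toy illustration of this kind of hue and tone binning, the sketch below maps an RGB sample from a character design to a coarse hue family and a rough tone label via HSV. This is only an assumed heuristic for demonstration; the paper's actual analysis relies on a Munsell/PCCS-style color system, and the thresholds and bin names here are illustrative.

```python
# Toy sketch (assumed HSV heuristic, not the paper's Munsell/PCCS-based method):
# bin an RGB color sampled from a character design into a coarse hue family and tone.
import colorsys

HUE_BINS = [(15, "R"), (45, "YR"), (75, "Y"), (165, "G"), (255, "B"), (315, "P"), (360, "R")]

def classify_hue(r, g, b):
    """Map an RGB color (0-255 per channel) to a coarse hue family such as R or YR."""
    h, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    deg = h * 360
    for upper, name in HUE_BINS:
        if deg <= upper:
            return name
    return "R"

def classify_tone(r, g, b):
    """Very rough tone label from saturation and value (a stand-in for PCCS tones)."""
    _, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    if v < 0.25:
        return "blackish (Bk)"
    if s > 0.7 and v > 0.6:
        return "strong (S)"
    if v < 0.5:
        return "deep (Dp)"
    return "dull (Dl)"

# Example: a dark red sampled from a villain costume.
print(classify_hue(120, 20, 20), classify_tone(120, 20, 20))
```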

A Study on the Cognitive Judgment of Pedestrian Risk Factors Using a Second-hand Mobile Phones (중고스마트폰 업사이클링을 통한 보행위험요인 인지판단 연구)

  • Chang, IlJoon;Jeong, Jongmo;Lee, Jaeduk;Ahn, Se-young
    • The Journal of the Korea Contents Association / v.22 no.1 / pp.274-282 / 2022
  • In order to secure pedestrians' right to walk, we up-cycled second-hand mobile phones to overcome the limitations of existing survey, analysis, and diagnosis methods for reducing pedestrian traffic accidents. Second-hand mobile phones were up-cycled into mobile CCTVs and installed in areas with high pedestrian death rates to secure image data sets covering periods of more than 24 hours. The data were analyzed by applying image visualization and cloud reporting technology, and more precise and accurate results were derived through modeling based on artificial intelligence learning and GIS-based diagnostic maps. As a result, it was possible to analyze pedestrian safety risk factors and their frequency, and even factors that could not be identified with existing methods were derived. In addition, a traffic accident risk index was derived by converting the data to a one-year basis, to verify whether mobile CCTVs up-cycled from second-hand mobile phones can serve as an objective tool for finding pedestrian risk factors. The up-cycled second-hand mobile phone CCTV newly applied in this research can be used as a new tool to find pedestrian risk factors, and it can also serve as a service that protects the safety of vulnerable road users other than pedestrians.
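
The abstract mentions deriving a traffic accident risk index by converting the observations to a one-year basis. A minimal sketch of such an annualization, under the assumption of simple linear scaling (the paper's actual index definition is not given in the abstract), might look like this:

```python
# Hypothetical sketch: annualize risk events counted over a short observation window
# by simple linear scaling (the paper's actual index definition is not given here).
def annualized_risk_index(events_observed: int, observation_hours: float) -> float:
    """Scale events observed during `observation_hours` to a one-year estimate."""
    hours_per_year = 24 * 365
    return events_observed * (hours_per_year / observation_hours)

# Example: 12 risk events observed during a 24-hour recording -> 4380 per year.
print(annualized_risk_index(12, 24.0))
```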

Automatic Interpretation of F-18-FDG Brain PET Using Artificial Neural Network: Discrimination of Medial and Lateral Temporal Lobe Epilepsy (인공신경회로망을 이용한 뇌 F-18-FDG PET 자동 해석: 내.외측 측두엽간질의 감별)

  • Lee, Jae-Sung;Lee, Dong-Soo;Kim, Seok-Ki;Park, Kwang-Suk;Lee, Sang-Kun;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine / v.38 no.3 / pp.233-240 / 2004
  • Purpose: We developed a computer-aided classifier using an artificial neural network (ANN) to discriminate the cerebral metabolic patterns of medial and lateral temporal lobe epilepsy (TLE). Materials and Methods: We studied brain F-18-FDG PET images of 113 epilepsy patients surgically and pathologically proven as medial TLE (left 41, right 42) or lateral TLE (left 14, right 16). PET images were spatially transformed onto a standard template and normalized to the mean counts of cortical regions. Asymmetry indices for 17 predefined regions mirrored across the hemispheric midline, and those for the medial and lateral temporal lobes, were used as input features for the ANN. The ANN classifier was composed of 3 independent multi-layered perceptrons (1 for left/right lateralization and 2 for medial/lateral discrimination) and trained to interpret metabolic patterns and produce one of 4 diagnoses (L/R medial TLE or L/R lateral TLE). Eight randomly selected images from each group were used to train the ANN classifier and the remaining 51 images were used as test sets. The accuracy of diagnosis with the ANN was estimated by averaging the agreement rates of 50 independent trials and compared to that of nuclear medicine experts. Results: The accuracy in lateralization was 89% for the human experts and 90% for the ANN classifier. Overall accuracy in localization of epileptogenic zones by the ANN classifier was 69%, which was comparable to that of the human experts (72%). Conclusion: We conclude that the ANN classifier performed as well as human experts and could be a potentially useful supporting tool for the differential diagnosis of TLE.
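
A rough sketch of the three-perceptron decision scheme described here, using scikit-learn MLPs on synthetic data, is shown below. The asymmetry-index formula is a common convention rather than necessarily the paper's exact definition, and the network sizes, labels, and feature counts are assumptions for illustration.

```python
# Rough sketch (synthetic data, assumed layout): three independent multi-layer
# perceptrons, one for left/right lateralization and two for medial/lateral
# discrimination, combined into one of four TLE diagnoses.
import numpy as np
from sklearn.neural_network import MLPClassifier

def asymmetry_index(left, right):
    """Regional asymmetry index in percent: (L - R) / mean(L, R) * 100
    (a common convention; not necessarily the paper's exact definition).
    Helper showing how input features could be derived from regional counts."""
    left, right = np.asarray(left, float), np.asarray(right, float)
    return (left - right) / ((left + right) / 2.0) * 100.0

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 19))                 # 17 regional + 2 temporal indices (synthetic)
side = rng.choice(["L", "R"], 40)             # lateralization targets (synthetic)
zone = rng.choice(["medial", "lateral"], 40)  # localization targets (synthetic)

lateralizer = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(X, side)
localizers = {
    "L": MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(X[side == "L"], zone[side == "L"]),
    "R": MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(X[side == "R"], zone[side == "R"]),
}

def predict_diagnosis(features):
    """Produce one of four diagnoses, e.g. 'L medial TLE'."""
    s = lateralizer.predict([features])[0]
    return f"{s} {localizers[s].predict([features])[0]} TLE"

print(predict_diagnosis(X[0]))
```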

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.27-65 / 2020
  • Many information and communication technology companies have made their in-house AI technologies public, for example Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software to the public, the relationship with the developer community and the artificial intelligence (AI) ecosystem can be strengthened, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although various analyses of open source software have been conducted, there is a lack of studies that help develop or use deep learning open source software in industry. This study therefore attempts to derive a strategy for adopting such a framework through case studies of deep learning open source frameworks. Based on the technology-organization-environment (TOE) framework and a literature review on the adoption of open source software, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two cases of success and one case of failure) and revealed that seven out of the eight TOE factors, as well as several factors regarding company, team, and resources, are significant for the adoption of a deep learning open source framework. By organizing the case study results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. In order for an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, it is necessary to support research developers' use of deep learning frameworks by collecting and managing data inside and outside the company through a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures in the usage stage, companies will increase the number of deep learning research developers, their ability to use the deep learning framework, and the available GPU resources. In the proliferation stage of the deep learning framework, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of the developers, for example by automatically optimizing the hardware (GPU) environment. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by organizing developer retraining and seminars.
To implement the five identified success factors, a step-by-step enterprise procedure for adopting the deep learning framework was proposed: defining the project problem, confirming whether the deep learning methodology is the right method, confirming whether the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework across the enterprise. The first three steps (defining the project problem, confirming whether the deep learning methodology is the right method, and confirming whether the deep learning framework is the right tool) are pre-considerations for adopting a deep learning open source framework. Once these three pre-consideration steps are clear, the next two steps (using the deep learning framework in the enterprise and spreading the framework across the enterprise) can proceed. In the fourth step, the knowledge and expertise of the developers in the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, the five important factors are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

Characterizing Strategy of Emotional sympathetic Robots in Animation and Movie - Focused on Appearance and Behavior tendency Analysis - (애니메이션 및 영화에 등장하는 정서교감형 로봇의 캐릭터라이징 전략 - 외형과 행동 경향성 분석을 중심으로 -)

  • Ryu, Beom-Yeol;Yang, Se-Hyeok
    • Cartoon and Animation Studies / s.48 / pp.85-116 / 2017
  • The purpose of this study is to analyze the conditions under which robots depicted in cinematographic works such as animations and movies sympathize and form an attachment with the main character, and to organize characterizing strategies for emotionally sympathetic robots. Along with the development of technology, artificial intelligence and robots are no longer considered to belong to science fiction but are regarded as realistic issues. Therefore, this author assumes that the expressive characteristics of emotionally sympathetic robots created in cinematographic works should be used as meaningful factors in the expressive embodiment of the human-friendly service robots that will be distributed widely in the future, that is, in establishing the features of such characters. This research began in order to lay the grounds for that work. As subjects of analysis, this researcher chose robot characters whose emotional intimacy with the main character is clearly observed, among those found in movies and animations produced after 1920, when the contemporary concept of the robot was first declared. Also, to understand the robots' appearance and behavioral tendencies, this study (1) classified the robots' external impressions into five types (human-like, cartoon, tool-like, artificial being, pet or creature) and (2) classified behavioral tendencies, considered to be the outer embodiment of personality, using DiSC, a tool for diagnosing behavioral patterns. It was observed that robots with high emotional intimacy are all strongly independent about their duties and show great emotional acceptance. The 'Influence' and 'Steadiness' types show great emotional acceptance, the 'Influence' type tends to be highly independent, and the 'Conscientiousness' type tends to show less emotional acceptance and independence in general. Yet, according to the analysis of external impressions, appearance factors have hardly any significant relationship with emotional sympathy. This implies that, for robots to achieve strong emotional sympathy, sympathy grounded in communication exerts a more crucial effect than first impressions, much as in the process of forming interpersonal relationships in reality. Lastly, studying robot characters absolutely requires consilient competence that broadly embraces different areas. This author also felt that design factors or personality factors alone make it hard to evaluate robot characters or to fully analyze the vast amount of information demanded in sympathizing with humans. Nevertheless, this researcher concludes this thesis as a foundation for such work, expecting that the general artistic value of animation can later serve as a valuable resource in developing robots, which must be studied interdisciplinarily.

A Study on Project Information Integrated Management Measures Using Life Cycle Information in Road Construction Projects (도로건설사업의 생애주기별 정보를 이용한 건설사업정보 통합관리방안 연구)

  • Kim, Seong-Jin;Kim, Bum-Soo;Kim, Tae-Hak;Kim, Nam-Gon
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.11 / pp.208-216 / 2019
  • Construction projects generate a massive amount of diverse information. A road construction project takes from at least five years to more than ten years to complete, so it is important to manage the information on the project's history, including processes and costs. Furthermore, it is necessary to determine whether construction projects have been carried out according to the planned goals, and to turn the construction information management system (CALS) into a virtuous information cycle. It is easy to ensure integrated information management in private construction projects, because the constructor can manage the whole process from planning to completion, whereas it is difficult in public construction projects because various agencies are involved. CALS manages the project information of public road construction, but that information is managed separately by its subsystems, resulting in disconnected information among the subsystems and making it impossible to monitor integrated information. Thus, this study proposes integrated information management measures to ensure comprehensive management of the information generated over the construction life cycle. To that end, CALS is improved by standardizing and integrating the system database, integrating the individually managed user information, and connecting the system with the Dbrain tool, which collectively builds artificial intelligence, to ensure information management based on the project budget.