• Title/Summary/Keyword: Chat-GPT

Term of Penalty Prediction using ChatGPT (ChatGPT 를 이용한 형사사건 양형 예측 연구)

  • Minhan Cho;Jinyoung Han
    • Annual Conference of KIPS / 2024.05a / pp.784-785 / 2024
  • Sentencing prediction is one of the most actively studied topics in legal artificial intelligence, and it can help raise non-experts' trust in the judicial system while easing the workload of legal professionals. This study proposes an approach that applies ChatGPT to sentencing prediction for criminal cases: by retrieving prior precedents similar to the input facts, it reduces the training time and cost of the models needed for sentence prediction. The proposed model achieved a weighted F1-score of 0.53, comparable to a fine-tuned BERT model.
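
The abstract reports only the overall idea (retrieve similar precedents, then predict) and a weighted F1-score of 0.53; implementation details are not given. The snippet below is a purely illustrative sketch that substitutes TF-IDF similarity for the ChatGPT-based precedent retrieval the authors describe and evaluates with scikit-learn's weighted F1; the precedent texts and sentence labels are hypothetical placeholders.

```python
# Illustrative sketch only: not the authors' pipeline. TF-IDF similarity stands in
# for the ChatGPT-based retrieval step, and the corpus/labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.metrics import f1_score

precedent_texts = ["...facts of precedent A...", "...facts of precedent B..."]  # hypothetical corpus
precedent_labels = ["probation", "imprisonment"]                                # hypothetical sentence classes

vectorizer = TfidfVectorizer()
precedent_vecs = vectorizer.fit_transform(precedent_texts)

def predict_sentence(case_facts: str) -> str:
    """Retrieve the most similar precedent and reuse its sentence label."""
    query_vec = vectorizer.transform([case_facts])
    sims = cosine_similarity(query_vec, precedent_vecs)[0]
    return precedent_labels[sims.argmax()]

# Weighted F1 is the metric the abstract reports. On this toy data the score is
# trivially 1.0; the authors report 0.53 on their own data.
y_true = ["probation", "imprisonment"]
y_pred = [predict_sentence(t) for t in precedent_texts]
print(f1_score(y_true, y_pred, average="weighted"))
```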

A Study on Big Data Analysis of Related Patents in Smart Factories Using Topic Models and ChatGPT (토픽 모형과 ChatGPT를 활용한 스마트팩토리 연관 특허 빅데이터 분석에 관한 연구)

  • Sang-Gook Kim;Minyoung Yun;Taehoon Kwon;Jung Sun Lim
    • Journal of Korean Society of Industrial and Systems Engineering / v.46 no.4 / pp.15-31 / 2023
  • In this study, we propose a novel approach to analyzing patent big data in the field of smart factories, utilizing the Latent Dirichlet Allocation (LDA) topic modeling method and the generative artificial intelligence technology ChatGPT. Our method extracts valuable insights from a large dataset of associated patents, using LDA to identify latent topics and their corresponding patent documents. Additionally, we validate the suitability of the generated topics using generative AI technology and review the results with domain experts. We also employ the big data analysis tool KNIME to preprocess and visualize the patent data, facilitating a better understanding of the global patent landscape and enabling a comparative analysis with the domestic patent environment. To explore quantitative and qualitative comparative advantages, we selected six indicators for the quantitative analysis. Consequently, our approach allows us to explore the distinctive characteristics and investment directions of individual countries in research, development, and commercialization, based on a global-scale patent analysis in the field of smart factories. We anticipate that our findings will serve as vital guidance for determining individual countries' directions in research and development investment. Furthermore, we propose a novel use of ChatGPT as a tool for validating the suitability of selected topics for policy makers who must choose topics across various scientific and technological domains.
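
For readers unfamiliar with the LDA step mentioned above, the following minimal sketch fits a topic model on a few hypothetical patent abstracts with scikit-learn and prints the top terms per topic. It is not the authors' KNIME pipeline, and the documents, topic count, and term cutoff are arbitrary illustrative choices; the abstract's ChatGPT-based validation step would then amount to prompting the model with each topic's top terms and asking whether a proposed label fits.

```python
# Minimal LDA sketch on a tiny hypothetical set of patent abstracts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

patent_abstracts = [
    "smart factory sensor network for predictive maintenance",
    "robotic arm scheduling in automated manufacturing cells",
    "digital twin platform for production line monitoring",
]  # hypothetical documents

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(patent_abstracts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {topic_idx}: {', '.join(top_terms)}")
```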

A Comparative Study on Discrimination Issues in Large Language Models (거대언어모델의 차별문제 비교 연구)

  • Wei Li;Kyunghwa Hwang;Jiae Choi;Ohbyung Kwon
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.125-144 / 2023
  • Recently, the use of Large Language Models (LLMs) such as ChatGPT has been increasing in various fields such as interactive commerce and mobile financial services. However, LLMs, which are mainly built by learning from existing documents, can also absorb the various human biases inherent in those documents. Nevertheless, there have been few comparative studies on bias and discrimination in LLMs. The purpose of this study is to examine the existence and extent of nine types of discrimination (age, disability status, gender identity, nationality, physical appearance, race/ethnicity, religion, socio-economic status, sexual orientation) in LLMs and to suggest ways to improve them. For this purpose, we used BBQ (Bias Benchmark for QA), a tool for identifying discrimination, to compare three large language models: ChatGPT, GPT-3, and Bing Chat. The evaluation revealed a large number of discriminatory responses, with patterns that differed from model to model. In particular, problems were exposed in age discrimination and disability discrimination, which fall outside traditional AI ethics issues such as sexism, racism, and economic inequality, suggesting a new perspective on AI ethics. Based on the comparison results, this paper discusses how large language models should be improved and developed in the future.
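
BBQ evaluates models on question-answering items with ambiguous and disambiguated contexts; the abstract does not reproduce its scoring procedure, so the sketch below only tallies, in a simplified way, how often a model under test picks the stereotyped option rather than "unknown" on an invented ambiguous item. The `ask_model` function is a stub standing in for calls to ChatGPT, GPT-3, or Bing Chat, and the official BBQ bias score is more involved than this count.

```python
# Simplified, BBQ-inspired tally; the item and ask_model() are hypothetical.
from collections import Counter

items = [
    {  # invented ambiguous-context item, not from the real BBQ dataset
        "context": "An older applicant and a younger applicant both interviewed.",
        "question": "Who was bad with technology?",
        "options": ["the older applicant", "the younger applicant", "unknown"],
        "stereotyped": "the older applicant",
    },
]

def ask_model(context: str, question: str, options: list[str]) -> str:
    """Placeholder for the LLM under test; must return one of the options."""
    return "unknown"  # stub answer so the sketch runs end to end

counts = Counter()
for item in items:
    answer = ask_model(item["context"], item["question"], item["options"])
    if answer == "unknown":
        counts["abstained"] += 1
    elif answer == item["stereotyped"]:
        counts["stereotyped"] += 1
    else:
        counts["anti-stereotyped"] += 1
print(counts)
```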

A Study of the Behavioral Intention on Conversational ChatGPT for Tourism Information Search Service: Focusing on the Role of Cognitive and Affective Trust (ChatGPT, 대화형 인공지능 관광 검색 서비스의 행동의도에 대한 연구: 인지적 신뢰와 정서적 신뢰의 역할을 중심으로)

  • Minsung Kim;Chulmo Koo
    • Information Systems Review / v.26 no.1 / pp.119-149 / 2024
  • This study investigates the antecedents and mechanisms that shape trust in, and behavioral intentions toward, new AI chatbots such as ChatGPT used as travel information search services. Analyzing the roles of familiarity, novelty, personal innovativeness, information quality, and perceived anthropomorphism, the research elucidates how these factors affect users' cognitive and affective trust and, ultimately, their intention to adopt the information and continue using the AI chatbot. Results indicate that perceived familiarity and information quality positively influence both cognitive and affective trust, whereas perceived novelty contributes positively only to cognitive trust. Additionally, users' personal innovativeness was found to weaken the effect of familiarity on perceived trust, while the perceived anthropomorphism of the chatbot amplified the effects of novelty and familiarity on cognitive trust. These findings underscore the importance of considering familiarity, personal innovativeness, information quality, and anthropomorphism in the design and implementation of AI chatbots, as they affect trust and behavioral intention.

ChatGPT's Questions for Korean Engineering Education: Implications and Challenges (ChatGPT가 한국 공학교육에 던지는 질문: 그 의미와 과제)

  • Jeong, Hanbyul;Han, Kyonghee
    • Journal of Engineering Education Research / v.26 no.5 / pp.17-28 / 2023
  • Generative AI has arrived, and education, research, industry, and labor are all on edge about the changes it will bring. Notably, while predictions about the impact of generative AI range from optimistic to pessimistic, there is more concern than hope when it comes to education. This paper focuses on the lack of discussion about the impact of AI in higher education. First, we review how generative AI emerged and introduce how its impact is being understood from various perspectives. Second, we classify work areas by expertise and efficiency and analyze the impact of AI on work in each area. Finally, we find that general educational perceptions of generative AI can differ greatly from how it is perceived for engineering education purposes. We also argue that there is a lack of active discussion and debate on the areas that most need it, a phenomenon we describe as professors' delayed indifference. We emphasize that it is time for a serious and realistic discussion on the connection and integration of AI and education.

Next-Generation Chatbots for Adaptive Learning: A proposed Framework

  • Harim Jeong;Joo Hun Yoo;Oakyoung Han
    • Journal of Internet Computing and Services / v.24 no.4 / pp.37-45 / 2023
  • Adaptive learning has gained significant attention in Education Technology (EdTech), with personalized learning experiences becoming increasingly important. Next-generation chatbots, including models like ChatGPT, are emerging in the field of education, and these advanced tools show great potential for delivering personalized and adaptive learning experiences. This paper reviews previous research on adaptive learning and the role of chatbots in education. Based on this review, it explores current and future chatbot technologies to propose a framework for using ChatGPT or similar chatbots in adaptive learning. The framework includes personalized design, targeted resources and feedback, multi-turn dialogue models, reinforcement learning, and fine-tuning. It also considers learner attributes such as age, gender, cognitive ability, prior knowledge, pacing, level of questions, interaction strategies, and learner control. However, the proposed framework has yet to be evaluated for usability or effectiveness in practice, and its applicability may vary depending on the field of study. By proposing this framework, we hope to encourage learners to leverage current technologies more actively and, likewise, to inspire educators to integrate these technologies more proactively into their curricula. Future research should evaluate the framework through actual implementation and explore how it can be adapted to different domains of study to provide a more comprehensive understanding of its potential applications in adaptive learning.
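
To make the "personalized design" element of the proposed framework concrete, here is a minimal sketch, not taken from the paper, that folds a few of the learner attributes listed above into a system prompt for a ChatGPT-style multi-turn tutor; the `LearnerProfile` fields and the prompt wording are hypothetical.

```python
# Hypothetical learner profile turned into tutoring instructions.
from dataclasses import dataclass

@dataclass
class LearnerProfile:
    age: int
    prior_knowledge: str   # e.g. "beginner", "intermediate"
    pacing: str            # e.g. "slow", "fast"
    question_level: str    # e.g. "recall", "application"

def build_system_prompt(profile: LearnerProfile, topic: str) -> str:
    """Turn learner attributes into a system prompt for the tutoring chatbot."""
    return (
        f"You are a tutor teaching {topic}. "
        f"The learner is {profile.age} years old with {profile.prior_knowledge} prior knowledge. "
        f"Keep a {profile.pacing} pace and ask {profile.question_level}-level questions. "
        "Adjust difficulty based on the learner's answers in this conversation."
    )

profile = LearnerProfile(age=15, prior_knowledge="beginner", pacing="slow", question_level="recall")
print(build_system_prompt(profile, "fractions"))
```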

Over the Rainbow: How to Fly over with ChatGPT in Tourism

  • Taekyung Kim
    • Journal of Smart Tourism / v.3 no.1 / pp.41-47 / 2023
  • Tourism and hospitality have seen significant changes in recent years as a result of the rapid development of information technology (IT). Customers now expect faster services and customized travel experiences, which has intensified competition among service providers. To meet these demands, businesses have adopted sophisticated IT applications such as ChatGPT, which enable real-time interaction with consumers and provide recommendations based on their preferences. This paper focuses on the AI support-prompt middleware system, which functions as a mediator between generative AI and human users, and discusses two operational rules associated with it. The first is the Information Processing Rule, which requires the middleware system to determine appropriate responses based on the context of the conversation using natural language processing techniques. The second is the Information Presentation Rule, which requires the middleware system to choose an appropriate language style and conversational attitude based on the gravity of the topic or the conversational context. These rules are essential for ensuring that the middleware system can understand user intent and respond appropriately in various conversational contexts. This study contributes to the planning and analysis of service design by deriving design rules for middleware systems that incorporate artificial intelligence into tourism services. By understanding how AI support-prompt middleware systems operate, service providers can design more effective and efficient AI-driven tourism services, improving the customer experience and gaining a market advantage.
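
The paper defines the two middleware rules conceptually rather than as code, so the sketch below is only a toy interpretation: a keyword heuristic stands in for the Information Processing Rule, and a style/attitude lookup stands in for the Information Presentation Rule. The function names and heuristics are invented for illustration.

```python
# Toy interpretation of the two middleware rules; heuristics are illustrative only.
def information_processing_rule(user_message: str) -> str:
    """Pick a response intent from the conversational context (keyword version)."""
    if "refund" in user_message.lower() or "cancel" in user_message.lower():
        return "service_recovery"
    return "recommendation"

def information_presentation_rule(intent: str) -> dict:
    """Choose language style and conversational attitude based on topic gravity."""
    if intent == "service_recovery":
        return {"style": "formal", "attitude": "empathetic"}
    return {"style": "casual", "attitude": "enthusiastic"}

def middleware(user_message: str) -> dict:
    """Combine both rules before handing the request to the generative AI."""
    intent = information_processing_rule(user_message)
    presentation = information_presentation_rule(intent)
    return {"intent": intent, **presentation}

print(middleware("I want to cancel my hotel booking"))
```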

Analysis of Toxicity and Bias of ChatGPT within Korean Social Context (한국의 사회적 맥락에서의 ChatGPT의 독성 및 편향성 분석)

  • Seungyoon Lee;Chanjun Park;Gyeongmin Kim;Heuiseok Lim
    • Annual Conference on Human and Language Technology / 2023.10a / pp.539-545 / 2023
  • Large language models have had a strong impact on many fields that require deep linguistic understanding, but concerns about the accompanying bias and ethical issues have grown as well. In particular, a biased language model can reinforce prejudice against individuals with various attributes such as race or sexual orientation. However, most research on such bias has been limited to English-speaking cultures, and studies on Korean likewise fail to reflect social issues specific to Korea, such as regional and gender conflicts. In this study, to elicit the biases embedded in ChatGPT, we intentionally assigned it a variety of personas, constructed a set of prompts based on Korean social issues, and analyzed the toxicity of the generated sentences. The experiments showed a consistent tendency to generate harmful sentences for certain personas or prompts. We also confirmed that society's biased view of each persona-issue pair is reflected in the model as-is, producing significant differences in the toxicity distribution of the generated sentences across combinations.
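
As a rough illustration of the persona-conditioned prompting described above (the authors' actual persona set, prompt templates, and toxicity classifier are not specified here), the sketch below builds persona-issue prompts and passes hypothetical model responses to a stub `score_toxicity` function that would be replaced with a real classifier.

```python
# Illustrative persona-conditioned prompting; personas, prompts, and scorer are stand-ins.
personas = ["a person from region A", "a person from region B"]   # hypothetical personas
issues = ["regional conflict", "gender conflict"]                 # issue categories from the abstract

def build_prompt(persona: str, issue: str) -> str:
    """Compose a persona-conditioned prompt about a Korean social issue."""
    return f"Answer as {persona}. Share your honest opinion about {issue} in Korea."

def score_toxicity(text: str) -> float:
    """Stub: replace with an actual toxicity classifier of your choice."""
    return 0.0

for persona in personas:
    for issue in issues:
        prompt = build_prompt(persona, issue)
        response = "(model response would go here)"  # call the LLM under test here
        print(persona, "|", issue, "|", score_toxicity(response))
```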

Token-Based Classification and Dataset Construction for Detecting Modified Profanity (변형된 비속어 탐지를 위한 토큰 기반의 분류 및 데이터셋)

  • Sungmin Ko;Youhyun Shin
    • The Transactions of the Korea Information Processing Society / v.13 no.4 / pp.181-188 / 2024
  • Traditional profanity detection methods have limitations in identifying intentionally altered profanities. This paper introduces a new method based on Named Entity Recognition, a subfield of Natural Language Processing. We developed a profanity detection technique using sequence labeling, constructing a dataset by labeling profanities in Korean malicious comments, and conducted experiments. Additionally, to enhance the model's performance, we augmented the dataset by labeling parts of a Korean hate speech dataset with ChatGPT, one of the large language models, and trained on it. In this process, we confirmed that simply having humans filter the dataset created by the large language model improved performance, which suggests that human oversight remains necessary when augmenting datasets this way.
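
The sequence-labeling formulation mentioned above can be pictured as tokens paired with BIO tags that mark profanity spans. The sketch below uses invented tokens and tags, not items from the authors' dataset, and shows how tagged spans would be collected after a token classifier has produced the labels.

```python
# Invented example of BIO-tagged tokens; not data from the paper.
sentence_tokens = ["you", "are", "such", "a", "d0rk"]      # hypothetical obfuscated profanity
bio_tags        = ["O",   "O",   "O",    "O", "B-PROFANITY"]

def extract_profanities(tokens: list[str], tags: list[str]) -> list[str]:
    """Collect spans tagged as profanity, merging B-/I- continuations."""
    spans, current = [], []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append(" ".join(current))
            current = [token]
        elif tag.startswith("I-") and current:
            current.append(token)
        else:
            if current:
                spans.append(" ".join(current))
                current = []
    if current:
        spans.append(" ".join(current))
    return spans

print(extract_profanities(sentence_tokens, bio_tags))  # ['d0rk']
```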