• Title/Summary/Keyword: AI Understanding

Is ChatGPT an Ally or an Enemy? Its Impact on Society Based on a Systematic Literature Review

  • Juliana Basulo-Ribeiro; Leonor Teixeira
    • Journal of Information Science Theory and Practice / v.12 no.2 / pp.79-95 / 2024
  • The new AI-based conversational chatbot ChatGPT, launched in November 2022, is causing a stir. Opinions abound on whether it is a 'threat or a promise,' so it is important to understand what has been said about this tool and, based on the growing literature on the subject, demystify its actual impact on society. To analyse this impact, a systematic literature review was conducted with the support of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol. The data, consisting of scientific documents, were collected from the main scientific databases, SCOPUS and Web of Science, and the results were presented through a bibliometric and thematic exploration of the content. The main findings indicate that people are increasingly using this chatbot in ever more diverse areas. This study therefore contributes at the practical level by enlightening people in general, in both professional and personal life, about this tool and its impacts. It also contributes at the theoretical level by expanding the understanding and elucidation of the impacts of ChatGPT in different areas of study.

Exploring the Effectiveness of Smart Education in a College Writing Course Utilizing Multimedia Learning Tools

  • Si-Yeon Pyo
    • Journal of Practical Engineering Education / v.16 no.2 / pp.143-150 / 2024
  • With the development of AI, multimedia tools in education offer personalized learning environments, which foster individual competencies. This study aims to examine the effectiveness of smart education as perceived by learners through a case study of university writing classes utilizing multimedia learning tools, and to explore potential applications. To achieve this, a writing course incorporating various multimedia tools to promote interaction was designed and implemented over the course of one semester, targeting 42 university students. Throughout the semester, student reactions and survey results were analyzed to investigate the effects and satisfaction levels regarding the use of multimedia learning tools in writing instruction as perceived by students. The analysis revealed that multimedia-assisted writing classes effectively fostered learners' autonomy by focusing on individual needs, while also promoting interaction and encouraging spontaneous participation. Students reported recognizing the presence of diverse perspectives by comparing and discussing each other's writing, leading to an expansion of their own thinking. When using ChatGPT, students were found to refine their questions until they obtained the desired answers, and they reported that this process deepened their understanding of the essence of the questions. These benefits resulted in high levels of active class engagement and satisfaction among students. This study contributes foundational and empirical data regarding the effectiveness and potential applications of learner-centered smart education as part of fourth industrial revolution integration research.

Stock Price Prediction and Portfolio Selection Using Artificial Intelligence

  • Sandeep Patalay; Madhusudhan Rao Bandlamudi
    • Asia Pacific Journal of Information Systems / v.30 no.1 / pp.31-52 / 2020
  • Stock markets are popular investment avenues for people who expect premium returns compared to other financial instruments, but they are highly volatile and risky due to complex financial dynamics and a poor understanding of the market forces involved in price determination. A system that can forecast stock prices and automatically create a portfolio of top-performing stocks is of great value to individual investors who lack the knowledge to understand the complex dynamics involved in evaluating and predicting stock prices. In this paper the authors propose a stock prediction, portfolio generation, and selection model based on machine learning algorithms: artificial neural networks (ANNs) are used for stock price prediction, mathematical and statistical techniques are used for portfolio generation, and unsupervised machine learning based on the K-means clustering algorithm is used for portfolio evaluation and selection, taking both portfolio return and risk into account. The model presented here is limited to predicting stock prices on a long-term basis, as its inputs are based on fundamental attributes and the intrinsic value of the stock. The results of this study are quite encouraging, as the stock prediction models are able to predict stock prices at least a financial quarter in advance with an accuracy of around 90 percent, and the portfolio selection classifiers give returns in excess of average market returns.
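
The portfolio evaluation and selection step described above groups candidate portfolios by return and risk. A minimal sketch of that idea, assuming scikit-learn's KMeans on synthetic return/volatility pairs; the feature set, cluster count, and scoring rule are illustrative, not the authors' exact configuration:

```python
# Minimal sketch: cluster candidate portfolios by expected return and risk,
# then pick the cluster with the best return-to-risk profile.
# Features, cluster count, and scoring are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Hypothetical candidate portfolios: [expected annual return, annualized volatility]
portfolios = np.column_stack([
    rng.normal(0.12, 0.05, 200),   # expected return
    rng.normal(0.20, 0.07, 200),   # risk (volatility)
])

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(portfolios)
centers = kmeans.cluster_centers_

# Rank clusters by a simple return/risk ratio (a stand-in for Sharpe-like scoring)
scores = centers[:, 0] / np.abs(centers[:, 1])
best_cluster = int(np.argmax(scores))
selected = portfolios[kmeans.labels_ == best_cluster]
print(f"Selected cluster {best_cluster} with {len(selected)} candidate portfolios")
```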

Application of Quantitative Assessment of Coronary Atherosclerosis by Coronary Computed Tomographic Angiography

  • Su Nam Lee; Andrew Lin; Damini Dey; Daniel S. Berman; Donghee Han
    • Korean Journal of Radiology / v.25 no.6 / pp.518-539 / 2024
  • Coronary computed tomography angiography (CCTA) has emerged as a pivotal tool for diagnosing and risk-stratifying patients with suspected coronary artery disease (CAD). Recent advancements in image analysis and artificial intelligence (AI) techniques have enabled the comprehensive quantitative analysis of coronary atherosclerosis. Fully quantitative assessments of coronary stenosis and lumen attenuation have improved the accuracy of assessing stenosis severity and predicting hemodynamically significant lesions. In addition to stenosis evaluation, quantitative plaque analysis plays a crucial role in predicting and monitoring CAD progression. Studies have demonstrated that the quantitative assessment of plaque subtypes based on CT attenuation provides a nuanced understanding of plaque characteristics and their association with cardiovascular events. Quantitative analysis of serial CCTA scans offers a unique perspective on the impact of medical therapies on plaque modification. However, challenges such as time-intensive analyses and variability in software platforms still need to be addressed for broader clinical implementation. The paradigm of CCTA has shifted towards comprehensive quantitative plaque analysis facilitated by technological advancements. As these methods continue to evolve, their integration into routine clinical practice has the potential to enhance risk assessment and guide individualized patient management. This article reviews the evolving landscape of quantitative plaque analysis in CCTA and explores its applications and limitations.
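
As one illustration of the attenuation-based plaque subtyping mentioned above, the sketch below bins voxel Hounsfield-unit values into component volumes. The HU cut-offs are placeholder values for illustration only; published thresholds differ across studies, scanners, and analysis software.

```python
# Illustrative sketch: quantify plaque subtype volumes from CT attenuation values.
# The HU cut-offs below are placeholders; actual thresholds vary by study,
# scanner, and analysis platform.
import numpy as np

def plaque_volumes(hu_values: np.ndarray, voxel_volume_mm3: float) -> dict:
    """Return plaque component volumes (mm^3) from per-voxel HU values."""
    return {
        "low_attenuation": np.count_nonzero(hu_values < 30) * voxel_volume_mm3,
        "fibro_fatty": np.count_nonzero((hu_values >= 30) & (hu_values < 131)) * voxel_volume_mm3,
        "fibrous": np.count_nonzero((hu_values >= 131) & (hu_values <= 350)) * voxel_volume_mm3,
        "calcified": np.count_nonzero(hu_values > 350) * voxel_volume_mm3,
    }

# Example with synthetic voxel data standing in for a segmented plaque region
voxels = np.random.default_rng(0).normal(150, 120, size=5000)
print(plaque_volumes(voxels, voxel_volume_mm3=0.1))
```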

Evolution and Historical Review of Music in Mass Media

  • Kang-iL Um; Jiyoung Jung
    • International Journal of Advanced Culture Technology / v.12 no.3 / pp.370-379 / 2024
  • In this paper, we explore the historical development and revolutionary impact of music in mass media across various forms, including radio, television, film, and digital platforms. The evolution of music in mass media reflects significant technological and cultural shifts over the past century. From the early days of radio to the advent of digital streaming, music has played a crucial role in shaping the various forms of mass media. Early radio broadcasts in the 1920s relied on live performances and recordings to captivate audiences, establishing music as a central element of media content. The rise of television in the 1950s brought new opportunities for music integration, with theme songs, variety shows, and music videos becoming staples of TV programming. The film industry further revolutionized the use of music, with iconic scores enhancing cinematic storytelling and emotional depth. The digital revolution of the late 20th century introduced new formats and services, expanding access to music and transforming consumption patterns. More recently, streaming platforms and social media have enabled personalized music experiences and direct artist-fan interactions. Through an analysis of technological advancements, this study highlights the integral role of music in enhancing narrative, evoking emotions, and creating cultural identities. We present our understanding of this evolution to provide insights into future trends and potential innovations in the integration of music with mass media, including the use of artificial intelligence and virtual reality to create immersive auditory experiences.

Serological Survey for the Major Viral Diseases in the Layers (국내 산란계의 주요 바이러스성 질병에 대한 혈청학적 모니터링 결과 및 분석)

  • Lee, Hae-Rim; Kim, Jong-Man; Kim, Jin-Hyung; Kim, Chang-Moon; So, Hyun-Hee; Lee, Dong-Woo; Ha, Bong-Do; Hong, Song-Chol; Mo, In-Pil
    • Korean Journal of Poultry Science / v.37 no.4 / pp.361-372 / 2010
  • Serological evaluation of poultry is important for various reasons, such as designing and assessing vaccination programs and diagnosing diseases, and for this reason serologic tests on layer flocks have been conducted on a regular basis. Moreover, nationwide serological surveys and analyses are essential to understand the epidemiological status of the national poultry industry. Accordingly, this study was conducted to evaluate the immune status of layer flocks using the sera submitted to the Avian Disease Laboratory, Chungbuk National University in 2009, and several important viral diseases were selected for evaluation, including low pathogenic avian influenza (LPAI), Newcastle disease (ND), infectious bronchitis (IB), and avian metapneumovirus (aMPV). For LPAI and ND, the age-related patterns of geometric mean titer (GMT) changes were similar, but there were differences in the flock positive rate and the level of GMT due to the different vaccination policies. In the case of IB, the GMT values showed that field infection was more prevalent than expected. For aMPV, the number of positive birds in a flock increased as the layers got older, which reflected the course of field infection because vaccination against aMPV was not allowed in 2009. This study clarified the immune status of layers for the main viral diseases, but the information is limited because the study covered only one year. Therefore, serological surveys need to be conducted on a yearly basis and extended to broilers and breeders for a better understanding of the health status of the national poultry industry.
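
For reference, the geometric mean titer (GMT) used throughout this survey is the exponential of the mean log titer. A minimal sketch, with an assumed positivity cut-off that is a placeholder rather than the laboratory's actual threshold:

```python
# Minimal sketch: geometric mean titer (GMT) and flock positive rate
# from serologic titer values. The positivity cut-off is a placeholder.
import math

def geometric_mean_titer(titers):
    """GMT = exp(mean(ln(titer))); zero titers are floored at 1 to keep the log defined."""
    logs = [math.log(max(t, 1)) for t in titers]
    return math.exp(sum(logs) / len(logs))

def positive_rate(titers, cutoff):
    """Fraction of birds at or above the (assumed) positivity cut-off."""
    return sum(t >= cutoff for t in titers) / len(titers)

flock = [128, 256, 64, 512, 32, 256, 128]
print(f"GMT: {geometric_mean_titer(flock):.1f}, positive rate: {positive_rate(flock, 64):.0%}")
```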

Korean Contextual Information Extraction System using BERT and Knowledge Graph (BERT와 지식 그래프를 이용한 한국어 문맥 정보 추출 시스템)

  • Yoo, SoYeop; Jeong, OkRan
    • Journal of Internet Computing and Services / v.21 no.3 / pp.123-131 / 2020
  • Along with the rapid development of artificial intelligence technology, natural language processing, which deals with human language, is also being actively studied. In particular, BERT, a language model recently proposed by Google, has performed well in many areas of natural language processing by providing models pre-trained on large corpora. Although BERT offers a multilingual model, a model pre-trained on a large Korean corpus should be used, because there are limitations when the original pre-trained BERT model is applied directly to Korean. In addition, text carries not only vocabulary and grammar but also contextual meanings, such as the relation between preceding and following passages and the situation being described. In the existing natural language processing field, research has mainly focused on lexical or grammatical meaning, yet accurate identification of the contextual information embedded in text plays an important role in understanding context. Knowledge graphs, which link words through their relationships, make it easier for computers to learn context. In this paper, we propose a system that extracts Korean contextual information using a BERT model pre-trained on a Korean corpus together with a knowledge graph. We build models that extract the person, relationship, emotion, space, and time information that is important in a text and validate the proposed system through experiments.
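
A hedged sketch of the general setup (not the authors' exact pipeline): a Korean pre-trained BERT checkpoint wrapped for token classification over the paper's five information types. The checkpoint name and label set are assumptions, and the classification head below is randomly initialized, so fine-tuning on labeled Korean data would still be required before the outputs are meaningful.

```python
# Hedged sketch: Korean BERT as a token-classification backbone for extracting
# person/relationship/emotion/space/time spans. Checkpoint and labels are
# illustrative; the classifier head is untrained here.
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

LABELS = ["O", "PERSON", "RELATION", "EMOTION", "SPACE", "TIME"]

tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")
model = AutoModelForTokenClassification.from_pretrained(
    "klue/bert-base", num_labels=len(LABELS)
)

text = "어제 서울에서 민수는 친구를 만나 기뻤다."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, seq_len, num_labels)
pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(list(zip(tokens, [LABELS[i] for i in pred_ids])))
```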

A Case Study on Credit Analysis System in P2P: 8Percent, Lendit, Honest Fund (P2P 플랫폼에서의 대출자 신용분석 사례연구: 8퍼센트, 렌딧, 어니스트 펀드)

  • Choi, Su Man; Jun, Dong Hwa; Oh, Kyong Joo
    • Knowledge Management Research / v.21 no.3 / pp.229-247 / 2020
  • Amid the remarkable growth of P2P financial platforms in the field of knowledge management, only companies with big data and machine learning technologies are surviving the fierce competition. The ability to analyze borrowers' credit is the most important capability, and platform companies recognize it as their most important business asset, so they are building credit evaluation systems based on artificial intelligence. Nonetheless, online P2P platform providers that offer related services act only as intermediaries between investors and borrowers, and all the risks associated with the investments are borne by investors. For investors, the only way to verify the safety of investment products is the reputation of P2P companies in newspapers and on online websites, and time series information such as the delinquency rate is not enough to evaluate the credit analysis capability of Korean P2P lenders in their early stage. This study examines the credit analysis procedures of P2P loan platforms that use artificial intelligence through a case analysis of the three well-known companies focusing on the credit lending market and the kinds of data they use. Through this, we aim to improve the understanding of credit analysis techniques based on artificial intelligence and to examine the limitations of such methods.
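
To make the idea of an AI-based borrower credit model concrete, the sketch below trains a classifier on synthetic borrower features. The features, model choice, and data are illustrative assumptions; the platforms' actual data sources and models are proprietary and are not described in this abstract.

```python
# Hedged sketch of a borrower credit classifier. Feature names, the model,
# and the data are synthetic/illustrative, not any platform's actual system.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
X = np.column_stack([
    rng.normal(4000, 1500, n),       # hypothetical monthly income (thousand KRW)
    rng.integers(0, 10, n),          # number of existing loans
    rng.normal(0.35, 0.15, n),       # debt-to-income ratio
    rng.integers(300, 1000, n),      # bureau credit score
])
y = (rng.random(n) < 0.15).astype(int)  # synthetic default flag

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)
default_prob = clf.predict_proba(X_test)[:, 1]
print("Mean predicted default probability:", round(float(default_prob.mean()), 3))
```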

Analysis of the Status of Natural Language Processing Technology Based on Deep Learning (딥러닝 중심의 자연어 처리 기술 현황 분석)

  • Park, Sang-Un
    • The Journal of Bigdata / v.6 no.1 / pp.63-81 / 2021
  • The performance of natural language processing is rapidly improving due to the recent development and application of machine learning and deep learning technologies, and as a result its field of application is expanding. In particular, as the demand for analysis of unstructured text data increases, interest in NLP (natural language processing) is also increasing. However, due to the complexity and difficulty of natural language preprocessing and of machine learning and deep learning theory, the barriers to using natural language processing remain high. In this paper, for an overall understanding of NLP, we examine the main fields of NLP currently being actively researched and the current state of major technologies centered on machine learning and deep learning, in order to provide a foundation for understanding and utilizing NLP more easily. To this end, we investigate how NLP has changed within AI (artificial intelligence) through changes in the taxonomy of AI technology. The main areas of NLP, which consist of language modeling, text classification, text generation, document summarization, question answering, and machine translation, are explained with state-of-the-art deep learning models. In addition, the major deep learning models used in NLP are explained, and the data sets and evaluation measures used for performance evaluation are summarized. We hope that researchers who want to utilize NLP for various purposes in their fields will be able to understand the overall technical status and the main technologies of NLP through this paper.
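
As a minimal illustration of one surveyed task (text classification) in the modern pre-train/fine-tune workflow, the sketch below uses the Hugging Face pipeline API with its default sentiment model; the checkpoint choice is incidental and not tied to this paper.

```python
# Minimal sketch: text classification with a pre-trained transformer via the
# Hugging Face pipeline API. The default checkpoint is an English sentiment
# model and serves only to illustrate the workflow.
from transformers import pipeline

classifier = pipeline("text-classification")
print(classifier("Deep learning has dramatically improved NLP performance."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```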

Prediction Model of Real Estate ROI with the LSTM Model based on AI and Bigdata

  • Lee, Jeong-hyun; Kim, Hoo-bin; Shim, Gyo-eon
    • International Journal of Advanced Smart Convergence / v.11 no.1 / pp.19-27 / 2022
  • Across the world, housing comprises a significant portion of wealth and assets. For this reason, fluctuations in real estate prices are a highly sensitive issue for individual households. In Korea, housing prices have steadily increased over the years, and thus many Koreans view the real estate market as an effective channel for their investments. However, if one purchases a real estate property for investment purposes, several risks arise when prices begin to fluctuate. The purpose of this study is to design a real estate 'return rate' prediction model to help mitigate the risks involved in real estate investments and promote reasonable real estate purchases. Various approaches are explored to develop a model capable of predicting real estate prices based on an understanding of the characteristics of the real estate (immovable property) market. This study employs the LSTM method, which is based on artificial intelligence and deep learning, to predict real estate prices and to validate the model. LSTM networks are based on recurrent neural networks (RNNs) but add cell states, which act as a type of conveyor belt, to the hidden states; the networks obtain cell states and hidden states in a recursive manner. Data on the actual trading prices of apartments in autonomous districts between January 2006 and December 2019 are collected from the Actual Trading Price Disclosure System of the Ministry of Land, Infrastructure and Transport (MOLIT). Additionally, basic data on apartments and commercial buildings are collected from the Public Data Portal and the Seoul Metropolitan Government's data portal. The collected actual trading price data are aggregated into monthly average trading amounts, and each data entry is pre-processed by address to produce 168 data entries. An LSTM model for return rate prediction is prepared based on a time series dataset in which the training period is set as April 2015 to August 2017 (29 months), the validation period as September 2017 to September 2018 (13 months), and the test period as December 2018 to December 2019 (13 months). The results of the return rate prediction study are as follows: the model achieved a prediction similarity level of almost 76%, which was confirmed after collecting the time series data and preparing the final prediction model. All in all, the results demonstrate the reliability of the LSTM-based model for return rate prediction.
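
A hedged sketch of the kind of LSTM return-rate model described above, using Keras on a synthetic monthly series. The window length, layer sizes, and data are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch: univariate LSTM forecaster for a monthly return-rate series.
# The synthetic series stands in for the 168 pre-processed entries; all
# hyperparameters are illustrative.
import numpy as np
from tensorflow import keras

def make_windows(series, window=12):
    """Turn a 1-D monthly series into (window -> next value) training samples."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X)[..., None], np.array(y)

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(0.003, 0.01, 168))  # synthetic monthly return rates

X, y = make_windows(series)
model = keras.Sequential([
    keras.layers.Input(shape=(12, 1)),
    keras.layers.LSTM(32),   # LSTM cell state acts as the "conveyor belt" described above
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, batch_size=16, verbose=0)
print("Predicted next-month value:", float(model.predict(X[-1:], verbose=0)[0, 0]))
```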