• Title/Summary/Keyword: Generated AI


Development of a Usage Guide Service for Korea's Leading Kiosks Using Metaverse and AI Recommendation Services (메타버스와 AI 추천서비스를 활용한 국내 대표 키오스크 사용서비스 안내 개발)

  • SuHyeon Choi;MinJung Lee;JinSeo Park;Yeon Ho Seo;Jaehyun Moon
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.886-887 / 2023
  • This paper describes the development of a kiosk that provides four types of service. A simple UI and educational videos address the complexity of existing kiosks and give users an intuitive, convenient screen. In addition, a three-dimensional AR function shows directions and representative store images. After user information is stored in a DB, a learning model is built with user-based KNN collaborative filtering to provide menu recommendations. As a result, kiosks that use metaverse and AI recommendation services can improve user convenience, and they are also expected to reduce the digital alienation of groups who have difficulty using kiosks.
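
As an illustration of the user-based KNN collaborative filtering step this abstract mentions, here is a minimal sketch; the rating matrix, neighbor count, and data below are hypothetical placeholders, not the paper's implementation:

```python
# Hypothetical sketch of user-based KNN collaborative filtering for menu recommendation.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Rows = users, columns = menu items; values = order counts from the DB (placeholder data).
ratings = np.array([
    [5, 0, 3, 0],
    [4, 0, 0, 2],
    [0, 3, 4, 0],
    [0, 2, 5, 1],
], dtype=float)

k = 2
knn = NearestNeighbors(n_neighbors=k + 1, metric="cosine").fit(ratings)

def recommend(user_idx, top_n=2):
    # Find the k most similar users (the query user itself is skipped).
    _, indices = knn.kneighbors(ratings[user_idx : user_idx + 1])
    neighbors = [i for i in indices[0] if i != user_idx][:k]
    # Score items by the neighbors' average ratings and suggest unseen items.
    scores = ratings[neighbors].mean(axis=0)
    unseen = np.where(ratings[user_idx] == 0)[0]
    return sorted(unseen, key=lambda i: scores[i], reverse=True)[:top_n]

print(recommend(0))  # menu item indices recommended for user 0
```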

A Study on the Feasibility of Using AI Image Generation Tools for Fashion Design Development - Focused on the Use of Midjourney (패션디자인 개발을 위한 AI 이미지 생성 도구의 활용 가능성 연구 -미드저니(Midjourney)의 활용을 중심으로)

  • Park, Keunsoo
    • The Journal of the Convergence on Culture Technology / v.9 no.6 / pp.237-244 / 2023
  • Today, AI is being applied to various industrial fields, leading to a paradigm shift across industry. In the fashion industry, AI is used to predict trends and provide various services for consumers, and AI image generation tools in particular have potential as tools for fashion design development. This study investigated the possibilities and limitations of using Midjourney for fashion design development by creating images with Midjourney and identifying their characteristics. The characteristics of images created in Midjourney are as follows. First, it is intuitive: images corresponding to commands are applied or combined directly. Second, it is random: different images are generated when the same command is entered at different times. Third, when an existing image and a command are used together, the generated image depends more on the existing image than on the command. In conclusion, Midjourney's various image creation functions and its ability to change images according to commands can help in developing original fashion designs. However, it is important to note that fashion designs that cannot be worn or manufactured are sometimes presented. The results of this study are expected to serve as basic data for the use of AI image generation tools in fashion design development.

The Design and Experiment of AI Device Communication System Equipped with 5G (5G를 탑재한 AI 디바이스 통신 시스템의 설계 및 실험)

  • Han Seongil;Lee Daesik;Han Jihwan;Moon Hhyunjin;Lim Changmin;Lee Sangku
    • Journal of Korea Society of Digital Industry and Information Management / v.19 no.2 / pp.69-78 / 2023
  • In this paper, dedicated IO+5G hardware is developed, and an AI device communication system equipped with 5G is designed and tested. The system receives real-time images and information collected from IoT sensors, analyzes the information, and generates risk-detection events on the AI processing board. An event generated on the AI processing board creates a 5G channel on the dedicated IO+5G hardware, and that channel delivers the event video to the control video server. The 5G-based dongle network enables faster data collection and more precise data measurement than wireless LAN and 5G routers. In the experiments reported here, the 5G dongle network is on average about 51% faster than Wi-Fi in downlink and about 40% faster in uplink. In addition, compared with a 5G router set to 80% upload and 20% download, the 5G dongle network is on average about 11.27% faster in downlink and about 17.93% faster in uplink; with the router set to 60% upload and 40% download, it is about 11.19% faster in downlink and about 13.61% faster in uplink. Therefore, this paper shows that the developed 5G dongle network can collect and analyze data faster than wireless LAN and 5G routers.
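
The comparisons above are relative percentage differences between measured throughputs; a small sketch of how such a figure is computed, with purely illustrative numbers rather than the paper's measurements:

```python
# Illustrative only: percent speedup of one link over another from measured throughputs.
def pct_faster(new_mbps: float, old_mbps: float) -> float:
    return (new_mbps - old_mbps) / old_mbps * 100.0

# Hypothetical measurements in Mbps (not the paper's raw data).
print(round(pct_faster(151.0, 100.0), 1))  # -> 51.0, i.e. about 51% faster
```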

An Exploratory Study of Success Factors for Generative AI Services: Utilizing Text Mining and ChatGPT (생성형AI 서비스의 성공요인에 대한 탐색적 연구: 텍스트 마이닝과 ChatGPT를 활용하여)

  • Ji Hoon Yang;Sung-Byung Yang;Sang-Hyeak Yoon
    • Information Systems Review / v.25 no.2 / pp.125-144 / 2023
  • Generative artificial intelligence (AI) technology is gaining global attention because it can automatically generate sentences, images, and voices that previously had to be produced by humans. In particular, ChatGPT, a representative generative AI service, shows proactivity and accuracy that differentiate it from existing chatbot services, and its number of users has grown rapidly in a short period. Despite this growing interest in generative AI services, most preceding studies are still in their infancy. Therefore, this study used LDA topic modeling and keyword network diagrams to derive success factors for generative AI services and to propose business strategies based on them. In addition, it presents a new research methodology that uses ChatGPT to complement existing text-mining methods. This study overcomes the limitations of previous research that relied on qualitative methods and makes academic and practical contributions to the future development of generative AI services.
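
A minimal sketch of the LDA topic-modeling step mentioned in this abstract, using scikit-learn on a few placeholder review texts; the corpus, topic count, and settings are assumptions, not the study's data:

```python
# Hypothetical LDA topic-modeling sketch for deriving themes from user reviews.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "chatgpt answers quickly and accurately",
    "the generative ai service writes long documents for me",
    "sometimes the answers are wrong and need fact checking",
]  # placeholder corpus

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for t, comp in enumerate(lda.components_):
    top_terms = [terms[i] for i in comp.argsort()[::-1][:5]]
    print(f"topic {t}: {top_terms}")
```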

Distributed Edge Computing for DNA-Based Intelligent Services and Applications: A Review (딥러닝을 사용하는 IoT빅데이터 인프라에 필요한 DNA 기술을 위한 분산 엣지 컴퓨팅기술 리뷰)

  • Alemayehu, Temesgen Seyoum;Cho, We-Duke
    • KIPS Transactions on Computer and Communication Systems / v.9 no.12 / pp.291-306 / 2020
  • Nowadays, Data-Network-AI (DNA)-based intelligent services and applications have become a reality, providing a new dimension of services that improve quality of life and business productivity. Artificial intelligence (AI) can enhance the value of IoT data (data collected by IoT devices), and the Internet of Things (IoT) promotes the learning and intelligence capability of AI. To extract insights from massive volumes of IoT data in real time using deep learning, processing needs to happen at the IoT end devices where the data is generated. However, deep learning requires significant computational resources that may not be available at the IoT end devices. Such problems have been addressed by transporting bulk data from the IoT end devices to cloud data centers for processing, but transferring IoT big data to the cloud incurs prohibitively high transmission delay and raises major privacy concerns. Edge computing, where distributed computing nodes are placed close to the IoT end devices, is a viable solution to meet the high-computation and low-latency requirements and to preserve user privacy. This paper provides a comprehensive review of the current state of leveraging deep learning within edge computing to unleash the potential of IoT big data generated from IoT end devices, and we believe the review will contribute to the development of DNA-based intelligent services and applications. It describes the different distributed training and inference architectures of deep learning models across multiple nodes of the edge computing platform, presents privacy-preserving approaches for deep learning in the edge computing environment and the application domains where deep learning on the network edge can be useful, and finally discusses open issues and challenges in leveraging deep learning within edge computing.
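
As an illustration of the split-inference idea covered by such reviews (early layers computed on the IoT end device, remaining layers on a nearby edge node), a minimal PyTorch sketch; the model and split point are arbitrary assumptions, not an architecture from the paper:

```python
# Hypothetical split-inference sketch: early layers run on the IoT device,
# the remaining layers run on a nearby edge node.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
device_part = model[:2]   # layers computed on the IoT end device
edge_part = model[2:]     # layers computed on the edge node

x = torch.randn(1, 3, 64, 64)      # camera/sensor input captured on the device
intermediate = device_part(x)      # compact feature tensor sent over the network
logits = edge_part(intermediate)   # inference completed on the edge node
print(logits.shape)                # -> torch.Size([1, 10])
```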

A Survey on Deep Learning-based Analysis for Education Data (빅데이터와 AI를 활용한 교육용 자료의 분석에 대한 조사)

  • Lho, Young-uhg
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.240-243 / 2021
  • Recently, research has applied big data and AI technologies to evaluation and individualized learning in education. Information technology innovations now collect dynamic and complex data, including student personal records, physiological data, learning logs and activities, and learning outcomes, from social media, MOOCs, intelligent tutoring systems, LMSs, sensors, and mobile devices. In addition, e-learning generated a large amount of learning data in the COVID-19 environment. Learning analytics and AI technology are expected to be applied to extract meaningful patterns and discover knowledge from these data. From the learner's perspective, it is necessary to identify students' learning and emotional behavior patterns and profiles, improve evaluation methods, predict individual students' learning outcomes or dropout, and research adaptive systems for personalized support. This study aims to contribute to research in the field of education by surveying and classifying the machine learning technologies used in anomaly detection and recommendation systems for educational data.
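
A minimal sketch of one task named above, predicting student dropout from learning-log features with a simple classifier; the feature names and synthetic data are hypothetical:

```python
# Hypothetical dropout-prediction sketch from learning-log features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Columns: logins per week, video minutes watched, quiz average (synthetic features).
X = rng.random((200, 3))
y = (X[:, 0] + X[:, 2] < 0.8).astype(int)  # synthetic dropout label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```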


A Study on the Generation of Datasets for Applying AI to OLED Life Prediction

  • CHUNG, Myung-Ae;HAN, Dong Hun;AHN, Seongdeok;KANG, Min Soo
    • Korean Journal of Artificial Intelligence / v.10 no.2 / pp.7-11 / 2022
  • OLED displays cannot be used permanently because of burn-in or the dark spots generated by degradation. Therefore, the time during which a display can operate normally is very important, but it is nearly impossible to measure this time physically, so it must be predicted in another way. Computer simulations based on artificial intelligence can increase prediction accuracy while saving time through continuous learning. In this paper, a dataset tracing the development of dark spots, one of the factors related to OLED lifetime, from generation to diffusion was created by applying the finite element method. Dark spots were generated under nine conditions, with pinhole sizes of 0.1 to 2.0 ㎛, pinhole counts of 10 to 100, and a water content of 50%. The training data created in this way can serve as a reference for generating artificial intelligence-based datasets.
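
A minimal sketch of how such simulation conditions could be enumerated into a dataset specification; the ranges follow the abstract, but the 3×3 grid of sample points is only one assumed reading of the nine conditions:

```python
# Hypothetical enumeration of simulation conditions for dark-spot dataset generation.
from itertools import product

pinhole_sizes_um = [0.1, 1.0, 2.0]   # assumed sample points within the 0.1-2.0 um range
pinhole_counts = [10, 50, 100]       # assumed sample points within the 10-100 range
water_content = 0.5                  # 50% water content, as stated in the abstract

conditions = [
    {"size_um": s, "count": n, "water": water_content}
    for s, n in product(pinhole_sizes_um, pinhole_counts)
]
print(len(conditions), "conditions")  # -> 9 conditions
```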

Exploring Factors to Minimize Hallucination Phenomena in Generative AI - Focusing on Consumer Emotion and Experience Analysis - (생성형AI의 환각현상 최소화를 위한 요인 탐색 연구 - 소비자의 감성·경험 분석을 중심으로-)

  • Jinho Ahn;Wookwhan Jung
    • Journal of Service Research and Studies / v.14 no.1 / pp.77-90 / 2024
  • This research aims to investigate methods of leveraging generative artificial intelligence in service sectors where consumer sentiment and experience are paramount, focusing on minimizing hallucination phenomena during usage and developing strategic services tailored to consumer sentiment and experiences. To this end, the study examined both mechanical approaches and user-generated prompts, experimenting with factors such as business item definition, provision of persona characteristics, examples and context-specific imperative verbs, and the specification of output formats and tone concepts. The research explores how generative AI can contribute to enhancing the accuracy of personalized content and user satisfaction. Moreover, these approaches play a crucial role in addressing issues related to hallucination phenomena that may arise when applying generative AI in real services, contributing to consumer service innovation through generative AI. The findings demonstrate the significant role generative AI can play in richly interpreting consumer sentiment and experiences, broadening the potential for application across various industry sectors and suggesting new directions for consumer sentiment and experience strategies beyond technological advancements. However, as this research is based on the relatively novel field of generative AI technology, there are many areas where it falls short. Future studies need to explore the generalizability of research factors and the conditional effects in more diverse industrial settings. Additionally, with the rapid advancement of AI technology, continuous research into new forms of hallucination symptoms and the development of new strategies to address them will be necessary.
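
A minimal sketch of how the prompt factors examined above (business item definition, persona, reference examples, context-specific imperative verbs, output format, and tone) could be assembled into a single prompt string; the wording and field names are illustrative, not the study's instrument:

```python
# Illustrative prompt template combining the factors studied for reducing hallucination.
def build_prompt(business_item: str, persona: str, examples: list[str],
                 output_format: str, tone: str, task: str) -> str:
    example_block = "\n".join(f"- {e}" for e in examples)
    return (
        f"You are {persona}.\n"
        f"Business item: {business_item}\n"
        f"Reference examples (answer only from these; do not invent facts):\n{example_block}\n"
        f"Task: {task}\n"                      # context-specific imperative verb
        f"Respond in {output_format} with a {tone} tone."
    )

print(build_prompt(
    business_item="neighborhood bakery loyalty app",
    persona="a careful customer-experience analyst",
    examples=["Review: 'The app crashed at checkout.'"],
    output_format="a 3-row markdown table",
    tone="neutral, factual",
    task="Summarize the main customer emotions.",
))
```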

Characteristics of Generated Voltage by Temperature Change in Electrical, Electronic, and Industrial MIM Elements Using LB Ultra-Thin Films (LB 초박막을 이용한 전지전자 공업용 MIM소자의 온도변화에 의한 발생전압 특성)

  • 김병인;국상훈
    • The Proceedings of the Korean Institute of Illuminating and Electrical Installation Engineers / v.11 no.3 / pp.80-87 / 1997
  • Experiments on the temperature characteristics of the generated voltage in Al/Al₂O₃/PI(nL)/Au, a polyimide LB film sample, show a difference in work function between the upper and lower electrodes. When the polyimide LB film is deposited as a Z-type film or is imidized, no polarization of the film occurs. The C₁₅TCNQ LB film sample, Al/Al₂O₃/C₁₅TCNQ(10L)/Al, shows no work-function difference because its upper and lower electrodes are identical, and polarization is observed in the film. Experiments with MIM elements based on LB ultra-thin films show that a direct-current voltage of more than hundreds of mV is generated, which can be used as an industrial power resource in the fields of electricity, electronics, and information communication.


Generating and Validating Synthetic Training Data for Predicting Bankruptcy of Individual Businesses

  • Hong, Dong-Suk;Baik, Cheol
    • Journal of Information and Communication Convergence Engineering / v.19 no.4 / pp.228-233 / 2021
  • In this study, we analyze the credit information (loans, delinquency records, etc.) of individual business owners to generate voluminous training data for a bankruptcy prediction model through a partial synthetic training technique, and we evaluate the prediction performance of the newly generated data compared with the actual data. In the experimental results (a logistic regression task), using training data generated with conditional tabular generative adversarial networks (CTGAN) improves recall by a factor of 1.75 compared with using the actual data. The probability that the actual and generated data are sampled from an identical distribution is verified to be well above 80%. Providing artificial intelligence training data through data synthesis in the fields of credit rating and default-risk prediction for individual businesses, where research has been relatively inactive, is expected to promote further in-depth research on such methods.
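
A minimal sketch of the tabular data synthesis step described above, assuming the open-source ctgan package (`from ctgan import CTGAN`); the column names, placeholder table, and training settings are assumptions, not the study's credit dataset:

```python
# Hypothetical sketch of CTGAN-based synthesis of tabular training data
# for a bankruptcy-prediction task.
import numpy as np
import pandas as pd
from ctgan import CTGAN

rng = np.random.default_rng(0)
# Placeholder credit-information table (synthetic stand-in, not real data).
real = pd.DataFrame({
    "loan_amount": rng.integers(100, 10_000, size=300),
    "delinquent": rng.choice(["yes", "no"], size=300),
    "bankrupt": rng.integers(0, 2, size=300),
})

model = CTGAN(epochs=5, batch_size=100)  # small settings for illustration only
model.fit(real, discrete_columns=["delinquent", "bankrupt"])

synthetic = model.sample(500)  # generated rows used to augment the training set
print(synthetic.head())
```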