• Title/Summary/Keyword: Story Generation

Search Results: 160

KISTI-ML Platform: A Community-based Rapid AI Model Development Tool for Scientific Data (KISTI-ML 플랫폼: 과학기술 데이터를 위한 커뮤니티 기반 AI 모델 개발 도구)

  • Lee, Jeongcheol; Ahn, Sunil
    • Journal of Internet Computing and Services, v.20 no.6, pp.73-84, 2019
  • Machine learning as a service, so-called MLaaS, has recently attracted much attention across nearly all industries and research groups. The main reason is that, apart from the data itself, you do not need network servers, storage, or even data scientists to build a productive service model. However, machine learning is often very difficult for most developers, especially in traditional science fields, owing to the lack of well-structured big data for scientific work. Experimental and applied researchers rarely share their results with other researchers, so building big data in specific research areas is also a major challenge. In this paper, we introduce the KISTI-ML platform, a community-based rapid AI model development tool for scientific data. It provides a user-friendly online development environment in which machine learning beginners can automatically generate code from their own data. Users can share datasets and their Jupyter interactive notebooks among authorized community members, including know-how such as data preprocessing for feature extraction, hidden network design, and other engineering techniques.
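  The abstract does not show what the auto-generated code looks like; as a rough, hypothetical sketch of the kind of training cell such a notebook might contain for tabular scientific data (the file name and target column are placeholders, and scikit-learn is only an assumed backend):

      # Hypothetical auto-generated training cell (illustration only, not actual KISTI-ML output)
      import pandas as pd
      from sklearn.model_selection import train_test_split
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import accuracy_score

      df = pd.read_csv("shared_dataset.csv")          # dataset shared within the community (placeholder name)
      X, y = df.drop(columns=["label"]), df["label"]  # "label" is an assumed target column

      X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
      model = RandomForestClassifier(n_estimators=100, random_state=42)
      model.fit(X_train, y_train)
      print("accuracy:", accuracy_score(y_test, model.predict(X_test)))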

A User Privacy Protection Scheme based on Password through User Information Virtuality in Cloud Computing (클라우드 컴퓨팅에서 패스워드기반의 사용자 정보 가상화를 통한 사용자 프라이버시 보장 기법)

  • Jeong, Yoon-Su; Lee, Sang-Ho
    • Journal of Convergence Society for SMB, v.1 no.1, pp.29-37, 2011
  • As informatization has expanded with the development of information and communication technology, cloud computing, which delivers IT infrastructure resources such as servers, storage, and networks as an efficient service anytime and anywhere, has grown rapidly. However, users of cloud computing may face problems such as exposure of personal data, surveillance of individuals, and commercial exploitation of their personal data. This paper proposes a security technique that protects user privacy by creating virtual user information that cannot be exploited by others. The proposed technique virtualizes the user's information into an anonymous value by combining it with a PIN code, so that others cannot learn the user's identity, thereby guaranteeing anonymity. It can also manage and authenticate the personal information that is critical in cloud computing, addressing the security problem of cloud environments in which all information is centralized. This can help raise the level of informatization of resource-poor SMBs through the safe use of cloud computing.
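  The paper's exact construction is not given in the abstract; as a loose illustration of the general idea (deriving an anonymous value by combining user information with a PIN), a salted key-derivation sketch might look as follows, where the field values and KDF parameters are assumptions:

      # Illustration only: derive an anonymous (virtualized) identifier from user info + PIN,
      # so the raw identity is never stored or exposed to other parties.
      import hashlib, os

      def virtualize_user(user_info, pin, salt=None):
          salt = salt or os.urandom(16)                       # per-user random salt (assumed)
          digest = hashlib.pbkdf2_hmac("sha256", (user_info + pin).encode(), salt, 100_000)
          return digest.hex(), salt                           # anonymous value + salt to keep

      anon_id, salt = virtualize_user("alice@example.com", "1234")
      print(anon_id)  # stored in place of the real identity; re-derivable only with the same PIN and salt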

Emotion and Sentiment Analysis from a Film Script: A Case Study (영화 대본에서 감정 및 정서 분석: 사례 연구)

  • Yu, Hye-Yeon; Kim, Moon-Hyun; Bae, Byung-Chull
    • Journal of Digital Contents Society, v.18 no.8, pp.1537-1542, 2017
  • Emotion plays a key role in both generating and understanding narrative. In this article we analyze the emotions represented in a movie script based on the eight emotion types of Plutchik's wheel of emotions. First we conducted manual emotion tagging scene by scene. The most dominant emotions in the manual tagging were anger, fear, and surprise, which is plausible given that the script we analyzed is a thriller. We assumed that the emotions around the climax of the story would be heightened as the tension grew, and from the manual tagging we identified three such intervals of high tension. Next we analyzed the emotions in the same script using the Python-based NLTK VADER sentiment tool. The results showed that anger and fear matched the manual tags best, whereas surprise, anticipation, and disgust showed lower agreement.
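  As a minimal sketch of the kind of scene-level scoring described above (the scene text below is invented, and mapping VADER's valence scores onto Plutchik's discrete emotion categories is a separate step not shown here):

      # Minimal sketch: score a scene's text with NLTK's VADER sentiment analyzer.
      import nltk
      from nltk.sentiment.vader import SentimentIntensityAnalyzer

      nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
      analyzer = SentimentIntensityAnalyzer()

      scene = "He slams the door. She backs away, terrified, as the lights go out."  # made-up scene
      print(analyzer.polarity_scores(scene))      # {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}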

Augmented Reality Framework for Archeological Site Tours (유적지 투어 지원을 위한 증강 현실기반 프레임워크)

  • Kim, Eunseok; Woo, Woontack
    • Journal of the HCI Society of Korea, v.10 no.2, pp.35-43, 2015
  • As augmented reality (AR) technology has been used to enhance the user experience, several AR applications have been developed in the cultural heritage domain. Although there has been significant progress in this area, naive augmentation becomes an obstacle to providing an enhanced user experience, and there has been little research on an effective AR experience methodology that reflects the characteristics of AR technology. Furthermore, the ad hoc content development that results from the absence of an authoring framework restrains the content ecosystem, which is an essential prerequisite for sustainable service. To resolve these issues, we propose Spacetelling, a space-driven AR experience methodology that extends established object-oriented augmentation approaches, and Storyscape, which generates spatio-temporally related content for Spacetelling and supports sustainable service. We also present an overall system framework covering both of these features for archeological site tours. Finally, we present our work-in-progress K-Culture Time Machine project to investigate the practical feasibility of our proposals. Through these proposals, we expect that sustainable AR applications with improved user experience will become possible in the cultural heritage domain.

A Study on the Method of Creating Realistic Content in Audience-participating Performances using Artificial Intelligence Sentiment Analysis Technology (인공지능 감정분석 기술을 이용한 관객 참여형 공연에서의 실감형 콘텐츠 생성 방식에 관한 연구)

  • Kim, Jihee; Oh, Jinhee; Kim, Myeungjin; Lim, Yangkyu
    • Journal of Broadcast Engineering, v.26 no.5, pp.533-542, 2021
  • In this study, we propose a process for re-creating Jindo Buk Chum, one of Korea's traditional arts, as digital art using various artificial intelligence technologies. The audience's emotional data, quantified through AI language-analysis technology, enters the projection-mapping performance in the form of various objects and influences the overall story without changing it. Whereas most interactive art expresses communication between the performer and the video, this performance becomes a new type of responsive performance in which the audience communicates directly with the work, centered on AI emotion-analysis technology. It begins with 'Chuimsae', a practice found only in Korean traditional art, in which the audience directly or indirectly intervenes in and influences the performance. The emotional information contained in the performer's 'prologue' is combined with the audience's emotional information and converted into the images and particles used in the performance, allowing the audience to participate indirectly and change the performance.
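  The abstract does not describe the actual conversion, so the following is only an assumed illustration of how a quantified emotion score might drive particle parameters for a projection-mapped visual (the score range, parameter names, and mapping are all hypothetical):

      # Hypothetical mapping from an audience emotion score in [-1, 1] to particle parameters.
      def emotion_to_particles(score):
          score = max(-1.0, min(1.0, score))        # clamp to the assumed range
          intensity = (score + 1.0) / 2.0           # 0.0 (negative) .. 1.0 (positive)
          return {
              "count": int(200 + 800 * intensity),  # more particles as emotion becomes more positive
              "speed": 0.5 + 1.5 * intensity,       # faster motion at higher intensity
              "hue": 240 - 180 * intensity,         # blue (240) shifting toward warm tones (60)
          }

      print(emotion_to_particles(0.7))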

The related record about 'Daejanggeum' and its modern acceptance ('대장금(大長今)' 관련 기록의 현대적 수용 - 문화콘텐츠로의 생성과 전개 양상 분석 -)

  • Nam, Eunkyung
    • (The)Study of the Eastern Classic, no.43, pp.33-64, 2011
  • The historical drama Daejanggeum, broadcast on TV in 2003, is based on a short historical record about a female physician of the royal palace in the Annals of King Jungjong of the Joseon dynasty. The drama, which skillfully blended fiction with the historical record, drew enormous interest and was adapted into a novel, a musical, and an animation for children. The filming location became a theme park attracting travelers, and the name 'Daejanggeum' was used on products to create great additional value. Above all, the drama was exported overseas and became a representative Korean drama. It is therefore a representative case demonstrating the success of historical material applied as various modern cultural contents. Analysis of the reasons for the success of Daejanggeum points to the creation of a new modern female character, a fresh choice of subject well suited to a time when well-being was in demand, and a strong scenario whose story development differed from previous historical dramas. It also employed a 'one source multi use' strategy before broadcasting, preparing the production of various cultural contents in advance. This success of Daejanggeum is highly meaningful to tradition researchers from the perspective of the 'modern acceptance of tradition'.

Comparison and analysis of compression algorithms to improve transmission efficiency of manufacturing data (제조 현장 데이터 전송효율 향상을 위한 압축 알고리즘 비교 및 분석)

  • Lee, Min Jeong; Oh, Sung Bhin; Kim, Jin Ho
    • Journal of the Korea Institute of Information and Communication Engineering, v.26 no.1, pp.94-103, 2022
  • As large amounts of data generated by sensors and devices at manufacturing sites are transmitted to servers or clients, problems arise such as network processing delays and increased storage costs. To address this, and considering that real-time responsiveness and uninterrupted processes are essential on the factory floor, the QRC (Quotient Remainder Compression) and BL_beta compression algorithms, which enable real-time lossless compression, were applied to actual manufacturing sensor data for the first time. In the experiments, BL_beta achieved a higher compression ratio than QRC. When the same data were tested with a slightly adjusted QRC data size, the adjusted QRC achieved compression ratios 35.48% and 20.3% higher than the original QRC and BL_beta algorithms, respectively.
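  The abstract does not define QRC's encoding; as a generic illustration of the quotient-remainder idea the name suggests (splitting each non-negative integer sample into a quotient and a remainder with respect to a divisor M, in the spirit of Golomb-Rice coding), the sketch below is an assumption, not the paper's QRC or BL_beta procedure:

      # Generic quotient-remainder split (illustration only; not the paper's QRC/BL_beta).
      M = 16  # divisor chosen for the sketch; a real scheme would tune this to the sensor data

      def encode(values):
          """Split each non-negative integer into a (quotient, remainder) pair w.r.t. M."""
          return [(v // M, v % M) for v in values]

      def decode(pairs):
          return [q * M + r for q, r in pairs]

      samples = [3, 18, 255, 1024, 7]      # made-up sensor readings
      pairs = encode(samples)
      assert decode(pairs) == samples      # lossless round trip
      print(pairs)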

A Study on Realistic 360 Degree Panorama Webtoon-Metaverse Service

  • Lee, Byong-Kwon; Jung, Doo-Yong
    • Journal of the Korea Society of Computer and Information, v.27 no.10, pp.147-153, 2022
  • Most metaverse services are gamified solutions built by placing 3D objects on a 360-degree panoramic world. However, webtoon-based metaverse services are lacking in both implementation and research. In this study, we investigated a method for presenting flat 2D images in 3D space and a realistic 360-degree panoramic webtoon metaverse service. The research process consisted of basic storytelling and design work for webtoon production, panoramic image creation to convert the drawn images into 360-degree form, and content creation for viewing in all 360-degree directions. Finally, we implemented shading and material work with game-engine tools so that users can enjoy virtual reality-based webtoons. This study examined the process of creating a 360-degree panoramic webtoon from content that could previously only be viewed in 2D. Accordingly, it is expected that this work can serve as a reference for the production and implementation of webtoon-based metaverse services.

Study on the Digital File Management Behavior of Undergraduate Students according to the Life Cycle of Digital Object (디지털 객체 생애주기에 따른 대학생의 파일관리 행태 연구)

  • Jee, Yoon-Jae; Lee, Hye-Eun
    • Journal of the Korean BIBLIA Society for Library and Information Science, v.33 no.1, pp.321-343, 2022
  • This study presents directions for university services and policies on digital file management by identifying undergraduate students' digital file management behavior. The research is framed by the life cycle of digital objects. Data were collected from 154 undergraduate students through an online survey on their file creation, storing, naming, organizing, and backup practices, based on a digital file management workflow. In-depth interviews were also conducted with eight students, two from each of four majors: engineering, arts, social science, and humanities. The results showed that students mostly used personal computers as storage media and USB drives as backup media, and had their own file naming and organizing methods. Furthermore, students' satisfaction with digital file management was high when their university provided software and cloud storage. This study therefore suggests that universities provide services that reflect students' digital file management behavior.

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin; Han, Seungho; Cui, Yun; Lee, Hanku
    • Journal of Internet Computing and Services, v.14 no.6, pp.71-84, 2013
  • Log data, which record the wealth of information created when computer systems operate, are utilized in many processes, from system inspection and process optimization to customized optimization for users. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive log data of banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing clients' business, a separate log data processing system needs to be established. However, the flexible storage expansion required for massive amounts of unstructured log data and the considerable number of functions needed to categorize and analyze the stored unstructured log data are difficult to realize in existing computing environments. Thus, in this study, we use cloud computing technology to build a cloud-based log data processing system for unstructured log data that are difficult to handle with the existing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, including storage space and memory, under conditions such as storage extension or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating the block units of the aggregated log data, the proposed system offers automatic restore functions so that it can continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have rigid schemas that are inappropriate for processing unstructured log data, and such strict schemas make it difficult to expand across nodes when the stored data must be distributed as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when the amount of data grows rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the representative document-oriented database, MongoDB, which has a schema-free structure, is used in the proposed system. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when the amount of data increases rapidly, and it provides an auto-sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates log analysis results from the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted as graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation of log data insertion and query performance against a system that uses only MySQL demonstrates the proposed system's superiority, and an optimal chunk size is identified through MongoDB insert-performance tests over various chunk sizes.
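  As a rough sketch of the collector's routing idea described above (the connection string, database and collection names, and the 'realtime' flag are assumptions rather than details from the paper):

      # Illustrative sketch of a log collector that routes bank log records by type:
      # records flagged for real-time analysis go to the MySQL module, the rest to MongoDB.
      from datetime import datetime, timezone
      from pymongo import MongoClient

      mongo = MongoClient("mongodb://localhost:27017")    # placeholder connection string
      log_collection = mongo["banklogs"]["unstructured"]  # assumed database/collection names

      def insert_into_mysql(record):
          ...  # the paper's real-time MySQL path is not specified here; omitted

      def collect(record):
          record["collected_at"] = datetime.now(timezone.utc)
          if record.get("realtime"):
              insert_into_mysql(record)            # real-time analysis path (MySQL module)
          else:
              log_collection.insert_one(record)    # aggregated, unstructured logs (MongoDB module)

      collect({"type": "transaction", "branch": "A01", "message": "transfer completed"})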