• Title/Summary/Keyword: 텍스트 박스 (text box)

Search Results: 19

Digital Signage with Motion Graphics (모션 그래픽스의 디지털 사이니지 적용)

  • Park, Daehyuk
    • Journal of Digital Convergence / v.18 no.2 / pp.377-383 / 2020
  • Digital signage is constantly being researched as a new digital video platform to replace the existing signage market. Traditionally, it has conveyed various information by combining still images with text. Nowadays, it is rapidly changing into a multi-platform digital medium thanks to high-specification systems, improved internet speeds, and advances in video and audio compression combined with HTML5 technology. Not only single wide-screen displays but also combinations and arrangements of screens using set-top boxes, OLED panels, media facades, and laser beam projectors are being transformed into various forms, enabling creative and diverse attempts by graphic designers. This study focuses on the application of motion graphics to digital signage, a rapidly evolving future platform, and aims to help digital video content creators, researchers, and motion graphic designers.

Technology Forecasting using Bayesian Discrete Model (베이지안 이산모형을 이용한 기술예측)

  • Jun, Sunghae
    • Journal of the Korean Institute of Intelligent Systems / v.27 no.2 / pp.179-186 / 2017
  • Technology forecasting predicts the future trend and state of a technology by analyzing the results of its development so far. In general, a patent contains novel information about the results of developed technology, because the exclusive right to the technology included in a patent is protected for a period of time by patent law. Therefore, many studies on technology forecasting using patent data analysis have been performed. The patent keyword data widely used in patent analysis consist of the occurrence frequencies of keywords. In most previous research, continuous data analyses such as regression or Box-Jenkins models were applied to the patent keyword data. However, analytical methods for discrete data should be applied to patent keyword analysis because the keyword data are discrete. To solve this problem, we propose a patent analysis methodology using a Bayesian Poisson discrete model. To verify the performance of our research, we carry out a case study analyzing the patent documents filed by Apple to date.
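
The abstract gives no implementation details, but the core idea of a Bayesian model for discrete keyword counts can be sketched with a conjugate Gamma-Poisson update. In the sketch below, the yearly keyword counts, the prior parameters, and the use of NumPy are assumptions made for illustration, not the paper's actual setup.

```python
import numpy as np

# Illustrative yearly occurrence counts of one patent keyword (assumed data).
counts = np.array([3, 5, 4, 7, 6, 9])

# Conjugate Gamma(alpha, beta) prior on the Poisson rate lambda.
alpha_prior, beta_prior = 1.0, 1.0

# Posterior of lambda given Poisson counts: Gamma(alpha + sum(x), beta + n).
alpha_post = alpha_prior + counts.sum()
beta_post = beta_prior + len(counts)

# Posterior mean of the rate and a simple predictive sample for the next period.
lambda_mean = alpha_post / beta_post
rng = np.random.default_rng(0)
next_count_samples = rng.poisson(rng.gamma(alpha_post, 1.0 / beta_post, size=10_000))

print(f"posterior mean rate: {lambda_mean:.2f}")
print(f"predictive 90% interval: {np.percentile(next_count_samples, [5, 95])}")
```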

An Implementation of Automatic Transmission System of Traffic Event Information (교통이벤트 정보의 자동 전송시스템 구현)

  • Jeong, Yeong-Rae;Jang, Jae-Hoon;Kang, Seog Geun
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.13 no.5 / pp.987-994 / 2018
  • In this paper, an automatic transmission system for traffic event information is presented. Here, a traffic event is defined as an obstruction of an emergency vehicle such as an ambulance or a fire truck. When a traffic event is detected in video recorded by a black box installed in a vehicle, the implemented system automatically transmits a proof image and the corresponding information to the control center through e-mail. For this purpose, we implement an algorithm for identifying the numbers and characters on the license plate, and an algorithm for determining whether a traffic event has occurred. To report the event, a function for automatic transmission of the text and image files through e-mail and the file transfer protocol (FTP) is also included. Therefore, if the definition of a traffic event is extended within the presented system, it will be possible to establish a convenient reporting system for violations of various traffic regulations. In addition, it will contribute to significantly reducing the number of traffic violations.
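
The paper does not publish its reporting code; the sketch below illustrates only the transmission step (an e-mail with an attached proof image, plus an FTP upload) using Python's standard smtplib and ftplib. The server addresses, credentials, and the report_event helper are placeholders invented for the example.

```python
import smtplib
import ssl
from email.message import EmailMessage
from ftplib import FTP

def report_event(image_path: str, plate_text: str, event_desc: str) -> None:
    """Send a proof image and event description to the control center by e-mail and FTP."""
    # --- e-mail transmission (SMTP server and addresses are placeholders) ---
    msg = EmailMessage()
    msg["Subject"] = f"Traffic event report: {plate_text}"
    msg["From"] = "blackbox@example.com"
    msg["To"] = "control-center@example.com"
    msg.set_content(event_desc)
    with open(image_path, "rb") as f:
        msg.add_attachment(f.read(), maintype="image", subtype="jpeg",
                           filename="proof.jpg")
    context = ssl.create_default_context()
    with smtplib.SMTP_SSL("smtp.example.com", 465, context=context) as smtp:
        smtp.login("blackbox@example.com", "password")
        smtp.send_message(msg)

    # --- FTP transmission of the same proof image (host/credentials are placeholders) ---
    with FTP("ftp.example.com") as ftp:
        ftp.login("reporter", "password")
        with open(image_path, "rb") as f:
            ftp.storbinary("STOR proof.jpg", f)

report_event("proof.jpg", "12GA3456", "Obstruction of an ambulance at 14:02")
```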

An Integrated File System for Guaranteeing the Quality of Service of Multimedia Stream (멀티미디어 스트림의 QoS를 보장하는 통합형 파일시스템)

  • 김태석;박경민;최정완;김두한;원유집;고건;박승민;김정기
    • Journal of KIISE: Computer Systems and Theory / v.31 no.9 / pp.527-535 / 2004
  • Handling mixed workloads in digital set-top boxes or streaming servers becomes an important issue as the integrated file system gains momentum as the choice for the next-generation file system. The next-generation file system is required to handle real-time audio/video playback while also being able to serve text requests such as web pages and image files. Legacy file systems provide only best-effort I/O service and thus cannot properly support the QoS of soft real-time I/O. In this paper, we present our experience in developing a file system that can guarantee the QoS of multimedia streams. We classify all application I/O requests into two categories: periodic I/O and sporadic I/O. The QoS requirement of a multimedia stream can be guaranteed by giving periodic requests a higher priority than sporadic requests. The prototype file system (Qosfs) was developed on the Linux operating system.
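
As a rough illustration of the scheduling idea, giving periodic (stream) requests strict priority over sporadic (text) requests while keeping FIFO order within each class, a user-level sketch might look like the following. This is an assumption-laden toy model, not Qosfs itself, which is implemented inside the Linux kernel.

```python
import heapq
import itertools
from dataclasses import dataclass, field

PERIODIC, SPORADIC = 0, 1  # lower value = higher scheduling priority

@dataclass(order=True)
class IORequest:
    priority: int                      # PERIODIC or SPORADIC
    seq: int                           # FIFO order within the same class
    block: int = field(compare=False)  # block number, not used for ordering

class TwoClassScheduler:
    """Dispatch periodic (stream) requests ahead of sporadic (text) requests."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()

    def submit(self, block: int, periodic: bool) -> None:
        cls = PERIODIC if periodic else SPORADIC
        heapq.heappush(self._heap, IORequest(cls, next(self._seq), block))

    def next_request(self):
        return heapq.heappop(self._heap) if self._heap else None

sched = TwoClassScheduler()
sched.submit(block=120, periodic=False)  # web-page read (sporadic)
sched.submit(block=7, periodic=True)     # video stream read (periodic)
print(sched.next_request())              # the periodic request is served first
```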

An Exploratory Study on Influencing Factors of Film Equity Crowdfunding Success: Based on Chinese Movie Crowdfunding (영화 크라우드펀딩 성공에 영향을 미치는 요인에 관한 탐색적 연구: 중국의 영화 플랫폼 크라우드펀딩을 중심으로)

  • Bao, Tantan;Kim, Hun;Chang, Byeng-Hee
    • The Journal of the Korea Contents Association / v.21 no.2 / pp.1-14 / 2021
  • Recently, crowdfunding platforms have received attention as content investment platforms for the public. This research explores the factors influencing the success of movie equity crowdfunding projects. We use 'number of texts', 'number of images', 'star influence power', 'IP-based movie project', 'movie production stage', 'box office prediction', 'investment capital ratio', 'amount of surplus available investment', 'profit calculation method', and 'minimum investment amount' as independent variables, and we examine how these factors affect the achievement rate of movie crowdfunding. According to the multiple regression analysis, 'movie production stage', 'investment capital ratio', 'amount of surplus available investment', and 'profit calculation method' have a significant effect on the crowdfunding achievement rate. The results of this research can serve as a reference when planning film crowdfunding projects.
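
A hedged sketch of the kind of multiple regression described, using statsmodels with a handful of made-up project records and only the four significant factors, might look like this; the variable names and values are illustrative, not the study's data.

```python
import pandas as pd
import statsmodels.api as sm

# Illustrative project-level records; the actual study uses Chinese platform data.
df = pd.DataFrame({
    "achievement_rate":   [1.2, 0.8, 2.5, 0.4, 1.7, 3.1],
    "production_stage":   [2, 1, 3, 1, 2, 3],            # e.g. 1=planning, 3=post-production
    "capital_ratio":      [0.10, 0.05, 0.25, 0.02, 0.15, 0.30],
    "surplus_investment": [50, 20, 80, 10, 60, 90],
    "profit_method":      [1, 0, 1, 0, 1, 1],            # e.g. 1=revenue share
})

# Ordinary least squares regression of the achievement rate on the four factors.
X = sm.add_constant(df[["production_stage", "capital_ratio",
                        "surplus_investment", "profit_method"]])
model = sm.OLS(df["achievement_rate"], X).fit()
print(model.summary())  # coefficients and p-values per factor
```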

Test Dataset for validating the meaning of Table Machine Reading Language Model (표 기계독해 언어 모형의 의미 검증을 위한 테스트 데이터셋)

  • YU, Jae-Min;Cho, Sanghyun;Kwon, Hyuk-Chul
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.164-167 / 2022
  • In table machine reading comprehension, the knowledge required by language models and the structural form of tables change depending on the domain, causing greater performance degradation than for text data. In this paper, we propose a pre-training data construction method and an adversarial learning method based on selecting meaningful tabular data, in order to build a pre-trained table language model that is robust to such domain changes. To filter out tables that merely decorate web documents and carry no structural information, heuristic rules were defined to identify header data and select valid tables from the extracted table data. An adversarial learning method between tabular data and infobox data, which contains knowledge about entities, was then applied. Compared with training on the existing unrefined data, training on the refined data increased performance by 3.45 F1 and 4.14 EM points on the KorQuAD table data, and by 19.38 F1 and 4.22 EM points on the Spec table QA data.
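
The exact heuristic rules are not given in this short paper; the sketch below shows one plausible way such a filter could look, keeping only tables with a regular shape and a header-like first row. The thresholds and the looks_like_data_table function are assumptions for illustration.

```python
def looks_like_data_table(rows: list[list[str]]) -> bool:
    """Heuristic filter (an assumption, not the paper's exact rules):
    keep a table only if it has a plausible header row and a regular shape."""
    if len(rows) < 2:
        return False                      # a single row is likely layout decoration
    width = len(rows[0])
    if width < 2 or any(len(r) != width for r in rows):
        return False                      # ragged tables are usually layout tables
    header = rows[0]
    # A header row is mostly short, non-numeric labels.
    non_numeric = sum(not c.replace(".", "", 1).isdigit() for c in header)
    short = sum(len(c) <= 30 for c in header)
    return non_numeric >= 0.8 * width and short == width

print(looks_like_data_table([["Model", "F1"], ["CRF", "81.2"], ["Bi-LSTM-CRF", "84.6"]]))  # True
print(looks_like_data_table([["This page is best viewed in a modern browser."]]))          # False
```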

Implementation of a Web-based Virtual Educational System for Java Language Using Java Web Player (자바 웹플레이어를 이용한 웹기반 자바언어 가상교육시스템의 구현)

  • Kim, Dongsik;Moon, Ilhyun;Choi, Kwansun;Jeon, Changwan;Lee, Sunheum
    • The Journal of Korean Association of Computer Education / v.11 no.1 / pp.57-64 / 2008
  • This paper presents a web-based virtual educational system for the Java language, which consists of a management system named Java Web Player (JWP) and creative multimedia contents for lectures on the Java language. The JWP is a Java application program, free from security problems thanks to the Java Web Start technology, that supports an integrated learning environment covering three important learning procedures: the Java concept learning process, the programming practice process, and the assessment process. On-line voice presentations and related texts, together with moving images, are synchronized to efficiently convey the creative contents to learners. Furthermore, a simple and useful compiler is included in the JWP to provide a user-friendly practice environment that allows coding, editing, executing, and debugging of Java source files on the Web. Finally, short multiple-choice quizzes are presented to learners while they study through the JWP, and the test results are displayed in a message box. To show the validity of the proposed virtual educational system, we analyzed and assessed the learners' academic performance on five quizzes over one semester.

The Box-office Success Factors of Films Utilizing Big Data - Focus on Laugh and Tear Factors of Films (빅데이터를 활용한 영화 흥행 분석 -천만 영화의 웃음과 눈물 요소를 중심으로)

  • Hwang, Young-mee;Park, Jin-tae;Moon, Il-young;Kim, Kwang-sun;Kwon, Oh-young
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.6 / pp.1087-1095 / 2016
  • This study aims to analyze box-office success factors utilizing big data. The film industry has been growing in scale, but discussions on the analysis and prediction of box-office hits have lacked reliability because they fail to include all relevant data. To date, 13 films have sold more than 10 million tickets in Korea. This study examined laughter and tears as the main internal factors of box-office hits that attracted more than 10 million viewers. First, the study collected terms related to laughter and tears. Next, it charted how frequently the laughter and tear factors appear across the five dramatic stages (exposition, rising action, crisis, climax, ending) and compared the box-office hits by genre. The results of the analysis will contribute to building a comprehensive database for predicting the box-office performance of future scenarios.
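
As an illustration of the frequency analysis, the sketch below splits a script into five equal stages and counts laugh and tear terms in each; the English term lists, the equal-length split, and the stage_frequencies helper are assumptions, since the study works with Korean vocabulary collected from the 13 films.

```python
import re
from collections import Counter

# Illustrative term lists; the study builds these from collected vocabulary.
LAUGH_TERMS = {"laugh", "giggle", "joke", "funny"}
TEAR_TERMS = {"cry", "tears", "sob", "grief"}

STAGES = ["exposition", "rising_action", "crisis", "climax", "ending"]

def stage_frequencies(script: str) -> dict[str, Counter]:
    """Split a script into five equal parts and count laugh/tear terms per stage."""
    tokens = re.findall(r"[a-z']+", script.lower())
    step = max(1, len(tokens) // len(STAGES))
    freqs = {}
    for i, stage in enumerate(STAGES):
        chunk = tokens[i * step:(i + 1) * step] if i < len(STAGES) - 1 else tokens[i * step:]
        freqs[stage] = Counter(
            laugh=sum(t in LAUGH_TERMS for t in chunk),
            tear=sum(t in TEAR_TERMS for t in chunk),
        )
    return freqs

sample = "They joke and laugh at first, but by the end everyone is in tears."
print(stage_frequencies(sample))
```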

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • The development of artificial intelligence technologies has been rapidly accelerating with the Fourth Industrial Revolution, and research related to AI has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems such as learning and problem solving related to human intelligence. The field of artificial intelligence has achieved more technological advances than ever, due to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence, and it aims to enable artificial intelligence agents to make decisions by using machine-readable and processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical artificial intelligence such as machine learning. More recently, the purpose of the knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. These knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task and still requires a great deal of expert effort. In recent years, much research and many technologies in knowledge-based artificial intelligence use DBpedia, one of the biggest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present user-created summaries of some unifying aspect of an article. This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of accuracy of knowledge by generating knowledge from the semi-structured infobox data created by users. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability.
This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. In order to demonstrate the appropriateness of this method, we explain a knowledge extraction model that follows the DBpedia ontology schema, trained on Wikipedia infoboxes. Our knowledge extraction model consists of three steps: document classification into ontology classes, classification of appropriate sentences from which to extract triples, and value selection and transformation into an RDF triple structure. The structure of Wikipedia infoboxes is defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate, and we convert the knowledge into the form of triples.
In order to train the models, we generated a training data set from a Wikipedia dump using a method that adds BIO tags to sentences, and we trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF for the knowledge extraction process. Through the proposed process, it is possible to utilize structured knowledge by extracting knowledge from text documents according to the ontology schema. In addition, this methodology can significantly reduce the effort required of experts to construct instances according to the ontology schema.
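
The paper's extraction model predicts BIO tags with CRF or Bi-LSTM-CRF and then converts the tagged values into RDF triples; as a small illustration of that last conversion step, a minimal BIO-span decoder might look like the following. The example sentence, tags, and the dbo:country relation are assumptions made for the sketch.

```python
def bio_to_triples(subject: str, relation: str, tokens: list[str], tags: list[str]):
    """Collect B-/I- tagged spans as object values and emit (subject, relation, object) triples.
    A simplified decoder; in the paper the tags come from a CRF or Bi-LSTM-CRF model."""
    triples, span = [], []
    for token, tag in zip(tokens, tags):
        if tag == "B":
            if span:
                triples.append((subject, relation, " ".join(span)))
            span = [token]
        elif tag == "I" and span:
            span.append(token)
        else:  # "O" closes any open span
            if span:
                triples.append((subject, relation, " ".join(span)))
                span = []
    if span:
        triples.append((subject, relation, " ".join(span)))
    return triples

tokens = ["Seoul", "is", "the", "capital", "of", "South", "Korea", "."]
tags   = ["O",     "O",  "O",   "O",       "O",  "B",     "I",     "O"]
print(bio_to_triples("Seoul", "dbo:country", tokens, tags))
# [('Seoul', 'dbo:country', 'South Korea')]
```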