• Title/Summary/Keyword: Intelligent Media Contents


Document Summarization via Convex-Concave Programming

  • Kim, Minyoung
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.16 no.4
    • /
    • pp.293-298
    • /
    • 2016
  • Document summarization is an important task in various areas, where the goal is to select a few of the most descriptive sentences from a given document as a succinct summary. Even without training data of human-labeled summaries, several interesting existing works in the literature yield reasonable performance. In this paper, within the same unsupervised learning setup, we propose a more principled learning framework for the document summarization task. Specifically, we formulate an optimization problem that expresses the requirements of both faithful preservation of the document contents and the summary length constraint. We circumvent the difficult integer programming originating from binary sentence selection via continuous relaxation and a low-entropy penalization. We also suggest an efficient convex-concave optimization solver algorithm that is guaranteed to improve the original objective at every iteration. For several document datasets, we demonstrate that the proposed learning algorithm significantly outperforms the existing approaches.
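As a rough illustration of the relaxed-selection idea, the sketch below relaxes the binary sentence-selection variables to values in [0, 1] with a length budget and pushes them back toward 0/1 with a low-entropy term. This is a toy sketch under assumed details (coverage objective, projected gradient ascent); it does not reproduce the paper's exact formulation or its convex-concave solver.

```python
# Toy sketch only: relaxed sentence selection with a coverage objective, a
# low-entropy (binarization) term, and projected gradient ascent. The exact
# objective and the CCCP solver in the paper are not reproduced here.
import numpy as np

def summarize(S, k, lam=0.1, iters=200, lr=0.05):
    """S: (n, d) sentence feature matrix (e.g., TF-IDF rows); k: summary size."""
    n = S.shape[0]
    doc = S.sum(axis=0)                         # whole-document representation
    z = np.full(n, k / n)                       # relaxed selection variables in [0, 1]
    for _ in range(iters):
        summary = S.T @ z
        grad_cov = -2.0 * S @ (summary - doc)   # ascent direction of -||S^T z - doc||^2
        zc = np.clip(z, 1e-9, 1 - 1e-9)
        grad_ent = lam * np.log(zc / (1 - zc))  # pushes z away from 0.5 (low entropy)
        z = np.clip(z + lr * (grad_cov + grad_ent), 0.0, 1.0)
        z = np.clip(z * (k / max(z.sum(), 1e-9)), 0.0, 1.0)  # crude budget projection
    return np.argsort(-z)[:k]                   # indices of the k selected sentences

# Example: pick 2 of 4 toy "sentences"
S = np.array([[1.0, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]])
print(summarize(S, k=2))
```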

Similarity-Based Patch Packing Method for Efficient Plenoptic Video Coding in TMIV

  • Kim, HyunHo;Kim, Yong-Hwan
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2022.06a
    • /
    • pp.250-252
    • /
    • 2022
  • As immersive video contents have started to emerge in the commercial market, research on them is required. To this end, efficient coding methods for immersive video are being studied in the MPEG-I Visual workgroup, which has released the Test Model for Immersive Video (TMIV). In the current TMIV, patches are packed into atlases in order of patch size. However, this simple patch packing method can reduce coding efficiency from the perspective of the 2D encoder. In this paper, we propose a patch packing method that packs patches into atlases using the similarity of each patch to improve the coding efficiency of 3DoF+ video. Experimental results show a 0.3% BD-rate saving on average over the TMIV anchor.

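A minimal sketch of the general idea of similarity-aware patch ordering is given below. The per-patch descriptors and the greedy nearest-neighbour chaining are hypothetical illustrations; this is not the TMIV implementation or the authors' exact packing rule.

```python
# Hypothetical sketch: order patches so that neighbours placed in the atlas are
# visually similar, which tends to help the 2D encoder. Greedy nearest-neighbour
# chaining over per-patch descriptors; not the actual TMIV packing algorithm.
import numpy as np

def order_patches_by_similarity(features):
    """features: (n, d) per-patch descriptors (e.g., mean colour/texture stats)."""
    n = len(features)
    remaining = set(range(1, n))
    order = [0]                                  # start from the first patch
    while remaining:
        last = features[order[-1]]
        nxt = min(remaining, key=lambda i: float(np.linalg.norm(features[i] - last)))
        order.append(nxt)
        remaining.remove(nxt)
    return order

patch_descriptors = np.random.rand(8, 16)        # 8 toy patch descriptors
print(order_patches_by_similarity(patch_descriptors))
```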

A study on intelligent services using 3D modeling (3차원 모델링을 적용한 지능형 서비스에 관한 연구)

  • Eunji Kim;Lee ByongKwon
    • Journal of Digital Policy
    • /
    • v.2 no.2
    • /
    • pp.1-6
    • /
    • 2023
  • This study was carried out so that users can access tourism services more easily by utilizing Unity on a metaverse basis. The core functions and environment of the program were created with Unity, a game production tool, to build a virtual space. In the virtual space, the tourism service can be experienced from various angles and positions through NPCs to which controls and camera viewpoints are applied. The project is content that lets users visit tourist attractions in a virtual world without traveling to the site by using virtual reality technology. The background and goal of the project are to present it in game form using a UI frame and to fuse in simple game elements that add fun, enabling a gamified virtual-reality tourist experience for publicizing tourist attractions.

Trends of Cloud and Virtualization in Broadcast Infra (방송 인프라의 클라우드 및 가상화 동향)

  • Kim, S.C.;Oh, H.J.;Yim, H.J.;Hyun, E.H.;Choi, D.J.
    • Electronics and Telecommunications Trends
    • /
    • v.34 no.3
    • /
    • pp.23-33
    • /
    • 2019
  • Broadcasting is evolving into a media service aimed at user customization, personalization, and participation with high-quality broadcasting contents (4K/8K/AR/VR). A broadcast infrastructure is needed to compete in large-scale media traffic processing, platform performance for adaptive transcoding to diverse receivers, and intelligent services. Cloud services and virtualization in broadcasting are becoming more valuable as the broadcasting environment changes and new high-level broadcasting services emerge. This document describes examples of cloud and virtualization in the broadcast industry and discusses prospects for network virtualization of the broadcast transmission infrastructure, especially terrestrial and cable networks.

Scene extraction technology on deep learning for media production (미디어 제작을 위한 씬 검출 기법)

  • Song, Hyok;Ko, Min-Soo;Yoo, Jisang
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2022.06a
    • /
    • pp.184-185
    • /
    • 2022
  • With changes in the Internet environment, information delivery is shifting from text-based content to multimedia-based streaming. In addition to large-volume video data, content is distributed in various short-form video formats such as Shorts, Clips, and Reels, and service platforms provide functions for easy editing. In video production for editing short videos as well as large-volume content, TV, and YouTube content, the editing stage requires the most manpower and time and is being automated with deep-learning-based AI; we developed a scene detection technique, scenes being the most basic unit of video editing. Scenes were extracted using a keyframe detection technique and a similarity measure, and the result was optimized with a block cost function, yielding an accuracy of 0.5214.

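A minimal sketch of histogram-based shot/scene boundary detection follows; this is an assumed baseline approach for illustration, and the paper's keyframe detection and block cost function optimization are not reproduced.

```python
# Assumed baseline sketch: compare colour histograms of consecutive frames and
# declare a scene cut where the similarity drops below a threshold.
import cv2

def detect_scene_cuts(video_path, threshold=0.6):
    cap = cv2.VideoCapture(video_path)
    cuts, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            sim = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if sim < threshold:                  # low similarity => likely scene change
                cuts.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return cuts

# Example (hypothetical file name): print(detect_scene_cuts("input.mp4"))
```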

Optimized Implementation of Audio Loudness Measurement Method for Broadcasting Contents (방송프로그램 음량 측정 기법의 고속화 구현)

  • Kim, Je Woo;Cho, Choongsang;Lee, Young Han
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2016.06a
    • /
    • pp.60-62
    • /
    • 2016
  • As digital broadcasting has become widespread, the loudness of broadcast programs has kept increasing due to program effects and competition among broadcasters, and the loudness imbalance between channels and between programs has worsened. To address this, the ITU-R studied loudness measurement methods and reference loudness levels and, as a result, recommended the BS.1770 standard. Based on this international standard, major countries such as the US, the EU, and Japan, as well as Korea, have established domestic regulations and enforce loudness regulation for digital broadcast programs. This paper describes the ITU-R BS.1770-3 program loudness measurement method adopted in Korea and proposes a method for its fast implementation. The proposed method parallelizes the filters used in the BS.1770-3 loudness measurement and the filter for True Peak measurement, achieving a 4x speed-up over a conventional filter implementation, and experiments with the EBU R128 and Tech 3341 conformance streams confirmed that the standard specifications are satisfied.

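For reference, the integrated-loudness summation of BS.1770 can be written compactly. The sketch below assumes already K-weighted samples and omits the K-weighting filters, gating, and the True Peak filter whose parallelization the paper addresses.

```python
# Simplified sketch of the BS.1770 loudness summation (K-weighting and gating
# omitted); channel weights follow the standard (L/R/C = 1.0, Ls/Rs = 1.41).
import numpy as np

CHANNEL_WEIGHTS = {"L": 1.0, "R": 1.0, "C": 1.0, "Ls": 1.41, "Rs": 1.41}

def integrated_loudness(kweighted):
    """kweighted: dict of channel name -> 1-D array of K-weighted samples."""
    power = sum(CHANNEL_WEIGHTS[ch] * np.mean(x ** 2)
                for ch, x in kweighted.items())
    return -0.691 + 10.0 * np.log10(power)       # loudness in LKFS

rng = np.random.default_rng(0)
stereo = {"L": 0.1 * rng.standard_normal(48000),
          "R": 0.1 * rng.standard_normal(48000)}
print(round(integrated_loudness(stereo), 2), "LKFS")
```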

Predicting Plant Biological Environment Using Intelligent IoT (지능형 사물인터넷을 이용한 식물 생장 환경 예측)

  • Ko, Sujeong
    • Journal of Digital Contents Society
    • /
    • v.19 no.7
    • /
    • pp.1423-1431
    • /
    • 2018
  • IoT (Internet of Things) is applied to fields such as agriculture and dairy farming, making it possible to cultivate crops easily even in cities. In particular, IoT technology that intelligently judges and controls the growth environment of cultivated crops is being developed in the agricultural field. In this paper, we propose a method of predicting the growth environment of plants by learning the moisture supply cycle of plants using the intelligent IoT. The proposed system determines the soil moisture level through mapping learning and finds rules indicating when moisture supply is required based on the measured moisture level. Based on these rules, the moisture supply cycle is predicted and presented through media so that it is convenient for users. In addition, to reduce the error of sensor-measured values, plants exchange information with each other so that erroneous values are compensated and prediction accuracy is improved. To evaluate the performance of the growth environment prediction system, experiments were conducted in summer and winter, and it was verified that the accuracy is high.
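A hypothetical miniature of two of the ideas mentioned above, neighbour-based correction of a suspect sensor reading and a simple moisture-supply rule, is given below; the thresholds and the learning step are illustrative and not the paper's.

```python
# Hypothetical sketch: damp a sensor outlier using neighbouring plants' readings,
# then apply a simple moisture-supply rule. Values are illustrative only.
def corrected_moisture(own, neighbours, tolerance=15.0):
    """Fall back to the neighbours' average when the reading deviates too much."""
    avg = sum(neighbours) / len(neighbours)
    return avg if abs(own - avg) > tolerance else own

def needs_water(moisture, threshold=30.0):
    return moisture < threshold                  # e.g., percent volumetric water content

reading = corrected_moisture(own=12.0, neighbours=[41.0, 39.5, 40.2])
print(reading, needs_water(reading))
```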

Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.89-105
    • /
    • 2014
  • After the emergence of the Internet, social media with highly interactive Web 2.0 applications has provided very user-friendly means for consumers and companies to communicate with each other. Users routinely publish contents involving their opinions and interests in social media such as blogs, forums, chat rooms, and discussion boards, and the contents are released in real time on the Internet. For that reason, many researchers and marketers regard social media contents as a source of information for business analytics to develop business insights, and many studies have reported results on mining business intelligence from social media content. In particular, opinion mining and sentiment analysis, as techniques to extract, classify, understand, and assess the opinions implicit in text contents, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques, and tools have been presented by these researchers. However, we have found some weaknesses in their methods, which are often technically complicated and not sufficiently user-friendly for supporting business decisions and planning. In this study, we attempted to formulate a more comprehensive and practical approach to conducting opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining using social media content, from the initial data gathering stage to the final presentation session. Our proposed approach to opinion mining consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts have to choose the target social media. Each target medium requires different ways for analysts to gain access, such as open APIs, search tools, DB2DB interfaces, content purchasing, and so on. The second phase is pre-processing to generate useful materials for meaningful analysis. If we do not remove garbage data, the results of social media analysis will not provide meaningful and useful business insights. To clean social media data, natural language processing techniques should be applied. The next step is the opinion mining phase, where the cleansed social media content set is analyzed. The qualified data set includes not only user-generated contents but also content identification information such as creation date, author name, user id, content id, hit counts, review or reply, favorites, etc. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool. Topic extraction and buzz analysis are usually related to market trend analysis, while sentiment analysis is utilized to conduct reputation analysis. There are also various applications, such as stock prediction, product recommendation, sales forecasting, and so on. The last phase is visualization and presentation of the analysis results. The major focus and purpose of this phase are to explain the results of the analysis and help users comprehend their meaning. Therefore, to the extent possible, deliverables from this phase should be made simple, clear, and easy to understand, rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company. We targeted the leading company, NS Food, with a 66.5% market share; the firm has kept the No. 1 position in the Korean "Ramen" business for several decades. We collected a total of 11,869 pieces of content, including blogs, forum contents, and news articles. After collecting the social media content data, we generated instant-noodle-business-specific language resources for data manipulation and analysis using natural language processing. In addition, we classified contents into more detailed categories such as marketing features, environment, reputation, etc. In this phase, we used free software such as the TM, KoNLP, ggplot2, and plyr packages of the R project. As a result, we present several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other visualized images, to provide vivid, full-colored examples using the open library packages of the R project. Business actors can detect at a glance which areas are weak, strong, positive, negative, quiet, or loud. The heat map can show the movement of sentiment or volume in a category-by-time matrix, where color density indicates intensity over time periods. The valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers to quickly understand the big-picture business situation with a hierarchical structure, since a tree map can present buzz volume and sentiment in a single visualized result for a certain period. This case study offers real-world business insights from market sensing, demonstrating to practical-minded business users how they can use these types of results for timely decision making in response to on-going changes in the market. We believe our approach can provide a practical and reliable guide to opinion mining with visualized results that are immediately useful, not just in the food industry but in other industries as well.
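A schematic miniature of the qualify-analyze-visualize steps on already-collected posts is sketched below. The tiny lexicon and sample posts are made up, and the study itself used R with the TM/KoNLP/ggplot2/plyr packages rather than this Python sketch.

```python
# Schematic sketch of the qualify -> analyze -> visualize steps on toy data.
import re
from collections import Counter

posts = ["The new ramen flavour is great", "Too salty, really disappointed"]

# qualify: crude cleanup and tokenisation (real work would use NLP tooling)
tokens_per_post = [re.findall(r"[a-z]+", p.lower()) for p in posts]

# analyze: lexicon-based sentiment polarity plus buzz (term frequency)
LEXICON = {"great": 1, "love": 1, "disappointed": -1, "salty": -1}  # made-up lexicon
scores = [sum(LEXICON.get(t, 0) for t in toks) for toks in tokens_per_post]
buzz = Counter(t for toks in tokens_per_post for t in toks).most_common(5)

# visualize: here just print; in practice feed scores/buzz to plotting tools
print("sentiment per post:", scores)
print("top buzz terms:", buzz)
```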

A Study on Distributed & Intelligent Platform for Digital Contents (디지털 콘텐츠 저장 및 유통을 위한 분산 지능형 플랫폼에 관한 연구)

  • 장연세;임승린;나오키엔도
    • Journal of the Korea Society of Computer and Information
    • /
    • v.8 no.3
    • /
    • pp.53-60
    • /
    • 2003
  • These days, digital contents are authored in HTML and various multimedia formats by enterprises and organizations. Under these circumstances, it is very hard or even impossible to search contents across them. Furthermore, content service systems may have to be changed to extend them to cover more user service needs. In this paper, we propose a new architecture that provides extensibility and improved interoperability without maintenance, updates, or changes. We also introduce OMG's CORBA to build a fault-resilient system and to provide load balancing using a distributed processing mechanism. In addition, we adopted the SyncML specification, which makes real-time synchronization between DBMSs possible. Together, these support easy content synchronization and user-friendliness.


Similar Contents Recommendation Model Based On Contents Meta Data Using Language Model (언어모델을 활용한 콘텐츠 메타 데이터 기반 유사 콘텐츠 추천 모델)

  • Donghwan Kim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.1
    • /
    • pp.27-40
    • /
    • 2023
  • With the increase in the spread of smart devices and the impact of COVID-19, the consumption of media contents through smart devices has significantly increased. Along with this trend, the amount of media content viewed through OTT platforms is increasing, which makes content recommendation on these platforms more important. Previous content-based recommendation studies have mostly utilized metadata describing the characteristics of the contents, while there is a shortage of studies that utilize the contents' own descriptive text metadata. In this paper, various text data, including titles and synopses that describe the contents, were used to recommend similar contents. KLUE-RoBERTa-large, a Korean language model with excellent performance, was used to train the model on the text data. A dataset of over 20,000 items of content metadata, including titles, synopses, composite genres, directors, actors, and hashtag information, was used as training data. To feed the various text features into the language model, the features were concatenated using special tokens that indicate each feature. The test set was designed to make the evaluation of the model's similarity classification ability relative and objective by using a three-content comparison method and applying multiple rounds of inspection to label the test set. Genre classification and hashtag classification prediction tasks were used to fine-tune the embeddings of the content meta text data. As a result, the hashtag classification model showed an accuracy of over 90% on the similarity test set, more than 9% better than the baseline language model. Through hashtag classification training, it was found that the language model's ability to classify similar contents was improved, which demonstrates the value of using a language model for content-based filtering.
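A minimal sketch of the general approach of encoding concatenated metadata text with a Korean language model and comparing embeddings is given below. The field markers such as "[TITLE]" and the mean pooling are assumptions for illustration, not the paper's exact special tokens, pooling strategy, or fine-tuning setup.

```python
# Assumed illustration: concatenate metadata fields with illustrative markers,
# encode with klue/roberta-large, mean-pool, and compare by cosine similarity.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("klue/roberta-large")
model = AutoModel.from_pretrained("klue/roberta-large")
model.eval()

def encode(meta):
    text = (f"[TITLE] {meta['title']} [GENRE] {meta['genre']} "
            f"[SYNOPSIS] {meta['synopsis']}")
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state   # (1, seq_len, hidden_dim)
    return hidden.mean(dim=1)                        # mean-pooled content embedding

a = encode({"title": "Drama A", "genre": "drama", "synopsis": "A family story."})
b = encode({"title": "Thriller B", "genre": "thriller", "synopsis": "A chase in the city."})
print(torch.nn.functional.cosine_similarity(a, b).item())
```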