• Title/Summary/Keyword: technology platforms

Search Result 1,095, Processing Time 0.029 seconds

Prediction of a hit drama with a pattern analysis on early viewing ratings (초기 시청시간 패턴 분석을 통한 대흥행 드라마 예측)

  • Nam, Kihwan;Seong, Nohyoon
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.33-49
    • /
    • 2018
  • The success of a TV drama has a strong impact on ratings and on the effectiveness of channel promotion, and its cultural and business impact has been demonstrated through the Korean Wave. Early prediction of a blockbuster TV drama is therefore very important from the strategic perspective of the media industry. Previous studies have tried to predict audience ratings and drama success by various methods, but most made simple predictions using intuitive factors such as the lead actor or the time slot, which limits their predictive power. In this study, we propose a model for predicting the popularity of a drama by analyzing customers' viewing patterns on the basis of various theories. This is not only a theoretical contribution but also a practical one that actual broadcasting companies can use. We collected data on 280 TV mini-series dramas broadcast on terrestrial channels over the 10 years from 2003 to 2012. From these data, we selected the 45 most highly ranked and least highly ranked TV dramas and analyzed their viewing patterns in 11 steps. The assumptions and conditions for modeling are based on existing studies, on the opinions of actual broadcasters, and on data mining techniques. We then developed a prediction model by measuring the viewing-time distance (difference) with Euclidean and correlation methods, which we term similarity (the sum of distances). Using this similarity measure, we predicted the success of dramas from viewers' initial viewing-time pattern distribution over episodes 1-5. To confirm that the model does not shift with the choice of measurement method, several distance measures were applied and the model was checked for robustness. Once the model was established, we improved its predictive power further with a grid search. 
Furthermore, we classified viewers who watched more than 70% of a drama's total airtime as "passionate viewers" when a new drama was broadcast. We then compared the percentage of passionate viewers between the most highly ranked and the least highly ranked dramas, so that we could assess the possibility of a blockbuster TV mini-series. We find that the initial viewing-time pattern is the key factor in predicting blockbuster dramas: our model correctly classified blockbuster dramas with 75.47% accuracy based on initial viewing-time pattern analysis. This paper shows a high prediction rate while suggesting an audience-rating method different from existing ones. Broadcasters currently rely heavily on a few famous actors, the so-called star system, and face more severe competition than ever due to rising production costs of broadcasting programs, a long-term recession, and aggressive investment by comprehensive programming channels and large corporations; everyone is in a financially difficult situation. The basic revenue model of these broadcasters is advertising, and the execution of advertising is based on audience ratings as a basic index. The drama market involves demand uncertainty because of the nature of the product, while dramas contribute substantially to the financial success of a broadcaster's content. Therefore, to minimize the risk of failure, analyzing the initial viewing-time distribution can practically help the related company establish a response strategy (scheduling, marketing, story changes, etc.). We also found that audience behavior is crucial to the success of a program. In this paper, we define viewing loyalty as a measure of how enthusiastically a program is watched. 
We can successfully predict the success of a program by calculating the loyalty of these passionate viewers. This way of calculating loyalty can also be applied to other platforms, and can be used for marketing programs such as highlights, script previews, making-of content, characters, games, and other marketing projects.
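The core of the method is a similarity score over early viewing-time patterns. A minimal sketch of that idea in Python, where the helper names, the equal-weight combination of the two measures, and the toy episode-share numbers are all assumptions rather than the authors' exact formulation:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two viewing-time distributions."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def pearson(a, b):
    """Pearson correlation between two distributions."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def similarity(a, b):
    """Combine both notions: a smaller Euclidean distance and a higher
    correlation both mean the patterns are more alike (the equal
    weighting here is an assumption)."""
    return -euclidean(a, b) + pearson(a, b)

def classify(new_pattern, hit_patterns, flop_patterns):
    """Label a new drama by whichever reference group its episodes 1-5
    viewing-time pattern is most similar to on average."""
    hit = sum(similarity(new_pattern, p) for p in hit_patterns) / len(hit_patterns)
    flop = sum(similarity(new_pattern, p) for p in flop_patterns) / len(flop_patterns)
    return "hit" if hit > flop else "flop"

# Toy viewing-time shares over episodes 1-5 (hypothetical numbers)
hits = [[0.15, 0.18, 0.20, 0.22, 0.25], [0.14, 0.17, 0.21, 0.23, 0.25]]
flops = [[0.30, 0.25, 0.20, 0.15, 0.10], [0.28, 0.24, 0.20, 0.16, 0.12]]
print(classify([0.16, 0.18, 0.19, 0.22, 0.25], hits, flops))  # rising pattern -> "hit"
```

A nearest-group rule like this is only a stand-in for the paper's grid-searched model, but it shows how the two distance notions complement each other: Euclidean distance compares magnitudes, correlation compares the shape of the curve.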

A case study of blockchain-based public performance video platform establishment: Focusing on Gyeonggi Art On, a new media art broadcasting station in Gyeonggi-do (블록체인 기반 공연영상 공공 플랫폼 구축 사례 연구: 경기도 뉴미디어 예술방송국 경기아트온을 중심으로)

  • Lee, Seung Hyun
    • Journal of Service Research and Studies
    • /
    • v.13 no.1
    • /
    • pp.108-126
    • /
    • 2023
  • This study explored the sustainability of a blockchain-based cultural and artistic performance video platform through the construction of Gyeonggi Art On, a new media art broadcasting station in Gyeonggi-do. It also reviewed the technical limitations of video content transactions using blockchain, legal and institutional issues, and the protection of personal information and intellectual property rights. As for the research method, participatory observation, including in-depth interviews with developers and operators and participation in meetings, was conducted. The researcher participated in and observed the entire development process, including designing and developing blockchain nodes, smart contracts, APIs, and the UI/UX, and testing the interworking between the blockchain and the content distribution service. Research Question 1: The results of the study on 'Which technology model is suitable for a blockchain-based public platform for distributing performance video content?' are as follows. 1) The blockchain type suitable for such a platform is a private chain that can be joined only at the direct invitation of the blockchain manager. 2) For public platforms such as Gyeonggi ArtOn, between the NFT-issuance-based copyright management model and the blockchain-token and cloud-based content distribution model, the model that provides content to external demand organizations through an API and uses K-tokens for fee settlement is suitable. 3) For initial public platform services such as Gyeonggi ArtOn, a closed blockchain that serves only users who have been granted the right to use content is suitable. Research Question 2: What legal and institutional problems should be reviewed when operating a blockchain-based public platform for performance video distribution? The results are as follows. 
1) Blockchain-based smart contracts have a party-eligibility problem, since the nature of blockchain technology means the identities of transaction parties may not be revealed. 2) When a security incident occurs on the blockchain, it is difficult to recover the loss because it is unclear how a user's loss should be compensated or remedied. 3) The concept of default cannot be applied to smart contracts, and even when the obligations under a smart contract have already been fulfilled, the possibility of incomplete performance must be reviewed.
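The distribution model the study settles on (content served to invited organizations through an API, with fees settled in a platform token) can be sketched as a toy permissioned ledger. The class, method names, and token amounts below are illustrative assumptions, not Gyeonggi Art On's actual implementation:

```python
class PermissionedContentLedger:
    """Toy model of a private-chain content service: only invited
    members may transact, and each content access settles a token fee."""

    def __init__(self):
        self.members = set()   # private chain: the manager invites members
        self.balances = {}     # token balances per member organization
        self.log = []          # append-only transaction log

    def invite(self, org, initial_tokens):
        """Manager-only onboarding, mirroring the invite-only private chain."""
        self.members.add(org)
        self.balances[org] = initial_tokens

    def access_content(self, org, content_id, fee):
        """Serve content through the API and settle the fee in tokens."""
        if org not in self.members:
            raise PermissionError(f"{org} is not an invited member")
        if self.balances[org] < fee:
            raise ValueError("insufficient token balance")
        self.balances[org] -= fee
        self.log.append((org, content_id, fee))
        return f"content:{content_id}"

ledger = PermissionedContentLedger()
ledger.invite("demand-org-A", initial_tokens=100)
ledger.access_content("demand-org-A", "performance-042", fee=10)
print(ledger.balances["demand-org-A"])  # 90
```

The append-only log is the part a real deployment would replace with on-chain state; the access check and fee settlement are the logic a smart contract would carry.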

Evaluation of Hydrophilic Polymer on the Growth of Plants in the Extensive Green Roofs (저관리형 옥상녹화 식물생육을 위한 Hydrophilic polymer의 효용성)

  • Yang, Ji;Yoon, Yong-Han;Ju, Jin-Hee
    • Korean Journal of Environment and Ecology
    • /
    • v.28 no.3
    • /
    • pp.357-364
    • /
    • 2014
  • This study aimed to determine the effects of a water-retention additive, hydrophilic polymer, on the growth of plants in extensive green roofs: Juniperus chinensis var. sargentii and Euonymus fortunei 'Emerald and Gold' as woody plants, and Carex kobomugi and Carex pumila as herbaceous plants. Five different contents of hydrophilic polymer, 0% (Control), 1.0%, 2.5%, 5.0%, and 10% (polymer:medium (w/w), dry weight basis), were added to containers each filled with 100 kg of growth medium. Ten plants were transplanted into each square container (1 m (L) × 1 m (W) × 0.3 m (H)) built on roof platforms in a randomized complete block design on the 20th of May, 2013. Excessively high volumetric soil water content, about 97-98%, was found in substrates with hydrophilic polymer concentrations of at least 2.5% during the entire growing period. The moisture content of the substrate containing 1.0% hydrophilic polymer was about 20 percentage points higher, ranging between 70% and 80%, than that of the Control substrate, which ranged between 50% and 60%, for the 27 days after transplanting prior to abundant rainfall, indicating that applying hydrophilic polymer to an extensive green roof substrate is effective in alleviating drought conditions by retaining water in the substrate. Euonymus fortunei 'Emerald and Gold' and Carex kobomugi showed higher growth at 2.5% than under the other treatments, Juniperus chinensis var. sargentii showed the highest growth under the 1.0% hydrophilic polymer treatment, and Carex pumila grew best in the Control. All plants grown in the 1.0% and 2.5% hydrophilic polymer treatments survived, while the plants grown in the 5.0% and 10% treatments died after 3 months. 
These results suggest that the advantage of adding hydrophilic polymer may be greater for drought-tolerant plants, but the mixture proportion of hydrophilic polymer should be determined according to the features of the plant species being grown.

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.21-44
    • /
    • 2018
  • In recent years, the rapid development of internet technology and the popularization of smart devices have resulted in massive amounts of text data, produced and distributed through various media platforms such as the World Wide Web, internet news feeds, microblogs, and social media. However, this enormous amount of easily obtained information lacks organization, a problem that has drawn the interest of many researchers. Managing it also requires professionals capable of classifying relevant information, and hence text classification was introduced. Text classification is a challenging task in modern data analysis in which a text document must be assigned to one or more predefined categories or classes. Different techniques are available in this field, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machines, Decision Trees, and Artificial Neural Networks. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge: the performance of a text classification model varies with the type of words used in the corpus and the type of features created for classification. Most previous attempts have proposed a new algorithm or modified an existing one, and such research can be said to have reached its limits for further improvement. In this study, rather than proposing or modifying an algorithm, we focus on finding a way to modify the use of the data. It is widely known that classifier performance is influenced by the quality of the training data on which the classifier is built. Real-world datasets usually contain noise, or noisy data, which can affect the decisions made by classifiers built from them. 
In this study, we consider that data from different domains, that is, heterogeneous data, may have noise-like characteristics that can be utilized in the classification process. Machine learning algorithms build classifiers on the assumption that the characteristics of the training data and the target data are the same or very similar. However, for unstructured data such as text, the features are determined by the vocabulary of each document, so if the viewpoints of the training data and the target data differ, the features of the two may differ as well. We attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Data coming from various sources are likely to be formatted differently, which causes difficulties for traditional machine learning algorithms, since they were not developed to recognize different types of data representation at once and bring them into the same generalization. Therefore, to utilize heterogeneous data in the learning process of the document classifier, we apply semi-supervised learning. Because unlabeled data may degrade the performance of the classifier, we further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA) to select only the documents that contribute to improving the classifier's accuracy. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data. The most confident classification rules are selected and applied for the final decision. 
In this paper, three different types of real-world data sources were used, namely news, Twitter, and blogs.
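RSESLA itself is not specified in enough detail here to reproduce, but the ingredients it names (noise injection, multiple views, confidence-gated selection of unlabeled documents) can be sketched generically. In the Python sketch below, every name and the token-dropping noise model are assumptions, and a tiny hand-rolled Naive Bayes stands in for the paper's classifiers:

```python
import math
import random
from collections import Counter, defaultdict

random.seed(0)

def tokens(doc):
    return doc.lower().split()

def drop_noise(doc, rate=0.3):
    """One 'view' of a document: randomly drop tokens, a crude stand-in
    for artificial noise injection."""
    kept = [t for t in tokens(doc) if random.random() > rate]
    return " ".join(kept) or doc

class TinyNB:
    """Minimal multinomial Naive Bayes with Laplace smoothing."""
    def fit(self, docs, labels):
        self.word = defaultdict(Counter)
        self.prior = Counter(labels)
        for d, y in zip(docs, labels):
            self.word[y].update(tokens(d))
        self.vocab = {w for c in self.word.values() for w in c}
        return self

    def proba(self, doc):
        logp = {}
        for y in self.prior:
            total = sum(self.word[y].values())
            s = math.log(self.prior[y])
            for w in tokens(doc):
                s += math.log((self.word[y][w] + 1) / (total + len(self.vocab)))
            logp[y] = s
        m = max(logp.values())
        z = sum(math.exp(v - m) for v in logp.values())
        return {y: math.exp(v - m) / z for y, v in logp.items()}

    def predict(self, doc):
        p = self.proba(doc)
        return max(p, key=p.get)

def ensemble_ssl(labeled, labels, unlabeled, n_views=3, conf=0.9):
    """Train one classifier per noisy view; accept an unlabeled document
    into the training set only when every view agrees on its label with
    high confidence (a loose analogue of confident-rule selection)."""
    views = [TinyNB().fit([drop_noise(d) for d in labeled], labels)
             for _ in range(n_views)]
    docs, ys = list(labeled), list(labels)
    for d in unlabeled:
        probs = [v.proba(d) for v in views]
        preds = [max(p, key=p.get) for p in probs]
        if len(set(preds)) == 1 and all(p[preds[0]] >= conf for p in probs):
            docs.append(d)
            ys.append(preds[0])
    return TinyNB().fit(docs, ys)

labeled = ["goal match striker league", "win match goal score",
           "gpu model training data", "neural network model data"]
labels = ["sports", "sports", "tech", "tech"]
unlabeled = ["striker score goal match", "gpu neural training model"]

clf = ensemble_ssl(labeled, labels, unlabeled)
print(clf.predict("goal striker match"))   # sports
print(clf.predict("gpu training model"))   # tech
```

The agreement-plus-confidence gate is what keeps low-quality pseudo-labels from degrading the final classifier, which is the failure mode the abstract warns about for unlabeled data.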

Development of Multimedia Annotation and Retrieval System using MPEG-7 based Semantic Metadata Model (MPEG-7 기반 의미적 메타데이터 모델을 이용한 멀티미디어 주석 및 검색 시스템의 개발)

  • An, Hyoung-Geun;Koh, Jae-Jin
    • The KIPS Transactions:PartD
    • /
    • v.14D no.6
    • /
    • pp.573-584
    • /
    • 2007
  • As multimedia information increases rapidly, various types of multimedia data retrieval are becoming issues of great importance. Efficient multimedia data processing requires semantics-based retrieval techniques that can extract the meaningful contents of multimedia data. Existing retrieval methods for multimedia data are annotation-based retrieval, feature-based retrieval, and retrieval that integrates annotation and features. These systems demand much effort and time from the annotator, and complicated calculations must be performed for feature extraction. In addition, the created data have the shortcoming of supporting only static searches that do not change, and user-friendly, semantic search techniques are not supported. This paper proposes S-MARS (Semantic Metadata-based Multimedia Annotation and Retrieval System), which can represent and retrieve multimedia data efficiently using MPEG-7. The system provides a graphical user interface for annotating, searching, and browsing multimedia data. It is implemented on the basis of a semantic metadata model for representing multimedia information. The semantic metadata about multimedia data is organized on the basis of a multimedia description schema, using an XML schema that complies with the MPEG-7 standard. In conclusion, the proposed scheme can be implemented easily on any multimedia platform supporting XML technology. It can enable efficient sharing of semantic metadata between systems, and it will contribute to improving retrieval correctness and user satisfaction with embedding-based multimedia retrieval.
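Because the metadata is plain XML, any platform with XML support can query it, as the abstract notes. A small Python illustration with heavily simplified, MPEG-7-inspired element names follows; the real standard's description schemes are far richer, and nothing below is the actual S-MARS schema:

```python
import xml.etree.ElementTree as ET

# A much-simplified, MPEG-7-inspired semantic description.
# Element names here are illustrative only.
doc = ET.fromstring("""
<Mpeg7>
  <Description>
    <Video id="v1">
      <Title>Morning News</Title>
      <Semantic><Object>anchor</Object><Event>broadcast</Event></Semantic>
    </Video>
    <Video id="v2">
      <Title>Football Highlights</Title>
      <Semantic><Object>player</Object><Event>goal</Event></Semantic>
    </Video>
  </Description>
</Mpeg7>
""")

def search_by_event(root, event):
    """Return titles of videos whose semantic annotation mentions the event."""
    hits = []
    for video in root.iter("Video"):
        events = [e.text for e in video.iter("Event")]
        if event in events:
            hits.append(video.findtext("Title"))
    return hits

print(search_by_event(doc, "goal"))  # ['Football Highlights']
```

Searching over annotated semantic elements rather than raw features is the point of a semantics-based approach: the query matches meaning ("goal") instead of low-level visual descriptors.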

Designing Mobile Framework for Intelligent Personalized Marketing Service in Interactive Exhibition Space (인터랙티브 전시 환경에서 개인화 마케팅 서비스를 위한 모바일 프레임워크 설계)

  • Bae, Jong-Hwan;Sho, Su-Hwan;Choi, Lee-Kwon
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.1
    • /
    • pp.59-69
    • /
    • 2012
  • The exhibition industry, one of the government's 17 new growth engines, is related to industries such as tourism, transportation, and finance, so it has a significant ripple effect on them. Exhibition is a knowledge-intensive, eco-friendly, and high value-added industry. Over 13,000 exhibitions are held every year around the world, contributing to foreign currency earnings. The exhibition industry is closely related to culture and tourism, can serve local and national development strategies, and can improve the national brand image as well. Many countries make various efforts to invigorate the exhibition industry by arranging related laws and support systems. In Korea, more than 200 exhibitions are held every year, but only two or three are hosted with over 400 exhibitors, and most of the rest have few foreign exhibitors. The main weakness of domestic trade shows is that there is no agency managing exhibition-related statistics and no specific, reliable evaluation. This makes it impossible to provide buyers or sellers with reliable data, hinders the qualitative growth of exhibitions, and thus prevents the service quality of trade shows from improving. Hosting many visitors (public/buyers/exhibitors) is crucial to the development of the domestic exhibition industry. To attract many visitors, the service quality of exhibitions and visitor satisfaction should be enhanced. For this purpose, a variety of real-time customized services through digital media, along with services for creating new customers and retaining existing ones, should be provided. In addition, personalized information services let visitors manage their time and space efficiently, avoiding the complexity of the exhibition space. 
The exhibition industry can gain competitiveness and an industrial foundation by building up exhibition-related statistics, creating new information, and enhancing research capability. This paper therefore deals with customized services delivered to visitors' smartphones in the exhibition space and designs a mobile framework that enables exhibition devices to interact with other devices. The mobile server framework is composed of three systems: a multi-interaction server, a client, and a display device. By building a knowledge pool of the exhibition environment, the data accumulated for each visitor can be provided as a personalized service. In addition, each piece of information is utilized as customized information based on visitors' reactions, forming a cyclic chain structure. The multi-interaction server is designed to handle events, manage the interaction process between exhibition devices and visitors' smartphones, and manage data. The client is an application running on the visitor's smartphone that can be driven on a variety of platforms; it serves as the interface presenting customized services to individual visitors and handles event input and output for simultaneous participation. An exhibition device consists of a display system to show visitors contents and information, an interaction input-output system to receive events from visitors and turn input into actions, and a control system connecting the two. The proposed mobile framework provides individual visitors with customized, active services using their information profiles and accumulated knowledge. A user participation service is also suggested, using the interaction connection system between the server, clients, and exhibition devices. The suggested mobile framework is a technology that can be applied to cultural industries such as performances, shows, and exhibitions. 
Thus, it lays the foundation for improving visitor participation in exhibitions and for developing the exhibition industry by raising visitors' interest.
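The three-part structure (multi-interaction server, client, exhibition device) can be sketched as a small publish/subscribe dispatcher that also accumulates the per-visitor profile used for personalization. All names below are hypothetical illustrations, not the paper's actual framework:

```python
from collections import defaultdict

class InteractionServer:
    """Toy multi-interaction server: routes events between exhibition
    devices and visitor clients, and accumulates a per-visitor profile
    that feeds the personalized service."""

    def __init__(self):
        self.handlers = defaultdict(list)  # event type -> subscriber callbacks
        self.profiles = defaultdict(list)  # visitor id -> interaction history

    def subscribe(self, event_type, callback):
        """A client or device registers interest in an event type."""
        self.handlers[event_type].append(callback)

    def publish(self, event_type, visitor_id, payload):
        """An exhibition device or smartphone reports an event; the
        server logs it to the profile and fans it out to subscribers."""
        self.profiles[visitor_id].append((event_type, payload))
        for cb in self.handlers[event_type]:
            cb(visitor_id, payload)

    def recommend(self, visitor_id):
        """Personalized service: suggest the exhibit this visitor has
        interacted with most often (a deliberately simple heuristic)."""
        seen = [p for _, p in self.profiles[visitor_id]]
        return max(set(seen), key=seen.count) if seen else None

server = InteractionServer()
shown = []  # stands in for a display device reacting to touch events
server.subscribe("touch", lambda vid, exhibit: shown.append((vid, exhibit)))
server.publish("touch", "visitor-1", "media-art-3")
server.publish("touch", "visitor-1", "media-art-3")
server.publish("touch", "visitor-1", "media-art-7")
print(server.recommend("visitor-1"))  # media-art-3
```

The cyclic chain the paper describes appears here in miniature: every event both drives an immediate device reaction and enriches the profile that shapes the next recommendation.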

A Study on Policy-making, Leadership and Improvement of Professionalism for Audiovisual Archives Management in Korea (국내 시청각 기록관리 정책 리더십 및 전문성 제고 방안 연구)

  • Choi, Hyo jin
    • The Korean Journal of Archival Studies
    • /
    • no.72
    • /
    • pp.91-163
    • /
    • 2022
  • The focus of this paper lies on the fact that the 'management' and 'utilization' of audiovisual archives are still not specialized in either the public or the private sector. The use of online video platforms, including YouTube, has become common, and accordingly the production and collection of high-definition, high-capacity audiovisual archives have been increasing rapidly. However, there are still no references or principles in the current Public Records Act and its enforcement rules, public standards, and guidelines. This paper ultimately examines the provisions of the current Public Records Act related to audiovisual archives, which need revision and enactment given the lack of an audiovisual archives management manual that national institutions, public broadcasters, and organizations can refer to. In addition, this study tries to find out what kinds of systems and guidelines are used in audiovisual archives management. It examines the current state of standardization of audiovisual records at the National Archives, methodically analyzes the systems and guidelines for efficient audiovisual records management in the public records management sector, and suggests a new direction for the relevant public standards and guidelines. Furthermore, it proposes measures to activate the audiovisual-archives policy-making functions of the National Archives. The necessity of establishing a Public Audiovisual Archives as an organization was also reviewed. The Public Audiovisual Archives would collect public audio and video systematically and comprehensively through a legal deposit system, and would operate management and utilization systems so that the material can serve the public as a collective memory. 
Finally, it would take charge of professional roles in the audiovisual records management field, such as technology standardization, to safeguard the material and protect copyrights through this process.

Implications for the Direction of Christian Education in the Age of Artificial Intelligence (인공지능 시대의 기독교교육 방향성에 대한 고찰)

  • Sunwoo Nam
    • Journal of Christian Education in Korea
    • /
    • v.74
    • /
    • pp.107-134
    • /
    • 2023
  • The purpose of this study is to provide a foundation for establishing the correct direction of education that utilizes artificial intelligence, a key technology of the Fourth Industrial Revolution, in the context of Christian education. To achieve this, theoretical and literature research was conducted. First, the research analyzed the historical development of artificial intelligence to understand its characteristics. Second, the research analyzed the use of artificial intelligence in convergence education from an educational perspective and examined the current policy direction in South Korea. Through this analysis, the research examined the direction of Christian education in the era of artificial intelligence. In particular, the research critically examined the perspectives of continuity and change in the context of Christian education in the era of artificial intelligence. The research reflected upon the fundamental educational purposes of Christian education that should remain unchanged despite the changing times. Furthermore, the research deliberated on the educational curriculum and teaching methods that should adapt to the changing dynamics of the era. In conclusion, this research emphasizes that even in the era of artificial intelligence, the fundamental objectives of Christian education should not be compromised. The utilization of artificial intelligence in education should serve as a tool that fulfills the mission permitted by God. Therefore, Christian education should remain centered around God, rooted in the principles of the Bible. Moreover, Christian education should aim to foster creative and convergent Christian nurturing. To achieve this, it is crucial to provide learners with an educational environment that actively utilizes AI-based hybrid learning environments and metaverse educational platforms, combining online and offline learning spaces. 
Moreover, to enhance learners' engagement and effectiveness in education, it is essential to actively utilize AI-based edutech that reflects the aforementioned educational environments. Lastly, in order to cultivate Christian learners with dynamic knowledge, it is crucial to employ a variety of teaching and learning methods grounded in constructivist theories, which emphasize active learner participation, collaboration, inquiry, and reflection. These approaches seek to align knowledge with life experiences, promoting a holistic convergence of faith and learning.

A Study on the Choice of Export Payment Types by Applying the Characteristics of the New Trade & Logistics Environment (신(新)무역물류환경의 특성을 적용한 수출대금 결제유형 선택연구)

  • Chang-bong Kim;Dong-jun Lee
    • Korea Trade Review
    • /
    • v.48 no.4
    • /
    • pp.303-320
    • /
    • 2023
  • Recently, import and export companies have been using T/T remittance and surrendered B/Ls more often than L/Cs when selecting the process and method of trade payment settlement. The new trade and logistics environment is thriving in the era of the Fourth Industrial Revolution (4IR), and document-based trade transactions are undergoing digitalization as electronic bills of lading and smart contracts are developed. The purpose of this study is to verify whether exporters choose export payment types based on negotiating factors, and to discuss how the characteristics of the new trade and logistics environment apply. Data for the analysis were collected through surveys; the collection methods were direct visits to companies, e-mail, fax, and online surveys, and the distribution period ran from February 1, 2023, to April 30, 2023. Of the 2,000 questionnaires distributed, 447 were collected, and the final 336 were used for analysis, excluding 111 deemed unsuited to the purpose of this study. The results are as follows. First, among the negotiating factors, an exporter's product differentiation did not significantly affect the selection of export payment types. Second, the greater the purchasing advantage recognized by the exporter, the higher the possibility of using the post-shipment remittance method. Beyond the analysis results, this study suggests that exporters should consider adopting new payment methods, such as blockchain-based bills of lading and trade finance platforms, to adapt to the evolving trade and logistics environment. Exporters should therefore continue to show interest in initiatives for digitizing trade documents as a response to the challenges posed by bills of lading. 
Future studies should address the lack of awareness in Korea by drawing on advanced research conducted abroad.

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.69-94
    • /
    • 2017
  • Recently, increasing demand for big data analysis has driven the vigorous development of related technologies and tools, and the development of IT and the rising penetration of smart devices are producing large amounts of data. As a result, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis continue to increase; big data analysis will only become more important across industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each analysis requester. However, the increased interest in big data analysis has activated computer programming education and the development of many programs for data analysis. Accordingly, the entry barriers to big data analysis are gradually lowering and data analysis technology is spreading, so big data analysis is expected to be performed by the requesters of analysis themselves. Along with this, interest in various kinds of unstructured data is continually increasing, with a great deal of attention focused on text data. The emergence of new web-based platforms and techniques has brought about mass production of text data and active attempts to analyze it, and the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques applied for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large set of documents, identifies the documents that correspond to each issue, and provides the identified documents as clusters. It is evaluated as a very useful technique in that it reflects the semantic elements of documents. 
Traditional topic modeling is based on the distribution of key terms across the entire document collection; thus, the entire collection must be analyzed at once to identify the topic of each document. This requirement makes the analysis time-consuming when topic modeling is applied to many documents, and it causes a scalability problem: processing time increases exponentially with the number of analysis objects. The problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method allows topic modeling on a large number of documents with limited system resources and can improve processing speed. It can also significantly reduce analysis time and cost, since documents can be analyzed in each location or place without combining all the analysis objects. Despite these advantages, however, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified for each document, but global topics cannot. Second, a method for measuring the accuracy of such a methodology needs to be established; that is, taking the global topics as the ideal answer, the deviation of each local topic from the corresponding global topic needs to be measured. Because of these difficulties, this approach has not been studied sufficiently compared with other topic modeling work. In this paper, we propose a topic modeling approach that solves these two problems. 
First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by detecting whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirm that the proposed methodology can provide results similar to topic modeling on the entire collection, and we propose a reasonable method for comparing the results of the two approaches.
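The divide-and-conquer mapping step can be illustrated compactly. In the Python sketch below, a frequency-based "top terms" extractor stands in for a real topic model such as LDA, and local topics are mapped to global topics by Jaccard similarity of their term sets; all names and the toy corpus are assumptions:

```python
from collections import Counter

def top_terms(docs, k=2):
    """Stand-in 'topic': the k most frequent terms in a document group.
    A real system would run LDA or a similar topic model here."""
    c = Counter(w for d in docs for w in d.split())
    return frozenset(w for w, _ in c.most_common(k))

def jaccard(a, b):
    return len(a & b) / len(a | b)

def map_local_to_global(local_sets, global_topics, k=2):
    """Derive one topic per local set, then map each local topic to the
    most similar global topic by term overlap."""
    mapping = {}
    for i, docs in enumerate(local_sets):
        lt = top_terms(docs, k)
        best = max(range(len(global_topics)),
                   key=lambda j: jaccard(lt, global_topics[j]))
        mapping[i] = best
    return mapping

# Toy corpus: two themes spread across three "locations"
local_sets = [
    ["stock market price", "market price rise"],
    ["team goal match", "match goal win"],
    ["price market stock", "stock price fall"],
]
# Global topics as derived from a reduced global set of delegate documents
global_topics = [top_terms(["market price stock rise fall"]),
                 top_terms(["goal match team win"])]
print(map_local_to_global(local_sets, global_topics))  # {0: 0, 1: 1, 2: 0}
```

Each location only ever models its own documents; the mapping step is what recovers a global view, which is exactly the gap the paper's RGS construction is designed to close.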