• Title/Summary/Keyword: S-SMART system


An Embedding /Extracting Method of Audio Watermark Information for High Quality Stereo Music (고품질 스테레오 음악을 위한 오디오 워터마크 정보 삽입/추출 기술)

  • Bae, Kyungyul
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.21-35 / 2018
  • Since the introduction of MP3 players, CD recordings have gradually been vanishing, and the music consuming environment is shifting to mobile devices. The introduction of smart devices has increased the utilization of music through the playback, mass storage, and search functions integrated into smartphones and tablets. When MP3 players were first supplied, the bitrate of compressed music contents was generally 128 Kbps. However, as demand for high-quality music increased, contents at 384 Kbps appeared, and recently music in the FLAC (Free Lossless Audio Codec) format, which uses lossless compression, is becoming popular. Many music sites in Korea classify their download services into unlimited downloads with technical protection and limited downloads without technical protection. Digital Rights Management (DRM) technology is used as the technical protection measure for unlimited downloads, but the music can only be used on authenticated devices with DRM installed; even music purchased by the user cannot be used on other devices. Conversely, for music that is limited in quantity but not technically protected, there is no way to take action against anyone who redistributes it, and for high-quality music such as FLAC the loss is greater. In this paper, the author proposes an audio watermarking technology for copyright protection of high-quality stereo music. Two kinds of information, "Copyright" and "Copy_free", are generated using a turbo code. Each watermark consists of 9 bytes (72 bits). When the turbo code is applied for error correction, the amount of information to be inserted increases to 222 bits. The 222-bit watermark was then expanded to 1024 bits to be robust against additional errors and finally used as the watermark inserted into the stereo music.
A turbo code can recover the raw data even when part of the code is damaged by attacks on the watermarked content, as long as less than 15% of it is damaged. Because the expansion to 1024 bits raises the probability of recovering the 222 bits from damaged content, the watermark itself becomes more resistant to attack. The proposed algorithm applies quantization in the DCT domain so that the watermark can be detected efficiently and the SNR can be improved when the stereo music is converted to mono. As a result, the SNR exceeded 40 dB on average, a sound-quality improvement of over 10 dB compared with traditional quantization methods. This is a very significant result, since 10 dB corresponds to roughly a tenfold improvement in sound quality. In addition, the watermark can be completely extracted from music samples shorter than one second, even after MP3 compression at a bitrate of 128 Kbps. By contrast, the conventional quantization method largely fails to extract the watermark even from 10-second samples, about ten times the length the proposed method requires. Since the watermark embedded into the music is 72 bits long, it provides sufficient capacity to carry the information needed to identify music distributed all over the world: $2^{72}$ can identify about $4.7\times10^{21}$ items, so it can serve as an identifier for copyright protection of high-quality music services. The proposed algorithm can be used not only for high-quality audio but also in developing watermarking algorithms for other multimedia such as UHD (Ultra High Definition) TV and high-resolution images. Furthermore, with the development of digital devices, users are demanding high-quality music, and artificial intelligence assistants are arriving along with high-quality music and streaming services.
The results of this study can be used to protect the rights of copyright holders in these industries.
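The quantization-based embedding described above can be illustrated with a minimal quantization index modulation (QIM) sketch. The step size, the per-coefficient embedding, and the random stand-in for DCT coefficients are illustrative assumptions; the paper's actual DCT-domain algorithm, turbo coding, and 1024-bit expansion are not reproduced here.

```python
import numpy as np

DELTA = 0.1  # quantization step (assumed value, not from the paper)

def qim_embed(coeffs, bits, delta=DELTA):
    """Embed one bit per coefficient: bit 0 snaps to the lattice {k*delta},
    bit 1 to the shifted lattice {k*delta + delta/2}."""
    dither = np.where(bits, delta / 2, 0.0)
    return delta * np.round((coeffs - dither) / delta) + dither

def qim_extract(coeffs, delta=DELTA):
    """Recover each bit by finding the nearer of the two lattices."""
    d0 = np.abs(coeffs - delta * np.round(coeffs / delta))
    d1 = np.abs(coeffs - (delta * np.round((coeffs - delta / 2) / delta) + delta / 2))
    return (d1 < d0).astype(int)

rng = np.random.default_rng(0)
coeffs = rng.normal(size=72)           # stand-in for mid-band DCT coefficients
bits = rng.integers(0, 2, size=72)     # the 72-bit watermark payload
marked = qim_embed(coeffs, bits)
recovered = qim_extract(marked)
assert (recovered == bits).all()       # 2**72 ~ 4.7e21 distinct identifiers
```

Noise smaller than a quarter of the step size leaves each marked coefficient closer to its own lattice, which is why a coarser step trades audio quality for robustness.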

A Study on the Improvement for Medical Service Using Video Promotion Materials for PET/CT Scans (PET/CT 검사에서 동영상 홍보물을 통한 의료서비스 향상에 관한 연구)

  • Kim, Woo Hyun;Kim, Jung Seon;Ko, Hyun Soo;Sung, Ji Hye;Lee, Jeoung Eun
    • The Korean Journal of Nuclear Medicine Technology / v.17 no.1 / pp.30-35 / 2013
  • Purpose: Among current services, providing information to patients and their guardians through promotional materials induces positive responses and contributes to hospital reliability. The objective of this study is therefore to evaluate the effectiveness of audio-visual material, one means of promotion, as a way to give accurate medical information, resolve patients' curiosity about the purpose and procedure of their examination, and reduce complaints about waiting, which have a negative effect on service quality assessment. Materials and Methods: 60 patients (mean age $53.97{\pm}12.24$, male : female = 26 : 34) who had an $^{18}F$-FDG PET/CT scan from July 2012 to August 2012 at Seoul Asan Medical Center were enrolled in the study. All patients having a PET/CT scan were asked to watch an informative video before the injection of the radiopharmaceutical ($^{18}F$-FDG) and to fill in a questionnaire. Results: Analysis of the questionnaires showed that 52% of the 60 patients were having a PET/CT scan for the first time, and 72.4% had read the PET/CT guidebook offered by their outpatient department or inpatient ward before the scan. Asked about their prior knowledge of the purpose and method of the PET/CT scan, the patients answered "know well" (25.1%), "not sure" (34%), and "don't know" (40.9%). 84.7% of the patients answered that watching the PET/CT guide video before the injection helped them understand the examination they were having, while 15.3% did not. For the question asking whether they had ever used the hospital homepage or a smartphone QR code to see the guide video before visiting the PET center, only 3.3% answered "yes". Lastly, asked whether watching the video made the waiting time feel shorter, the patients answered "yes" (60.1%), "so so" (31.4%), and "no" (8.5%).
Conclusion: Understanding of the purpose and method of the PET/CT scan and the level of satisfaction improved after patients watched the guide video, regardless of whether they had had a PET/CT scan before or had read the guidebook. Watching the video was also effective in reducing the perceived waiting time. However, while displaying the PET/CT guide video is useful as a medical service for providing information about the scan and shortening the waiting time, actual utilization of the service was very poor because of passive promotion and patients' indifference to their examination. Therefore, it is necessary to build a healthcare system that can reach more patients through active promotion.


Framework of Stock Market Platform for Fine Wine Investment Using Consortium Blockchain (공유경제 체제로서 컨소시엄 블록체인을 활용한 와인투자 주식플랫폼 프레임워크)

  • Chung, Yunkyeong;Ha, Yeyoung;Lee, Hyein;Yang, Hee-Dong
    • Knowledge Management Research / v.21 no.3 / pp.45-65 / 2020
  • It is desirable to invest in wine that appreciates in value, but wine investment itself is unfamiliar in Korea, and because pricing in the wine market is done by a small number of people, the process is unreasonable and information is often forged. With the right solution, however, the wine market can be a desirable investment destination, in that the longer one invests, the higher the return one can expect; the domestic wine consumption market is also expected to expand through the steady increase in domestic wine imports. This study presents a consortium blockchain framework as that "right solution" for revitalizing the nation's wine investment market and enhancing its transparency. Blockchain governance can compensate for the shortcomings of the wine market because it guarantees desirable decision-making rights and accountability. Because consumers can check the data stored on the blockchain, the likelihood of counterfeit wine appearing is reduced and the unreasonable pricing process is corrected. In addition, digitization of assets resolves low cash liquidity, and smart contracts save money and time throughout the supply chain, lowering the barriers to entry for wine investment. In particular, if the governance of the blockchain is composed of 'chateau-distributor-investor' through a consortium blockchain, a desirable wine market can be created: the production process is stored on the blockchain to secure production costs and set a reasonable launch price; the distribution process is stored on the blockchain to operate the distribution system efficiently and to forecast order volumes for futures trading; and investors make rational decisions by viewing all of these data. The study presents a new perspective on alternative investment in that ownership can be traded like a share.
We also look forward to the simplification of food import procedures and the formation of trust within the wine industry through this framework for trading wine ownership. In future studies, we would like to expand the framework and study the areas to which it can be applied.
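The idea of recording each supply-chain step in a tamper-evident chain can be sketched in a few lines of Python. The record fields, the chateau/distributor events, and the single-hash-link design are illustrative assumptions; a real consortium blockchain would add consensus among the member nodes and smart-contract logic.

```python
import hashlib
import json

def make_block(prev_hash, record):
    """Append a production/distribution record, linking it to the previous block."""
    body = {"prev": prev_hash, "record": record}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {"hash": digest, **body}

def verify(chain):
    """Check that every block's stored hash matches its contents and its link."""
    for prev, cur in zip(chain, chain[1:]):
        if cur["prev"] != prev["hash"]:
            return False
        body = {"prev": cur["prev"], "record": cur["record"]}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != cur["hash"]:
            return False
    return True

# Hypothetical chateau -> distributor flow for one lot
genesis = make_block("0" * 64, {"actor": "chateau", "event": "bottled", "lot": "A1"})
b2 = make_block(genesis["hash"], {"actor": "distributor", "event": "shipped", "lot": "A1"})
assert verify([genesis, b2])
```

Any edit to a stored record changes its hash and breaks `verify`, which is the property that lets consumers check production and distribution data.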

A Study on the Effect of Using Sentiment Lexicon in Opinion Classification (오피니언 분류의 감성사전 활용효과에 대한 연구)

  • Kim, Seungwoo;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.133-148 / 2014
  • Recently, with the advent of various information channels, the volume of available information has continued to grow. The main cause of this phenomenon is the significant increase in unstructured data, as smart devices enable users to create data in the form of text, audio, images, and video. Among the various types of unstructured data, users' opinions and a variety of other information are clearly expressed in text data such as news, reports, papers, and articles, and active attempts have therefore been made to create new value by analyzing these texts. The representative techniques used in text analysis are text mining and opinion mining. These share certain important characteristics: both take text documents as input data, and both use many natural language processing techniques such as filtering and parsing. Opinion mining is therefore usually recognized as a sub-concept of text mining, and in many cases the two terms are used interchangeably in the literature. Suppose that the purpose of a certain classification analysis is to predict the positive or negative opinion contained in some documents. If we focus on the classification process, the analysis can be regarded as a traditional text mining case; if we observe that the target of the analysis is a positive or negative opinion, it can be regarded as a typical example of opinion mining. In other words, two methods, text mining and opinion mining, are available for opinion classification, so a precise definition of each is needed to distinguish between them. In this paper, we found it very difficult to distinguish the two methods clearly with respect to the purpose of analysis and the type of results, and we conclude that the most definitive criterion for distinguishing text mining from opinion mining is whether the analysis utilizes any kind of sentiment lexicon.
We first established two prediction models, one based on opinion mining and the other on text mining, compared the main processes used by the two models, and then compared their prediction accuracy on 2,000 movie reviews. The results revealed that the prediction model based on opinion mining showed higher average prediction accuracy than the text mining model. Moreover, in the lift chart generated by the opinion mining based model, prediction accuracy for documents with strong certainty was higher than for documents with weak certainty. Above all, opinion mining has the meaningful advantage of dramatically reducing learning time, because a sentiment lexicon generated once can be reused in a similar application domain; the classification results can also be clearly explained using the sentiment lexicon. This study has two limitations. First, the results of the experiments cannot be generalized, mainly because the experiment is limited to a small number of movie reviews. Second, various parameters in the parsing and filtering steps of the text mining may have affected the accuracy of the prediction models. Nevertheless, this research contributes a performance comparison of text mining and opinion mining for opinion classification. In future research, a more precise evaluation of the two methods should be made through intensive experiments.
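The defining step of the opinion mining approach, scoring documents against a sentiment lexicon, can be sketched as follows. The tiny lexicon and the whitespace tokenization are illustrative assumptions, not the lexicon or preprocessing used in the study.

```python
# Minimal lexicon-based polarity scoring (illustrative lexicon, not the study's)
SENTIMENT_LEXICON = {"great": 1, "excellent": 2, "boring": -1, "terrible": -2}

def score(review):
    """Sum lexicon polarities over tokens; the sign gives the predicted class."""
    tokens = review.lower().split()
    total = sum(SENTIMENT_LEXICON.get(t, 0) for t in tokens)
    if total > 0:
        return "positive"
    if total < 0:
        return "negative"
    return "neutral"

assert score("an excellent and great film") == "positive"
assert score("boring and terrible") == "negative"
```

Because the lexicon is built once and reused, no per-domain training pass is needed, which is the learning-time advantage the study attributes to opinion mining, and each prediction is explainable by the words that contributed to the score.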

A Study on Analysis of Components and Color Characteristics of History·Culture Streets - focused on Street of Gaya in Gimhae - (역사·문화가로의 구성요소 및 색채특성 분석 연구 - 김해시 가야의 거리를 중심으로 -)

  • An, Su Mi;Son, Kwang Ho;Choi, In Young
    • Korea Science and Art Forum / v.20 / pp.255-265 / 2015
  • History·culture streets are commonly understood as street environments that create local identity by drawing on a community's particular historical and cultural resources as well as its urban streets. To build such streets, the relevant fields first need to apply original design based on an understanding of those historical and cultural resources. Taking Street of Gaya in Gimhae as its research subject, this study examines the components and color characteristics of a history·culture street and seeks ways to create other streets of this kind. As a frame for understanding history·culture streets, the study's findings are significant in that they help the streets' value to be re-recognized and promoted. To achieve this goal, the study (1) extracted streetscape components from relevant previous research, (2) analyzed the current status of these components on Street of Gaya through field investigation, and (3) examined the color characteristics of each component. The findings are summarized as follows. (1) From a comprehensive point of view, the study categorized and subdivided the components of the history·culture street into nonphysical and physical elements. (2) The analysis of current status showed that Street of Gaya basically consists of historical and cultural remains and sculptures as well as street facilities. (3) The color investigation showed that the design plan for Street of Gaya had focused on harmony between the historical and cultural remains, which are regarded as natural components.
However, the study also found that if the relevant fields want to create a distinct identity in each section and deliver information efficiently, they should first prepare a smart design system that integrates each piece of the streetscape as a whole.

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.191-207 / 2021
  • Mobile communications have evolved rapidly over the decades, mainly focusing on speed-ups from 2G to 5G to meet growing data demands. With the start of the 5G era, efforts are being made to provide customers with services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our lives and industries as a whole. To deliver those services, reduced latency and high reliability are critical for real-time services, on top of high data speeds. 5G has accordingly paved the way for service delivery with a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of $10^6$ devices/km². In particular, in intelligent traffic control systems and services based on Vehicle to X (V2X) communication, such as traffic control, reduced delay and high reliability for real-time services are very important in addition to high data speeds. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting their use indoors; under existing networks it is difficult to overcome these constraints. The underlying centralized SDN also has limited capability for delay-sensitive services, because communication with many nodes overloads its processing. SDN, a structure that separates control plane signaling from data plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving; in such scenarios, the network architecture that handles in-vehicle information is a major variable in delay.
Since generally centralized SDN structures have difficulty meeting the desired delay level, studies on the optimal size of an SDN for information processing are needed. SDNs thus need to be split at a certain scale into a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of such networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In these SDN networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round trip delay (RTD), and the data processing time of the SDN are highly correlated with the overall delay. Of these, the RTD is not a significant factor because the link is fast enough and contributes less than 1 ms of delay, but the information change cycle and the SDN's data processing time greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case in point where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and analyze, through simulation, the correlation with the cell layer from which the vehicle should request relevant information according to the information flow. For the simulation, since the data rate of 5G is high enough, we assume that neighbor-vehicle support information reaches the car without errors. We further assumed 5G small cells with radii of 50 to 250 m and vehicle speeds of 30 to 200 km/h in order to examine the network architecture that minimizes delay.
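A rough feel for how quickly vehicles traverse the simulated cells, and hence how often cell-level information must be refreshed, can be derived from the cell radii and speeds stated above. The straight path through the full cell diameter is a simplifying assumption, not the paper's simulation model.

```python
def dwell_time_s(cell_radius_m, speed_kmh):
    """Seconds a vehicle spends inside one cell, assuming a straight path
    through the diameter at constant speed."""
    speed_ms = speed_kmh / 3.6          # convert km/h to m/s
    return 2 * cell_radius_m / speed_ms

# Extremes from the assumed parameter ranges (50-250 m radius, 30-200 km/h):
fastest = dwell_time_s(50, 200)   # smallest cell, fastest vehicle
slowest = dwell_time_s(250, 30)   # largest cell, slowest vehicle
```

At the fast extreme the vehicle crosses a cell in under two seconds, so handovers and information updates must complete within a small fraction of that budget, which is why the SDN's data processing time dominates the delay analysis.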

A Study on the Influence of IT Education Service Quality on Educational Satisfaction, Work Application Intention, and Recommendation Intention: Focusing on the Moderating Effects of Learner Position and Participation Motivation (IT교육 서비스품질이 교육만족도, 현업적용의도 및 추천의도에 미치는 영향에 관한 연구: 학습자 직위 및 참여동기의 조절효과를 중심으로)

  • Kang, Ryeo-Eun;Yang, Sung-Byung
    • Journal of Intelligence and Information Systems / v.23 no.4 / pp.169-196 / 2017
  • The fourth industrial revolution represents a revolutionary change in the business environment and its ecosystem through the fusion of Information Technology (IT) with other industries. In line with these changes, the Ministry of Employment and Labor of South Korea announced 'the Fourth Industrial Revolution Leader Training Program,' which includes five key support areas: (1) smart manufacturing, (2) the Internet of Things (IoT), (3) big data including Artificial Intelligence (AI), (4) information security, and (5) bio innovation. This program offers a glimpse of the South Korean government's efforts and willingness to cultivate leading human resources with advanced IT knowledge for various fusion-technology-related and newly emerging industries. To nurture excellent IT manpower in preparation for the fourth industrial revolution, the role of educational institutions capable of providing high-quality IT education services is of the utmost importance. These days, however, most IT educational institutions have had difficulty providing customized IT education services that meet the needs of consumers (i.e., learners), without breaking away from the traditional framework of supplier-oriented education services. Previous studies have found that learner-centered, customized education services lead to high learner satisfaction, and that higher satisfaction increases not only task performance and the possibility of business application but also learners' recommendation intention. However, since research has not yet considered both the antecedents and consequences of learner satisfaction in a comprehensive way, more empirical research is highly desirable.
With the advent of the fourth industrial revolution, rising interest in various convergence technologies utilizing information technology (IT) has brought a growing realization of the important role played by IT-related education services. Nevertheless, research on IT education service quality is relatively scarce, even though research on general education service quality and satisfaction has been actively conducted in various contexts. In this study, therefore, five dimensions of IT education service quality (tangibles, reliability, responsiveness, assurance, and empathy) are derived for the IT education context, based on the SERVPERF model and related previous studies. The effects of these IT education service quality factors on learners' educational satisfaction and on their work application and recommendation intentions are then examined. Furthermore, the moderating roles of learner position (practitioner group vs. manager group) and participation motivation (voluntary vs. involuntary participation) in these relationships are also investigated. In an analysis using the structural equation model (SEM) technique, based on a questionnaire given to 203 participants of IT education programs at an 'M' IT educational institution in Seoul, South Korea, tangibles, reliability, and assurance were found to have a significant effect on educational satisfaction, which in turn had a significant effect on both work application intention and recommendation intention. Moreover, learner position and participation motivation were found to partially moderate the relationship between the IT education service quality factors and educational satisfaction.
This study holds academic implications in that it is one of the first to apply the SERVPERF model (rather than the SERVQUAL model widely adopted by prior studies) to demonstrate the influence of IT education service quality on learners' educational satisfaction, work application intention, and recommendation intention in an IT education environment. The results are expected to provide practical guidance for IT education service providers who wish to enhance learners' educational satisfaction and service management efficiency.

A Methodology of Customer Churn Prediction based on Two-Dimensional Loyalty Segmentation (이차원 고객충성도 세그먼트 기반의 고객이탈예측 방법론)

  • Kim, Hyung Su;Hong, Seung Woo
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.111-126 / 2020
  • Most industries, exposed to a competitive environment, have recently become aware of the importance of customer lifetime value. As a result, preventing customer churn is becoming a more important business issue than acquiring new customers: retaining existing customers is far more economical, and in fact the acquisition cost of a new customer is known to be five to six times the cost of retaining an existing one. Companies that effectively prevent customer churn and improve retention rates are known not only to increase profitability but also to improve their brand image through higher customer satisfaction. Customer churn prediction, long conducted as a sub-research area of CRM, has recently become more important as a big-data-based performance marketing theme owing to the development of business machine learning technology. Until now, research on churn prediction has been carried out actively in sectors such as mobile telecommunications, finance, distribution, and gaming, which are highly competitive and where churn management is urgent. These studies focused on improving the performance of the churn prediction model itself, for example by comparing the performance of various models, exploring features that are effective in forecasting churn, or developing new ensemble techniques, and they were limited in practical terms because most treated the entire customer base as a single group when developing the predictive model. In short, the main purpose of existing research was to improve the predictive model itself, and there was a relative lack of research on improving the overall churn prediction process.
In practice, customers exhibit different behavioral characteristics due to heterogeneous transaction patterns, and their churn rates differ accordingly, so it is unreasonable to treat all customers as a single group. It is therefore desirable to segment customers according to classification criteria such as loyalty and to operate an appropriate churn prediction model for each segment. Some studies have subdivided customers using clustering techniques and applied a churn prediction model to each group. Although this can produce better predictions than a single model for the entire customer population, there is still room for improvement, since clustering is a mechanical, exploratory grouping technique that calculates distances from inputs and does not reflect the strategic intent of the firm, such as loyalty. Assuming that successful churn management is better achieved by improving the overall process than by tuning the model itself, this study proposes a segment-based churn prediction process (CCP/2DL: Customer Churn Prediction based on Two-Dimensional Loyalty segmentation). CCP/2DL is a series of churn prediction steps that segment customers along two dimensions of loyalty, quantitative and qualitative, conduct a secondary grouping of the segments according to churn patterns, and then independently apply heterogeneous churn prediction models to each churn pattern group. To assess the relative merit of the proposed process, performance comparisons were performed against the most commonly applied general churn prediction process and a clustering-based churn prediction process.
The general churn prediction process used in this study refers to the most common approach of treating all customers as a single group and applying a machine learning model, while the clustering-based churn prediction process first segments customers with clustering techniques and then implements a churn prediction model for each group. In an experiment conducted in cooperation with a global NGO, the proposed CCP/2DL showed better performance than the other methodologies. This churn prediction process is not only effective in predicting churn but can also serve as a strategic basis for obtaining a variety of customer insights and carrying out other performance marketing activities.
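The overall shape of the CCP/2DL process, segmenting customers on two loyalty dimensions and then fitting an independent model per segment, can be sketched as follows. The threshold-based segmentation, the field names, and the trivial majority-class model are illustrative assumptions; the study's actual loyalty criteria, churn-pattern grouping step, and machine learning models are not reproduced.

```python
from collections import Counter

def segment(customer):
    """Place a customer on a 2x2 grid of quantitative and qualitative loyalty.
    Thresholds are hypothetical."""
    q = "highQ" if customer["purchase_freq"] >= 10 else "lowQ"   # quantitative axis
    s = "highS" if customer["satisfaction"] >= 4 else "lowS"     # qualitative axis
    return (q, s)

def fit_per_segment(customers):
    """Fit one (here: majority-class) churn model per loyalty segment."""
    by_seg = {}
    for c in customers:
        by_seg.setdefault(segment(c), []).append(c["churned"])
    return {seg: Counter(labels).most_common(1)[0][0] for seg, labels in by_seg.items()}

def predict(models, customer):
    """Route the customer to its segment's model; default to 'retained' if unseen."""
    return models.get(segment(customer), 0)

train = [
    {"purchase_freq": 12, "satisfaction": 5, "churned": 0},
    {"purchase_freq": 11, "satisfaction": 4, "churned": 0},
    {"purchase_freq": 2,  "satisfaction": 1, "churned": 1},
    {"purchase_freq": 1,  "satisfaction": 2, "churned": 1},
]
models = fit_per_segment(train)
```

Replacing the majority-class stub with a real classifier per segment recovers the structure the process describes: each heterogeneous customer group gets its own model instead of one model for the whole base.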