• Title/Summary/Keyword: kindle

Search Results: 15

A Study on the Self Configurable Smart E-Book Recognizing User's Look based on Face Recognition (얼굴인식을 이용한 사용자 표정감지 스마트 전자책 연구)

  • Cha, Jiyoon;Kim, Injae;Shin, Yurim;Lim, Gyumin;Yun, Sunghyun
    • Proceedings of the Korea Information Processing Society Conference / 2016.10a / pp.566-567 / 2016
  • With the recent advance of smart devices, various e-book viewers have emerged alongside paper books. Tablets and e-readers such as Apple's iPad and Amazon's Kindle, as well as smartphones, are representative examples, and as the number of smart-device users grows, the e-book market centered on smartphones rather than dedicated readers is also growing rapidly. However, because seniors and users who find smart devices difficult struggle to use e-books, e-books still show a lower usage rate than paper books. This paper proposes a method that automatically changes e-book settings according to the user's facial expression. The proposed method detects the face using the OpenCV library and detects the shape of the user's eyes with the Haar-like feature technique. When the eyes are closed or squinting, the font size and screen are adjusted automatically to match.
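The pipeline the abstract describes (face detection with OpenCV, eye-shape detection with Haar-like features, then automatic adjustment of font size and screen) can be sketched as below. This is a minimal sketch, not the authors' implementation: the squint threshold, the font sizes, and the settings mapping are illustrative assumptions, since the paper does not publish concrete values.

```python
# Sketch of the paper's idea: detect the face and eyes with OpenCV's
# bundled Haar cascades, then adjust e-book settings from the eye state.
# SQUINT_RATIO, the font sizes, and the settings mapping are assumptions.
SQUINT_RATIO = 0.25      # eye box height/width below this => squinting
DEFAULT_FONT_PT = 12

def adjust_settings(eyes_detected, squinting):
    """Map the detected eye state to e-book display settings."""
    if not eyes_detected:                     # eyes closed: dim the screen
        return {"font_pt": DEFAULT_FONT_PT, "screen": "dim"}
    if squinting:                             # squinting: enlarge the font
        return {"font_pt": int(DEFAULT_FONT_PT * 1.5), "screen": "bright"}
    return {"font_pt": DEFAULT_FONT_PT, "screen": "normal"}

def detect_eye_state(frame_bgr):
    """Return (eyes_detected, squinting) for one camera frame."""
    import cv2  # deferred so the settings logic above runs without OpenCV

    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    face_cc = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cc = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")
    for (x, y, w, h) in face_cc.detectMultiScale(gray, 1.3, 5):
        eyes = eye_cc.detectMultiScale(gray[y:y + h, x:x + w])
        if len(eyes) == 0:                    # face found, eyes not: closed
            return False, False
        squint = any(eh / ew < SQUINT_RATIO for (_, _, ew, eh) in eyes)
        return True, squint
    return False, False                       # no face in the frame
```

In a viewer loop, each camera frame would pass through `detect_eye_state` and the result through `adjust_settings` before the page is re-rendered.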

The Qualities of Molded Charcoal for Kindling Molded-Coal-Briquette (구멍탄착화용 성형탄의 품질)

  • Jo Jae-myeong;Kim Young-nyon;Kim Suk-goo;Cho Sung-taig;Kong Young-to
    • Journal of Korea Forestry Energy / v.1 no.2 / pp.28-33 / 1981
  • To survey the present qualities of molded charcoal and to establish a quality baseline for its manufacture, charcoal collected from 27 makers across the nation was examined. The molded charcoal examined in this paper, made by carbonizing and molding sawdust from wood industries, is widely used to kindle holed coal briquettes. The holed coal briquette is used for cooking and heating as the primary energy source of ordinary households in this country. The average qualities of the molded charcoal were as follows: ash content 13.95%, weight 184.6 g, density 0.47, time to kindle a holed coal briquette 65.4 min, calorific value 5,790 kcal/kg. Ten makers, 37 percent of the 27 examined, produced inferior products. The quality baseline for molded charcoal was defined as follows: ash content below 17%, weight above 175 g, falling strength above 300 mm, calorific value above 5,500 kcal/kg.


Integration of Products and Services of Korean Firms and Innovation Policy Directions

  • Jang, Pyoung Yol
    • STI Policy Review / v.3 no.2 / pp.111-129 / 2012
  • The integration of products and services is expanding in both manufacturing and service companies, as seen in Apple's iPod & iTunes, Amazon's Kindle, and Hyundai Motor Company's Mozen. This phenomenon has recently accelerated due to multiple factors, including market change, shrinking differences in the quality of products or services, the paradigm of participation and sharing, and deindustrialization and the evolution toward a service economy. The objective of this paper is to investigate and analyze the status and characteristics of the integration of products and services in Korean firms and to suggest policy directions for promoting this integration. Toward this purpose, income statements from the Korea Listed Companies Association (KLCA) database of companies listed on the Korea Stock Exchange are analyzed with respect to the servitization of manufacturing firms and the productization of service firms. In addition, this research investigates the Korean Innovation Survey 2011 database for the service sector and the 2010 database for the manufacturing sector in order to evaluate R&D activity in each. In the manufacturing sector, the average ratio of service sales (servitization) was low at 0.208, with bias in the level and distribution of ratios across sectors; 18 of the 23 sectors (78%) show low servitization, indicating that servitization in the Korean manufacturing sector still has a long way to go. In the service sector, the average ratio of product sales (productization) was 9.53%, relatively high compared to that of the manufacturing sector, but the distribution of ratios is similarly biased. Based on this analysis, policy directions are proposed in terms of 1) R&D, 2) concept boost, 3) spread of R&D results, 4) statistics, 5) infrastructure, and 6) green growth.

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.105-122 / 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. For dimensionality reduction, we should consider the density of the data, which has a significant influence on the performance of sentence classification. Higher-dimensional data requires more computation and can eventually cause high computational cost and overfitting in the model, so a dimension-reduction step is necessary to improve model performance. Diverse methods have been proposed, from merely lessening noise in the data, such as misspellings or informal text, to incorporating semantic and syntactic information. On top of that, the representation and selection of text features affect the performance of classifiers for sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in the observation space. Existing methods utilize various algorithms for dimensionality reduction, such as feature extraction and feature selection. Beyond these algorithms, word embeddings, which learn low-dimensional vector-space representations of words that capture semantic and syntactic information, are also utilized. To improve performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations: once a feature-selection algorithm marks certain words as unimportant, we assume that words similar to those words likewise have no impact on sentence classification. This study proposes two ways to achieve more accurate classification, conducting selective word elimination under specific rules and constructing word embeddings based on Word2Vec.
To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words with comparatively low information gain values from the raw text and form word embeddings. Second, we additionally select words that are similar to the words with low information gain values and build word embeddings. Finally, the filtered text and word embeddings are fed to two deep learning models, a Convolutional Neural Network and an Attention-Based Bidirectional LSTM. This study uses customer reviews of the Kindle on Amazon.com, IMDB, and Yelp as datasets and classifies each dataset with the deep learning models. Reviews that received more than five helpful votes and whose ratio of helpful votes exceeded 70% were classified as helpful reviews. Because Yelp shows only the number of helpful votes, we extracted 100,000 reviews that received more than five helpful votes by random sampling from 750,000 reviews. Minimal preprocessing was applied to each dataset, such as removing numbers and special characters from the text. To evaluate the proposed methods, we compared their performance against Word2Vec and GloVe embeddings that use all the words, and showed that one of the proposed methods outperforms the embeddings using all the words: removing unimportant words improves performance, although removing too many words lowers it. Future research should consider diverse preprocessing strategies and an in-depth analysis of word co-occurrence for measuring similarity among words. Also, we applied the proposed method only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo can be combined with the proposed elimination methods, making it possible to identify viable combinations of word embedding methods and elimination methods.
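The two-step elimination the abstract describes (information-gain filtering, then similarity-based expansion of the removal set) can be sketched in plain Python. This is an illustrative sketch, not the authors' code: the cutoff values and the small dictionary of vectors standing in for a trained Word2Vec model are assumptions.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a non-empty list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(docs, labels, word):
    """IG of `word` for the class label: H(Y) - H(Y | word present/absent).
    Each document in `docs` is a set of words."""
    base = entropy(labels)
    n = len(labels)
    cond = 0.0
    for part in ([y for d, y in zip(docs, labels) if word in d],
                 [y for d, y in zip(docs, labels) if word not in d]):
        if part:
            cond += len(part) / n * entropy(part)
    return base - cond

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def words_to_remove(docs, labels, embeddings, ig_cutoff, sim_cutoff):
    """Step 1: drop words whose information gain is below ig_cutoff.
    Step 2 (the paper's extension): also drop surviving words whose
    embedding is highly similar to an already-dropped word."""
    vocab = {w for d in docs for w in d}
    low_ig = {w for w in vocab
              if information_gain(docs, labels, w) < ig_cutoff}
    similar = {w for w in vocab - low_ig
               if any(w in embeddings and u in embeddings
                      and cosine(embeddings[w], embeddings[u]) > sim_cutoff
                      for u in low_ig)}
    return low_ig | similar
```

On four toy reviews where "good"/"bad" determine the label, "movie" and "book" carry zero information gain and are eliminated, while the class-bearing words survive; in a real pipeline the embeddings would come from a Word2Vec model trained on the corpus.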

A Study of ePub-based Standard Framework Supporting Mutual Comparability of eBook DRM (전자책 DRM의 상호호환성을 지원하는 ePub 기반 표준 프레임워크에 관한 연구)

  • Kang, Ho-Gap;Kim, Tae-Hyun;Yoon, Hee-Don;Cho, Seong-Hwan
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.11 no.6 / pp.235-245 / 2011
  • EBooks are electronic versions of books, accessible via the internet in the form of digital text. In recent years, Amazon's Kindle e-book business has demonstrated the potential of the e-book market, leading other companies to launch e-book services such as Google's eBook store and Apple's iPad with its e-book service; the e-book market is finally showing substantial growth. Although the issue of technical support for e-book copyright protection has emerged from this fast-growing marketplace, current commercial DRM technologies for protecting e-book copyright still suffer from a lack of interoperability. Under the current technical conditions, the DRM compatibility problems that already occurred in the music DRM environment would also arise in the e-book environment. This study suggests a standard framework to support eBook DRM compatibility. When development of the standard reference software for eBook DRM compatibility is completed, the sources will be registered as shareware and opened to the public.