• Title/Summary/Keyword: high K


Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper presents a solution for handling membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to a pre-defined shape. These kinds of functions are able to cover a large spectrum of applications with limited memory usage, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to the computation of intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions at such points [3,10,14,15]. Such a solution provides satisfying computational speed, very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can also occur: it is indeed possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on such hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common in fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions, and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a memory word is defined by: Length = nfm * (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values for any element of the universe of discourse, dm(m) is the dimension (in bits) of a membership value, and dm(fm) is the dimension of the word representing the index of the corresponding membership function. In our case, Length = 3 * (5 + 3) = 24. The memory dimension is therefore 128*24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of every fuzzy set; the fuzzy-set word dimension would then be 8*5 bits.
Therefore, the dimension of the memory would have been 128*40 bits. Consistently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets. Focusing on elements 32, 64, and 96 of the universe of discourse, they are memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derives from the rule and is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is in this way reduced, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value for each element of the universe of discourse. From our studies in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization process of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of such parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; weight computations are done by a combinatorial network, and therefore the time performance of the system is equivalent to that of the vectorial method; the number of non-null membership values on any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10]. (A memory-sizing sketch for these example parameters follows below.)
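As a rough illustration of the sizing argument above, the following Python sketch reproduces the word-length and memory-dimension arithmetic for the example parameters (128-element universe, 8 fuzzy sets, 32 truth levels, at most 3 non-null memberships per element). The helper function is hypothetical and not part of the paper; it only restates the formula Length = nfm * (dm(m) + dm(fm)).

```python
import math

def word_length(n_truth_levels: int, n_fuzzy_sets: int, nfm: int) -> int:
    """Bits per memory word: nfm * (dm(m) + dm(fm))."""
    dm_m = math.ceil(math.log2(n_truth_levels))   # bits per membership value
    dm_fm = math.ceil(math.log2(n_fuzzy_sets))    # bits for the fuzzy-set index
    return nfm * (dm_m + dm_fm)

U = 128           # elements in the universe of discourse
truth_levels = 32
fuzzy_sets = 8
nfm = 3           # at most 3 non-null memberships per element (the paper's hypothesis)

length = word_length(truth_levels, fuzzy_sets, nfm)                   # 3 * (5 + 3) = 24 bits
proposed_bits = U * length                                             # 128 * 24 = 3072 bits
vectorial_bits = U * fuzzy_sets * math.ceil(math.log2(truth_levels))   # 128 * 40 = 5120 bits

print(length, proposed_bits, vectorial_bits)
```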


The Royal and Sajik Tree of Joseon Dynasty, the Culturo-social Forestry, and Cultural Sustainability (근세조선의 왕목-사직수, 문화사회적 임업, 그리고 문화적 지속가능성)

  • Yi, Cheong-Ho;Chun, Young Woo
    • Journal of Korean Society of Forest Science
    • /
    • v.98 no.1
    • /
    • pp.66-81
    • /
    • 2009
  • From a new perspective of "humans and the culture of forming and conserving the environment", sustainable forest management can be reformulated under the concept of "cultural sustainability". Cultural sustainability emphasizes the high contribution to sustainability of the culture of forming and conserving the environment. This study extracts implications for cultural sustainability in the modern world by investigating a historical case of culturo-social pine forestry in the Joseon period of Korea. In the legendary and recorded acts of the first king Taejo, Seonggye Yi, Korean red pine (Pinus densiflora) was the "Royal tree" of Joseon and also the "Sajik tree" related intimately to the Great Sajik Ritual, valued as the top rank within the national ritual regime that sustained the Royal Virtue Politics of Confucian political ideology. Into the Neo-Confucian faith and royal rituals of Joseon, elements of geomancy (feng shui), folk religion, and Buddhism had been amalgamated. The deities worshipped or revered at the Sajik shrine were the Earth god (Sa) and the crop god (Jik), and it is the Earth god and its concrete entity, the Sajik tree, that carried the legacy of sylvan religion descended from ancient times and incorporated into the Confucian faith and ritual regime. Korean red pine as the Royal-Sajik tree played a critical role in sustaining the religio-political justification for the rule of the Joseon royalty. The religio-political symbolism of Korean red pine was represented in diverse ways. The same pine was used as the timber for shrine buildings established for the national rituals under Neo-Confucian faith by the royal court of the Joseon kingdom before modern Korea. The symbolic role of pine was also expressed in the forms of royal tomb forests, the Imposition Forest (Bongsan) for royal coffin timber (Whangjangmok), and the creation, protection, conservation, and bureaucratic management of the pine forests in the Inner-four and Outer-four mountains around the capital fortress at Seoul, where the king and his family resided. The religio-political management system of pine forests parallels the kingdom's economic forest management system, called the "Pine Policy", with an array of pine cultivation forests and Prohibition Forests (Geumsan) in the earlier period, and Imposition Forests in the later period. The royal pine culture, together with the economic forest management system, influenced public consciousness, and the common people seem to have coined Malrimgat, a pure Korean word interchangeable with the Chinese-character words for prohibition-cultivation land or forest (禁養地, 禁養林), practiced in the royal tomb forests and the Prohibition and Imposition Forests, which carried prohibition landmarks (Geumpyo) made of stone and rock on their boundaries. A culturo-social forestry, comprising the Sajik altar, the royal tomb forests, the Whangjang pine Prohibition and Imposition Forests, and the capital's Inner-four and Outer-four mountain forests, was being put into practice in Joseon. In the Joseon dynasty, the Neo-Confucian faith and royal rituals, with geomancy, folk religion, and Buddhism incorporated, also played a critical humanistic role for the culturo-social pine forestry, one higher in value than that of the economic pine forestry.
The following implications have been extracted from the historical case study of the Royal-Sajik tree and the culturo-social forestry of Joseon: cultural sustainability, in which the interaction between humans and the environment maintains a long-term culturo-natural equilibrium over many generations, emphasizes that modern humans who form and conserve the environment need to rediscover their culturo-natural legacy, transform it into conservation for many generations, and produce knowledge of sustainability science, the transdisciplinary knowledge of the interaction between environment and humans, which fulfills cultural, social, and spiritual needs.

Importance-Performance Analysis of Quality Attributes of Coffee Shops and a Comparison of Coffee Shop Visits between Koreans and Mongolians (한국인과 몽골인의 커피전문점 품질 속성에 대한 중요도-수행도 분석 및 커피전문점 이용 현황 비교)

  • Jo, Mi-Na;Purevsuren, Bolorerdene
    • Journal of the Korean Society of Food Science and Nutrition
    • /
    • v.42 no.9
    • /
    • pp.1499-1512
    • /
    • 2013
  • The purpose of this study was to compare the coffee shop visits of Koreans and Mongolians, and to determine the quality attributes that should be managed, using Importance-Performance Analysis (IPA). The survey was conducted in Seoul and the Gyeonggi Province of Korea, and in Ulaanbaatar, Mongolia, from April to May 2012. The questionnaire was distributed to 380 Koreans and 380 Mongolians, with 253 and 250 responses from the Koreans and Mongolians, respectively, used for statistical analyses. From the results, Koreans visited coffee shops more frequently than Mongolians, with both groups mainly visiting a coffee shop with friends. Koreans also spent more time in a coffee shop than Mongolians, and generally used a coffee shop regardless of the time of day. In terms of coffee preference, Koreans preferred Americano and Mongolians preferred Espresso. The most frequently stated purpose of Koreans for visiting a coffee shop was to rest, while Mongolians typically visited to drink coffee. The price range respondents generally spent on coffee was 4~8 thousand won for the Koreans and 2~4 thousand won for the Mongolians. Both Koreans and Mongolians obtained information about coffee shops from recommendations. According to the IPA results for 20 quality attributes of coffee shops, the selection attributes with high importance but low satisfaction were quality, price, and kindness for Koreans, whereas no such attributes were found for Mongolians. (A minimal IPA quadrant sketch follows below.)
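The quadrant logic behind IPA can be summarized in a few lines. The sketch below uses invented attribute ratings purely for illustration; the attribute names echo the abstract, but the numbers are not the study's data.

```python
import numpy as np

# Hypothetical mean ratings per attribute (not the survey results).
attributes = ["quality", "price", "kindness", "atmosphere"]
importance = np.array([4.6, 4.4, 4.5, 3.8])    # mean importance
performance = np.array([3.2, 3.0, 3.4, 4.1])   # mean performance (satisfaction)

imp_cut, perf_cut = importance.mean(), performance.mean()  # grand means split the grid
for name, imp, perf in zip(attributes, importance, performance):
    if imp >= imp_cut and perf < perf_cut:
        quadrant = "Concentrate here (high importance, low performance)"
    elif imp >= imp_cut:
        quadrant = "Keep up the good work"
    elif perf >= perf_cut:
        quadrant = "Possible overkill"
    else:
        quadrant = "Low priority"
    print(f"{name}: {quadrant}")
```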

Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.19-43
    • /
    • 2014
  • The Pareto principle, also known as the 80-20 rule, states that roughly 80% of the effects come from 20% of the causes for many events, including natural phenomena. It has been recognized as a golden rule in business, with wide application of such findings as 20 percent of customers accounting for 80 percent of total sales. On the other hand, the Long Tail theory, pointing out that "the trivial many" produces more value than "the vital few," has gained popularity in recent times with a tremendous reduction of distribution and inventory costs through the development of ICT (Information and Communication Technology). This study started with a view to illuminating how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration. The importance of virtual knowledge collaboration is soaring in this era of globalization and virtualization transcending geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and resolve the social dilemma caused by the fact that rational individuals are likely to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge, but also by transforming and integrating such knowledge. From this perspective of knowledge collaboration, the relative distribution of knowledge sharing among participants can count as much as the absolute amount of individual knowledge sharing. In particular, whether a greater contribution by the upper 20 percent of participants in knowledge sharing will enhance the efficiency of overall knowledge collaboration is an issue of interest. This study deals with the effect of this sort of knowledge-sharing distribution on the efficiency of knowledge collaboration and is extended to reflect the work characteristics. All analyses were conducted on actual data instead of self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of editors of 2,978 English Wikipedia featured articles, which are the best quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions of the upper 20 percent of participants to the total number of knowledge contributions made by all participants of an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents the inequality of income among a group of people, was applied to reveal the effect of inequality of knowledge contribution. Hypotheses were set up based on the assumption that a higher ratio of knowledge contribution by more highly motivated participants will lead to higher collaboration efficiency, but that if the ratio gets too high, collaboration efficiency will deteriorate because overall informational diversity is threatened and knowledge contribution by less motivated participants is discouraged. Cox regression models were formulated for each of the focal variables (Pareto ratio and Gini coefficient) with seven control variables, such as the number of editors involved in an article, the average time length between successive edits of an article, the number of sections a featured article has, etc. The dependent variable of the Cox models is the time spent from article initiation to promotion to the featured article level, indicating the efficiency of knowledge collaboration.
To examine whether the effects of the focal variables vary depending on the characteristics of a group task, we classified the 2,978 featured articles into two categories: academic and non-academic. Academic articles are those that reference at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal. We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, the curvilinear effect of the Pareto ratio and the inequality of knowledge sharing on collaboration efficiency is more pronounced for more academic tasks in an online community. (A sketch of how the two focal variables are computed follows below.)
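The two focal variables can be computed directly from per-editor contribution counts. The sketch below uses a made-up edit distribution for a single article; the functions are illustrative implementations of the standard definitions, not the authors' code.

```python
import numpy as np

# Hypothetical number of edits per participant for one article (not Wikipedia data).
edits = np.array([1, 2, 2, 3, 5, 8, 15, 40, 60, 120])

def pareto_ratio(counts: np.ndarray) -> float:
    """Share of all contributions made by the top 20% of contributors."""
    sorted_desc = np.sort(counts)[::-1]
    top_n = max(1, int(np.ceil(0.2 * len(counts))))
    return sorted_desc[:top_n].sum() / counts.sum()

def gini(counts: np.ndarray) -> float:
    """Gini coefficient of the contribution distribution (0 = equal, ~1 = maximal inequality)."""
    x = np.sort(counts).astype(float)       # ascending order
    n = len(x)
    cum = np.cumsum(x)
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

print(pareto_ratio(edits), gini(edits))
```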

Construction of Consumer Confidence index based on Sentiment analysis using News articles (뉴스기사를 이용한 소비자의 경기심리지수 생성)

  • Song, Minchae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.1-27
    • /
    • 2017
  • It is known that the economic sentiment index and macroeconomic indicators are closely related because economic agents' judgments and forecasts of business conditions affect economic fluctuations. For this reason, consumer sentiment or confidence provides steady fodder for business and is treated as an important piece of economic information. In Korea, private consumption and the consumer sentiment index are closely linked, making the index a very important economic indicator for evaluating and forecasting the domestic economic situation. However, despite offering relevant insights into private consumption and GDP, the traditional approach to measuring consumer confidence based on surveys has several limits. One possible weakness is that it takes considerable time to research, collect, and aggregate the data. If certain urgent issues arise, timely information will not be announced until the end of each month. In addition, the survey only contains information derived from questionnaire items, which means it can be difficult to capture the direct effects of newly arising issues. The survey also faces potential declines in response rates and erroneous responses. Therefore, it is necessary to find a way to complement it. For this purpose, we construct and assess an index designed to measure consumer economic sentiment using sentiment analysis. Unlike the survey-based measures, our index relies on textual analysis to extract sentiment from economic and financial news articles. In particular, text data such as news articles and SNS posts are timely and cover a wide range of issues; because such sources can quickly capture the economic impact of specific economic issues, they have great potential as economic indicators. There exist two main approaches to the automatic extraction of sentiment from text; we apply the lexicon-based approach, using sentiment lexicon dictionaries of words annotated with their semantic orientations. In creating the sentiment lexicon dictionaries, we enter the semantic orientation of individual words manually, though we do not attempt a full linguistic analysis (one that involves analysis of word senses or argument structure); this is a limitation of our research, and further work in that direction remains possible. In this study, we generate a time-series index of economic sentiment in the news. The construction of the index consists of three broad steps: (1) collecting a large corpus of economic news articles on the web, (2) applying lexicon-based methods for sentiment analysis of each article to score the article in terms of sentiment orientation (positive, negative, or neutral), and (3) constructing an economic sentiment index of consumers by aggregating monthly time series for each sentiment word. In line with existing scholarly assessments of the relationship between the consumer confidence index and macroeconomic indicators, any new index should be assessed for its usefulness. We examine the new index's usefulness by comparing other economic indicators to the CSI. To check the usefulness of the new index based on sentiment analysis, trend and cross-correlation analyses are carried out to analyze the relations and lagged structure. Finally, we analyze the forecasting power using one-step-ahead out-of-sample prediction. As a result, the news sentiment index correlates strongly with related contemporaneous key indicators in almost all experiments. We also find that news sentiment shocks predict future economic activity in most cases.
In head-to-head comparisons, the news sentiment measures outperform the survey-based sentiment index (CSI). Policy makers want to understand consumer and public opinions about existing or proposed policies, and monitoring various web media, SNS, and news articles for such opinions enables relevant government decision-makers to respond quickly. Although research using unstructured data in economic analysis is in its early stages, the utilization of such data is expected to increase greatly once its usefulness is confirmed. (A minimal sketch of the lexicon-based scoring and monthly aggregation follows below.)
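A minimal sketch of the lexicon-based scoring and monthly aggregation described above is given below. The lexicon and articles are toy examples, not the dictionaries or news corpus built in the study.

```python
import pandas as pd

# Toy sentiment lexicon (illustrative only).
positive = {"growth", "recovery", "increase", "surplus"}
negative = {"recession", "decline", "unemployment", "deficit"}

def score(text: str) -> int:
    """Article polarity: +1 positive, -1 negative, 0 neutral, from lexicon word counts."""
    tokens = text.lower().split()
    pos = sum(t in positive for t in tokens)
    neg = sum(t in negative for t in tokens)
    return (pos > neg) - (pos < neg)

articles = pd.DataFrame({
    "date": pd.to_datetime(["2017-01-05", "2017-01-20", "2017-02-03"]),
    "text": ["exports show strong growth and recovery",
             "unemployment rises as consumption shows decline",
             "trade surplus supports an increase in demand"],
})
articles["polarity"] = articles["text"].apply(score)

# Monthly index: average polarity per month, scaled to 100.
monthly_index = articles.set_index("date").resample("M")["polarity"].mean() * 100
print(monthly_index)
```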

Effect of Hydrogen Peroxide Enema on Recovery of Carbon Monoxide Poisoning (과산화수소 관장이 급성 일산화탄소중독의 회복에 미치는 영향)

  • Park, Won-Kyun;Chae, E-Up
    • The Korean Journal of Physiology
    • /
    • v.20 no.1
    • /
    • pp.53-63
    • /
    • 1986
  • Carbon monoxide (CO) poisoning has been one of the major environmental problems because of the tissue hypoxia, especially brain tissue hypoxia, caused by the great affinity of CO for hemoglobin. Inhalation of pure oxygen ($O_2$) under high atmospheric pressure has been considered the best treatment for CO poisoning, since it supplies $O_2$ to hypoxic tissues in the form dissolved in plasma and also rapidly eliminates CO from carboxyhemoglobin (HbCO). Hydrogen peroxide ($H_2O_2$) is rapidly decomposed to water and $O_2$ in the presence of catalase in the blood, but intravenous administration of $H_2O_2$ is hazardous because of the formation of methemoglobin and air embolism. However, it has been reported that an enema of $H_2O_2$ solution below 0.75% can continuously supply $O_2$ to hypoxic tissues without the hazards mentioned above. This study was performed to evaluate the effect of $H_2O_2$ enema on the elimination of CO from HbCO during recovery from acute CO poisoning. Rabbits weighing about 2.0 kg were exposed to a CO gas mixture with room air for 30 minutes. After the acute CO poisoning, 30 rabbits were divided into three groups according to the recovery condition. The first group was exposed to room air, and the second group inhaled 100% $O_2$ at 1 atmosphere. The third group was administered 10 ml of 0.5% $H_2O_2$ solution per kg body weight by enema immediately after CO poisoning and was exposed to room air during the recovery period. Arterial blood was sampled before and after CO poisoning and at 15, 30, 60 and 90 minutes of the recovery period. The blood pH, $Pco_2$ and $Po_2$ were measured anaerobically with a blood gas analyzer, and the saturation percentage of HbCO was measured by the spectrophotometric method. The effect of $H_2O_2$ enema on the recovery from acute CO poisoning was observed and compared with the room air group and the 100% $O_2$ inhalation group. The results obtained from the experiment are as follows: The pH of arterial blood was significantly decreased after CO poisoning and until the first 15 minutes of the recovery period in all groups. Thereafter, it slowly increased toward the level before CO poisoning, but the recovery of pH in the $H_2O_2$ enema group was more delayed than in the other groups during the recovery period. $Paco_2$ was significantly decreased after CO poisoning in all groups. During the recovery period, $Paco_2$ of the room air group completely recovered to the level before CO poisoning, but that of the 100% $O_2$ inhalation group and the $H_2O_2$ enema group had not recovered by 90 minutes of the recovery period. $Pao_2$ was slightly decreased after CO poisoning. During the recovery period, it markedly increased in the first 15 minutes and was maintained above the level before CO poisoning in all groups. Furthermore, $Pao_2$ of the $H_2O_2$ enema group was 102 to 107 mmHg, about 10 mmHg higher than that of the room air group during the recovery period. The saturation percentage of HbCO increased to the range of 54 to 72 percent after CO poisoning and generally diminished during the recovery period.
However, in the $H_2O_2$ enema group the diminution of the saturation percentage of HbCO was generally faster than in the 100% $O_2$ inhalation group and the room air group, and its diminution in the 100% $O_2$ inhalation group was also slightly faster than in the room air group at the relatively later times of the recovery period. In conclusion, the enema of 0.5% $H_2O_2$ solution seems to facilitate the elimination of CO from HbCO in the blood and to simultaneously increase $Pao_2$ during the recovery period of acute CO poisoning.


Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Mode (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.141-154
    • /
    • 2019
  • Internet technology and social media are growing rapidly. Data mining technology has evolved to enable unstructured document representations in a variety of applications. Sentiment analysis is an important technology that can distinguish poor from high-quality content through product text data, and it has proliferated within text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. This has been studied in various directions in terms of accuracy, from simple rule-based to dictionary-based approaches using predefined labels. In fact, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Real online reviews are not only easy to collect openly, but they also affect business. In marketing, real-world information from customers is gathered on websites, not through surveys. Whether a website's posts are positive or negative is reflected in sales, so firms try to identify this information. However, reviews on a website are not always well written and can be difficult to classify. Earlier studies in this research area used review data from the Amazon.com shopping mall, but recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, etc. However, a lack of accuracy is recognized because sentiment calculations change according to the subject, paragraph, sentiment lexicon direction, and sentence strength. This study aims to classify the polarity of sentiment into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the IMDB review data set. First, for the text classification algorithms related to sentiment analysis, we adopt popular machine learning algorithms such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting as comparative models. Second, deep learning has demonstrated the ability to extract complex, discriminative features from data. Representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). A CNN can be used similarly to BoW when processing a sentence in vector format, but it does not consider sequential data attributes. An RNN handles ordered data well because it takes the temporal information of the data into account, but it suffers from the long-term dependency problem. To solve the problem of long-term dependence, LSTM is used. For the comparison, CNN and LSTM were chosen as simple deep learning models. In addition to classical machine learning algorithms, CNN, LSTM, and the integrated models were analyzed. Although there are many parameters for the algorithms, we examined the relationship between parameter values and accuracy to find the optimal combination. We also tried to determine how well the models work for sentiment analysis and how they work. This study proposes an integrated CNN and LSTM algorithm to extract the positive and negative features in text. The reasons for combining these two algorithms are as follows. CNN can extract features for classification automatically by applying convolution layers and massively parallel processing. LSTM is not capable of highly parallel processing.
Like faucets, the LSTM has input, output, and forget gates that can be opened and closed at desired times. These gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can capture the long-term dependencies that the CNN cannot. Furthermore, when the LSTM is placed after the CNN's pooling layer, the model has an end-to-end structure, so that spatial and temporal features can be learned simultaneously. The integrated CNN-LSTM achieved 90.33% accuracy; it is slower than CNN alone but faster than LSTM alone, and the presented model was more accurate than the other models. In addition, each word embedding layer can be improved when training the kernel step by step. CNN-LSTM can compensate for the weaknesses of each model, and it has the advantage of improving learning layer by layer through the end-to-end structure. For these reasons, this study seeks to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model. (A compact model sketch follows below.)
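A compact Keras sketch of an integrated CNN-LSTM classifier of the kind described above is shown below. The layer sizes, hyperparameters, and the assumed variables x_train / y_train are illustrative, not the exact configuration reported in the paper.

```python
from tensorflow.keras import layers, models

vocab_size, embed_dim = 20000, 128  # assumed vocabulary and embedding sizes

model = models.Sequential([
    layers.Embedding(vocab_size, embed_dim),   # word embedding layer
    layers.Conv1D(64, 5, activation="relu"),   # CNN extracts local n-gram features
    layers.MaxPooling1D(4),                    # pooled feature maps feed the LSTM
    layers.LSTM(64),                           # LSTM models the sequence of pooled features
    layers.Dense(1, activation="sigmoid"),     # positive / negative polarity
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Usage (assumed variables): x_train / y_train are tokenized, padded reviews and binary labels.
# model.fit(x_train, y_train, batch_size=64, epochs=5, validation_split=0.2)
```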

Deep Learning-based Professional Image Interpretation Using Expertise Transplant (전문성 이식을 통한 딥러닝 기반 전문 이미지 해석 방법론)

  • Kim, Taejin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.79-104
    • /
    • 2020
  • Recently, as deep learning has attracted attention, its use is being considered as a method for solving problems in various fields. In particular, deep learning is known to have excellent performance when applied to unstructured data such as text, sound, and images, and many studies have proven its effectiveness. Owing to the remarkable development of text and image deep learning technology, interest in image captioning technology and its applications is rapidly increasing. Image captioning is a technique that automatically generates relevant captions for a given image by handling both image comprehension and text generation simultaneously. In spite of the high entry barrier of image captioning, which requires analysts to process both image and text data, image captioning has established itself as one of the key fields in A.I. research owing to its wide applicability. In addition, many studies have been conducted to improve the performance of image captioning in various respects. Recent studies attempt to create advanced captions that not only describe an image accurately, but also convey the information contained in the image more sophisticatedly. Despite many recent efforts to improve the performance of image captioning, it is difficult to find studies that interpret images from the perspective of domain experts in each field rather than from the perspective of the general public. Even for the same image, the parts of interest may differ according to the professional field of the person who encounters the image. Moreover, the way of interpreting and expressing the image also differs according to the level of expertise. The public tends to recognize an image from a holistic and general perspective, that is, from the perspective of identifying the image's constituent objects and their relationships. On the contrary, domain experts tend to recognize an image by focusing on the specific elements necessary to interpret it based on their expertise. This implies that the meaningful parts of an image differ depending on the viewer's perspective, even for the same image, and image captioning needs to reflect this phenomenon. Therefore, in this study, we propose a method to generate domain-specialized captions for an image by utilizing the expertise of experts in the corresponding domain. Specifically, after performing pre-training on a large amount of general data, the expertise of the field is transplanted through transfer learning with a small amount of expertise data. However, simple adoption of transfer learning using expertise data may invoke another type of problem. Simultaneous learning with captions of various characteristics may invoke the so-called 'inter-observation interference' problem, which makes it difficult to perform pure learning of each characteristic point of view. For learning with a vast amount of data, most of this interference is self-purified and has little impact on learning results. On the contrary, in the case of fine-tuning, where learning is performed on a small amount of data, the impact of such interference on learning can be relatively large. To solve this problem, we propose a novel 'Character-Independent Transfer-learning' that performs transfer learning independently for each character.
In order to confirm the feasibility of the proposed methodology, we performed experiments utilizing the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. Additionally, according to the advice of an art therapist, about 300 pairs of image / expertise captions were created, and these data were used for the expertise transplantation experiments. As a result of the experiment, it was confirmed that the captions generated according to the proposed methodology reflect the perspective of the transplanted expertise, whereas the captions generated through learning on general data contain a great deal of content irrelevant to expert interpretation. In this paper, we propose a novel approach to specialized image interpretation. To achieve this goal, we present a method that uses transfer learning to generate captions specialized in a specific domain. In the future, by applying the proposed methodology to expertise transplantation in various fields, we expect that much research will be actively conducted to solve the problem of the lack of expertise data and to improve the performance of image captioning. (A generic fine-tuning sketch follows below.)
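The following sketch illustrates only the general pre-train-then-fine-tune pattern, with an independent copy of a caption decoder per expertise "character" to avoid interference between caption styles. The architecture, names, and datasets (general_dataset, expertise_dataset) are assumptions for illustration, not the authors' implementation of Character-Independent Transfer-learning.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_decoder(vocab_size=10000, feat_dim=2048, embed_dim=256):
    """Toy next-word caption decoder conditioned on precomputed image features."""
    image_feat = layers.Input(shape=(feat_dim,), name="image_features")
    prev_words = layers.Input(shape=(None,), name="caption_so_far")
    x = layers.Embedding(vocab_size, embed_dim)(prev_words)
    x = layers.LSTM(256)(x)
    ctx = layers.Dense(256, activation="relu")(image_feat)
    out = layers.Dense(vocab_size, activation="softmax")(layers.concatenate([x, ctx]))
    return models.Model([image_feat, prev_words], out)

# Pre-training on a large general caption corpus (dataset assumed, not shown).
general_model = build_decoder()
general_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# general_model.fit(general_dataset, epochs=...)

# Independent fine-tuning per expertise character, each starting from the general weights.
expert_models = {}
for character in ["art_therapy"]:                      # one copy per caption character
    m = tf.keras.models.clone_model(general_model)
    m.set_weights(general_model.get_weights())          # transplant the pre-trained weights
    m.compile(optimizer=tf.keras.optimizers.Adam(1e-5),  # small LR for the small expertise set
              loss="sparse_categorical_crossentropy")
    # m.fit(expertise_dataset[character], epochs=...)   # ~300 image/expertise-caption pairs (assumed)
    expert_models[character] = m
```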

Yesterday and Today of Twelve Excellent Sceneries at Banbyeoncheon Expressed in Heojoo's Sansuyucheop (허주(虛舟) 산수유첩(山水遺帖)에 표현된 반변천(半邊川) 십이승경(十二勝景)의 어제와 오늘)

  • Kim, Jeong-Moon;Rho, Jae-Hyun
    • Journal of the Korean Institute of Traditional Landscape Architecture
    • /
    • v.30 no.1
    • /
    • pp.90-102
    • /
    • 2012
  • Sansuyucheop by Heojoobugun (虛舟府君), the subject of this study, is a twelve-width picture album by Lee Jong Ak (李宗岳: 1726-1773), the eldest grandson of the eleventh generation of the Goseong Lee family, a figure with five habits (五癖): ancient documents (古書癖), playing the gayageum (彈琴癖), flowering plants (花卉癖), paintings and calligraphic works (書畵癖), and boating (舟遊癖). For five days, from April 4 through 8, 1763, he boated with 18 relatives and relatives by marriage from his old home, his mother's home, his wife's home, and his own home, starting from Imcheonggak and passing through Yangjeong (羊汀), Chiltan (七灘), Sabin Auditorium (泗濱書院), Seonchang (船倉), Nakyeon (落淵), Seonchal (仙刹), Seonyujeong (仙遊亭), Mongseongak (夢仙閣), Baekwoonjeong (白雲亭), Naeap Village (川前里), Iho (伊湖), and Seoeodae (鮮魚帶) to the returning point, Bangujeong (伴鷗亭). He cruised the magnificent views around Banbyeoncheon, called 'Andong 8 Gyeong' or 'Imhagugok', and whenever the boat anchored he appreciated the scenery at each point and enjoyed the arts, playing the geomungo. This study reached the following findings by grasping the physical, ecological, visual, and aesthetic changes in the places, sceneries, plant elements, and past and current scenery expressed in the widths of this Sansuyucheop. The refinement on the boat, seeing the clear river water, white sand beaches, and fantastically shaped cliffs expressed in this Sansuyucheop, exchanging poems and calligraphy, and enjoying the geomungo, is a good example of the play culture of the upper class in the Joseon Dynasty. The construction of the Imha Dam and the Andong Dam has caused serious visual and ecological changes, making it impossible to feel the original mood of background spots such as Yangjeonggwabeom (羊汀過帆), Chiltanhuseon (七灘候船), Sasubeomjoo (泗水泛舟), Seonchanggyeram (船倉繫纜), Nakyeonmosaek (落淵莫色), Mangcheonguido (輞川歸棹), and Ihojeongdo (伊湖停棹); the landscape and sentiment of that time can only be discerned through the landscape depicted on the canvas. The 1st picture (Donghohaeram, 東湖解纜) and the 11th picture (Seoeobanjo, 鮮魚返照) of Heojoobugun's Sansuyucheop express trees thought to be deciduous broad-leaved tall trees, and the 9th picture (Unjeongpungbeom, 雲亭風帆) depicts a pine forest called 'Gaeho (開湖)', formed by Uncheongong planting 1,000 pine trees with the village people in 1617. In addition, Seunggyeongdo expresses evergreen needle-leaf trees on the natural topography and deciduous tall trees around the pavilion and buildings. A comparative consideration of Heojoobugun's Sansuyucheop and Shinam's Dongyusipsogi (東遊十小記) showed that the location of Samgok is assumed to be Macheon and Chiltan, so Imhagugok is assumed to start from Baekunjeong at Ilgok, with Igok at Imcheon and the Imcheon auditorium, Samgok at Mangcheon and Chiltan, Sagok at the Sabin Auditorium of Sasoo, Ogok at Songseok, Yukgok at Sooseok of Seonchang, Chilgok at Nakyeonhyeonryu, Palgok at Seonchalsa and Seonyoojeong, and Gugok at Pyong Yuheo. This study is significant in that it clarifies that Heojoobugun's Sansuyucheop is valuable both for exquisitely expressing the banks of the Banbyeon River, the biggest tributary of the Nakdong River, in the latter half of the Joseon Dynasty, and as vital diagrammatic historical data for a comparative analysis of now rarely seen traces of our ancestors' lives and landscape factors with present ones.

Understanding User Motivations and Behavioral Process in Creating Video UGC: Focus on Theory of Implementation Intentions (Video UGC 제작 동기와 행위 과정에 관한 이해: 구현의도이론 (Theory of Implementation Intentions)의 적용을 중심으로)

  • Kim, Hyung-Jin;Song, Se-Min;Lee, Ho-Geun
    • Asia pacific journal of information systems
    • /
    • v.19 no.4
    • /
    • pp.125-148
    • /
    • 2009
  • UGC(User Generated Contents) is emerging as the center of e-business in the web 2.0 era. The trend reflects changing roles of users in production and consumption of contents on websites and helps us to understand new strategies of websites such as web portals and social network websites. Nowadays, we consume contents created by other non-professional users for both utilitarian (e.g., knowledge) and hedonic values (e.g., fun). Also, contents produced by ourselves (e.g., photo, video) are posted on websites so that our friends, family, and even the public can consume those contents. This means that non-professionals, who used to be passive audience in the past, are now creating contents and share their UGCs with others in the Web. Accessible media, tools, and applications have also reduced difficulty and complexity in the process of creating contents. Realizing that users create plenty of materials which are very interesting to other people, media companies (i.e., web portals and social networking websites) are adjusting their strategies and business models accordingly. Increased demand of UGC may lead to website visits which are the source of benefits from advertising. Therefore, they put more efforts into making their websites open platforms where UGCs can be created and shared among users without technical and methodological difficulties. Many websites have increasingly adopted new technologies such as RSS and openAPI. Some have even changed the structure of web pages so that UGC can be seen several times to more visitors. This mainstream of UGCs on websites indicates that acquiring more UGCs and supporting participating users have become important things to media companies. Although those companies need to understand why general users have shown increasing interest in creating and posting contents and what is important to them in the process of productions, few research results exist in this area to address these issues. Also, behavioral process in creating video UGCs has not been explored enough for the public to fully understand it. With a solid theoretical background (i.e., theory of implementation intentions), parts of our proposed research model mirror the process of user behaviors in creating video contents, which consist of intention to upload, intention to edit, edit, and upload. In addition, in order to explain how those behavioral intentions are developed, we investigated influences of antecedents from three motivational perspectives (i.e., intrinsic, editing software-oriented, and website's network effect-oriented). First, from the intrinsic motivation perspective, we studied the roles of self-expression, enjoyment, and social attention in forming intention to edit with preferred editing software or in forming intention to upload video contents to preferred websites. Second, we explored the roles of editing software for non-professionals to edit video contents, in terms of how it makes production process easier and how it is useful in the process. Finally, from the website characteristic-oriented perspective, we investigated the role of a website's network externality as an antecedent of users' intention to upload to preferred websites. The rationale is that posting UGCs on websites are basically social-oriented behaviors; thus, users prefer a website with the high level of network externality for contents uploading. This study adopted a longitudinal research design; we emailed recipients twice with different questionnaires. 
Guided by an invitation email including a link to the web survey page, respondents answered most of the questions, except edit and upload, in the first survey. They were asked to provide information about the UGC editing software they mainly used and the website to which they preferred to upload edited contents, and were then asked to answer related questions. For example, before answering questions regarding network externality, they individually had to declare the name of the website to which they would be willing to upload. At the end of the first survey, we asked whether they agreed to participate in the corresponding survey a month later. Over twenty days, 333 complete responses were gathered in the first survey. One month later, we emailed those recipients to ask for participation in the second survey. 185 of the 333 recipients (about 56 percent) answered the second survey. Personalized questionnaires were provided to remind them of the names of the editing software and website they had reported in the first survey. They answered the degree of editing with the software and the degree of uploading video contents to the website over the past month. All recipients of the two surveys received exchange tickets for books (about 5,000~10,000 Korean Won) according to the frequency of their participation. PLS analysis shows that user behaviors in creating video contents are well explained by the theory of implementation intentions. In fact, intention to upload significantly influences intention to edit in the process of accomplishing the goal behavior, upload. These relationships reveal the behavioral process, previously unclear, in users' creation of video contents for uploading and also highlight the important role of editing in the process. Regarding the intrinsic motivations, the results illustrate that users are likely to edit their own video contents in order to express their intrinsic traits such as thoughts and feelings. Also, their intention to upload contents to a preferred website is formed because they want to attract attention from others through contents reflecting themselves. This result corresponds well to the role of the website characteristic, namely network externality. Based on the PLS results, the network effect of a website has a significant influence on users' intention to upload to the preferred website. This indicates that users with social attention motivations are likely to upload their video UGCs to a website whose network size is big enough to realize their motivations easily. Finally, regarding editing software characteristic-oriented motivations, making exclusively provided editing software more user-friendly (i.e., ease of use, usefulness) plays an important role in leading to users' intention to edit. Our research contributes to both academic scholars and professionals. For researchers, our results show that the theory of implementation intentions applies well to the video UGC context and is very useful for explaining the relationship between implementation intentions and goal behaviors. With the theory, this study theoretically and empirically confirmed that editing is a distinct and important behavior preceding uploading behavior, and we tested the behavioral process of ordinary users in creating video UGCs, focusing on significant motivational factors at each step. In addition, parts of our research model are also rooted in solid theoretical backgrounds such as the technology acceptance model and the theory of network externality to explain the effects of UGC-related motivations.
For practitioners, our results suggest that media companies need to restructure their websites so that users' needs for social interaction through UGC (e.g., self-expression, social attention) are well met. We also emphasize the strategic importance of the network size of websites in leading non-professionals to upload video contents to them; those websites need to find ways to utilize network effects to acquire more UGCs. Finally, we suggest that ways to improve editing software be considered in order to increase editing behavior, which is a very important step leading to UGC uploading. (A simplified path-model sketch of the hypothesized behavioral chain follows below.)
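As a rough illustration of the hypothesized behavioral chain (intention to upload, intention to edit, edit, upload) and its motivational antecedents, the sketch below fits simple OLS path regressions on simulated survey data. It is a simplified stand-in for the PLS analysis, and all variable names, coefficients, and data are invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated two-wave survey data; columns loosely mirror the constructs in the research model.
rng = np.random.default_rng(0)
n = 185
df = pd.DataFrame({
    "self_expression": rng.normal(size=n),
    "social_attention": rng.normal(size=n),
    "network_externality": rng.normal(size=n),
    "ease_of_use": rng.normal(size=n),
    "usefulness": rng.normal(size=n),
})
df["int_upload"] = 0.4 * df.social_attention + 0.3 * df.network_externality + rng.normal(scale=0.5, size=n)
df["int_edit"] = 0.4 * df.int_upload + 0.3 * df.self_expression + 0.2 * df.ease_of_use + rng.normal(scale=0.5, size=n)
df["edit"] = 0.6 * df.int_edit + rng.normal(scale=0.5, size=n)
df["upload"] = 0.5 * df.edit + 0.3 * df.int_upload + rng.normal(scale=0.5, size=n)

# Path regressions corresponding to the hypothesized behavioral process.
for formula in [
    "int_upload ~ social_attention + network_externality",
    "int_edit ~ int_upload + self_expression + ease_of_use + usefulness",
    "edit ~ int_edit",
    "upload ~ edit + int_upload",
]:
    print(smf.ols(formula, data=df).fit().params.round(2))
```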