• Title/Summary/Keyword: 형태 추출 (shape extraction)

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.69-94
    • /
    • 2017
  • Recently, increasing demand for big data analysis has driven the vigorous development of related technologies and tools. In addition, the development of IT and the rising penetration rate of smart devices are producing large amounts of data. Accordingly, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis continue to increase, which means that big data analysis will become more important across industries for the foreseeable future. Big data analysis has generally been performed by a small number of experts and delivered to each requester of analysis. However, growing interest in big data analysis has stimulated computer programming education and the development of many data analysis programs. The entry barriers to big data analysis are therefore gradually lowering and data analysis technology is spreading, so big data analysis is increasingly expected to be performed by the requesters themselves. Along with this, interest in various kinds of unstructured data is continually increasing; in particular, much attention is focused on text data. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis are utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques used for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling extracts the major issues from a large set of documents, identifies the documents that correspond to each issue, and provides the identified documents as a cluster. It is regarded as very useful in that it reflects the semantic elements of documents. Traditional topic modeling is based on the distribution of key terms across the entire corpus, so the entire corpus must be analyzed at once to identify the topic of each document. This requirement makes analysis slow when topic modeling is applied to many documents, and it causes a scalability problem: processing time increases sharply with the number of analysis objects. The problem is particularly noticeable when documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large document set is divided into sub-units, and topics are derived by repeatedly applying topic modeling to each unit. This method enables topic modeling on a large number of documents with limited system resources and can improve processing speed. It can also significantly reduce analysis time and cost, since documents can be analyzed at each location without first being combined. Despite these advantages, however, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire corpus is unclear: local topics can be identified for each document, but global topics cannot. Second, a method for measuring the accuracy of such a methodology must be established; that is, taking the global topics as the ideal answer, the deviation of the local topics from the global topics needs to be measured. Because of these difficulties, research on this approach has not been pursued as thoroughly as other topic modeling studies. In this paper, we propose a topic modeling approach that solves the above two problems. First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by detecting whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. An additional experiment confirms that the proposed methodology provides results similar to topic modeling over the entire corpus, and we also propose a reasonable method for comparing the results of the two approaches.
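
The divide-and-conquer scheme above can be sketched concretely. The Python fragment below is a minimal illustration, assuming LDA (via gensim) as the underlying topic model, a most-representative-documents rule for choosing delegates, and cosine similarity of topic-word distributions for the local-to-global mapping; the paper does not publish its implementation, so all of these choices and parameters are assumptions.

```python
# Minimal sketch: divide-and-conquer topic modeling with local-to-global mapping.
import numpy as np
from gensim.corpora import Dictionary
from gensim.models import LdaModel

def map_local_to_global(doc_tokens, n_local_sets=4, n_topics=10, n_delegates=5):
    # Shared vocabulary, so topic-word vectors from different models line up.
    dictionary = Dictionary(doc_tokens)
    bow = [dictionary.doc2bow(toks) for toks in doc_tokens]

    # 1. Divide the global set into local sets and topic-model each separately.
    local_sets = np.array_split(np.arange(len(bow)), n_local_sets)
    local_models, delegates = [], []
    for idx in local_sets:
        sub = [bow[i] for i in idx]
        lda = LdaModel(sub, id2word=dictionary, num_topics=n_topics,
                       passes=5, random_state=0)
        local_models.append(lda)
        # 2. Delegates: the documents that load most heavily on each local topic.
        weights = [dict(lda.get_document_topics(d, minimum_probability=0.0))
                   for d in sub]
        for t in range(n_topics):
            top = sorted(range(len(idx)), key=lambda j: -weights[j].get(t, 0.0))
            delegates.extend(int(idx[j]) for j in top[:n_delegates])

    # 3. Topic-model the reduced global set (RGS) built from the delegates.
    rgs = LdaModel([bow[i] for i in sorted(set(delegates))], id2word=dictionary,
                   num_topics=n_topics, passes=5, random_state=0)

    # 4. Map every local topic to its nearest RGS topic by cosine similarity
    #    of the topic-word distributions.
    g = rgs.get_topics()                                   # (n_topics, vocab)
    gn = g / (np.linalg.norm(g, axis=1, keepdims=True) + 1e-12)
    mapping = {}
    for m, lda in enumerate(local_models):
        l = lda.get_topics()
        ln = l / (np.linalg.norm(l, axis=1, keepdims=True) + 1e-12)
        mapping[m] = (ln @ gn.T).argmax(axis=1)            # local t -> RGS topic
    return rgs, local_models, mapping
```

Under this sketch, accuracy in the paper's sense could be checked by counting how often two documents that share an RGS topic also share a mapped local topic.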

KNU Korean Sentiment Lexicon: Bi-LSTM-based Method for Building a Korean Sentiment Lexicon (Bi-LSTM 기반의 한국어 감성사전 구축 방안)

  • Park, Sang-Min;Na, Chul-Won;Choi, Min-Seong;Lee, Da-Hee;On, Byung-Won
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.219-240
    • /
    • 2018
  • Sentiment analysis, one of the text mining techniques, is a method for extracting subjective content embedded in text documents. Sentiment analysis methods have recently been widely used in many fields: for example, data-driven surveys analyze the subjectivity of text posted by users, and market research quantifies a product's reputation by analyzing users' review posts. The basic method of sentiment analysis uses a sentiment dictionary (or lexicon), a list of sentiment vocabularies with positive, neutral, or negative semantics. In general, the meaning of a sentiment word is likely to differ across domains. For example, the sentiment word 'sad' carries negative meaning in most fields, but not in the movie domain. To perform accurate sentiment analysis, we therefore need to build a sentiment dictionary for the given domain. However, building such a lexicon is time-consuming, and without a general-purpose sentiment lexicon as a starting point many sentiment vocabularies are missed. To address this problem, several studies have constructed sentiment lexicons for specific domains based on the general-purpose lexicons 'OPEN HANGUL' and 'SentiWordNet'. However, OPEN HANGUL is no longer in service, and SentiWordNet works poorly because of language differences introduced when converting Korean words into English. These restrictions limit the use of such general-purpose lexicons as seed data for building domain-specific sentiment lexicons. In this article, we construct the 'KNU Korean Sentiment Lexicon (KNU-KSL)', a new general-purpose Korean sentiment dictionary that is more advanced than existing general-purpose lexicons. The proposed dictionary, a list of domain-independent sentiment words such as 'thank you', 'worthy', and 'impressed', is built to let users quickly construct a sentiment dictionary for a target domain. In particular, it derives sentiment vocabularies by analyzing the glosses contained in the Standard Korean Language Dictionary (SKLD) with the following procedure: First, we propose a sentiment classification model based on Bidirectional Long Short-Term Memory (Bi-LSTM). Second, the proposed deep learning model automatically classifies each gloss as having positive or negative meaning. Third, positive words and phrases are extracted from glosses classified as positive, and negative words and phrases from glosses classified as negative. Our experimental results show that the average accuracy of the proposed sentiment classification model reaches 89.45%. In addition, the sentiment dictionary is extended using various external sources, including SentiWordNet, SenticNet, Emotional Verbs, and Sentiment Lexicon 0603, and we add sentiment information for frequently used coined words and emoticons that appear mainly on the Web. The KNU-KSL contains a total of 14,843 sentiment vocabularies, each of which is a 1-gram, 2-gram, phrase, or sentence pattern. Unlike existing sentiment dictionaries, it is composed of words that are not affected by particular domains. The recent trend in sentiment analysis is to use deep learning techniques without sentiment dictionaries, and the perceived importance of developing sentiment dictionaries has gradually declined. However, a recent study showed that words in a sentiment dictionary can be used as features of deep learning models, yielding higher sentiment analysis accuracy (Teng, Z., 2016). This indicates that a sentiment dictionary is useful not only for sentiment analysis itself but also as a source of features that improve the accuracy of deep learning models. The proposed dictionary can serve as basic data for constructing the sentiment lexicon of a particular domain and as features for deep learning models. It is also useful for automatically and quickly building large training sets for deep learning models.
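
As a rough illustration of the gloss-classification step, the sketch below builds a Bi-LSTM binary classifier in Keras. Only the model family (Bi-LSTM) and the positive/negative gloss labels come from the abstract; the tokenization, vocabulary size, embedding width, layer sizes, and dropout rate are hypothetical.

```python
# Minimal sketch of a Bi-LSTM polarity classifier for dictionary glosses.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE, EMBED_DIM, UNITS = 20000, 128, 64   # hypothetical sizes

def build_gloss_classifier():
    model = models.Sequential([
        layers.Input(shape=(None,)),            # variable-length gloss token ids
        layers.Embedding(VOCAB_SIZE, EMBED_DIM),
        # Bidirectional LSTM reads each dictionary gloss in both directions.
        layers.Bidirectional(layers.LSTM(UNITS)),
        layers.Dropout(0.5),                    # rate is an assumption
        # One sigmoid unit: positive (1) vs. negative (0) gloss polarity.
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Usage: train on (token ids of SKLD glosses, 0/1 polarity labels); then harvest
# positive words and phrases from glosses predicted positive, and negative ones
# from glosses predicted negative, to seed the lexicon entries.
```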

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.163-177
    • /
    • 2019
  • As smartphones have become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognizing the simple body movements of an individual user to recognizing low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far, and previous research on interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. Physical sensors such as the accelerometer, magnetic field sensor, and gyroscope, by contrast, are less vulnerable to privacy issues and can collect a large amount of data in a short time. In this paper, we propose a deep learning based method for detecting accompanying status using only multimodal physical sensor data from the accelerometer, magnetic field sensor, and gyroscope. Accompanying status is defined as a part of user interaction behavior, covering whether the user is accompanying an acquaintance at close distance and whether the user is actively communicating with that acquaintance. We propose a framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation. First, we introduce a data preprocessing method consisting of time synchronization of the multimodal data from the different physical sensors, data normalization, and sequence data generation. We applied nearest-neighbor interpolation to synchronize the timestamps of data collected from different sensors, normalized each x, y, and z axis value of the sensor data, and generated sequence data with the sliding window method. The sequence data then becomes the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consists of 3 convolutional layers and has no pooling layer, in order to maintain the temporal information of the sequence data. Next, LSTM recurrent networks receive the feature maps, learn long-term dependencies from them, and extract features. The LSTM recurrent networks consist of two layers, each with 128 cells. Finally, the extracted features are classified by a softmax classifier. The loss function of the model is the cross-entropy function, and the weights of the model are randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model is trained with the adaptive moment estimation (ADAM) optimization algorithm, with a mini-batch size of 128. We apply dropout to the input values of the LSTM recurrent networks to prevent overfitting. The initial learning rate is set to 0.001 and decreases exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data, and we collected smartphone data from a total of 18 subjects. Using the data, the model classified accompanying and conversation with accuracies of 98.74% and 98.83%, respectively. Both the F1 score and the accuracy of the model were higher than those of a majority vote classifier, a support vector machine, and a deep recurrent neural network. In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. We will also study transfer learning methods that allow a model tailored to the training data to transfer to evaluation data following a different distribution. We expect to obtain a model with robust recognition performance against changes in data that were not considered at the model learning stage.
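
The abstract specifies the architecture and training setup in enough detail to sketch it: three pooling-free convolutional layers, two 128-cell LSTM layers, dropout on the LSTM input, a softmax classifier trained with cross-entropy, Adam with an initial learning rate of 0.001 decayed by 0.99 per epoch, mini-batch size 128, and N(0, 0.1) weight initialization. The Keras sketch below follows that description; the window length, channel count, dropout rate, filter counts, and kernel sizes are assumptions.

```python
# Sketch of the CNN + LSTM accompanying-status classifier described above.
import tensorflow as tf
from tensorflow.keras import initializers, layers, models, optimizers

WINDOW, CHANNELS = 128, 9   # assumptions: window length, 3 sensors x 3 axes
STEPS_PER_EPOCH = 100       # assumption: batches per epoch in the real data

def build_har_model(n_classes=2):
    init = initializers.RandomNormal(mean=0.0, stddev=0.1)   # N(0, 0.1), per abstract
    model = models.Sequential([
        layers.Input(shape=(WINDOW, CHANNELS)),
        # Three convolutional layers with no pooling, preserving temporal resolution.
        layers.Conv1D(64, 5, padding="same", activation="relu", kernel_initializer=init),
        layers.Conv1D(64, 5, padding="same", activation="relu", kernel_initializer=init),
        layers.Conv1D(64, 5, padding="same", activation="relu", kernel_initializer=init),
        # Dropout applied to the input of the LSTM layers, as described.
        layers.Dropout(0.5),
        layers.LSTM(128, return_sequences=True, kernel_initializer=init),
        layers.LSTM(128, kernel_initializer=init),
        layers.Dense(n_classes, activation="softmax", kernel_initializer=init),
    ])
    # Initial learning rate 0.001, decayed exponentially by 0.99 after each epoch.
    schedule = optimizers.schedules.ExponentialDecay(
        0.001, decay_steps=STEPS_PER_EPOCH, decay_rate=0.99, staircase=True)
    model.compile(optimizer=optimizers.Adam(schedule),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

# Training would use mini-batches of 128 sliding-window sequences built from the
# time-synchronized, per-axis-normalized accelerometer, magnetic field, and
# gyroscope streams.
```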

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in the volatility of stock market returns. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have begun to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models on the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, for 1487 daily observations; 1187 days were used to train the suggested GARCH models and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric showed better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility; the polynomial kernel function, however, shows exceptionally low forecasting accuracy. We propose an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility increases, buy volatility today; if it decreases, sell volatility today; if the forecasted volatility direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because historical volatility values themselves cannot be traded, but our simulation results are still meaningful, since the Korea Exchange introduced a volatility futures contract in November 2014 that traders can trade. The trading systems with SVR-based GARCH models show higher returns than the MLE-based GARCH systems in the testing period. The profitable trade percentages of the MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of the SVR-based GARCH IVTS models range from 51.8% to 59.7%. The MLE-based symmetric S-GARCH shows a +150.2% return and the SVR-based symmetric S-GARCH a +526.4% return; the MLE-based asymmetric E-GARCH shows a -72% return and the SVR-based asymmetric E-GARCH a +245.6% return; the MLE-based asymmetric GJR-GARCH shows a -98.7% return and the SVR-based asymmetric GJR-GARCH a +126.3% return. The linear kernel function yields higher trading returns than the radial kernel function. The best performance of the SVR-based IVTS is +526.4%, versus +150.2% for the MLE-based IVTS; the SVR-based GARCH IVTS also trades more frequently. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We do not consider costs incurred in trading, including brokerage commissions and slippage. The IVTS trading performance is unrealistic in that we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential for real trading as well as for asset pricing models, and further studies on other machine learning based GARCH models can give better information to stock market investors.
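
To make the estimation idea concrete, the sketch below replaces the MLE step of a GARCH(1,1) model with an SVR fit using scikit-learn, in the spirit of the abstract. The volatility proxy (a rolling mean of squared returns), the SVR hyperparameters, and the feature construction are assumptions; only the kernel choices and the 1187/300 train/test split come from the text.

```python
# Minimal sketch: SVR-based GARCH(1,1) volatility forecasting and IVTS rules.
import numpy as np
import pandas as pd
from sklearn.svm import SVR

def svr_garch_forecast(returns, kernel="rbf", train_size=1187):
    """Learn the GARCH(1,1) recursion sigma_t^2 = f(r_{t-1}^2, sigma_{t-1}^2) with SVR."""
    r2 = returns ** 2
    # Proxy for the unobservable conditional variance (assumption: 20-day rolling mean).
    sigma2 = pd.Series(r2).rolling(20, min_periods=1).mean().to_numpy()
    X = np.column_stack([r2[:-1], sigma2[:-1]])       # lagged GARCH(1,1) regressors
    y = r2[1:]                                        # next-day squared return as target
    svr = SVR(kernel=kernel, C=10.0, epsilon=1e-4)    # hyperparameters: assumptions
    svr.fit(X[:train_size], y[:train_size])
    return svr.predict(X[train_size:])                # out-of-sample variance forecasts

# IVTS entry rules from the abstract: if tomorrow's forecasted volatility rises,
# buy volatility today; if it falls, sell; otherwise hold the existing position.
def ivts_positions(vol_forecast):
    return np.sign(np.diff(vol_forecast))             # +1 buy, -1 sell, 0 hold
```

The paper's linear, polynomial, and radial kernels correspond to kernel="linear", "poly", and "rbf" here.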

Studies on Neck Blast Infection of Rice Plant (벼 이삭목도열병(病)의 감염(感染)에 관(關)한 연구(硏究))

  • Kim, Hong Gi;Park, Jong Seong
    • Korean Journal of Agricultural Science
    • /
    • v.12 no.2
    • /
    • pp.206-241
    • /
    • 1985
  • This study investigated the infection period of neck blast of rice, the speed of infection within the tissue, the location of inoculum sources, and the effects of several conditions in the leaf sheath on neck blast incidence. 1. The most infection-prone period was the booting stage just before the heading date, and most necks were infected during the booting stage and on the heading date; Indica × Japonica hybrid varieties, however, always showed a high possibility of infection after the booting stage as well. 2. The incubation period of neck blast under natural conditions was rather long, ranging from 10 to 22 days. Under artificial inoculation, the incubation period was shorter in young panicles than in old ones. Panicles that had already emerged from the flag-leaf sheath had a long incubation period, a low infection rate, and a slow infection speed in the tissue. 3. Considering the incubation period of neck blast, we estimate that the most effective application periods for chemicals are 5-10 days before heading for fast-acting chemicals and 10-15 days before heading for slow-acting chemicals. 4. Conidia infiltrated the leaf sheath with water drawn in through the suture of the upper three leaves. More conidia were observed in the leaf sheath during the booting stage than during other stages, and the ligule protected the leaf sheath against infiltration of conidia. 5. When conidia infiltrated the leaf sheath, the largest numbers of attached conidia were observed on the hairy panicle base and panicle axis and on degenerated panicles, which seemed to promote neck blast infection. 6. The lowest spore concentration causing neck blast varied with the varietal group: Indica × Japonica hybrid varieties were infected more easily than Japonica varieties; fewer than 100 spores sufficed for neck blast incidence in Indica × Japonica hybrids, and their disease index was also higher than that of Japonica varieties. 7. Nitrogen and silicate contents in the necks of rice plants changed during the growing period and were related to blast incidence. Nitrogen content increased from the booting stage to the heading date and then decreased gradually, while silicate content increased from the booting stage onward after heading; these changes promoted neck blast infection. 8. Conidia reached the rice plant by ascending and descending dispersal and then attached to it; horizontal transfer of conidia was negligible, so we presume that the infection rate of neck blast is very low after the panicle base emerges from the leaf sheath. Ascending air currents caused by the temperature difference between the upper and lower parts of the rice canopy also seemed to increase spore liberation. 9. The number of blast fungus conidia collected just before and after the heading date was closely related to neck blast incidence. Lesions on the top three leaves were closely related to neck blast incidence because they had a high potential for conidia formation and served as direct inoculum sources for neck blast. 10. Conditions inside the leaf sheath were very favorable for neck blast, and incidence in the leaf sheath increased as the level of applied fertilizer increased. The infection rate on panicle parts inside the leaf sheath, such as the panicle base, panicle branches, spikelets, nodes, and internodes, therefore showed no differences attributable to varietal resistance or fertilizer application. 11. Among the dominant fungal species in the leaf sheath, only Gerlachia oryzae appeared to promote neck blast incidence. The number of days to heading of each variety also appeared to be related to neck blast incidence.


Studies on Lipids in Fresh-Water Fishes 1. Distribution of Lipid Components in Various Tissues of Crucian Carp, Carassius carassius (담수어의 지질에 관한 연구 1. 붕어(Carassius carassius)의 부위별 지질성분의 분포)

  • CHOI Jin-Ho;RO Jae-Il;PYEUN Jae-Hyeong;CHOI Kang-Ju
    • Korean Journal of Fisheries and Aquatic Sciences
    • /
    • v.17 no.4
    • /
    • pp.333-343
    • /
    • 1984
  • This study was designed to elucidate the lipid and fatty acid composition in various tissues of freshwater fishes. The free and bound lipids in the meat, skin and viscera of crucian carp (Carassius carassius) were extracted with ethyl ether and a chloroform-methanol-water mixture (10/9/1, v/v). The free and bound lipids were fractionated into neutral lipid, glycolipid and phospholipid by silicic acid column chromatography using chloroform, acetone and methanol, respectively, and quantitatively analyzed by thin layer chromatography (TLC) and a TLC scanner. The fatty acid compositions of the polar and nonpolar lipids in meat, and those of the neutral lipid in various tissues, were analyzed by gas liquid chromatography (GLC). The free lipid content in meat, skin and viscera was 6.22%, 9.95% and 9.76%, whereas the bound lipid content in those tissues was 10.01%, 3.56% and 7.36%, respectively. The neutral lipid content of the free lipid ranged from 71.7% to 89.4%, 3-9 times higher than that of the bound lipid, while the phospholipid content of the bound lipid ranged from 42.3% to 63.2%, 5-10 times higher than that of the free lipid. The neutral lipid consisted mainly of triglyceride (81.91-88.34%) in the free lipid, and of esterified sterol & hydrocarbon (41.00-59.43%) in the bound lipid. The phospholipid consisted mainly of phosphatidyl ethanolamine (54.56-66.79%) and phosphatidyl choline (21.88-34.28%) in the free lipid, and of phosphatidyl choline (50.49-70.57%) and phosphatidyl ethanolamine (15.74-24.92%) in the bound lipid. The major fatty acids of the polar lipid in the free and bound lipids were C16:0 (17.53%, 19.29%), C18:1 (24.57%, 16.08%), C18:2 (8.39%, 4.03%), C22:5 (1.68%, 8.08%) and C22:6 (6.22%, 13.60%), and those of the neutral lipid in the free and bound lipids were C16:0 (17.67%, 24.15%), C16:1 (12.81%, 5.52%), C18:1 (24.13%, 13.02%), C18:2 (15.47%, 8.68%), C22:5 (0.88%, 4.14%) and C22:6 (1.17%, 5.04%), respectively. The unsaturation (TUFA/TSFA) of the polar lipid in the free and bound lipids was 2.02 and 2.74, 1.5-2.0 times higher than the 1.51 and 1.23 of the nonpolar lipid. In both the polar and nonpolar lipids, the ω3 highly unsaturated fatty acid (ω3HUFA) content of the bound lipid was 2-5 times higher than that of the free lipid. The contents of polyenoic acids such as C20:5, C22:5 and C22:6 in the bound lipid were 2-5 times higher than those in the free lipid. Consequently, there were significant differences between the lipid and fatty acid compositions of the free and bound lipids and among the various tissues.


The 1998, 1999 Patterns of Care Study for Breast Irradiation After Breast-Conserving Surgery in Korea (1998, 1999년도 우리나라에서 시행된 유방보존수술 후 방사선치료 현황 조사)

  • Suh Chang-Ok;Shin Hyun Soo;Cho Jae Ho;Park Won;Ahn Seung Do;Shin Kyung Hwan;Chung Eun Ji;Keum Ki Chang;Ha Sung Whan;Ahn Sung Ja;Kim Woo Cheol;Lee Myung Za;Ahn Ki Jung
    • Radiation Oncology Journal
    • /
    • v.22 no.3
    • /
    • pp.192-199
    • /
    • 2004
  • Purpose: To determine the patterns of evaluation and treatment of patients with early breast cancer treated with conservative surgery and radiotherapy, and to improve radiotherapy techniques, a nationwide survey was performed. Materials and Methods: A web-based database system for the Korean Patterns of Care Study (PCS) for six common cancers was developed. Two hundred sixty-one randomly selected records of eligible patients treated between 1998 and 1999 at 15 hospitals were reviewed. Results: The patients' ages ranged from 24 to 85 years (median 45 years). Infiltrating ductal carcinoma was the most common histologic type (88.9%), followed by medullary carcinoma (4.2%) and infiltrating lobular carcinoma (1.5%). Pathologic T stage by AJCC was T1 in 59.7% of the cases, T2 in 29.5%, and Tis in 8.8%. Axillary lymph node dissection was performed in 91.2% of the cases, and 69.7% were node negative. AJCC stage was 0 in 8.8% of the cases, stage I in 44.9%, stage IIa in 33.3%, and stage IIb in 8.4%. Estrogen and progesterone receptors were evaluated in 71.6% and 70.9% of the patients, respectively. The breast-conserving surgical method was excision/lumpectomy in 37.2% of the cases, wide excision in 11.5%, quadrantectomy in 23%, and partial mastectomy in 27.5%. A pathologically confirmed negative margin was obtained in 90.8% of the cases; the pathological margin was involved with tumor in 10 patients and close (less than 2 mm) in 10 patients. All patients except one received more than 90% of the planned radiotherapy dose. The radiotherapy volume was breast only in 88% of the cases, breast plus supraclavicular fossa (SCL) in 5%, and breast plus SCL plus posterior axillary boost in 4.2%. Only one patient received isolated internal mammary lymph node irradiation. The radiation beam used was Co-60 in 8 cases, 4 MV X-ray in 115 cases, 6 MV X-ray in 125 cases, and 10 MV X-ray in 11 cases. The radiation dose to the whole breast was 45-59.4 Gy (median 50.4 Gy) and the boost dose was 8-20 Gy (median 10 Gy). The total radiation dose delivered was 50.4-70.4 Gy (median 60.4 Gy). Conclusion: There was no major deviation from the current standard in the patterns of evaluation and treatment of patients with early breast cancer treated with breast conservation. Some variation was identified in the boost irradiation dose. A separate analysis of the details of radiotherapy planning will follow, and treatment outcomes need to be evaluated to assess the process.

The Characteristics and Performances of Manufacturing SMEs that Utilize Public Information Support Infrastructure (공공 정보지원 인프라 활용한 제조 중소기업의 특징과 성과에 관한 연구)

  • Kim, Keun-Hwan;Kwon, Taehoon;Jun, Seung-pyo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.1-33
    • /
    • 2019
  • Small and medium-sized enterprises (hereinafter SMEs) are at a competitive disadvantage compared to large companies with more abundant resources. Manufacturing SMEs not only need a great deal of information for the new product development that sustains their growth and survival, but also seek networking to overcome resource constraints; yet their small size limits both. In a new era in which connectivity increases the complexity and uncertainty of the business environment, SMEs are increasingly urged to find information and solve networking problems. Government-funded research institutes play an important role in solving the information asymmetry problem of SMEs. The purpose of this study is to identify the distinguishing characteristics of SMEs that utilize the public information support infrastructure provided by government-funded institutions to enhance their innovation capacity, and to show how such use contributes to corporate performance. We argue that an infrastructure for providing information support to SMEs is needed as part of strengthening the role of government-funded institutions; in this study, we specifically identify the target of such a policy and empirically demonstrate the effects of policy-based efforts. Our goal is to help establish strategies for building the information support infrastructure. To achieve this purpose, we first classified the characteristics of SMEs found to utilize the information support infrastructure provided by government-funded institutions. This allows us to verify whether selection bias appears in the analyzed group, which clarifies the interpretive limits of our results. Next, we performed mediator and moderator effect analyses on multiple variables to analyze the process through which use of the information support infrastructure improved external networking capabilities and, in turn, enhanced product competitiveness. This analysis identifies the key factors to focus on when offering indirect support to SMEs through the information support infrastructure, which in turn helps more efficient management of research related to SME support policies implemented by government-funded institutions. The results were as follows. First, SMEs that used the information support infrastructure differed significantly in size from domestic R&D SMEs, but no significant difference appeared in a cluster analysis that considered various variables. Based on these findings, we confirmed that SMEs using the information support infrastructure are larger and include a relatively higher share of companies that transact heavily with large companies than the general population of SMEs. We also found that companies already receiving support from the information infrastructure include a high concentration of companies that need collaboration with government-funded institutions. Second, among the SMEs that use the information support infrastructure, increasing external networking capabilities contributed to enhancing product competitiveness; this was not a direct effect but an indirect contribution made by increasing open marketing capabilities: in other words, an indirect-only mediator effect. In addition, the number of times a company received additional support through mentoring on information utilization had a mediated moderation effect on improving external networking capabilities and thereby strengthening product competitiveness. These results provide several insights for policy. KISTI's information support infrastructure may appear to favor groups whose marketing is already well underway, that is, groups already positioned to perform well. The government should therefore set clear priorities: whether to support underdeveloped companies or to help well-performing companies perform better. Through this research, we identified how public information infrastructure contributes to product competitiveness, and we draw several policy implications. First, the public information support infrastructure should strengthen firms' ability to interact with, or find, the experts who provide the required information. Second, if utilization of the public (online) information support infrastructure is effective, it is unnecessary to continuously provide informational mentoring, a parallel offline support; rather, offline support such as mentoring should serve as a device for monitoring abnormal symptoms. Third, SMEs should improve their own ability to utilize the infrastructure, because the effect of enhancing networking capacity, and through it product competitiveness, appears in most types of companies rather than only in specific SMEs.
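
As a rough illustration of the indirect-effect test underlying these findings, the sketch below bootstraps the a*b indirect path with OLS. The variable names (infrastructure use, open marketing capability, product competitiveness) are placeholders for the constructs in the abstract; the paper's actual measurement model and estimator are not reproduced here.

```python
# Minimal sketch: bootstrapped indirect effect (x -> m -> y) with OLS.
import numpy as np
import statsmodels.api as sm

def bootstrap_indirect_effect(x, m, y, n_boot=5000, seed=0):
    """x: infrastructure use, m: open marketing capability, y: product competitiveness."""
    rng = np.random.default_rng(seed)
    n, effects = len(x), []
    for _ in range(n_boot):
        i = rng.integers(0, n, n)               # resample cases with replacement
        # a-path: infrastructure use -> mediator.
        a = sm.OLS(m[i], sm.add_constant(x[i])).fit().params[1]
        # b-path: mediator -> outcome, controlling for infrastructure use.
        b = sm.OLS(y[i], sm.add_constant(np.column_stack([m[i], x[i]]))).fit().params[1]
        effects.append(a * b)                   # indirect effect a*b
    lo, hi = np.percentile(effects, [2.5, 97.5])
    # Indirect-only mediation holds when this CI excludes 0 while the direct
    # x -> y path is itself non-significant.
    return float(np.mean(effects)), (float(lo), float(hi))
```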