• Title/Summary/Keyword: Industrial system

Search Results: 21,494 (processing time: 0.053 seconds)

Spatio-Temporal Monitoring of Soil CO2 Fluxes and Concentrations after Artificial CO2 Release (인위적 CO2 누출에 따른 토양 CO2 플럭스와 농도의 시공간적 모니터링)

  • Kim, Hyun-Jun;Han, Seung Hyun;Kim, Seongjun;Yun, Hyeon Min;Jun, Seong-Chun;Son, Yowhan
    • Journal of Environmental Impact Assessment
    • /
    • v.26 no.2
    • /
    • pp.93-104
    • /
    • 2017
  • CCS (Carbon Capture and Storage) is a technical process for capturing CO2 from industrial and energy-related sources and for transferring and sequestering the captured CO2 in geological formations, oceans, or mineral carbonates. However, potential CO2 leakage exists and can cause environmental problems. Thus, this study was conducted to analyze the spatial and temporal variations of CO2 fluxes and concentrations after an artificial CO2 release. The Environmental Impact Evaluation Test Facility (EIT) was built in Eumseong, Korea in 2015. Approximately 34 kg CO2/day/zone was injected at Zones 2, 3, and 4 among the total of 5 zones from October 26 to 30, 2015. CO2 fluxes were measured every 30 minutes at the surface at 0 m, 1.5 m, 2.5 m, and 10 m from the CO2-releasing well using a LI-8100A until November 13, 2015, and CO2 concentrations were measured once a day at 15 cm, 30 cm, and 60 cm depths at 0 m, 1.5 m, 2.5 m, 5 m, and 10 m from the well using a GA5000 until November 28, 2015. The CO2 flux at 0 m from the well started increasing on the fifth day after the CO2 release started and continued to increase until November 13 even though the artificial CO2 release had stopped. CO2 fluxes measured at 2.5 m, 5.0 m, and 10 m from the well were not significantly different from one another. On the other hand, the soil CO2 concentration reached 38.4% at 60 cm depth at 0 m from the well in Zone 3 on the day after the CO2 release started. Soil CO2 spread horizontally over time and was detected up to 5 m away from the well in all zones until the CO2 release stopped. Also, on the final day of the CO2 release period, soil CO2 concentrations at 30 cm and 60 cm depths at 0 m from the well were similar, at 50.6±25.4% and 55.3±25.6%, respectively, followed by the 15 cm depth (31.3±17.2%), which was significantly lower than those measured at the other depths.
Soil CO2 concentrations at all depths in all zones gradually decreased for about one month after the CO2 release stopped, but were still higher than those of the first day after the release started. In conclusion, the closer the distance to the well and the deeper the depth, the higher the CO2 fluxes and concentrations. Also, long-term monitoring is required because leaked CO2 gas can remain in the soil for a long time even after the leakage has stopped.

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, the Go-playing artificial intelligence program by Google DeepMind, won a landmark match against Lee Sedol. Many people thought that machines would not be able to beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning has attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems. It shows especially good performance in the image recognition field, and also in high-dimensional data areas such as voice, image, and natural language, where it was difficult to obtain good performance using existing machine learning techniques. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether the deep learning techniques studied so far can be used not only for the recognition of high-dimensional data but also for binary classification problems of traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of the deep learning techniques with that of traditional artificial neural network models. The experimental data in the paper are the telemarketing response data of a bank in Portugal. They include input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable that records whether the customer opens an account.
To evaluate the applicability of deep learning algorithms and techniques to binary classification problems, we compared the performance of various models using the CNN and LSTM algorithms and dropout, which are widely used in deep learning, with that of MLP models, a traditional artificial neural network architecture. However, since all network design alternatives cannot be tested due to the nature of artificial neural networks, the experiment was conducted with restricted settings on the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application conditions of the dropout technique. The F1 score was used to evaluate model performance because it shows how well a model classifies the class of interest, rather than overall accuracy. The detailed methods for applying each deep learning technique are as follows. The CNN algorithm reads adjacent values from a specific position and recognizes features, but in typical business data the proximity of fields carries no meaning because the fields are usually independent. In this experiment, we therefore set the filter size of the CNN to the number of fields so that the whole record is learned at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the input direction of the second layer was reversed relative to the first in order to reduce the influence of field position. For the dropout technique, neurons in each hidden layer were dropped with a probability of 0.5. The experimental results show that the model with the highest F1 score was the CNN model using dropout, and the next best was the MLP model with two hidden layers using dropout.
This study yielded several findings. First, models using dropout make slightly more conservative predictions than those without it, and generally show better classification performance. Second, CNN models show better classification performance than MLP models. This is interesting because the CNN performed well on binary classification problems, to which it has rarely been applied, as well as in the fields where its effectiveness has already been proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to business binary classification problems.
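The F1 score used to compare the models above can be sketched in a few lines; this is a generic, minimal implementation, and the toy labels below are illustrative rather than the bank's telemarketing data:

```python
def f1_score(y_true, y_pred, positive=1):
    """F1 = harmonic mean of precision and recall for the positive class.
    Unlike overall accuracy, it rewards correct handling of the class
    of interest even when that class is rare."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy example: 2 true positives, 1 false positive, 1 false negative
score = f1_score([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
```

In practice one would use a library implementation (e.g. scikit-learn's `f1_score`), but the definition above is what all of them compute.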

IR Study on the Adsorption of Carbon Monoxide on Silica Supported Ruthenium-Nickel Alloy (실리카 지지 루테늄-니켈 합금에 있어서 일산화탄소의 흡착에 관한 IR 연구)

  • Park, Sang-Youn;Yoon, Dong-Wook
    • Applied Chemistry for Engineering
    • /
    • v.17 no.4
    • /
    • pp.349-356
    • /
    • 2006
  • We investigated the adsorption and desorption properties of CO on silica-supported Ru/Ni alloys at various Ru/Ni mole ratios and CO partial pressures using Fourier transform infrared spectroscopy (FT-IR). For the Ru-SiO2 sample, four bands were observed at 2080.0 cm⁻¹, 2021.0~2030.7 cm⁻¹, 1778.9~1799.3 cm⁻¹, and 1623.8 cm⁻¹ on adsorption, and three bands at 2138.7 cm⁻¹, 2069.3 cm⁻¹, and 1988.3~2030.7 cm⁻¹ on vacuum desorption. For the Ni-SiO2 sample, four bands were observed at 2057.7 cm⁻¹, 2019.1~2040.3 cm⁻¹, 1862.9~1868.7 cm⁻¹, and 1625.7 cm⁻¹ on adsorption, and two bands at 2009.5~2040.3 cm⁻¹ and 1828.4~1868.7 cm⁻¹ on vacuum desorption. These absorption bands correspond approximately with those of previous reports. For the Ru/Ni(9/1, 8/2, 7/3, 6/4, 5/5; mole ratio)-SiO2 samples, three bands were observed at 2001.8~2057.7 cm⁻¹, 1812.8~1926.5 cm⁻¹, and 1623.8~1625.7 cm⁻¹ on adsorption, and three bands at 2140.6 cm⁻¹, 2073.1 cm⁻¹, and 1969.0~2057.7 cm⁻¹ on vacuum desorption. The spectral pattern observed for the Ru/Ni-SiO2 sample at a 9/1 Ru/Ni mole ratio on CO adsorption and vacuum desorption closely resembles that of the Ru-SiO2 sample, whereas the patterns observed for Ru/Ni-SiO2 samples at and below an 8/2 Ru/Ni mole ratio closely resemble that of the Ni-SiO2 sample. Considering these observations, it may be suggested that the surfaces of the alloy clusters on the Ru/Ni-SiO2 samples contain more Ni than the bulk mole ratio of the sample.
For the Ru/Ni-SiO2 samples, the absorption band shifts may be ascribed to variations in surface concentration, strain due to the atomic size difference, variations in bonding energy and electron density, and changes in surface geometry with surface concentration. Studies of CO adsorption on Ru/Ni alloy cluster surfaces by LEED and Auger spectroscopy, of the interaction between the Ru/Ni alloy clusters and SiO2, and MO calculations for the system would be needed to clarify these phenomena.

A Study on a Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.135-149
    • /
    • 2020
  • Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are actively being developed, making up for the weaknesses of traditional asset allocation methods and replacing the parts that are difficult for those methods. A robo-advisor makes automated investment decisions with artificial intelligence algorithms and is used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets; it avoids investment risk structurally, so it is stable in the management of large funds and has been widely used in finance. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It not only scales to billions of examples in limited-memory environments but also learns very fast compared to traditional boosting methods, and it is frequently used in many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk to the covariance estimation process. Because an optimized asset allocation model estimates investment proportions from historical data, estimation errors arise between the estimation period and the actual investment period, and these errors adversely affect the optimized portfolio's performance. This study aims to improve the stability and portfolio performance of the model by predicting the volatility of the next investment period and thereby reducing the estimation errors of the optimized asset allocation model. As a result, it narrows the gap between theory and practice and proposes a more advanced asset allocation model.
For the empirical test of the proposed model, we used Korean stock market price data for a total of 17 years, from 2003 to 2019. The data sets cover the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions with a moving-window method of 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results, and analyzed portfolio performance in terms of cumulative rate of return over this long period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative return and estimation error. The total cumulative return is 45.748%, about 5 percentage points higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. Many financial and asset allocation models are limited in practical investment by the fundamental question of whether the past characteristics of assets will persist into the future in a changing financial market. This study, however, not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting asset risks with a recent algorithm. While there are various studies on parametric estimation methods for reducing estimation errors in portfolio optimization, we suggest a new method that reduces the estimation errors of an optimized asset allocation model using machine learning.
This study is therefore meaningful in that it proposes an advanced artificial intelligence asset allocation model for fast-developing financial markets.
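The core idea of feeding predicted volatilities into risk parity can be illustrated with the simplest, inverse-volatility form of risk parity. This is only a sketch, not the paper's full covariance-based optimization, and the volatility figures are made up:

```python
def inverse_volatility_weights(predicted_vols):
    """Naive risk parity: weight each asset by the inverse of its
    volatility so that each contributes roughly equal risk.
    In the paper's setting, the volatilities would come from an
    XGBoost forecast of the next investment period rather than
    from plain historical estimates."""
    inv = [1.0 / v for v in predicted_vols]
    total = sum(inv)
    return [w / total for w in inv]

# Hypothetical forecast volatilities for three sectors
weights = inverse_volatility_weights([0.10, 0.20, 0.40])
```

The full model would instead plug the forecast volatilities into the covariance matrix and solve for weights equalizing each asset's risk contribution; the sketch above only conveys the "lower predicted risk, higher weight" direction of that computation.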

Professional Speciality of Communication Administration and, Occupational Group and Series Classes of Position in National Public Official Law -for Efficiency of Telecommunication Management- (통신행정의 전문성과 공무원법상 직군렬 - 전기통신의 관리들 중심으로-)

  • 조정현
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.3 no.1
    • /
    • pp.26-27
    • /
    • 1978
  • It can be expected that intelligence and knowledge will be the core of the post-industrial society in the near future. Accordingly, the age of intelligence will accelerate until we find ourselves in an age of 'communication' service enterprises. Communication activities will increase in efficiency and multiply in utility when supported by scientific principles and legal ideas. The two basic elements of communication activity, namely communication stations and communication personnel, can perform their functions only when properly supported and managed by government administration. Since communication activity is composed of various factors, elements such as communication stations and officials must be cultivated and managed by specialists or experts through continuous and extensive study. With this in mind, this study reviewed our public service officials law with a view to improving it, providing suggestions for communication experts and researchers to find suitable positions within the framework of government administration. In this study, I suggest an 'Occupational Group of Communication' consisting of a series of communication management positions and research positions in parallel with the existing series of communication technical positions. The communication specialist or expert is required to be qualified with the necessary scientific knowledge and techniques of communication, as well as the prerequisites of a government service official. Communication experts must first obtain the relevant government licence, in accordance with government law and regulation and international custom, before they can be appointed to official positions. This licence-prior-to-appointment system applies principally to the communication management positions.
Communication research positions are for those who engage in study and research in fields of both a management and a technical nature. It is hoped that efficient and extensive management of communication activities, as well as scientific and continuous study of the communication enterprise, will be upgraded at the national level.


The Present State of Domestic Acceptance of Various International Conventions for the Prevention of Marine Pollution (해양오염방지를 위한 각종 국제협약의 국내 수용 현황)

  • Kim, Kwang-Soo
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.12 no.4 s.27
    • /
    • pp.293-300
    • /
    • 2006
  • Domestic laws such as the Korea Marine Pollution Prevention Law (KMPPL), which has been made and amended according to the conclusion and amendment of various international conventions for the prevention of marine pollution such as MARPOL 73/78, were reviewed and compared with the major contents of the relevant international conventions. Alternative measures for legislating new laws or amending existing laws such as the KMPPL to accept the major contents of existing international conventions were proposed. Annex VI of MARPOL 73/78, which adopted regulations for the prevention of air pollution from ships, has recently been accepted into the KMPPL, which should be applied to ships as moving sources of air pollution at sea, rather than into the Korea Air Environment Conservation Law, which applies to automobiles and industrial installations on land. The major contents of LC 72/95 have been accepted into the KMPPL. However, a few of the substances requiring special care in Annex II of the 72 LC, a few of the items on the characteristics and composition of matter in the criteria governing the issue of permits for dumping at sea in Annex III of the 72 LC, and a few of the items on wastes or other matter that may be considered for dumping in Annex I of the 96 Protocol have not yet been accepted into the KMPPL. The major contents of OPRC 90 have been accepted into the KMPPL; however, oil pollution emergency plans for sea ports and oil handling facilities, and a national contingency plan for preparedness and response, have not. The waste oil related articles of the Basel Convention, which regulates and prohibits the transboundary movement of hazardous waste, should be accepted into the KMPPL in order to prevent the transfer of scrap-purpose tanker ships containing oil/water mixtures and chemicals remaining on board from advanced countries to developing and/or underdeveloped countries.
The International Convention for the Control of Harmful Anti-Fouling Systems on Ships should be accepted into the KMPPL rather than into the Korea Noxious Chemicals Management Law. The International Convention for Ships' Ballast Water/Sediment Management should be accepted into the KMPPL, or by a new law, in order to protect the domestic marine ecosystem and coastal environment from the invasion of harmful exotic species through the discharge of ships' ballast water.


A Study on the Determinants of Patent Citation Relationships among Companies : MR-QAP Analysis (기업 간 특허인용 관계 결정요인에 관한 연구 : MR-QAP분석)

  • Park, Jun Hyung;Kwahk, Kee-Young;Han, Heejun;Kim, Yunjeong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.21-37
    • /
    • 2013
  • Recently, with the advent of the knowledge-based society, more people are becoming interested in intellectual property. In particular, the ICT companies leading the high-tech industry are striving for systematic management of intellectual property. Patent information represents the intellectual capital of a company, and quantitative analysis of the continuously accumulated patent information has now become possible at various levels, ranging from the patent level to the enterprise, industrial, and country levels. Through patent information, we can identify technology status and analyze its impact on performance, and through network analysis we can trace the flow of knowledge, thereby not only identifying changes in technology but also predicting the direction of future research. In the field of network analysis, two important analyses utilize patent citation information: citation indicator analysis, based on citation frequency, and network analysis, based on citation relationships. This study further analyzes whether company size has any impact on patent citation relationships. 74 S&P 500 listed companies that provide IT and communication services were selected for this study. In order to determine the patent citation relationships between the companies, patent citations in 2009 and 2010 were collected and sociomatrices showing the patent citation relationships between the companies were created. In addition, the companies' total assets were collected as an index of company size. The distance between companies is defined as the absolute value of the difference between their total assets, and the simple differences are taken to describe the hierarchy between companies.
QAP correlation analysis and MR-QAP analysis were carried out using the distance and hierarchy between companies and the sociomatrices of patent citations in 2009 and 2010. The QAP correlation analysis shows that the 2009 and 2010 patent citation networks have the highest correlation with each other. In addition, a positive correlation is found between the patent citation relationships and the distance between companies, because patent citation increases when there is a size difference between companies. A negative correlation is found between the patent citation relationships and the hierarchy between companies, indicating that the patents of higher-tier companies tend to be highly evaluated and cited by lower-tier companies. The MR-QAP analysis was carried out as follows: the sociomatrix generated from the 2010 patent citation relationships was used as the dependent variable, while the 2009 patent citation network and the distance and hierarchy networks between the companies were used as the independent variables, in order to find the main factors influencing the patent citation relationships between the companies in 2010. The results show that all independent variables positively influence the 2010 patent citation relationships. In particular, the 2009 patent citation relationships have the most significant impact on those of 2010, which means that patent citation relationships are persistent over time. The results of the QAP correlation and MR-QAP analyses confirm that patent citation relationships between companies are affected by company size.
But the most significant factor is the patent citation relationships formed in the past. Maintaining patent citation relationships between companies matters because such relationships can be strategically important for sharing intellectual property, and they also serve as a useful signal of partner companies with which to cooperate.
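The QAP correlation step described above can be sketched as follows: correlate the off-diagonal cells of two sociomatrices, then judge significance by recomputing the correlation under random simultaneous row/column permutations of one matrix. This is a generic toy version (small binary matrices, plain Pearson correlation), not the authors' exact procedure:

```python
import random

def offdiag(m):
    """Flatten the off-diagonal cells of a square matrix (dyads only)."""
    n = len(m)
    return [m[i][j] for i in range(n) for j in range(n) if i != j]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def qap_correlation(m1, m2, n_perm=500, seed=0):
    """Observed correlation plus a permutation p-value: rows and columns
    of m2 are permuted together (node relabelling), preserving its
    structure, to build the null distribution."""
    rng = random.Random(seed)
    n = len(m1)
    obs = pearson(offdiag(m1), offdiag(m2))
    extreme = 0
    for _ in range(n_perm):
        perm = list(range(n))
        rng.shuffle(perm)
        permuted = [[m2[perm[i]][perm[j]] for j in range(n)] for i in range(n)]
        if abs(pearson(offdiag(m1), offdiag(permuted))) >= abs(obs):
            extreme += 1
    return obs, extreme / n_perm
```

MR-QAP extends the same permutation logic to a regression with several matrix-valued predictors, which is what the study uses for the 2010 citation network.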

A Study on the Regional Characteristics of Broadband Internet Termination by Coupling Type using Spatial Information based Clustering (공간정보기반 클러스터링을 이용한 초고속인터넷 결합유형별 해지의 지역별 특성연구)

  • Park, Janghyuk;Park, Sangun;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.45-67
    • /
    • 2017
  • According to the Internet Usage Research performed in 2016, the number of internet users and internet usage have been increasing, and the smartphone, compared to the computer, is taking a more dominant role as an internet access device. As the number of smart devices increases, some expect the demand for high-speed internet to decrease; however, the high-speed internet market is expected to grow slightly for a while due to the speedup of Giga Internet and the growth of the IoT market. As the broadband internet market saturates, telecom operators are over-competing to win new customers, but if they understand the causes of customer churn, marketing costs can be reduced through more effective marketing. In this study, we analyzed the relationship between the cancellation rates of telecommunication products and the factors affecting them by combining data for three cities, Anyang, Gunpo, and Uiwang, owned by a telecommunication company, with regional data from KOSIS (Korean Statistical Information Service). In particular, we assumed that neighboring areas affect the distribution of cancellation rates by coupling type, so we conducted spatial cluster analysis on the three types of cancellation rates of each region using the spatial analysis tool SaTScan, and analyzed the various relationships between the cancellation rates and the regional data. In the analysis phase, we first summarized the characteristics of the clusters derived by combining spatial information with the cancellation data. Next, based on the cluster analysis results, analysis of variance, correlation analysis, and regression analysis were used to analyze the relationship between the cancellation data and the regional data. Based on the results, we proposed appropriate marketing methods for each region.
Unlike previous studies on regional characteristics, this study is academically differentiated in that it performs clustering based on spatial information, so that adjacent regions with similar cancellation patterns are grouped together. In addition, since few previous studies on the determinants of subscription to high-speed internet services have considered regional characteristics, we analyzed the relationship between the clusters and the regional characteristics data, assuming that different factors matter in different regions. We thereby sought more efficient marketing methods that consider the characteristics of each region in new subscriptions and customer management for high-speed internet. The analysis of variance confirmed significant differences in regional characteristics among the clusters; the correlation analysis shows stronger correlations within the clusters than across all regions; and the regression analysis shows that the cancellation rate differs depending on regional characteristics, so that differentiated marketing can be targeted at each region. The biggest limitation of this study was the difficulty of obtaining enough data for the analysis. In particular, it is difficult to find variables that represent regional characteristics at the Dong (neighborhood) level: most data are disclosed at the city level rather than the Dong level, which limited detailed analysis. Data such as income, card usage information, and the telecommunication company's policies or characteristics, which could affect cancellations, were not available at the time. The most urgent requirement for a more sophisticated analysis is to obtain Dong-level data on regional characteristics.
A direction for future studies is target marketing based on these results. It would also be meaningful to analyze the effect of marketing by comparing results before and after target marketing, and to use clusters based on new subscription data as well as cancellation data.
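The idea of grouping adjacent regions with similar cancellation rates can be illustrated with a toy adjacency-based clustering. Note that this is far simpler than SaTScan's likelihood-ratio scan statistic; it only conveys the "spatially contiguous cluster" notion, and the region names, rates, and adjacency map are hypothetical:

```python
from collections import deque

def spatial_clusters(rates, adjacency, tol=0.05):
    """Group regions into clusters by breadth-first search: a neighbor
    joins the cluster if its cancellation rate differs from the current
    region's by at most `tol`. Non-adjacent regions never merge, so
    every cluster is spatially contiguous."""
    visited = set()
    clusters = []
    for start in rates:
        if start in visited:
            continue
        cluster = {start}
        visited.add(start)
        queue = deque([start])
        while queue:
            region = queue.popleft()
            for nb in adjacency.get(region, ()):
                if nb not in visited and abs(rates[nb] - rates[region]) <= tol:
                    visited.add(nb)
                    cluster.add(nb)
                    queue.append(nb)
        clusters.append(cluster)
    return clusters

# Hypothetical Dong-level rates: A and B are adjacent and similar, C is an outlier
clusters = spatial_clusters(
    {"A": 0.10, "B": 0.12, "C": 0.30},
    {"A": ["B", "C"], "B": ["A"], "C": ["A"]},
)
```

A scan-statistic approach would instead slide circular windows over region coordinates and test each window's rate against the outside rate, but the contiguity constraint shown here is the common core.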

A Study on Recent Research Trend in Management of Technology Using Keywords Network Analysis (키워드 네트워크 분석을 통해 살펴본 기술경영의 최근 연구동향)

  • Kho, Jaechang;Cho, Kuentae;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.101-123
    • /
    • 2013
  • Recently, due to advances in science and information technology, the socio-economic landscape is shifting from an industrial economy to a knowledge economy. Companies need to create new value through continuous innovation, development of core competencies and technologies, and technological convergence. Therefore, identifying major trends in technology research and making interdisciplinary, knowledge-based predictions of integrated technologies and promising techniques are required for firms to gain and sustain competitive advantage and future growth engines. The aim of this paper is to understand recent research trends in management of technology (MOT) and to foresee promising technologies with deep knowledge of both technology and business, and thereby to offer a clear way to find new technological value for constant innovation and to capture core technologies and technology convergence. Bibliometrics is the quantitative analysis of the characteristics of a literature. Traditional bibliometrics is limited in that it cannot relate trends in technology management to the technology itself, since it focuses on quantitative indices such as citation frequency. To overcome this, network-focused bibliometrics, which mainly uses co-citation and co-word analysis, has been used instead. In this study, a keyword network analysis, a form of social network analysis, is performed to analyze recent research trends in MOT. For the analysis, we collected keywords from research papers published in international journals related to MOT between 2002 and 2011, constructed a keyword network, and then conducted the keyword network analysis. Over the past 40 years, studies of social networks have attempted to understand social interactions through the network structure represented by connection patterns.
In other words, social network analysis has been used to explain the structures and behaviors of various social formations such as teams, organizations, and industries. In general, social network analysis uses data in matrix form. In our context, the matrix depicts the relations between rows (papers) and columns (keywords), where the relations are binary. Even though there are no direct relations between published papers, relations can be derived from the paper-keyword matrix, in which each cell is 1 if the paper includes the keyword and 0 otherwise. For example, a keyword network can be configured by connecting papers that share one or more keywords. After constructing the keyword network, we analyzed keyword frequency, the structural characteristics of the network, preferential attachment and the growth of new keywords, components, and centrality. The results of this study are as follows. First, a paper has 4.574 keywords on average; 90% of keywords were used three or fewer times over the past 10 years, and about 75% of keywords appeared only once. Second, the keyword network in MOT is a small-world network and a scale-free network in which a small number of keywords tend toward a monopoly. Third, the gap between the rich (keywords with more edges) and the poor (keywords with fewer edges) in the network is growing over time. Fourth, most newly entering keywords become poor nodes within about 2~3 years. Finally, the keywords with high degree centrality, betweenness centrality, and closeness centrality are "Innovation," "R&D," "Patent," "Forecast," "Technology transfer," "Technology," and "SME."
We hope that the results of this analysis will help MOT researchers identify major trends in technology research and serve as useful reference information when they seek consilience with other fields of study and select new research topics.

Dynamic Virtual Ontology using Tags with Semantic Relationship on Social-web to Support Effective Search (효율적 자원 탐색을 위한 소셜 웹 태그들을 이용한 동적 가상 온톨로지 생성 연구)

  • Lee, Hyun Jung;Sohn, Mye
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.19-33
    • /
    • 2013
  • In this research, the proposed Dynamic Virtual Ontology using Tags (DyVOT) supports dynamic search for resources that match a user's requirements, using tags drawn from social-web resources. Tags are generally short annotations, written as series of words, that social users attach to information resources such as web pages, images, YouTube videos, and the like; tags therefore characterize and mirror the resources they describe and can serve as metadata matched to those resources. Consequently, semantic relationships between tags can be extracted from the dependencies among tags as representatives of resources. Doing so is difficult, however, because free-form tags contain synonyms and homonyms. Research on folksonomies has therefore applied semantic classification of tag words to resolve synonymy, and some research focuses on clustering and/or classifying resources by the semantic relationships among tags. These approaches remain limited, though, because they concentrate on semantic hyper/hypo (broader/narrower) relationships or on clustering among tags without considering the conceptual associative relationships between the classified or clustered groups, which makes it difficult to search resources effectively according to user requirements. In this research, the proposed DyVOT uses tags to construct an ontology for effective search. We assume that tags are extracted from user requirements and used to construct multiple sub-ontologies, one for each combination of some or all of the tags. The DyVOT then constructs an ontology based on hierarchical and associative relationships among tags for effective search of a solution. This ontology is composed of a static ontology and a dynamic ontology.
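As a minimal sketch of the sub-ontology construction step, assuming that "combinations of tags that are composed of a part of the tags or all" means every non-empty subset of the user's tags (the tag names here are illustrative, not from the paper):

```python
# Hypothetical sketch: enumerate the multi sub-tags as all non-empty
# combinations of the tags extracted from a user requirement. Each sub-tag
# would then select the static-ontology paths that contain it.
from itertools import combinations

user_tags = ["cloud", "semantic", "search"]  # illustrative user-requirement tags

sub_tags = [
    set(combo)
    for size in range(1, len(user_tags) + 1)
    for combo in combinations(user_tags, size)
]
print(len(sub_tags))  # 2^3 - 1 = 7 non-empty combinations
```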
The static ontology defines semantic hierarchical hyper/hypo relationships among tags, as in the tag tree at http://semanticcloud.sandra-siegel.de/. From the static ontology, the DyVOT extracts multiple sub-ontologies using multiple sub-tags, each constructed from a subset of the tags; each sub-ontology consists of the hierarchy paths that contain its sub-tag. To create the dynamic ontology, the DyVOT must define associative relationships among the sub-ontologies extracted from the hierarchical relationships of the static ontology. An associative relationship is defined by the resources shared between tags linked through different sub-ontologies, and its strength is measured by the degree of shared resources allocated to the tags of the sub-ontologies. If the association value exceeds a threshold, a new associative relationship between the tags is created. These associative relationships are used to merge the sub-ontologies and construct a new hierarchy. Building the dynamic ontology requires defining a new class that links two or more sub-ontologies, generated by merging tags whose high association has been established through shared resources. The new class is placed in the dynamic ontology with new hyper/hypo relationships between the class and the tags of the linked sub-ontologies, generating a new hierarchy over the extracted sub-ontologies. Finally, the DyVOT is produced from these newly defined associative relationships extracted from the hierarchical relationships among tags. Resources are matched into the DyVOT, which narrows the search boundary and shortens the search paths.
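The association test described above can be sketched as follows. The paper does not give a concrete formula or threshold, so the Jaccard-style ratio, the 0.5 cut-off, and the tag and resource names are all illustrative assumptions:

```python
# Hypothetical sketch of the DyVOT associative-relationship test: link two
# tags from different sub-ontologies when the proportion of resources they
# share exceeds a threshold. Measure and threshold are assumptions.
from itertools import combinations

def association(resources_a, resources_b):
    """Degree of shared resources between two tags (Jaccard-style ratio)."""
    return len(resources_a & resources_b) / len(resources_a | resources_b)

THRESHOLD = 0.5  # assumed cut-off for creating an associative relationship

tag_resources = {
    "semantic-web": {"r1", "r2", "r3", "r4"},  # resources tagged with this tag
    "ontology": {"r2", "r3", "r4"},
    "photography": {"r9"},
}

# New associative relationships: pairs whose association exceeds the threshold.
links = [
    (a, b)
    for a, b in combinations(tag_resources, 2)
    if association(tag_resources[a], tag_resources[b]) > THRESHOLD
]
print(links)
```

In the DyVOT, each such link would trigger the creation of a new class merging the two tags' sub-ontologies into one hierarchy.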
While a static data catalog (Dean and Ghemawat, 2004; 2008) searches resources statically against user requirements, the proposed DyVOT searches them dynamically using multiple sub-ontologies processed in parallel. In this light, the DyVOT improves the correctness and agility of search and reduces search effort by shortening search paths.