• Title/Summary/Keyword: Verification System (검증 시스템)


Case study on flood water level prediction accuracy of LSTM model according to condition of reference hydrological station combination (참조 수문관측소 구성 조건에 따른 LSTM 모형 홍수위예측 정확도 검토 사례 연구)

  • Lee, Seungho;Kim, Sooyoung;Jung, Jaewon;Yoon, Kwang Seok
    • Journal of Korea Water Resources Association / v.56 no.12 / pp.981-992 / 2023
  • Due to recent global climate change, the scale of flood damage is increasing as rainfall becomes more concentrated and more intense. Rainfall on a scale not observed in the past may occur, along with unprecedentedly long rainy seasons. Such damage is concentrated in ASEAN countries, where many people are affected by frequent flooding caused by typhoons and torrential rain. In particular, the Bandung region, located in the Upper Citarum River basin in Indonesia, lies in basin-shaped topography that makes it highly vulnerable to flooding. Accordingly, a flood forecasting and warning system for the Upper Citarum River basin was established in 2017 through Official Development Assistance (ODA) and is currently in operation. Nevertheless, the basin remains exposed to the risk of human and property damage in the event of a flood, so continued efforts to reduce damage through fast and accurate flood forecasting are needed. Therefore, in this study, an artificial intelligence-based river flood water level forecasting model for Dayeu Kolot as the target station was developed using 10-minute hydrological data from 4 rainfall stations and 1 water level station. Using 10-minute hydrological observation data from 6 stations from January 2017 to January 2021, training, validation, and testing were performed for lead times of 0.5, 1, 2, 3, 4, 5, and 6 hours, with LSTM applied as the artificial intelligence algorithm. The model showed good fit and low error for all lead times, and a review of prediction accuracy under different training dataset conditions showed that accuracy similar to that obtained using all observation stations can be secured even with few reference stations, suggesting that the approach can be used to build efficient artificial intelligence-based models.
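
As a rough illustration of the modeling step described above, the sketch below builds a small LSTM that maps a window of 10-minute multi-station observations to the water level at a chosen lead time. The number of input series, window length, network size, and the synthetic data are illustrative assumptions, not the configuration reported in the paper.

```python
# A minimal sketch of an LSTM water-level forecaster. Five input series stand in
# for the rainfall/water-level stations; window and lead settings are illustrative.
import numpy as np
import tensorflow as tf

def make_windows(features, target, window=36, lead_steps=6):
    """Slice time series into (history window, lead-time target) pairs.
    window=36 -> 6 hours of 10-minute history; lead_steps=6 -> 1-hour lead time."""
    X, y = [], []
    for t in range(window, len(features) - lead_steps):
        X.append(features[t - window:t])
        y.append(target[t + lead_steps])
    return np.array(X), np.array(y)

# toy data standing in for the 2017-2021 observations
feats = np.random.rand(1000, 5).astype("float32")   # 5 station series
level = np.random.rand(1000).astype("float32")      # target station water level
X, y = make_windows(feats, level)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(36, 5)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1),   # predicted stage at the chosen lead time
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
```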

A Study on the Continuous Usage Intention Factors of O2O Service (O2O 서비스의 지속사용의도에 미치는 영향요인 연구)

  • Sung Yong Jung;Jin Soo Kim
    • Information Systems Review / v.20 no.4 / pp.1-23 / 2018
  • Smartphones have spread widely around the world and let people enjoy online shopping at any time and in any place. Recently, they have also changed the distribution environment. O2O (Online-to-Offline) service has become the new normal thanks to the convenience of shopping for products and services. Although the O2O service market shows steady and steep growth, it is reported that 80% of these businesses are discontinued within the first year because of unstable business models, customer dissatisfaction, and distrust of the service. Finding the factors that promote continuous usage intention of O2O services is therefore a very important research issue. Previous studies considered only online characteristics and lacked analysis of offline characteristics and social impact factors. The purpose of this paper is to identify the factors behind continuous usage intention of O2O services through a literature review, case analysis, and empirical testing. A comprehensive research model and related hypotheses were developed and tested using structural equation modeling. A survey was carried out among users who had used an O2O service, including its payment service, at least once. Finally, 611 samples were selected out of a total of 813 responses. The results show that the model is theoretically supported and that 12 out of 17 hypotheses are accepted. The contribution of this paper is that it provides a new theoretical research model of continuous usage intention factors as well as practical guidelines for promoting continuous usage and growth strategies for O2O services.

A Study on the Change of Cyber Attacks in North Korea (북한의 사이버 공격 변화 양상에 대한 연구)

  • Chanyoung Park;Hyeonsik Kim
    • The Journal of the Convergence on Culture Technology / v.10 no.4 / pp.175-181 / 2024
  • The U.N. Security Council's North Korea Sanctions Committee estimated that North Korea's cyberattacks on virtual asset-related companies from 2017 to 2023 amounted to about 4 trillion won. Restricted from earning foreign currency by international economic sanctions, North Korea has secured funds through cryptocurrency hacking; it has also engaged in technology theft against defense companies, and the illicit assets are used to maintain the Kim Jong-un regime and to develop nuclear weapons and missiles. After North Korea conducted its sixth nuclear test on September 3, 2017, and declared the completion of its national nuclear armament following the launch of an intercontinental ballistic missile on November 29 of the same year, the U.N. imposed sanctions on North Korea that are considered the strongest economic sanctions in history. In this difficult economic situation, North Korea tried to overcome the crisis through cyberattacks, and an analysis of North Korean attack cases shows how their character has changed. In the first period, from 2009 to 2016, the strategic goal was to verify and show off North Korea's cyber capabilities by paralyzing national networks and seizing information, with the apparent intention of creating social chaos in South Korea. In the second stage, when foreign currency earnings were limited by the 2016 sanctions, North Korea seized virtual currency and secured funds to maintain the Kim Jong-un regime and advance nuclear and missile development. The third stage consists of technology hacking of domestic and foreign defense companies, focused on taking over key technologies needed to achieve the five strategic weapons tasks proposed by Chairman Kim Jong-un at the 8th Party Congress in 2021. At the national level, security measures against North Korea's cyberattacks should be established for private companies as well as state agencies, and measures concerning legal systems, technical problems, and science-related budgets are urgently needed. It is also necessary to establish systems and manpower to respond to ever-evolving cyberattacks, focusing on cultivating and securing professional personnel such as white-hat hackers.

An Empirical Study on Influencing Factors of Switching Intention from Online Shopping to Webrooming (온라인 쇼핑에서 웹루밍으로의 쇼핑전환 의도에 영향을 미치는 요인에 대한 연구)

  • Choi, Hyun-Seung;Yang, Sung-Byung
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.19-41 / 2016
  • Recently, the proliferation of mobile devices such as smartphones and tablet personal computers and the development of information communication technologies (ICT) have led to a major shift from single-channel shopping to multi-channel shopping. With the emergence of a "smart" group of consumers who want to shop in more reasonable and convenient ways, the boundaries dividing online and offline shopping have collapsed and blurred more than ever before, and there is now fierce competition between online and offline channels. Ever since the emergence of online shopping, a major type of multi-channel shopping has been "showrooming," where consumers visit offline stores to examine products before buying them online. However, because of the growing use of smart devices and the counterattack of offline retailers represented by omni-channel marketing strategies, one of the latest major shopping trends is "webrooming," where consumers visit online stores to examine products before buying them offline. This has become a threat to online retailers. In this situation, although it is very important to examine the factors influencing the switch from online shopping to webrooming, most prior studies have mainly focused on a single- or multi-channel shopping pattern. Therefore, this study thoroughly investigated the factors influencing customers' switch from online shopping to webrooming in terms of both the "search" and "purchase" processes through the application of a push-pull-mooring (PPM) framework. In order to test the research model, 280 individual samples were gathered from undergraduate and graduate students who had actual experience with webrooming. The results of the structural equation model (SEM) test revealed that the "pull" effect on webrooming intention is stronger than the "push" or "mooring" effects, confirming a significant relationship between "attractiveness of webrooming" and "webrooming intention." In addition, the results showed that both the "perceived risk of online search" and the "perceived risk of online purchase" significantly affect "distrust of online shopping." Similarly, both the "perceived benefit of multi-channel search" and the "perceived benefit of offline purchase" were found to have significant effects on "attractiveness of webrooming." Furthermore, the results indicated that "online purchase habit" is the only influencing factor that leads to "online shopping lock-in." The theoretical implications of the study are as follows. First, by examining the multi-channel shopping phenomenon from the perspective of "shopping switching" from online shopping to webrooming, this study complements the limits of the "channel switching" perspective, represented by multi-channel free-riding studies that merely focused on customers' switching behaviors from one channel to another. While extant studies with a channel switching perspective have focused on only one type of multi-channel shopping, where consumers simply move from one particular channel to different channels, a study with a shopping switching perspective has the advantage of comprehensively investigating how consumers choose and navigate among diverse types of single- or multi-channel shopping alternatives. In this study, only the limited shopping switching behavior from online shopping to webrooming was examined; however, the results should explain various phenomena in a more comprehensive manner from the perspective of shopping switching.
Second, this study extends the scope of application of the push-pull-mooring framework, which is commonly used in marketing research to explain consumers' product switching behaviors. Through the application of this framework, it is hoped that more diverse shopping switching behaviors can be examined in future research; this study can serve as a stepping stone for such studies. One of the most important practical implications is that the study may help single- and multi-channel retailers develop more specific customer strategies by revealing the factors that influence webrooming intention among online shoppers. For example, online single-channel retailers can ease distrust of online shopping and prevent consumers from churning by reducing the perceived risk of online search and purchase. On the other hand, offline retailers can develop specific strategies to increase the attractiveness of webrooming by letting customers perceive the benefits of multi-channel search or offline purchase. Although this study focused only on customers switching from online shopping to webrooming, the results can be extended to various types of shopping switching behaviors embedded in single- and multi-channel shopping environments, such as showrooming and mobile shopping.
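
As an illustration of how such PPM paths could be specified and tested, the sketch below fits a simplified path model with the semopy package on synthetic data. The construct names follow the abstract, but the single-indicator (composite score) treatment, the fabricated data, and the package choice are assumptions made for a self-contained example; the authors' full measurement model and survey data are not reproduced here.

```python
# A minimal path-model sketch of the PPM relationships described above,
# assuming each construct is a single composite score. Data are synthetic.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(0)
n = 280  # same sample size as the study; the values themselves are random
df = pd.DataFrame({
    "risk_search": rng.normal(size=n),
    "risk_purchase": rng.normal(size=n),
    "benefit_search": rng.normal(size=n),
    "benefit_purchase": rng.normal(size=n),
    "purchase_habit": rng.normal(size=n),
})
# generate downstream constructs so the paths have something to recover
df["distrust_online"] = df["risk_search"] + df["risk_purchase"] + rng.normal(size=n)
df["attractiveness"] = df["benefit_search"] + df["benefit_purchase"] + rng.normal(size=n)
df["lock_in"] = df["purchase_habit"] + rng.normal(size=n)
df["webrooming_intention"] = (2 * df["attractiveness"] + df["distrust_online"]
                              - df["lock_in"] + rng.normal(size=n))

# structural paths roughly mirroring the abstract's push/pull/mooring findings
desc = """
distrust_online ~ risk_search + risk_purchase
attractiveness ~ benefit_search + benefit_purchase
lock_in ~ purchase_habit
webrooming_intention ~ attractiveness + distrust_online + lock_in
"""
model = semopy.Model(desc)
model.fit(df)
print(model.inspect())  # path estimates and p-values
```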

Improved Social Network Analysis Method in SNS (SNS에서의 개선된 소셜 네트워크 분석 방법)

  • Sohn, Jong-Soo;Cho, Soo-Whan;Kwon, Kyung-Lag;Chung, In-Jeong
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.117-127 / 2012
  • Due to the recent expansion of Web 2.0-based services, along with the widespread use of smartphones, online social network services have become popular among users. Online social network services are online community services that enable users to communicate with each other, share information, and expand human relationships. In social network services, relationships between users are represented by a graph consisting of nodes and links. As the number of users of online social network services increases rapidly, SNS are actively utilized in enterprise marketing, analysis of social phenomena, and so on. Social Network Analysis (SNA) is a systematic way to analyze the social relationships among the members of a social network using network theory. In general, a social network consists of nodes and arcs and is often depicted in a social network diagram, in which nodes represent individual actors within the network and arcs represent relationships between the nodes. With SNA, we can measure relationships among people, such as the degree of intimacy, the intensity of connection, and the classification of groups. Ever since Social Networking Services (SNS) drew the attention of millions of users, numerous studies have been made to analyze their user relationships and messages. The typical representative SNA methods are degree centrality, betweenness centrality, and closeness centrality. Degree centrality analysis does not consider shortest paths between nodes; however, shortest paths are a crucial factor in betweenness centrality, closeness centrality, and other SNA methods. In previous SNA research, computation time was not a major concern because the social networks studied were small. Unfortunately, most SNA methods require significant time to process relevant data, which makes it difficult to apply them to the ever-increasing SNS data. For instance, if the number of nodes in an online social network is n, the maximum number of links is n(n-1)/2; if the number of nodes is 10,000, the number of links can be as large as 49,995,000, which makes the analysis very expensive. Therefore, we propose a heuristic-based method for finding the shortest path among users in the SNS user graph. Using this shortest path finding method, we show how efficient the proposed approach is by conducting betweenness centrality analysis and closeness centrality analysis, both of which are widely used in social network studies. Moreover, we devised an enhanced method that adds a best-first-search step and a preprocessing step to reduce computation time and rapidly find shortest paths in a huge online social network. Best-first search finds the shortest path heuristically, generalizing human experience. Because a large number of links is shared by only a few nodes in online social networks, most nodes have relatively few connections, and a node with many connections functions as a hub node. When searching for a particular node, looking first at users with numerous links instead of searching all users indiscriminately has a better chance of finding the desired node quickly. In this paper, we employ the degree of a user node vn as the heuristic evaluation function in a graph G = (N, E), where N is a set of vertices and E is a set of links between two different nodes.
Because a heuristic evaluation function is used, the worst case occurs when the target node is situated at the bottom of a skewed tree; a preprocessing step is conducted to handle such cases. We then find the shortest path between two nodes in the social network efficiently and analyze the network. To verify the proposed method, we crawled data on 160,000 people online and constructed a social network, and then compared the proposed method with previous methods (best-first search and breadth-first search) in terms of search and analysis time. The suggested method takes 240 seconds to search nodes, whereas the breadth-first-search-based method takes 1,781 seconds, i.e., the suggested method is about 7.4 times faster. Moreover, for social network analysis, the suggested method is 6.8 times faster for betweenness centrality analysis and 1.8 times faster for closeness centrality analysis. The proposed method thus shows the possibility of analyzing a large social network with better time performance. As a result, our method would improve the efficiency of social network analysis, making it particularly useful in studying social trends or phenomena.
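
The sketch below illustrates the core idea described above: a best-first search over the user graph that expands high-degree "hub" nodes first, using node degree as the heuristic evaluation function. The dictionary-of-sets graph representation and the tiny example are assumptions for illustration; the authors' preprocessing step and exact implementation are not reproduced, and a degree-guided search of this kind is heuristic, so it may not always return the true shortest path.

```python
# A minimal sketch of degree-guided best-first search between two users,
# assuming the graph is a dict {node: set(neighbors)}.
import heapq

def best_first_path(graph, source, target):
    # priority = -degree, so hub nodes (many links) are expanded first
    frontier = [(-len(graph[source]), source, [source])]
    visited = {source}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == target:
            return path
        for nbr in graph[node]:
            if nbr not in visited:
                visited.add(nbr)
                heapq.heappush(frontier, (-len(graph[nbr]), nbr, path + [nbr]))
    return None  # no path found

g = {"a": {"b", "c"}, "b": {"a", "d"}, "c": {"a", "d"}, "d": {"b", "c"}}
print(best_first_path(g, "a", "d"))
```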

Development of Predictive Models for Rights Issues Using Financial Analysis Indices and Decision Tree Technique (경영분석지표와 의사결정나무기법을 이용한 유상증자 예측모형 개발)

  • Kim, Myeong-Kyun;Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.59-77 / 2012
  • This study focuses on predicting which firms will increase capital by issuing new stocks in the near future. Many stakeholders, including banks, credit rating agencies, and investors, perform a variety of analyses of firms' growth, profitability, stability, activity, productivity, etc., and regularly report the firms' financial analysis indices. In this paper, we develop predictive models for rights issues using these financial analysis indices and data mining techniques. The study approaches the predictive models from two different angles. The first is the analysis period: we divide the analysis period into before and after the IMF financial crisis and examine whether there is a difference between the two periods. The second is the prediction horizon: in order to predict when firms will increase capital by issuing new stocks, the prediction horizon is categorized as one year, two years, and three years later. In total, six prediction models are therefore developed and analyzed. We employ the decision tree technique to build the prediction models for rights issues. The decision tree is a widely used prediction method that builds trees to label or categorize cases into a set of known classes. In contrast to neural networks, logistic regression, and SVM, decision tree techniques are well suited for high-dimensional applications and have strong explanation capabilities. There are well-known decision tree induction algorithms such as CHAID, CART, QUEST, and C5.0. Among them, we use the C5.0 algorithm, the most recently developed of these, which yields better performance than the others. We obtained the rights issue and financial analysis data from TS2000 of the Korea Listed Companies Association. A record of financial analysis data consists of 89 variables, which include 9 growth indices, 30 profitability indices, 23 stability indices, 6 activity indices, and 8 productivity indices. For model building and testing, we used 10,925 financial analysis records from a total of 658 listed firms. PASW Modeler 13 was used to build C5.0 decision trees for the six prediction models. A total of 84 variables among the financial analysis data were selected as input variables for each model, and the rights issue status (issued or not issued) was defined as the output variable. To develop the prediction models using the C5.0 node (Node Options: Output type = Rule set, Use boosting = false, Cross-validate = false, Mode = Simple, Favor = Generality), we used 60% of the data for model building and 40% for model testing. The results of the experimental analysis show that the prediction accuracies for the period after the IMF financial crisis (68.78% to 71.41%) are about 10 percentage points higher than those for the period before the crisis (59.04% to 60.43%). These results indicate that since the IMF financial crisis, the reliability of financial analysis indices has increased and firms' intentions regarding rights issues have become more evident. The experimental results also show that the stability-related indices have a major impact on conducting rights issues in the case of short-term prediction. On the other hand, the long-term prediction of conducting rights issues is affected by financial analysis indices on profitability, stability, activity, and productivity. All the prediction models include the industry code as one of the significant variables, which means that companies in different types of industries show different patterns of rights issues.
We conclude that it is desirable for stakeholders to take into account stability-related indices for short-term prediction and a wider range of financial analysis indices for long-term prediction. The current study has several limitations. First, we need to compare the differences in accuracy obtained with other data mining techniques such as neural networks, logistic regression, and SVM. Second, new prediction models should be developed and evaluated that include variables which research on capital structure theory has identified as relevant to rights issues.
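
For readers who want a concrete starting point, the sketch below trains a decision tree on placeholder data with a 60/40 split like the one described above. It uses scikit-learn's CART implementation rather than C5.0 (which has no standard Python implementation), and the feature names and data are stand-ins, not the TS2000 indices.

```python
# A minimal sketch of a decision-tree classifier for the rights-issue task.
# CART (scikit-learn) is used in place of the paper's C5.0; data are random.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))        # stand-in for the 84 financial indices
y = rng.integers(0, 2, size=1000)      # rights issue: issued / not issued

# 60/40 split, mirroring the model-building/test split described above
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.6, random_state=0)
tree = DecisionTreeClassifier(max_depth=4).fit(X_train, y_train)

print("test accuracy:", tree.score(X_test, y_test))
print(export_text(tree, feature_names=[f"index_{i}" for i in range(10)]))
```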

Product Evaluation Criteria Extraction through Online Review Analysis: Using LDA and k-Nearest Neighbor Approach (온라인 리뷰 분석을 통한 상품 평가 기준 추출: LDA 및 k-최근접 이웃 접근법을 활용하여)

  • Lee, Ji Hyeon;Jung, Sang Hyung;Kim, Jun Ho;Min, Eun Joo;Yeo, Un Yeong;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.97-117 / 2020
  • Product evaluation criteria are indicators describing the attributes or values of products, which enable users or manufacturers to measure and understand the products. When companies analyze their products or compare them with competitors', appropriate criteria must be selected for objective evaluation. The criteria should reflect the product features that consumers considered when they purchased, used, and evaluated the products. However, current evaluation criteria do not reflect consumers' differing opinions from product to product. Previous studies tried to use online reviews from e-commerce sites, which reflect consumer opinions, to extract product features and topics and use them as evaluation criteria. However, these approaches are still limited in that they produce criteria irrelevant to the products because extracted or improper words are not refined. To overcome this limitation, this research suggests an LDA-k-NN model that extracts candidate criteria words from online reviews using LDA and refines them with a k-nearest neighbor approach. The proposed approach starts with a preparation phase consisting of six steps. First, review data are collected from e-commerce websites. Most e-commerce websites classify their items into high-level, middle-level, and low-level categories; review data for the preparation phase are gathered from each middle-level category and later collapsed to represent a single high-level category. Next, nouns, adjectives, adverbs, and verbs are extracted from the reviews by obtaining part-of-speech information from a morpheme analysis module. After preprocessing, LDA produces the words of each topic in the reviews, and only the nouns among the topic words are chosen as candidate criteria words. Then, the words are tagged according to their suitability as criteria for each middle-level category. Next, every tagged word is vectorized with a pre-trained word embedding model. Finally, a k-nearest neighbor case-based approach is used to classify each word according to the tags. After the preparation phase, the criteria extraction phase is conducted for low-level categories. This phase starts by crawling reviews in the corresponding low-level category. The same preprocessing as in the preparation phase is conducted using the morpheme analysis module and LDA. Candidate criteria words are extracted by taking the nouns from the data and vectorizing them with the pre-trained word embedding model. Finally, evaluation criteria are extracted by refining the candidate criteria words using the k-nearest neighbor approach and the reference proportion of each word in the word set. To evaluate the performance of the proposed model, an experiment was conducted with reviews from '11st', one of the biggest e-commerce companies in Korea. Review data were taken from the 'Electronics/Digital' section, one of the high-level categories in 11st. For performance evaluation, three other models were compared with the suggested model: the actual criteria of 11st, a model that extracts nouns with the morpheme analysis module and refines them by word frequency, and a model that extracts nouns from LDA topics and refines them by word frequency. The performance evaluation was set up to predict the evaluation criteria of 10 low-level categories with the suggested model and the 3 models above. The criteria words extracted from each model were combined into a single word set, which was used for survey questionnaires. In the survey, respondents chose every item they considered an appropriate criterion for each category, and each model received a score when the chosen words had been extracted by that model.
The suggested model had higher scores than the other models in 8 out of 10 low-level categories. By conducting paired t-tests on the scores of each model, we confirmed that the suggested model shows better performance in 26 of 30 tests. In addition, the suggested model was the best model in terms of accuracy. This research proposes an evaluation criteria extraction method that combines topic extraction using LDA with refinement based on a k-nearest neighbor approach. The method overcomes the limits of previous dictionary-based models and frequency-based refinement models. This study can contribute to improving review analysis for deriving business insights in the e-commerce market.
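
The sketch below illustrates the core LDA-then-k-NN refinement idea on a toy corpus. The tiny review set, the tag dictionary, and training a small Word2Vec model in place of a pre-trained embedding are simplifications made for a self-contained example; they are not the authors' data, vocabulary, or tuned settings.

```python
# A minimal sketch of the LDA -> k-NN refinement pipeline, assuming reviews
# are already tokenized into nouns. All data and tags are illustrative.
from gensim.corpora import Dictionary
from gensim.models import LdaModel, Word2Vec
from sklearn.neighbors import KNeighborsClassifier

reviews = [["battery", "screen", "price", "delivery"],
           ["camera", "battery", "design", "price"],
           ["sound", "battery", "screen", "design"]]

# 1) topic words from LDA -> candidate criteria words (nouns only in the paper)
dictionary = Dictionary(reviews)
corpus = [dictionary.doc2bow(doc) for doc in reviews]
lda = LdaModel(corpus, id2word=dictionary, num_topics=2, random_state=0)
candidates = {dictionary[wid] for t in range(2)
              for wid, _ in lda.get_topic_terms(t, topn=5)}

# 2) embed words (a pre-trained embedding would be used in practice)
w2v = Word2Vec(reviews, vector_size=20, min_count=1, seed=0)

# 3) k-NN over tagged words from the preparation phase refines the candidates
tagged = {"battery": 1, "screen": 1, "design": 1, "delivery": 0}  # 1 = criterion
knn = KNeighborsClassifier(n_neighbors=1).fit(
    [w2v.wv[w] for w in tagged], list(tagged.values()))
criteria = [w for w in candidates
            if w in w2v.wv and knn.predict([w2v.wv[w]])[0] == 1]
print(criteria)
```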

A Study on Risk Parity Asset Allocation Model with XGBoost (XGBoost를 활용한 리스크패리티 자산배분 모형에 관한 연구)

  • Kim, Younghoon;Choi, HeungSik;Kim, SunWoong
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.135-149 / 2020
  • Artificial intelligence is changing the world, and financial markets are no exception. Robo-advisors are actively being developed, making up for the weaknesses of traditional asset allocation methods and replacing the parts that are difficult for those methods. They make automated investment decisions with artificial intelligence algorithms and are used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets; it avoids investment risk structurally, which gives it stability in the management of large funds, and it has been widely used in the financial field. The XGBoost model is a parallel tree-boosting method, an optimized gradient boosting model designed to be highly efficient and flexible. It can process billions of examples in limited memory environments and is very fast to train compared with traditional boosting methods, so it is frequently used in many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model and the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk to the covariance estimation process. Because the optimized asset allocation model estimates investment proportions from historical data, there are estimation errors between the estimation period and the actual investment period, and these errors adversely affect portfolio performance. This study aims to improve the stability and performance of the model by predicting the volatility of the next investment period and reducing the estimation errors of the optimized asset allocation model; in doing so, it narrows the gap between theory and practice and proposes a more advanced asset allocation model. For the empirical test of the suggested model, we used Korean stock market price data for 17 years, from 2003 to 2019. The data sets are composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results. We analyzed portfolio performance in terms of cumulative rate of return and obtained a large sample thanks to the long study period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative return and reduction of estimation errors: the total cumulative return was 45.748%, about 5 percentage points higher than that of the risk parity model, and the estimation errors were reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. The results of the experiment thus showed improved portfolio performance through reduced estimation errors in the optimized asset allocation model. Many financial and asset allocation models are of limited use in practical investment because of the most fundamental question of whether the past characteristics of assets will continue into the future in changing financial markets.
This study, however, not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting the risk of assets with a state-of-the-art algorithm. There are various studies on parametric estimation methods for reducing estimation errors in portfolio optimization; we suggest a new method that reduces the estimation errors of an optimized asset allocation model using machine learning. This study is therefore meaningful in that it proposes an advanced artificial intelligence asset allocation model for fast-developing financial markets.
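
The sketch below shows one simplified way to wire predicted volatility into a risk-parity-style weighting: XGBoost forecasts next-period volatility per sector, and the forecasts drive inverse-volatility weights. The random return data, the per-sector single-feature models, and the inverse-volatility shortcut (instead of a full covariance-based equal-risk-contribution optimization) are illustrative assumptions, not the paper's exact procedure.

```python
# A minimal sketch: predict next-period volatility per sector with XGBoost,
# then weight assets inversely to the predicted volatility. Data are random.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, size=(1020, 10))   # 10 sectors, daily returns

def rolling_vol(r, window=20):
    """Realized rolling volatility per sector."""
    return np.array([r[i - window:i].std(axis=0) for i in range(window, len(r))])

vol = rolling_vol(returns)
X = vol[:-1]          # today's realized volatility per sector ...
y = vol[1:]           # ... predicts the next period's volatility

pred_vol = np.empty(10)
for j in range(10):   # one simple model per sector, for illustration only
    m = xgb.XGBRegressor(n_estimators=100, max_depth=3)
    m.fit(X[:900, [j]], y[:900, j])
    pred_vol[j] = m.predict(X[-1:, [j]])[0]

# simplified risk parity: inverse-volatility weights from the forecasts
weights = (1.0 / pred_vol) / (1.0 / pred_vol).sum()
print(weights.round(3), weights.sum())
```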

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.1-21 / 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been used in various industries for practical applications. In the field of business intelligence, they have been employed to discover new market and/or technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies, and there has been continuous demand in various fields for market information at the specific product level. However, such information has generally been provided at the industry level or for broad categories based on classification standards, making it difficult to obtain specific and proper information. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural network based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows. First, data related to product information are collected, refined, and restructured into a form suitable for the Word2Vec model. Next, the preprocessed data are embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data of the extracted products are summed to estimate the market size of the product groups. As experimental data, product name text from Statistics Korea's microdata (345,103 cases) was mapped into a multidimensional vector space by Word2Vec training. We performed parameter optimization for training and then applied a vector dimension of 300 and a window size of 15 as the optimized parameters in further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently. Product names similar to KSIC index words were extracted based on cosine similarity, and the market size of the extracted products, treated as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional methods that rely on sampling or on multiple assumptions. In addition, the level of the market category can be easily and efficiently adjusted according to the purpose of information use by changing the cosine similarity threshold. Furthermore, the approach has high potential for practical application since it can resolve unmet needs for detailed market size information in the public and private sectors.
Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis report publishing by private firms. The limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantic word embedding module can be advanced by imposing a proper ordering on the preprocessed dataset or by combining Word2Vec with another algorithm such as Jaccard similarity. Also, the product group clustering method can be replaced with other types of unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect them to further improve the performance of the basic model conceptually proposed in this study.
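
The sketch below illustrates the bottom-up grouping step on a toy dataset: product names are embedded with Word2Vec, names similar to a seed term are grouped, and their sales are summed. The tiny corpus, the similarity threshold of 0.6, the seed term, and the per-product sales figures are illustrative assumptions, not the paper's tuned settings (vector size 300, window 15, KSIC index words, Statistics Korea microdata).

```python
# A minimal sketch of Word2Vec-based product grouping and market size summation.
# All product names, sales figures, and thresholds are illustrative.
from gensim.models import Word2Vec

products = {
    "wireless mouse": 120, "gaming mouse": 80, "usb keyboard": 200,
    "mechanical keyboard": 140, "portable ssd": 90,   # name -> sales (toy values)
}
sentences = [name.split() for name in products]
model = Word2Vec(sentences, vector_size=50, window=15, min_count=1, seed=0)

seed_term = "mouse"   # stands in for a KSIC index word
group = [name for name in products
         if any(model.wv.similarity(seed_term, tok) >= 0.6 for tok in name.split())]
market_size = sum(products[name] for name in group)
print(group, market_size)
```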

The Influence of Ventilation and Shade on the Mean Radiant Temperature of Summer Outdoor (통풍과 차양이 하절기 옥외공간의 평균복사온도에 미치는 영향)

  • Lee, Chun-Seok;Ryu, Nam-Hyung
    • Journal of the Korean Institute of Landscape Architecture / v.40 no.5 / pp.100-108 / 2012
  • The purpose of the study was to evaluate the influence of shading and ventilation on the Mean Radiant Temperature (MRT) of outdoor space in summer. The Wind Speed (WS), Air Temperature (AT), and Globe Temperature (GT) were recorded every minute from 1 May to 30 September 2011 at a height of 1.2 m in four experimental plots with different shading and ventilating conditions, using a measuring system consisting of a vane-type anemometer (Barini Design's BDTH), a Resistance Temperature Detector (RTD, Pt-100), a standard black globe (Ø 150 mm), and data acquisition systems (National Instruments' LabVIEW and Compfile Techs' Moacon). To implement the four different ventilating and shading conditions, three hexahedral steel frames and one natural plot were established in an open grass field. Two of the steel frames had dimensions of 3 m (W) × 3 m (L) × 1.5 m (H), with every vertical side covered with transparent polyethylene film to prevent lateral ventilation (Ventilation Blocking Plot: VP); on one of these, an additional shading curtain was applied on the top side (Shading and Ventilation Blocking Plot: SVP). The third frame was 1.5 m (W) × 1.5 m (L) × 1.5 m (H), and only its top side was covered by the shading curtain, without the lateral film (Shading Plot: SP). The last plot was in natural condition, without any shading or wind blocking material (Natural Open Plot: NP). Based on the 13,262 records from 44 sunny days, the 24-hour time series differences of AT and GT were analyzed and compared, and statistical analysis was done on the 7,172 records of the daytime period from 7 A.M. to 8 P.M., while the relation between MRT, solar radiation, and wind speed was analyzed based on the records of the hottest period, from 11 A.M. to 4 P.M. The major findings were as follows. 1. The peak AT was 40.8°C at VP and 35.6°C at SP, a difference of about 5°C, but the difference in average AT was very small, within ±1°C. 2. The difference in peak GT was 12°C, with 52.5°C at VP and 40.6°C at SP, while the gap in average GT between the two plots was 6°C. Comparing all four plots including NP and SVP, shading decreased GT by about 6°C while wind blocking increased it by about 3°C. 3. According to the calculated MRT, shading had a cooling effect, reducing MRT by a maximum of 13°C and by 9°C on average, while wind blocking had a heating effect, increasing MRT by 3°C on average. In other words, the MRT of a shaded area with natural ventilation could be up to about 16°C cooler than that of a wind-blocked sunny site. 4. The regression and correlation tests showed that shading is more important than ventilation in reducing MRT, while both play an important role in improving outdoor thermal comfort. In summary, the results of this study showed that shade is the most important factor and ventilation the second in improving outdoor thermal comfort during summer daylight hours. Therefore, the more shade provided by forests, shading trees, etc., the more effectively the microclimate of an outdoor space can be conditioned, reducing heat energy that is useless or even harmful for human activities. Furthermore, carefully designed wind corridors or outdoor ventilation systems can improve the thermal environment even at the urban scale.
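
The abstract does not state which conversion was used to derive MRT from the globe, air, and wind measurements; a commonly used form for a standard black globe under forced convection (e.g., as given in ISO 7726) is shown below for reference, where $T_g$ is globe temperature (°C), $T_a$ air temperature (°C), $v_a$ air speed (m/s), $D$ the globe diameter (0.15 m here), and $\varepsilon$ the globe emissivity (typically about 0.95).

$$\mathrm{MRT} = \left[\,(T_g + 273)^4 + \frac{1.1\times10^{8}\, v_a^{0.6}}{\varepsilon\, D^{0.4}}\,(T_g - T_a)\right]^{1/4} - 273$$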