• Title/Summary/Keyword: individual systems (개별 시스템)


Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Recently, increasing demand for big data analysis has driven the vigorous development of related technologies and tools. In addition, the development of IT and the growing penetration of smart devices are producing large amounts of data. As a result, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis continue to increase. This means that big data analysis will become more important across various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each analysis requester. However, growing interest in big data analysis has stimulated computer programming education and the development of many data analysis programs. Accordingly, the entry barriers to big data analysis are gradually lowering and data analysis technology is spreading; as a result, big data analysis is increasingly expected to be performed by the requesters themselves. Along with this, interest in various types of unstructured data is continually increasing, with much attention focused on text data in particular. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques used for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large set of documents, identifies the documents that correspond to each issue, and provides the identified documents as a cluster. It is evaluated as very useful in that it reflects the semantic elements of documents. Traditional topic modeling is based on the distribution of key terms across the entire document collection; thus, the entire collection must be analyzed at once to identify the topic of each document. This requirement leads to long analysis times when topic modeling is applied to many documents. In addition, it has a scalability problem: processing time increases exponentially with the number of analysis objects. This problem is particularly noticeable when documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling, i.e., dividing a large number of documents into sub-units and deriving topics by repeating topic modeling on each unit. This method enables topic modeling on a large number of documents with limited system resources and can improve processing speed. It can also significantly reduce analysis time and cost, since documents can be analyzed in each location without first being combined. However, despite these advantages, the method has two major problems. First, the relationship between local topics derived from each unit and global topics derived from the entire collection is unclear: local topics can be identified for each document, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology must be established; that is, assuming the global topics are the ideal answer, the deviation of the local topics from the global topics needs to be measured. Because of these difficulties, this approach has not been studied sufficiently compared with other topic modeling research. In this paper, we propose a topic modeling approach that solves the above two problems. First, we divide the entire document cluster (global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by detecting whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirm that the proposed methodology provides results similar to those of topic modeling on the entire collection, and we propose a reasonable method for comparing the results of both approaches.
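The divide-and-conquer procedure this abstract describes can be sketched in a few lines. Below is a minimal illustration, assuming scikit-learn's LDA as the topic model and cosine similarity between topic-word vectors as the local-to-global mapping rule; the function names, split strategy, and parameter values are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch: local topic modeling on sub-clusters, a reduced global set
# (RGS) built from delegate documents, and a similarity-based topic mapping.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

def local_then_global_topics(docs, n_local_sets=4, n_topics=5, delegates_per_topic=10):
    # Shared vocabulary so local and global topic-word vectors are comparable.
    vectorizer = CountVectorizer(max_features=5000, stop_words="english")
    X = vectorizer.fit_transform(docs)

    local_sets = np.array_split(np.arange(len(docs)), n_local_sets)
    local_models, delegate_idx = [], []
    for idx in local_sets:
        lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
        doc_topic = lda.fit_transform(X[idx])
        local_models.append(lda)
        # Delegate documents: the most representative docs of each local topic.
        for t in range(n_topics):
            top = idx[np.argsort(doc_topic[:, t])[::-1][:delegates_per_topic]]
            delegate_idx.extend(top)

    # Reduced global set (RGS): topic modeling only on the delegates.
    rgs = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    rgs.fit(X[sorted(set(delegate_idx))])

    # Map each local topic to its closest RGS topic via topic-word similarity.
    mapping = {}
    for s, lda in enumerate(local_models):
        sim = cosine_similarity(lda.components_, rgs.components_)
        mapping[s] = sim.argmax(axis=1)  # local topic t -> RGS topic mapping[s][t]
    return rgs, mapping
```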

Quality Assurance for Intensity Modulated Radiation Therapy (세기조절방사선치료(Intensity Modulated Radiation Therapy; IMRT)의 정도보증(Quality Assurance))

  • Cho Byung Chul; Park Suk Won; Oh Do Hoon; Bae Hoonsik
    • Radiation Oncology Journal / v.19 no.3 / pp.275-286 / 2001
  • Purpose: To set up quality assurance (QA) procedures for implementing intensity modulated radiation therapy (IMRT) clinically, and to report the QA procedures performed for one patient with prostate cancer. Materials and methods: $P^3$IMRT (ADAC) and a linear accelerator (Siemens) with a multileaf collimator (MLC) were used to implement IMRT. First, the positional accuracy and reproducibility of the MLC and the leaf transmission factor were evaluated. RTP commissioning was performed again to account for small-field effects. After RTP recommissioning, a test plan for a C-shaped PTV was made using 9 intensity modulated beams, and the calculated isocenter dose was compared with the dose measured in a solid water phantom. As patient-specific IMRT QA, one patient with prostate cancer was planned using 6 beams with a total of 74 segmented fields. The same beams were used to recalculate dose in a solid water phantom. The doses of these beams were measured with a 0.015 cc micro-ionization chamber, a diode detector, films, and an array detector, and compared with the calculated doses. Results: The positioning accuracy of the MLC was about 1 mm, and the reproducibility was around 0.5 mm. For the leaf transmission factor for 10 MV photon beams, interleaf leakage was measured as $1.9\%$ and midleaf leakage as $0.9\%$ relative to a $10\times10\;cm^2$ open field. Penumbra measurements with film, a diode detector, a micro-ionization chamber, and a conventional 0.125 cc chamber showed that the $80\~20\%$ penumbra width measured with the 0.125 cc chamber was 2 mm larger than that of film, which means a 0.125 cc ionization chamber is unacceptable for measuring small fields such as a 0.5 cm beamlet. After RTP recommissioning, the discrepancy between the measured and calculated dose profiles for a small field of $1\times1\;cm^2$ was less than $2\%$. The isocenter dose of the C-shaped PTV test plan, measured twice with the micro-ionization chamber in a solid phantom, showed errors of up to $12\%$ for individual beams, but the total delivered dose agreed with the calculation within $2\%$. The transverse dose distribution measured with EC-L film generally agreed with the calculated one. The isocenter dose for the patient measured in the solid phantom agreed within $1.5\%$. On-axis dose profiles of each individual beam at the position of the central leaf, measured with film and the array detector, showed that in the out-of-field region the calculation underestimated the dose by about $2\%$, while inside the field the measured dose agreed within $3\%$, except at some positions. Conclusion: Tighter quality control of the MLC is necessary for IMRT relative to conventional large-field treatment, and QA procedures that check intensity patterns more efficiently need to be developed. In conclusion, we set up appropriate QA procedures for IMRT through a series of verifications, including measurement of the absolute dose at the isocenter with a micro-ionization chamber, film dosimetry to verify the intensity pattern, and measurement with an array detector to compare off-axis dose profiles.
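The per-beam versus total-dose behavior reported in the results (individual-beam errors up to 12%, total isocenter dose within 2%) can be illustrated with a small calculation. The numbers below are hypothetical, chosen only to show how large per-beam discrepancies can partly cancel in the summed dose.

```python
# Hypothetical per-beam measured vs. calculated isocenter doses (cGy).
measured   = [10.2, 11.8, 9.1, 10.5, 12.3, 9.7]
calculated = [ 9.2, 12.5, 9.8, 10.3, 12.0, 10.1]

per_beam_err = [(m - c) / c * 100 for m, c in zip(measured, calculated)]
total_err = (sum(measured) - sum(calculated)) / sum(calculated) * 100

print([f"{e:+.1f}%" for e in per_beam_err])  # individual errors, up to ~11%
print(f"total: {total_err:+.1f}%")           # summed dose agrees within ~2%
```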


A Study on the Present Condition and Improvement of Cultural Heritage Management in Seoul - Based on the Results of Regular Surveys (2016~2018) - (서울특별시 지정문화재 관리 현황 진단 및 개선방안 연구 - 정기조사(2016~2018) 결과를 중심으로 -)

  • Cho, Hong-seok; Suh, Hyun-jung; Kim, Ye-rin; Kim, Dong-cheon
    • Korean Journal of Heritage: History & Science / v.52 no.2 / pp.80-105 / 2019
  • As disaster types have become more complex and irregular, and as numerous cultural properties have been destroyed or damaged by various natural and human factors, the need for proactive preservation and management of cultural heritage has grown. In consideration of these circumstances, the Cultural Heritage Administration enacted legislation in December 2005 requiring regular surveys for the systematic preservation and management of cultural assets. Through a recent revision of this Act, the survey cycle was reduced from five years to three, and the scope of regular inspections was expanded to cover registered cultural properties. Under the ordinance, periodic surveys of city- or province-designated heritage are to be carried out mainly by metropolitan and provincial governments. The Seoul Metropolitan Government prepared a legal basis for regular surveys under its 2008 Cultural Properties Protection Ordinance and, recognizing the importance of preventive management given the large number of cultural assets located in the city center and the high demand for visits, conducted regular surveys of all city-designated cultural assets from 2016 to 2018. With the first survey completed, it became necessary to review the policy effectiveness of the system and to comprehensively examine the survey results in order to improve the management of cultural assets. The present study therefore examined the comprehensive management status of the cultural assets designated by the Seoul Metropolitan Government over three years (2016-2018), assessing achievements and identifying limitations; it also sought ways to improve the system and proposed a database establishment plan for an integrated management system under the auspices of the Seoul Metropolitan Government. Specifically, survey forms were administered under the Guidelines for the Operation of Periodic Surveys of Nationally Designated Cultural Assets; however, the types of survey forms were reclassified and further subdivided in consideration of the characteristics of the designated cultural assets, and manuals were developed so that the scope and manner of the survey could be recorded consistently and specifically. The analysis confirmed that 401 (77.0%) of 521 cases were generally well preserved, while 102 cases (19.6%) required special measures such as attention, precision diagnosis, or repair. Meanwhile, 18 cases (3.4%) could not be surveyed at this time, for reasons such as unknown location or closure to the public. Regarding specific types of cultural assets, among the 171 real estate properties, 63 cases (36.8%) showed structural damage caused by the failure or loss of members, and 73 cases (42.7%) showed surface damage resulting from biological factors. Almost all plants, geological features, and scenic spots were well preserved. Among the 350 movable cultural assets, 25 cases (7.1%) were found to have changed location, and structural and surface damage was found according to material properties, excluding ceramics. In particular, paper, textile, and leather objects, whose materials are vulnerable to damage, showed greater damage than objects of other materials because they were owned and managed by individuals and temples; it was thus confirmed that more proactive management is needed. Accordingly, an action plan for comprehensive preservation and management should be developed according to management status and urgency, and priority projects and focus management targets should be selected and managed first. In particular, for movable cultural assets, there have been cases in which new locations went unreported after changes in ownership or management; a new system is therefore required to strengthen the obligation to report such changes. Based on this diagnosis of current conditions and the proposed improvements, it is expected that a proactive and efficient cultural asset management system can be realized through the establishment of an effective mid- to long-term database for the integrated management system pursued by the Seoul Metropolitan Government.

Methodology for Identifying Issues of User Reviews from the Perspective of Evaluation Criteria: Focus on a Hotel Information Site (사용자 리뷰의 평가기준 별 이슈 식별 방법론: 호텔 리뷰 사이트를 중심으로)

  • Byun, Sungho; Lee, Donghoon; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.22 no.3 / pp.23-43 / 2016
  • As a result of the growth of Internet data and the rapid development of Internet technology, "big data" analysis has gained prominence as a major approach for evaluating and mining enormous amounts of data for various purposes. In recent years especially, people tend to share their experiences of leisure activities and to consult others' reviews of those activities. By referring to others' experiences, they can gather information that may lead to better leisure activities in the future. This phenomenon appears across many kinds of leisure activity, such as movies, travel, accommodation, and dining. Apart from blogs and social networking sites, many other websites provide a wealth of information related to leisure activities. Most of these websites present information on each product in various formats depending on their purposes and perspectives. Generally, they provide the average ratings and detailed reviews of users who actually used the products or services, and these ratings and reviews can support potential customers' purchase decisions. However, existing websites offering information on leisure activities provide ratings and reviews based on only a single level of evaluation criteria. Therefore, to identify the main issue for each evaluation criterion, as well as the characteristics of the specific elements comprising it, users have to read a large number of reviews. In particular, because most users search for the characteristics of detailed elements for one or more specific evaluation criteria according to their priorities, they must spend a great deal of time and effort reading and understanding many reviews to obtain the desired information. Although some websites break down the evaluation criteria and direct users to enter their reviews according to different levels of criteria, the excessive number of input sections makes the whole process inconvenient. Further, problems arise if a user does not follow the instructions for the input sections or fills in the wrong ones. Finally, treating a breakdown of the evaluation criteria as a realistic alternative is difficult, because identifying all the detailed criteria for each evaluation criterion is a challenging task. For example, when reviewing a hotel, people tend to write only single-level reviews of components such as accessibility, rooms, services, or food. These may answer the most frequently asked questions, such as the distance to the nearest subway station or the condition of the bathroom, but they still lack detailed information. Moreover, even if a breakdown of the evaluation criteria were provided along with various input sections, a user might fill in only the criterion for accessibility, or enter the wrong information, such as room-related content under accessibility, greatly reducing the reliability of the segmented review. In this study, we propose an approach to overcome two limitations of existing leisure activity information websites: (1) the reliability of reviews for each evaluation criterion and (2) the difficulty of identifying the detailed contents that make up each criterion. In our proposed methodology, we first identify the review content and construct a lexicon for each evaluation criterion using the terms frequently associated with that criterion. Next, the sentences in the review documents containing terms from the constructed lexicon are decomposed into review units, which are then reorganized by evaluation criterion. Finally, the issues of the review units for each evaluation criterion are derived and summary results are provided, along with the review units themselves. This approach aims to save users time and effort, because they read only the information relevant to each evaluation criterion rather than the entire review text. Our methodology is based on topic modeling, which is actively used in text analysis. Each review is decomposed into sentence units rather than treated as a single document; after decomposition, the review units are reorganized by evaluation criterion and used in the subsequent analysis. This largely differentiates our work from existing topic modeling-based studies. In this paper, we collected 423 reviews from hotel information websites and decomposed them into 4,860 review units, which we reorganized according to six evaluation criteria. By applying these review units in our methodology, we present the analysis results and demonstrate the utility of the proposed methodology.
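The decomposition-and-reorganization step described above can be sketched as follows, assuming simple keyword lexicons and a largest-overlap assignment rule; both are illustrative stand-ins for the lexicon construction the paper describes.

```python
# Minimal sketch: split reviews into sentence-level review units and
# reorganize them by the evaluation criterion whose lexicon terms they contain.
import re
from collections import defaultdict

LEXICONS = {  # hypothetical per-criterion term lists
    "accessibility": {"subway", "station", "distance", "airport", "bus"},
    "rooms":         {"room", "bed", "bathroom", "clean", "view"},
    "service":       {"staff", "check-in", "friendly", "reception"},
    "food":          {"breakfast", "restaurant", "buffet", "coffee"},
}

def reorganize_by_criterion(reviews):
    units = defaultdict(list)  # criterion -> list of review units (sentences)
    for review in reviews:
        for sentence in re.split(r"(?<=[.!?])\s+", review):
            words = set(re.findall(r"[a-z\-]+", sentence.lower()))
            # Assign the unit to the criterion with the largest lexicon overlap.
            best = max(LEXICONS, key=lambda c: len(words & LEXICONS[c]))
            if words & LEXICONS[best]:
                units[best].append(sentence)
    return units

units = reorganize_by_criterion([
    "The staff were friendly. The room had a great view but the bathroom was small.",
])
print(units["rooms"])  # sentence units reorganized under the 'rooms' criterion
```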

Stock Price Prediction by Utilizing Category Neutral Terms: Text Mining Approach (카테고리 중립 단어 활용을 통한 주가 예측 방안: 텍스트 마이닝 활용)

  • Lee, Minsik; Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.123-138 / 2017
  • Since the stock market is driven by traders' expectations, studies have been conducted to predict stock price movements through the analysis of various sources of text data. Research has examined not only the relationship between text data and stock price fluctuations, but also stock trading based on news articles and social media responses. Studies predicting stock price movements have applied classification algorithms to term-document matrices constructed in the same way as in other text mining approaches. Because documents contain many words, it is better to select the words that contribute most when building a term-document matrix. Words with very low frequency or importance are removed, and words are selected according to the degree to which they contribute to correctly classifying a document. The conventional idea has been to collect all the documents to be analyzed and to select and use the words that influence classification. In this study, we analyze the documents for each individual stock and select the words that are irrelevant to all categories as neutral words. We then extract the words surrounding each selected neutral word and use them to generate the term-document matrix. The underlying idea is that the presence of a neutral word itself has little relation to stock movements, while the words surrounding it are more likely to affect stock price movements. We apply the generated term-document matrix to an algorithm that classifies stock price fluctuations. We first removed stop words and selected neutral words for each stock, excluding words that also appear in news articles about other stocks. Through an online news portal, we collected four months of news articles on the top 10 stocks by market capitalization. We used three months of news as training data and applied the remaining month to the model to predict the next day's stock price movements. We used SVM, boosting, and random forest models to predict the movements of stock prices. The stock market was open for a total of 80 days during the four months (2016/02/01-2016/05/31); the first 60 days were used as the training set and the remaining 20 days as the test set. The proposed word-based algorithm showed better classification performance than the sparsity-based word selection method. This study predicted stock price fluctuations by collecting and analyzing news articles on the top 10 stocks by market capitalization. We used a term-document matrix-based classification model to estimate stock price fluctuations and compared the performance of the existing sparsity-based word extraction method with the suggested method of removing words from the term-document matrix. The suggested method differs from conventional word extraction in that it uses not only the news articles for the corresponding stock but also news about other stocks to determine which words to extract. In other words, it removes both the words that appear across all rises and falls and the words that commonly appear in news about other stocks. When prediction accuracy was compared, the suggested method showed higher accuracy. A limitation of this study is that stock price prediction was set up as classifying rises and falls, and the experiment covered only the top ten stocks, which do not represent the entire market. In addition, it is difficult to demonstrate investment performance, because stock price fluctuations and returns may differ. Further research using more stocks and predicting returns through trading simulation is therefore needed.
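The neutral-word idea can be sketched as below. Here, words with similar frequencies in rise and fall documents are treated as neutral, and only the words within a fixed window around them feed the term-document matrix; the window size, thresholds, and classifier are assumptions for illustration, not the paper's settings.

```python
# Minimal sketch: select "neutral" words, keep only their surrounding words,
# and build a term-document matrix from those contexts for classification.
import numpy as np
from collections import Counter
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier

def neutral_context_matrix(docs, labels, window=3, top_neutral=50):
    # 1) Score each word by how evenly it appears in up vs. down documents.
    up = Counter(w for d, y in zip(docs, labels) if y == 1 for w in d.split())
    down = Counter(w for d, y in zip(docs, labels) if y == 0 for w in d.split())
    vocab = set(up) & set(down)
    balance = {w: abs(up[w] - down[w]) / (up[w] + down[w]) for w in vocab}
    neutral = {w for w, _ in sorted(balance.items(), key=lambda x: x[1])[:top_neutral]}

    # 2) Keep only words within +/-window tokens of a neutral word.
    contexts = []
    for d in docs:
        toks = d.split()
        keep = set()
        for i, w in enumerate(toks):
            if w in neutral:
                keep.update(range(max(0, i - window), min(len(toks), i + window + 1)))
        contexts.append(" ".join(toks[i] for i in sorted(keep) if toks[i] not in neutral))

    vec = CountVectorizer()
    return vec.fit_transform(contexts), vec

# Usage (train_docs/train_labels assumed prepared elsewhere):
# X, vec = neutral_context_matrix(train_docs, train_labels)
# RandomForestClassifier().fit(X, train_labels)  # classify next-day up/down
```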

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul; Kim, Jaeseong; Choi, Sangok
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.105-129 / 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial ratio indices. Unlike most prior studies, which used default events as the basis for learning about default risk, this study calculated default risk using each company's market capitalization and stock price volatility based on the Merton model. This resolves the data imbalance problem caused by the scarcity of default events, which has been pointed out as a limitation of the existing methodology, as well as the problem of reflecting the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, the default risk of unlisted companies without stock price information can also be derived appropriately. This enables stable default risk assessment services for companies whose risk is difficult to determine with traditional credit rating models, such as small and medium-sized companies and startups. Although machine learning-based prediction of corporate default risk has been actively studied recently, model bias issues exist because most studies make predictions with a single model. A stable and reliable valuation methodology is required for calculating default risk, given that default risk information is very widely used in the market and sensitivity to differences in default risk is high; strict standards are also required for calculation methods. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and changes in future market conditions. This study reduced individual models' bias by utilizing stacking ensemble techniques that synthesize various machine learning models. This makes it possible to capture complex nonlinear relationships between default risk and various corporate information, and to maximize the advantages of machine learning-based default risk prediction models, which require little computation time. To produce the sub-model forecasts used as input to the stacking ensemble model, the training data were divided into seven pieces and the sub-models were trained on the divided sets. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which performed best among the single models. Next, to check for statistically significant differences between the stacking ensemble model and each individual model, pairs of forecasts were constructed between the stacking ensemble model and each individual model. Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank-sum test was used to check whether the two forecasts in each pair differed significantly. The analysis showed that the forecasts of the stacking ensemble model differed significantly from those of the MLP and CNN models. In addition, this study provides a methodology by which existing credit rating agencies can adopt machine learning-based default risk prediction, given that traditional credit rating models can also be included as sub-models in calculating the final default probability. The stacking ensemble techniques proposed here can also help meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used to increase practical adoption by overcoming and improving the limitations of existing machine learning-based models.
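The out-of-fold stacking step described above can be sketched as follows; the choice of sub-models and the linear meta-learner are illustrative (the paper's own sub-models include Random Forest, MLP, and CNN), while the seven-fold split mirrors the division of the training data into seven pieces.

```python
# Minimal sketch: out-of-fold sub-model forecasts become the input features
# of a meta-model (stacking). X_train, y_train, X_test are numpy arrays.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression

def stacking_fit_predict(X_train, y_train, X_test, n_splits=7):
    sub_models = [RandomForestRegressor(random_state=0), MLPRegressor(max_iter=500)]
    oof = np.zeros((len(X_train), len(sub_models)))   # out-of-fold forecasts
    test_meta = np.zeros((len(X_test), len(sub_models)))

    for j, model in enumerate(sub_models):
        for tr_idx, val_idx in KFold(n_splits=n_splits).split(X_train):
            model.fit(X_train[tr_idx], y_train[tr_idx])
            oof[val_idx, j] = model.predict(X_train[val_idx])
        model.fit(X_train, y_train)                   # refit on all training data
        test_meta[:, j] = model.predict(X_test)

    meta = LinearRegression().fit(oof, y_train)       # meta-model on forecasts
    return meta.predict(test_meta)
```

Regressors are used here because the Merton-based target is a continuous risk measure rather than a binary default event.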

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung; Joo, Jihwan; Han, Ingoo
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.83-102 / 2021
  • The government recently announced various policies for developing the big data and artificial intelligence fields, providing a great opportunity for the public through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea and is strongly committed to backing export companies with various systems. Nevertheless, there are still few realized business models based on big data analysis. In this situation, this paper aims to develop a new business model for the ex-ante prediction of the likelihood of credit guarantee insurance accidents. We utilize internal data from KSURE, which supports export companies in Korea, apply machine learning models, and compare the performance of Logistic Regression, Random Forest, XGBoost, LightGBM, and DNN (Deep Neural Network). For decades, researchers have tried to find better bankruptcy prediction models, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. The prediction of financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), based on multiple discriminant analysis, which remains widely used in both research and practice; it uses five key financial ratios to predict the probability of bankruptcy in the next two years. Ohlson (1980) introduced a logit model to complement some limitations of previous models, and Elmer and Borowski (1988) developed and examined a rule-based, automated system for the financial analysis of savings and loans. Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy. Kim (1987) analyzed financial ratios and developed a prediction model; Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques, including artificial neural networks; Yang (1996) applied multiple discriminant analysis and logit models; and Kim and Kim (2001) used artificial neural networks for the ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with models such as Random Forest and SVM. One major distinction of our research from previous work is that we examine the predicted probability of default for each sample case, rather than only the classification accuracy of each model over the entire sample. Most predictive models in this paper show a classification accuracy of about 70% on the entire sample; specifically, the LightGBM model shows the highest accuracy of 71.1%, and the logit model the lowest at 69%. However, these results are open to multiple interpretations. In a business context, more emphasis must be placed on minimizing type 2 errors, which cause more harmful operating losses for the guaranty company. Thus, we also compare classification accuracy by splitting the predicted probability of default into ten equal intervals. Examining the classification accuracy for each interval, the logit model has the highest accuracy of 100% for the 0-10% interval of predicted default probability, but a relatively low accuracy of 61.5% for the 90-100% interval. On the other hand, Random Forest, XGBoost, LightGBM, and DNN show more desirable results: higher accuracy for both the 0-10% and 90-100% intervals, but lower accuracy around the 50% interval. Regarding the distribution of samples across the predicted probability intervals, both LightGBM and XGBoost place a relatively large number of samples in the 0-10% and 90-100% intervals. Although the Random Forest model has an advantage in classification accuracy for a small number of cases, LightGBM or XGBoost could be more desirable, since they classify a large number of cases into the two extreme intervals of predicted default probability, even allowing for their relatively low classification accuracy. Considering the importance of type 2 errors and total prediction accuracy, XGBoost and DNN show superior performance, followed by Random Forest and LightGBM, while logistic regression performs worst. However, each predictive model has a comparative advantage under different evaluation standards; for instance, the Random Forest model shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, more comprehensive ensemble models that combine multiple classification models and conduct majority voting could be constructed to maximize overall performance.
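The interval-wise evaluation described above amounts to binning the predicted default probabilities into deciles and computing accuracy per bin. A minimal sketch, with the 0.5 classification threshold assumed rather than taken from the paper:

```python
# Minimal sketch: classification accuracy and sample count per decile
# of the predicted probability of default.
import numpy as np

def accuracy_by_decile(y_true, y_prob, threshold=0.5):
    y_pred = (y_prob >= threshold).astype(int)
    bins = np.clip((y_prob * 10).astype(int), 0, 9)   # 0-10%, ..., 90-100%
    for b in range(10):
        mask = bins == b
        if mask.any():
            acc = (y_pred[mask] == y_true[mask]).mean()
            print(f"{b*10:3d}-{(b+1)*10}%: n={mask.sum():5d}, accuracy={acc:.3f}")
```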

Information types and characteristics within the Wireless Emergency Alert in COVID-19: Focusing on Wireless Emergency Alerts in Seoul (코로나 19 하에서 재난문자 내의 정보유형 및 특성: 서울특별시 재난문자를 중심으로)

  • Yoon, Sungwook; Nam, Kihwan
    • Journal of Intelligence and Information Systems / v.28 no.1 / pp.45-68 / 2022
  • The central and local governments of the Republic of Korea provided information necessary for disaster response through wireless emergency alerts (WEAs) in order to overcome the pandemic situation in which COVID-19 spread rapidly. Among all channels for delivering disaster information, the wireless emergency alert is the most efficient: since it adopts the CBS (Cell Broadcast Service) method of broadcasting directly to mobile phones, disaster information can be accessed easily without the effort of searching. In this study, the characteristics of the wireless emergency alerts sent to Seoul over thirteen months (January 2020 to January 2021) were derived through various text mining methodologies, and the types of information contained in the alerts were analyzed. In addition, their influence on people's movement behavior was examined using population mobility by age across the districts of Seoul. After classifying the key words and information included in each message, text analysis was performed so that each sent message could serve as a unit of analysis, applying a document clustering technique based on the included words. The number of WEAs sent to Seoul has grown dramatically since the spread of COVID-19: only 10 WEAs were sent in January 2020, but the number increased fivefold in March, 7.7 times that of the previous months. Since basic (regional) local governments were authorized to send wireless emergency alerts independently, sending behavior differs across local governments. Although most basic local governments sent more WEAs as the number of confirmed COVID-19 cases increased, the trend of this increase differed by region. Using a structured econometric model, the effect of the disaster information included in the alerts on population mobility was measured, divided into a baseline effect and an accumulating effect. Six types of disaster information were identified in the WEAs and analyzed through econometric modeling: date, order, online URL, symptom, location, and normative guidance. The types of information that significantly changed population mobility were found to differ by age. The mobility of people in their 60s and 70s decreased when alerts included date and order information. Since date and order information appears in alerts reporting confirmed COVID-19 cases, these results show that the mobility of older people decreased in response to messages reporting confirmed cases. Online information (URLs) decreased the mobility of people in their 20s, and symptom-related information reduced the mobility of people in their 30s. On the other hand, normative words encouraging compliance with quarantine policies did not cause significant changes in mobility for any age group. This means that only meaningful information useful for disaster response should be included in wireless emergency alerts. Repeated sending of alerts reduces the magnitude of the impact of disaster information on population mobility, indirectly showing that, under the prolonged pandemic, people grew tired of receiving repetitive WEAs with similar content and reacted less. To use WEAs effectively for quarantine and for overcoming disaster situations, it is necessary to reduce recipients' fatigue by sending alerts only when needed, and to raise awareness of WEAs.
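The baseline versus accumulating effect can be sketched with a toy regression: mobility is regressed on a dummy for whether an alert contains a given information type (baseline effect) and on that dummy interacted with the cumulative count of similar prior alerts (accumulating effect). The data and OLS form below are illustrative assumptions, not the authors' structured econometric model.

```python
# Minimal sketch: baseline effect (level shift) vs. accumulating effect
# (fatigue from repeated alerts) on population mobility, with made-up data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
has_date_info = rng.integers(0, 2, n)            # alert contains date/order info
cum_alerts = rng.integers(0, 30, n)              # cumulative similar alerts so far
# Hypothetical mobility: a negative baseline effect that fades with repetition.
mobility = 100 - 5 * has_date_info + 0.15 * has_date_info * cum_alerts \
           + rng.normal(0, 2, n)

X = sm.add_constant(np.column_stack([has_date_info, has_date_info * cum_alerts]))
model = sm.OLS(mobility, X).fit()
print(model.params)  # [constant, baseline effect, accumulating (fatigue) effect]
```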

Analysis of promising countries for export using parametric and non-parametric methods based on ERGM: Focusing on the case of information communication and home appliance industries (ERGM 기반의 모수적 및 비모수적 방법을 활용한 수출 유망국가 분석: 정보통신 및 가전 산업 사례를 중심으로)

  • Jun, Seung-pyo; Seo, Jinny; Yoo, Jae-Young
    • Journal of Intelligence and Information Systems / v.28 no.1 / pp.175-196 / 2022
  • The information and communication and home appliance industries, which have been among South Korea's main industries, are gradually losing export share as their export competitiveness weakens. This study objectively analyzed export competitiveness and suggested promising export countries in order to help South Korea's information communication and home appliance industries improve exports. Network properties, centrality, and structural holes were analyzed to evaluate export competitiveness. To select promising export countries, we proposed a new variable that accounts for the characteristics of the already established International Trade Network (ITN), that is, the Global Value Chain (GVC), in addition to existing economic factors. The conditional log-odds of individual links derived from an Exponential Random Graph Model (ERGM) of the cross-border trade network were taken as a proxy variable for export potential. Considering how the ERGM output can be linked, both a parametric and a non-parametric approach were used to recommend promising export countries. In the parametric method, a regression model was developed to predict the export value of South Korea's information and communication and home appliance industries, adding the link-specific network characteristics derived from the ERGM to existing economic factors. In the non-parametric approach, an anomaly detection algorithm based on clustering was used, and promising export countries were proposed by finding outliers that deviate from their peers. According to the results, the structural characteristic of the industry's export network was high transferability. The centrality analysis showed that South Korea's influence on exports was weak compared to its size, and the structural hole analysis showed that export efficiency was weak. According to the proposed model, the parametric analysis identified Iran, Ireland, North Macedonia, Angola, and Pakistan as promising export countries, while the non-parametric analysis identified Qatar, Luxembourg, Ireland, North Macedonia, and Pakistan; the two models differed for some countries. The results revealed that the export competitiveness of South Korea's information and communication and home appliance industries within the GVC was not high relative to export size, suggesting that exports could fall further. This study is also meaningful in that it proposed a method for finding promising export countries by considering GVC networks with other countries as a way to increase export competitiveness. From a policy point of view, the study showed that the international trade network of these industries exhibits important mutual relationships and high transferability, but may not easily expand into three-party relationships. It also confirmed that South Korea's export competitiveness and status were lower than its export size ranking. To improve the low out-degree centrality, this paper suggested increasing exports to Italy and Poland, which had significantly higher in-degree; and to improve out-closeness centrality, increasing exports to countries with particularly high in-closeness, among which Morocco, the UAE, Argentina, Russia, and Canada deserve particular attention. The study also provides practical implications for companies expecting to expand exports: such companies should pay attention to countries with relatively high potential for export expansion compared to their existing export volume. In particular, for companies exporting daily necessities, countries with large populations are presented; for companies exporting high-end or durable products, countries with high GDP, or purchasing power, but relatively low existing exports are presented. Since the process and results of this study can easily be extended to other industries, we also expect services utilizing these results to be developed in the public sector.
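The non-parametric recommendation step can be sketched as clustering-based anomaly detection, assuming the ERGM-derived link log-odds have been computed elsewhere (e.g., with R's ergm package) and appended to the economic features; the feature set, cluster count, and cut-off are illustrative.

```python
# Minimal sketch: cluster countries on economic factors plus an ERGM-derived
# link log-odds feature, then flag countries unusually far from their cluster
# centroid as candidate promising markets ("outliers that deviate from peers").
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def promising_outliers(countries, features, n_clusters=5, z_cut=2.0):
    # features: rows = countries, cols = [GDP, population, ..., ergm_log_odds]
    X = StandardScaler().fit_transform(features)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
    z = (dist - dist.mean()) / dist.std()
    return [c for c, s in zip(countries, z) if s > z_cut]
```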

The Effects on CRM Performance and Relationship Quality of Successful Elements in the Establishment of Customer Relationship Management: Focused on Marketing Approach (CRM구축과정에서 마케팅요인이 관계품질과 CRM성과에 미치는 영향)

  • Jang, Hyeong-Yu
    • Journal of Global Scholars of Marketing Science / v.18 no.4 / pp.119-155 / 2008
  • Customer relationship management (CRM) has become a sustainable competitive edge for many companies. CRM analyzes customer data in order to design and execute targeted marketing, examining customer behavior to support decisions about products and services, including within management information systems. Acquiring and retaining profitable customers is critical for companies, and how to manage customer relationships effectively has become an important issue for both academics and practitioners in recent years. However, the existing academic literature and the practical applications of CRM strategies have focused on the technical process and organizational structure of CRM implementation. This limited focus has led to numerous reports of failed CRM projects of various types, many of which are related to the absence of a marketing approach. Identifying success factors and outcomes grounded in marketing concepts before introducing a CRM project is a pre-implementation requirement. Many researchers have attempted to find the factors that contribute to CRM success, but this research has limitations in that it does not explain, from a marketing perspective, how marketing-based factors contribute to that success. Understanding how to manage relationships with crucial customers effectively from a marketing standpoint has therefore become an important topic, yet existing papers have not provided clear antecedent and outcome factors based on a marketing approach. This paper attempts to validate whether various marketing factors impact relationship quality and CRM performance from a marketing-oriented perspective. More specifically, marketing-oriented factors, namely market orientation, customer orientation, customer information orientation, and core customer orientation, can influence relationship quality (satisfaction and trust) and CRM outcomes (customer retention and customer share). Another major goal of this research is to identify the effect of relationship quality on CRM outcomes, consisting of customer retention and customer share, in order to show the strength of the relationship between the two factors. Based on a meta-analysis of previous studies, a research model was constructed, and an empirical study was undertaken to test the hypotheses with data from various companies. Multiple regression analysis and t-tests were employed to test the hypotheses. The reliability and validity of the measurements were tested using Cronbach's alpha coefficient and principal factor analysis, respectively, and seven hypotheses were tested through correlation tests and multiple regression analysis. The first key outcome is a theoretically and empirically sound set of CRM factors (market orientation, customer orientation, customer information orientation, and core customer orientation) from the marketing perspective. The magnitudes of the ${\beta}$ coefficients of the marketing-based antecedent factors were not the same. In particular, the effects of the marketing-based CRM antecedents on customer trust were significantly confirmed, excluding core customer orientation; notably, core customer orientation had no direct effect on customer trust. This means that customer trust, which is firmly formed through long-term engagement, is not directly linked to core customer orientation; the enduring management of these interactions is probably more important for the successful implementation of CRM. The second key result is that the implementation and operation of a successful CRM process based on a marketing approach have a strong positive association with both relationship quality (customer trust and customer satisfaction) and CRM performance (customer retention and customer share). The final key finding, that relationship quality has a strong positive effect on customer retention and customer share, confirms that improvements in customer satisfaction and trust improve accessibility to customers, provide more consistent service, and ensure value for money within the front office, resulting in growth of customer retention and customer share. In particular, customer satisfaction and trust, the main components of relationship quality, were found to be positively related to customer retention and customer share; interactive management of these variables plays a key role in connecting the successful antecedents of CRM with the final outcomes of customer retention and share. Based on the results, this paper suggests managerial implications for constructing and executing CRM from the marketing perspective. In general, CRM can be achieved by recognizing its antecedents and outcomes based on marketing concepts. Implementing marketing-concept-oriented CRM involves learning about customers' purchasing habits, opinions, and preferences; profiling individuals and groups to market more effectively and increase sales; and changing the way a firm operates to improve customer service and marketing. Benefiting from CRM is not just a question of investing in the right software, but of adapting CRM users to marketing concepts, including market orientation, customer orientation, and customer information orientation. No one denies that CRM is a process or methodology for developing stronger relationships composed of many technological components, but thinking about CRM primarily in technological terms is a big mistake. We can infer from this paper that the more useful way to think about and implement CRM is as a process that brings together many pieces of marketing knowledge about customers, marketing effectiveness, and market trends. Finally, the real-world setting in which we conducted our research may enable academics and practitioners to understand the antecedents and outcomes of CRM from the marketing perspective more clearly.
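The measurement-reliability and regression steps described above can be sketched on hypothetical survey data: Cronbach's alpha for scale reliability, then an OLS regression of trust on the four marketing antecedents. All data and column names below are illustrative assumptions.

```python
# Minimal sketch: Cronbach's alpha for a multi-item scale, then a multiple
# regression of trust on the four marketing-oriented antecedents.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def cronbach_alpha(items: pd.DataFrame) -> float:
    # items: one column per questionnaire item of a single scale
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(1)
# Hypothetical trust items sharing a latent factor, so the scale is coherent.
latent = rng.normal(4, 0.8, 200)
trust_items = pd.DataFrame({f"t{i}": latent + rng.normal(0, 0.4, 200)
                            for i in range(1, 4)})
print(f"Cronbach's alpha (trust scale): {cronbach_alpha(trust_items):.2f}")

antecedents = pd.DataFrame(rng.normal(4, 1, (200, 4)),
                           columns=["market_or", "customer_or",
                                    "cust_info_or", "core_cust_or"])
X = sm.add_constant(antecedents)
y = trust_items.mean(axis=1)          # trust score = mean of its items
print(sm.OLS(y, X).fit().params)      # beta coefficient per antecedent
```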
