• Title/Summary/Keyword: input factors


Interface Application of a Virtual Assistant Agent in an Immersive Virtual Environment (몰입형 가상환경에서 가상 보조 에이전트의 인터페이스 응용)

  • Giri Na;Jinmo Kim
    • Journal of the Korea Computer Graphics Society
    • /
    • v.30 no.1
    • /
    • pp.1-10
    • /
    • 2024
  • In immersive virtual environments, including mixed reality (MR) and virtual reality (VR), avatars and agents - virtual humans - are being studied and applied in various ways as factors that increase users' social presence. Recently, studies have applied generative AI as an agent to improve user learning effects or to provide a collaborative environment in immersive virtual environments. This study proposes a novel method for the interface application of a virtual assistant agent (VAA) using OpenAI's ChatGPT in immersive virtual environments including VR and MR. The proposed method consists of an information agent that responds to user queries and a control agent that controls virtual objects and environments according to user needs. We set up a development environment that integrates the Unity 3D engine, OpenAI, and packages and development tools for user participation in MR and VR. Additionally, we set up a workflow that leads from voice input to the creation of a question query and then an answer query, or a control request query and then a control script. Based on this, MR and VR experience environments were produced, and experiments to confirm the performance of the VAA measured the response time of the information agent and the accuracy of the control agent. We confirmed that the interface application of the proposed VAA can increase efficiency in simple and repetitive tasks along with its user-friendly features. Through the proposed VAA, we present a novel direction for interface applications in immersive virtual environments and discuss the problems and limitations discovered so far.
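The two-agent workflow described above (voice input routed either to a question query or to a control request) can be sketched as a minimal text router. This is an illustrative sketch only: the keyword list and function names are our assumptions, and the actual system runs inside the Unity 3D engine with the OpenAI API and voice recognition.

```python
# Hypothetical sketch of the VAA's routing step: decide whether a transcribed
# utterance should go to the information agent (Q&A) or the control agent
# (virtual-object commands). Keyword list is illustrative, not from the paper.

CONTROL_KEYWORDS = ("move", "rotate", "scale", "show", "hide", "turn")

def route_query(transcribed_text: str) -> str:
    """Return which agent should handle a transcribed voice input."""
    lowered = transcribed_text.lower()
    if any(kw in lowered for kw in CONTROL_KEYWORDS):
        return "control_agent"   # would emit a control script for the scene
    return "information_agent"   # would be forwarded as a question query

print(route_query("Rotate the model 90 degrees"))  # control_agent
print(route_query("What is mixed reality?"))       # information_agent
```

A production system would likely let the LLM itself classify intent rather than match keywords; the sketch only shows where the fork between the two agents sits in the workflow.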

Study on the 'innovation' in higher education under the national university innovation support project (대학혁신지원사업에서 '혁신'은 어디에 있는가? :부·울·경 지역 대학혁신전략을 중심으로)

  • Wongyeum Cho;Yeongyo Cho
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.3
    • /
    • pp.519-531
    • /
    • 2024
  • The purpose of this study is to analyze the aspects and characteristics of educational innovation planned and implemented at universities in Busan, Ulsan, and Gyeongnam, and to explore their limitations and tasks. For this purpose, we analyzed the contents of the innovation strategy programs in the plans of 17 universities participating in the national university innovation support project in the Busan, Ulsan, and Gyeongnam area. The university innovation strategies were first divided into input, process, infrastructure, and other factors; among these, the process factor was further divided into education, research, and industry-university cooperation to examine the aspects and characteristics of innovation. As a result of the study, the aspects of university innovation at universities in Busan, Ulsan, and Gyeongnam were analyzed in the areas of education, research, and industry-academia cooperation. The characteristics of innovation were an emphasis on convergence education, competency development, smart system foundations, the introduction of innovative teaching and learning techniques, consumer-centeredness, and regional linkage. The limitations and tasks of university innovation revealed through the research are as follows. First, a specialized university innovation project structure should be prepared in consideration of the context of local universities. Second, established strategies with high innovativeness must be implemented and sustained, and consensus among members is required for this. Third, university innovation should not mean the centralization of academics, and the role and efforts of universities as research institutions should be strengthened. Fourth, it should not be overlooked that more important than a university's visible innovation strategy is the educational innovation that reaches students directly as an educational effect.

Predicting the splitting tensile strength of manufactured-sand concrete containing stone nano-powder through advanced machine learning techniques

  • Manish Kewalramani;Hanan Samadi;Adil Hussein Mohammed;Arsalan Mahmoodzadeh;Ibrahim Albaijan;Hawkar Hashim Ibrahim;Saleh Alsulamy
    • Advances in nano research
    • /
    • v.16 no.4
    • /
    • pp.375-394
    • /
    • 2024
  • The extensive utilization of concrete has given rise to environmental concerns, specifically the depletion of river sand. To address this issue, waste deposits can provide manufactured-sand (MS) as a substitute for river sand. The objective of this study is to explore the application of machine learning techniques to facilitate the production of manufactured-sand concrete (MSC) containing stone nano-powder by estimating the splitting tensile strength (STS) from the compressive strength of cement (CSC), tensile strength of cement (TSC), curing age (CA), maximum size of the crushed stone (Dmax), stone nano-powder content (SNC), fineness modulus of sand (FMS), water-to-cement ratio (W/C), sand ratio (SR), and slump (S). To achieve this goal, a total of 310 data points, encompassing nine influential factors affecting the mechanical properties of MSC, were collected through laboratory tests. Subsequently, the gathered dataset was divided into two subsets, one for training and the other for testing, comprising 90% (280 samples) and 10% (30 samples) of the total data, respectively. Using the generated dataset, novel models were developed for evaluating the STS of MSC in relation to the nine input features. The analysis results revealed significant correlations of the CSC and the curing age (CA) with STS. Moreover, sensitivity analysis using an empirical model shows that parameters such as the FMS and the W/C exert minimal influence on the STS. We employed various loss functions to gauge the effectiveness and precision of our methodologies. The outcomes of our models exhibited commendable accuracy and reliability, with all models displaying an R-squared value surpassing 0.75 and loss function values approaching insignificance. To further refine the estimation of STS for engineering endeavors, we also developed a user-friendly graphical interface for our machine learning models.
These proposed models present a practical alternative to laborious, expensive, and complex laboratory techniques, thereby simplifying the production of mortar specimens.
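The evaluation setup above (nine input features, a 90/10 train/test split over 310 samples, and an R-squared score) can be sketched with a simple linear baseline on synthetic data. This is not the authors' code or data: the study uses advanced ML models and laboratory measurements, while the sketch below merely mirrors the data split and the metric.

```python
import numpy as np

# Illustrative sketch of the paper's evaluation setup with a linear baseline.
# The 310 samples and nine features mirror the abstract; the data are synthetic.

rng = np.random.default_rng(0)
n_features = 9                                # CSC, TSC, CA, Dmax, SNC, FMS, W/C, SR, S
X = rng.random((310, n_features))             # stand-in for the 310 lab samples
true_w = rng.normal(size=n_features)
y = X @ true_w + 0.05 * rng.normal(size=310)  # synthetic STS target

# 90/10 split, matching the paper's train/test proportions (280 / 30)
X_tr, X_te, y_tr, y_te = X[:280], X[280:], y[:280], y[280:]

# Ordinary least squares fit with an intercept column
A = np.c_[X_tr, np.ones(len(X_tr))]
w, *_ = np.linalg.lstsq(A, y_tr, rcond=None)

# Score on the held-out 10% with the coefficient of determination R^2
pred = np.c_[X_te, np.ones(len(X_te))] @ w
ss_res = np.sum((y_te - pred) ** 2)
ss_tot = np.sum((y_te - y_te.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(f"test R^2 = {r2:.3f}")
```

Swapping the OLS fit for a gradient-boosted or neural model would reuse the same split-and-score skeleton, which is the part the abstract actually specifies.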

Development of a Program for Calculating Typhoon Wind Speed and Data Visualization Based on Satellite RGB Images for Secondary-School Textbooks (인공위성 RGB 영상 기반 중등학교 교과서 태풍 풍속 산출 및 데이터 시각화 프로그램 개발)

  • Chae-Young Lim;Kyung-Ae Park
    • Journal of the Korean earth science society
    • /
    • v.45 no.3
    • /
    • pp.173-191
    • /
    • 2024
  • Typhoons are significant meteorological phenomena that cause interactions among the ocean, atmosphere, and land within Earth's system. In particular, wind speed, a key characteristic of typhoons, is influenced by various factors such as central pressure, trajectory, and sea surface temperature. Therefore, a comprehensive understanding based on actual observational data is essential. In the 2015 revised secondary school textbooks, typhoon wind speed is presented through text and illustrations; hence, exploratory activities that promote a deeper understanding of wind speed are necessary. In this study, we developed a data visualization program with a graphical user interface (GUI) to facilitate the understanding of typhoon wind speeds with simple operations during the teaching-learning process. The program utilizes red-green-blue (RGB) image data of Typhoons Mawar, Guchol, and Bolaven (which occurred in 2023) from the Korean geostationary satellite GEO-KOMPSAT-2A (GK-2A) as the input data. The program is designed to calculate typhoon wind speeds by inputting cloud movement coordinates around the typhoon and visualizes the wind speed distribution by inputting parameters such as central pressure, storm radius, and maximum wind speed. The GUI-based program developed in this study can be applied to typhoons observed by GK-2A without errors and enables scientific exploration based on actual observations beyond the limitations of textbooks. This allows students and teachers to collect, process, analyze, and visualize real observational data without needing a paid program or professional coding knowledge. This approach is expected to foster digital literacy, an essential competency for the future.
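The core calculation described above, deriving wind speed from cloud movement coordinates in successive satellite images, reduces to dividing a great-circle displacement by the time between images. The sketch below is a simplified stand-alone version under our own assumptions (lat/lon inputs and a 10-minute interval); the actual program works on GK-2A RGB imagery inside a GUI.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon points (Earth radius 6371 km)."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def cloud_wind_speed_ms(p_then, p_now, minutes):
    """Wind speed (m/s) from the displacement of a tracked cloud feature
    between two satellite images `minutes` apart."""
    dist_km = haversine_km(*p_then, *p_now)
    return dist_km * 1000.0 / (minutes * 60.0)

# A cloud element drifting about 0.1 degrees of latitude in 10 minutes:
v = cloud_wind_speed_ms((20.0, 130.0), (20.1, 130.0), 10.0)
print(f"{v:.1f} m/s")
```

This is the kind of arithmetic the GUI hides behind its "input cloud movement coordinates" step, so students can focus on interpreting the resulting wind-speed distribution.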

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types, Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed.
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM.
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. But more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained.
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of the %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the least value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As with the screenline and volume range analyses, the %RMSE is inversely related to average link volume.
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that only reflect the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable.
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes.
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.
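The SELINK adjustment step described above can be sketched in a few lines: each selected link's adjustment factor is the ratio of its ground count to its total assigned volume, and that factor scales the trips of every O-D pair assigned to the link. The trip table and counts below are invented for illustration.

```python
# Sketch of the SELINK link adjustment described in the abstract.
# Factor = actual link volume (ground count) / total assigned volume;
# it is applied to all O-D pairs whose trips use the selected link.

def selink_factor(ground_count: float, assigned_volume: float) -> float:
    return ground_count / assigned_volume

def adjust_trip_table(trip_table, link_trips, factor):
    """Scale trips for every O-D pair assigned to the selected link."""
    adjusted = dict(trip_table)
    for od in link_trips:
        adjusted[od] = trip_table[od] * factor
    return adjusted

# Invented example: three zone pairs, two of which route over the selected link
trips = {("A", "B"): 100.0, ("A", "C"): 50.0, ("B", "C"): 80.0}
on_link = [("A", "B"), ("B", "C")]
f = selink_factor(ground_count=540.0, assigned_volume=600.0)  # 0.9
print(adjust_trip_table(trips, on_link, f))
```

In the study this scaling feeds back into zonal productions and attractions, and the assignment/adjustment cycle is repeated (up to four times for the 32-link case) until the gravity model stabilizes.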

Behavioural Analysis of Password Authentication and Countermeasure to Phishing Attacks - from User Experience and HCI Perspectives (사용자의 패스워드 인증 행위 분석 및 피싱 공격시 대응방안 - 사용자 경험 및 HCI의 관점에서)

  • Ryu, Hong Ryeol;Hong, Moses;Kwon, Taekyoung
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.79-90
    • /
    • 2014
  • User authentication based on ID and password (ID/PW) has been widely used. As the Internet has become a growing part of people's lives, the number of times users input IDs/PWs has increased across a variety of services. People have learned the authentication procedure so thoroughly that they enter IDs/PWs unconsciously. This is referred to as the adaptive unconscious: a set of mental processes that takes in information and produces judgements and behaviors without our conscious awareness, within a second. Most people have signed up for various websites with a small number of IDs/PWs because they rely on their memory to manage them. Human memory decays with the passing of time, and pieces of knowledge in human memory tend to interfere with each other. For that reason, there is the potential for people to enter an invalid ID/PW. Therefore, the characteristics of ID/PW authentication mentioned above can lead to human vulnerabilities: people use a few PWs for various websites, manage IDs/PWs by relying on memory, and enter IDs/PWs unconsciously. Exploiting these human-factor vulnerabilities, a variety of information leakage attacks such as phishing and pharming have been increasing exponentially. In the past, information leakage attacks exploited vulnerabilities of hardware, operating systems, software, and so on. However, most current attacks tend to exploit vulnerabilities of human factors. Attacks based on human-factor vulnerabilities are called social-engineering attacks. Recently, malicious social-engineering techniques such as phishing and pharming have become one of the biggest security problems. Phishing is an attack that attempts to obtain valuable information such as IDs/PWs, and pharming is an attack intended to steal personal data by redirecting a website's traffic to a fraudulent copy of a legitimate website.
Screens of the fraudulent copies used for both phishing and pharming attacks are almost identical to those of legitimate websites, and pharming can even preserve the deceptive URL address. Therefore, without the support of prevention and detection techniques such as vaccines and reputation systems, it is difficult for users to determine intuitively whether a site is a phishing or pharming site or a legitimate one. Previous research on phishing and pharming attacks has mainly studied technical solutions. In this paper, we focus on human behaviour when users are confronted by phishing and pharming attacks without knowing it. We conducted an attack experiment to find out how many IDs/PWs are leaked by pharming and phishing attacks. We first configured the experimental settings under the same conditions as phishing and pharming attacks and built a phishing site for the experiment. We then recruited 64 voluntary participants and asked them to log in to our experimental site. For each participant, we conducted a questionnaire survey regarding the experiment. Through the attack experiment and survey, we observed whether their passwords were leaked when logging in to the experimental phishing site, and how many different passwords were leaked out of the total number of passwords of each participant. Consequently, we found that most participants unconsciously logged in to the site, and that ID/PW management dependent on human memory caused the leakage of multiple passwords. Users should actively utilize reputation systems, and online service providers should support prevention techniques that let users intuitively determine whether a site is a phishing site.

Evaluation of Oral Health Promotion Program Connected with Hypertension and Diabetes Management Programs: Use of a Logical Model (일부 보건소 고혈압·당뇨관리교실 연계 구강건강증진 프로그램 운영 및 평가: 논리적 모형을 이용하여)

  • Yoo, Sang-Hee;Shin, Bo-Mi;Bae, Soo-Myoung;Shin, Sun-Jung
    • Journal of dental hygiene science
    • /
    • v.16 no.4
    • /
    • pp.293-301
    • /
    • 2016
  • This study aimed to design and operate a complementary integrated health management program based on the connection between the hypertension and diabetes management programs and the oral health programs at a public health center, and to suggest phased evaluation indicators. In this study, 48 adults registered in the hypertension and diabetes management program were selected from the Gangneung public health center. The clinic-specific programs were led by dental hygienists and operated for visitors twice every two weeks. The programs were designed based on a logical model, and indicators for evaluating the structure, process, and outcome were presented and applied to the input, process, output, and outcome. The evaluation indices consisted of quantitative and qualitative indicators, and the planning and operation, goal achievement, and effect of each program were assessed. The process evaluations assessed the appropriateness of the managers and the operating fidelity of the programs. Indicators for evaluating the outcomes were gingival bleeding, oral health knowledge, oral health awareness, and the satisfaction of the participants and the managers. The clinic-specific programs resulted in positive changes in the evaluated outcomes. The integrated health management of visitors to the hypertension and diabetes management program is important because general and oral health have common risk factors. Furthermore, long-term operation and continuous monitoring of oral health programs are necessary to evaluate the common factors in chronic disease management.

The Effects of Global Entrepreneurship and Social Capital Within Supply Chain on the Export Performance (글로벌 기업가정신과 공급사슬 내 사회적 자본이 수출성과에 미치는 영향)

  • Yoon, Heon-Deok;Kwak, Ki-Young;Seo, Ri-Bin
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.7 no.3
    • /
    • pp.1-16
    • /
    • 2012
  • Under international business circumstances, global supply chain management is considered a vital strategic challenge for small and medium-sized enterprises (SMEs), which suffer from deficient resources and capabilities for exploiting overseas markets compared with large corporations. This is because SMEs can expand their business domains into overseas markets by establishing strategic alliances with global supply chain partners. Although a wide range of previous research has emphasized cooperative networks in the chain, most of it ignores the importance of developing relational characteristics such as trust and reciprocity with the partners. Moreover, studies verifying the relational factors that influence firms' export performance have proposed differing and inconsistent factors. Social capital theory, which concerns the social qualities and networks facilitating close inter-individual and inter-organizational cooperation, provides an integrated view for identifying these relational characteristics in terms of network, trust, and reciprocal norms. Meanwhile, a number of researchers have shown that global entrepreneurship is an internal and intangible resource necessary to promote SMEs' internationalization. Upon closer examination, however, they cannot clearly explain its influencing mechanism in inter-firm cooperative relationships. This study verifies the effect of social capital accumulated within the global supply chain on SMEs' qualitative and quantitative export performance. In addition, we shed new light on global entrepreneurship, which is expected to be involved in the formation of social capital and the enhancement of export performance. For this purpose, questionnaires developed through a literature review were collected from 192 Korean SMEs affiliated with the Korean Medium Industries Association and the Global Chief Executive Officer's Club, focusing on their members' international business.
As a result of multiple regression analysis, social capital - the network, trust, and reciprocal norms shared with global supply chain partners - as well as global entrepreneurship - innovativeness, proactiveness, and risk-taking - have a positive effect on SMEs' export performance. Global entrepreneurship also positively affects social capital, which has a partial mediating effect in the relationship between global entrepreneurship and performance. These results mean that there is a structural process: global entrepreneurship (input), social capital (output), and export performance (outcome). In other words, a firm should consistently invest in and develop social capital with its global supply chain partners in order to achieve common goals, establish strategic collaborations, and obtain long-term export performance. Furthermore, fostering global entrepreneurship within an organization is required to build up social capital. More detailed practical issues and discussion are presented in the conclusion.
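The partial-mediation structure reported above (entrepreneurship affecting export performance both directly and through social capital) can be sketched with two OLS regressions in the classic Baron-Kenny style. All data and coefficients below are synthetic illustrations of the mechanism, not the study's estimates; only the sample size (192) mirrors the abstract.

```python
import numpy as np

# Synthetic sketch of partial mediation: X (entrepreneurship) -> M (social
# capital) -> Y (export performance), with a direct X -> Y path as well.

rng = np.random.default_rng(1)
n = 192                                        # matches the study's sample size
X = rng.normal(size=n)                         # global entrepreneurship score
M = 0.6 * X + rng.normal(scale=0.5, size=n)    # social capital, driven by X
Y = 0.4 * X + 0.5 * M + rng.normal(scale=0.5, size=n)

def ols_coefs(y, *cols):
    """OLS coefficients for y on the given columns plus an intercept."""
    A = np.column_stack(cols + (np.ones(n),))
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    return b

b_total = ols_coefs(Y, X)[0]       # total effect of X on Y
b_direct = ols_coefs(Y, X, M)[0]   # direct effect, controlling for M
print(f"total={b_total:.2f}, direct={b_direct:.2f}")
# direct < total: part of the effect flows through social capital (mediation)
```

The study itself draws the same inference from survey-based multiple regression: the entrepreneurship coefficient shrinks once social capital enters the model, indicating partial mediation.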

Variation Analysis of Distance and Exposure Dose in Radiation Control Area and Monitoring Area according to the Thickness of Radiation Protection Tool Using the Calculation Model: Non-Destructive Test Field (계산 모델을 활용한 방사선방어용 도구 두께에 따른 방사선관리구역 및 감시구역의 거리 및 피폭선량 변화 분석 : 방사선투과검사 분야 중심으로)

  • Gwon, Da Yeong;Park, Chan-hee;Kim, Hye Jin;Kim, Yongmin
    • Journal of the Korean Society of Radiology
    • /
    • v.14 no.3
    • /
    • pp.279-287
    • /
    • 2020
  • Recently, interest in radiation protection has been increasing because of accidents involving excessive exposure doses, and the Nuclear Safety Act accordingly requires installing shields so that the dose limit is not exceeded. In particular, when workers conduct non-destructive testing (NDT) without a fixed shielding structure, access to the workplace must be monitored based on a constant dose rate. However, when applying for permits for NDT work in such environments, the factors to be considered in estimating the distance and exposure dose are not legally specified. Therefore, we developed an Excel model that automatically calculates the distance, exposure dose, and cost when the relevant factors are input, and applied assumed data to this model. As a result, the rate of change in distance was low when the thicknesses of the lead blanket and collimator were above 25 mm and 21.5 mm, respectively. However, we did not consider scattering or the build-up factor, and we assumed the shapes of the lead blanket and collimator. Therefore, if these limitations are addressed and actual data are used, we expect that a database on distance and exposure dose can be built.
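The kind of calculation such a model automates can be sketched from first principles: a point gamma source's dose rate falls off as the inverse square of distance and is reduced by shielding of thickness t by a factor exp(-mu*t), so the control-area boundary distance follows from the target dose rate. The gamma constant, activity, attenuation coefficient, and dose-rate limit below are illustrative assumptions, not values from the paper, and (as the abstract notes for its own model) scattering and build-up are ignored.

```python
import math

def dose_rate_usv_h(gamma_const, activity_gbq, dist_m, mu_per_mm=0.0, t_mm=0.0):
    """Dose rate (uSv/h) at dist_m from a point source behind a shield of
    thickness t_mm, using inverse-square falloff and exponential attenuation."""
    unshielded = gamma_const * activity_gbq / dist_m ** 2
    return unshielded * math.exp(-mu_per_mm * t_mm)

def boundary_distance_m(gamma_const, activity_gbq, limit_usv_h,
                        mu_per_mm=0.0, t_mm=0.0):
    """Distance at which the dose rate drops to the area-boundary limit."""
    return math.sqrt(gamma_const * activity_gbq
                     * math.exp(-mu_per_mm * t_mm) / limit_usv_h)

# Illustrative numbers only: gamma constant 130 uSv*m^2/(h*GBq), 1000 GBq
# source, 20 uSv/h boundary limit, and an assumed attenuation of 0.08 per mm.
d0 = boundary_distance_m(130.0, 1000.0, 20.0)               # no shielding
d1 = boundary_distance_m(130.0, 1000.0, 20.0, 0.08, 25.0)   # 25 mm lead blanket
rate = dose_rate_usv_h(130.0, 1000.0, 30.0, 0.08, 25.0)     # dose rate at 30 m
print(f"boundary: {d0:.1f} m unshielded, {d1:.1f} m shielded; {rate:.1f} uSv/h at 30 m")
```

The diminishing returns the abstract reports (little further change in distance beyond about 25 mm of lead blanket) follow from the exponential attenuation term: each added millimetre shrinks the boundary distance by a fixed ratio, which matters less once the distance is already short.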

Bankruptcy Prediction Modeling Using Qualitative Information Based on Big Data Analytics (빅데이터 기반의 정성 정보를 활용한 부도 예측 모형 구축)

  • Jo, Nam-ok;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.2
    • /
    • pp.33-56
    • /
    • 2016
  • Many researchers have focused on developing bankruptcy prediction models using modeling techniques such as statistical methods, including multiple discriminant analysis (MDA) and logit analysis, or artificial intelligence techniques, including artificial neural networks (ANN), decision trees, and support vector machines (SVM), to secure enhanced performance. Most bankruptcy prediction models in academic studies have used financial ratios as the main input variables. The bankruptcy of firms is associated with both the firm's financial state and the external economic situation. However, the inclusion of qualitative information, such as the economic atmosphere, has not been actively discussed, despite the fact that exploiting only financial ratios has some drawbacks. Accounting information, such as financial ratios, is based on past data and is usually determined one year before bankruptcy. Thus, a time lag exists between the point of closing the financial statements and the point of credit evaluation. In addition, financial ratios do not capture environmental factors such as external economic conditions. Therefore, using only financial ratios may be insufficient for constructing a bankruptcy prediction model, because they essentially reflect past corporate internal accounting information while neglecting recent information. Thus, qualitative information must be added to the conventional bankruptcy prediction model to supplement the accounting information. Due to the lack of an analytic mechanism for obtaining and processing qualitative information from various information sources, previous studies have made only limited use of qualitative information. Recently, however, big data analytics, such as text mining techniques, have been drawing much attention in academia and industry, with an increasing amount of unstructured text data available on the web. A few previous studies have sought to adopt big data analytics in business prediction modeling.
Nevertheless, the use of qualitative information from the web for business prediction modeling is still in its early stages, restricted to limited applications such as stock prediction and movie revenue prediction. Thus, it is necessary to apply big data analytics techniques, such as text mining, to various business prediction problems, including credit risk evaluation. Analytic methods are required for processing qualitative information represented in unstructured text form, due to the complexity of managing and processing unstructured text data. This study proposes a bankruptcy prediction model for Korean small- and medium-sized construction firms using both quantitative information, such as financial ratios, and qualitative information acquired from economic news articles. The performance of the proposed method depends on how well the qualitative information is transformed into quantitative information suitable for incorporation into the bankruptcy prediction model. We employ big data analytics techniques, especially text mining, as the mechanism for processing qualitative information. A sentiment index, extracted from a large amount of text data, is provided at the industry level to quantify the external economic atmosphere represented in the media. The proposed method involves keyword-based sentiment analysis using a domain-specific sentiment lexicon to extract sentiment from economic news articles. The generated sentiment lexicon is designed to represent sentiment for the construction business by considering the relationship between an occurring term and the actual economic condition of the industry, rather than the inherent semantics of the term. The experimental results show that incorporating qualitative information based on big data analytics into the traditional bankruptcy prediction model based on accounting information is effective in enhancing predictive performance.
The sentiment variable extracted from economic news articles had an impact on corporate bankruptcy. In particular, a negative sentiment variable improved the accuracy of corporate bankruptcy prediction because the corporate bankruptcy of construction firms is sensitive to poor economic conditions. The bankruptcy prediction model using qualitative information based on big data analytics contributes to the field, in that it reflects not only relatively recent information but also environmental factors, such as external economic conditions.
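The keyword-based sentiment analysis described above can be sketched as follows. The lexicon terms and weights here are hypothetical placeholders; the paper's actual lexicon is domain-specific to the Korean construction industry and is not reproduced here.

```python
# Hypothetical domain-specific sentiment lexicon; terms and weights
# are illustrative placeholders, not taken from the paper.
LEXICON = {
    "boom": 1.0, "growth": 1.0, "recovery": 0.5,
    "slump": -1.0, "default": -1.5, "unsold": -0.5,
}

def sentiment_index(articles):
    """Keyword-based sentiment index over a list of news articles.

    Scores each article by averaging the lexicon weights of matched
    tokens, then averages over articles to yield an industry-level
    index, in rough analogy to the approach described above.
    """
    scores = []
    for text in articles:
        tokens = text.lower().split()
        hits = [LEXICON[t] for t in tokens if t in LEXICON]
        scores.append(sum(hits) / len(hits) if hits else 0.0)
    return sum(scores) / len(scores) if scores else 0.0
```

An index built this way could then be appended as one more input variable alongside the financial ratios when training the bankruptcy classifier.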