• Title/Summary/Keyword: Information Service Evaluation (정보서비스 평가)


A Study on Difficulty Factors of Youth Startups for Activating Local Startups (지역창업 활성화를 위한 청년창업 애로 요인에 관한 연구)

  • Ahn, Tae-Uk;Kang, Tae-Won
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.15 no.2
    • /
    • pp.67-80
    • /
    • 2020
  • This study was conducted at a time when the Korean government continues to extend support for youth startups as part of its job-creation policy, and as attention shifts from career and employment toward youth entrepreneurship. Against this background, the study aims to identify the difficulty factors of youth startups outside the Seoul Metropolitan Area, seek ways to overcome them, and propose policy implications. To this end, five criteria and 25 sub-criteria for evaluating the difficulties of youth startups were established through a review of previous studies, a literature survey, and brainstorming. The criteria were analyzed empirically with the analytic hierarchy process (AHP), surveying youths aged 19 to 39 in the Gunsan area. The results show that, at the criteria level, the largest difficulty factors facing local youths are business model establishment, business administration and management, and startup funding. Among the sub-criteria, the largest difficulty factors are market information acquisition, technology commercialization, project feasibility, technology development, and new market pioneering, in descending order. Local youths thus struggle most with turning a business item into a product and commercializing it. A comparative analysis by gender showed that men experience relatively greater difficulty than women in commercializing business models, whereas women scored higher than men on all the other factors (business administration and management, startup funding, improvement of the startup support system, and improvement of awareness of entrepreneurship). In addition, the difficulty factors differed across groups of young people (college students, prospective entrepreneurs, and current entrepreneurs). In conclusion, the study suggests that revitalizing youth entrepreneurship in the regions requires actively resolving the difficulties of business model commercialization rather than focusing on startup funding alone, and that customized startup support and situational administrative services should be provided strategically, since difficulties differ by gender and group and general solutions do not fit all. The study presents practical priorities for the startup difficulties faced by local youth, a method for deriving them, and measures and improvements for vitalizing local youth entrepreneurship in the future.
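
The study's core instrument is the analytic hierarchy process (AHP). As a rough illustration of how AHP turns pairwise comparisons into priority weights, the sketch below applies the standard principal-eigenvector procedure to a hypothetical 5x5 comparison matrix; the matrix values are invented for illustration, not taken from the study.

```python
import numpy as np

# Hypothetical pairwise comparison matrix over five difficulty criteria
# (values are illustrative only; the study's actual judgments are not public).
criteria = ["business model", "administration/management", "startup funding",
            "startup support system", "startup awareness"]
A = np.array([
    [1,   2,   3,   5,   5],
    [1/2, 1,   2,   3,   4],
    [1/3, 1/2, 1,   2,   3],
    [1/5, 1/3, 1/2, 1,   2],
    [1/5, 1/4, 1/3, 1/2, 1],
])

# Principal eigenvector -> priority weights (standard AHP procedure).
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency: CI = (lambda_max - n) / (n - 1); Saaty's RI for n=5 is 1.12.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 1.12

for name, weight in zip(criteria, w):
    print(f"{name}: {weight:.3f}")
print(f"consistency ratio: {cr:.3f}")
```

A consistency ratio below 0.1 is conventionally required before the derived weights are trusted.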

Evaluation of the Usefulness of PACS Using a CR System in the Department of Radiation Oncology (방사선 종양학과에서 CR System을 이용한 PACS 유용성 평가)

  • Hong, Seung-Il;Kim, Young-Jae
    • Journal of the Korean Society of Radiology
    • /
    • v.6 no.2
    • /
    • pp.143-149
    • /
    • 2012
  • Today, hospitals are changing rapidly, digitizing their workflows and introducing the latest medical equipment. We introduced a new CR system to make up for the shortcomings of the film-based system, connected it smoothly to the hospital's existing PACS, EMR, and RTP networks, and thereby sought to improve treatment quality, business efficiency, and patient service. Here we report our experience integrating the images produced in the department of radiation oncology into PACS using the CR system, and we evaluate its usefulness. With a medical LINAC (MEVATRON-MX), we performed gantry and collimator star-shot tests, light versus radiation field correspondence, and HDR QA (dwell position accuracy); the resulting digital images could then be confirmed on PACS monitors. For cooperation with the OCS system already in use, the codes needed for each treatment were assigned so that orders could travel over the network and be entered into the CR system. We also resolved a data incompatibility between the Pinnacle planning system and PACS (Pinnacle exports DICOM3 files, which our PACS did not support) so that planning images can be viewed in PACS. Using the CR system, all images and data were integrated into PACS and can be consulted anywhere in the hospital. With dosimetry IPs in a filmless environment, QA procedures such as light/radiation field size correspondence, gantry rotation axis accuracy, collimator rotation axis accuracy, and brachytherapy dwell position checks are all available. By integrating the images produced in radiation oncology into PACS through the CR system, sessions were shortened and unnecessary labor was reduced, increasing business efficiency. Going forward, securing patient information will be essential.
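
The integration described above hinged on a DICOM compatibility gap between the Pinnacle planning system and the PACS. As a loose, hypothetical illustration of the kind of check a gateway between the two might perform (not the authors' implementation; the file path is a placeholder), the pydicom library can inspect and, if necessary, re-encode an image before it is forwarded:

```python
import pydicom

# Placeholder path; in practice this would come from the planning-system export.
ds = pydicom.dcmread("plan_image.dcm")

# Inspect identifiers and encoding before forwarding to PACS.
print("SOP Class UID:   ", ds.SOPClassUID)
print("Modality:        ", ds.get("Modality", "n/a"))
print("Transfer Syntax: ", ds.file_meta.TransferSyntaxUID)

# A gateway could decompress images the PACS cannot parse before resending.
if ds.file_meta.TransferSyntaxUID.is_compressed:
    ds.decompress()  # requires an installed pixel-data handler
    ds.save_as("plan_image_uncompressed.dcm")
```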

Changing Trends of Climatic Variables of Agro-Climatic Zones of Rice in South Korea (벼 작물 농업기후지대의 연대별 기후요소 변화 특성)

  • Jung, Myung-Pyo;Shim, Kyo-Moon;Kim, Yongseok;Kim, Seok-Cheol;So, Kyu-Ho
    • Journal of Climate Change Research
    • /
    • v.5 no.1
    • /
    • pp.13-19
    • /
    • 2014
  • In the past, Korea's agro-climatic zones, excluding Jeju-do, were classified into nineteen zones for rice culture using air temperature, precipitation, sunshine duration, and other variables during the rice growing period. This classification has been used for selecting safe zones for rice cultivation and for countermeasures against meteorological disasters. In this study, the climatic variables (air temperature, precipitation, and sunshine duration) of twenty agro-climatic zones, now including Jeju-do, were compared by decade (1970s, 1980s, 1990s, and 2000s). The meteorological data were obtained from the Meteorological Information Portal Service System-Disaster Prevention of the Korea Meteorological Administration. The mean temperatures of the 1970s, 1980s, 1990s, and 2000s were $12.0 \pm 0.14^{\circ}C$, $11.9 \pm 0.13^{\circ}C$, $12.2 \pm 0.14^{\circ}C$, and $12.6 \pm 0.13^{\circ}C$, respectively. The precipitation was $1,270.3 \pm 20.05$ mm, $1,343.0 \pm 26.01$ mm, $1,350.6 \pm 27.13$ mm, and $1,416.8 \pm 24.87$ mm, respectively, and the sunshine duration was $2,421.7 \pm 18.37$ hours, $2,352.4 \pm 15.01$ hours, $2,196.3 \pm 12.32$ hours, and $2,146.8 \pm 15.37$ hours, respectively. The temperature increased most markedly in the Middle-Inland zone ($+1.2^{\circ}C$) and the Eastern-Southern zone ($+1.1^{\circ}C$). Precipitation increased most in the Taebaek Highly Cold zone ($+364$ mm) and the Taebaek Moderately Cold zone ($+326$ mm). The sunshine duration decreased most in the Middle-Inland zone ($-995$ hours). Across the decades, temperature (F=2.708, df=3, p=0.046) and precipitation (F=5.037, df=3, p=0.002) increased significantly, while sunshine duration decreased significantly (F=26.181, df=3, p<0.0001). Further studies will need to reclassify the agro-climatic zones for rice and, based on the reclassified zones, investigate safe cropping seasons, rice growth and development, and cultivation management systems.
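
The decadal comparison above reduces to computing a mean and standard error per decade and a one-way ANOVA across decades. A minimal sketch with synthetic numbers (not the study's station data) shows the shape of that computation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic zone-level mean temperatures per decade (degrees C); the study
# used observations for 20 agro-climatic zones from the KMA portal.
decades = {
    "1970s": rng.normal(12.0, 0.6, 20),
    "1980s": rng.normal(11.9, 0.6, 20),
    "1990s": rng.normal(12.2, 0.6, 20),
    "2000s": rng.normal(12.6, 0.6, 20),
}

for name, x in decades.items():
    se = x.std(ddof=1) / np.sqrt(len(x))
    print(f"{name}: {x.mean():.1f} +/- {se:.2f} C")

# One-way ANOVA across decades, matching the F/df/p statistics reported above.
f, p = stats.f_oneway(*decades.values())
print(f"F = {f:.3f}, p = {p:.4f}")
```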

An Analysis of Big Video Data with Cloud Computing in Ubiquitous City (클라우드 컴퓨팅을 이용한 유시티 비디오 빅데이터 분석)

  • Lee, Hak Geon;Yun, Chang Ho;Park, Jong Won;Lee, Yong Woo
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.45-52
    • /
    • 2014
  • The Ubiquitous City (U-City) is a smart, intelligent city that satisfies the human desire to enjoy IT services with any device, anytime, anywhere. It is a future city model based on the Internet of Things (IoT). It includes many networked video cameras, which, together with sensors, supply the main input data for many U-City services. They constantly generate a huge amount of video information, real big data for the U-City. The U-City is usually required to manipulate this big data in real time, which is not at all easy. Often the accumulated video data must also be analyzed to detect an event or find a figure, which requires a great deal of computational power and usually takes a long time. Current research therefore tries to reduce the processing time of big video data, and cloud computing is a good way to address the matter. Among the many applicable cloud-computing methodologies, MapReduce is an interesting and attractive one: it has many advantages and is gaining popularity in many areas. Video cameras evolve day by day and their resolution improves sharply, leading to exponential growth in the data produced by networked cameras; the video image data produced by good-quality cameras constitute real big data. Video surveillance systems were of limited use before cloud computing, but with these methodologies they are now spreading widely in U-Cities. Because video data are unstructured, good research results on analyzing them with MapReduce are hard to find. This paper presents an analysis system for video surveillance: a cloud-computing-based video data management system that is easy to deploy, flexible, and reliable. It consists of the video manager, the video monitors, the storage for the video images, the storage client, and the streaming-IN component. The video monitor consists of the video translator and the protocol manager, and the storage contains the MapReduce analyzer; all components were designed according to the functional requirements of a video surveillance system. The streaming-IN component receives the video data from the networked cameras, delivers them to the storage client, and manages network bottlenecks to smooth the data stream. The storage client receives the video data from the streaming-IN component, stores them in the storage, and helps other components access the storage. The video monitor component transfers the video data by smooth streaming and manages the protocols: its video translator sub-component lets users manage the resolution, codec, and frame rate of the video images, and its protocol sub-component handles the Real Time Streaming Protocol (RTSP) and the Real Time Messaging Protocol (RTMP). We use the Hadoop Distributed File System (HDFS) as the cloud-computing storage: Hadoop stores the data in HDFS and provides a platform that can process the data with the simple MapReduce programming model. We suggest our own methodology for analyzing the video images with MapReduce; the workflow of the video analysis is presented and explained in detail in this paper. The performance evaluation was carried out experimentally, and the proposed system worked well; the results are presented with analysis. On our cluster system we used compressed $1920 \times 1080$ (FHD) video data with the H.264 codec and HDFS as the video storage. We measured the processing time according to the number of frames per mapper and, tracing the optimal split size of the input data and the processing time as a function of the number of nodes, we found that the system performance scales linearly.
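
The paper's MapReduce workflow is only summarized in the abstract, so the following is a rough, hypothetical sketch of the idea rather than the authors' code: mappers each process a slice of frames and emit detections, and a reducer aggregates counts per video. The pair below follows the Hadoop Streaming convention of tab-separated key/value lines on stdin/stdout; frame decoding and the detector are stubbed out.

```python
#!/usr/bin/env python3
# mapper.py -- Hadoop Streaming mapper (hypothetical sketch, not the paper's code).
# Assumed input: one line per frame chunk, "video_id<TAB>frame_range"; a real
# deployment would read binary H.264 splits via a custom InputFormat instead.
import sys

def detect_event(video_id: str, frame_range: str) -> int:
    """Stub for a per-chunk detector (e.g., event or figure detection)."""
    return 1 if hash((video_id, frame_range)) % 10 == 0 else 0

for line in sys.stdin:
    video_id, frame_range = line.rstrip("\n").split("\t")
    print(f"{video_id}\t{detect_event(video_id, frame_range)}")
```

```python
#!/usr/bin/env python3
# reducer.py -- sums detections per video; streaming delivers keys sorted.
import sys

current, total = None, 0
for line in sys.stdin:
    video_id, count = line.rstrip("\n").split("\t")
    if video_id != current:
        if current is not None:
            print(f"{current}\t{total}")
        current, total = video_id, 0
    total += int(count)
if current is not None:
    print(f"{current}\t{total}")
```

A job of this shape would be launched through the Hadoop Streaming jar with the two scripts passed as `-mapper` and `-reducer`; the paper's frames-per-mapper experiment implies a production version reading binary video splits directly from HDFS.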

Power Conscious Disk Scheduling for Multimedia Data Retrieval (저전력 환경에서 멀티미디어 자료 재생을 위한 디스크 스케줄링 기법)

  • Choi, Jung-Wan;Won, Yoo-Jip;Jung, Won-Min
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.33 no.4
    • /
    • pp.242-255
    • /
    • 2006
  • In recent years, the popularization of mobile devices such as smartphones, PDAs, and MP3 players has rapidly increased the need for power management technology, since power management is an essential factor for mobile devices. The hard disk, meanwhile, offers large capacity and high speed at a low price and can now be made small enough for mobile devices, which makes it a good fit; however, it consumes too much power to embed in mobile devices without care. Motivated by this, we propose and evaluate methods for minimizing power consumption while playing back multimedia data stored on disk in real time. The strict limitation on the power consumption of mobile devices has a big impact on the design of both hardware and software. One difference between real-time multimedia streaming data and legacy text-based data is the requirement for continuity of data supply, which forces the disk drive to remain in the active state for the entire playback duration; from a power-management point of view this is a great burden. The legacy power management function of a mobile disk drive affects multimedia playback quality negatively, because excessive I/O requests arrive while the disk is in the standby state. Therefore, in this paper we analyze the power consumption profile of the disk drive in detail and develop an algorithm that plays multimedia data effectively using less power. The algorithm calculates the number of data blocks to be read and the durations of the active and standby states, and from these it produces an optimal schedule that ensures continuous playback of the data blocks stored on the mobile disk drive. We implemented the algorithm in publicly available MPEG player software. Compared with a disk drive kept active full-time, this player saves up to 60% of power consumption, and it saves 38% compared with a disk drive controlled by the drive's native power management method.
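
The heart of the algorithm is an energy trade-off: reading larger bursts lets the disk sit longer in standby between reads, at the cost of buffer memory, while each standby-to-active transition costs a fixed amount of energy. A small sketch of that calculation follows, with all power constants invented for illustration (the paper's measured values are not reproduced in the abstract):

```python
# Burst-mode disk scheduling sketch for streaming playback (illustrative numbers).
BITRATE      = 1.5e6 / 8   # playback consumption, bytes/s (1.5 Mbps MPEG stream)
DISK_RATE    = 20e6        # sustained disk read rate, bytes/s
P_ACTIVE     = 2.3         # watts while reading
P_STANDBY    = 0.2         # watts in standby
E_SPINUP     = 5.0         # joules per standby -> active transition
BUFFER_BYTES = 4 * 2**20   # RAM available for prefetching

def avg_power(burst_bytes: float) -> float:
    """Average power over one read-burst/standby cycle."""
    t_read    = burst_bytes / DISK_RATE   # time spent reading the burst
    t_cycle   = burst_bytes / BITRATE     # time the burst lasts during playback
    t_standby = t_cycle - t_read          # remainder spent in standby
    energy = P_ACTIVE * t_read + P_STANDBY * t_standby + E_SPINUP
    return energy / t_cycle

# Larger bursts amortize the spin-up cost until the buffer limit is reached.
for mb in (0.5, 1, 2, 4):
    b = mb * 2**20
    if b <= BUFFER_BYTES:
        print(f"burst {mb:>4} MiB -> avg {avg_power(b):.2f} W")
```

With these assumed numbers, average power falls steadily as the burst grows, which is exactly the behavior the scheduling algorithm exploits within its buffer budget.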

Design and Implementation of the SSL Component based on CBD (CBD에 기반한 SSL 컴포넌트의 설계 및 구현)

  • Cho Eun-Ae;Moon Chang-Joo;Baik Doo-Kwon
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.12 no.3
    • /
    • pp.192-207
    • /
    • 2006
  • Today, the SSL protocol is used as a core part of various computing environments and security systems, but it has several problems stemming from its operational rigidity. First, the SSL protocol places a considerable burden on CPU utilization, lowering the performance of security services in encrypted transactions, because it encrypts all data transferred between a server and a client. Second, the SSL protocol can be vulnerable to cryptanalysis because a fixed algorithm and key are used. Third, it is difficult to add and use new cryptography algorithms. Finally, it is difficult for developers to learn and use the cryptography APIs (Application Program Interfaces) required by the SSL protocol. Hence, we need to address these problems and, at the same time, we need a secure and convenient way to operate the SSL protocol and handle data efficiently. In this paper, we propose an SSL component designed and implemented using the CBD (Component Based Development) concept to satisfy these requirements. The SSL component provides not only data encryption services like the SSL protocol but also convenient APIs for developers unfamiliar with security. Furthermore, the SSL component can improve productivity and reduce development cost because it can be reused, and when new algorithms are added or existing ones are changed, it remains compatible and easy to integrate. The SSL component performs the SSL protocol service at the application layer. We first derive the requirements, and then design and implement the SSL component together with the confidentiality and integrity components that support it. All of these components are implemented as EJBs, which allows efficient data handling: data can be selectively encrypted and decrypted, and the user can choose the data and the mechanism as intended, improving usability. In conclusion, our tests and evaluation show that the SSL component is more usable and efficient than the existing SSL protocol, because the growth rate of processing time for the SSL component is lower than that of the SSL protocol.
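
A distinguishing feature of the component is that data can be encrypted selectively rather than wholesale, avoiding the CPU burden of encrypting the entire stream. The original is an EJB component; the sketch below only mimics that idea in Python with the cryptography package, and the class shape and field names are illustrative assumptions, not the paper's API:

```python
from cryptography.fernet import Fernet

class SelectiveEncryptor:
    """Encrypt only caller-selected fields, leaving the rest in plaintext."""

    def __init__(self, key: bytes):
        self._fernet = Fernet(key)

    def seal(self, record: dict, sensitive: set) -> dict:
        return {k: self._fernet.encrypt(v.encode()) if k in sensitive else v
                for k, v in record.items()}

    def open(self, record: dict, sensitive: set) -> dict:
        return {k: self._fernet.decrypt(v).decode() if k in sensitive else v
                for k, v in record.items()}

# Only the card number is encrypted; the username stays readable.
key = Fernet.generate_key()
enc = SelectiveEncryptor(key)
sealed = enc.seal({"user": "alice", "card_no": "1234-5678"}, {"card_no"})
print(sealed["user"], sealed["card_no"][:16], b"...")
print(enc.open(sealed, {"card_no"}))
```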

Development of Water Footprint Inventory Using Input-Output Analysis (산업연관분석을 활용한 물발자국 인벤토리 개발)

  • Kim, Young Deuk;Lee, Sang Hyun;Ono, Yuya;Lee, Sung Hee
    • Journal of Korea Water Resources Association
    • /
    • v.46 no.4
    • /
    • pp.401-412
    • /
    • 2013
  • The water footprint of a product or service is the volume of freshwater used to produce it, measured over the life cycle or the full supply chain. Since water footprint assessment helps us understand how human activities and products relate to water scarcity and pollution, it can contribute to finding a sustainable way of using water from the consumption perspective. Introducing a water footprint (WFP) scheme requires constructing a water inventory/accounting basis for the assessment, but no database in Korea covers all industry sectors. The aim of this study is therefore to develop a national water footprint inventory for 403 industrial sectors using input-output analysis. Water use in the agricultural sector accounts for 79% of total water use, while in the industrial sector indirect water dominates in most sectors, accounting for 82%. Most crop water is consumptive, direct water, except for rice. The greatest water use in the agricultural sectors is in rice paddy, followed by aquaculture and fruit production, but the greatest water use intensity was not in rice: it was 103,263 $m^3$/million KRW for other inedible crop production, attributable to the low economic value of the product combined with the great water consumption in its cultivation. In terms of total water use intensity, the next sectors were timber tract, followed by iron ores, raw timber, aquaculture, water supply, and miscellaneous cereals such as corn and other edible crops. From a holistic view, water management that considers indirect water in the industrial sector, that is, supply chain management over the whole life cycle, is important for increasing water use efficiency, since more than 56% of total water use was indirect. The water use intensity data are expected to serve as a water inventory for estimating product water footprints when a water footprint scheme is introduced in Korea.
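
The input-output step works as follows: if A is the technical coefficient matrix and d is the vector of direct water use per unit of output, the total (direct plus indirect) water intensity is t = d(I - A)^{-1}, the direct intensities propagated through the Leontief inverse. A toy three-sector example with invented numbers (the study used a 403-sector Korean table):

```python
import numpy as np

# Toy 3-sector technical coefficient matrix A (column j = inputs per unit of
# output of sector j) and direct water intensity d (m^3 per million KRW).
# Values are invented for illustration.
A = np.array([
    [0.10, 0.04, 0.01],   # agriculture
    [0.05, 0.20, 0.10],   # manufacturing
    [0.02, 0.08, 0.15],   # services
])
d = np.array([900.0, 40.0, 10.0])

# Total intensity t = d (I - A)^{-1} captures the full supply chain.
L = np.linalg.inv(np.eye(3) - A)   # Leontief inverse
t = d @ L

for name, direct, total in zip(["agri", "manuf", "serv"], d, t):
    print(f"{name}: direct {direct:7.1f}, total {total:7.1f}, "
          f"indirect share {(total - direct) / total:5.1%}")
```

The manufacturing and service columns show the pattern the study reports: most of their water is indirect, embodied in agricultural inputs upstream.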

A Match-Making System Considering Symmetrical Preferences of Matching Partners (상호 대칭적 만족성을 고려한 온라인 데이트시스템)

  • Park, Yoon-Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.177-192
    • /
    • 2012
  • This is a study of match-making systems that consider the mutual satisfaction of the matched partners. Recently, recommendation systems have been applied to recommending people, such as new friends, employees, or dating partners. One prominent domain is match-making systems that recommend suitable dating partners to customers. A match-making system, however, differs from a product recommender system. First, a match-making system needs to satisfy the recommended partners as well as the customer, whereas a product recommender system only needs to satisfy the customer. Second, match-making systems need to include as many participants as possible in the matching pool, even unpopular customers. In other words, recommendations should not focus only on a limited number of popular people; unpopular people should also appear in someone else's matching results. In product recommender systems, it is acceptable to recommend the same popular items to many customers, since those items can easily be supplied in greater quantity. In match-making systems, however, there are only a few popular people, and they may become overburdened with too many recommendations; moreover, a successful match can cause a customer to drop out of the matching pool. Thus, match-making systems should provide recommendation services equally to all customers without favoring the popular ones. The suggested match-making system, called Mutually Beneficial Matching (MBM), considers the reciprocal satisfaction of both the customer and the matched partner, as well as the number of customers excluded from the matching. A brief outline of the MBM method is as follows. First, it collects a customer's profile information, the profile the customer prefers in a dating partner, and the weights he or she considers important when selecting dating partners. It then calculates the customer's preference score for a potential dating partner on the basis of the difference between the preferred and actual profiles; the partner's preference score for the customer is calculated in the same way. The mutual preference score is then produced from these two directed preference values using the formula proposed in this study, which reflects the symmetry of the preferences as well as their magnitudes. Finally, the MBM method recommends to a customer the top N partners with the highest mutual preference scores. A prototype of the suggested MBM system is implemented in Java and applied to an artificial dataset based on real survey results from major match-making companies in Korea. The results of the MBM method are compared with those of two conventional methods: Preference-Based Matching (PBM), which considers only the customer's preferences, and Arithmetic Mean-Based Matching (AMM), which considers the preferences of both the customer and the partner but does not reflect their symmetry in the matching results. We perform the comparisons in terms of the average preference of the matching partners, the average symmetry, and the number of people excluded from the matching results, varying the number of recommendations over 5, 10, 15, 20, and 25. The results show that in many cases the suggested MBM method produces average preferences and symmetries significantly higher than those of the PBM and AMM methods, and in every case MBM excludes fewer people than the PBM method does.
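
The abstract does not reproduce the exact mutual-preference formula, so the sketch below only illustrates the stated requirements: the combined score should grow with both directed preferences and shrink when they are asymmetric. A harmonic mean is one plausible stand-in (an assumption, not the authors' formula):

```python
def preference(customer: dict, partner: dict, weights: dict) -> float:
    """Directed preference: weighted closeness of a partner to the customer's
    ideal profile; attributes are assumed scaled to [0, 1]."""
    score = 0.0
    for attr, w in weights.items():
        diff = abs(customer["ideal"][attr] - partner["profile"][attr])
        score += w * (1.0 - min(diff, 1.0))
    return score / sum(weights.values())

def mutual_score(p_ab: float, p_ba: float) -> float:
    """Harmonic mean: rewards magnitude, penalizes asymmetry (illustrative)."""
    return 2 * p_ab * p_ba / (p_ab + p_ba) if p_ab + p_ba else 0.0

# Hypothetical profiles and weights, just to exercise the functions.
alice = {"ideal": {"age": 0.5, "height": 0.7}}
bob   = {"profile": {"age": 0.6, "height": 0.6}}
w     = {"age": 2.0, "height": 1.0}
print(preference(alice, bob, w))   # 0.9, bob's directed score for alice

# Symmetric (0.6, 0.6) beats asymmetric (0.9, 0.3) despite equal arithmetic mean.
print(mutual_score(0.6, 0.6))      # 0.600
print(mutual_score(0.9, 0.3))      # 0.450
```

The printed comparison makes the point of the method: a symmetric pair outranks an asymmetric pair with the same arithmetic mean, which is what separates MBM-style scoring from AMM.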

KB-BERT: Training and Application of Korean Pre-trained Language Model in Financial Domain (KB-BERT: 금융 특화 한국어 사전학습 언어모델과 그 응용)

  • Kim, Donggyu;Lee, Dongwook;Park, Jangwon;Oh, Sungwoo;Kwon, Sungjun;Lee, Inyong;Choi, Dongwon
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.2
    • /
    • pp.191-206
    • /
    • 2022
  • Recently, it has become a de facto approach to utilize a pre-trained language model (PLM) to achieve state-of-the-art performance on various natural language tasks (called downstream tasks) such as sentiment analysis and question answering. However, like any other machine learning method, a PLM tends to depend on the data distribution seen during the training phase and shows worse performance on unseen (out-of-distribution) domains. For this reason, there have been many efforts to develop domain-specific PLMs for fields such as the medical and legal industries. In this paper, we discuss the training of a finance-domain PLM for the Korean language and its applications. Our finance-domain PLM, KB-BERT, is trained on a carefully curated financial corpus that includes domain-specific documents such as financial reports. We provide extensive performance evaluation results on three natural language tasks: topic classification, sentiment analysis, and question answering. Compared to state-of-the-art Korean PLMs such as KoELECTRA and KLUE-RoBERTa, KB-BERT shows comparable performance on general datasets based on common corpora like Wikipedia and news articles, and it outperforms the compared models on finance-domain datasets that require finance-specific knowledge.
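
KB-BERT itself is not shown here as publicly released, but the downstream usage the abstract describes, fine-tuning a BERT-style Korean encoder for topic or sentiment classification, follows the standard Hugging Face pattern sketched below. The checkpoint name is a placeholder (KLUE-RoBERTa, one of the compared baselines, is publicly available), and the classification head is untrained until fine-tuned:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder checkpoint; a finance-domain Korean PLM would slot in here.
MODEL = "klue/roberta-base"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=3)

# Toy inference on a finance sentence ("The KOSPI rebounded sharply").
inputs = tokenizer("코스피가 급반등했다", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(-1))  # class probabilities; meaningful only after fine-tuning
```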

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.83-102
    • /
    • 2021
  • The government recently announced various policies for developing the big data and artificial intelligence fields, providing the public a great opportunity through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea and is strongly committed to backing export companies through various programs. Nevertheless, there are still few realized business models based on big data analysis. In this situation, this paper aims to develop a new business model that can be applied to the ex-ante prediction of the likelihood of credit guarantee insurance accidents. We utilize internal data from KSURE, which supports export companies in Korea, apply machine learning models, and compare the performance of Logistic Regression, Random Forest, XGBoost, LightGBM, and DNN (Deep Neural Network) predictors. For decades, researchers have tried to find better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. The prediction of financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), based on multiple discriminant analysis and still widely used in both research and practice; it uses five key financial ratios to predict the probability of bankruptcy within the next two years. Ohlson (1980) introduced the logit model to complement some limitations of previous models, and Elmer and Borowski (1988) developed and examined a rule-based, automated system for the financial analysis of savings and loans. Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy. Kim (1987) analyzed financial ratios and developed a prediction model; Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques including artificial neural networks; Yang (1996) introduced multiple discriminant analysis and the logit model; and Kim and Kim (2001) utilized artificial neural network techniques for the ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with models such as Random Forest or SVM. One major distinction of our research from previous work is that we focus on examining the predicted probability of default for each sample case, not only on the classification accuracy of each model over the entire sample. Most predictive models in this paper classify about 70% of the entire sample correctly; specifically, the LightGBM model shows the highest accuracy of 71.1% and the logit model the lowest of 69%. However, these figures are open to multiple interpretations. In a business context, more emphasis must be placed on minimizing type 2 errors, which cause more harmful operating losses for the guaranty company. Thus, we also compare classification accuracy after splitting the predicted probability of default into ten equal intervals. Examining the accuracy for each interval, the logit model has the highest accuracy, 100%, for the 0-10% interval of predicted default probability, but a relatively low accuracy of 61.5% for the 90-100% interval. Random Forest, XGBoost, LightGBM, and DNN show more desirable results: they achieve higher accuracy for both the 0-10% and 90-100% intervals, with lower accuracy around the 50% interval. As for the distribution of samples across intervals, both LightGBM and XGBoost place a relatively large number of samples in the 0-10% and 90-100% intervals. Although Random Forest has an advantage in classification accuracy on a small number of cases, LightGBM or XGBoost may be more desirable models, since they classify a large number of cases into the two extreme intervals of predicted default probability, even allowing for their relatively lower classification accuracy. Considering the importance of type 2 errors and total prediction accuracy, XGBoost and DNN show superior performance, followed by Random Forest and LightGBM, while logistic regression performs worst. Still, each predictive model has a comparative advantage under particular evaluation standards; for instance, the Random Forest model shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, we can construct more comprehensive ensemble models that contain multiple machine learning classifiers and conduct majority voting to maximize overall performance.
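
The interval analysis described above, splitting predicted default probabilities into ten equal bins and checking accuracy within each, is straightforward to reproduce. A minimal sketch on synthetic data (the KSURE data are internal; sklearn's gradient boosting stands in for LightGBM/XGBoost):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the guarantee data (the real dataset is internal).
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.7], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
prob = clf.predict_proba(X_te)[:, 1]   # predicted probability of default

# Accuracy within each 10%-wide probability interval, as in the paper's analysis.
bins = np.clip((prob * 10).astype(int), 0, 9)
for b in range(10):
    mask = bins == b
    if mask.sum() == 0:
        continue
    acc = ((prob[mask] >= 0.5) == y_te[mask].astype(bool)).mean()
    print(f"{b*10:3d}-{b*10+10:3d}%: n={mask.sum():4d}, accuracy={acc:.3f}")
```

The same loop, run per model, yields the interval-by-interval comparison and sample distribution the paper uses to weigh type 2 errors against headline accuracy.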