• Title/Summary/Keyword: Systems approach


A Study on Public Interest-based Technology Valuation Models in Water Resources Field (수자원 분야 공익형 기술가치평가 시스템에 대한 연구)

  • Ryu, Seung-Mi;Sung, Tae-Eung
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.177-198
    • /
    • 2018
  • As the nature of water resources shifts toward that of a public good, it has become necessary to acquire and apply a framework for measuring their economic value and managing performance. To date, water technologies have been evaluated through feasibility studies or technology assessments based on net present value (NPV) or benefit-to-cost (B/C) effects; however, no systematic valuation model yet exists for objectively assessing the economic value of a technology-based business so that research outcomes can be diffused and fed back. K-water (a government-supported public company in Korea) therefore sees the need to establish a technology valuation framework suited to the technical characteristics of the water resources fields it manages, and to verify it on an exemplary case. The valuation approach examined in this study can serve, for public-interest goods, as a tool to measure and manage the value and achievements a technology contributes to society. By calculating the value that a given technology contributes to society as a public resource, K-water can use the result as base information when publicizing the benefits achieved or the necessity of cost input, and thereby secure legitimacy for the large-scale R&D expenditure characteristic of public technology. K-water, a public corporation dealing with the public good of water resources, can then establish a commercialization strategy for business operation and lay a basis for calculating the performance of invested R&D costs. In this study, K-water developed a web-based valuation model for public-interest water resource technologies, built on an evaluation system suited to the characteristics of technologies in the water resources field.
In particular, drawing on the evaluation methodology of Japan's National Institute of Advanced Industrial Science and Technology (AIST) to match expense items to expense accounts according to the related benefit items, we propose the so-called 'K-water proprietary model', which combines the cost-benefit approach with free cash flow (FCF) analysis; we then built it into a pipeline on the K-water research performance management system and verified it on a practical case of a desalination technology. We analyze the design logic and evaluation process embedded in the web-based valuation system, which reflects the characteristics of water resources technology, together with the reference information and database (D/B) logic that each model uses to calculate public-interest-based and profit-based technology values within the integrated technology management system. We also review the hybrid evaluation module, which quantifies qualitative evaluation indices reflecting the unique characteristics of water resources, and the visualized user interface (UI) of the actual web-based evaluation; both are additions, for calculating business value from financial data, to existing web-based technology valuation systems in other fields. K-water's technology valuation model distinguishes between public-interest and profit-oriented water technologies. Evaluation modules in the profit-oriented valuation model are designed around the profitability of a technology; K-water's technology inventory includes a number of profit-oriented technologies, such as water treatment membranes. The public-interest valuation model, by contrast, is designed to evaluate technologies oriented toward the public good, such as dams, and reflects the characteristics of public benefits and costs. To examine the appropriateness of the cost-benefit-based public utility valuation model (i.e., the K-water-specific technology valuation model) presented in this study, we applied it to a practical case: a benefit-to-cost analysis of a water resource technology with a 20-year lifetime. In future work, we will further verify the K-water public-utility valuation model against individual business models reflecting various business environments.
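The NPV and benefit-to-cost measures that the abstract contrasts with K-water's cost-benefit/FCF model can be sketched as follows. This is a minimal illustration, not the paper's model: the 5% discount rate and the cash-flow figures are assumed values, with only the 20-year lifetime taken from the abstract.

```python
def npv(rate, cash_flows):
    """Net present value of cash flows indexed from year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def bc_ratio(rate, benefits, costs):
    """Benefit-to-cost ratio: discounted benefits over discounted costs."""
    return npv(rate, benefits) / npv(rate, costs)

# Hypothetical technology: 100 units of upfront R&D cost, then 1 unit of
# O&M cost and 12 units of public benefit per year over a 20-year lifetime.
benefits = [0.0] + [12.0] * 20
costs = [100.0] + [1.0] * 20
rate = 0.05  # assumed social discount rate

net = npv(rate, [b - c for b, c in zip(benefits, costs)])
ratio = bc_ratio(rate, benefits, costs)
print(f"NPV = {net:.2f}, B/C = {ratio:.2f}")
```

A technology passes such a screen when NPV > 0, or equivalently B/C > 1; the abstract's point is that these figures alone do not systematize the valuation of public-interest technologies.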

APPLICATION OF FUZZY SET THEORY IN SAFEGUARDS

  • Fattah, A.;Nishiwaki, Y.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1051-1054
    • /
    • 1993
  • The International Atomic Energy Agency's Statute, in Article III.A.5, allows it “to establish and administer safeguards designed to ensure that special fissionable and other materials, services, equipment, facilities and information made available by the Agency or at its request or under its supervision or control are not used in such a way as to further any military purpose; and to apply safeguards, at the request of the parties, to any bilateral or multilateral arrangement, or at the request of a State, to any of that State's activities in the field of atomic energy”. Safeguards are essentially a technical means of verifying the fulfilment of political obligations undertaken by States and given legal force in international agreements relating to the peaceful uses of nuclear energy. The main political objectives are: to assure the international community that States are complying with their non-proliferation and other peaceful undertakings; and to deter (a) the diversion of safeguarded nuclear materials to the production of nuclear explosives or for military purposes and (b) the misuse of safeguarded facilities with the aim of producing unsafeguarded nuclear material. It is clear that no international safeguards system can physically prevent diversion. The IAEA safeguards system is basically a verification measure designed to provide assurance in those cases in which diversion has not occurred. Verification is accomplished by two basic means: material accountancy, and containment and surveillance measures. Nuclear material accountancy is the fundamental IAEA safeguards mechanism, while containment and surveillance serve as important complementary measures. Material accountancy refers to a collection of measurements and other determinations which enable the State and the Agency to maintain a current picture of the location and movement of nuclear material into and out of material balance areas, i.e. areas where all material entering or leaving is measurable. 
A containment measure is one designed to take advantage of structural characteristics, such as containers, tanks or pipes, to establish the physical integrity of an area or item by preventing the undetected movement of nuclear material or equipment. Such measures involve the application of tamper-indicating or surveillance devices. Surveillance refers to both human and instrumental observation aimed at indicating the movement of nuclear material. The verification process consists of three overlapping elements: (a) provision by the State of information such as design information describing nuclear installations; accounting reports listing nuclear material inventories, receipts and shipments; documents amplifying and clarifying reports, as applicable; and notification of international transfers of nuclear material; (b) collection by the IAEA of information through inspection activities such as verification of design information, examination of records and reports, measurement of nuclear material, examination of containment and surveillance measures, and follow-up activities in case of unusual findings; (c) evaluation of the information provided by the State and of that collected by inspectors, to determine the completeness, accuracy and validity of the information provided by the State and to resolve any anomalies and discrepancies. To design an effective verification system, one must identify possible ways and means by which nuclear material could be diverted from peaceful uses, including means to conceal such diversions. These theoretical ways and means, which have become known as diversion strategies, are used as one of the basic inputs for the development of safeguards procedures, equipment and instrumentation. 
For the purpose of analysing implementation strategies, it is assumed that non-compliance cannot be excluded a priori and that consequently there is a low but non-zero probability that a diversion could be attempted in all safeguards situations. An important element of diversion strategies is the identification of the various possible diversion paths: the amount, type and location of nuclear material involved; the physical route and conversion of the material that may take place; the rate of removal; and concealment methods, as appropriate. With regard to the physical route and conversion of nuclear material, the following main categories may be considered: unreported removal of nuclear material from an installation or during transit; unreported introduction of nuclear material into an installation; unreported transfer of nuclear material from one material balance area to another; unreported production of nuclear material, e.g. enrichment of uranium or production of plutonium; and undeclared uses of the material within the installation. With respect to the amount of nuclear material that might be diverted in a given time (the diversion rate), the continuum between the following two limiting cases is considered: one significant quantity or more in a short time, often known as abrupt diversion; and one significant quantity or more per year, for example by accumulating smaller amounts that add up to a significant quantity over a period of one year, often called protracted diversion. Concealment methods may include: restriction of inspectors' access; falsification of records, reports and other material balance data; replacement of nuclear material, e.g. use of dummy objects; falsification of measurements or of their evaluation; and interference with IAEA-installed equipment. As a result of diversion and its concealment or other actions, anomalies will occur. 
All reasonable diversion routes, scenarios/strategies and concealment methods have to be taken into account in designing safeguards implementation strategies, so as to provide sufficient opportunities for the IAEA to observe such anomalies. The safeguards approach for each facility will make different use of these procedures, equipment and instrumentation, according to the various diversion strategies that could apply to that facility and according to the detection and inspection goals in force. Postulated pathway sets of scenarios comprise those elements of diversion strategies which might be carried out at a facility or across a State's fuel cycle with declared or undeclared activities. All such factors, however, contain a degree of fuzziness that requires human judgment to reach the ultimate conclusion that all material is being used for peaceful purposes. Safeguards have traditionally been based on verification of declared material and facilities, using material accountancy as a fundamental measure. The strength of material accountancy lies in the fact that it allows any diversion to be detected, independently of the diversion route taken. Material accountancy detects a diversion only after it has actually happened; it is thus powerless to prevent diversion physically and can only deter State authorities from contemplating one through the risk of early detection. Recently the IAEA has been faced with new challenges. To deal with these, various measures are being considered to strengthen the safeguards system, such as enhanced assessment of the completeness of the State's initial declaration of nuclear material and installations under its jurisdiction, and enhanced monitoring and analysis of open information that may indicate inconsistencies with the State's safeguards obligations. 
Precise information vital for such enhanced assessments and analyses is normally not available or, where available, would be difficult and expensive to collect. Above all, a realistic appraisal of the truth requires sound human judgment.
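The abstract's closing point, that safeguards factors "contain a degree of fuzziness" requiring human judgment, is exactly what fuzzy set theory formalizes. A minimal sketch, with entirely hypothetical categories and breakpoints (not from the paper): an observed material-balance discrepancy receives a degree of membership in linguistic categories instead of a crisp anomaly/no-anomaly label.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 at a and c, 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def grade(discrepancy_kg):
    """Membership degrees of a discrepancy (kg) in two hypothetical fuzzy sets."""
    return {
        "negligible": tri(discrepancy_kg, -1.0, 0.0, 4.0),
        "significant": tri(discrepancy_kg, 2.0, 8.0, 14.0),
    }

# A 3 kg discrepancy belongs partly to both categories at once.
print(grade(3.0))
```

In the overlap region neither crisp label applies fully, and that is precisely where the expert judgment the authors emphasize must weigh the evidence.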


A Mobile Landmarks Guide : Outdoor Augmented Reality based on LOD and Contextual Device (모바일 랜드마크 가이드 : LOD와 문맥적 장치 기반의 실외 증강현실)

  • Zhao, Bi-Cheng;Rosli, Ahmad Nurzid;Jang, Chol-Hee;Lee, Kee-Sung;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.1
    • /
    • pp.1-21
    • /
    • 2012
  • In recent years, the mobile phone has undergone an extremely fast evolution. It is equipped with high-quality color displays, high-resolution cameras, and real-time accelerated 3D graphics; other features include a GPS sensor, a digital compass, and so on. This evolution significantly helps application developers harness the power of smart-phones to create rich environments offering a wide range of services and exciting possibilities. In outdoor mobile AR research to date, there are many popular location-based AR services, such as Layar and Wikitude. These systems have a major limitation: the AR content is hardly ever overlaid precisely on the real target. Another line of research concerns context-based AR services using image recognition and tracking, in which the AR content is precisely overlaid on the real target; however, real-time performance is restricted by retrieval time, and such services are hard to implement over a large-scale area. In our work, we combine the advantages of location-based AR with those of context-based AR: the system first finds the surrounding landmarks and then performs recognition and tracking on them. The proposed system consists of two major parts, a landmark browsing module and an annotation module. In the landmark browsing module, users can view augmented virtual information (information media), such as text, pictures and video, in their smart-phone viewfinder when they point the phone at a certain building or landmark. For this, a landmark recognition technique is applied, with SURF point-based features used in the matching process due to their robustness. To ensure the image retrieval and matching processes are fast enough for real-time tracking, we exploit contextual device information (GPS and digital compass) to select from the database only the nearest landmarks in the pointed direction. The queried image is matched only against this selected data, so the matching speed increases significantly. 
The second part is the annotation module. Instead of only viewing the augmented information media, users can create virtual annotations based on linked data. Full knowledge about the landmark is not required; users can simply look for an appropriate topic by searching for a keyword in linked data. This helps the system find the target URI needed to generate correct AR content. To recognize target landmarks, images of each selected building or landmark are captured from different angles and distances; this procedure effectively builds a connection between the real building and the virtual information existing in the Linked Open Data. In our experiments, the search range in the database is reduced by clustering images into groups according to their coordinates; a grid-based clustering method and user location information are used to restrict the retrieval range. In existing research using clustering and GPS information, the retrieval time is around 70~80 ms; experimental results show that our approach reduces the retrieval time to around 18~20 ms on average. The total processing time is therefore reduced from 490~540 ms to 438~480 ms. The performance improvement becomes more pronounced as the database grows, demonstrating that the proposed system is efficient and robust in many cases.
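The contextual pre-filtering step described above, selecting only nearby landmarks in the pointed direction before image matching, can be sketched as follows. The distance threshold, field of view, and flat-earth bearing approximation are illustrative assumptions, not the paper's parameters:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def candidate_landmarks(user, heading_deg, landmarks, max_m=500.0, fov_deg=60.0):
    """Keep landmarks within max_m of the user and inside the compass field of view."""
    lat0, lon0 = user
    out = []
    for name, lat, lon in landmarks:
        if haversine_m(lat0, lon0, lat, lon) > max_m:
            continue
        # Approximate bearing from user to landmark (flat-earth; fine at city scale).
        bearing = math.degrees(math.atan2(
            (lon - lon0) * math.cos(math.radians(lat)), lat - lat0))
        if abs((bearing - heading_deg + 180.0) % 360.0 - 180.0) <= fov_deg / 2:
            out.append(name)
    return out

# User facing north: only the landmark ahead survives the pre-filter.
db = [("north_tower", 37.001, 127.0), ("east_hall", 37.0, 127.001),
      ("far_gate", 37.01, 127.0)]
print(candidate_landmarks((37.0, 127.0), 0.0, db))
```

Only the surviving candidates would then be matched against the query frame with SURF features, which is what cuts the retrieval time reported in the abstract.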

Deep Learning-based Professional Image Interpretation Using Expertise Transplant (전문성 이식을 통한 딥러닝 기반 전문 이미지 해석 방법론)

  • Kim, Taejin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.79-104
    • /
    • 2020
  • Recently, as deep learning has attracted attention, it is being considered as a method for solving problems in various fields. In particular, deep learning is known to perform excellently when applied to unstructured data such as text, sound and images, and many studies have proven its effectiveness. Owing to the remarkable development of deep learning for text and images, interest in image captioning technology and its applications is rapidly increasing. Image captioning is a technique that automatically generates relevant captions for a given image by handling image comprehension and text generation simultaneously. In spite of its high entry barrier, namely that analysts must be able to process both image and text data, image captioning has established itself as one of the key fields in A.I. research owing to its wide applicability, and many studies have been conducted to improve its performance in various respects. Recent work attempts to create advanced captions that not only describe an image accurately but also convey the information contained in the image more sophisticatedly. Despite these efforts, it is difficult to find research that interprets images from the perspective of domain experts in each field rather than from the perspective of the general public. Even for the same image, the parts of interest may differ according to the professional field of the person viewing it. Moreover, the way of interpreting and expressing the image also differs with the level of expertise. The public tends to recognize an image from a holistic and general perspective, that is, by identifying the image's constituent objects and their relationships. 
Domain experts, on the contrary, tend to recognize an image by focusing on the specific elements necessary to interpret it in the light of their expertise. This implies that the meaningful parts of an image differ depending on the viewer's perspective, even for the same image, and image captioning needs to reflect this phenomenon. Therefore, in this study we propose a method to generate domain-specialized captions for an image by utilizing the expertise of experts in the corresponding domain. Specifically, after pre-training on a large amount of general data, the domain expertise is transplanted through transfer learning with a small amount of expertise data. However, a naive application of transfer learning to expertise data can invoke another problem: simultaneous learning from captions of various characteristics may cause the so-called 'inter-observation interference' problem, which makes it difficult to learn each characteristic point of view purely. When learning from a vast amount of data, most of this interference is self-purified and has little impact on the results; in fine-tuning on a small amount of data, by contrast, its impact can be relatively large. To solve this problem, we propose a novel 'Character-Independent Transfer-learning' scheme that performs transfer learning independently for each characteristic. To confirm the feasibility of the proposed methodology, we performed experiments using the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. Additionally, following the advice of an art therapist, about 300 pairs of images and expertise captions were created, and this data was used for the expertise-transplantation experiments. 
The experiments confirmed that captions generated by the proposed methodology reflect the perspective of the implanted expertise, whereas captions generated by learning on general data contain much content irrelevant to expert interpretation. In this paper we thus propose a novel approach to specialized image interpretation, presenting a method that uses transfer learning to generate captions specialized for a specific domain. In the future, by applying the proposed methodology to expertise transplantation in various fields, we expect active research on mitigating the shortage of expertise data and on improving the performance of image captioning.
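The 'inter-observation interference' argument can be illustrated with a deliberately tiny numerical toy. This is not the paper's captioning network: the scalar "weights", style names, and targets below are invented for illustration. Fine-tuning one copy of the pretrained weights per caption characteristic preserves each style, while mixed fine-tuning drags a single model toward a compromise that serves neither.

```python
def fine_tune(weight, data, lr=0.1, epochs=50):
    """SGD on squared error: nudge a scalar weight toward the data targets."""
    w = weight
    for _ in range(epochs):
        for target in data:
            w += lr * (target - w)  # gradient step for loss (target - w)^2 / 2
    return w

pretrained = 0.0  # stands in for weights learned on a large general corpus
styles = {
    "object_captions": [1.0, 1.1, 0.9],      # one caption characteristic
    "emotion_captions": [-1.0, -0.9, -1.1],  # a conflicting characteristic
}

# Character-independent transfer learning: one fine-tuned copy per style,
# so each specialist converges to its own characteristic.
specialists = {s: fine_tune(pretrained, d) for s, d in styles.items()}

# Naive mixed fine-tuning: the conflicting styles interfere, and the single
# model settles near a compromise between them.
mixed = fine_tune(pretrained, styles["object_captions"] + styles["emotion_captions"])
print(specialists, mixed)
```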

SKU recommender system for retail stores that carry identical brands using collaborative filtering and hybrid filtering (협업 필터링 및 하이브리드 필터링을 이용한 동종 브랜드 판매 매장간(間) 취급 SKU 추천 시스템)

  • Joe, Denis Yongmin;Nam, Kihwan
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.77-110
    • /
    • 2017
  • Recently, consumption patterns have rapidly diversified and individualized through the web and Internet-based mobile devices. As this happens, the efficient operation of offline stores, the traditional distribution channel, has become more important. To raise both sales and profits, stores need to supply and sell the products most attractive to consumers in a timely manner. However, there is a lack of research on which SKUs, out of many products, can increase sales probability and reduce inventory costs. In particular, when a company sells products through multiple stores across multiple locations, recommending SKUs that appeal to customers would help increase the sales and profitability of those stores. In this study, we propose an SKU recommendation method for the individual stores of a distribution company that handles a single brand through numerous sales outlets by country and region, applying recommender-system techniques, such as collaborative filtering and hybrid filtering, that have traditionally been used for personalized recommendation. We calculated the similarity between stores using purchase data on the items each store carries, performed collaborative filtering according to each store's sales history for each SKU, and finally recommended individual SKUs to each store. In addition, the stores were classified into four clusters through principal component analysis (PCA) and cluster analysis using store profile data, and a hybrid filtering recommendation system was implemented by applying collaborative filtering within each cluster; the performance of both methods was measured on actual sales data. Most existing recommendation systems have been studied in terms of recommending items such as movies and music to individual users, and industrial applications have also become popular. 
In the meantime, there has been little research on recommending SKUs for each store by applying these recommendation systems, which have mainly been treated in the field of personalization services, to the store units of distributors handling the same brand. Whereas existing recommendation methodologies operated in the individual domain, this study expands the scope beyond it, dealing with the store units of a distribution company that handles the same brand's SKUs through numerous sales outlets by country and region, and proposes a recommendation method accordingly. In addition, whereas existing recommendation systems have largely been confined to online settings, we apply data mining techniques to develop an algorithm suited to extension into offline store areas rather than to analysis based on individual users. The significance of this study is that a personalization recommendation algorithm is applied to a plurality of sales outlets handling the same brand, that a meaningful result is derived, and that a concrete methodology is proposed which actual companies can build and use as a system. It is also meaningful as the first attempt to expand the research area of the recommendation-system literature, which has focused on the personalization domain, to the sales stores of a company handling the same brand. From 05 to 03 in 2014, the top 100 SKUs by store sales volume were narrowed to 52 SKUs recommended by the collaborative filtering and hybrid filtering methods, and we compared the performance of the two recommendation methods by totaling the resulting sales. 
The two methods are compared because the collaborative filtering method of this study is defined as the reference model, against which the hybrid filtering model, which reflects the characteristics of the offline store setting, is tested for higher performance. The proposed method indeed showed higher performance than the existing recommendation method, as demonstrated using actual sales data from large Korean apparel companies. In this study, we thus propose an efficient way to extend recommendation systems from the individual level to the group level, which is of great practical value in addition to its theoretical framework.
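The store-level collaborative filtering idea, scoring SKUs a store does not yet carry by the sales of similar stores, can be sketched on toy data. The stores, SKUs, and sales figures below are invented; the paper's actual system additionally applies PCA-based clustering and hybrid filtering on real sales data.

```python
import math

SKUS = ["coat", "shirt", "jeans", "skirt"]
STORES = {
    "A": [5, 3, 0, 1],  # store A does not carry "jeans"
    "B": [4, 2, 1, 0],
    "C": [0, 1, 5, 4],
}

def cosine(u, v):
    """Cosine similarity between two sales vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(target_name, k=1):
    """Top-k SKUs the target store does not carry, by similarity-weighted sales."""
    target = STORES[target_name]
    scores = {}
    for name, vec in STORES.items():
        if name == target_name:
            continue
        sim = cosine(target, vec)
        for i, sku in enumerate(SKUS):
            if target[i] == 0:  # SKU not carried by the target store yet
                scores[sku] = scores.get(sku, 0.0) + sim * vec[i]
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("A"))  # the SKU that similar stores sell well
```

In the hybrid variant described in the abstract, this similarity-weighted scoring would be applied only among stores within the same cluster.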

Analyzing Contextual Polarity of Unstructured Data for Measuring Subjective Well-Being (주관적 웰빙 상태 측정을 위한 비정형 데이터의 상황기반 긍부정성 분석 방법)

  • Choi, Sukjae;Song, Yeongeun;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.83-105
    • /
    • 2016
  • Measuring an individual's subjective wellbeing in an accurate, unobtrusive, and cost-effective manner is a core success factor of a wellbeing support system, a type of medical IT service. However, despite being very accurate, measurements with self-report questionnaires and wearable sensors are cost-intensive and obtrusive when the wellbeing support system must run in real time. Recently, inferring the state of subjective wellbeing from unstructured data with conventional sentiment analysis has been proposed as an alternative that resolves these drawbacks. However, this approach does not consider contextual polarity, which results in lower measurement accuracy; moreover, no sentiment word net or ontology exists for the subjective wellbeing area. Hence, this paper proposes a method to extract keywords, and their contextual polarity, that represent the subjective wellbeing state from unstructured text on online websites, in order to improve the reasoning accuracy of the sentiment analysis. The proposed method is as follows. First, a set of general sentiment words is prepared; SentiWordNet, the most widely used dictionary, was adopted. It contains about 100,000 words, including nouns, verbs, adjectives, and adverbs, with polarities from -1.0 (extremely negative) to 1.0 (extremely positive). Second, corpora on subjective wellbeing (SWB corpora) were obtained by crawling online text, and a survey was conducted to prepare a learning dataset that includes individuals' opinions and their self-reported levels of wellness, such as stress and depression. The participants were asked to respond with their feelings about online news on two topics. Next, three data sources were extracted from the SWB corpora: demographic information, psychographic information, and the structural characteristics of the text (e.g., the number of words used, simple statistics on the special characters used). 
These were considered in adjusting the level of a specific SWB factor. Finally, a set of reasoning rules was generated for each wellbeing factor to estimate an individual's SWB from the text that individual wrote. The experimental results suggest that using contextual polarity for each SWB factor (e.g., stress, depression) significantly improves the estimation accuracy compared to conventional sentiment analysis methods incorporating SentiWordNet. Although literature is available on Korean sentiment analysis, such studies used only a limited set of sentiment words; due to the small number of words, many sentences are overlooked when estimating the level of sentiment. The proposed method, in contrast, can identify multiple sentiment-neutral words as sentiment words in the context of a specific SWB factor. The results also suggest that a specific type of sentiment-word dictionary containing contextual polarity needs to be constructed alongside a common-sense dictionary such as SenticNet; these efforts will enrich and enlarge the application area of sentic computing. The study is helpful to practitioners and managers of wellness services in that several characteristics of unstructured text have been identified for improving SWB measurement. Consistent with the literature, the results showed that gender and age affect the SWB state when individuals are exposed to an identical cue from online text. In addition, the length of the textual response and the usage pattern of special characters were found to indicate the individual's SWB. These imply that better SWB measurement should involve collecting the textual structure along with the individual's demographic conditions. In the future, the proposed method should be improved by automated identification of contextual polarity, in order to enlarge the vocabulary in a cost-effective manner.
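The contextual-polarity idea in the preceding abstract can be sketched roughly as follows. The lexicon values, per-factor overrides, and word lists are hypothetical illustrations (SentiWordNet itself assigns scores per synset, not the single per-word values used here); the point is how a sentiment-neutral word can carry polarity within a specific SWB factor.

```python
# Base lexicon mimicking SentiWordNet-style polarities in [-1, 1].
BASE = {"happy": 0.8, "tired": -0.5, "deadline": 0.0, "sleep": 0.0}

# Hypothetical contextual overrides: "deadline" is neutral in general
# but negative in the context of the stress factor.
CONTEXT = {
    "stress": {"deadline": -0.7},
    "depression": {"sleep": -0.4},
}

def swb_score(tokens, factor):
    """Mean polarity of known tokens, using factor-specific overrides if any."""
    lex = {**BASE, **CONTEXT.get(factor, {})}
    hits = [lex[t] for t in tokens if t in lex]
    return sum(hits) / len(hits) if hits else 0.0

text = "tired of this deadline".split()
print(swb_score(text, "stress"))  # "deadline" counts as negative here
print(swb_score(text, None))      # the plain lexicon misses that signal
```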

Step-by-step Participatory Design Process through Construction of Public Discourse and Community Development - An Analysis of New York Governors Island Park and Public Spaces - (공론과 커뮤니티 형성을 통한 단계적 참여 설계 - 뉴욕 거버너스 아일랜드 공원 및 공공 공간을 중심으로 -)

  • Choi, Hyeyoung;Pae, Jeong-Hann
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.44 no.2
    • /
    • pp.11-24
    • /
    • 2016
  • This study, taking the New York Governors Island project as an example, is an in-depth analysis of the process of participatory design for transforming a space long isolated from the public into a park and communal space. We found that the participatory design process at each project phase consisted of the following. First, in the preparatory phase, public discourse was constructed: efforts to find a consensus for developing the space in a public-driven direction were led by various initiatives. Second, by opening the site to the public before the design phase and consistently inviting a wide range of activities, a community was formed and its attachment to the site was established. Third, from the competition until the completion of the master planning phase, substantial and active participatory design was enabled through the social interaction of the community formed and developed over many years. Fourth, in the detail design and implementation phase, a traditional and passive mode of participatory design, such as public hearings, was used, thereby preserving the expertise of professional designers. Public participation was reflected most in the master planning phase, which created the broad framework of the park, while professionals designed the more specific aspects; this efficient and effective approach enabled both citizens and professionals to take part dutifully in developing the park. The results of this study can be summarized as follows. The mode of participatory design should vary at each project phase, from the construction of public discourse, to the formation and development of the community, to participatory design through social interaction. Furthermore, it is effective to preserve the distinct roles of citizens and design professionals. It is particularly important to extend the process of participatory design to include the construction of public discourse and community.

ICT Medical Service Provider's Knowledge and level of recognizing how to cope with fire fighting safety (ICT 의료시설 기반에서 종사자의 소방안전 지식과 대처방법 인식수준)

  • Kim, Ja-Sook;Kim, Ja-Ok;Ahn, Young-Joon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.9 no.1
    • /
    • pp.51-60
    • /
    • 2014
  • In this study, the knowledge of fire fighting safety and the level of recognizing how to cope with fires among ICT medical service providers in the Gwangju and Jeonnam regions of Korea were investigated, in order to determine the elements affecting such levels and to provide basic information for manuals on coping with fire fighting safety in medical facilities. The data were analyzed using SPSS Win 14.0. ICT medical service providers scored 7.06 (on a 10-point scale) for knowledge of fire fighting safety and 6.61 (on an 11-point scale) for recognizing how to cope with fire fighting safety. The level of recognizing how to cope with fire fighting safety differed significantly according to gender (t=4.12, p<.001), age (${\chi}^2$=17.24, p<.001), length of career (${\chi}^2$=22.76, p<.001), experience with fire fighting safety education (t=6.10, p<.001), and level of subjective knowledge of fire fighting safety (${\chi}^2$=53.83, p<.001). To enhance ICT medical service providers' understanding of fire fighting safety and their methods of coping, the following are found to be required: self-directed learning that goes beyond lecture-based knowledge transfer; learning tailored to individuals; fire fighting education focused on hands-on experience through the development of various contents; cooperative learning in deploying patients by triage classification systems using simulations; implementation of a digital anti-fire monitoring system with a multipoint communication protocol; design and development of a smoke detection system using infrared lasers for fire detection in wide spaces; a video-based fire detection algorithm using a Gaussian mixture model; and development of an education manual for coping with fire fighting safety through a multi-learning approach in medical facilities.
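The group comparisons reported above (independent t-tests and ${\chi}^2$ tests run in SPSS) can be sketched in Python. The data below are synthetic stand-ins, not the study's survey responses, and `scipy.stats` is used here in place of SPSS:

```python
# Hypothetical re-creation of the abstract's group-difference tests
# (synthetic data, not the study's real survey responses).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Coping-method recognition scores (11-point scale) for two hypothetical groups.
group_a = rng.normal(7.0, 1.5, 80)
group_b = rng.normal(6.2, 1.5, 80)
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# Chi-square test of independence, e.g. age band vs. low/high recognition.
# Rows: three age bands; columns: made-up counts of low vs. high recognition.
contingency = np.array([[30, 10],
                        [25, 20],
                        [12, 33]])
chi2, chi_p, dof, expected = stats.chi2_contingency(contingency)

print(f"t = {t_stat:.2f}, p = {t_p:.4f}")
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {chi_p:.4f}")
```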

Development of Systematic Process for Estimating Commercialization Duration and Cost of R&D Performance (기술가치 평가를 위한 기술사업화 기간 및 비용 추정체계 개발)

  • Jun, Seoung-Pyo;Choi, Daeheon;Park, Hyun-Woo;Seo, Bong-Goon;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.139-160
    • /
    • 2017
  • Technology commercialization creates effective economic value by linking a company's R&D processes and outputs to the market. It is important in that it allows a company to gain and maintain a sustained competitive advantage. For a specific technology to be commercialized, it goes through the stages of technology planning, technology research and development, and commercialization, a process that requires substantial time and money. Therefore, the duration and cost of technology commercialization are important decision information for determining a market entry strategy. They are even more important information for a technology investor seeking to rationally evaluate the technology's value. It is thus very important to scientifically estimate the duration and cost of technology commercialization. However, research on technology commercialization is insufficient and related methodologies are lacking. In this study, we propose an evaluation model that can estimate the duration and cost of commercializing R&D technology for small and medium-sized enterprises. To accomplish this, we collected public data from the National Science & Technology Information Service (NTIS) and survey data provided by the Small and Medium Business Administration, and developed an estimation model for the commercialization duration and cost of R&D performance using these data, based on the market approach, one of the standard technology valuation methods. Specifically, we defined the commercialization process as consisting of development planning, development progress, and commercialization. From the NTIS database and the SME technical statistics survey of the Small and Medium Business Administration, we derived key variables such as stage-wise R&D costs and duration, factors of the technology itself, factors of the technology development, and environmental factors.
Given the data, we first estimated the costs and duration at each technology readiness level (basic research, applied research, development research, prototype production, commercialization) for each industry classification. We then developed and verified a research model for each industry classification. The results of this study can be summarized as follows. First, the model can be reflected in a technology valuation model and used to estimate the objective economic value of a technology. The duration and cost from the technology development stage to the commercialization stage are critical factors that strongly influence the discounting of the future sales expected from the technology. The results of this study can contribute to more reliable technology valuation because the commercialization duration and cost are estimated scientifically from past data. Second, we verified models of various kinds, including statistical models and data mining models. The statistical models help to identify the important factors for estimating the duration and cost of technology commercialization, while the data mining models provide rules or algorithms that can be applied in an advanced technology valuation system. Finally, this study reaffirms the importance of commercialization costs and durations, which have not been actively studied in previous research. The results confirm the factors that significantly affect commercialization costs and duration, and show that these factors differ by industry classification. Practically, the results of this study can be reflected in technology valuation systems provided by national research institutes and used by R&D staff to deliver sophisticated technology valuations. The relevant logic or algorithms can be implemented independently and reflected directly in such a system, so researchers can put them to practical use immediately.
In conclusion, the results of this study constitute not only theoretical contributions but also practical ones.
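As the abstract notes, the estimated commercialization duration and cost directly affect how future sales are discounted in a technology valuation. A minimal sketch of that mechanism, with purely illustrative numbers (the function and figures below are hypothetical, not the paper's model):

```python
# Minimal sketch (illustrative numbers only) of how estimated commercialization
# duration and cost feed into a discounted-cash-flow technology valuation:
# post-commercialization cash flows are discounted back over the
# commercialization period, and the commercialization cost is subtracted.

def technology_npv(annual_cash_flows, discount_rate, commercialization_years,
                   commercialization_cost):
    """Discount cash flows that start after commercialization; a longer
    duration pushes every cash flow further out and lowers present value."""
    npv = -commercialization_cost
    for t, cf in enumerate(annual_cash_flows, start=1):
        npv += cf / (1 + discount_rate) ** (commercialization_years + t)
    return npv

# Hypothetical technology: 5 years of 200M KRW cash flows, 10% discount rate.
fast = technology_npv([200] * 5, 0.10, commercialization_years=2,
                      commercialization_cost=150)
slow = technology_npv([200] * 5, 0.10, commercialization_years=4,
                      commercialization_cost=150)
print(f"NPV with 2-year commercialization: {fast:.1f}M KRW")
print(f"NPV with 4-year commercialization: {slow:.1f}M KRW")
```

The two calls differ only in the estimated duration, which is exactly why a scientific estimate of that duration matters to the valuation result.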

High quality topological insulator Bi2Se3 grown on h-BN using molecular beam epitaxy

  • Park, Joon Young;Lee, Gil-Ho;Jo, Janghyun;Cheng, Austin K.;Yoon, Hosang;Watanabe, Kenji;Taniguchi, Takashi;Kim, Miyoung;Kim, Philip;Yi, Gyu-Chul
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2016.02a
    • /
    • pp.284-284
    • /
    • 2016
  • A topological insulator (TI) is a bulk-insulating material with topologically protected Dirac surface states in the band gap. In particular, $Bi_2Se_3$ has attracted great attention as a model three-dimensional TI due to the simple electronic structure of its surface states within a relatively large band gap (~0.3 eV). However, experiments using $Bi_2Se_3$ have been hampered by the abundance of structural defects, whose bulk doping effects frequently make bulk conduction dominant over surface conduction in transport. One promising approach to avoiding this problem is to reduce structural defects by heteroepitaxially growing $Bi_2Se_3$ on a substrate with a compatible lattice structure, while also preventing surface degradation by encapsulating the pristine interface between $Bi_2Se_3$ and the substrate in a clean growth environment. A particularly promising substrate for such heteroepitaxial growth is hexagonal boron nitride (h-BN), which has the same two-dimensional (2D) van der Waals (vdW) layered structure and hexagonal lattice symmetry as $Bi_2Se_3$. Moreover, since h-BN is a dielectric insulator with a large bandgap energy of 5.97 eV and a chemically inert surface, it is well suited as a substrate for high-mobility electronic transport studies of vdW material systems. Here we report the heteroepitaxial growth and characterization of high-quality topological insulator $Bi_2Se_3$ thin films prepared on h-BN layers. Specifically, we used molecular beam epitaxy to achieve high-quality TI thin films with extremely low defect concentrations and an ideal interface between the films and substrates. To optimize the morphology and microstructural quality of the films, a two-step growth was performed on h-BN layers transferred onto transmission electron microscopy (TEM) compatible substrates.
The resulting $Bi_2Se_3$ thin films were highly crystalline with atomically smooth terraces over a large area, and the $Bi_2Se_3$ and h-BN exhibited a clear heteroepitaxial relationship with an atomically abrupt and clean interface, as examined by high-resolution TEM. Magnetotransport characterizations revealed that this interface supports a high quality topological surface state devoid of bulk contribution, as evidenced by Hall, Shubnikov-de Haas, and weak anti-localization measurements. We believe that the experimental scheme demonstrated in this talk can serve as a promising method for the preparation of high quality TI thin films as well as many other heterostructures based on 2D vdW layered materials.
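As an aside on the Shubnikov-de Haas analysis mentioned above: for a single spin-momentum-locked Dirac surface state, the 2D carrier density follows from the oscillation frequency $B_F$ as $n_{2D} = e B_F / h$. A brief illustrative calculation (the frequency value below is hypothetical, not a result from this talk):

```python
# Illustrative back-of-the-envelope calculation (not data from the talk):
# extracting a 2D surface carrier density from a Shubnikov-de Haas
# oscillation frequency B_F, assuming a single spin-momentum-locked
# Dirac surface state (degeneracy 1):  n_2D = e * B_F / h
from scipy import constants

def sdh_carrier_density(B_F_tesla, degeneracy=1):
    """2D carrier density (m^-2) from the SdH frequency in 1/B."""
    return degeneracy * constants.e * B_F_tesla / constants.h

# Hypothetical SdH frequency of 100 T, a typical order of magnitude
# for Bi2Se3 surface states.
n_2d = sdh_carrier_density(100.0)
print(f"n_2D = {n_2d:.2e} m^-2 = {n_2d * 1e-4:.2e} cm^-2")
```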
