• Title/Summary/Keyword: Optimization problem

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.27-65
    • /
    • 2020
  • Many information and communication technology companies have released their internally developed AI technologies to the public, for example, Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning software as open source, a company can strengthen its relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although open source software has been analyzed from many angles, there is a lack of studies that help industry develop or use deep learning open source software. This study therefore attempts to derive a strategy for adopting such a framework through case studies of deep learning open source frameworks. Based on the technology-organization-environment (TOE) framework and a literature review on open source software adoption, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge & expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We analyzed three companies' adoption cases (two successes and one failure) and found that seven of the eight TOE factors, along with several factors regarding company, team, and resources, are significant for the adoption of a deep learning open source framework. By organizing the case study results, we identified five success factors for adopting a deep learning framework: the knowledge and expertise of the developers in the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. For an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers in the team. Second, the use of deep learning frameworks by research developers should be supported by collecting and managing data inside and outside the company with a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures in the usage stage, companies can increase the number of deep learning research developers, their ability to use the deep learning framework, and the available GPU resources. In the proliferation stage of the deep learning framework, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of the developers, for example, by optimizing the hardware (GPU) environment automatically. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by activating developer retraining and seminars.
To implement the five identified success factors, a step-by-step enterprise procedure for adopting the deep learning framework was proposed: defining the project problem, confirming that the deep learning methodology is the right method, confirming that the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework across the enterprise. The first three steps are pre-considerations for adopting a deep learning open source framework. Once these three pre-consideration steps are cleared, the next two steps (using the deep learning framework in the enterprise and spreading it across the enterprise) can proceed. In the fourth step, the knowledge and expertise of the developers in the team are important in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, all five factors must be realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.

Applications of Fuzzy Theory on The Location Decision of Logistics Facilities (퍼지이론을 이용한 물류단지 입지 및 규모결정에 관한 연구)

  • 이승재;정창무;이헌주
    • Journal of Korean Society of Transportation
    • /
    • v.18 no.1
    • /
    • pp.75-85
    • /
    • 2000
  • Existing optimization models use crisp data in the objective function or constraints to derive an optimal solution, and subjective factors are eliminated because complex and uncertain circumstances are treated as probabilistic ambiguity. In other words, the optimal solutions of existing models are treated as completely satisfying the objective function when industrial engineering methods are applied to minimize decision-making risk. As a result, decision-makers in location problems could not respond appropriately to variations in demand and other variables, and were not given a wide range of choices because of insufficient information. Under these circumstances, this study develops a model for the location and size decision problem of a logistics facility using fuzzy theory, with the intention of supporting the most reasonable decision from a subjective point of view under ambiguous circumstances, building on existing decision-making formulations that optimize an objective function subject to strictly given constraints. After establishing a general mixed integer programming (MIP) model, based on the results of existing studies, that decides location and size simultaneously, a fuzzy mixed integer programming (FMIP) model was developed using fuzzy theory. The general linear programming software LINDO 6.01 was used to simulate and evaluate the developed model with examples and to judge the appropriateness and adaptability of the FMIP model in the real world. An illustrative sketch of such a fuzzy MIP follows this entry.
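
A minimal sketch of what such a fuzzy mixed integer program can look like, using the Zimmermann-style max-min formulation (an assumption; the abstract does not spell out its membership functions) with invented costs, demands, and tolerances. The paper solved its model with LINDO 6.01; PuLP is used here instead.

```python
# Fuzzy MIP sketch: maximize the satisfaction degree lambda subject to a
# fuzzified cost target and crisp demand/capacity constraints.
from pulp import (LpProblem, LpMaximize, LpVariable, lpSum, LpBinary,
                  PULP_CBC_CMD)

sites, customers = range(2), range(3)
fixed = [10, 14]                    # hypothetical fixed opening costs
ship = [[4, 6, 9], [7, 3, 5]]       # hypothetical unit shipping costs
demand = [20, 30, 25]
cap = [60, 60]
z0, p0 = 300, 60                    # cost aspiration level and tolerance

prob = LpProblem("fuzzy_facility_location", LpMaximize)
lam = LpVariable("lambda", 0, 1)    # overall satisfaction degree
y = [LpVariable(f"open_{i}", cat=LpBinary) for i in sites]
x = [[LpVariable(f"x_{i}_{j}", lowBound=0) for j in customers]
     for i in sites]

prob += lam                         # objective: maximize min membership
cost = (lpSum(fixed[i] * y[i] for i in sites)
        + lpSum(ship[i][j] * x[i][j] for i in sites for j in customers))
# fuzzy objective: cost <= z0 fully satisfied, degrades linearly over p0
prob += cost <= z0 + (1 - lam) * p0
for j in customers:                 # crisp demand constraints
    prob += lpSum(x[i][j] for i in sites) >= demand[j]
for i in sites:                     # capacity only if the site is opened
    prob += lpSum(x[i][j] for j in customers) <= cap[i] * y[i]

prob.solve(PULP_CBC_CMD(msg=False))
print(lam.value(), [v.value() for v in y])
```

With these toy numbers the minimum achievable cost is 319, so the aspiration of 300 cannot be met exactly and the solver returns λ ≈ 0.68: instead of declaring the target infeasible, the fuzzy model reports the degree to which it is satisfied.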

Bottom electrode optimization for the applications of ferroelectric memory device (강유전체 기억소자 응용을 위한 하부전극 최적화 연구)

  • Jung, S.M.;Choi, Y.S.;Lim, D.G.;Park, Y.;Song, J.T.;Yi, J.
    • Journal of the Korean Crystal Growth and Crystal Technology
    • /
    • v.8 no.4
    • /
    • pp.599-604
    • /
    • 1998
  • We investigated Pt and $RuO_2$ as bottom electrodes for ferroelectric capacitor applications. The bottom electrodes were prepared by RF magnetron sputtering. The investigated parameters included substrate temperature, gas flow rate, RF power for film growth, and post-annealing effects. The substrate temperature strongly influenced the surface morphology and resistivity of the bottom electrodes as well as the crystallographic structure of the films. XRD results on Pt films showed a mixed phase of (111) and (200) peaks for substrate temperatures ranging from RT to $200^{\circ}C$, and a preferred (111) orientation at $300^{\circ}C$. From the XRD and AFM results, we recommend a substrate temperature of $300^{\circ}C$ and an RF power of 80 W for Pt bottom electrode growth. By varying the oxygen partial pressure from 0 to 50%, we found that only Ru metal was grown at 0~5% $O_2$, a mixed phase of Ru and $RuO_2$ at $O_2$ partial pressures of 10~40%, and a pure $RuO_2$ phase at an $O_2$ partial pressure of 50%. This result indicates that a $RuO_2/Ru$ double layer can be grown in a single process by modulating the gas flow rate. The double layer structure is expected to reduce the fatigue problem while keeping a low electrical resistivity. As the post-annealing temperature was increased from RT to $700^{\circ}C$, the resistivity of Pt and $RuO_2$ decreased linearly. This paper presents the optimized process conditions of the bottom electrodes for memory device applications.

Patient Setup Aid with Wireless CCTV System in Radiation Therapy (무선 CCTV 시스템을 이용한 환자 고정 보조기술의 개발)

  • Park, Yang-Kyun;Ha, Sung-Whan;Ye, Sung-Joon;Cho, Woong;Park, Jong-Min;Park, Suk-Won;Huh, Soon-Nyung
    • Radiation Oncology Journal
    • /
    • v.24 no.4
    • /
    • pp.300-308
    • /
    • 2006
  • $\underline{Purpose}$: To develop a wireless CCTV system in semi-beam's eye view (BEV) to monitor daily patient setup in radiation therapy. $\underline{Materials\;and\;Methods}$: To capture patient images in semi-BEV, CCTV cameras were installed in a custom-made acrylic applicator below the treatment head of a linear accelerator. The images from the cameras are transmitted via a radio frequency signal (${\sim}2.4\;GHz$, 10 mW RF output). An expected problem with this system is radio frequency interference, which was solved by RF shielding with Cu foils and median-filtering software. The images are analyzed by our custom-made software: a user indicates three anatomical landmarks on the patient surface, and the corresponding 3-dimensional structures are then obtained and registered automatically by a localization procedure consisting mainly of a stereo matching algorithm and Gauss-Newton optimization (a rough numerical illustration of this step follows this entry). This algorithm was applied to phantom images to investigate the setup accuracy. A respiratory gating system was also investigated using real-time image processing: a line-laser marker projected onto the patient's surface is extracted by binary image processing, and the breathing pattern is calculated and displayed in real time. $\underline{Results}$: More than 80% of the camera noise from the linear accelerator was eliminated by wrapping the cameras with copper foils. The accuracy of the localization procedure was on the order of $1.5{\pm}0.7\;mm$ with a point phantom, and within sub-millimeters and degrees with a custom-made head/neck phantom. With the line-laser marker, real-time respiratory monitoring was possible with a delay of ${\sim}0.17\;sec$. $\underline{Conclusion}$: The wireless CCTV camera system is a novel tool for monitoring daily patient setup. A respiratory gating system based on the wireless CCTV also appears feasible.
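
A rough numerical illustration, not the authors' code, of the Gauss-Newton piece of the localization procedure: fitting a 3D rigid transform (small-angle rotation plus translation, six parameters; all names and values here are ours) that registers measured landmark positions to reference ones.

```python
# Gauss-Newton registration of three 3D landmarks with a numerical Jacobian.
import numpy as np

def transform(p, pts):
    rx, ry, rz, tx, ty, tz = p
    # small-angle rotation approximation keeps the residual nearly linear
    R = np.array([[1, -rz, ry], [rz, 1, -rx], [-ry, rx, 1]])
    return pts @ R.T + np.array([tx, ty, tz])

def gauss_newton(src, dst, iters=10, eps=1e-6):
    p = np.zeros(6)
    for _ in range(iters):
        r = (transform(p, src) - dst).ravel()      # stacked residual vector
        J = np.empty((r.size, 6))                  # numerical Jacobian
        for k in range(6):
            dp = np.zeros(6)
            dp[k] = eps
            J[:, k] = ((transform(p + dp, src) - dst).ravel() - r) / eps
        p -= np.linalg.lstsq(J, r, rcond=None)[0]  # Gauss-Newton update
    return p

src = np.array([[0., 0, 0], [50, 0, 0], [0, 50, 0]])   # made-up landmarks (mm)
dst = transform([0.01, -0.02, 0.03, 1.5, -0.7, 0.4], src)
print(np.round(gauss_newton(src, dst), 3))             # recovers the 6 params
```

Because the small-angle model is linear in the six parameters, the update converges essentially in one iteration here; the real procedure also has to contend with stereo-matching noise in the landmark coordinates.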

Opportunity Tree Framework Design For Optimization of Software Development Project Performance (소프트웨어 개발 프로젝트 성능의 최적화를 위한 Opportunity Tree 모델 설계)

  • Song Ki-Won;Lee Kyung-Whan
    • The KIPS Transactions:PartD
    • /
    • v.12D no.3 s.99
    • /
    • pp.417-428
    • /
    • 2005
  • Today, IT organizations perform projects with a vision related to marketing and financial profit. Realizing this vision requires improving project performance in terms of QCD. Organizations have made great efforts to achieve this through process improvement; large companies such as IBM, Ford, and GE have achieved over $80\%$ of their success through information-technology-driven business process re-engineering rather than through the improvement effect of computerization alone. It is important to collect, analyze, and manage data on performed projects to achieve this objective, but quantitative measurement is difficult because software is invisible and the effects and efficiency gains caused by process change cannot be identified visually; therefore, it is not easy to extract an improvement strategy. This paper measures and analyzes project performance, focusing on an organization's external effectiveness and internal efficiency (Quality, Delivery, Cycle time, and Waste). Based on the measured project performance scores, an Opportunity Tree (OT) model was designed for optimizing project performance. The design process is as follows. First, metadata are derived from projects and analyzed with a quantitative GQM (Goal-Question-Metric) questionnaire. Then, the project performance model is designed from the questionnaire data, and the organization's performance score for each area is calculated. The value is revised by integrating the measured scores by area with vision weights from all stakeholders (CEO, middle managers, developers, investors, and customers); a toy example of this revision step follows this entry. Through this, routes for improvement are presented and an optimized improvement method is suggested. Existing methods to improve the software process have been highly effective for individual processes but somewhat unsatisfactory as a structural mechanism to develop and systematically manage improvement strategies when applying the processes to projects. The proposed OT model provides a solution to this problem. The OT model is useful for providing an optimal improvement method in line with the organization's goals and can reduce the risks that may occur in the course of improving a process if it is applied with the proposed methods. In addition, satisfaction with the improvement strategy can be increased by obtaining vision weights from all stakeholders through the qualitative questionnaire and reflecting them in the calculation. The OT is also useful for optimizing market expansion and financial performance by controlling Quality, Delivery, Cycle time, and Waste.
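
A toy illustration of that vision-weight revision step, with invented scores and weights (the paper derives both from GQM questionnaires, and its actual aggregation formula is not given in the abstract):

```python
# Revise measured area scores with stakeholder vision weights.
scores = {"quality": 72.0, "delivery": 65.0, "cycle_time": 58.0, "waste": 80.0}
weights = {  # hypothetical per-stakeholder vision weights over the four areas
    "ceo":       {"quality": 0.4, "delivery": 0.3, "cycle_time": 0.2, "waste": 0.1},
    "manager":   {"quality": 0.3, "delivery": 0.3, "cycle_time": 0.3, "waste": 0.1},
    "developer": {"quality": 0.2, "delivery": 0.2, "cycle_time": 0.4, "waste": 0.2},
}

# aggregate each area's weight across stakeholders, then normalize
agg = {a: sum(w[a] for w in weights.values()) for a in scores}
total = sum(agg.values())
norm = {a: agg[a] / total for a in agg}

# revised overall performance score: weight each area by the consensus weight
overall = sum(scores[a] * norm[a] for a in scores)
print(norm, round(overall, 2))
```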

Limit Pricing by Noncooperative Oligopolists (과점산업(寡占産業)에서의 진입제한가격(進入制限價格))

  • Nam, Il-chong
    • KDI Journal of Economic Policy
    • /
    • v.12 no.1
    • /
    • pp.127-148
    • /
    • 1990
  • A Milgrom-Roberts-style signalling model of limit pricing is developed to analyze the possibility and scope of limit pricing in general noncooperative oligopolies. The model contains multiple incumbent firms facing a potential entrant and assumes an information asymmetry between the incumbents and the potential entrant about market demand. There are two periods in the model. In period 1, n incumbent firms simultaneously and noncooperatively choose quantities. At the end of period 1, the potential entrant observes the market price and makes an entry decision. In period 2, depending on the entrant's decision, n or (n+1) firms choose quantities again before the game terminates. Since the incumbents' period-1 choices depend on their information about demand, the period-1 market price conveys information about market demand; thus, there is a systematic link between the market price and the profitability of entry. Using Bayes-Nash equilibrium as the solution concept, we find that there exist demand conditions under which incumbent firms will limit price. In symmetric equilibria, each incumbent firm produces an output greater than the Cournot output and induces a price below the Cournot price. In doing so, each incumbent firm refrains from maximizing short-run profit and supplies a public good, namely entry deterrence; a schematic version of the underlying incentive condition is sketched after this entry. The reason that entry is deterred by such a reduced price is that it conveys information about industry demand that is unfavorable to the entrant. This establishes the possibility of limit pricing by noncooperative oligopolists in a fully rational setting and generalizes the result of Milgrom and Roberts to general oligopolies, confirming Bain's intuition. Limit pricing by incumbents can be interpreted as a form of credible collusion in which each firm voluntarily deviates from myopic optimization in order to deter entry using its superior information. This type of implicit collusion differs from Folk-theorem-type collusion in many ways and suggests that collusion can be credible even in finite games as long as there is information asymmetry. Another important result is that as the number of incumbent firms approaches infinity, that is, as the industry approaches a competitive one, the probability that limit pricing occurs converges to zero and the probability of entry converges to that under complete information. This limit result confirms the intuition that as the number of agents sharing the same private information increases, the value of the private information decreases and the probability that the information gets revealed increases. It also supports the conventional belief that there is no entry problem in a competitive market. Considering that limit pricing is generally believed to occur at an early stage of an industry, and that many industries in Korea are oligopolies in their infant stages, the theoretical results of this paper suggest that we should pay attention to the possibility of implicit collusion by incumbent firms aimed at deterring new entry using superior information. The long-term loss to the Korean economy from limit pricing can be very large if the industry in question is part of the world market and the domestic potential entrant whose entry is deterred could have developed into a competitor in the world market. In this case, the long-term loss to the Korean economy should include the lost opportunity in the world market in addition to the domestic long-run welfare loss.
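
A schematic statement, in notation of our own choosing rather than the paper's, of the incentive condition that supports such limit pricing in equilibrium: an incumbent sacrifices period-1 profit relative to Cournot play whenever the induced reduction in the entry probability is worth more in period 2.

```latex
% \pi_i^{C}(\theta): incumbent i's period-1 Cournot profit given demand type \theta
% \pi_i^{1}(q^{L};\theta): its period-1 profit at the larger limit-pricing output q^{L}
% \pi_i^{N}(\theta), \pi_i^{E}(\theta): period-2 profits without / with entry
\[
  \underbrace{\pi_i^{C}(\theta)-\pi_i^{1}(q^{L};\theta)}_{\text{period-1 signalling cost}}
  \;\le\;
  \underbrace{\bigl[\Pr(\text{entry}\mid q^{C})-\Pr(\text{entry}\mid q^{L})\bigr]}_{\text{reduction in entry probability}}
  \,\bigl[\pi_i^{N}(\theta)-\pi_i^{E}(\theta)\bigr].
\]
```

Read this way, the paper's limit result says that as n grows, the per-firm period-2 gain on the right-hand side shrinks, so the inequality eventually fails and limit pricing disappears.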

Development of a Novel Medium with Chinese Cabbage Extract and Optimized Fermentation Conditions for the Cultivation of Leuconostoc citreum GR1 (폐배추 추출물을 이용한 Leuconostoc citreum GR1 종균 배양용 최적 배지 및 배양 조건 개발)

  • Moon, Shin-Hye;Chang, Hae-Choon;Kim, In-Cheol
    • Journal of the Korean Society of Food Science and Nutrition
    • /
    • v.42 no.7
    • /
    • pp.1125-1132
    • /
    • 2013
  • In the kimchi manufacturing process, the starter culture is grown on a large scale and needs to be supplied at a low price to kimchi factories. However, the currently high cost of culturing lactic acid bacteria for the starter contributes to rising kimchi prices. To solve this problem, the development of a new medium for culturing lactic acid bacteria was studied. The base materials of this novel medium were Chinese cabbage extract, a carbon source, a nitrogen source, and inorganic salts. The optimal composition of this medium was determined to be 30% Chinese cabbage extract, 2% maltose, 0.25% yeast extract, and $2{\times}$ salt stock (2% sodium acetate trihydrate, 0.8% disodium hydrogen phosphate, 0.8% sodium citrate, 0.8% ammonium sulfate, 0.04% magnesium sulfate, 0.02% manganese sulfate). The newly developed medium was named MFL (medium for lactic acid bacteria). After culture for 24 hr at $30^{\circ}C$, the CFU/mL of Leuconostoc (Leuc.) citreum GR1 in MRS and MFL was $3.41{\times}10^9$ and $7.49{\times}10^9$, respectively; the cell count in the MFL medium was 2.2 times higher than in the MRS medium. In a scale-up process using this optimized medium, the fermentation conditions for Leuc. citreum GR1 were tested in a 2 L working volume in a 5 L jar fermentor at $30^{\circ}C$. At an impeller speed of 50 rpm without pH control, the viable cell count was $8.60{\times}10^9$ CFU/mL. From studies on pH-stat controlled fermentation, the optimal pH and regulating agent were determined to be 6.8 and NaOH, respectively. At an impeller speed of 50 rpm with pH control, the viable cell count was $11.42{\times}10^9(1.14{\times}10^{10})$ CFU/mL after cultivation for 20 hr, a value 3.34 times higher than that obtained with the MRS medium in biomass production (the fold increases are recomputed in the short check following this entry). The MFL medium is expected to offer economic advantages for the cultivation of Leuc. citreum GR1 as a starter for kimchi production.
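
A quick arithmetic check of the fold improvements quoted above, using the CFU/mL values from the abstract (a sanity check of the reported ratios, not part of the paper):

```python
# CFU/mL values quoted in the abstract.
mrs_24h = 3.41e9        # Leuc. citreum GR1 in MRS medium, 24 hr culture
mfl_24h = 7.49e9        # in the optimized MFL medium, 24 hr culture
mfl_ph_stat = 1.142e10  # 5 L jar fermentor, pH-stat control at pH 6.8, 20 hr

print(round(mfl_24h / mrs_24h, 2))      # 2.2  -> the reported 2.2-fold gain
print(round(mfl_ph_stat / mrs_24h, 2))  # 3.35 -> the reported ~3.34-fold gain
```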

Optimized Methods of Preimplantation Genetic Diagnosis for Trinucleotide Repeat Diseases of Huntington's Disease, Spinocerebellar Ataxia 3 and Fragile X Syndrome (삼핵산 반복서열 질환인 헌팅톤병, 척수소뇌성 운동실조증, X-염색체 취약 증후군의 착상전 유전진단 방법에 대한 연구)

  • Kim, Min-Jee;Lee, Hyoung-Song;Lim, Chun-Kyu;Cho, Jae-Won;Kim, Jin-Young;Koong, Mi-Kyoung;Son, In-Ok;Kang, Inn-Soo;Jun, Jin-Hyon
    • Clinical and Experimental Reproductive Medicine
    • /
    • v.34 no.3
    • /
    • pp.179-188
    • /
    • 2007
  • Objectives: Many neurological diseases are known to be caused by expansion of trinucleotide repeats (TNRs). It is difficult to diagnose alterations of TNRs at the single-cell level for preimplantation genetic diagnosis (PGD). In this study, we describe methods optimized for PGD of TNR-related diseases such as Huntington's disease (HD), spinocerebellar ataxia 3 (SCA3), and fragile X syndrome (FXS). Methods: We performed preclinical assays on heterozygous patients' lymphocytes using a single-cell PCR strategy. Fluorescent semi-nested PCR and fragment analysis on an automatic genetic analyzer were applied for HD and SCA3. Whole-genome amplification with the multiple displacement amplification (MDA) method and fluorescent PCR were carried out for FXS. Amplification and allele drop-out (ADO) rates were evaluated in each case. Results: The fluorescent semi-nested PCR of single lymphocytes showed a 100.0% amplification rate and 14.0% ADO rate in HD, and a 94.7% amplification rate and 5.6% ADO rate in SCA3. We could not detect the PCR product of CGG repeats in FXS using fluorescent semi-nested PCR alone. After applying the MDA method to FXS, an 84.2% amplification rate and 31.3% ADO rate were achieved. Conclusions: Fluorescent semi-nested PCR is a reliable method for PGD of HD and SCA3. The advanced MDA method overcomes the problem of amplification failure in the CGG repeats of the FXS case. Optimization of methods for single-cell analysis could improve the sensitivity and reliability of PGD for complicated single-gene disorders involving TNRs.

Analysis of Determinants of Carbon Emissions Considering the Electricity Trade Situation of Connected Countries and the Introduction of the Carbon Emission Trading System in Europe (유럽 내 탄소배출권거래제 도입에 따른 연결계통국가들의 전력교역 상황을 고려한 탄소배출량 결정요인분석)

  • Yoon, Kyungsoo;Hong, Won Jun
    • Environmental and Resource Economics Review
    • /
    • v.31 no.2
    • /
    • pp.165-204
    • /
    • 2022
  • This study organized data from 2000 to 2014 for 20 grid-connected countries in Europe and analyzed the determinants of carbon emissions with a panel GLS method that accounts for heteroscedasticity and autocorrelation (a rough Python analogue is sketched after this entry). The effect of introducing an ETS was considered by splitting the sample period at 2005, when the European emission trading system was introduced. Carbon emissions of individual countries were used as the dependent variable, and the explanatory variables were the proportion of generation by each source, the power self-sufficiency ratio of neighboring countries, power production of resource-holding countries, the concentration of power sources, total energy consumption per capita in the industrial sector, the tax on electricity, net electricity exports per capita, and national territory size per capita. According to the estimation results, the proportions of nuclear and renewable generation, the concentration of power sources, and national territory size per capita had a negative (-) effect on carbon emissions both before and after 2005. On the other hand, the proportion of coal generation, the power supply and demand rate of neighboring countries, power production of resource-holding countries, and total energy consumption per capita in the industrial sector had a positive (+) effect on carbon emissions. In addition, the proportion of gas generation had a negative (-) effect on carbon emissions, and the tax on electricity had a positive (+) effect; however, these effects were significant only before 2005. Net electricity exports per capita had a negative (-) effect on carbon emissions only after 2005. The results suggest macroscopic strategies for reducing carbon emissions toward green growth, including mid- to long-term power mix optimization measures that consider the electricity trade market and its role.
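
A loose Python analogue of the estimation setup, on synthetic data: the panel GLS used in the paper (in the style of Stata's xtgls with heteroskedastic, AR(1)-correlated errors) has no exact off-the-shelf Python equivalent, so this sketch pools the panel and uses statsmodels' GLSAR for the AR(1) part. All variables and coefficients below are invented.

```python
# Pooled GLS with AR(1) errors as a stand-in for the paper's panel GLS.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 20 * 15                                  # 20 countries x 15 years
X = sm.add_constant(rng.normal(size=(n, 3))) # e.g., nuclear/coal/gas shares
beta = np.array([1.0, -0.5, 0.8, 0.3])       # invented true coefficients

e = np.empty(n)                              # AR(1) error process
e[0] = rng.normal()
for t in range(1, n):
    e[t] = 0.6 * e[t - 1] + rng.normal()
y = X @ beta + e                             # stand-in for CO2 emissions

model = sm.GLSAR(y, X, rho=1)                # GLS with AR(1) error structure
results = model.iterative_fit(maxiter=10)    # alternate rho and beta updates
print(results.params, model.rho)
```

iterative_fit alternates between estimating the AR(1) coefficient from the residuals and re-running the GLS regression, which is the same feasible-GLS idea the paper's estimator relies on.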

Video Analysis System for Action and Emotion Detection by Object with Hierarchical Clustering based Re-ID (계층적 군집화 기반 Re-ID를 활용한 객체별 행동 및 표정 검출용 영상 분석 시스템)

  • Lee, Sang-Hyun;Yang, Seong-Hun;Oh, Seung-Jin;Kang, Jinbeom
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.89-106
    • /
    • 2022
  • Recently, the amount of video data collected from smartphones, CCTVs, black boxes, and high-definition cameras has increased rapidly, and with it the requirements for analysis and utilization. Due to the lack of skilled manpower to analyze videos in many industries, machine learning and artificial intelligence are actively used to assist human analysts, and demand for computer vision technologies such as object detection and tracking, action detection, emotion detection, and re-identification (Re-ID) has also increased rapidly. However, object detection and tracking faces many conditions that degrade performance, such as an object re-appearing after leaving the recording location, and occlusion. Accordingly, action and emotion detection models built on top of object detection and tracking also have difficulty extracting data for each object. In addition, deep learning architectures composed of multiple models suffer from performance degradation due to bottlenecks and a lack of optimization. In this study, we propose a video analysis system consisting of a YOLOv5-based DeepSORT object tracking model, a SlowFast-based action recognition model, a Torchreid-based Re-ID model, and AWS Rekognition, an emotion recognition service. The proposed model uses single-linkage hierarchical clustering-based Re-ID and several processing methods that maximize hardware throughput. It achieves higher accuracy than a re-identification model using simple metrics, offers near-real-time processing performance, and prevents tracking failures due to object departure and re-emergence, occlusion, and similar conditions. By continuously linking the action and facial emotion detection results of each object to the same identity, videos can be analyzed efficiently. The re-identification model extracts a feature vector from the bounding box of each object image detected by the tracking model in each frame, and applies single-linkage hierarchical clustering over the feature vectors from past frames to identify the same object when tracking fails (a condensed sketch of this step follows this entry). Through this process, an object that failed to track because of re-appearance or occlusion after leaving the scene can be re-tracked, so the action and facial emotion detection results of a newly recognized object can be linked to those of the object that appeared in the past. As a way to improve processing performance, we introduce a per-object bounding box queue and a feature queue, which reduce RAM requirements while maximizing GPU memory throughput. We also introduce the IoF (Intersection over Face) algorithm, which allows facial emotions recognized through AWS Rekognition to be linked with object tracking information. The academic significance of this study is that the two-stage re-identification model can run in real time, even in the costly setting of simultaneous action and facial emotion detection, through processing techniques rather than by sacrificing accuracy with simple metrics. The practical implication is that industrial fields that require action and facial emotion detection but struggle with object tracking failures can analyze videos effectively with the proposed model.
The proposed model, with its high re-identification accuracy and processing performance, can be used in fields such as intelligent monitoring, observation services, and behavioral or psychological analysis services, where integrating tracking information with extracted metadata creates great industrial and business value. In the future, to measure object tracking performance more precisely, experiments should be conducted with the MOT Challenge dataset, which is used by many international conferences. We will investigate the cases that the IoF algorithm cannot solve in order to develop a complementary algorithm, and we plan to conduct additional research applying this model to datasets from various fields related to intelligent video analysis.
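
A condensed sketch of the single-linkage Re-ID step, plus one plausible reading of the IoF metric (the abstract names IoF but does not define it, so the formula below, intersection area over face area, is our assumption). Thresholds and embeddings are invented.

```python
# Single-linkage Re-ID over appearance embeddings, and an IoF helper.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

def assign_identities(features, max_dist=0.3):
    """features: (n_detections, d) appearance embeddings, e.g., from Torchreid.
    Returns an identity label per detection; detections whose embeddings sit
    within max_dist (cosine) of any member of a cluster share an identity."""
    d = pdist(features, metric="cosine")   # pairwise appearance distances
    Z = linkage(d, method="single")        # single-linkage dendrogram
    return fcluster(Z, t=max_dist, criterion="distance")

def iof(face_box, person_box):
    """Fraction of the face box covered by the tracked person box; used to
    attach a recognized facial emotion to a track (definition assumed)."""
    ax1, ay1, ax2, ay2 = face_box
    bx1, by1, bx2, by2 = person_box
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    face_area = (ax2 - ax1) * (ay2 - ay1)
    return iw * ih / face_area if face_area > 0 else 0.0

feats = np.random.rand(6, 128)                  # dummy embeddings
print(assign_identities(feats))                 # identity label per detection
print(iof((10, 10, 30, 30), (0, 0, 100, 100)))  # 1.0: face fully inside box
```

Single linkage is a natural fit for re-identification here: a re-appearing track only needs one sufficiently close past embedding to be merged back into its old identity, rather than having to be close to a cluster centroid on average.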