• Title/Summary/Keyword: simulation system


The Study of Dose Distribution according to the Using Linac and Tomotherapy on Total Lymphnode Irradiation (선형가속기와 토모치료기를 이용한 전림프계의 방사선 치료시 선량분포에 관한 연구)

  • Kim, Youngjae;Seol, Gwanguk
    • Journal of the Korean Society of Radiology / v.7 no.4 / pp.285-291 / 2013
  • In this study, we compared and analyzed the dose distribution and the suitability of radiation therapy when different devices are used for TNI (total lymph node irradiation). The subjects were 15 patients (7 male, 8 female). CT simulation images of the 15 patients were acquired with a Somatom Sensation Open 16-channel scanner and transferred to each treatment planning system, Pinnacle Ver. 8.0 and the tomotherapy planning system, in which the tumor tissue and the normal tissues (whole lung, spinal cord, right kidney, left kidney) were delineated. The tumor prescription dose was set to 750 cGy, and dose conformity, normal-tissue absorbed dose, dose distribution, and DVHs were then compared. Statistical analysis was performed with SPSS Ver. 18.0 using a paired-sample test. The absorbed dose in the tumor tissue was $751.0{\pm}4.7cGy$ for the tomotherapy plan and $746.9{\pm}14.1cGy$ for the linac plan; the tomotherapy plan matched the prescribed tumor dose more closely than the linac plan, although the difference was not statistically significant (p>0.05). The absorbed doses in normal tissues were lower in the tomotherapy plan than in the linac plan, and these differences were statistically significant (p<0.05) except for the whole lung. In the DVH analysis, both tomotherapy and the linac were adequate for the tumor and normal tissues, but tomotherapy showed a better TER. In other words, judging from the absorbed dose in tumor and normal tissue, the dose distribution pattern, and the DVH, both devices are appropriate for radiation therapy in terms of TER. The linac offers a shorter treatment time (about 15-20 minutes) and an open treatment space, whereas tomotherapy can be uncomfortable for infant and pediatric patients; in such cases it may be reasonable to restrict treatment to linac-based therapy, while cooperative patients can be expected to show a better prognosis with tomotherapy-based radiation therapy.
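A minimal sketch of the paired comparison described above, assuming the "paired sample" analysis refers to a paired-sample t-test; the per-patient values below are synthetic placeholders drawn to match the reported means and standard deviations, not the study data:

```python
import numpy as np
from scipy import stats

# Synthetic per-patient tumor doses (cGy), for illustration only; the abstract
# reports 751.0 +/- 4.7 cGy (tomotherapy) and 746.9 +/- 14.1 cGy (linac), n = 15.
rng = np.random.default_rng(0)
tomo = rng.normal(751.0, 4.7, size=15)
linac = rng.normal(746.9, 14.1, size=15)

t_stat, p_value = stats.ttest_rel(tomo, linac)  # paired-sample t-test (as in SPSS)
print(f"mean difference = {np.mean(tomo - linac):.1f} cGy, p = {p_value:.3f}")
```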

Adaptive Lock Escalation in Database Management Systems (데이타베이스 관리 시스템에서의 적응형 로크 상승)

  • Chang, Ji-Woong;Lee, Young-Koo;Whang, Kyu-Young;Yang, Jae-Heon
    • Journal of KIISE:Databases / v.28 no.4 / pp.742-757 / 2001
  • Since database management systems (DBMSs) have limited lock resources, transactions requesting locks beyond the limit must be aborted. In the worst case, if such transactions are aborted repeatedly, the DBMS can become paralyzed, i.e., transactions execute but cannot commit. Lock escalation is considered a solution to this problem. However, existing lock escalation methods do not provide a complete solution. In this paper, we propose a new lock escalation method, adaptive lock escalation, that solves most of the problems. First, we propose a general model for lock escalation and present the concept of the unescalatable lock, which is the major cause of transaction aborts. Second, we propose the notions of semi lock escalation, lock blocking, and selective relief as mechanisms to control the number of unescalatable locks. We then propose the adaptive lock escalation method using these notions. Adaptive lock escalation reduces needless aborts and guarantees that the DBMS is not paralyzed under excessive lock requests. It also allows graceful degradation of performance under those circumstances. Third, through extensive simulation, we show that adaptive lock escalation outperforms existing lock escalation methods. The results show that, compared to the existing methods, adaptive lock escalation reduces the number of aborts and the average response time, and increases the throughput to a great extent. In particular, the number of concurrent transactions can be increased 16- to 256-fold. The contribution of this paper is significant in that it formally analyzes the role of lock escalation in lock resource management and identifies the detailed underlying mechanisms. Existing lock escalation methods rely on users or the system administrator to handle excessive lock requests. In contrast, adaptive lock escalation relieves users of this responsibility by providing graceful degradation and preventing system paralysis through automatic control of unescalatable locks. Thus, adaptive lock escalation can contribute to developing self-tuning DBMSs, which are drawing a lot of attention these days.
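For background on the mechanism being refined above, here is a minimal sketch of conventional, threshold-based lock escalation; the class, threshold values, and single-transaction bookkeeping are illustrative assumptions, and the paper's adaptive additions (semi lock escalation, lock blocking, selective relief) are not reproduced:

```python
from collections import defaultdict

ESCALATION_THRESHOLD = 1000   # per-table row-lock limit before escalation (illustrative)
MAX_LOCK_RESOURCES = 100_000  # global lock-table capacity (illustrative)

class LockManager:
    """Toy lock manager: escalates a transaction's row locks on a table to one
    table lock once they exceed ESCALATION_THRESHOLD, freeing lock resources."""

    def __init__(self):
        self.row_locks = defaultdict(set)   # (txn, table) -> set of locked row ids
        self.table_locks = set()            # (txn, table) pairs holding a table lock
        self.total_locks = 0

    def acquire_row_lock(self, txn, table, row_id):
        if (txn, table) in self.table_locks:
            return                          # already covered by the coarse table lock
        if self.total_locks >= MAX_LOCK_RESOURCES:
            raise RuntimeError("lock resources exhausted; transaction must abort")
        self.row_locks[(txn, table)].add(row_id)
        self.total_locks += 1
        if len(self.row_locks[(txn, table)]) > ESCALATION_THRESHOLD:
            self._escalate(txn, table)

    def _escalate(self, txn, table):
        # Trade many fine-grained locks for one coarse lock: lock memory is freed,
        # at the cost of reduced concurrency on the table.
        freed = self.row_locks.pop((txn, table))
        self.total_locks -= len(freed)
        self.table_locks.add((txn, table))
        self.total_locks += 1
```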


A Proposal for Korean armed forces preparing toward Future war: Examine the U.S. 'Mosaic Warfare' Concept (미래전을 대비한 한국군 발전방향 제언: 미국의 모자이크전 수행개념 고찰을 통하여)

  • Chang, Jin O;Jung, Jae-young
    • Maritime Security / v.1 no.1 / pp.215-240 / 2020
  • In 2017, the U.S. DARPA coined 'mosaic warfare' as a new way of warfighting. According to Timothy Grayson, director of DARPA's Strategic Technologies Office, mosaic warfare is a "system of systems" approach to warfighting designed around compatible "tiles" of capabilities, rather than uniquely shaped "puzzle pieces" that must be fitted into a specific slot in a battle plan in order for it to work. Before covering mosaic warfare theory and recent developments, this paper analyzes its background and several premises for better understanding. U.S. DoD officials acknowledge their forces' current vulnerability to China's A2/AD assets. Furthermore, the U.S. seeks complete military superiority even in other nations' territorial domains, including sea and air. Given its rapid combat-restoration capability and lower manpower casualties, the U.S. would be ready to endure a war of attrition that requires massive resources. The core concept of mosaic warfare is "decision-centric warfare". To embody this idea, it creates adaptability for U.S. forces and complexity or uncertainty for the enemy through the rapid composition and recomposition of a more disaggregated U.S. military force using human command and machine control. This provides more options to friendly forces and eventually collapses the adversary's OODA loop. An adaptable kill web, composable force packages, A.I., and a context-centric C3 architecture are crucial elements to implement and carry out mosaic warfare. Recently, CSBA presented a compelling assessment of a mosaic warfare simulation. In this wargame, there were significant differences between the traditional and mosaic teams. The mosaic team was able to mount more simultaneous actions, creating additional complexity for adversaries and overwhelming their decision-making, with fewer friendly human casualties. It also increased the speed of the U.S. force's decision-making, enabling commanders to better employ tempo. Consequently, this article identifies and suggests implications for the Korean armed forces. First of all, Korea needs to examine and develop 'mosaic warfare' in terms of its own security circumstances. In preparation for future warfare, an overall review of force structure and architecture is required so that force elements can be composed regardless of domain. Given insufficient defense resources and budget, "choice" and "concentration" are also essential. Finally, the neighboring countries' development of future-war concepts should be watched carefully.


Evaluation of the Positional Uncertainty of a Liver Tumor using 4-Dimensional Computed Tomography and Gated Orthogonal Kilovolt Setup Images (사차원전산화단층촬영과 호흡연동 직각 Kilovolt 준비 영상을 이용한 간 종양의 움직임 분석)

  • Ju, Sang-Gyu;Hong, Chae-Seon;Park, Hee-Chul;Ahn, Jong-Ho;Shin, Eun-Hyuk;Shin, Jung-Suk;Kim, Jin-Sung;Han, Young-Yih;Lim, Do-Hoon;Choi, Doo-Ho
    • Radiation Oncology Journal / v.28 no.3 / pp.155-165 / 2010
  • Purpose: In order to evaluate the positional uncertainty of internal organs during radiation therapy for liver cancer, we measured inter- and intra-fractional variation of the tumor position and the tidal amplitude using 4-dimensional computed tomography (4DCT) images and gated orthogonal kilovolt (kV) setup images taken at every treatment with the on-board imaging (OBI) and real-time position management (RPM) systems. Materials and Methods: Twenty consecutive patients who underwent 3-dimensional (3D) conformal radiation therapy for liver cancer participated in this study. All patients received a 4DCT simulation with an RT16 scanner and an RPM system. Lipiodol, which had accumulated near the target volume after transarterial chemoembolization, or the diaphragm was chosen as a surrogate for evaluating the positional difference of internal organs. Two reference orthogonal (anterior and lateral) digitally reconstructed radiograph (DRR) images were generated using the CT image sets of the 0% and 50% respiratory phases. The maximum tidal amplitude of the surrogate was measured from the 3D conformal treatment plan. After setting the patient up with laser markings on the skin, orthogonal gated setup images at the 50% respiratory phase were acquired at each treatment session with OBI and registered to the reference DRR images by each beam center. Online inter-fractional variation was determined with the surrogate. After adjusting for the patient setup error, orthogonal setup images at the 0% and 50% respiratory phases were obtained and the tidal amplitude of the surrogate was measured. The measured tidal amplitude was compared with the 4DCT data. For the evaluation of intra-fractional variation, an orthogonal gated setup image at the 50% respiratory phase was acquired promptly after treatment and compared with the same image taken just before treatment. In addition, a statistical analysis for the quantitative evaluation was performed. Results: The medians of inter-fractional variation for the twenty patients were 0.00 cm (range, -0.50 to 0.90 cm), 0.00 cm (range, -2.40 to 1.60 cm), and 0.00 cm (range, -1.10 to 0.50 cm) in the X (transaxial), Y (superior-inferior), and Z (anterior-posterior) directions, respectively. Significant inter-fractional variations over 0.5 cm were observed in four patients. In addition, the median tidal amplitude differences between 4DCT and the gated orthogonal setup images were -0.05 cm (range, -0.83 to 0.60 cm), -0.15 cm (range, -2.58 to 1.18 cm), and -0.02 cm (range, -1.37 to 0.59 cm) in the X, Y, and Z directions, respectively. Large differences of over 1 cm were detected in 3 patients in the Y direction, while differences of more than 0.5 cm but less than 1 cm were observed in 5 patients in the Y and Z directions. The medians of intra-fractional variation were 0.00 cm (range, -0.30 to 0.40 cm), -0.03 cm (range, -1.14 to 0.50 cm), and 0.05 cm (range, -0.30 to 0.50 cm) in the X, Y, and Z directions, respectively. Significant intra-fractional variation of over 1 cm was observed in 2 patients in the Y direction. Conclusion: Gated setup images provided clear image quality for the detection of organ motion without motion artifacts. Significant intra- and inter-fractional variations and tidal amplitude differences between 4DCT and the gated setup images were detected in some patients during the radiation treatment period and should therefore be considered when setting the target margin. Monitoring of positional uncertainty and an adaptive feedback system can enhance the accuracy of treatment.

Dose verification for Gated Volumetric Modulated Arc Therapy according to Respiratory period (호흡연동 용적변조 회전방사선치료에서 호흡주기에 따른 선량전달 정확성 검증)

  • Jeon, Soo Dong;Bae, Sun Myung;Yoon, In Ha;Kang, Tae Young;Baek, Geum Mun
    • The Journal of Korean Society for Radiation Therapy / v.26 no.1 / pp.137-147 / 2014
  • Purpose: The purpose of this study is to verify the accuracy of dose delivery according to the patient's breathing cycle in gated volumetric modulated arc therapy (VMAT). Materials and Methods: A TrueBeam $STx^{TM}$ (Varian Medical Systems, Palo Alto, CA) was used in this experiment. Computed tomography (CT) images acquired with a RANDO phantom (Alderson Research Laboratories Inc., Stamford, CT, USA) were used in a computerized treatment planning system (Eclipse 10.0, Varian, USA) to create VMAT plans with a 10 MV FFF beam, prescriptions of 1,500 cGy/fx (cases 1, 2, 3) and 220 cGy/fx (cases 4, 5, 6), and a dose rate of 1,200 MU/min. Regular respiratory periods of 1.5, 2.5, 3.5, and 4.5 sec and patient respiratory periods of 2.2 and 3.5 sec were reproduced with the $QUASAR^{TM}$ Respiratory Motion Phantom (Modus Medical Devices Inc.), and the beam was gated in phase mode over the 30-70% range. The delivered doses were measured under the respective respiratory conditions with a 2-dimensional ion chamber array detector (I'mRT MatriXX, IBA Dosimetry, Germany) and a MultiCube phantom (IBA Dosimetry, Germany), and the gamma pass rates (3 mm, 3%) were compared with an IMRT analysis program (OmniPro I'mRT system software Version 1.7b, IBA Dosimetry, Germany). Results: The gamma pass rates of cases 1, 2, 3, 4, 5, and 6 were 100.0, 97.6, 98.1, 96.3, 93.0, and 94.8% at a regular respiratory period of 1.5 sec; 98.8, 99.5, 97.5, 99.5, 98.3, and 99.6% at 2.5 sec; 99.6, 96.6, 97.5, 99.2, 97.8, and 99.1% at 3.5 sec; and 99.4, 96.3, 97.2, 99.0, 98.0, and 99.3% at 4.5 sec. When patient respiration was reproduced, the pass rates were 97.7, 95.4, 96.2, 98.9, 96.2, and 98.4% at an average respiratory period of 2.2 sec, and 97.3, 97.5, 96.8, 100.0, 99.3, and 99.8% at 3.5 sec. Conclusion: The experiment showed clinically reliable gamma pass rates of 95% or more when regular breathing periods of 2.5 sec or longer and the patients' breathing were reproduced. Although cases 5 and 6 showed 93.0% and 94.8% at a regular breathing period of 1.5 sec, accurate dose delivery can be expected under most respiratory conditions, because an analysis of 100 patients' respiratory periods showed that no one sustained a respiration of 1.5 sec. However, pretreatment dose verification should still be performed, since errors due to extremely short respiratory periods cannot be excluded, and breathing training at simulation and careful monitoring are necessary so that patients maintain stable breathing. In this way, more reliable and accurate treatments can be administered.
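For reference, the 3 mm/3% gamma criterion used above can be evaluated with a brute-force search like the sketch below (a simplified global gamma analysis in the style of Low et al.; the array names, dose cutoff, and grid handling are illustrative assumptions, and this is not the OmniPro I'mRT implementation):

```python
import numpy as np

def gamma_pass_rate(ref, meas, spacing_mm, dta_mm=3.0, dd_pct=3.0, cutoff=0.10):
    """Brute-force global 2D gamma analysis.

    ref, meas  : planned and measured dose on the same 2D grid (numpy arrays)
    spacing_mm : isotropic grid spacing in mm
    Returns the percentage of evaluated reference points with gamma <= 1.
    """
    dd = dd_pct / 100.0 * ref.max()                  # global dose-difference criterion
    ys, xs = np.meshgrid(np.arange(ref.shape[0]),
                         np.arange(ref.shape[1]), indexing="ij")
    passed = total = 0
    for i in range(ref.shape[0]):
        for j in range(ref.shape[1]):
            if ref[i, j] < cutoff * ref.max():       # skip the low-dose region
                continue
            dist2 = ((ys - i) ** 2 + (xs - j) ** 2) * spacing_mm ** 2
            dose2 = (meas - ref[i, j]) ** 2
            gamma_min = np.sqrt((dist2 / dta_mm ** 2 + dose2 / dd ** 2).min())
            passed += gamma_min <= 1.0
            total += 1
    return 100.0 * passed / total
```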

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.107-122 / 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-varying characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon observed in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy. Recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which is composed of 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, for 1,487 trading days; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as test data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility. The polynomial kernel shows exceptionally low forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's volatility is forecast to increase, buy volatility today; if it is forecast to decrease, sell volatility today; if the forecast direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because historical volatility values themselves cannot be traded, but the simulation results are still meaningful since the Korea Exchange introduced a volatility futures contract in November 2014 that traders can use. The trading systems with SVR-based GARCH models show higher returns than the MLE-based GARCH systems in the test period. The profitable-trade percentages of the MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of the SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return and SVR-based symmetric S-GARCH shows a +526.4% return. MLE-based asymmetric E-GARCH shows a -72% return and SVR-based asymmetric E-GARCH shows a +245.6% return. MLE-based asymmetric GJR-GARCH shows a -98.7% return and SVR-based asymmetric GJR-GARCH shows a +126.3% return. The linear kernel shows higher trading returns than the radial kernel. The best performance of the SVR-based IVTS is +526.4%, versus +150.2% for the MLE-based IVTS. The SVR-based GARCH IVTS also shows a higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage. The IVTS trading performance is also unrealistic since we use historical volatility values as the trading object. Exact forecasting of stock market volatility is essential for real trading as well as for asset pricing models. Further studies on other machine learning-based GARCH models can provide better information for stock market investors.
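The entry rules above map directly onto a simple signal function. The sketch below is one possible reading of those rules, assuming tomorrow's forecast is compared with today's volatility level; the pandas interface, function names, and the naive P&L on historical volatility are illustrative and not the paper's implementation:

```python
import numpy as np
import pandas as pd

def ivts_positions(realized_vol: pd.Series, forecast_vol: pd.Series) -> pd.Series:
    """+1 = long volatility, -1 = short volatility, carried forward when the
    forecast direction does not change (the 'hold' rule in the abstract)."""
    signal = np.sign(forecast_vol.shift(-1) - realized_vol)  # tomorrow's forecast vs. today
    return signal.replace(0, np.nan).ffill().fillna(0)

def ivts_return(realized_vol: pd.Series, positions: pd.Series) -> float:
    # Naive P&L that treats the historical volatility value itself as the traded
    # object, mirroring the simplification acknowledged in the abstract.
    vol_change = realized_vol.diff().shift(-1)               # change realized tomorrow
    return float((positions * vol_change).sum())
```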

An Ontology Model for Public Service Export Platform (공공 서비스 수출 플랫폼을 위한 온톨로지 모형)

  • Lee, Gang-Won;Park, Sei-Kwon;Ryu, Seung-Wan;Shin, Dong-Cheon
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.149-161 / 2014
  • The export of domestic public services to overseas markets contains many potential obstacles, stemming from different export procedures, target services, and socio-economic environments. In order to alleviate these problems, a business incubation platform, as an open business ecosystem, can be a powerful instrument to support the decisions taken by participants and stakeholders. In this paper, we propose an ontology model and its implementation processes for a business incubation platform with an open and pervasive architecture to support public service exports. For the conceptual model of the platform ontology, export case studies are used for requirements analysis. The conceptual model shows the basic structure, with vocabulary and its meaning, the relationships between ontologies, and key attributes. For the implementation and test of the ontology model, the logical structure is edited using the Protégé editor. The core engine of the business incubation platform is the simulator module, where the various contexts of export businesses should be captured, defined, and shared with other modules through ontologies. It is well known that an ontology, in which concepts and their relationships are represented using a shared vocabulary, is an efficient and effective tool for organizing meta-information to develop structural frameworks in a particular domain. The proposed model consists of five ontologies derived from a requirements survey of major stakeholders and their operational scenarios: service, requirements, environment, enterprise, and country. The service ontology contains several components that can find and categorize public services through a case analysis of public service exports. Key attributes of the service ontology are composed of categories including objective, requirements, activity, and service. The objective category, which has sub-attributes including operational body (organization) and user, acts as a reference to search and classify public services. The requirements category relates to the functional needs at a particular phase of system (service) design or operation; its sub-attributes are user, application, platform, architecture, and social overhead. The activity category represents business processes during the operation and maintenance phase, with sub-attributes including facility, software, and project unit. The service category, with sub-attributes such as target, time, and place, acts as a reference to sort and classify the public services. The requirements ontology is derived from the basic and common components of public services and target countries. The key attributes of the requirements ontology are business, technology, and constraints: business requirements represent the needs of processes and activities for public service export; technology represents the technological requirements for the operation of public services; and constraints represent the business laws, regulations, or cultural characteristics of the target country. The environment ontology is derived from case studies of target countries for public service operation. Key attributes of the environment ontology are user, requirements, and activity. A user includes stakeholders in public services, from citizens to operators and managers; the requirements attribute represents the managerial and physical needs during operation; and the activity attribute represents business processes in detail. The enterprise ontology is introduced from a previous study, and its attributes are activity, organization, strategy, marketing, and time. The country ontology is derived from the demographic and geopolitical analysis of the target country, and its key attributes are economy, social infrastructure, law, regulation, customs, population, location, and development strategies. The priority list of target services for a certain country and/or the priority list of target countries for a certain public service are generated by a matching algorithm. These lists are used as input seeds to simulate consortium partners and the government's policies and programs. In the simulation, the environmental differences between Korea and the target country can be customized through a gap analysis and a work-flow optimization process. When the process gap between Korea and the target country is too large for a single corporation to cover, a consortium is considered an alternative choice, and various alternatives are derived from the capability index of enterprises. For financial packages, a mix of various foreign aid funds can be simulated during this stage. It is expected that the proposed ontology model and the business incubation platform can be used by various participants in the public service export market. They could be especially beneficial to small and medium businesses that have relatively fewer resources and less experience with public service export. We also expect that the open and pervasive service architecture in a digital business ecosystem will help stakeholders find new opportunities through information sharing and collaboration on business processes.
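For orientation, the five ontologies and the key attributes enumerated above can be collected into a compact structure. The sketch below simply restates the abstract's lists; the dictionary form and identifier spellings are mine, not the Protégé model from the paper:

```python
# Five ontologies of the proposed platform and their key attributes, as listed
# in the abstract (identifier spellings are illustrative).
PLATFORM_ONTOLOGIES = {
    "service":      ["objective", "requirements", "activity", "service"],
    "requirements": ["business", "technology", "constraints"],
    "environment":  ["user", "requirements", "activity"],
    "enterprise":   ["activity", "organization", "strategy", "marketing", "time"],
    "country":      ["economy", "social_infrastructure", "law", "regulation",
                     "customs", "population", "location", "development_strategies"],
}

def attributes_of(ontology: str) -> list[str]:
    """Look up the key attributes of one ontology, e.g. attributes_of('country')."""
    return PLATFORM_ONTOLOGIES[ontology]
```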

Fast Join Mechanism that considers the switching of the tree in Overlay Multicast (오버레이 멀티캐스팅에서 트리의 스위칭을 고려한 빠른 멤버 가입 방안에 관한 연구)

  • Cho, Sung-Yean;Rho, Kyung-Taeg;Park, Myong-Soon
    • The KIPS Transactions:PartC / v.10C no.5 / pp.625-634 / 2003
  • More than a decade after its initial proposal, deployment of IP multicast has been limited due to problems with traffic control in multicast routing, multicast address allocation on the global Internet, reliable multicast transport techniques, and so on. Recently, with the increase of multicast application services such as Internet broadcasting and real-time security information services, overlay multicast has been developed as a new Internet multicast technology. In this paper, we describe an overlay multicast protocol and propose a fast join mechanism that considers switching of the tree. To find a potential parent, the existing search algorithm descends the tree from the root one level at a time, which causes long join latency. It also tries to select the nearest node as a potential parent, but the node's degree limit may prevent the nearest node from being selected; as a result, the generated tree has low efficiency. To reduce the join latency and improve the efficiency of the tree, we propose searching two levels of the tree at a time. In this method, a node forwards the join request message to its own children, so there is no overhead to maintain the tree in ordinary times; when a join request arrives, the increased number of search messages reduces the join latency, and searching more nodes helps construct more efficient trees. In order to evaluate the performance of our fast join mechanism, we measure metrics such as the search latency, the number of searched nodes, and the number of switches as functions of the number of members and the degree limit. The simulation results show that the performance of our mechanism is superior to that of the existing mechanism.
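A minimal sketch of the two-level parent search described above is given below; it assumes a distance metric from the joining host and uses illustrative class and function names (the protocol's actual messages, degree limits, and switching logic are not reproduced):

```python
class Node:
    def __init__(self, name, degree_limit=4):
        self.name = name
        self.degree_limit = degree_limit
        self.children = []

    def has_room(self):
        return len(self.children) < self.degree_limit

def find_parent(root, distance):
    """Descend two levels at a time toward the nearest candidate parent.

    distance: callable mapping a Node to the (assumed) network distance
    from the joining host; returns the chosen parent or None.
    """
    current = root
    while True:
        # Candidates are the current node's children and grandchildren
        # (two levels per step, instead of one as in the baseline search).
        two_levels = current.children + [g for c in current.children for g in c.children]
        candidates = [n for n in two_levels if n.has_room()]
        if current.has_room():
            candidates.append(current)
        if not candidates:
            return None                      # no parent with spare degree found here
        best = min(candidates, key=distance)
        if best is current:
            return current                   # nothing closer below; attach here
        if not best.children:
            return best                      # leaf candidate: nothing further to search
        current = best
```

Usage would look like `find_parent(root, lambda n: measured_rtt[n.name])`, with `measured_rtt` standing in for whatever distance estimate the joining host has.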

Exploring the Temporal Relationship Between Traffic Information Web/Mobile Application Access and Actual Traffic Volume on Expressways (웹/모바일-어플리케이션 접속 지표와 TCS 교통량의 상관관계 연구)

  • RYU, Ingon;LEE, Jaeyoung;CHOI, Keechoo;KIM, Junghwa;AHN, Soonwook
    • Journal of Korean Society of Transportation / v.34 no.1 / pp.1-14 / 2016
  • In recent years, the Internet has become accessible anytime and anywhere to anyone with a smartphone. This has made travel information access more convenient in both the pre-trip and en-route phases. The main objective of this study is to conduct stationarity tests on traffic information web/mobile application access indexes and to analyze the relationship between these indexes and actual traffic volume from the TCS (Toll Collection System) on expressways, in order to characterize the search behavior for expressway-related travel information. The key findings of this study are as follows. First, the results of the ADF test and PP test confirm that the web/mobile application access indexes by time period satisfy stationarity conditions even without log or differential transformation. Second, the Pearson correlation test shows a strong positive correlation between the web/mobile application access indexes and expressway entry and exit traffic volume. In contrast, truck entry traffic volume from TCS has no significant correlation with the web/mobile application access indexes. Third, the time-gap relationship between the time-series variables (i.e., concurrent, leading, and lagging) was analyzed by cross-correlation tests. The results indicate that mobile application access leads web access, and the number of mobile application executions is concurrent with all web access indexes. Lastly, no web/mobile application access index led expressway entry traffic volume; the highest correlations were observed between the webpage view/visitor/new visitor/repeat visitor/application execution counts and expressway entry volume with a lag of one hour. It is expected that specific aspects of individual travel behavior, such as route conversion time and ratio, can be predicted if the data are subdivided by time period and area and traffic information users' locations are utilized.
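The analysis steps named above (ADF stationarity test, Pearson correlation, lagged cross-correlation) can be reproduced on any pair of hourly series with standard tooling; the sketch below is a minimal, hypothetical version in which the series names, lag range, and print-based reporting are illustrative (it assumes the two series are aligned hourly and free of missing values):

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from scipy.stats import pearsonr

def check_relationship(access_index: pd.Series, entry_volume: pd.Series, max_lag: int = 3):
    """ADF stationarity test, Pearson correlation, and lagged cross-correlation."""
    adf_stat, p_value, *_ = adfuller(access_index.dropna())
    print(f"ADF statistic = {adf_stat:.3f}, p-value = {p_value:.4f}")  # p < 0.05 -> stationary

    r, p = pearsonr(access_index, entry_volume)
    print(f"Pearson r = {r:.3f} (p = {p:.4f})")

    # Cross-correlation: a positive lag means the access index leads traffic volume.
    for lag in range(-max_lag, max_lag + 1):
        r_lag = access_index.corr(entry_volume.shift(-lag))
        print(f"lag = {lag:+d} hour(s): r = {r_lag:.3f}")
```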

Effects of Encapsulation Layer on Center Crack and Fracture of Thin Silicon Chip using Numerical Analysis (봉지막이 박형 실리콘 칩의 파괴에 미치는 영향에 대한 수치해석 연구)

  • Choa, Sung-Hoon;Jang, Young-Moon;Lee, Haeng-Soo
    • Journal of the Microelectronics and Packaging Society / v.25 no.1 / pp.1-10 / 2018
  • Recently, there has been rapid development in the field of flexible electronic devices, such as organic light-emitting diodes (OLEDs), organic solar cells, and flexible sensors. An encapsulation process is added to protect the flexible electronic devices from exposure to oxygen and moisture in the air. Using numerical simulation, we investigated the effects of the encapsulation layer on the mechanical stability of the silicon chip, especially the fracture behavior of a center crack in a multi-layer package under various loading conditions. The multi-layer package is categorized into two types: a wide chip model, in which the chip is wide and the encapsulation layer covers only the chip, and a narrow chip model, in which the encapsulation layer covers both the substrate and a chip that is narrower than the substrate. In the wide chip model, where the external load acts directly on the chip, a stiff encapsulation layer enhanced the crack resistance of the thin chip as the thickness of the encapsulation layer increased, regardless of loading conditions. In contrast, in the narrow chip model under external tensile strain, a stiff encapsulation layer reduced the crack resistance of the thin chip. This is because, in the narrow chip model, the external load is transferred to the chip through the encapsulation layer, so only a small load acts on the chip when the encapsulation layer is compliant. When a bending moment acts on the narrow chip model, thin and thick encapsulation layers show opposite results, since the neutral axis moves toward the cracked chip as the encapsulation layer becomes thicker and the load acting on the chip consequently decreases. The present study is expected to provide practical design guidance for enhancing the durability and fracture performance of the silicon chip in a multi-layer package with an encapsulation layer.