• Title/Summary/Keyword: Model-Based T&E System


Wound Healing Potential of Antibacterial Microneedles Loaded with Green Tea

  • Park, So Young;Lee, Hyun Uk;Kim, Gun Hwa;Park, Edmond Changkyun;Han, Seung Hyun;Lee, Jeong Gyu;Kim, Dong Lak;Lee, Jouhahn
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2014.02a
    • /
    • pp.411.1-411.1
    • /
    • 2014
  • This study evaluates the utility of an antibacterial microneedle composed of green tea extract (GT) and hyaluronic acid (HA) for the efficient delivery of GT. These microneedles have the potential to be a patient-friendly alternative to conventional sustained drug release. In this study, a mold-based fabrication technique was used to produce GT/HA microneedles with a maximum area of ~60 mm^2 and antibacterial properties for transdermal drug delivery systems. Fourier transform infrared (FTIR) spectrometry was carried out to observe potential modifications in the microneedles when incorporated with GT. The degradation rate of GT in GT/HA microneedles was controlled simply by adjusting the HA composition. The effects of different ratios of GT in the HA microneedles were determined by measuring the release properties. In HA microneedles loaded with 70% GT (GT70), a continuously higher release rate was sustained for 72 h. In vitro cytotoxicity assays demonstrated that GT/HA microneedles are not generally cytotoxic to Chinese hamster ovary cells (CHO-K1), human embryonic kidney cells (293T), or mouse muscle cells (C2C12) treated for 12 and 24 h. Antimicrobial activity of the GT/HA microneedles was demonstrated by ~95% growth reduction of Gram-negative [Escherichia coli (E. coli), Pseudomonas putida (P. putida) and Salmonella typhimurium (S. typhimurium)] and Gram-positive bacteria [Staphylococcus aureus (S. aureus) and Bacillus subtilis (B. subtilis)] with GT70. Furthermore, GT/HA microneedles reduced bacterial growth at infected skin wound sites and improved the skin wound healing process in a rat model.


Accelerated Life Evaluation of Drive Shaft Using Vehicle Load Spectrum Modeling (차량 부하 스펙트럼 모델링을 이용한 구동축의 가속 수명 평가)

  • Kim, Do Sik;Lee, Geun Ho;Kang, E-Sok
    • Transactions of the KSME C: Technology and Education
    • /
    • v.5 no.2
    • /
    • pp.115-126
    • /
    • 2017
  • This paper proposes an accelerated life evaluation of the drive shaft for power train parts of special purpose vehicles. Real load data measured under usage-level driving conditions are needed for the life evaluation of power train parts, but in many cases of special purpose vehicles such load spectrum data cannot be obtained. In this paper, therefore, the road load spectrum data for evaluation are created by modeling and simulation based on vehicle data and the special road conditions. The inverse power model is used for the accelerated life test, and the equivalent torque of the load spectrum is obtained using Miner's rule. This paper also proposes a calibrated accelerated life test method for the drive shaft. The fatigue test is performed at three stress levels; the lifetime at the normal stress level is predicted by extrapolation and verified through comparison of the experimental results and the load spectrum data.
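The two analytic steps named above, collapsing a load spectrum into a damage-equivalent torque via Miner's rule and relating stress levels via the inverse power model, can be sketched in a few lines. This is a minimal illustration, not the paper's code: the S-N slope exponent `m`, the power-law exponent `n`, and the load spectrum values are hypothetical.

```python
def equivalent_torque(spectrum, m):
    """Collapse a load spectrum into a single damage-equivalent torque
    using Miner's rule with S-N slope exponent m.
    spectrum: list of (torque, cycle_count) pairs."""
    total_cycles = sum(n for _, n in spectrum)
    damage_sum = sum(n * t ** m for t, n in spectrum)
    return (damage_sum / total_cycles) ** (1.0 / m)

def acceleration_factor(stress_test, stress_use, n):
    """Inverse power model: ratio of use-level life to test-level life
    when life scales as 1 / stress**n."""
    return (stress_test / stress_use) ** n

# Hypothetical load spectrum: (torque in N*m, cycle count)
spectrum = [(400.0, 1_000_000), (800.0, 50_000), (1200.0, 2_000)]
t_eq = equivalent_torque(spectrum, m=6.0)     # m = 6 is an assumed S-N slope
af = acceleration_factor(1200.0, t_eq, n=6.0)  # testing at the peak torque
```

A test run at the elevated torque then lasts `af` times shorter than service at the equivalent torque, which is the basis for extrapolating the normal-stress lifetime.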

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since the 1987 Black Monday crash, stock market prices have become very complex and noisy. Recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH estimation process and compares it with the MLE-based process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial and radial. We analyzed the suggested models with the KOSPI 200 Index, which is constituted by 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1487 observations; 1187 days were used to train the suggested GARCH models and the remaining 300 days served as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for asymmetric GARCH models such as E-GARCH or GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis.
Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in KOSPI 200 Index return volatility forecasting, although the polynomial kernel function shows exceptionally low forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's volatility is forecast to increase, buy volatility today; if it is forecast to decrease, sell volatility today; if the forecasted direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because historical volatility values themselves cannot be traded, but our simulation results are still meaningful since the Korea Exchange introduced volatility futures contracts in November 2014 that traders can trade. The trading systems with SVR-based GARCH models show higher returns than MLE-based GARCH in the testing period: profitable-trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return and SVR-based symmetric S-GARCH +526.4%; MLE-based asymmetric E-GARCH shows -72% and SVR-based asymmetric E-GARCH +245.6%; MLE-based asymmetric GJR-GARCH shows -98.7% and SVR-based asymmetric GJR-GARCH +126.3%. The linear kernel function shows higher trading returns than the radial kernel function. The best performance of SVR-based IVTS is +526.4%, against +150.2% for MLE-based IVTS; SVR-based GARCH IVTS also shows higher trading frequency. This study has some limitations: our models are based solely on SVR, and other artificial intelligence models should be examined in search of better performance; and we do not consider costs incurred in the trading process, including brokerage commissions and slippage costs.
Moreover, the IVTS trading performance is not fully realistic, since historical volatility values are used as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine learning-based GARCH models can provide better information for stock market investors.
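The core pieces of the abstract, the GARCH(1,1) conditional variance recursion and the IVTS entry rule, can be sketched minimally as follows. This is an illustration, not the paper's code: the return series and the omega/alpha/beta values are hypothetical, and the actual estimation step (MLE or SVR) is omitted.

```python
import math

def garch11_variances(returns, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1) model:
       h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}."""
    h = [omega / (1.0 - alpha - beta)]  # start from the unconditional variance
    for r in returns[:-1]:
        h.append(omega + alpha * r * r + beta * h[-1])
    return h

def ivts_signal(vol_forecast_tomorrow, vol_today):
    """IVTS entry rule from the abstract: buy volatility when it is forecast
    to rise, sell when it is forecast to fall, otherwise hold."""
    if vol_forecast_tomorrow > vol_today:
        return "buy"
    if vol_forecast_tomorrow < vol_today:
        return "sell"
    return "hold"

# Hypothetical daily returns and parameter values, for illustration only
returns = [0.010, -0.020, 0.015, 0.000, 0.012]
h = garch11_variances(returns, omega=1e-6, alpha=0.10, beta=0.85)
vols = [math.sqrt(v) for v in h]  # volatility forecasts fed into ivts_signal
```

In a real pipeline the parameters would come from MLE or from the SVR regression the paper proposes, and the signal would be applied to tradable volatility instruments rather than historical values.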

A Systematic Approach Of Construction Management Based On Last Planner System And Its Implementation In The Construction Industry

  • Hussain, SM Abdul Mannan;Sekhar, Dr.T.Seshadri;Fatima, Asra
    • Journal of Construction Engineering and Project Management
    • /
    • v.5 no.2
    • /
    • pp.11-15
    • /
    • 2015
  • The Last Planner System (LPS) has been implemented on construction projects to increase work flow reliability, a precondition for project performance against productivity and progress targets. The LPS encompasses four tiers of planning processes: master scheduling, phase scheduling, lookahead planning, and commitment/weekly work planning. This research highlights deficiencies in the current implementation of LPS, including poor lookahead planning, which results in poor linkage between weekly work plans and the master schedule. This poor linkage undermines the ability of the weekly work planning process to select for execution tasks that are critical to project success. As a result, percent plan complete (PPC) becomes a weak indicator of project progress. The purpose of this research is to improve lookahead planning (the bridge between weekly work planning and master scheduling), improve PPC, and improve the selection of tasks that are critical to project success by strengthening the link between Should, Can, Will, and Did (components of the LPS), thereby rendering PPC a better indicator of project progress. The research employs the case study method to describe deficiencies in the current implementation of the LPS and to suggest guidelines for a better application of LPS in general and lookahead planning in particular. It then introduces an analytical simulation model to analyze the lookahead planning process. This is done by examining the impact on PPC of increasing two lookahead planning performance metrics: tasks anticipated (TA) and tasks made ready (TMR). Finally, the research investigates the importance of the lookahead planning functions: identification and removal of constraints, task breakdown, and operations design. The research findings confirm the positive impact of improving lookahead planning (i.e., TA and TMR) on PPC.
It also recognizes the need to perform lookahead planning differently for three types of work involving different levels of uncertainty: stable work, medium-uncertainty work, and highly emergent work. The research confirms the LPS rules for practice, specifically the need to plan in greater detail as the time to perform the work approaches. It highlights the role of LPS as a production system that incorporates deliberate planning (predetermined and optimized) and situated planning (flexible and adaptive). Finally, the research presents recommendations for production planning improvements in three areas: process-related (suggesting guidelines for practice), technical (highlighting issues with current software programs and advocating the inclusion of collaborative planning capability), and organizational (suggesting transitional steps when applying the LPS).
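The metrics this study manipulates (PPC, TA, TMR) are simple ratios over task sets. A minimal sketch, with hypothetical task names and simplified set-based definitions that approximate the metrics described above:

```python
def ppc(completed, planned):
    """Percent Plan Complete: share of weekly-plan tasks actually finished."""
    return 100.0 * len(completed & planned) / len(planned)

def tasks_anticipated(weekly_plan, lookahead):
    """TA: share of this week's tasks that already appeared in the lookahead plan."""
    return 100.0 * len(weekly_plan & lookahead) / len(weekly_plan)

def tasks_made_ready(made_ready, lookahead):
    """TMR: share of lookahead tasks whose constraints were removed in time."""
    return 100.0 * len(made_ready & lookahead) / len(lookahead)

# Hypothetical task sets for one planning cycle
lookahead = {"pour slab", "erect frame", "install ducts", "rough-in wiring"}
weekly_plan = {"pour slab", "erect frame", "install ducts"}
completed = {"pour slab", "erect frame"}
```

Tracking these ratios over successive cycles is what lets the simulation model examine how raising TA and TMR feeds through into PPC.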

A Study on the Influence of IT Education Service Quality on Educational Satisfaction, Work Application Intention, and Recommendation Intention: Focusing on the Moderating Effects of Learner Position and Participation Motivation (IT교육 서비스품질이 교육만족도, 현업적용의도 및 추천의도에 미치는 영향에 관한 연구: 학습자 직위 및 참여동기의 조절효과를 중심으로)

  • Kang, Ryeo-Eun;Yang, Sung-Byung
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.169-196
    • /
    • 2017
  • The fourth industrial revolution represents a revolutionary change in the business environment and its ecosystem through the fusion of information technology (IT) and other industries. In line with these recent changes, the Ministry of Employment and Labor of South Korea announced 'the Fourth Industrial Revolution Leader Training Program,' which covers five key support areas: (1) smart manufacturing, (2) the Internet of Things (IoT), (3) big data including artificial intelligence (AI), (4) information security, and (5) bio innovation. This program gives a glimpse of the South Korean government's effort and willingness to produce leading human resources with advanced IT knowledge for various fusion technology-related and newly emerging industries. To nurture excellent IT manpower in preparation for the fourth industrial revolution, the role of educational institutions capable of providing high-quality IT education services is of utmost importance. These days, however, most IT educational institutions have had difficulty providing customized IT education services that meet the needs of consumers (i.e., learners) without breaking away from the traditional framework of supplier-oriented education services. Previous studies have found that providing customized, learner-centered education services leads to high learner satisfaction, and that higher satisfaction increases not only task performance and the possibility of business application but also learners' recommendation intention. However, since research has not yet comprehensively considered both the antecedent and consequent factors of learner satisfaction, more empirical research on this topic is highly desirable.
With the advent of the fourth industrial revolution, rising interest in various convergence technologies utilizing information technology (IT) has brought a growing realization of the important role played by IT-related education services. However, research on the role of IT education service quality in the context of IT education is relatively scarce, despite the fact that research on general education service quality and satisfaction has been actively conducted in various contexts. In this study, therefore, five dimensions of IT education service quality (tangibles, reliability, responsiveness, assurance, and empathy) are derived for the context of IT education, based on the SERVPERF model and related previous studies. In addition, the effects of these IT education service quality factors on learners' educational satisfaction and their work application/recommendation intentions are examined. Furthermore, the moderating roles of learner position (practitioner group vs. manager group) and participation motivation (voluntary vs. involuntary participation) in the relationships between IT education service quality factors and learners' educational satisfaction, work application intention, and recommendation intention are also investigated. In an analysis using the structural equation model (SEM) technique based on a questionnaire given to 203 participants of IT education programs at an 'M' IT educational institution in Seoul, South Korea, tangibles, reliability, and assurance were found to have a significant effect on educational satisfaction. Educational satisfaction, in turn, was found to have a significant effect on both work application intention and recommendation intention. Moreover, learner position and participation motivation were found to partially moderate the relationship between IT education service quality factors and educational satisfaction.
This study holds academic implications in that it is one of the first to apply the SERVPERF model (rather than the SERVQUAL model widely adopted by prior studies) to demonstrate the influence of IT education service quality on learners' educational satisfaction, work application intention, and recommendation intention in an IT education environment. The results are expected to provide practical guidance for IT education service providers who wish to enhance learners' educational satisfaction and service management efficiency.

Enzyme Kinetics Based Modeling of Respiration Rate for 'Fuyu' Persimmon (Diospyros kaki) Fruits (효소반응속도론에 기초한 단감의 호흡 모델에 관한 연구)

  • Ahn, Gwang-Hwan;Lee, Dong-Sun
    • Korean Journal of Food Science and Technology
    • /
    • v.36 no.4
    • /
    • pp.580-585
    • /
    • 2004
  • Respiration of 'Fuyu' persimmon (Diospyros kaki) fruits was measured in terms of oxygen consumption rate and carbon dioxide evolution in closed-system experiments at 0, 5, and 20°C. An enzyme kinetics-based respiration model was used to describe the respiration rate as a function of O2 and CO2 gas concentrations, R = Vm[O2] / {Km + (1 + [CO2]/Ki)[O2]}, and the Arrhenius equation was applied to analyze the temperature effect. Vm and Km increased, while Ki decreased, with increasing temperature. Km of O2 consumption was greater than that of CO2 evolution at equal temperature. The inhibitory effect of reduced O2 level on O2 consumption was more prominent than that on CO2 evolution. The activation energy of respiration decreased with reduced O2 and elevated CO2 concentrations, and the activation energy of CO2 evolution was greater than that of O2 consumption. Permeable-package experiments verified the respiration model parameters by showing good agreement between predicted and experimental gas concentrations in the package.
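The respiration model and the Arrhenius temperature scaling can be evaluated in a few lines. This is a sketch only: the Vm, Km, Ki, and activation-energy values below are hypothetical illustration values, not the paper's fitted constants.

```python
import math

def respiration_rate(o2, co2, vm, km, ki):
    """Michaelis-Menten rate with uncompetitive CO2 inhibition:
       R = Vm*[O2] / (Km + (1 + [CO2]/Ki) * [O2])."""
    return vm * o2 / (km + (1.0 + co2 / ki) * o2)

def arrhenius_scale(rate_ref, ea, t_ref_c, t_c, r_gas=8.314):
    """Scale a rate from reference temperature t_ref_c to t_c (Celsius)
    with activation energy ea in J/mol."""
    t_ref, t = t_ref_c + 273.15, t_c + 273.15
    return rate_ref * math.exp(-ea / r_gas * (1.0 / t - 1.0 / t_ref))

# Hypothetical parameters: ambient air vs. a modified-atmosphere package
r_air = respiration_rate(o2=21.0, co2=0.03, vm=10.0, km=5.0, ki=50.0)
r_map = respiration_rate(o2=5.0, co2=10.0, vm=10.0, km=5.0, ki=50.0)
r_warm = arrhenius_scale(r_air, ea=50_000.0, t_ref_c=0.0, t_c=20.0)
```

The comparison of `r_air` and `r_map` mirrors the paper's finding that reduced O2 and elevated CO2 suppress respiration, while `arrhenius_scale` captures the acceleration with temperature.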

An Analysis of Big Video Data with Cloud Computing in Ubiquitous City (클라우드 컴퓨팅을 이용한 유시티 비디오 빅데이터 분석)

  • Lee, Hak Geon;Yun, Chang Ho;Park, Jong Won;Lee, Yong Woo
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.45-52
    • /
    • 2014
  • The Ubiquitous City (U-City) is a smart or intelligent city designed to satisfy human beings' desire to enjoy IT services with any device, anytime, anywhere. It is a future city model based on the Internet of Everything or Things (IoE/IoT), and it includes a large number of networked video cameras. These networked video cameras support many U-City services as one of the main input sources together with sensors, and they constantly generate a huge amount of video information, real big data for the U-City. The U-City is usually required to manipulate this big data in real time, which is not easy at all. In addition, the accumulated video data must often be analyzed to detect an event or find a figure among them, which requires a lot of computational power and usually takes a long time. Current research tries to reduce the processing time of big video data, and cloud computing can be a good solution to this problem. Among the many cloud computing methodologies that could be applied, MapReduce is an interesting and attractive one: it has many advantages and is gaining popularity in many areas. Video cameras evolve day by day, so their resolution improves sharply, leading to exponential growth of the data produced by networked video cameras. We are coping with real big data when we deal with video image data produced by high-quality video cameras. Video surveillance systems were of limited use until cloud computing emerged, but they are now widely deployed in U-Cities thanks to these methodologies. Video data are unstructured, so it is not easy to find good research results on analyzing them with MapReduce. This paper presents an analyzing system for video surveillance, a cloud-computing based video data management system that is easy to deploy, flexible, and reliable.
It consists of the video manager, the video monitors, the storage for the video images, the storage client, and the streaming IN component. The "video monitor" for the video images consists of the "video translator" and the "protocol manager". The "storage" contains the MapReduce analyzer. All components were designed according to the functional requirements of a video surveillance system. The "streaming IN" component receives the video data from the networked video cameras and delivers them to the "storage client"; it also manages the bottleneck of the network to smooth the data stream. The "storage client" receives the video data from the "streaming IN" component and stores them in the storage; it also helps other components access the storage. The "video monitor" component transfers the video data by smooth streaming and manages the protocol. The "video translator" sub-component enables users to manage the resolution, the codec, and the frame rate of the video image. The "protocol" sub-component manages the Real Time Streaming Protocol (RTSP) and the Real Time Messaging Protocol (RTMP). We use the Hadoop Distributed File System (HDFS) as the storage for cloud computing: Hadoop stores the data in HDFS and provides a platform that can process the data with the simple MapReduce programming model. We suggest our own methodology for analyzing the video images using MapReduce: the workflow of video analysis is presented and explained in detail in this paper. The performance evaluation was carried out experimentally, and we found that our proposed system worked well; the evaluation results are presented with analysis. With our cluster system, we used compressed 1920x1080 (FHD) resolution video data, the H.264 codec, and HDFS as video storage, and we measured the processing time according to the number of frames per mapper. Tracing the optimal splitting size of the input data and the processing time according to the number of nodes, we found that system performance scales linearly.
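The MapReduce analysis of video frames can be caricatured in pure Python: a mapper emits (label, count) pairs for detections in its chunk of frames, and a reducer aggregates them, mirroring the split/map/reduce shape of the Hadoop jobs described above. The `detect()` routine and the chunk size are hypothetical stand-ins, not the authors' implementation.

```python
from collections import defaultdict

def map_frames(frame_chunk):
    """Mapper sketch: emit (label, 1) for each detection in a chunk of frames."""
    def detect(frame):               # hypothetical detector stand-in
        return ["person"] if frame % 2 == 0 else []
    for frame in frame_chunk:
        for label in detect(frame):
            yield label, 1

def reduce_counts(pairs):
    """Reducer: sum the per-label counts emitted by all mappers."""
    totals = defaultdict(int)
    for label, n in pairs:
        totals[label] += n
    return dict(totals)

# Split 100 frames into mapper-sized chunks, as HDFS input splits would do
chunks = [range(i, i + 25) for i in range(0, 100, 25)]
pairs = [p for chunk in chunks for p in map_frames(chunk)]
result = reduce_counts(pairs)
```

The frames-per-mapper choice in the sketch corresponds to the splitting-size parameter whose optimum the paper traces experimentally.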

On the Sequences of Dialogue Acts and the Dialogue Flows-w.r.t. the appointment scheduling dialogues (대화행위의 연쇄관계와 대화흐름에 대하여 -[일정협의 대화] 중심으로)

  • 박혜은;이민행
    • Korean Journal of Cognitive Science
    • /
    • v.10 no.2
    • /
    • pp.27-34
    • /
    • 1999
  • The main purpose of this paper is to propose a general dialogue flow for appointment scheduling dialogues in German using the concept of dialogue acts. A basic assumption of this research is that dialogue acts contribute to the improvement of a translation system: they can be very useful for solving, with contextual knowledge, problems that the syntactic and semantic modules cannot resolve. The classification of the dialogue acts was conducted as part of the VERBMOBIL project and was based on real dialogues transcribed by experts. The real dialogues were analyzed in terms of dialogue acts. We empirically analyzed the sequences of dialogue acts not only across a series of dialogue turns but also within a single dialogue turn. We additionally analyzed the sequences within one turn because the dialogue data used in this research differ somewhat from those in other existing studies. By examining the sequences of dialogue acts, we propose a dialogue flowchart for appointment scheduling dialogues. Based on the statistical analysis of the sequences of the most frequent dialogue acts, the flowcharts seem to represent appointment scheduling dialogues in general. Further research is required on the classification of dialogue acts, which was the basis for the analysis of the dialogues. In order to extract the most generalized model, we did not subcategorize the dialogue acts and used a limited number of dialogue act items. However, generally defined dialogue acts need to be defined more concretely, and new dialogue acts for specific situations should be added.
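A flowchart of the kind proposed can be derived from bigram counts over dialogue-act sequences: the most frequent transitions become the arcs. A minimal sketch with invented act labels (the VERBMOBIL act inventory is not reproduced here):

```python
from collections import Counter

def transition_counts(dialogues):
    """Count dialogue-act bigrams across dialogues; the most frequent
    transitions suggest the arcs of a dialogue flowchart."""
    counts = Counter()
    for acts in dialogues:
        counts.update(zip(acts, acts[1:]))
    return counts

# Hypothetical act sequences from appointment-scheduling dialogues
dialogues = [
    ["greet", "suggest", "reject", "suggest", "accept", "bye"],
    ["greet", "suggest", "accept", "bye"],
]
counts = transition_counts(dialogues)
top = counts.most_common(3)  # candidate arcs for the flowchart
```

Applying the same counting within single turns, as the paper does, only requires feeding it turn-internal act sequences instead of whole dialogues.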


Expression of Heat Shock Protein HspA2 in Human Tissues (인간 조직에서 Heat Shock Protein A2 (HspA2) 단백질의 발현)

  • Son, W.Y.;Hwang, S.H.;Han, C.T.;Lee, J.H.;Choi, Y.J.;Kim, S.;Kim, Y.C.
    • Clinical and Experimental Reproductive Medicine
    • /
    • v.26 no.2
    • /
    • pp.225-230
    • /
    • 1999
  • In the mouse, heat shock protein 70-2 (hsp70-2) has been found to have a special function in spermatogenesis. Based on this observation, the hypothesis was proposed that human hspA2 (a human gene with 98.2% amino acid homology to hsp70-2) might have an important function in spermatogenesis in human testes. To test the hypothesis, we examined the expression of hspA2 in human tissues. The expression vector pDMC4 for expression of the human hspA2 protein was constructed using pTricHisB (Invitrogen, USA), and the expressed hspA2 protein cross-reacted with antiserum 2A raised against mouse hsp70-2 protein. Based on this cross-reactivity, we determined the expression level of hspA2 protein in human tissues by Western blot analysis using antiserum 2A. We demonstrated that antiserum 2A antibodies specifically detected human hspA2 protein produced in the E. coli expression system. On Western blot analyses, significant hspA2 expression was observed in testes with normal spermatogenesis, whereas a low level of hspA2 was expressed in testes with Sertoli-cell-only syndrome. A small amount of hspA2 was also detected in breast, stomach, prostate, colon, liver, ovary, and epididymis. These results demonstrate that the hspA2 protein is highly expressed in male-specific germ cells, which in turn suggests that hspA2 might play a specific role during meiosis in human testes, as suggested in the murine model. However, further studies should be conducted to determine the function of hspA2 protein in human spermatogenesis.


Color-related Query Processing for Intelligent E-Commerce Search (지능형 검색엔진을 위한 색상 질의 처리 방안)

  • Hong, Jung A;Koo, Kyo Jung;Cha, Ji Won;Seo, Ah Jeong;Yeo, Un Yeong;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.109-125
    • /
    • 2019
  • As interest in intelligent search engines increases, various studies have been conducted to extract and utilize product-related features intelligently. In particular, when users search for goods in e-commerce search engines, the 'color' of a product is an important feature that describes it. It is therefore necessary to deal with the synonyms of color terms in order to produce accurate results for users' color-related queries. Previous studies have suggested a dictionary-based approach to processing synonyms for color features, but this approach has the limitation that it cannot handle unregistered color-related terms in user queries. To overcome this limitation, this research proposes a model which extracts RGB values from an internet search engine in real time and outputs similar color names based on the designated color information. First, a color term dictionary was constructed which includes color names and the R, G, B values of each color, drawn from the Korean color standard digital palette program and the Wikipedia color list, for the basic color search. The dictionary was made more robust by adding 138 color names transliterated into Korean from English color names, with their corresponding RGB values, so the final color dictionary includes a total of 671 color names and corresponding RGB values. The method proposed in this research starts by searching for the specific color a user queried and checking whether the searched color is present in the built-in color dictionary. If the color exists in the dictionary, its RGB values in the dictionary are used as the reference values of the retrieved color. If the searched color does not exist in the dictionary, the top-5 Google image search results for the searched color are crawled and average RGB values are extracted from a certain middle area of each image.
To extract the RGB values from images, a variety of approaches were attempted, since simply taking the average of the RGB values in the center area of an image has its limits. As a result, clustering the RGB values in a certain area of the image and using the average value of the densest cluster as the reference values showed the best performance. Based on the reference RGB values of the searched color, the RGB values of all the colors in the previously constructed color dictionary are compared, and a color list is created with colors within a range of ±50 for each of the R, G, and B values. Finally, using the Euclidean distance between these candidates and the reference RGB values of the searched color, the color with the highest similarity among up to five colors becomes the final outcome. To evaluate the usefulness of the proposed method, we performed an experiment in which 300 color names and the corresponding RGB values were obtained through questionnaires and used to compare the RGB values obtained from four different methods, including the proposed one. The average CIE-Lab Euclidean distance using our method was about 13.85, relatively low compared with 30.88 for the case using the synonym dictionary only and 30.38 for the case using the dictionary with the Korean synonym website WordNet. The case without the clustering step of the proposed method showed an average Euclidean distance of 13.88, which implies that the DBSCAN clustering of the proposed method can reduce the Euclidean distance. This research suggests a new color synonym processing method based on RGB values that combines the dictionary method with real-time synonym processing for new color names, thereby removing the limits of the conventional dictionary-based synonym processing approach.
This research can contribute to improving the intelligence of e-commerce search systems, especially their color search features.
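The matching pipeline described above (a ±50 per-channel filter followed by Euclidean-distance ranking) can be sketched as follows. The palette here is a tiny hypothetical stand-in for the 671-entry dictionary, and the crawling/DBSCAN step that produces the query RGB values is omitted.

```python
import math

def closest_colors(query_rgb, dictionary, window=50, k=5):
    """Filter dictionary colors to within +/-window per RGB channel, then
    rank the survivors by Euclidean distance to the query color."""
    candidates = [
        (name, rgb) for name, rgb in dictionary.items()
        if all(abs(q - c) <= window for q, c in zip(query_rgb, rgb))
    ]
    candidates.sort(key=lambda item: math.dist(query_rgb, item[1]))
    return candidates[:k]

# Tiny hypothetical stand-in for the color dictionary in the abstract
palette = {
    "crimson": (220, 20, 60),
    "tomato": (255, 99, 71),
    "navy": (0, 0, 128),
    "salmon": (250, 128, 114),
}
matches = closest_colors((230, 30, 70), palette)
```

Note the window filter runs in plain RGB space, as the abstract describes, while the paper's evaluation measures the final error in CIE-Lab space.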