• Title/Summary/Keyword: the dynamic model


A Study on the Interactive Narrative - Focusing on the analysis of VR animation <Wolves in the Walls> (인터랙티브 내러티브에 관한 연구 - VR 애니메이션 <Wolves in the Walls>의 분석을 중심으로)

  • Zhuang Sheng
    • Trans-
    • /
    • v.15
    • /
    • pp.25-56
    • /
    • 2023
  • VR is a dynamic image-simulation technology with very high information density. Its spatial depth, temporality, and realism bring an unprecedented sense of immersion to the experience. Because of this high information density, however, the information it contains is easily manipulated, creating an illusion of objectivity, and users need guidance to help them interpret the dense dynamic imagery. Just as games set up navigation interfaces and interactivity, interactivity in virtual reality is a way of interpreting virtual content. At present, domestic research on VR content focuses mainly on technology exploration and the visual-aesthetic experience; research on interactive storytelling design, an important part of VR content creation, is still lacking. To explore a better interactive storytelling model for virtual reality content, this paper analyzes the interactive storytelling features of the VR animation <Wolves in the Walls> through literature review and case study. We find that the following rules can be followed when creating VR content: 1. The VR environment should fully exploit the user's freedom of movement; users should not be treated as mere observers. The user's sense of presence should be fully considered when designing interaction modules. Break down the "fourth wall" to encourage audience interaction in the virtual environment, making the "hot" medium of VR "cool". 2. Provide a developer-driven narrative in the early stages of the work, so that users are not confused by an ambiguous world situation when they first enter a virtual environment with a high degree of freedom. 3. Unlike games that guide users through text, guide them through a more natural interactive approach that adds natural dialogue between the user and story characters (NPCs).
Also, since gaze guidance is an important part of story progression, gaze-guidance elements should be set up within the spatial scene: for example, eye-following cues, motion cues, and language cues. By analyzing the interactive storytelling features and innovations of the VR animation <Wolves in the Walls>, this paper aims to summarize the main elements of interactive storytelling from its content, explore how interactive storytelling can be better showcased in virtual reality content, and offer thoughts on future VR content creation.

Differentiation of True Recurrence from Delayed Radiation Therapy-related Changes in Primary Brain Tumors Using Diffusion-weighted Imaging, Dynamic Susceptibility Contrast Perfusion Imaging, and Susceptibility-weighted Imaging (확산강조영상, 역동적조영관류영상, 자화율강조영상을 이용한 원발성 뇌종양환자에서의 종양재발과 지연성 방사선치료연관변화의 감별)

  • Kim, Dong Hyeon;Choi, Seung Hong;Ryoo, Inseon;Yoon, Tae Jin;Kim, Tae Min;Lee, Se-Hoon;Park, Chul-Kee;Kim, Ji-Hoon;Sohn, Chul-Ho;Park, Sung-Hye;Kim, Il Han
    • Investigative Magnetic Resonance Imaging
    • /
    • v.18 no.2
    • /
    • pp.120-132
    • /
    • 2014
  • Purpose: To compare dynamic susceptibility contrast imaging, diffusion-weighted imaging, and susceptibility-weighted imaging (SWI) for differentiating tumor recurrence from delayed radiation therapy (RT)-related changes in patients treated with RT for primary brain tumors. Materials and Methods: We enrolled 24 patients treated with RT for various primary brain tumors who showed newly appearing enhancing lesions more than one year after completion of RT on follow-up MRI. The enhancing lesions were confirmed as recurrences (n=14) or RT-related changes (n=10). We calculated the mean values of normalized cerebral blood volume (nCBV), apparent diffusion coefficient (ADC), and the proportion of dark signal intensity on SWI (proSWI) for the enhancing lesions. Values were compared between the two groups using the t-test. A multivariable logistic regression model was used to determine the best predictor for the differential diagnosis, and the cutoff value of the best predictor, obtained from receiver operating characteristic (ROC) curve analysis, was applied to calculate sensitivity, specificity, and accuracy. Results: The mean nCBV was significantly higher in the recurrence group than in the RT-change group (P = .004), and the mean proSWI was significantly lower in the recurrence group (P < .001). No significant difference was observed in mean ADC between the two groups. Multivariable logistic regression showed that proSWI was the only independent predictor; the sensitivity, specificity, and accuracy were 78.6% (11 of 14), 100% (10 of 10), and 87.5% (21 of 24), respectively. Conclusion: proSWI was the most promising parameter for differentiating newly developed enhancing lesions appearing more than one year after RT completion in brain tumor patients.
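As a rough illustration of the abstract's final step (applying a ROC-derived cutoff and computing sensitivity, specificity, and accuracy), the sketch below classifies lesions by a proSWI threshold. The cutoff and lesion values are hypothetical; the study's actual cutoff is not reported in the abstract.

```python
# Hypothetical illustration: proSWI was lower in recurrences, so values
# below the cutoff are labeled "recurrence". Counts give the three metrics.

def classify(pro_swi, cutoff):
    """Label a lesion by its proSWI value (lower -> recurrence)."""
    return "recurrence" if pro_swi < cutoff else "rt_change"

def diagnostic_metrics(values, labels, cutoff):
    """Return (sensitivity, specificity, accuracy) for a given cutoff."""
    tp = fn = tn = fp = 0
    for v, truth in zip(values, labels):
        pred = classify(v, cutoff)
        if truth == "recurrence":
            tp += pred == "recurrence"
            fn += pred == "rt_change"
        else:
            tn += pred == "rt_change"
            fp += pred == "recurrence"
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / len(values)
```

In the study the same bookkeeping over 24 lesions yields 11/14, 10/10, and 21/24.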

Estimating Grain Weight and Grain Nitrogen Content with Temperature, Solar Radiation and Growth Traits During Grain-Filling Period in Rice (등숙기 온도 및 일사량과 생육형질을 이용한 벼 종실중 및 종실질소함량 추정)

  • Lee, Chung-Kuen;Kim, Jun-Hwan;Son, Ji-Young;Yoon, Young-Hwan;Seo, Jong-Ho;Kwon, Young-Up;Shin, Jin-Chul;Lee, Byun-Woo
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.55 no.4
    • /
    • pp.275-283
    • /
    • 2010
  • This experiment was conducted to construct process models for estimating grain weight (GW) and grain nitrogen content (GN) in rice. A model was developed to describe the dynamic pattern of GW and GN during the grain-filling period, considering their relationships with temperature, solar radiation, and growth traits such as LAI, shoot dry weight, shoot nitrogen content, and grain number during grain filling. First, equations for maximum grain weight (GWmax) and maximum grain nitrogen content (GNmax) were formulated in relation to accumulated effective temperature (AET) × accumulated radiation (AR) using boundary-line analysis. Second, GW and GN equations were created by relating the difference between GW and GWmax, and between GN and GNmax, respectively, to growth traits. Considering statistics such as the coefficient of determination, the relative root mean square error, and the number of predictor variables, appropriate models for GW and GN were selected. The GW model includes GWmax (determined by AET × AR), shoot dry weight, and grain number per unit land area as predictor variables, while the GN model includes GNmax (determined by AET × AR), shoot N content, and grain number per unit land area. These models could explain, with relatively high accuracy, variations in GW and GN caused not only by variations in temperature and solar radiation but also by variations in growth traits due to different sowing dates, nitrogen fertilization amounts, and row spacings.
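The two-stage structure described above (a boundary-line upper limit from AET × AR, then an adjustment by growth traits) can be sketched as follows. The saturating functional form and every coefficient here are illustrative assumptions; the paper's fitted equations are not reproduced in the abstract.

```python
# Sketch of the abstract's two-stage model. A Michaelis-Menten-type
# saturating boundary line is ASSUMED for GWmax; the linear trait
# adjustment and all coefficients (a, b, c1, c2) are hypothetical.

def gw_max(aet, ar, a=30.0, b=5.0e4):
    """Boundary-line maximum grain weight as a saturating function of
    AET (accumulated effective temperature) x AR (accumulated radiation)."""
    x = aet * ar
    return a * x / (b + x)

def grain_weight(aet, ar, shoot_dw, grain_number, c1=1e-4, c2=1e-5):
    """Realized GW: GWmax reduced by a (hypothetical) linear term in
    sink size (grain number per unit area) and source (shoot dry weight)."""
    deficit = c1 * grain_number - c2 * shoot_dw  # GWmax - GW, per abstract
    return max(0.0, gw_max(aet, ar) - max(0.0, deficit))
```

The GN model in the paper has the same shape with shoot N content in place of shoot dry weight.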

Content-based Recommendation Based on Social Network for Personalized News Services (개인화된 뉴스 서비스를 위한 소셜 네트워크 기반의 콘텐츠 추천기법)

  • Hong, Myung-Duk;Oh, Kyeong-Jin;Ga, Myung-Hyun;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.57-71
    • /
    • 2013
  • Over a billion people around the world generate news minute by minute. Some news can be anticipated, but most arises from unexpected events such as natural disasters, accidents, and crimes. People spend much time watching the huge amount of news delivered by many media outlets because they want to understand what is happening now, predict what might happen in the near future, and share and discuss the news; watching and extracting useful information from news helps them make better daily decisions. However, it is difficult for people to choose news suited to them and to obtain useful information from it, because there are so many news media, such as portal sites and broadcasters, and many articles consist of gossip and breaking news. User interest also changes over time, and most people have no interest in outdated news, so a personalized news service must apply users' recent interests; that is, it must manage user profiles dynamically. In this paper, a content-based news recommendation system is proposed to provide such a personalized news service. Personalization requires the user's personal information, which is extracted from a social network service. The proposed system constructs a dynamic user profile based on the user's recent information on Facebook, one such social network service. This information comprises personal information, recent articles, and Facebook Page information. Facebook Pages are used by businesses, organizations, and brands to share their content and connect with people, and Facebook users can add a Page to indicate their interest in it. The proposed system uses this Page information to create the user profile and to match user preferences to news topics.
However, some Pages are not directly matched to a news topic, because a Page deals with individual objects and does not provide topic information suitable for news. Freebase, a large collaborative database of well-known people, places, and things, is used to match a Page to a news topic via the hierarchy information of its objects. By using recent Page information and articles of Facebook users, the proposed system maintains a dynamic user profile, which is then used to measure user preferences for news. To generate the news profile, the news categories predefined by the news media are used, and keywords of news articles are extracted after analyzing news content, including title, category, and script. The TF-IDF technique, which reflects how important a word is to a document in a corpus, is used to identify the keywords of each article. User and news profiles share the same format so that the similarity between user preferences and news can be measured efficiently, and the system calculates all similarity values between user profiles and news profiles. Existing similarity calculations in the vector space model do not cover synonyms, hypernyms, and hyponyms, because they handle only the given words; the proposed system applies WordNet to the similarity calculation to overcome this limitation. The top-N news articles with the highest similarity values for a target user are then recommended. To evaluate the proposed system, user profiles were generated from Facebook accounts with the participants' consent, and a Web crawler was implemented to extract news information from PBS, a non-profit public broadcasting television network in the United States, from which news profiles were constructed. The performance of the proposed method was compared with that of two benchmarks: a traditional TF-IDF-based method, and the 6Sub-Vectors method, which divides the points used to obtain keywords into six parts.
Experimental results demonstrate that, in terms of prediction error of recommended news, the proposed system provides useful news to users by applying the user's social network information and WordNet functions.
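The core matching step described above (TF-IDF profiles compared in a vector space, with top-N selection) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the WordNet synonym/hypernym expansion and Freebase mapping are omitted.

```python
# Minimal TF-IDF + cosine-similarity sketch of the profile-matching step.
import math
from collections import Counter

def tf_idf(docs):
    """docs: list of token lists -> list of {term: tf-idf weight} dicts.
    Terms present in every document get idf = log(n/n) = 0."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: (c / len(doc)) * math.log(n / df[t])
                     for t, c in tf.items()})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def top_n(user_vec, news_vecs, n=3):
    """Indices of the n news profiles most similar to the user profile."""
    scored = sorted(enumerate(news_vecs),
                    key=lambda kv: cosine(user_vec, kv[1]), reverse=True)
    return [i for i, _ in scored[:n]]
```

In the paper, the similarity is additionally adjusted with WordNet so that related but non-identical terms still contribute.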

Analysis of Greenhouse Thermal Environment by Model Simulation (시뮬레이션 모형에 의한 온실의 열환경 분석)

  • 서원명;윤용철
    • Journal of Bio-Environment Control
    • /
    • v.5 no.2
    • /
    • pp.215-235
    • /
    • 1996
  • Thermal analysis by mathematical model simulation makes it possible to reasonably predict the heating and/or cooling requirements of greenhouses located in various geographical and climatic environments. Further advantages of the model simulation technique are that it can be used to select an appropriate heating system, set up an energy utilization strategy, schedule seasonal crop patterns, and plan new greenhouse ranges. In this study, control of the greenhouse microclimate is categorized into cooling and heating. A dynamic model was adopted to simulate heating requirements and energy conservation effectiveness, such as energy saving by a night-time thermal curtain, estimation of heating degree-hours (HDH), and long-term prediction of greenhouse thermal behavior. The cooling effects of ventilation, shading, and a pad & fan system were partly analyzed by a static model. Experimental work with a small model greenhouse of 1.2 m × 2.4 m showed that cooling the greenhouse by spraying cold water directly on the cover surface, or by recirculating cold water through heat exchangers, would be effective for greenhouse summer cooling. The mathematical model developed for greenhouse simulation is highly applicable because it can reflect various climatic factors such as temperature, humidity, beam and diffuse solar radiation, and wind velocity, and it was closely verified against weather data obtained through long-term greenhouse experiments. Most of the material relating to greenhouse heating or cooling components was obtained from the mathematically simulated model greenhouse using typical-year (1987) data from Jinju, Gyeongnam, while some of the material relating to cooling was obtained from model experiments, including analysis of the cooling effect of water sprayed directly on the greenhouse roof surface. The results are summarized as follows: 1.
The heating requirements of the model greenhouse were strongly related to the minimum temperature set for the greenhouse. The night-time setting temperature is much more influential on heating energy requirements than the day-time setting; it is therefore highly recommended that the night-time setting temperature be carefully determined and controlled. 2. Conventional HDH data are estimated from fairly long-term average weather temperatures together with a standard base temperature (usually 18.3°C). Such data can serve only as a relative comparison criterion for heating load; they are not applicable to the calculation of greenhouse heating requirements because of their limited consideration of climatic factors and an inappropriate base temperature. Comparing HDH data with the simulation results shows that a heating system designed from HDH data would probably overshoot the actual heating requirement. 3. The energy-saving effect of the night-time thermal curtain, as well as the estimated heating requirement, is sensitive to weather conditions: the thermal curtain adopted in the simulation showed high energy-saving effectiveness, amounting to more than 50% of the annual heating requirement. 4. Ventilation performance during warm seasons is governed mainly by the air exchange rate, with some variation depending on greenhouse structure, weather, and cropping conditions. For air exchange rates above one volume per minute, the reduction in temperature rise in both greenhouse types considered becomes modest with additional ventilation capacity; the desirable ventilation capacity is therefore taken as one air change per minute, the recommended rate for common greenhouses. 5.
In a fully cropped, glass-covered greenhouse under clear weather at 50% RH with a continuous one air change per minute, the temperature drops in a 50% shaded greenhouse and in a pad & fan greenhouse were 2.6°C and 6.1°C, respectively. The temperature in the control greenhouse under continuous air exchange was 36.6°C, which was 5.3°C above ambient; as a result, the greenhouse temperature could be maintained about 3°C below ambient. At 80% RH, however, it was impossible to bring the greenhouse temperature below ambient, because the temperature reduction achievable by the pad & fan system was then no more than 2.4°C. 6. For the three months of hot summer, assuming the greenhouse is cooled only when its temperature rises above 27°C, the relationship between the ambient RH and the greenhouse temperature drop (ΔT) was formulated as ΔT = -0.077·RH + 7.7. 7. Time-dependent cooling effects of each measure, and of combinations of ventilation, 50% shading, and a pad & fan system of 80% efficiency, were predicted continuously over a typical summer day. When the greenhouse was cooled only by one air change per minute, the greenhouse air temperature was 5°C above the outdoor temperature; no single method could bring the greenhouse air temperature below the outdoor temperature even under fully cropped conditions, but when the systems were operated together, the greenhouse air temperature could be held about 2.0-2.3°C below ambient. 8. When cool water of 6.5-8.5°C was sprayed on the greenhouse roof surface at a flow rate of 1.3 L/min per unit greenhouse floor area, the greenhouse air temperature could be dropped to 16.5-18.0°C, about 10°C below the ambient temperature of 26.5-28.0°C at that time.
The most important factor for cooling greenhouse air effectively with water spray may be securing a plentiful source of cool water, such as ground water or cold water produced by a heat pump. Future work will focus not only on analyzing the feasibility of heat-pump operation but also on finding the relationships between greenhouse air temperature (T_g), spray water temperature (T_w), water flow rate (Q), and ambient temperature (T_o).
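The regression in result 6 above is simple enough to state as a function. The coefficients are those reported in the abstract; note the fit applies only to the cooling situation analyzed there (greenhouse cooled when its temperature exceeds 27°C).

```python
# dT = -0.077 * RH + 7.7, the fitted relationship from result 6:
# expected greenhouse temperature drop (deg C) vs. ambient RH (%).

def greenhouse_temp_drop(rh_percent):
    """Predicted greenhouse temperature drop for a given ambient RH (%)."""
    return -0.077 * rh_percent + 7.7
```

Consistent with the abstract, the predicted drop shrinks as humidity rises (evaporative cooling loses headroom) and vanishes near saturation.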


Utility-Based Video Adaptation in MPEG-21 for Universal Multimedia Access (UMA를 위한 유틸리티 기반 MPEG-21 비디오 적응)

  • 김재곤;김형명;강경옥;김진웅
    • Journal of Broadcast Engineering
    • /
    • v.8 no.4
    • /
    • pp.325-338
    • /
    • 2003
  • Video adaptation in response to dynamic resource conditions and user preferences is a key technology for enabling universal multimedia access (UMA) through heterogeneous networks by a multitude of devices in a seamless way. Although many adaptation techniques exist, the selection of appropriate adaptations among multiple choices that satisfy given constraints is often ad hoc. To provide a systematic solution, we present a general conceptual framework that models video entities, adaptations, resources, utilities, and the relations among them, allowing various adaptation problems to be formulated as resource-constrained utility maximization. We apply the framework to a practical case of dynamic bit-rate adaptation of MPEG-4 video streams employing a combination of frame dropping and DCT coefficient dropping. Furthermore, we present a descriptor, accepted as part of MPEG-21 Digital Item Adaptation (DIA), for supporting terminal and network quality of service (QoS) in an interoperable manner. Experiments demonstrate the feasibility of the presented framework using the descriptor.
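The "resource-constrained utility maximization" formulation above reduces, for a discrete set of operating points, to picking the feasible point of highest utility. The sketch below illustrates this with hypothetical operating points (bit rate as the resource, a quality score as the utility); it is not MPEG-21 DIA data.

```python
# Hypothetical adaptation operating points: (name, bitrate_kbps, utility).
# Each point is one combination of frame dropping / coefficient dropping.

def best_adaptation(points, budget_kbps):
    """Return the name of the highest-utility operating point whose
    bit rate fits the budget, or None if no point is feasible."""
    feasible = [p for p in points if p[1] <= budget_kbps]
    if not feasible:
        return None
    return max(feasible, key=lambda p: p[2])[0]

points = [
    ("full",          1000, 1.00),
    ("drop_B_frames",  700, 0.90),
    ("drop_coeffs",    500, 0.80),
    ("drop_both",      300, 0.60),
]
```

The descriptor standardized in MPEG-21 DIA essentially carries such utility/resource information so that any terminal can perform this selection interoperably.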

Mechanism-based View of Innovative Capability Building in POSCO (메커니즘 관점에서 본 조직변신과 포스코의 혁신패턴 연구)

  • Kim, So-Hyung
    • Journal of Distribution Science
    • /
    • v.11 no.6
    • /
    • pp.59-65
    • /
    • 2013
  • Purpose - The study of mechanism as a competitive strategy, a relatively new field in strategic management research, has recently drawn the attention of business management scholars. The literature has so far proposed the subject-based view, the environment-based view, and the resource-based view in its analyses of firm management. It is reasonable to consider firm management as the combination of and interaction among three key elements: subject, environment, and resources. The overall dynamic process that integrates these three elements and creates functional harmony is identified as the mechanism, the principle of firm management; this is the mechanism-based view (MBV). Much of the extant literature on the MBV has focused on case studies, a qualitative approach prone to researcher subjectivity, although the intuitions from such studies may yield meaningful insights into firm-specific mechanisms. This study also focuses on case analysis, but it attempts a quantitative approach as well, in order to reach a scientific and systematic understanding of the MBV. Research design, data, and methodology - I used both qualitative and quantitative approaches within a single model, given the complexity of innovation processes. I conducted in-depth interviews with POSCO employees: 20 from general management, two from human resources, eight from information technology, five from finance and accounting, and five from production and logistics management. Once the innovative events were selected, the interview results were double-checked by the interviewees themselves to ensure the accuracy of the recorded answers. Based on the interviews, I then conducted statistical validation using the survey results.
Results - This study analyzes the process of building innovation and the effect of the mechanism pattern on innovation by examining the case of POSCO, which has survived over the past 21 years. I apply a new analytical tool to study mechanism innovation types, perform a new classification, and describe the interrelationships among the mechanism factors. This process shows how the "Subject" factor interacts with the other factors. I found that, in the adoption stage of the innovation process, Subject had a mediating effect, but the mediating effect of resources and performance was smaller than the direct effect of Subject on performance; during the implementation stage, the mediating effect of Subject increased. Conclusion - I have therefore confirmed that the subject utilizes resources reasonably and efficiently. I have also advanced mechanism studies: whereas the field's research methods have largely been confined to single case studies, I have used both qualitative and quantitative methods to examine the relationships among mechanisms.

A Study on the Long-run Equilibrium Relationship and Causality between the Prices of Fisheries Products at Different Levels of Distribution -Focused on Hairtail and Squid in Pusan- (수산물의 유통단계별 가격간 장기균형관계와 인과성 분석 -부산지역의 갈치, 오징어를 중심으로-)

  • 강석규;이광진
    • The Journal of Fisheries Business Administration
    • /
    • v.29 no.2
    • /
    • pp.77-96
    • /
    • 1998
  • Fisheries products in Korea generally pass through three markets on the way from producers to end consumers: the wholesale market at the production site (Market A), the wholesale market at the consumption site (Market B), and the retail market (Market C). As products move from Market A through Market B to Market C, the marginal gap between the prices asked in these markets shows an apparent relationship, and producers, middlemen, consumers, and the government departments concerned may all influence the marketing prices of fisheries products. Employing cointegration theory, this study investigates whether causality in price-setting exists among these markets and, if so, what form it takes. The authors focus on fisheries markets in Pusan, analyzing the long-run equilibrium relationship and causality between the prices of hairtail and squid across markets at different levels. The data cover the period from August 1984 to December 1997 for hairtail, and from May 1989 to December 1997 for squid. The main findings may be summarized as follows. First, for every price time series of hairtail and squid in each market, first differencing is necessary to satisfy the stationarity conditions, since each series is integrated of order one. This means that homogeneous integration of the time series, a requirement for a long-run equilibrium among prices at different markets, is satisfied. Second, the analysis of the long-run equilibrium relationship between prices at Market A and Market B shows that such a relationship does exist for the selling prices of both species. Third, the error correction model (ECM) used here to describe the long- and short-run dynamics of price changes demonstrates that, for squid, a price change in Market A leads to a corresponding price change in Market B in the long run.
In the short run, however, the price at Market B is not only influenced by price changes in Market A but also influences the price at Market A; that is, the prices in Market A and Market B have a feedback effect. It should be noted that the limitation of the data, which cover only the two species of hairtail and squid, may cause sampling bias. Nonetheless, we may conclude that a dynamic relationship in price formation does exist, in view of the transaction volumes of the species at the different markets. The conclusions drawn from this study should not only contribute to the long-standing debate between academic circles and the fishing community over the direction of causality in price-setting, but also provide a useful standard for policy makers concerned with the pricing of fisheries products.
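The ECM structure referred to above links the short-run change in one market's price to the short-run change in the other and to last period's deviation from the long-run equilibrium. A single update step can be sketched as follows; the long-run relation p_B = β·p_A and all parameter values are illustrative, not the paper's estimates.

```python
# One step of a textbook error correction model for two cointegrated
# price series: dP_B(t) = alpha * (P_B(t-1) - beta * P_A(t-1)) + gamma * dP_A(t).
# alpha < 0 pulls Market B back toward the long-run equilibrium;
# gamma captures short-run pass-through from Market A. Values are illustrative.

def ecm_step(p_b_prev, p_a_prev, dp_a, alpha=-0.5, beta=1.2, gamma=0.6):
    """Predicted change in Market B's price for one period."""
    equilibrium_error = p_b_prev - beta * p_a_prev  # deviation from long run
    return alpha * equilibrium_error + gamma * dp_a
```

The feedback effect found for squid corresponds to fitting a symmetric equation for Market A as well, with Market B's lagged deviation and short-run change on the right-hand side.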


Effects of particle size and loading rate on the tensile failure of asphalt specimens based on a direct tensile test and particle flow code simulation

  • Q. Wang;D.C. Wang;J.W. Fu;Vahab Sarfarazi;Hadi Haeri;C.L. Guo;L.J. Sun;Mohammad Fatehi Marji
    • Structural Engineering and Mechanics
    • /
    • v.86 no.5
    • /
    • pp.607-619
    • /
    • 2023
  • In this study, the behavior of asphalt under tensile loading was evaluated experimentally and numerically through indirect Brazilian and direct tensile tests. The paper is important from two points of view: first, a new test method was developed for determining the direct tensile strength of asphalt, and its difference from the indirect test method was obtained; second, the effects of particle size and loading rate on the tensile fracture mechanism were clarified. The experimental direct tensile strength of the asphalt specimens was measured in the laboratory using the compression-to-tensile load converting (CTLC) device. Special asphalt specimens were prepared in the form of slabs with a central hole; the CTLC device is equipped with such a specimen and placed in the universal testing machine, after which the direct tensile strength of asphalt specimens with different ingredient sizes can be measured at different loading rates. The particle flow code (PFC), a numerical modeling technique based on the versatile discrete element method (DEM), was used to simulate the direct tensile strength test. Three different particle diameters were tested under three different loading rates. The results show that at a loading rate of 0.016 mm/sec, two tensile cracks initiated from the left and right of the hole and propagated perpendicular to the loading axis until coalescing with the model boundary. At 0.032 mm/sec, two tensile cracks initiated from the left and right of the hole and propagated perpendicular to the loading axis, with branching occurring in these cracks, showing that crack propagation was under quasi-static conditions. At 0.064 mm/sec, mixed tensile and shear cracks initiated below the loading walls and branched, showing that crack propagation was under dynamic conditions.
As the loading rate increases, the tensile strength increases, because at a low loading rate all defects are mobilized, which decreases the tensile strength. The experimental direct tensile strengths of asphalt specimens with different ingredients were in good accordance with the corresponding results approximated by the DEM software.
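For the indirect Brazilian test mentioned above, the standard data reduction converts the peak load on a disc specimen into a tensile strength. The sketch below is that textbook formula, σ_t = 2P/(πDt); it is not the paper's CTLC reduction, whose geometry (a slab with a central hole) differs.

```python
import math

# Standard Brazilian (indirect splitting) tensile strength for a disc of
# diameter D and thickness t failing at peak diametral load P:
#   sigma_t = 2 * P / (pi * D * t)
# Note: this is the conventional formula, NOT the CTLC slab reduction.

def brazilian_tensile_strength(peak_load_n, diameter_m, thickness_m):
    """Indirect tensile strength in Pa from a Brazilian disc test."""
    return 2.0 * peak_load_n / (math.pi * diameter_m * thickness_m)
```

Comparing this indirect value with the direct CTLC result is precisely the first contribution the abstract claims.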

Target Identification for Metabolic Engineering: Incorporation of Metabolome and Transcriptome Strategies to Better Understand Metabolic Fluxes

  • Lindley, Nic
    • Proceedings of the Korean Society for Applied Microbiology Conference
    • /
    • 2004.06a
    • /
    • pp.60-61
    • /
    • 2004
  • Metabolic engineering is now a well-established discipline, used extensively to determine and execute rational strain-development strategies that improve the performance of micro-organisms employed in industrial fermentations. The basic principle of this approach is that the performance of the microbial catalyst should be characterized metabolically well enough to clearly identify the metabolic network constraints, thereby identifying the most probable targets for genetic engineering and the extent to which improvements can realistically be achieved. To harness this potential correctly, the physiological analysis of each strain needs to be undertaken under conditions as close as possible to the physico-chemical environment in which the strain evolves within the full-scale process. Furthermore, this analysis needs to cover the entire fermentation, so as to take into account the changing environment of an essentially dynamic situation in which metabolic stress is accentuated by the microbial activity itself, leading to an increasingly important stress response at the metabolic level. All too often these industrial fermentation constraints are overlooked, leading to the identification of targets whose validity within the industrial context is at best limited; the conceptual error lies in experimental design rather than inadequate methodology. New tools are becoming available that open up new possibilities in metabolic engineering and the characterization of complex metabolic networks. Traditionally, metabolic analysis was targeted at pre-identified genes and their corresponding enzymatic activities within pre-selected metabolic pathways; pathways not included at the onset were intrinsically removed from the network, giving a fundamentally localized vision of pathway functionality.
New tools from genome research extend this reductive approach to include the global characteristics of a given biological model, which can now be seen as an integrated functional unit rather than a specific sub-group of biochemical reactions, thereby facilitating the resolution of complex networks whose exact composition cannot be estimated at the onset. This global overview of whole-cell physiology enables the identification of new targets that would classically not have been suspected. Of course, as with all powerful analytical tools, post-genomic technology must be used carefully to avoid expensive errors; this is not always the case, and the data obtained need to be examined carefully to avoid embarking on the study of artefacts arising from a poor understanding of cell biology. These basic developments and the underlying concepts will be illustrated with examples from the author's laboratory concerning the industrial production of commodity chemicals using a number of industrially important bacteria. The different levels of possible investigation, and the extent to which the data can be extrapolated, will be highlighted, together with the extent to which realistic yield targets can be attained. Genetic engineering strategies and the performance of the resulting strains will be examined within the context of the prevailing experimental conditions encountered in the industrial fermentor. Examples will include the production of amino acids, vitamins, and polysaccharides. In each case, metabolic constraints can be identified and the extent to which performance can be enhanced can be predicted.
