• Title/Summary/Keyword: R&D process

Search Results: 3,652

Strategic Issues in Managing Complexity in NPD Projects (신제품개발 과정의 복잡성에 대한 주요 연구과제)

  • Kim, Jongbae
    • Asia Marketing Journal
    • /
    • v.7 no.3
    • /
    • pp.53-76
    • /
    • 2005
  • With rapid technological and market change, new product development (NPD) complexity is a significant issue that organizations continually face in their development projects. Numerous factors cause development projects to become increasingly costly and complex. A product is more likely to be successfully developed and marketed when the complexity inherent in NPD projects is clearly understood and carefully managed. Based upon previous studies, this study examines the nature and importance of complexity in developing new products and then identifies several issues in managing complexity. Issues considered include: the definition of complexity; the consequences of complexity; and methods for managing complexity in NPD projects. To achieve high performance in managing complexity in development projects, these issues need to be addressed, for example: A. Complexity inherent in NPD projects is multi-faceted and multidimensional. What factors need to be considered in defining and/or measuring complexity in a development project? For example, is it sufficient if complexity is defined only from a technological perspective, or is it more desirable to consider the entire array of complexity sources which NPD teams with different functions (e.g., marketing, R&D, manufacturing, etc.) face in the development process? Moreover, is it sufficient if complexity is measured only once during a development project, or is it more effective and useful to trace complexity changes over the entire development life cycle? B. Complexity inherent in a project can have negative as well as positive influences on NPD performance. Thus, which complexity impacts are usually considered negative and which are positive? Project complexity also can affect the entire organization, and any complexity is better assessed from a broader and longer-term perspective. What are some ways in which the long-term impact of complexity on an organization can be assessed and managed? C. Based upon previous studies, several approaches for managing complexity are derived. What are the weaknesses and strengths of each approach? Is there a desirable hierarchy or order among these approaches when more than one approach is used? Are there differences in the outcomes according to industry and product types (incremental or radical)? Answers to these and other questions can help organizations effectively manage the complexity inherent in most development projects. Complexity is worthy of additional attention from researchers and practitioners alike. Large-scale empirical investigations, jointly conducted by researchers and practitioners, will help gain useful insights into understanding and managing complexity. Those organizations that can accurately identify, assess, and manage the complexity inherent in projects are likely to gain important competitive advantages.


Development of a Simultaneous Analytical Method for Azocyclotin, Cyhexatin, and Fenbutatin Oxide Detection in Livestock Products using the LC-MS/MS (LC-MS/MS를 이용한 축산물 중 유기주석계 농약 Azocyclotin, Cyhexatin 및 Fenbutatin oxide의 동시시험법 개발)

  • Nam Young Kim;Eun-Ji Park;So-Ra Park;Jung Mi Lee;Yong Hyun Jung;Hae Jung Yoon
    • Journal of Food Hygiene and Safety
    • /
    • v.38 no.5
    • /
    • pp.361-372
    • /
    • 2023
  • Organotin pesticide is used as an acaricide in agriculture and may contaminate livestock products. This study aims to develop a rapid and straightforward analytical method for detecting organotin pesticides, specifically azocyclotin, cyhexatin, and fenbutatin oxide, in various livestock products, including beef, pork, chicken, egg, and milk, using liquid chromatography-tandem mass spectrometry (LC-MS/MS). The extraction process involved the use of 1% acetic acid in a mixture of acetonitrile and ethyl acetate (1:1). This was followed by the addition of anhydrous magnesium sulfate (MgSO4) and anhydrous sodium chloride. The extracts were subsequently purified using octadecyl (C18) and primary secondary amine (PSA), after which the supernatant was evaporated. Organotin pesticide recovery ranged from 75.7 to 115.3%, with a coefficient of variation (CV) below 25.3%. The results meet the criteria range of the Codex guidelines (CODEX CAC/GL 40). The analytical method in this study will be invaluable for the analysis of organotin pesticides in livestock products.
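As a rough illustration of the validation arithmetic behind figures like these, the sketch below computes per-replicate percent recovery and the coefficient of variation for a hypothetical set of spiked-sample measurements (all numbers are invented; the abstract's 75.7-115.3% recovery and CV below 25.3% come from the authors' actual LC-MS/MS runs):

```python
from statistics import mean, stdev

def recovery_stats(measured, spiked):
    """Percent recovery for each replicate and the coefficient of
    variation (CV, %) across replicates."""
    recs = [100.0 * m / spiked for m in measured]
    cv = 100.0 * stdev(recs) / mean(recs)
    return recs, cv

# Hypothetical replicate concentrations (ug/kg) for a 10 ug/kg spike
measured = [8.9, 9.4, 9.1, 9.8, 9.0]
recs, cv = recovery_stats(measured, 10.0)
print(round(mean(recs), 1), round(cv, 1))
```

Method-validation guidelines such as CODEX CAC/GL 40 set acceptance windows on exactly these two quantities, which is why the abstract reports them.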

Development of deep learning network based low-quality image enhancement techniques for improving foreign object detection performance (이물 객체 탐지 성능 개선을 위한 딥러닝 네트워크 기반 저품질 영상 개선 기법 개발)

  • Ki-Yeol Eom;Byeong-Seok Min
    • Journal of Internet Computing and Services
    • /
    • v.25 no.1
    • /
    • pp.99-107
    • /
    • 2024
  • Along with economic growth and industrial development, there is increasing demand for the production of various electronic components and devices such as semiconductors, SMT components, and electric battery products. However, these products may contain foreign substances introduced during the manufacturing process, such as iron, aluminum, or plastic, which can lead to serious problems or malfunction of the product, and even fires in electric vehicles. To solve these problems, it is necessary to determine whether there are foreign materials inside the product, and many tests have been done by means of non-destructive testing methods such as ultrasound or X-ray. Nevertheless, there are technical challenges and limitations in acquiring X-ray images and determining the presence of foreign materials. In particular, small or low-density foreign materials may not be visible even when X-ray equipment is used, and noise can also make it difficult to detect foreign objects. Moreover, to meet manufacturing speed requirements, the X-ray acquisition time should be reduced, which can result in a very low signal-to-noise ratio (SNR), lowering the foreign material detection accuracy. Therefore, in this paper, we propose a five-step approach to overcome the limitations of low-quality images, which make it challenging to detect foreign substances. First, the global contrast of the X-ray image is increased through histogram stretching. Second, to strengthen the high-frequency signal and local contrast, a local contrast enhancement technique is applied. Third, unsharp masking is applied to enhance edges, making objects more visible. Fourth, the super-resolution method of the Residual Dense Block (RDB) is used for noise reduction and image enhancement. Last, the YOLOv5 algorithm is employed to train on and detect foreign objects. Experimental results using the proposed method show an improvement of more than 10% in performance metrics such as precision compared to the original low-quality images.
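The first and third steps of this kind of pipeline (global histogram stretching and unsharp masking) can be sketched with plain NumPy. This is a generic illustration of those standard techniques, not the authors' implementation; the percentile limits, blur window, and synthetic "X-ray" frame are assumptions:

```python
import numpy as np

def stretch_contrast(img, lo_pct=1, hi_pct=99):
    """Step 1: global histogram stretching to the full 0-255 range."""
    lo, hi = np.percentile(img, [lo_pct, hi_pct])
    out = (img.astype(np.float64) - lo) / max(hi - lo, 1e-6)
    return np.clip(out, 0, 1) * 255

def unsharp_mask(img, amount=1.0):
    """Step 3: sharpen edges by adding back the high-frequency residual
    (image minus a 3x3 box blur)."""
    f = img.astype(np.float64)
    pad = np.pad(f, 1, mode="edge")
    # 3x3 box blur built from the nine shifted copies of the padded image
    blur = sum(pad[i:i + f.shape[0], j:j + f.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return np.clip(f + amount * (f - blur), 0, 255)

rng = np.random.default_rng(0)
xray = rng.integers(90, 160, size=(64, 64)).astype(np.uint8)  # low-contrast frame
enhanced = unsharp_mask(stretch_contrast(xray))
print(int(xray.max() - xray.min()), int(enhanced.max() - enhanced.min()))
```

The stretched-and-sharpened frame spans the full intensity range, which is the precondition for the later super-resolution and detection stages to see usable contrast.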

Development of a Model of Brain-based Evolutionary Scientific Teaching for Learning (뇌기반 진화적 과학 교수학습 모형의 개발)

  • Lim, Chae-Seong
    • Journal of The Korean Association For Science Education
    • /
    • v.29 no.8
    • /
    • pp.990-1010
    • /
    • 2009
  • To derive brain-based evolutionary educational principles, this study examined studies on the structural and functional characteristics of the human brain, on biological evolution occurring between and within organisms, and on the evolutionary attributes embedded in science itself and in individual scientists' scientific activities. On the basis of the core characteristics of the human brain and the framework of universal Darwinism or universal selectionism, consisting of generation-test-retention (g-t-r) processes, a Model of Brain-based Evolutionary Scientific Teaching for Learning (BEST-L) was developed. The model consists of three components, three steps, and an assessment part. The three components are the affective (A), behavioral (B), and cognitive (C) components. Each component consists of three steps: Diversifying → Emulating (Executing, Estimating, Evaluating) → Furthering (ABC-DEF). The model is 'brain-based' in its consecutive incorporation of the affective component, based on the limbic system of the human brain associated with emotions; the behavioral component, associated with the occipital lobes performing visual processing, the temporal lobes performing language generation and understanding, and the parietal lobes, which receive and process sensory information and execute motor activities of the body; and the cognitive component, based on the prefrontal lobes involved in thinking, planning, judging, and problem solving. On the other hand, the model is 'evolutionary' in that it proceeds through the diversifying step, which generates variants in each component; the emulating step, which tests and selects useful or valuable variants; and the furthering step, which extends or applies the selected variants. For the three ABC components, to reflect the importance of emotional factors as a starting point in scientific activity as well as the dominant role of the limbic system relative to the cortex of the brain, the model emphasizes the DARWIN (Driving Affective Realm for Whole Intellectual Network) approach.

The Ability of Anti-tumor Necrosis Factor Alpha (TNF-α) Antibodies Produced in Sheep Colostrums

  • Yun, Sung-Seob
    • Korean Society of Dairy Science and Biotechnology: Conference Proceedings
    • /
    • 2007.09a
    • /
    • pp.49-58
    • /
    • 2007
  • The inflammatory process leads to the well-known mucosal damage and thereby to a further disturbance of the epithelial barrier function, resulting in abnormal intestinal wall function and further accelerating the inflammatory process [1]. Despite these observations, the etiology and pathogenesis of inflammatory bowel disease (IBD) remain rather unclear. Many studies over the past few years have led to great advances in understanding IBD and its underlying pathophysiologic mechanisms. From the current understanding, it is likely that chronic inflammation in IBD is due to aggressive cellular immune responses, including increased serum concentrations of various cytokines. Therefore, the expression of targeted molecules can be specifically eliminated directly at the transcriptional level. Interesting therapeutic trials are expected against adhesion molecules and pro-inflammatory cytokines such as TNF-α. The future development of immune therapies in IBD therefore holds great promise for better treatment modalities and will also open important new insights into a further understanding of inflammation pathophysiology. Treatment with cytokine inhibitors such as Immunex (Enbrel) and J&J/Centocor (Remicade), which are mouse-derived monoclonal antibodies, has been shown in several studies to modulate patients' symptoms; however, these TNF inhibitors also have adverse immune-related effects, are costly, and must be administered by injection. Because of the eventual development of unwanted side effects, these two products are used in only a select patient population. The present study was performed to elucidate the ability of TNF-α antibodies produced in sheep colostrum to neutralize TNF-α action in a cell-based bioassay and in a small animal model of intestinal inflammation. In the in vitro study, the inhibitory effect of the anti-TNF-α antibody from the sheep was determined by cell bioassay. The antibody from the sheep at a 1 in 10,000 dilution was able to completely inhibit TNF-α activity in the cell bioassay. Antibodies from the same sheep, but from different milkings, exhibited some variability in inhibition of TNF-α activity, but all were greater than the control sample. In the in vivo study, the degree of inflammation was too severe for the experiment; despite the initial pilot trial, main trial 1 was unable to detect any effect of the antibody in reducing the impact of PAF and LPS. Main rat trial 2 resulted in no significant symptoms, such as the characteristic acute diarrhea and weight loss of colitis. This study suggested that colostrum from sheep immunized against TNF-α significantly inhibited TNF-α bioactivity in the cell-based assay, while the higher than anticipated variability in the two animal models precluded assessment of the ability of the antibody to prevent TNF-α-induced intestinal damage in the intact animal. Further study will be required to find an alternative animal model that is more suitable for testing anti-TNF-α IgA therapy for reducing the impact of inflammation on gut dysfunction. Subsequent pre-clinical and clinical testing will also need the generation of more antibody, as current supplies are low.


Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy-logic-based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to pre-defined shapes. These kinds of functions are able to cover a large spectrum of applications with limited usage of memory, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to computation on the medium points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions at such points [3,10,14,15]. Such a solution provides a satisfying computational speed, a very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can also occur: it is indeed possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few.
More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on such hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for the 32 truth levels, 3 for the 8 membership functions, and 7 for the 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by: Length = nfm × (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values on any element of the universe of discourse, dm(m) is the dimension of a membership value m, and dm(fm) is the dimension of the word representing the index of the corresponding membership function. In our case, then, Length = 3 × (5 + 3) = 24. The memory dimension is therefore 128 × 24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of every fuzzy set; the fuzzy-set word dimension would be 8 × 5 bits, and therefore the dimension of the memory would have been 128 × 40 bits. Coherently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value on at most three fuzzy sets.
Focusing on elements 32, 64, and 96 of the universe of discourse, they will be memorized as follows. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derives from the rule and is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is in this way reduced, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to the performance of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value on each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization process of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of such parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; and weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method.
The number of non-null membership values on any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
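The word-length arithmetic in this abstract can be checked directly. The small sketch below (variable names are mine) reproduces the 24-bit word and the 128 × 24 versus 128 × 40 bit memory comparison:

```python
def word_length(nfm, dm_m, dm_fm):
    """Length = nfm * (dm(m) + dm(fm)): per-element word that stores, for
    each of the nfm possibly non-null fuzzy sets, the membership value
    (dm_m bits) plus the index of that fuzzy set (dm_fm bits)."""
    return nfm * (dm_m + dm_fm)

U = 128           # elements in the universe of discourse -> 7 address bits
levels_bits = 5   # 32 discretization levels for membership values
index_bits = 3    # 8 fuzzy sets in the term set
nfm = 3           # at most 3 non-null memberships per element

compact = U * word_length(nfm, levels_bits, index_bits)  # proposed scheme
full = U * (8 * levels_bits)                             # memorize all 8 values
print(word_length(nfm, levels_bits, index_bits), compact, full)
```

The compact layout needs 3072 bits against 5120 for full vectorial memorization, which is the saving the paper claims.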


Study on the effect of small and medium-sized businesses being selected as suitable business types, on the franchise industry (중소기업적합업종선정이 프랜차이즈산업에 미치는 영향에 관한 연구)

  • Kang, Chang-Dong;Shin, Geon-Chel;Jang, Jae Nam
    • Journal of Distribution Research
    • /
    • v.17 no.5
    • /
    • pp.1-23
    • /
    • 2012
  • The conflict between major corporations and small and medium-sized businesses is being aggravated, the trickle-down effect is not working properly, and, as the controversy surrounding the effectiveness of the business-limiting system continues to swirl, the plan proposed to protect the business domain of small and medium-sized businesses, resolve polarization between these businesses and large corporations, and protect small family-run stores is the suitable business type designation system for small and medium-sized businesses. The current implementation of this system involves receiving applications for 234 items among the suitable business types and items from small and medium-sized businesses in manufacturing, and then selecting the items of the consultative group by analyzing and investigating the actual conditions. Suitable business type designation in the service industry will give priority to business types that are experiencing social conflict. Three major classifications of the service industry, related to the livelihood of small and medium-sized businesses, will be designated first, and the designation will subsequently be expanded sequentially. However, there is concern that designation as a suitable business type or item will hinder the growth motive of small to medium-sized businesses, and that designation may also cause a decrease in consumer welfare. It is also highly likely that the system will operate as a form of prior regulation, cause side effects by systematically limiting competition, and violate the main provisions of the FTA system. Moreover, it is pointed out that the system does not sufficiently reflect the reverse-discrimination factor against large corporations.
Because conflict between small to medium-sized businesses and large corporations results from the expansion of corporations into the service industry, which is unrelated to their key industries, it is necessary to introduce an advanced contract method like a master franchise or local franchise system and to develop local small to medium-sized businesses through a franchise system in order to protect these businesses and dealers. This method may contribute to stronger competitiveness of small to medium-sized franchise businesses by advancing their competitiveness and operational methods a step further, but the designation system also has many negative aspects. First, as revealed by the Ministry of Knowledge Economy, the franchise industry is contributing to the strengthening of competitiveness through economies of scale by organizing existing individual proprietors and increasing the success rate of new businesses. It has also been presented as a government response measure to stabilize the economy of ordinary people, is emphasized as a 'useful way' to revitalize the service industry and improve the competitiveness of individual proprietors, and has contributed to creating jobs and expanding the domestic market by providing various services to consumers. From this viewpoint, franchises fit the purpose of the suitable business type system and are not against it. Second, designation as a suitable business type may decrease investment in overseas expansion, R&D, and food safety, as well as negatively affect the expansion of overseas corporations that have entered the domestic market, due to the contraction and low morale of large domestic franchise corporations that are internationally competitive.
Also, because domestic franchise businesses are hard pressed to secure competitiveness against multinational overseas franchise corporations operating in Korea, the system may cause difficulty for domestic franchise businesses in securing international competitiveness and may also result in reverse discrimination against these overseas franchise corporations. Third, the designation of suitable business types and items can limit the opportunity of selection for consumers who have up to now used those products, reducing consumer welfare. Also, because the range of consumer selection may be reduced when a few small to medium-sized businesses monopolize the market, causing reverse discrimination among these businesses, the role of determining the utility of products must be left to the consumer, not the government. Lastly, because fair trade is already secured by the enforcement of the franchise trade law and the best trade standard of the Fair Trade Commission, it is desirable that these be supplemented where deficient in the future; overlapping regulation through suitable business type designation is an excessive restriction on the franchise industry. It is now necessary to establish in the domestic franchise industry an environment where a global franchise corporation, capable of spreading Korean culture around the world, can grow, and active support by the government is needed. Therefore, systems that do not consider the process or background of the growth of franchise businesses, and that harm these businesses for the sole reason that they are large corporations, must be removed. Inhibiting the growth of franchise enterprises may decrease the sales of franchise stores, in some cases even bankrupt them, and cause other problems.
Therefore, the suitable business type system should not hinder large corporations; rather, since small dealers and small to medium-sized businesses both aim at improved competitiveness and shared growth, franchise corporations that maintain cooperative business relations with small dealers and small to medium-sized businesses should not be included in this system.


DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed.
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM.
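For readers unfamiliar with the gravity model being calibrated here, a minimal doubly-constrained sketch follows. The zone sizes, friction-factor exponent, and trip-end numbers are invented for illustration and are not taken from the Wisconsin data:

```python
import numpy as np

def gravity_trips(P, A, F, iters=50):
    """Tiny gravity-model sketch: T[i,j] is proportional to P[i]*A[j]*F[i,j],
    with alternating row/column balancing (Furness iteration) so that trip
    ends match the productions P and attractions A."""
    T = np.outer(P, A) * F
    for _ in range(iters):
        T *= (P / T.sum(axis=1))[:, None]   # match productions (row sums)
        T *= (A / T.sum(axis=0))[None, :]   # match attractions (column sums)
    return T

P = np.array([100.0, 200.0, 50.0])      # hypothetical zonal truck productions
A = np.array([150.0, 120.0, 80.0])      # hypothetical zonal attractions
d = np.array([[1, 4, 7], [4, 1, 3], [7, 3, 1]], dtype=float)  # zone distances
F = d ** -1.5                           # assumed friction factor curve
T = gravity_trips(P, A, F)
print(np.round(T.sum(axis=1), 1), np.round(T.sum(axis=0), 1))
```

Calibration, as described above, means adjusting the curve F until the trip length frequencies produced by T match the observed OD TLFs.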
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. More importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used for evaluation of the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC (link volume to ground count) ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained.
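The link adjustment factor described above is simple enough to state in code. In this sketch the zone names, volumes, and the single-link scenario are hypothetical; the real procedure applies the factor to both origin and destination zones across many links:

```python
def selink_factor(ground_count, assigned_volume):
    """Link adjustment factor: actual counted volume over the volume the
    traffic assignment loaded onto the selected link."""
    return ground_count / assigned_volume

def adjust_zone_ends(productions, factor, zones_using_link):
    """Scale the productions of every origin zone whose trips use the link."""
    return {z: p * factor if z in zones_using_link else p
            for z, p in productions.items()}

f = selink_factor(ground_count=958, assigned_volume=1000)  # LV/GC-style ratio
adjusted = adjust_zone_ends({"A": 100.0, "B": 40.0}, f, {"A"})
print(round(f, 3), round(adjusted["A"], 1), adjusted["B"])
```

Iterating this adjustment and re-running the assignment is what drives the three SELINK rounds reported in the abstract.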
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of the %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, of 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible by using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the least value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume.
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production trip rate (total adjusted productions/total population) and a new trip attraction trip rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8 while the productions for relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable.
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH. This implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, provide useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted by using the GM truck forecasting model. Four scenarios are used. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes.
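The 18.3% figure quoted above is the shortfall of the GM forecast relative to the WisDOT computation, which can be checked directly:

```python
gm_vmt = 2.975      # billion VMT, GM truck forecasting model (1990)
wisdot_vmt = 3.642  # billion VMT, WisDOT computation
pct_less = 100.0 * (wisdot_vmt - gm_vmt) / wisdot_vmt
print(round(pct_less, 1))  # 18.3
```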
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.

  • PDF

The Characteristics and Performances of Manufacturing SMEs that Utilize Public Information Support Infrastructure (공공 정보지원 인프라 활용한 제조 중소기업의 특징과 성과에 관한 연구)

  • Kim, Keun-Hwan;Kwon, Taehoon;Jun, Seung-pyo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.1-33
    • /
    • 2019
  • The small and medium sized enterprises (hereinafter SMEs) are already at a competitive disadvantage when compared to large companies with more abundant resources. Manufacturing SMEs not only need a great deal of information for new product development to sustain growth and survival, but also seek networking to overcome their resource constraints; their small size, however, limits both. In a new era in which connectivity increases the complexity and uncertainty of the business environment, SMEs are increasingly urged to find information and solve networking problems. To solve these problems, government funded research institutes play an important role and have a duty to resolve the information asymmetry problem of SMEs. The purpose of this study is to identify the differentiating characteristics of SMEs that utilize the public information support infrastructure provided by government funded institutions to enhance the innovation capacity of SMEs, and how this utilization contributes to corporate performance. We argue that an infrastructure for providing information support to SMEs is needed as part of the effort to strengthen the role of government funded institutions; in this study, we specifically identify the target of such a policy and furthermore empirically demonstrate the effects of such policy-based efforts. Our goal is to help establish strategies for building the information supporting infrastructure. To achieve this purpose, we first classified the characteristics of SMEs that have been found to utilize the information supporting infrastructure provided by government funded institutions. This allows us to verify whether selection bias appears in the analyzed group, which helps clarify the interpretative limits of our study results.
Next, we performed mediator and moderator effect analyses for multiple variables to examine the process through which the use of the information supporting infrastructure improved external networking capabilities and in turn enhanced product competitiveness. This analysis helps identify the key factors to focus on when offering indirect support to SMEs through the information supporting infrastructure, which in turn supports more efficient management of research related to SME supporting policies implemented by government funded institutions. The results of this study are as follows. First, SMEs that used the information supporting infrastructure showed a significant difference in size compared to domestic R&D SMEs, but no significant difference appeared in the cluster analysis that considered various variables. Based on these findings, we confirmed that SMEs that use the information supporting infrastructure are larger and include a relatively higher proportion of companies that transact extensively with large companies, compared to the general group of SMEs. We also found that companies already receiving support from the information infrastructure include a high concentration of companies that need collaboration with government funded institutions. Second, among the SMEs that use the information supporting infrastructure, we found that increasing external networking capabilities contributed to enhancing product competitiveness; this was not a direct effect, but an indirect contribution made by increasing open marketing capabilities: in other words, the result of an indirect-only mediator effect.
Also, the number of times the company received additional support in this process through mentoring related to information utilization was found to have a mediated moderator effect on improving external networking capabilities and, in turn, strengthening product competitiveness. The results of this study provide several insights that will help establish policies. The profile of users of KISTI's information support infrastructure may suggest that marketing support is already well underway, but the infrastructure may in fact be intentionally supporting groups that are already able to achieve good performance. As a result, the government should set clear priorities: whether to support underdeveloped companies or to aid those already performing well. Through our research, we have identified how the public information infrastructure contributes to product competitiveness, from which several policy implications can be drawn. First, the public information support infrastructure should have the capability to enhance the ability to interact with, or to find, the experts who provide the required information. Second, if the utilization of the public information support (online) infrastructure is effective, it is not necessary to continuously provide informational mentoring, which is a parallel offline support; rather, offline support such as mentoring should be used as a device for monitoring abnormal symptoms. Third, SMEs should improve their ability to utilize the infrastructure, because the effect of enhancing networking capacity and product competitiveness through the public information support infrastructure appears in most types of companies rather than in specific SMEs.

Studies on the Physical and Chemical Denatures of Cocoon Bave Sericin throughout Silk Filature Processes (제사과정 전후에서의 견사세리신의 물리화학적 성질변화에 관한 연구)

  • 남중희
    • Journal of Sericultural and Entomological Science
    • /
    • v.16 no.1
    • /
    • pp.21-48
    • /
    • 1974
  • The studies were carried out to disclose the physical and chemical properties of the sericin fraction obtained from silk cocoon shells and its characteristics of swelling and solubility. The following results were obtained. I. The physical and chemical properties of the sericin fraction. 1) In contrast to the easily water-soluble sericin, the hardly soluble sericin contains fewer amino acids with polar side radicals, while hardly soluble amino acids such as alanine and leucine were detected. 2) The easily soluble amino acids were found mainly on the outer part of the fibroin, but the hardly soluble amino acids were located in the parts near the fibroin. 3) The swelling and solubility of the sericin could hardly be assessed by analysis of the amino acid composition alone, and could be considered to be closely related to the composition of the sericin crystal and its secondary structure. 4) The X-ray patterns of the cocoon filament were ring shaped, but they disappeared after the degumming treatment. 5) The sericin of the tussah silkworm (A. pernyi) showed stronger circular patterns in the meridian than that of the regular silkworm (Bombyx mori). 6) There was no pattern difference between Fraction A and B. 7) The X-ray diffraction patterns of Sericin I, II and III were similar except for the interference at 8.85Å (side chain spacing). 8) The amino acids above 150 in molecular weight, such as Cys, Tyr, Phe, His, and Arg, were not found quantitatively by the 60-minute hydrolysis (6N-HCl). 9) The X-ray pattern at 4.6Å tended to disappear with hot-water, ether, and alcohol treatment. 10) The partial hydrolysis of sericin showed a circular interference (2Å) on the meridian. 11) The sericin pellet after hydrolysis was considered to be peptides composed of specific amino acids. 12) The decomposing temperature of Sericin III was higher than that of Sericin I and II.
13) The thermogram of the sericin from the inner portion of the cocoon shell had double endothermic peaks at 165°C and 245°C, and its decomposing temperature was higher than that of sericin from other portions. 14) The infrared spectroscopic properties among Sericin I, II, III and sericin extracted from each layer portion of the cocoon shell were similar. II. The characteristics of sericin swelling and solubility related to silk processing. 1) Fifteen minutes was required to dehydrate the free moisture of cocoon shells with centrifugal force controlled at 13×10⁴ dyne/g at 3,000 R.P.M. 2) It took 30 minutes for the sericin to show a positive reaction with the Folin-Ciocalteu reagent at room temperature. 3) The measurable wave length of the visible radiation was 500-750 mμ, and the highest absorbance was observed at the wave length of 650 mμ. 4) The colorimetric analysis should be conducted at 650 mμ for low concentrations (10 μg/mℓ), and at 500 mμ for higher concentrations to obtain an exact analysis. 5) The absorbance curves of sericin and egg albumin at different wave lengths were similar, but the absorbance of the former was slightly higher than that of the latter. 6) The quantity of sericin measured by the colorimetric analysis turned out to be less than by the Kjeldahl method. 7) Both temperature and duration in the cocoon cooking process have much effect on the swelling and solubility of the cocoon shells, but the temperature was more influential than the duration of the treatment. 8) The factorial relation between the temperature and the duration of the cocoon cooking treatment with respect to sericin swelling and solubility showed that the treatment duration should be gradually increased to reach optimum swelling and solubility of sericin at a low temperature (70°C); at high temperature, however, they increased more sharply.
9) The higher the temperature in the drying of fresh cocoons, the less sericin swelling and solubility were obtained. 10) For a given cooking duration, the heavier the cocoon shell, the less the swelling and solubility obtained. 11) It was considered that there are differences in swelling and solubility between the filaments of each cocoon layer. 12) Sericin swelling and solubility in the cocoon filament were decreased by wax extraction. 13) The ionic surface active agent accelerated the swelling and solubility of the sericin in the range of pH 6-7. 14) Under the same conditions as above, the cationic agent was absorbed into the sericin. 15) When Ca and Mg in the reeling water increased, its pH value drifted toward acidity. 16) A buffering action was observed between the sericin and the water hardness constituents in the reeling water. 17) The effect of calcium on the swelling and solubility of the sericin was more moderate than that of magnesium. 18) The solutes of the water hardness constituents increased the electric conductivity of the reeling water.

  • PDF