• Title/Summary/Keyword: 지연비용 (delay cost)


Effectiveness Analysis for Traffic and Pedestrian Volumes of Pedestrian Pushbutton Signal (차량 및 보행자 교통량에 따른 보행자 작동신호기의 효과 분석)

  • Cho, Han-Seon;Park, Ji-Hyung;Noh, Jung-Hyun
    • International Journal of Highway Engineering
    • /
    • v.9 no.4
    • /
    • pp.33-43
    • /
    • 2007
  • Because signal controllers at mid-block crosswalks usually provide a pedestrian phase every cycle according to a fixed signal plan, pedestrian signals are given even when there is no pedestrian demand. Consequently, the signal operates inefficiently, and this may cause drivers to experience unnecessary delay or to violate the signal. Although pushbuttons have recently been installed to improve the efficiency of pedestrian signal control at mid-block crosswalks and to improve pedestrian safety, they are not widespread nationwide in Korea because of the cost of the pushbutton equipment and a lack of recognition of its benefits. In this study, the effectiveness of the pushbutton in reducing vehicle delay was verified through a before-and-after study at 4 sites using the traffic micro-simulation model VISSIM. To evaluate the viability of the pushbutton, a benefit/cost analysis was also performed for the 4 sites; the B/C ratio was greater than 1 at all of them. A sensitivity analysis of traffic volume and pedestrian volume was performed to identify the impact of both volumes on pushbutton operation, and a benefit/cost analysis was performed for all scenarios (an illustrative sketch of such a calculation follows this entry). It was found that when the pedestrian volume exceeds 90 ped/h, the pedestrian signal operates the same as under the fixed signal plan; that is, the pushbutton provides no benefit at all once the pedestrian volume exceeds 90 ped/h. When the pedestrian volume is 90 ped/h or less and the traffic volume is greater than 2,500 veh/h, the B/C ratio is greater than 1. It was also found that the benefit increases as traffic volume increases and pedestrian volume decreases. Through the sensitivity analysis and benefit/cost analysis, this study develops criteria for installing pushbuttons at mid-block crosswalks; the results may be used as criteria for expanding the pushbutton system.

  • PDF
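
A minimal sketch of a benefit/cost calculation of the kind described in the abstract above, written in Python. The unit value of time, operating hours, equipment cost, and delay savings are illustrative assumptions, not the values used in the cited study.

    # Hypothetical benefit/cost sketch for a mid-block pedestrian pushbutton.
    # All unit costs and delay savings are assumed for illustration only.

    VALUE_OF_TIME_PER_VEH_HOUR = 15000   # won per vehicle-hour (assumed)
    ANNUAL_OPERATING_DAYS = 365

    def annual_benefit(delay_saving_sec_per_veh, traffic_volume_veh_per_hour,
                       hours_per_day=16):
        """Annual delay-saving benefit from skipping unneeded pedestrian phases."""
        veh_hours_saved_per_day = (delay_saving_sec_per_veh / 3600.0
                                   * traffic_volume_veh_per_hour * hours_per_day)
        return (veh_hours_saved_per_day * ANNUAL_OPERATING_DAYS
                * VALUE_OF_TIME_PER_VEH_HOUR)

    def benefit_cost_ratio(annual_benefit_won, equipment_cost_won,
                           annual_maintenance_won, analysis_years=5):
        """Simple undiscounted B/C ratio over the analysis period."""
        total_benefit = annual_benefit_won * analysis_years
        total_cost = equipment_cost_won + annual_maintenance_won * analysis_years
        return total_benefit / total_cost

    # Example scenario near the 2,500 veh/h threshold reported in the study.
    b = annual_benefit(delay_saving_sec_per_veh=1.5, traffic_volume_veh_per_hour=2500)
    print(benefit_cost_ratio(b, equipment_cost_won=20_000_000,
                             annual_maintenance_won=1_000_000))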

Production and characterization of rice starch from broken rice using alkaline steeping and enzymatic digestion methods (쇄미로부터 알칼리침지법과 효소소화법을 이용한 쌀전분의 생산 및 특성)

  • Kim, Reejae;Lim, SongI;Kim, Hyun-Seok
    • Korean Journal of Food Science and Technology
    • /
    • v.53 no.6
    • /
    • pp.731-738
    • /
    • 2021
  • This study investigated the physicochemical properties of rice starch isolated from broken rice using alkaline steeping (AKL) and enzymatic digestion (ENZ) methods. Broken rice starch (BRS) prepared by AKL and ENZ possessed crude protein contents (0.6-1.4%) within the range acceptable for commercial native starch products and was classified as an intermediate-amylose rice starch. AKL-BRS and ENZ-BRS showed a typical A-type crystal packing arrangement with small variations in their relative crystallinity. ENZ-BRS exhibited higher gelatinization onset and peak temperatures and a narrower gelatinization temperature range than AKL-BRS, indicating that annealing occurred in ENZ-BRS. Lower swelling power and solubility were generally observed in ENZ-BRS. ENZ-BRS also showed slower viscosity development, higher peak and trough viscosities, and lower breakdown, final, and setback viscosities compared to AKL-BRS. These results are ascribed to the annealing phenomenon in ENZ-BRS. Overall, BRS produced from inexpensive broken rice by AKL and ENZ could contribute to the expanded utilization of rice starch in food and non-food industries.

Development of 3D Printed Snack-dish for the Elderly with Dementia (3D 프린팅 기술을 활용한 치매노인 전용 영양(수분)보충 식품섭취용기 개발)

  • Lee, Ji-Yeon;Kim, Cheol-Ho;Kim, Kug-Weon;Lee, Kyong-Ae;Koh, Kwangoh;Kim, Hee-Seon
    • Korean Journal of Community Nutrition
    • /
    • v.26 no.5
    • /
    • pp.327-336
    • /
    • 2021
  • Objectives: This study was conducted to create a 3D-printable snack-dish model for elderly people who have low food or fluid intake along with barriers to eating. Methods: The 3D model concept was decided through a hybrid brainstorming method. Experts were assigned to the creation process based on their professional areas, such as clinical nutrition, food hygiene, and chemical safety. After serial feedback rounds, a grape shape was suggested as the final model. After various concept sketches and clay models, 3D-printing technology was applied to produce a prototype. Results: The 3D design modeling process was conducted with the SolidWorks program. After considering the Dietary Reference Intakes for Koreans (KDRIs) and other survey data, the appropriate supplementary water serving volume was set at 285 mL, which meets 30% of the Adequate Intake. To accommodate printing output conditions, the model has six grapes in one bunch with a safety lid. An FDM printer and PLA filaments were used for food hygiene and safety. To stimulate cognitive function and interest in eating, the numbers one to six were engraved on the lid of the final 3D model. Conclusions: The newly developed 3D model was designed to increase intake of nutrients and water by the elderly with dementia during snack time. Since dementia patients often forget to eat, the numbers engraved on the grapes are intended to stimulate cognitive functions related to swallowing and chewing. We suggest that future studies investigate which types of foods or fluids are suitable for the developed 3D-model snack dish.
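
For reference, the adequate-intake baseline implied by the figures above can be back-calculated; the abstract itself does not state the underlying KDRIs value, so the 950 mL/day figure below is an inference, not a quoted number.

    0.30 × AI = 285 mL  ⟹  AI ≈ 285 / 0.30 = 950 mL/day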

A Prediction of N-value Using Artificial Neural Network (인공신경망을 이용한 N치 예측)

  • Kim, Kwang Myung;Park, Hyoung June;Goo, Tae Hun;Kim, Hyung Chan
    • The Journal of Engineering Geology
    • /
    • v.30 no.4
    • /
    • pp.457-468
    • /
    • 2020
  • Problems arising during pile design work for plant, civil, and architectural construction mostly come from the uncertainty of geotechnical characteristics. In particular, the N-value measured through the Standard Penetration Test (SPT) is the most important piece of data, but it is difficult to obtain N-values by drilling investigation throughout the entire target area. There are many constraints, such as licensing, time, cost, equipment access, and residential complaints, and in overseas projects it is often impossible to obtain geotechnical characteristics through drilling investigation within a short bidding period. The geotechnical characteristics at non-drilling investigation points are usually determined by the engineer's empirical judgment, which can lead to errors in pile design and quantity calculation, causing construction delays and cost increases. This problem could be overcome if the N-value could be predicted at the non-drilling investigation points using the limited drilling investigation data available. This study was conducted to predict the N-value using an Artificial Neural Network (ANN), one of the Artificial Intelligence (AI) methods. An ANN processes a limited amount of geotechnical data in a manner analogous to a biological neural process, providing more reliable results for the input variables. The purpose of this study is to predict N-values at non-drilling investigation points from patterns learned by multi-layer perceptron and error back-propagation algorithms using minimal geotechnical data. The reliability of the values predicted by the AI method was reviewed against the measured values, and high reliability was confirmed. To further resolve geotechnical uncertainty, we will perform a sensitivity analysis of the input variables in the next steps to increase the learning effect, which may require some technical updates to the program. We hope that our study will be helpful for design work in the future.
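
A minimal sketch of the kind of multi-layer perceptron described above, using scikit-learn's MLPRegressor as a stand-in. The paper does not specify its network library, layer sizes, or input features, so the synthetic borehole features below (coordinates, elevation, depth) are assumptions for illustration only.

    # Illustrative MLP for predicting SPT N-values at points without boreholes.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Synthetic stand-in for drilling records: columns = [x, y, elevation, depth]
    X = rng.uniform(0, 100, size=(300, 4))
    # Synthetic N-values loosely increasing with depth, capped at refusal (50)
    y = np.clip(2 + 0.8 * X[:, 3] + rng.normal(0, 3, 300), 0, 50)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                        random_state=0)
    scaler = StandardScaler().fit(X_train)

    # Multi-layer perceptron trained with error back-propagation (Adam optimizer)
    model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
    model.fit(scaler.transform(X_train), y_train)

    print("R^2 on held-out boreholes:", model.score(scaler.transform(X_test), y_test))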

A Machine Learning-based Total Production Time Prediction Method for Customized-Manufacturing Companies (주문생산 기업을 위한 기계학습 기반 총생산시간 예측 기법)

  • Park, Do-Myung;Choi, HyungRim;Park, Byung-Kwon
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.177-190
    • /
    • 2021
  • Due to the development of fourth-industrial-revolution technology, efforts are being made to improve areas that humans cannot handle well by utilizing artificial intelligence techniques such as machine learning. Make-to-order (customized) production companies also want to reduce corporate risks such as delivery delays by predicting the total production time of orders, but they have difficulty doing so because the total production time differs for every order. The Theory of Constraints (TOC) was developed to find the least efficient areas in order to increase order throughput and reduce total order cost, but it does not provide a forecast of total production time. Because production varies from order to order due to diverse customer needs, the total production time of an individual order can be measured after the fact but is difficult to predict in advance. The measured total production times of past orders also vary, so they cannot be used as a standard time. As a result, experienced managers rely on intuition rather than on the system, while inexperienced managers use simple management indicators (e.g., 60 days total production time for raw materials, 90 days for steel plates, etc.). Work instructions issued too early on the basis of intuition or such indicators cause congestion, which degrades productivity, while instructions issued too late lead to increased production costs or missed delivery dates due to emergency processing. Failure to meet a deadline results in compensation for the delay or adversely affects sales and collections. To address these problems, this study seeks a machine learning model that estimates the total production time of new orders for a company operating a make-to-order production system. Order, production, and process performance data are used as the material for machine learning. We compared and analyzed OLS, GLM Gamma, Extra Trees, and Random Forest algorithms as candidates for estimating total production time and present the results.
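
A minimal sketch of the four-way model comparison named above (OLS, GLM Gamma, Extra Trees, Random Forest), written with scikit-learn and statsmodels on synthetic order records. The features (order quantity, material type, number of processes) and the data itself are assumptions for illustration, not the company's actual records or the study's chosen inputs.

    # Compare four regressors for total production time on synthetic order data.
    import numpy as np
    import statsmodels.api as sm
    from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n = 500
    X = np.column_stack([
        rng.integers(1, 100, n),   # order quantity (assumed feature)
        rng.integers(0, 3, n),     # material type code (assumed feature)
        rng.integers(1, 10, n),    # number of processes (assumed feature)
    ])
    # Synthetic total production time in days, with a gamma-distributed remainder
    y = 10 + 0.5 * X[:, 0] + 8 * X[:, 2] + rng.gamma(shape=2.0, scale=5.0, size=n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

    preds = {}
    preds["OLS"] = LinearRegression().fit(X_tr, y_tr).predict(X_te)
    glm = sm.GLM(y_tr, sm.add_constant(X_tr),
                 family=sm.families.Gamma(link=sm.families.links.Log())).fit()
    preds["GLM Gamma"] = glm.predict(sm.add_constant(X_te))
    preds["Extra Trees"] = ExtraTreesRegressor(random_state=1).fit(X_tr, y_tr).predict(X_te)
    preds["Random Forest"] = RandomForestRegressor(random_state=1).fit(X_tr, y_tr).predict(X_te)

    for name, p in preds.items():
        print(f"{name}: MAE = {mean_absolute_error(y_te, p):.1f} days")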

Strategic Issues in Managing Complexity in NPD Projects (신제품개발 과정의 복잡성에 대한 주요 연구과제)

  • Kim, Jongbae
    • Asia Marketing Journal
    • /
    • v.7 no.3
    • /
    • pp.53-76
    • /
    • 2005
  • With rapid technological and market change, new product development (NPD) complexity is a significant issue that organizations continually face in their development projects. Numerous factors cause development projects to become increasingly costly and complex. A product is more likely to be successfully developed and marketed when the complexity inherent in NPD projects is clearly understood and carefully managed. Based upon previous studies, this study examines the nature and importance of complexity in developing new products and then identifies several issues in managing it. The issues considered include the definition of complexity, the consequences of complexity, and methods for managing complexity in NPD projects. To achieve high performance in managing complexity in development projects, these issues need to be addressed, for example: A. Complexity inherent in NPD projects is multi-faceted and multidimensional. What factors need to be considered in defining and/or measuring complexity in a development project? For example, is it sufficient if complexity is defined only from a technological perspective, or is it more desirable to consider the entire array of complexity sources that NPD teams with different functions (e.g., marketing, R&D, manufacturing, etc.) face in the development process? Moreover, is it sufficient if complexity is measured only once during a development project, or is it more effective and useful to trace complexity changes over the entire development life cycle? B. Complexity inherent in a project can have negative as well as positive influences on NPD performance. Which complexity impacts are usually considered negative and which positive? Project complexity can also affect the entire organization, and complexity is better assessed from a broader and longer-term perspective. What are some ways in which the long-term impact of complexity on an organization can be assessed and managed? C. Based upon previous studies, several approaches for managing complexity are derived. What are the weaknesses and strengths of each approach? Is there a desirable hierarchy or order among these approaches when more than one is used? Are there differences in outcomes according to industry and product type (incremental or radical)? Answers to these and other questions can help organizations effectively manage the complexity inherent in most development projects. Complexity is worthy of additional attention from researchers and practitioners alike. Large-scale empirical investigations, jointly conducted by researchers and practitioners, will help gain useful insights into understanding and managing complexity. Organizations that can accurately identify, assess, and manage the complexity inherent in their projects are likely to gain important competitive advantages.

  • PDF

Comparison of Batch Assay and Random Assay Using Automatic Dispenser in Radioimmunoassay (핵의학 체외 검사에서 자동분주기를 이용한 Random Assay 가능성평가)

  • Moon, Seung-Hwan;Lee, Ho-Young;Shin, Sun-Young;Min, Gyeong-Sun;Lee, Hyun-Joo;Jang, Su-Jin;Kang, Ji-Yeon;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.43 no.4
    • /
    • pp.323-329
    • /
    • 2009
  • Purpose: Radioimmunoassay (RIA) is usually performed as a batch assay. To improve the efficiency of RIA without increasing cost and time, random assay could be an option. We investigated the feasibility of random assay using an automatic dispenser by assessing the agreement between batch assay and random assay. Materials and Methods: The experiments were performed with four items: triiodothyronine (T3), free thyroxine (fT4), prostate specific antigen (PSA), and carcinoembryonic antigen (CEA). For each item, the sera of twenty patients, the standard, and the control samples were used. The measurements were done 4 times at 3-hour intervals by random assay and batch assay. The coefficients of variation (CV) of the standard samples and patients' data for T3, fT4, PSA, and CEA were assessed. The intraclass correlation coefficient (ICC) and the coefficient of correlation were calculated to assess the agreement between the two methods. Results: The CVs (%) of T3, fT4, PSA, and CEA measured by batch assay were 3.2±1.7%, 3.9±2.1%, 7.1±6.2%, and 11.2±7.2%; the CVs by random assay were 2.1±1.7%, 4.8±3.1%, 3.6±4.8%, and 7.4±6.2%. The ICCs between the batch assay and random assay were 0.9968 (T3), 0.9973 (fT4), 0.9996 (PSA), and 0.9901 (CEA). The coefficients of correlation between the batch assay and random assay were 0.9924 (T3), 0.9974 (fT4), 0.9994 (PSA), and 0.9989 (CEA) (p<0.05). Conclusion: The results of random assay showed strong agreement with the batch assay within a day. These results suggest that random assay using an automatic dispenser could be used in radioimmunoassay.
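
A minimal sketch of the agreement statistics used above (coefficient of variation, a two-way single-measure ICC, and Pearson correlation), written in Python. The numeric values are made-up paired measurements for illustration, not the study's data, and the ICC form shown (ICC(2,1), Shrout-Fleiss) is an assumption since the abstract does not state which ICC variant was used.

    # Agreement statistics between batch and random assay results (illustrative data).
    import numpy as np
    from scipy.stats import pearsonr

    def cv_percent(values):
        """Coefficient of variation (%) of repeated measurements of one sample."""
        values = np.asarray(values, dtype=float)
        return values.std(ddof=1) / values.mean() * 100

    def icc2_1(data):
        """Two-way random-effects, single-measure ICC(2,1); data shape = (subjects, methods)."""
        n, k = data.shape
        grand = data.mean()
        ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()
        ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()
        ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols
        msr = ss_rows / (n - 1)
        msc = ss_cols / (k - 1)
        mse = ss_err / ((n - 1) * (k - 1))
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    # Four repeated measurements of one control sample (hypothetical, e.g. T3 in ng/mL)
    control_repeats = [1.32, 1.28, 1.35, 1.30]

    # Per-patient results by the two methods (hypothetical)
    batch_vals  = np.array([1.20, 0.90, 1.50, 2.10, 0.70, 1.80, 1.10, 1.40])
    random_vals = np.array([1.25, 0.92, 1.48, 2.05, 0.71, 1.83, 1.08, 1.39])

    r, p = pearsonr(batch_vals, random_vals)
    print(f"CV(control) = {cv_percent(control_repeats):.1f}%")
    print(f"ICC(2,1) = {icc2_1(np.column_stack([batch_vals, random_vals])):.4f}")
    print(f"Pearson r = {r:.4f} (p = {p:.3g})")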