• Title/Summary/Keyword: Cost Evaluation Method


Solution Algorithms for Logit Stochastic User Equilibrium Assignment Model (확률적 로짓 통행배정모형의 해석 알고리듬)

  • 임용택
    • Journal of Korean Society of Transportation
    • /
    • v.21 no.2
    • /
    • pp.95-105
    • /
    • 2003
  • Because the basic assumption of deterministic user equilibrium assignment, that all network users have perfect information about network conditions and choose their routes without error, is known to be unrealistic, several stochastic assignment models have been proposed to relax it. However, such stochastic assignment models are not easy to solve because of the probability distributions they assume. Also, to avoid enumerating all paths, they restrict the feasible path set, so they cannot precisely describe travel behavior when travel costs change during the network loading step. Another problem of stochastic assignment models stems from their use of heuristic rules to determine the optimal step size, owing to the difficulty of evaluating their objective function. This paper presents a logit-based stochastic assignment model and its solution algorithm to cope with these problems, and also provides the stochastic user equilibrium condition of the model. The model is path-based: all feasible paths are enumerated in advance. This requires more computation than a link-based approach, but it has clear advantages. It describes travel behavior more exactly, and the additional computing time is smaller than expected because the path set is calculated only once, in the initial step. Two numerical examples are given to assess the model and to compare it with other methods.
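As a concrete illustration of the kind of path-based logit loading the abstract describes, the following minimal Python sketch assigns OD demand to pre-enumerated paths with a logit model and iterates with the method of successive averages (MSA). It is not the authors' algorithm (which replaces the predetermined 1/n step used here with an exact, objective-based step size); the toy network, BPR cost parameters, and dispersion parameter theta are illustrative assumptions.

```python
# Path-based logit stochastic loading with MSA on a toy network (illustrative).
import numpy as np

paths = [[0], [1], [2, 3]]                       # link indices used by each path
free_flow = np.array([10.0, 12.0, 6.0, 7.0])     # free-flow link times
capacity = np.array([300.0, 250.0, 200.0, 200.0])
demand = 500.0                                   # OD demand
theta = 0.1                                      # logit dispersion parameter

def link_time(x):
    """BPR link performance function (illustrative parameters)."""
    return free_flow * (1.0 + 0.15 * (x / capacity) ** 4)

def logit_load(path_cost):
    """Split OD demand over paths with a logit choice model."""
    u = np.exp(-theta * (path_cost - path_cost.min()))
    return demand * u / u.sum()

path_flow = np.full(len(paths), demand / len(paths))   # uniform start
for n in range(1, 200):
    link_flow = np.zeros_like(free_flow)
    for p, f in zip(paths, path_flow):
        link_flow[p] += f
    cost = np.array([link_time(link_flow)[p].sum() for p in paths])
    aux = logit_load(cost)                       # stochastic network loading
    path_flow += (aux - path_flow) / n           # MSA step size 1/n

print("equilibrium path flows:", np.round(path_flow, 1))
```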

Effect of Reflow Number and Surface Finish on the High Speed Shear Properties of Sn-Ag-Cu Lead-free Solder Bump (리플로우 횟수와 표면처리에 따른 Sn-Ag-Cu계 무연 솔더 범프의 고속전단 특성평가)

  • Jang, Im-Nam;Park, Jai-Hyun;Ahn, Yong-Sik
    • Journal of the Microelectronics and Packaging Society
    • /
    • v.16 no.3
    • /
    • pp.11-17
    • /
    • 2009
  • Drop impact reliability has become important in evaluating the lifetime of mobile electronic products such as cellular phones. The drop impact reliability of a solder joint is generally affected by the type of pad and the number of reflows, so its evaluation is necessary. The drop impact test proposed by JEDEC is used as a standard method, but it is costly and time-consuming. Drop impact reliability can instead be evaluated indirectly with a high-speed shear test of solder joints. Solder joints formed on three kinds of surface finish, OSP (Organic Solderability Preservation), ENIG (Electroless Nickel Immersion Gold), and ENEPIG (Electroless Nickel Electroless Palladium Immersion Gold), were investigated. The shear strength was analyzed together with the change in morphology of the intermetallic compound (IMC) layer according to the number of reflows. The IMC layer thickness increased with the number of reflows, which decreased the high-speed shear strength and impact energy. The order of high-speed shear strength and impact energy was ENEPIG > ENIG > OSP after the 1st reflow and ENEPIG > OSP > ENIG after the 8th reflow.


A Study on the Need for Separation of Software Completeness Appraisal and Software Ready-made Appraisal (소프트웨어 완성도 감정과 기성고 감정 분리 필요성에 대한 고찰)

  • Kim, DoWan
    • Journal of Software Assessment and Valuation
    • /
    • v.17 no.2
    • /
    • pp.11-17
    • /
    • 2021
  • In this study, problems of software completeness appraisal are pointed out and solutions are presented by analyzing appraisal cases and judicial precedents. In existing practice, completeness appraisal, ready-made (completion-rate) appraisal, defect appraisal, and cost appraisal have all been classified and evaluated as software completeness appraisals. From a legal point of view and in judicial precedents, however, there is a large difference between completeness and the completion rate: completeness is evaluated under the premise that the software's development is complete, whereas a ready-made appraisal inspects the development progress of unfinished software. In cases involving the software completion rate, the total completion level is usually calculated by weighting each step of the software development process, whereas completeness evaluations use the software's realization and operation as the sole criterion. In addition, existing software completeness appraisal cases do not address who is responsible for software defects, whereas in case law the responsible party is determined by identifying who caused the dispute. In this paper, we systematically classify these problems and present an evaluation method that separates software completeness appraisal from software completion-rate appraisal.
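For illustration only, the weighted completion-rate calculation mentioned above can be sketched as a weighted sum of per-phase progress. The phase names, weights, and progress values below are hypothetical and not taken from any appraisal case.

```python
# Hypothetical weighted completion-rate calculation: each development phase
# gets a weight and a progress ratio; the overall rate is their weighted sum.
phases = {
    "requirements":   (0.10, 1.00),   # (weight, progress 0..1) -- illustrative
    "design":         (0.20, 1.00),
    "implementation": (0.40, 0.70),
    "testing":        (0.20, 0.30),
    "deployment":     (0.10, 0.00),
}

completion = sum(w * p for w, p in phases.values())
total_weight = sum(w for w, _ in phases.values())
print(f"overall completion rate: {completion / total_weight:.1%}")
```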

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware, which follow two distinct approaches. The first is to use application-specific integrated circuit (ASIC) technology, in which the fuzzy inference method is implemented directly in silicon. The second approach, which is in its preliminary stage, is to use a more conventional microprocessor architecture: here we apply a quantitative technique used by designers of reduced instruction set computers (RISC) to modify a microprocessor architecture. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested; both used a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both digital fuzzy inference chips had multiple datapaths for rule evaluation and executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock and achieved approximately 80,000 fuzzy logical inferences per second (FLIPS), storing and executing 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock, has a 3-stage pipeline, and initiates a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule-set memory (RAM); on-chip fuzzification by a table-lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the format "IF A and B and C and D THEN Do E, and THEN Do F"; with this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the simpler format "IF A and B THEN Do E" using the same datapath; with this format, the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The fuzzy logic system board places the fuzzy chip in a VMEbus environment, and high-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast but limited in generality, since many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach developed by RISC designers.
In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a rule-based fuzzy inference program from the introduction of specialized instructions, namely min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed the measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions, and these instructions are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single program or a small set of programs, so modifying an embedded processor in this way to create an embedded processor for fuzzy control is very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, a fictitious MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip; the software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6,000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements; the second row is the time required for a single inference; the last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes, and the ASIC approach is extremely expensive. It is therefore an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

Table I. Inference time with 51 rules (each fuzzy set represented by a 64-element array):
                     MIPS R3000 (regular)   MIPS R3000 (with min/max)   UNC/MCNC ASIC
  6,000 inferences   125 s                  49 s                        0.0038 s
  1 inference        20.8 ms                8.2 ms                      6.4 µs
  FLIPS              48                     122                         156,250
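To make the inference mechanism concrete, the following is a minimal Python sketch of max-min compositional inference with Mamdani implication and centroid defuzzification, i.e. the mechanism the chips implement in hardware, using the simpler two-input, one-output rule format. The rule base, membership functions, and crisp inputs are illustrative assumptions, not the chips' actual rule contents; only the 64-element discretisation mirrors the text above.

```python
# Max-min (Mamdani) fuzzy inference over 64-element discretised fuzzy sets.
import numpy as np

x = np.linspace(0.0, 1.0, 64)   # common universe of discourse, 64 samples

def tri(a, b, c):
    """Triangular membership function sampled on x."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

# Two rules of the form "IF A and B THEN Do E" (antecedents A, B; consequent E).
rules = [
    (tri(0.0, 0.2, 0.5), tri(0.1, 0.3, 0.6), tri(0.0, 0.3, 0.5)),
    (tri(0.4, 0.7, 1.0), tri(0.5, 0.8, 1.0), tri(0.5, 0.8, 1.0)),
]

def infer(a_val, b_val):
    """Fuzzify crisp inputs, fire rules with min, aggregate with max,
    then defuzzify by the centroid method."""
    agg = np.zeros_like(x)
    for A, B, E in rules:
        w = min(np.interp(a_val, x, A), np.interp(b_val, x, B))  # firing strength
        agg = np.maximum(agg, np.minimum(w, E))                  # max-min composition
    return (agg * x).sum() / max(agg.sum(), 1e-12)               # centroid

print(f"crisp output: {infer(0.25, 0.35):.3f}")
```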


A Study on the Performance Evaluation of G2B Procurement Process Innovation by Using MAS: Korea G2B KONEPS Case (멀티에이전트시스템(MAS)을 이용한 G2B 조달 프로세스 혁신의 효과평가에 관한 연구 : 나라장터 G2B사례)

  • Seo, Won-Jun;Lee, Dae-Cheor;Lim, Gyoo-Gun
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.157-175
    • /
    • 2012
  • It is difficult to evaluate the performance of process innovation in e-procurement, which involves large-scale and complex processes. Existing methods for measuring the effects of process innovation have mainly been either quantitative, analyzing operational data statistically, or qualitative, based on surveys and interviews. However, these methods are limited because the performance evaluation of e-procurement process innovation should consider the interactions among participants who take part, directly or indirectly, in the processes. This study treats the e-procurement process as a complex system and develops a simulation model based on MAS (Multi-Agent System) to evaluate the effects of e-procurement process innovation. Multi-agent-based simulation allows the interaction patterns of objects in a virtual world to be observed through the relationships among objects and their behavioral mechanisms, and it is especially suitable for complex business problems. In this study, we used NetLogo version 4.1.3, a MAS simulation tool developed at Northwestern University. We developed an interaction model of agents in the MAS environment, defining process agents and task agents and assigning their behavioral characteristics. The developed simulation model was applied to the G2B system (KONEPS: Korea ON-line E-Procurement System) of the Public Procurement Service (PPS) in Korea and used to evaluate the innovation effects of the G2B system. KONEPS, launched in 2002, is a successfully established and representative e-procurement system that integrates the characteristics of e-commerce into government procurement activities. KONEPS deserves international recognition considering its annual transaction volume of 56 billion dollars, daily exchange of electronic documents, user base of 121,000 suppliers and 37,000 public organizations, and 4.5 billion dollars of cost savings. For the simulation, we decomposed the e-procurement process of KONEPS into eight sub-processes: 'process 1: search for products and acquisition of proposals', 'process 2: review of contract methods and item features', 'process 3: notice of bid', 'process 4: registration and confirmation of qualification', 'process 5: bidding', 'process 6: screening test', 'process 7: contracts', and 'process 8: invoice and payment'. For the parameter settings of the agents' behavior, we collected data from the transactional database of PPS and additional information through a survey. The data used for the simulation were the participants (government organizations, local government organizations, and public institutions), the number of biddings per year, the number of total contracts, the number of shopping-mall transactions, the ratio of contracts between bidding and the shopping mall, the successful bidding ratio, and the estimated time for each process. The comparison examined the difference in time consumption between 'before the innovation (As-Was)' and 'after the innovation (As-Is)'. The results showed productivity improvements in all eight sub-processes: across the entire process, the average number of task-processing steps decreased by 92.7% and the average task-processing time by 95.4% when the G2B system was used instead of the conventional method.
Also, this study found that the process innovation effect would be further enhanced if the task processes related to contracts were improved. This study shows the usability and potential of MAS for evaluating and modeling process innovation.
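The authors' model was built in NetLogo 4.1.3; as a rough illustration of the kind of before/after comparison it performs, the simplified Python stand-in below sums noisy processing times over eight sub-processes for many simulated procurement cases under 'As-Was' and 'As-Is' parameter sets. The per-process times and case counts are invented for the sketch and are not the paper's calibrated parameters.

```python
# Simplified stand-in for an agent-based As-Was vs. As-Is process-time comparison.
import random

random.seed(0)

# Mean processing time (hours) per sub-process: (As-Was, As-Is) -- illustrative.
process_times = [(24, 2), (16, 1), (8, 1), (12, 0.5),
                 (6, 0.5), (10, 1), (20, 2), (15, 1)]

def simulate(n_cases, column):
    """Sum noisy processing times over all sub-processes for n_cases cases."""
    total = 0.0
    for _ in range(n_cases):
        for times in process_times:
            mean = times[column]
            total += random.uniform(0.8 * mean, 1.2 * mean)  # +/-20% variation
    return total

as_was = simulate(1000, 0)
as_is = simulate(1000, 1)
print(f"average reduction in processing time: {1 - as_is / as_was:.1%}")
```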

A Study on the Digital Drawing of Archaeological Relics Using Open-Source Software (오픈소스 소프트웨어를 활용한 고고 유물의 디지털 실측 연구)

  • LEE Hosun;AHN Hyoungki
    • Korean Journal of Heritage: History & Science
    • /
    • v.57 no.1
    • /
    • pp.82-108
    • /
    • 2024
  • With the transition of archaeological recording methods from analog to digital, 3D scanning technology has been actively adopted in the field, and research on archaeological digital data gathered from 3D scanning and photogrammetry is continuously being conducted. However, due to cost and manpower issues, most buried cultural heritage organizations hesitate to adopt such digital technology. This paper presents a digital recording method for relics utilizing open-source software and photogrammetry, which is believed to be the most efficient of the 3D scanning approaches. The digital recording process consists of three stages: acquiring a 3D model, creating a joining map with the edited 3D model, and creating a digital drawing. To enhance accessibility, the method uses only open-source software throughout the entire process. The results confirm that, in the quantitative evaluation, the deviation between measurements of the actual artifact and of the 3D model was minimal, and the quantitative quality analyses from the open-source software and the commercial software showed high similarity. However, data processing was overwhelmingly faster with the commercial software, which is believed to result from the higher computational speed of its improved algorithms. In the qualitative evaluation, some differences in mesh and texture quality occurred: the 3D models generated by open-source software showed noise and roughness on the mesh surface, and the production marks and patterns of the relics were difficult to confirm. Nevertheless, some of the open-source software produced quality comparable to that of the commercial software in both the quantitative and qualitative evaluations. Open-source software for editing 3D models was able not only to post-process, match, and merge the 3D models, but also to adjust scale, produce the join surface, and render the images necessary for the actual measurement of relics. The final drawing was traced in a CAD program that is also open-source software. In archaeological research, photogrammetry is applicable to many processes, including excavation, report writing, and research on numerical data from 3D models. With the breakthrough development of computer vision, the types of open-source software have diversified and their performance has significantly improved. With such highly accessible digital technology, the acquisition of 3D model data in archaeology can serve as basic data for the preservation and active study of cultural heritage.
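As an illustration of the scale-adjustment step described above, the sketch below rescales a photogrammetric mesh so that a reference distance on the model matches a caliper measurement of the real artifact, then reports the bounding-box extents. Open3D is used here only as one open-source possibility (the paper does not prescribe a specific library), and the file name, reference points, and measured distance are hypothetical.

```python
# Scale a photogrammetric mesh to real-world units using a known reference distance.
import numpy as np
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("relic_model.ply")     # hypothetical path

# Distance between two reference points picked on the model (arbitrary model units)
# and the same distance measured on the real artifact (millimetres).
p1 = np.array([0.012, 0.004, 0.000])
p2 = np.array([0.095, 0.006, 0.001])
model_dist = np.linalg.norm(p2 - p1)
real_dist_mm = 83.0                                     # caliper measurement

mesh.scale(real_dist_mm / model_dist, center=mesh.get_center())

# Simple check of the rescaled model's overall size (bounding-box extents in mm).
extent = mesh.get_axis_aligned_bounding_box().get_extent()
print("bounding box (mm):", np.round(extent, 2))
```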

A Study of Image Quality Improvement Through Changes in Posture and Kernel Value in Neck CT Scanning (경부 CT검사 시 Kernel 값과 검사자세 변화를 통한 화질개선에 관한 연구)

  • Kim, Hyeon-Ju;Chung, Woo-Jun;Cho, Jae-Hwan
    • Journal of the Korean Society of Radiology
    • /
    • v.5 no.2
    • /
    • pp.59-66
    • /
    • 2011
  • In neck CT examinations, beam-hardening artifacts make it difficult to distinguish disease from the anatomical structures around the 6th and 7th cervical vertebrae and the intervertebral disks. In this study, image quality was evaluated while varying the scan posture (the position of the shoulder joints) and the reconstruction kernel value, in order to find the posture and kernel value that best reduce beam-hardening artifacts in neck CT. A Somatom Sensation 16 scanner (Siemens, Erlangen, Germany) was used to scan 30 patients referred for neck CT, and an AW 4.4 workstation (GE, USA) was used for analysis. The scan posture was varied among three groups (S, N, and P) according to the position of the arms and shoulder joints: one group with both arms placed comfortably at the sides, and group P with both hands everted and the shoulders pulled down as far as possible during the scan. Based on the scan data, images were reconstructed with kernel values of B10 (very smooth), B20 (smooth), B30 (medium smooth), B40 (medium), B50 (medium sharp), B60 (sharp), and B70 (very sharp). The image data obtained with the different postures and kernel values were analyzed through noise measurements and image evaluation. The most suitable examination posture for neck CT was group P, with both hands everted and the shoulders pulled down as far as possible, and reconstruction with the B40 (medium) or B50 (medium sharp) kernel was most appropriate; applying these kernel values clinically is considered to be very useful.
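For reference, image noise in this kind of evaluation is commonly taken as the standard deviation of CT numbers (HU) inside a region of interest (ROI); the short sketch below shows that measurement on a stand-in image array. The image, ROI position, and radius are illustrative, not the study's data.

```python
# Noise measurement as the standard deviation of HU values inside a circular ROI.
import numpy as np

ct_slice = np.random.normal(40.0, 12.0, size=(512, 512))  # stand-in HU image

def roi_noise(image, cy, cx, radius):
    """Standard deviation of pixel values inside a circular ROI."""
    yy, xx = np.ogrid[:image.shape[0], :image.shape[1]]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return float(image[mask].std())

print(f"ROI noise (HU): {roi_noise(ct_slice, 256, 256, 20):.1f}")
```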

An Empirical Study on the Improvement of In Situ Soil Remediation Using Plasma Blasting, Pneumatic Fracturing and Vacuum Suction (플라즈마 블라스팅, 공압파쇄, 진공추출이 활용된 지중 토양정화공법의 정화 개선 효과에 대한 실증연구)

  • Jae-Yong Song;Geun-Chun Lee;Cha-Won Kang;Eun-Sup Kim;Hyun-Shic Jang;Bo-An Jang;Yu-Chul Park
    • The Journal of Engineering Geology
    • /
    • v.33 no.1
    • /
    • pp.85-103
    • /
    • 2023
  • The in-situ remediation of a solidified stratum containing a large amount of fine-textured material such as clay or organic matter in contaminated soil faces limitations such as increased remediation cost resulting from decreased purification efficiency. Even when soil conditions are good, remediation generally requires a long time because of non-uniform soil properties and low permeability. This study assessed the remediation effect and evaluated the field applicability of a methodology that combines pneumatic fracturing, vacuum extraction, and plasma blasting (the PPV method) to improve on the limitations of existing underground remediation methods. For comparison, underground remediation was performed over 80 days using the experimental PPV method and chemical oxidation (the control method). The control group showed no decrease in the degree of contamination because of poor delivery of the soil remediation agent, whereas the PPV method clearly reduced the degree of contamination during the remediation period. The remediation effect, assessed by the reduction of the highest TPH (Total Petroleum Hydrocarbons) concentration by distance from the injection well, was unclear in the control group, whereas the PPV method showed a remediation effect of 62.6% within a 1 m radius of the injection well, 90.1% within 1.1~2.0 m, and 92.1% within 2.1~3.0 m. When the remediation efficiency was evaluated by the average rate of TPH concentration reduction by distance from the injection well, the effect in the control group was again unclear, whereas the PPV method showed a 53.6% remediation effect within 1 m of the injection well, 82.4% within 1.1~2.0 m, and 68.7% within 2.1~3.0 m. By both measures of purification efficiency (changes in the maximum and in the average TPH concentration), the PPV method increased the remediation effect by 149.0~184.8% compared with the control group, an average increase of about 167%. The time taken to reduce contamination by 80% of the initial concentration was evaluated by deriving a correlation equation from the TPH concentrations: the PPV method could reduce the purification time by 184.4% compared with chemical oxidation. However, this evaluation of a single site cannot be applied equally to all strata, so additional research is needed to clarify the proposed method's effect.
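The abstract mentions deriving a correlation equation to estimate the time needed for an 80% reduction in contamination. As one hedged illustration of that kind of estimate (not the paper's own equation), the sketch below fits a first-order decay to hypothetical TPH monitoring data and solves for the 80%-reduction time.

```python
# Fit a first-order decay to TPH monitoring data and estimate the 80%-reduction time.
import numpy as np

days = np.array([0, 10, 20, 40, 60, 80], dtype=float)        # monitoring days
tph = np.array([5200, 3900, 2950, 1700, 980, 560], dtype=float)  # mg/kg (made up)

# Fit ln(C) = ln(C0) - k * t by least squares.
slope, ln_c0 = np.polyfit(days, np.log(tph), 1)
k = -slope                                   # first-order decay rate
t80 = np.log(1.0 / 0.2) / k                  # time until C drops to 20% of C0
print(f"fitted decay rate k = {k:.4f} /day, estimated t80 = {t80:.1f} days")
```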

Timing Verification of AUTOSAR-compliant Diesel Engine Management System Using Measurement-based Worst-case Execution Time Analysis (측정기반 최악실행시간 분석 기법을 이용한 AUTOSAR 호환 승용디젤엔진제어기의 실시간 성능 검증에 관한 연구)

  • Park, Inseok;Kang, Eunhwan;Chung, Jaesung;Sohn, Jeongwon;Sunwoo, Myoungho;Lee, Kangseok;Lee, Wootaik;Youn, Jeamyoung;Won, Donghoon
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.22 no.5
    • /
    • pp.91-101
    • /
    • 2014
  • In this study, we presented a timing verification method for a passenger car diesel engine management system (EMS) using measurement-based worst-case execution time (WCET) analysis. To cope with the AUTOSAR-compliant software architecture, a development process model is proposed in which a runnable is regarded as the test unit and its temporal behavior (i.e., maximum observed execution time, MOET) is obtained along with on-target functionality evaluation results during online unit testing. Furthermore, a cost-effective framework for online unit testing is proposed: because the runtime environment layer and the standard calibration environment are utilized to implement the test interface, additional resource consumption on the target processor is minimized. Using the proposed development process model and unit test framework, the MOETs of 86 runnables of the diesel EMS are obtained with 213 unit test cases. From the obtained MOETs of the runnables, the WCETs of the tasks are estimated and schedulability is evaluated. The schedulability analysis reveals problems in the initially designed schedule table, which are fixed by redesigning the runnable mapping and task offsets. The proposed method is validated through various test scenarios.
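As a schematic illustration of how measured MOETs can feed a schedulability check, the sketch below scales per-runnable MOETs by a safety margin to estimate task WCETs and compares the total utilization against the Liu-Layland rate-monotonic bound. The task set, periods, MOET values, and margin are hypothetical, and the utilization bound is only one of several possible schedulability tests, not necessarily the analysis used in the paper.

```python
# Estimate task WCETs from runnable MOETs and apply a rate-monotonic utilization test.
# task -> (period in ms, list of measured runnable MOETs in ms) -- illustrative values.
tasks = {
    "task_1ms":   (1.0,   [0.04, 0.06, 0.05]),
    "task_10ms":  (10.0,  [0.60, 0.90, 0.50, 0.30]),
    "task_100ms": (100.0, [5.00, 7.50, 3.00]),
}
MARGIN = 1.2   # assumed safety factor applied to measured MOETs

n = len(tasks)
utilization = 0.0
for name, (period, moets) in tasks.items():
    wcet = MARGIN * sum(moets)          # estimated task WCET from runnable MOETs
    utilization += wcet / period
    print(f"{name}: estimated WCET = {wcet:.2f} ms")

bound = n * (2 ** (1 / n) - 1)          # Liu & Layland rate-monotonic bound
print(f"U = {utilization:.3f}, RM bound = {bound:.3f}, "
      f"schedulable: {utilization <= bound}")
```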

Design and Evaluation of a Flow Rotate Divider for Sampling Runoff Plots. (토양 유실량 및 유출수량 측정을 위한 회전분할집수기의 평가)

  • Zhang, Yong-Seon;Park, Chan-Won;Lee, Gye-Jun;Lee, Jeong-Tae;Jin, Yong-Ik;Hwang, Seon-Woong
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.41 no.6
    • /
    • pp.374-378
    • /
    • 2008
  • The standard method of collecting runoff requires high cost and much effort to install and manage, because a reservoir tank of fixed size must be installed in the experimental field and all the soil and water in the tank must be removed after each measurement of soil loss and runoff. Therefore, the objective of this study was to compare the efficacy of the standard method with that of a flow rotate divider for continuously collecting and measuring soil loss and runoff, so that field experiments with lysimeters can be conducted more conveniently. To sample soil loss and runoff from agricultural land at a constant ratio, the flow rotate divider consisted of a sloped round plate with eight blades, so that a fixed fraction of the soil and water is collected by gravity at the lowest part of the plate. In a batch-scale accuracy test, there was a significant positive linear correlation (r = 0.997***) between the inflow and the sampled amount over flow rates ranging from 1 to 10 L min⁻¹. In the field experiment, the collection ratio became more accurate as the amounts of soil loss and runoff increased.
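As an illustration of the laboratory calibration described above, the sketch below regresses the sampled amount against the known inflow to check that the divider splits flow at a constant ratio (roughly one eighth for an eight-blade plate). The measurement values are invented for the example; the paper reports r = 0.997 for its own calibration.

```python
# Check the divider's splitting ratio with a linear fit of sampled vs. supplied flow.
import numpy as np

inflow = np.array([1, 2, 4, 6, 8, 10], dtype=float)        # L/min supplied (made up)
sampled = np.array([0.13, 0.26, 0.51, 0.77, 1.01, 1.27])   # L/min collected (made up)

slope, intercept = np.polyfit(inflow, sampled, 1)          # splitting ratio and offset
r = np.corrcoef(inflow, sampled)[0, 1]                     # linear correlation
print(f"splitting ratio = {slope:.3f}, intercept = {intercept:.3f}, r = {r:.4f}")
```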