• Title/Summary/Keyword: New Technology Evaluation


The Effects of Discrepancy in Reconstruction Algorithm between Patient Data and Normal Database in AutoQuant Evaluation: Focusing on Half-Time Scan Algorithm in Myocardial SPECT (심근 관류 스펙트에서 Half-Time Scan과 새로운 재구성법이 적용된 정상군 데이터를 기반으로 한 정량적 분석 결과의 차이 비교)

  • Lee, Hyung-Jin;Do, Yong-Ho;Cho, Seong-Wook;Kim, Jin-Eui
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.18 no.1
    • /
    • pp.122-126
    • /
    • 2014
  • Purpose: New reconstruction algorithms (NRA) provided by vendors aim to shorten the acquisition scan time. However, depending on the installed version, the AutoQuant program used for quantitative analysis of myocardial SPECT may not contain a normal database to which an NRA has been applied. The purpose of this study was therefore to compare quantitative results across AutoQuant versions for myocardial SPECT acquired with an NRA and a half-time (HT) scan protocol. Materials and Methods: Rest Tl and stress MIBI data from 80 patients (40 men, 40 women) were gathered. HT acquisition and the ASTONISH (Philips) software, an NRA, were applied to the data. A version of AutoQuant modified at SNUH was compared with the older, vendor-supplied version of AutoQuant (full-time scan). Comparison groups were coronary artery disease (CAD), 24-hour delay, and nearly normal patients with simple chest pain. For 25 patients in each group, perfusion distribution, summed stress score (SSS), summed rest score (SRS), extent, and total perfusion deficit (TPD) were compared and evaluated. Results: In the CAD group, the re-edited AutoQuant (HT) reduced SSS and SRS by about 30% (P<0.0001), extent by about 38%, and TPD by about 30% (P<0.0001). Among the perfusion scores, the infero-medium, infero-apical, lateral-medium, and lateral-apical regions showed the largest changes. In the 24-hour delay group, SRS (P=0.042), extent (P=0.018), and TPD (P=0.0024) were reduced by about 13-18%. In the simple chest pain group, all four measures were reduced by about 5-7%. Conclusion: This study was motivated by the expectation that results could be affected by the normal database, which can vary with race and gender. It was shown that combining a new reconstruction algorithm for reduced scan time with an analysis program whose normal database matches the scan protocol also affects the results. The clinical usefulness of gated myocardial SPECT may increase if each hospital collects normal data appropriate to its own acquisition protocol.
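For context, the summed scores compared above are formed in a standard way: the myocardium is divided into segments (commonly 17), each segment is scored from 0 (normal uptake) to 4 (absent uptake), and the segment scores are summed. A minimal sketch with illustrative segment values (not data from the paper):

```python
# Summed perfusion scores from a standard 17-segment model.
# Segment values below are illustrative, not from the study.

stress = [0, 0, 1, 2, 3, 0, 0, 0, 1, 2, 0, 0, 0, 0, 1, 0, 0]  # 17 stress scores
rest   = [0, 0, 0, 1, 2, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]  # 17 rest scores

sss = sum(stress)   # summed stress score
srs = sum(rest)     # summed rest score
sds = sss - srs     # summed difference score (reversibility/ischemia)
print(sss, srs, sds)
```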


Recent Progress in Air-Conditioning and Refrigeration Research : A Review of Papers Published in the Korean Journal of Air-Conditioning and Refrigeration Engineering in 2009 (설비공학 분야의 최근 연구 동향 : 2009년 학회지 논문에 대한 종합적 고찰)

  • Han, Hwa-Taik;Lee, Dae-Young;Kim, Seo Young;Choi, Jong-Min;Baik, Yong-Kyu;Kwon, Young-Chul
    • Korean Journal of Air-Conditioning and Refrigeration Engineering
    • /
    • v.22 no.7
    • /
    • pp.492-507
    • /
    • 2010
  • This article reviews the papers published in the Korean Journal of Air-Conditioning and Refrigeration Engineering during 2009. It is intended to convey the status of current research in the areas of heating, cooling, ventilation, sanitation, and indoor environments of buildings and plant facilities. The conclusions are as follows. (1) Research trends in thermal and fluid engineering were surveyed in the groups of general thermal and fluid flow, fluid machinery and piping, and new and renewable energy. Various topics were covered in the field of general thermal and fluid flow, such as expanders, capillary tubes, flow in micro-channel water blocks, and the friction and anti-wear characteristics of nano oils mixed with refrigerant oils. In the field of fluid machinery and piping, research mainly focused on the design of micro-pumps and fans, the heat resistance reliability of axial smoke exhaust fans, and hood systems. In the field of new and renewable energy, studies on groundwater sources addressed two-well-type geothermal heat pumps and multi-heat pumps. (2) Research in the heat transfer area was reviewed in the categories of heat transfer characteristics and industrial heat exchangers. Research on heat transfer characteristics covered thermoelectric cooling systems, refrigerants, evaporators, dryers, and desiccant rotors. In the area of industrial heat exchangers, research was performed on high-temperature ceramic heat exchangers, plate heat exchangers, and frosting on heat exchanger fins. (3) In the field of refrigeration, papers were presented on alternative refrigerants, system improvements, and the utilization of various energy sources. Refrigeration systems with alternative refrigerants such as hydrocarbons, mixed refrigerants, and $CO_2$ were studied. Efforts to improve the performance of refrigeration systems applied various ideas, including suction line heat exchangers, subcooling bypass lines, and gas injection systems. Studies on heat pump systems using unutilized energy sources such as river water, underground water, and waste heat were also reported. (4) Research in the field of mechanical building facilities was found to focus mainly on field applications rather than performance improvements. In the area of cogeneration systems, papers on energy and economic analysis, LCC analysis, and cost estimating were reported. Studies on ventilation and heat recovery systems addressed effects on fire and smoke control and on energy reduction. Papers on district cooling and heating systems dealt with design capacity evaluation, application planning, and field application. The maintenance and management of building service equipment for HVAC systems were also presented. (5) In the field of architectural environment, various studies were carried out to improve indoor air quality and to analyze the heat load characteristics of buildings by energy simulation. These studies help in understanding the physics of building load characteristics and in improving the quality of the architectural environment in which human beings reside.

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware. We have pursued two distinct approaches. The first is to use application-specific integrated circuit (ASIC) technology: the fuzzy inference method is implemented directly in silicon. The second approach, which is in its preliminary stage, is to use a more conventional microprocessor architecture. Here, we use a quantitative technique developed by designers of reduced instruction set computers (RISC) to modify a microprocessor architecture. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested. Both used a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both digital fuzzy inference chips had multiple datapaths for rule evaluation, and they executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock and achieved approximately 80,000 fuzzy logical inferences per second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype chip, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock. The chip has a 3-stage pipeline and initiates the computation of a new inference every 64 cycles. This chip achieved approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); on-chip fuzzification by a table-lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D THEN Do E and Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the following simpler format using the same datapath: IF A and B THEN Do E. With this format, the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The fuzzy logic system board places the fuzzy chip in a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality; many aspects of the design are limited or fixed. We have proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach developed by RISC designers. In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions. They are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process; it usually runs a single program or a small fixed set of programs, so specializing an embedded processor for fuzzy control is very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, a fictitious MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip. The software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6,000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference. The last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes: an ASIC approach is extremely expensive. It is therefore an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

    Table I. Inference time with 51 rules

                         MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
      6,000 inferences   125 s                  49 s                        0.0038 s
      1 inference        20.8 ms                8.2 ms                      6.4 µs
      FLIPS              48                     122                         156,250
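The max-min inference scheme the chips implement can be sketched in software. This is an illustrative toy, not the chips' actual rule set: fuzzy sets are 64-element membership arrays as on the UNC/MCNC chip, fuzzification is a table lookup, rules are combined with min (AND) and max (aggregation), and the output is defuzzified by a centroid, mirroring the chip's on-chip operations.

```python
# Minimal software sketch of Mamdani max-min inference, mirroring the
# chip's scheme: fuzzification by table lookup, rule evaluation with
# min (AND) and max (aggregation), centroid defuzzification.
# The membership tables and rules are illustrative, not from the talk.

N = 64  # each fuzzy set is a 64-element membership array, as on the chip

def tri(center, width):
    """Triangular membership function sampled on a 64-point universe."""
    return [max(0.0, 1.0 - abs(i - center) / width) for i in range(N)]

# Antecedent and consequent fuzzy sets (table-lookup memories)
COLD, WARM = tri(16, 16), tri(48, 16)
LOW, HIGH = tri(16, 16), tri(48, 16)

# Rule base: IF temp is A THEN fan is B
RULES = [(COLD, LOW), (WARM, HIGH)]

def infer(x):
    """Run one inference for a crisp input x in [0, 64)."""
    out = [0.0] * N
    for ant, cons in RULES:
        w = ant[x]                          # fuzzification: table lookup
        for i in range(N):                  # clip consequent with min,
            out[i] = max(out[i], min(w, cons[i]))  # aggregate with max
    num = sum(i * m for i, m in enumerate(out))    # centroid defuzzification
    den = sum(out)
    return num / den if den else 0.0

print(infer(16))
```

The inner min/max loop is exactly the operation pair that the proposed R3000 min and max instructions would accelerate.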


The Case on Valuation of IT Enterprise (IT 기업의 가치평가 사례연구)

  • Lee, Jae-Il;Yang, Hae-Sul
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.8 no.4
    • /
    • pp.881-893
    • /
    • 2007
  • IT (Information Technology)-based industries have driven the recent digital revolution and the appearance of various types of information services, expanding broadly into info-communication device companies, info-communication service companies, software companies, etc. Therefore, the need to evaluate the company value of IT businesses for M&A or liquidation is growing tremendously. Unlike other industries, however, the IT industry has a short life cycle, and there is not yet an objective valuation model for IT companies, only general models for ordinary businesses. This thesis therefore analyzes various valuation techniques, including the newly rising ROV (Real Option Valuation). DCF (Discounted Cash Flow), which converts a company's future cash flows, including those from tangible assets, into present value, was applied during the past industrialization era and remains persuasive today. However, DCF valuation is prone to error for IT companies because they hold more intangible than tangible assets. Accordingly, ROV has recently been brought up as a method that recognizes and quantifies a company's various options. But the evaluation of these options has so far been subjective and theoretical, and due to the lack of objective grounds and options it has been difficult to apply in practice. In this thesis, comparing DCF and ROV through four examples, we find that ROV is more accurate than DCF. As the options applied to ROV are excessively limited, we tried to develop ROV further by deriving five intangible value factors within IT companies. On this basis, we should establish basic valuation methods for IT companies and research and develop effective and varied valuation methods suited to each type of company, such as internet-based companies, S/W developing enterprises, and network-related companies.
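The DCF method discussed above can be sketched as follows: projected free cash flows are discounted to present value and a Gordon-growth terminal value is added. All figures below are illustrative, not from the case studies in the paper.

```python
# Minimal DCF (Discounted Cash Flow) sketch: discount projected cash
# flows to present value, then add a Gordon-growth terminal value.
# Cash flows, discount rate, and growth rate are illustrative.

def dcf_value(cash_flows, rate, terminal_growth):
    """Present value of projected cash flows plus a terminal value."""
    pv = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))
    last = cash_flows[-1]
    terminal = last * (1 + terminal_growth) / (rate - terminal_growth)
    pv_terminal = terminal / (1 + rate) ** len(cash_flows)
    return pv + pv_terminal

# Five years of projected free cash flow (same currency unit)
value = dcf_value([100, 110, 120, 130, 140], rate=0.12, terminal_growth=0.03)
print(round(value, 1))
```

ROV extends this by valuing managerial flexibility (to expand, defer, or abandon) with option-pricing techniques, which DCF's single deterministic projection cannot capture.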


Derivation of Green Infrastructure Planning Factors for Reducing Particulate Matter - Using Text Mining - (미세먼지 저감을 위한 그린인프라 계획요소 도출 - 텍스트 마이닝을 활용하여 -)

  • Seok, Youngsun;Song, Kihwan;Han, Hyojoo;Lee, Junga
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.49 no.5
    • /
    • pp.79-96
    • /
    • 2021
  • Green infrastructure planning represents landscape planning measures to reduce particulate matter. This study aimed to derive factors that may be used in planning green infrastructure for particulate matter reduction using text mining techniques. A range of analyses was carried out, focusing on keywords such as 'particulate matter reduction plan' and 'green infrastructure planning elements'. The analyses included Term Frequency-Inverse Document Frequency (TF-IDF) analysis, centrality analysis, related-word analysis, and topic modeling, applied via text mining to collected prior research, policy reports, and laws. First, the TF-IDF results were used to classify the major keywords relating to particulate matter and green infrastructure into three groups: environmental issues (e.g., particulate matter, environment, carbon, and atmosphere), target spaces (e.g., urban, park, and local green space), and application methods (e.g., analysis, planning, evaluation, development, ecological aspect, policy management, technology, and resilience). Second, the centrality analysis results were found to be similar to those of TF-IDF; it was confirmed that the central connectors to the major keywords were 'Green New Deal' and 'vacant land'. The related-word analysis verified that planning green infrastructure for particulate matter reduction requires planning forests and ventilation corridors; additionally, moisture must be considered for microclimate control. It was also confirmed that utilizing vacant space, establishing mixed forests, introducing particulate matter reduction technology, and understanding the system may be important for effective green infrastructure planning. Topic modeling was used to classify the planning elements of green infrastructure by ecological, technological, and social function.
The planning elements of ecological function were classified into morphological (e.g., urban forest, green space, wall greening) and functional aspects (e.g., climate control, carbon storage and absorption, provision of habitats, and biodiversity for wildlife). The planning elements of technical function were classified into various themes, including the disaster prevention functions of green infrastructure, buffer effects, stormwater management, water purification, and energy reduction. The planning elements of the social function were classified into themes such as community function, improving the health of users, and scenery improvement. These results suggest that green infrastructure planning for particulate matter reduction requires approaches related to key concepts, such as resilience and sustainability. In particular, there is a need to apply green infrastructure planning elements in order to reduce exposure to particulate matter.
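The TF-IDF keyword-scoring step described above can be sketched as follows. The toy corpus is illustrative; the study mined collected research papers, policy reports, and legal texts.

```python
# Minimal TF-IDF sketch: terms frequent in one document but rare across
# the corpus score highest. The three toy "documents" are illustrative.
import math

docs = [
    "particulate matter reduction plan urban forest".split(),
    "green infrastructure planning elements urban park".split(),
    "particulate matter green space ventilation corridor".split(),
]

def tf_idf(term, doc, corpus):
    tf = doc.count(term) / len(doc)           # term frequency in this doc
    df = sum(1 for d in corpus if term in d)  # document frequency
    idf = math.log(len(corpus) / df)          # 0 if term is in every doc
    return tf * idf

scores = {t: tf_idf(t, docs[0], docs) for t in set(docs[0])}
for term, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{term}: {s:.3f}")
```

Terms unique to one document ("plan", "forest") outrank terms shared across documents ("particulate", "urban"), which is how the study separated distinctive planning keywords from background vocabulary.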

Designing Mobile Framework for Intelligent Personalized Marketing Service in Interactive Exhibition Space (인터랙티브 전시 환경에서 개인화 마케팅 서비스를 위한 모바일 프레임워크 설계)

  • Bae, Jong-Hwan;Sho, Su-Hwan;Choi, Lee-Kwon
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.1
    • /
    • pp.59-69
    • /
    • 2012
  • The exhibition industry, one of the government's 17 new growth engines, is linked to other industries such as tourism, transportation, and finance, and thus has a significant ripple effect on them. Exhibition is a knowledge-intensive, eco-friendly, and high value-added industry. Over 13,000 exhibitions are held every year around the world, contributing to foreign currency earnings. The exhibition industry is closely related to culture and tourism, can be utilized in local and national development strategies, and can improve national brand image as well. Many countries make various efforts to invigorate the exhibition industry by arranging related laws and support systems. In Korea, more than 200 exhibitions are held every year, but only two or three are hosted with over 400 exhibitors, and apart from these, most exhibitions have few foreign exhibitors. The main weakness of domestic trade shows is that no agency manages exhibition-related statistics and there is no specific, reliable evaluation. This makes it impossible to provide buyers or sellers with reliable data, hinders the qualitative growth of exhibitions, and thus prevents improvement in the service quality of trade shows. Attracting many visitors (public, buyers, exhibitors) is crucial to the development of the domestic exhibition industry. To attract many visitors, the service quality of exhibitions and visitor satisfaction should be enhanced. For this purpose, a variety of real-time customized services through digital media should be provided, along with services for creating new customers and retaining existing ones. In addition, personalized information services let visitors manage their time and movement efficiently, avoiding the complexity of the exhibition space. 
The exhibition industry can build competitiveness and an industrial foundation by compiling exhibition-related statistics, creating new information, and enhancing research capability. Therefore, this paper deals with customized services delivered to visitors' smart-phones in the exhibition space and designs a mobile framework that enables exhibition devices to interact with other devices. The mobile server framework is composed of multi-server interaction, server, client, and display device components. By building a knowledge pool of the exhibition environment, the data accumulated for each visitor can be served as personalized content; in addition, visitors' reactions feed back into this customized information, forming a cyclic chain structure. The multi-interaction server is designed to handle events, manage the interaction process between exhibition devices and visitors' smart-phones, and manage data. The client is an application running on the visitor's smart-phone and can be driven on a variety of platforms. It serves as the interface presenting customized services to individual visitors and handles event input and output for simultaneous participation. An exhibition device consists of a display system showing contents and information to visitors, an interaction input-output system receiving events from visitors and triggering actions, and a control system connecting the two. The proposed mobile framework provides individual visitors with customized, active services using their information profiles and accumulated knowledge. A user participation service is also suggested, using the interaction connection system among server, client, and exhibition devices. The suggested mobile framework could be applied to cultural industries such as performances, shows, and exhibitions. 
Thus, it lays a foundation for improving visitor participation in exhibitions and for developing the exhibition industry by raising visitors' interest.
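The server-side event flow described above can be sketched as follows: an interaction server routes events between exhibition devices and visitor clients, accumulating a per-visitor profile (the "knowledge pool") that drives personalization. All class and method names are illustrative, not from the paper.

```python
# Illustrative sketch of the framework's event flow: the server handles
# events, stores per-visitor data, and returns personalized responses.

class InteractionServer:
    def __init__(self):
        self.profiles = {}   # knowledge pool: per-visitor accumulated events
        self.handlers = {}   # event type -> handler function

    def on(self, event_type, handler):
        """Register a handler for one event type."""
        self.handlers[event_type] = handler

    def handle(self, visitor_id, event_type, payload):
        """Record the event in the visitor's profile and dispatch it."""
        self.profiles.setdefault(visitor_id, []).append((event_type, payload))
        handler = self.handlers.get(event_type)
        return handler(self.profiles[visitor_id], payload) if handler else None

server = InteractionServer()
# A personalization handler sees the visitor's whole profile plus the event
server.on("enter_booth", lambda profile, p: f"welcome to {p['booth']}")

print(server.handle("visitor-1", "enter_booth", {"booth": "robotics"}))
```

The cyclic chain structure in the paper corresponds to each handler reading the accumulated profile when producing its response, so past reactions shape future content.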

A Study on Users' Resistance toward ERP in the Pre-adoption Context (ERP 도입 전 구성원의 저항)

  • Park, Jae-Sung;Cho, Yong-Soo;Koh, Joon
    • Asia pacific journal of information systems
    • /
    • v.19 no.4
    • /
    • pp.77-100
    • /
    • 2009
  • Information systems (IS) are an essential tool for any organization. The last decade has seen an increasing body of knowledge on IS usage, yet IS often fail because of misuse or non-use. In general, decisions regarding the selection of a system, which involve the evaluation of many IS vendors and an enormous initial investment, are made not through the consensus of employees but through top-down decision making by top managers. When the selected system does not satisfy the needs of the employees, forced use of the selected IS will only result in their resistance to it. Many organizations have been either integrating legacy systems dispersed like an archipelago or adopting a new ERP (Enterprise Resource Planning) system to enhance employee efficiency. This study examines user resistance prior to the adoption of a selected IS or ERP system. As such, it identifies the importance of managing organizational resistance that may appear in the pre-adoption context of an integrated IS or ERP system, explores key factors influencing user resistance, and investigates how prior experience with other integrated IS or ERP systems may change the relationship between the affecting factors and user resistance. The study focuses on organizational members' resistance and its affecting factors in the pre-adoption context of an integrated IS or ERP system, rather than in the context of ERP adoption itself or ERP post-adoption. Based on prior literature, this study proposes a research model with six key variables: perceived benefit, system complexity, fitness with existing tasks, attitude toward change, the psychological reactance trait, and perceived IT competence. These are considered independent variables affecting user resistance toward an integrated IS or ERP system. 
This study also introduces prior experience (i.e., whether a user has prior experience with an integrated IS or ERP system) as a moderating variable to examine the impact of perceived benefit and attitude toward change on user resistance. We thus propose eight hypotheses with respect to the model. For empirical validation of the hypotheses, we developed instruments for each research variable based on prior literature and surveyed 95 professional researchers and administrative staff of the Korea Photonics Technology Institute (KOPTI). We examined the organizational characteristics of KOPTI, the reasons behind its adoption of an ERP system, process changes caused by the introduction of the system, and employees' resistance/attitude toward the system at the time of introduction. The results of the multiple regression analysis suggest that, among the six variables, perceived benefit, complexity, attitude toward change, and the psychological reactance trait significantly influence user resistance. These results further suggest that top management should manage the psychological states of their employees in order to minimize their resistance to the forced IS, even in the pre-adoption context of a new system. In addition, the moderating variable, prior experience, was found to change the strength of the relationship between attitude toward change and system resistance. That is, the effect of attitude toward change on user resistance was significantly stronger in those with prior experience than in those without. This result implies that users with prior experience should be identified and provided with some type of attitude training or change management program to minimize their resistance to the adoption of a system. This study contributes to the IS field by providing practical implications for IS practitioners.
This study identifies system resistance stimuli of users, focusing on the pre-adoption context in a forced ERP system environment. We have empirically validated the proposed research model by examining several significant factors affecting user resistance against the adoption of an ERP system. In particular, we find a clear and significant role of the moderating variable, prior ERP usage experience, in the relationship between the affecting factors and user resistance. The results of the study suggest the importance of appropriately managing the factors that affect user resistance in organizations that plan to introduce a new ERP system or integrate legacy systems. Moreover, this study offers to practitioners several specific strategies (in particular, the categorization of users by their prior usage experience) for alleviating the resistant behaviors of users in the process of the ERP adoption before a system becomes available to them. Despite the valuable contributions of this study, there are also some limitations which will be discussed in this paper to make the study more complete and consistent.
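The moderation analysis described above (prior experience strengthening the attitude-resistance link) can be sketched as a regression with an interaction term. The data below are synthetic and the coefficients illustrative; the study itself used survey data from 95 KOPTI employees.

```python
# Sketch of a moderated regression: resistance regressed on attitude
# toward change, a prior-experience dummy, and their interaction.
# A significant interaction coefficient indicates moderation.
# All data here are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 200
attitude = rng.normal(0, 1, n)     # attitude toward change
prior = rng.integers(0, 2, n)      # 1 = prior ERP/integrated-IS experience
# Simulate a stronger (more negative) attitude effect for experienced users
resistance = 1.0 - 0.3 * attitude - 0.5 * attitude * prior + rng.normal(0, 0.5, n)

# Design matrix: intercept, main effects, interaction term
X = np.column_stack([np.ones(n), attitude, prior, attitude * prior])
beta, *_ = np.linalg.lstsq(X, resistance, rcond=None)
print("intercept, attitude, prior, attitude*prior:", np.round(beta, 2))
```

Recovering a clearly negative interaction coefficient is the statistical signature of the moderation the study reports: attitude matters for everyone, but substantially more for users with prior experience.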

Evaluation of CODsed Analytical Methods for Domestic Freshwater Sediments: Comparison of Reliability and Correlationship between CODMn and CODCr Methods (국내 담수퇴적물의 CODsed 분석방법 평가: CODMn법과 CODCr법의 신뢰성 및 상관성 비교)

  • Choi, Jiyeon;Oh, Sanghwa;Park, Jeong-Hun;Hwang, Inseong;Oh, Jeong-Eun;Hur, Jin;Shin, Hyun-Sang;Huh, In-Ae;Kim, Young-Hoon;Shin, Won Sik
    • Journal of Environmental Science International
    • /
    • v.23 no.2
    • /
    • pp.181-192
    • /
    • 2014
  • In Korea, the chemical oxygen demand ($COD_{sed}$) in freshwater sediments has been measured by the potassium permanganate method used for marine sediment because of the absence of an authorized analytical method. However, this method has not been fully verified for freshwater sediment. Therefore, the use or modification of the potassium permanganate method, or the development of a new $COD_{sed}$ analytical method, may be necessary. In this study, two modified $COD_{sed}$ analytical methods were compared: a modified potassium permanganate method for $COD_{Mn}$ and a modified closed reflux method using potassium dichromate for $COD_{Cr}$. In a preliminary experiment to estimate the capability of the two oxidants for glucose oxidation, $COD_{Mn}$ and $COD_{Cr}$ were about 70% and 100% of the theoretical oxygen demand (ThOD), respectively, indicating that $COD_{Cr}$ was very close to the ThOD. The effective titration ranges in $COD_{Mn}$ and $COD_{Cr}$ were 3.2 to 7.5 mL and 1.0 to 5.0 mL for glucose, 4.3 to 7.5 mL and 1.4 to 4.3 mL for lake sediment, and 2.5 to 5.8 mL and 3.6 to 4.5 mL for river sediment, respectively, within 10% error. To estimate $COD_{sed}$ recovery (%) in glucose-spiked sediment after aging for 1 day, the mass balances of $COD_{Mn}$ and $COD_{Cr}$ among glucose, sediments, and glucose-spiked sediments were compared. The recoveries of $COD_{Mn}$ and $COD_{Cr}$ were 78% and 78% in glucose-spiked river sediments, 91% and 86% in glucose-spiked lake sediments, 97% and 104% in glucose-spiked sand, and 134% and 107% in glucose-spiked clay, respectively. In conclusion, both methods have high confidence levels in terms of analytical methodology but show significantly different $COD_{sed}$ concentrations due to differences in the oxidation power of the oxidants.
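The ThOD benchmark used above follows from glucose's complete-oxidation stoichiometry, $C_6H_{12}O_6 + 6O_2 \rightarrow 6CO_2 + 6H_2O$, which fixes the oxygen demand per gram of glucose:

```python
# Theoretical oxygen demand (ThOD) of glucose, the benchmark against
# which the ~70% (permanganate) and ~100% (dichromate) recoveries
# reported above are measured: C6H12O6 + 6 O2 -> 6 CO2 + 6 H2O.

M_GLUCOSE = 6 * 12.011 + 12 * 1.008 + 6 * 15.999  # g/mol
M_O2 = 2 * 15.999                                 # g/mol

thod = 6 * M_O2 / M_GLUCOSE   # ≈ 1.066 g O2 per g glucose
print(round(thod, 3))

# Oxidant capability as reported in the preliminary experiment
cod_mn = 0.70 * thod   # permanganate recovers ~70% of ThOD
cod_cr = 1.00 * thod   # dichromate is very close to ThOD
```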

Development of an Integrated General Model (IGM) System for Comparison of Genetic Gains from Different Bull Selection Strategies for Korean Brown Cattle (Hanwoo)

  • Lee, Jeong-Soo;Kim, Hee-Bal;Kim, Si-Dong
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.24 no.11
    • /
    • pp.1483-1503
    • /
    • 2011
  • To advance the effectiveness of the current Hanwoo improvement system, we developed a general simulation system that compares a series of breeding schemes under realistic user circumstances. We call this system the Integrated General Model (IGM); it allows users to control the breeding schemes and selection methods by manipulating the input parameters. The Current Hanwoo Performance and Progeny Test (CHPPT) scheme was simulated alongside a Modified Hanwoo Performance and Progeny Test (MHPPT) scheme using the Hanwoo Breeding Farm cow population of the Livestock Improvement Main Center (LOMC) of the National Agricultural Cooperatives Federation (NACF). For comparison with these two schemes, a new method, the Simple Hanwoo Performance Test (SHPT), which uses ultrasound technology to measure the carcass traits of live animals, was developed. These three models, including the CHPPT, incorporated three types of selection criteria: phenotype (PH), true breeding value (TBV), and estimated breeding value (EBV). The simulation was scheduled to mimic an actual Hanwoo breeding program; thus, it was run over the years 1983-2020 for each breeding method and replicated 10 times. The parameters for the simulation were derived from the literature. Approximately 642,000 animals were simulated per replication for the CHPPT scheme, 129,000 for the MHPPT scheme, and 112,000 for the SHPT scheme. Throughout the 38-year simulation, all estimated parameters of each simulated population, regardless of population size, were similar to the input parameters. The deviations between input and output values for the parameters in the large populations were statistically acceptable.
In this study, we integrated three simulated models, including the CHPPT, in an attempt to achieve the greatest genetic gains within major economic traits including body weight at 12 months of age (BW12), body weight at 24 months of age (BW24), average daily gain from 6 to 12 months (ADG), carcass weight (CWT), carcass longissimus muscle area (CLMA), carcass marbling score (CMS), ultrasound scanned longissimus muscle area (ULMA), and ultrasound scanned marbling score (UMS).
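The comparison of selection criteria (PH vs. TBV vs. EBV) rests on the fact that selecting on phenotype captures only part of an animal's genetic merit. A minimal sketch of one round of truncation selection with illustrative parameters (EBV is omitted, since it would require a BLUP-style genetic evaluation):

```python
# Sketch of comparing selection criteria: genetic gain from one round
# of truncation selection on phenotype (PH) vs. true breeding value
# (TBV). Heritability and population sizes are illustrative.
import random

random.seed(1)
h2 = 0.4                      # heritability of the trait
n, n_selected = 10000, 500    # population size, animals kept

animals = []
for _ in range(n):
    tbv = random.gauss(0, h2 ** 0.5)                # genetic component
    phen = tbv + random.gauss(0, (1 - h2) ** 0.5)   # + environmental noise
    animals.append((tbv, phen))

def gain(key):
    """Mean TBV of the animals ranked best by the given criterion."""
    top = sorted(animals, key=key, reverse=True)[:n_selected]
    return sum(a[0] for a in top) / n_selected

print("gain selecting on TBV:      ", round(gain(lambda a: a[0]), 3))
print("gain selecting on phenotype:", round(gain(lambda a: a[1]), 3))
```

Selecting directly on TBV always yields the larger gain; phenotypic selection is diluted by environmental noise in proportion to heritability, which is why breeding-value estimation matters in schemes like those compared here.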

Exploring How to Develop Teaching & Learning Materials to Create New Problems for Invention ('문제 만들기' 활동을 통한 발명 교수·학습자료 개발 방향 탐색)

  • Kang, Kyoung-Kyoon;Lee, Gun-hwan;Park, Seong-Won
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.9
    • /
    • pp.290-301
    • /
    • 2017
  • This research aimed to develop problem-creating worksheets as teaching & learning materials for problem-solving activities and to assess their effectiveness. Activity worksheets for creative problem development were established, and the effectiveness of the problem-creating classes taught to students gifted in invention was evaluated. In addition, effective strategies for encouraging problem creating and question making in teaching & learning processes were explored. The creative problem-identification activity consisted of five steps, including idea creation, convergence, execution, and evaluation. The results showed that elementary and middle school students taught in classes using this problem-identification worksheet were highly satisfied with the program. This study concluded that creating a mature social atmosphere and an educational environment that motivates students to keep asking questions and identifying problems requires government-level collaboration and support. Through continual modification, ongoing efforts to increase the credibility and quality of the worksheets as a creative problem-solving learning tool will be needed.