• Title/Summary/Keyword: Problem analysis


Characteristics and Changes in Scientific Empathy during Students' Productive Disciplinary Engagement in Science (학생들의 생산적 과학 참여에서 발현되는 과학공감의 특성과 변화 분석)

  • Yang, Heesun; Kang, Seong-Joo
    • Journal of The Korean Association For Science Education, v.44 no.1, pp.11-27, 2024
  • This study aimed to investigate the role of scientific empathy in influencing students' productive disciplinary engagement in scientific activities and analyze the key factors of scientific empathy that manifest during this process. Twelve fifth-grade students were divided into three subgroups based on their general empathic abilities. Lessons promoting productive disciplinary engagement, integrating design thinking processes, were conducted. Subgroup discourse analysis during idea generation and prototype stages, two of five problem-solving steps, enabled observation of scientific empathy and practice aspects. The results showed that applying scientific empathy effectively through design thinking facilitated students' productive disciplinary engagement in science. In the idea generation stage, we observed an initial increase followed by a decrease in scientific empathy and practice utterances, while during the prototyping stage, utterance frequency increased, particularly in the later part. However, subgroups with lower empathic abilities displayed decreased discourse frequency in scientific empathy and practice during the prototype stage due to a lack of collaborative communication. Across all empathic ability levels, the students articulated all five key factors of scientific empathy through their utterances in situations involving productive science engagement. In the high empathic ability subgroup, empathic understanding and concern were emphasized, whereas in the low empathic ability subgroup, sensitivity, scientific imagination, and situational interest, factors of empathizing with the research object, were prominent. These results indicate that experiences of scientific empathy with research objects, beyond general empathetic abilities, serve as a distinct and crucial factor in stimulating diverse participation and sustaining students' productive engagement in scientific activities during science classes. By suggesting the potential multidimensional impact of scientific empathy on productive disciplinary engagement, this study contributes to discussions on the theoretical structure and stability of scientific empathy in science education.
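The frequency results above rest on tallying coded utterances by lesson stage and subgroup. A minimal sketch of that tallying step with pandas; the records below are synthetic stand-ins, not the study's data, and the real input would be the transcribed, coded subgroup discourse:

```python
# Tally coded scientific-empathy utterances by stage and subgroup.
import pandas as pd

utterances = pd.DataFrame({
    "subgroup": ["high", "high", "mid", "low", "low", "mid"],
    "stage": ["idea", "prototype", "idea", "prototype", "idea", "prototype"],
    "code": ["empathic_understanding", "sensitivity", "situational_interest",
             "scientific_imagination", "empathic_concern", "sensitivity"],
})

# Utterance frequency per stage and subgroup, as tracked across the lesson.
freq = utterances.groupby(["stage", "subgroup"]).size().unstack(fill_value=0)
print(freq)
```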

The Development and Validation of a Core Competency Scale for Startup Talent : Focusing on ICT Sector Employees (스타트업 핵심인재 역량 척도 개발 및 타당화 : 정보통신기술(ICT)분야 종사자를 대상으로)

  • Han, Chae-yeon; Ha, Gyu-young
    • Journal of Venture Innovation, v.7 no.3, pp.183-228, 2024
  • This study aimed to develop a competency evaluation scale tailored to the specific needs of key talent in the ICT startup sector. Existing competency assessment tools are mostly designed for large corporations or traditional small and medium-sized enterprises and fail to adequately reflect the dynamic requirements of rapidly evolving startups. For startups, where a small number of individuals directly impact company success, key talent is a critical asset. Accordingly, this study sought to create a scale that measures the competencies suited to the challenges and opportunities faced by startups, helping domestic startups establish more effective talent management strategies. The research initially selected 71 items through a literature review and in-depth interviews. Based on expert feedback that emphasized the need for more precise and clear descriptions, the item descriptions were revised, and a total of 65 items were developed through four rounds of content validation. Following preliminary and main surveys, a final set of 58 items was developed. The main survey conducted further factor analysis based on the three broad competency factors (knowledge, skills, and attitude) identified in the preliminary survey. As a result, 10 latent factors emerged: 6 items for task comprehension, 6 items for practical experience (tacit knowledge), 6 items for collaboration, 9 items for management and problem-solving, 9 items for practical skills, 4 items for self-direction, 5 items for goal orientation, 5 items for adaptability, 5 items for relationship orientation, and 3 items for organizational loyalty. The developed scale comprehensively covers the multifaceted nature of competencies, allowing for a thorough evaluation of essential skills such as technical ability, teamwork, innovation, and leadership, which are critical for startups. The scale therefore provides a tool that helps startup managers objectively and accurately assess candidates' competencies. It also supports the growth of employees within startups, maximizing overall organizational performance. By utilizing this tool, startups can build a strong internal talent pool and continuously enhance employees' competencies, thereby strengthening organizational competitiveness. In conclusion, the competency evaluation scale developed in this study is a customized tool that aligns with the characteristics of startups and plays a crucial role in securing sustainable competitiveness in rapidly changing market environments. Additionally, it offers practical guidance to support the successful growth of domestic startups and help them maintain their competitive edge in the market, contributing to the development of the startup ecosystem and the growth of the national economy.
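A minimal sketch of the factor-analysis step described above, run on synthetic Likert-style responses. Only the 58-item, 10-factor structure follows the abstract; the respondent count, seed, and data are illustrative stand-ins:

```python
# Exploratory factor analysis on survey item responses (synthetic data).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents, n_items = 400, 58
responses = rng.integers(1, 6, size=(n_respondents, n_items)).astype(float)

fa = FactorAnalysis(n_components=10, random_state=0)
fa.fit(responses)

# Loadings matrix: items x factors. Items loading strongly on one factor
# are grouped into a subscale (e.g., task comprehension, collaboration).
loadings = fa.components_.T
for j in range(10):
    top = np.argsort(-np.abs(loadings[:, j]))[:3]
    print(f"factor {j}: top items {top.tolist()}")
```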

Exhibition Hall Lighting Design that Fulfill High CRI Based on Natural Light Characteristics - Focusing on CRI Ra, R9, R12 (자연광 특성 기반 고연색성 실현 전시관 조명 설계 - CRI Ra, R9, R12를 중심으로)

  • Ji-Young Lee; Seung-Teak Oh; Jae-Hyun Lim
    • Journal of Internet Computing and Services, v.25 no.4, pp.65-72, 2024
  • To faithfully represent the intention of a work in an exhibition space, lighting that provides color reproduction as high as natural light is required. Many lighting technologies have therefore been introduced to improve CRI, but most of them were evaluated only on the general color rendering index (CRI Ra), which considers eight pastel colors. Natural light provides excellent color rendering for all colors, including the red and blue expressed by the color rendering indices R9 and R12, but most artificial lighting has markedly lower R9 and R12 performance than natural light. Lighting technology that provides CRI at the level of natural light is required to realistically express the colors of works, including primary colors, but related research remains scarce. Therefore, this paper proposes exhibition hall lighting that fulfills high CRI, focusing on CRI Ra, R9, and R12, based on the characteristics of natural light. First, reinforcement wavelength bands for improving R9 and R12 are selected through analysis of measured SPDs of natural and artificial lighting. Virtual SPDs with a peak wavelength within the reinforcement wavelength band are then created, and SPD combination conditions that satisfy CRI Ra ≥ 95 and R9, R12 ≥ 90 are derived through combination simulation with a commercial LED light source. On this basis, two light sources with peak wavelengths of 405 nm and 630 nm, which had the greatest impact on improving R9 and R12, are specified; exhibition hall lighting applying these together with two W/C white LEDs is designed, and a control index DB for the lighting is constructed. Experiments with the proposed method showed that high CRI at the level of natural light could be achieved, with average CRI Ra 96.5, R9 96.2, and R12 94.0 under illuminance of 300-1,000 lux and color temperature of 3,000-5,000 K.
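The combination simulation described above can be sketched as weighted mixing of component LED SPDs followed by a check against the CRI targets. In this sketch the SPDs are Gaussian stand-ins and cri_metrics() is a placeholder for a real CIE 13.3 computation (in practice, measured spectra and a colorimetry library would supply both):

```python
# Grid-search over SPD mixing weights against CRI targets (sketch).
import itertools
import numpy as np

wavelengths = np.arange(380, 781, 5)  # nm
# Gaussian stand-ins for component SPDs (e.g., 405 nm and 630 nm peaks
# plus a warm/cool white pair), in place of measured spectra.
def led(peak, width=20):
    return np.exp(-0.5 * ((wavelengths - peak) / width) ** 2)
components = [led(405), led(630), led(450, 60), led(560, 80)]

def cri_metrics(spd):
    """Placeholder: return (Ra, R9, R12) for an SPD.

    A real implementation would follow CIE 13.3 using the test color
    samples; dummy values are returned here just to make the loop run.
    """
    return 95.0, 90.0, 90.0

passing = []
for w in itertools.product(np.linspace(0, 1, 5), repeat=len(components)):
    if not any(w):
        continue
    spd = sum(wi * c for wi, c in zip(w, components))
    ra, r9, r12 = cri_metrics(spd)
    if ra >= 95 and r9 >= 90 and r12 >= 90:
        passing.append((w, ra, r9, r12))
print(f"{len(passing)} weight combinations meet the targets")
```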

A Study on the long-term Hemodialysis patient's hypotension and prevention of Blood loss in coil during the Hemodialysis (장기혈액투석환자의 투석중 혈압하강과 Coil내 혈액손실 방지를 위한 기초조사)

  • 박순옥
    • Journal of Korean Academy of Nursing, v.11 no.2, pp.83-104, 1981
  • Hemodialysis is an essential treatment for the long-term care of chronic renal failure patients and for patient management before and after kidney transplantation. It sustains the lives of end-stage renal failure patients who did not improve despite a strict regimen, and it is furthermore essential for maintaining civil life. Nursing implementation in hemodialysis may have a significant effect on the patient's life. The purpose of this study was to obtain basic data for addressing the hypotension that patients may encounter and the blood loss, caused by incomplete rinsing of blood from the coil throughout the hemodialysis process, that affects hemodialysis patients' anemic state. The subjects were 44 patients treated with hemodialysis 691 times in the hemodialysis unit. The data were collected at Gang Nam St. Mary's Hospital from January 1, 1981 to April 30, 1981 by direct observation, clinical laboratory tests, and body weight measurement, and were analyzed by chi-square test, t-test, and analysis of variance. The results were as follows. A. On clinical laboratory data and other data by dialysis procedure: the average initial body weight was 2.37±0.97 kg, and the average body weight after every dialysis was 2.33±0.9 kg. The subjects' average hemoglobin was 7.05±1.93 g/dl and average hematocrit was 20.84±3.82%. Average initial blood pressure was 174.03±23.75 mmHg, and after dialysis 158.45±25.08 mmHg. The subjects' average blood loss due to blood sampling for laboratory data was 32.78±13.49 cc/month, and their average blood replacement for blood complementation was 1.31±0.88 pints/month per patient. B. On the hypotensive state and the coping approaches: the occurrence rate of hypotension was 28.08%, i.e., 194 cases among 691 dialyses. 1. By initial blood pressure, the largest group (36.6%) was 150-179 mmHg, and by degree of hypotension during dialysis, the largest group (28.9%) was 40-50 mmHg. When the initial blood pressure was under 180 mmHg, clinical symptoms appeared in 59.8% of the group with hypotension above 20 mmHg; when it was above 180 mmHg, clinical symptoms appeared in 34.2% of the group with hypotension above 40 mmHg. The higher the initial blood pressure, the stronger the degree of hypotension, and these differences were statistically significant (P=0.0000). 2. Of the times at which hypotension occurred, "after 3 hrs" accounted for 29.4%; the longer the dialysis proceeded, the stronger the degree of hypotension, and these differences were statistically significant (P=0.0142). 3. Among the symptoms observed, sweating and flushing accounted for 43.3%, and yawning and dizziness for 37.6%; these are accordingly the important symptoms implying hypotension during hemodialysis. The stages of coping with hypotension were as follows: 45.9% recovered after the blood flow rate was reduced from 200 cc/min to 100 cc/min and the venous pressure reduced to 0-30 mmHg; 33.51% recovered after adjustment of the blood flow rate and infusion of 300 cc of 0.9% normal saline; 4.1% recovered after infusion of over 300 cc of 0.9% normal saline; 3.6% after norepinephrine, 5.7% after blood transfusion, and 7.2% after albumin. The stronger the symptoms observed with hypotension, the more treatments were required for recovery, and these differences were statistically significant (P=0.0000).
C. On the effects of changes of blood pressure and osmolality by albumin and hemofiltration: 1. Changes of blood pressure in the group that did not require treatment for hypotension and the group that required treatment averaged 21.5 mmHg and 44.82 mmHg; the difference was larger in the latter and statistically significant (P=0.002). Changes of osmolality averaged 12.65 mOsm and 17.57 mOsm; the difference was larger in the latter but not statistically significant (P=0.323). 2. Changes of blood pressure in the albumin-infused group and the group not requiring treatment averaged 30 mmHg and 21.5 mmHg, with no statistically significant difference (P=0.503). Changes of osmolality averaged 5.63 mOsm and 12.65 mOsm; the difference was smaller in the former but not statistically significant (P=0.287). Changes of blood pressure in the albumin-infused group and the group requiring treatment averaged 30 mmHg and 44.82 mmHg; the difference was smaller in the former but not significant (P=0.061). Changes of osmolality averaged 8.63 mOsm and 17.59 mOsm; the difference was smaller in the former but not statistically significant (P=0.093). 3. Changes of blood pressure in the hemofiltration group and the group not requiring treatment averaged 22 mmHg and 21.5 mmHg, with no statistically significant difference (P=0.320). Changes of osmolality averaged 0.4 mOsm and 12.65 mOsm; smaller in the former but not statistically significant (P=0.199). Changes of blood pressure in the hemofiltration group and the group requiring treatment averaged 22 mmHg and 44.82 mmHg; the difference was smaller in the former and statistically significant (P=0.035). Changes of osmolality averaged 0.4 mOsm and 17.59 mOsm; smaller in the former but not statistically significant (P=0.086). D. On the changes of body weight and blood pressure between the hemofiltration and hemodialysis groups: 1. Changes of body weight in the hemofiltration and hemodialysis groups averaged 3.340 and 3.320, with no statistically significant difference (P=0.185), but the comparison of the standard deviations of the body weight differences was statistically significant (P=0.0000). Changes of blood pressure in the two groups averaged 17.81 mmHg and 19.47 mmHg, with no statistically significant difference (P=0.119), but the comparison of the standard deviations of the blood pressure differences was statistically significant (P=0.0000). E. On the blood infusion method in the coil after hemodialysis and methods for reducing residual blood loss in the coil: 1. Comparing and analyzing the Hct of residual blood in the coil by the factors influencing the blood infusion method, infusion of 200 cc of saline reduced residual blood in the coil in the quantitative comparison of 0 cc, 50 cc, 100 cc, and 200 cc of saline, and the differences were statistically significant (P < 0.001).
The shaking-coil method reduced residual blood in the coil compared with the non-shaking method, a statistically significant difference (P < 0.05). Adjusting the pressure in the coil to 0 mmHg reduced residual blood compared with 200 mmHg, also statistically significant (P < 0.001). 2. Comparing the ten blood infusion methods in the coil with respect to each factor, there was little difference within the group choosing 100 cc saline infusion with the coil at 0 mmHg; the measured blood loss averaged 13.49 cc. The shaking-coil method with 50 cc saline infusion while holding the coil pressure at 0 mmHg was the most effective in reducing residual blood; the measured blood loss averaged 15.18 cc.
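The analyses reported above rely on chi-square tests, t-tests, and analysis of variance. A minimal sketch of those calls with scipy on illustrative stand-in numbers; the 1981 patient-level data are not reproduced here:

```python
# Chi-square, paired t-test, and one-way ANOVA with scipy (sketch).
import numpy as np
from scipy import stats

# Chi-square: hypotension episodes vs. initial blood-pressure group
# (counts are hypothetical, chosen only to show the call).
observed = np.array([[40, 31], [58, 65]])  # rows: BP group; cols: hypotension yes/no
chi2, p, dof, expected = stats.chi2_contingency(observed)
print(f"chi-square p = {p:.4f}")

# Paired t-test: blood pressure before vs. after dialysis,
# simulated around the means reported in the abstract.
before = np.random.normal(174.03, 23.75, 44)
after = np.random.normal(158.45, 25.08, 44)
t, p = stats.ttest_rel(before, after)
print(f"t-test p = {p:.4f}")

# One-way ANOVA: e.g., residual blood (Hct) across saline rinse volumes.
g0, g50, g100, g200 = (np.random.normal(m, 3, 20) for m in (30, 25, 20, 15))
f, p = stats.f_oneway(g0, g50, g100, g200)
print(f"ANOVA p = {p:.4f}")
```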


Geological Engineering Considerations for the Preservation of the Royal Tomb of King Muryeong (무령왕릉보존에 있어서의 지질공학적 고찰)

  • 서만철; 최석원; 구민호
    • Proceedings of the KSEEG Conference, 2001.05b, pp.42-63, 2001
  • A detailed survey of the Songsanri tomb site, including the Muryeong royal tomb, was carried out from May 1, 1996 to April 30, 1997. A quantitative analysis was attempted to find the changes in the tombs themselves since the excavation. The main subjects of the survey were to find the causes of infiltration of rainwater and groundwater into the tombs and the tomb site, to monitor the movement and safety of the tomb structures, to find a removal method for the algae inside the tombs, and to design an air-controlling system to solve the high humidity and dew inside the tombs. For these purposes, detailed surveys inside and outside the tombs using an electronic distance meter and a small airplane, monitoring of temperature and humidity, geophysical exploration including electrical resistivity, geomagnetic, gravity, and georadar methods, drilling, measurement of the physical and chemical properties of drill cores, and measurement of groundwater permeability were conducted. We found that the centers of the subsurface tombs and the centers of the soil mounds on the ground are offset by 4.5 m and 5 m for the 5th and 7th tombs, respectively; this has caused unequal stress on the tomb structures. In the 7th tomb (the Muryeong royal tomb), 435 of 6,025 bricks were broken in 1972, but 1,072 bricks were broken in 1996; the number of broken bricks increased to about 250% in just 24 years, and to about 290% in the 6th tomb. The situation in 1996 is the result of just 24 years, whereas the situation in 1972 was the result of about 1,450 years, so the state of brick breakage indicates a severe ongoing problem. The eastern wall of the Muryeong royal tomb is moving toward the inside of the tomb at a rate of 2.95 mm/yr in the rainy season and 1.52 mm/yr in the dry season. The frontal wall shows the biggest movement in the 7th tomb, at 2.05 mm/yr toward the passageway. The 6th tomb shows the biggest movement among the three tombs, at 7.44 mm/yr and 3.61 mm/yr toward the east, consistent with its high brick break rate. Georadar sections of the shallow soil layer reveal several faults in the topsoil layer of the 5th and 7th tombs. Rainwater flowed through these faults into the tombs and the nearby ground, and the high water content of the nearby ground resulted in low resistivity and high humidity inside the tombs. The high humidity, together with high temperature and a moderate light source, created good conditions for algae; the 6th tomb is in the most severe situation and the 7th tomb the second in terms of algae growth. Artificial changes to the tomb environment since the excavation, infiltration of rainwater and groundwater into the tomb site, and a poor drainage system have put the tomb structures in a dangerous state. The main cause of many of the problems, including brick breakage, movement of the tomb walls, and algae growth, is the infiltration of rainwater and groundwater into the tomb site; therefore, protection of the tomb site from high water content should be carried out first. The waterproofing method includes a cover system over the tomb site using geotextile, a clay layer, and a geomembrane, and a deep trench extending 2 m below the base of the 5th tomb at the north of the tomb site. Decreasing and balancing the soil weight above the tombs are also needed for the safety of the tomb structures. For the algae growing inside the tombs, we recommend spraying K101, developed in this study, on the wall surfaces and then exposing them to ultraviolet light sources for 24 hours.
The air-controlling system should be changed to a constant temperature and humidity system for the 6th and 7th tombs; it seems much better to place the system in the frontal room and to circulate cold air inside the tombs to solve the dew problem. The preservation methods above are suggested so as to cause the least change to the tomb site while solving the most fundamental problems. Repair work should be planned in order, and special care is needed for the safety of the tombs during repair. Finally, a monitoring system measuring the tilt of the tomb walls, water content, groundwater level, temperature, and humidity is required to monitor and evaluate the repair work.
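A quick check of the brick break-rate figures quoted above for the 7th tomb (435 of 6,025 bricks broken in 1972 vs. 1,072 in 1996):

```python
# Verify the reported ~250% increase in broken bricks over 24 years.
broken_1972, broken_1996, total = 435, 1072, 6025
print(f"1972 rate: {broken_1972 / total:.1%}")     # ~7.2%
print(f"1996 rate: {broken_1996 / total:.1%}")     # ~17.8%
print(f"ratio: {broken_1996 / broken_1972:.2f}x")  # ~2.46x, i.e. about 250%
```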


Home Economics teachers' concern on creativity and personality education in Home Economics classes: Based on the concerns based adoption model(CBAM) (가정과 교사의 창의.인성 교육에 대한 관심과 실행에 대한 인식 - CBAM 모형에 기초하여-)

  • Lee, In-Sook; Park, Mi-Jeong; Chae, Jung-Hyun
    • Journal of Korean Home Economics Education Association, v.24 no.2, pp.117-134, 2012
  • The purpose of this study was to identify the stages of concern, the levels of use, and the innovation configuration of Home Economics (HE) teachers regarding creativity and personality education in HE classes. Survey questionnaires were sent by mail and e-mail to middle-school HE teachers across the country, selected by systematic sampling and convenience sampling. The stages-of-concern and levels-of-use questionnaires developed by Hall (1987) were used. Data from 187 respondents were used for the final analysis with the SPSS for Windows (12.0) program. The results of the study were as follows. First, for the stages of concern of HE teachers regarding creativity and personality education, the information stage of concern (85.51) was highest, followed in order by the awareness stage (82.15), the management stage (81.88), the refocusing stage (68.80), the collaboration stage (61.97), and the consequence stage (59.76). Second, the levels of use of HE teachers regarding creativity and personality education were highest at the mechanical level (level 3; 21.4%), followed by the orientation level (level 1; 20.9%), the refinement level (level 5; 17.1%), the non-use level (level 0; 15.0%), the preparation level (level 2; 10.2%), the integration level (level 6; 5.9%), the renewal level (level 7; 4.8%), and the routine level (level 4; 4.8%). Third, for the innovation configuration of HE teachers regarding creativity and personality education, more than half of the HE teachers (56.1%) mainly focused on personality education in their HE classes; 31.0% performed both creativity and personality education; a small number of teachers (6.4%) focused on creativity education; and the same number (6.4%) responded that they focused on neither of the two. Examining the level and type of performance the HE teachers applied, the average score for the performance of creativity and personality education was 3.76 out of 5.00; the mean for the creativity component was 3.59 and for the personality component 3.94, higher than the standard. For creativity education, openness/sensitivity (3.97) was performed most, followed by problem-solving skill (3.79), curiosity/interest (3.73), critical thinking (3.63), problem-finding skill (3.61), originality (3.57), analogy (3.47), fluency/adaptability (3.46), precision (3.46), imagination (3.37), and focus/sympathy (3.37). For personality education, the components were performed in the following order, from most to least: power of execution (4.07), cooperation/consideration/justice (4.06), self-management skill (4.04), civic consciousness (4.04), career development ability (4.03), environmental adaptability (3.95), responsibility/ownership (3.94), decision making (3.89), trust/honesty/promise (3.88), autonomy (3.86), and global competency (3.55). Regarding what makes performing creativity and personality education difficult, most HE teachers (64.71%) chose the lack of instructional materials, and 40.11% chose the lack of seminar and workshop opportunities; 38.5% chose the difficulty of developing evaluation criteria or tools, while 25.67% responded that they did not know any means of performing creativity and personality education.
Regarding better ways to support creativity and personality education, the HE teachers chose, in order from most to least: 'expansion of hands-on activities for students related to creativity and personality education' (4.34), 'development of an HE classroom culture putting emphasis on creativity and personality' (4.29), 'a proper curriculum on creativity and personality education that goes along with students' developmental stages' (4.27), 'securing enough human resources and professors who will conduct creativity and personality education' (4.21), 'establishment of the concept and value of education on creativity and personality' (4.09), and 'educational promotion of creativity and personality education supported by local communities and companies' (3.94).
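A minimal sketch of summarizing Stages of Concern profiles like those above: mean intensity per stage across respondents. The scores are synthetic stand-ins for the questionnaire data, and the standard seven CBAM stages are used here, while the abstract reports six of them:

```python
# Group Stages-of-Concern profile from per-teacher stage scores (sketch).
import numpy as np
import pandas as pd

stages = ["awareness", "informational", "personal", "management",
          "consequence", "collaboration", "refocusing"]
rng = np.random.default_rng(1)
# Rows: 187 teachers; columns: intensity score (0-100) per stage.
scores = pd.DataFrame(rng.uniform(40, 100, size=(187, len(stages))),
                      columns=stages)

profile = scores.mean().sort_values(ascending=False)
print(profile.round(2))  # the highest stage is the group's dominant concern
```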


A Study on Interactions of Competitive Promotions Between the New and Used Cars (신차와 중고차간 프로모션의 상호작용에 대한 연구)

  • Chang, Kwangpil
    • Asia Marketing Journal, v.14 no.1, pp.83-98, 2012
  • In a market where new and used cars compete with each other, we would run the risk of obtaining biased estimates of the cross elasticity between them if we focused only on new cars or only on used cars. Unfortunately, most previous studies of the automobile industry have focused only on new car models, without taking into account the effect of used cars' pricing policy on new cars' market shares and vice versa, resulting in inadequate prediction of reactive pricing in response to competitors' rebates or price discounts. There are some exceptions, however. Purohit (1992) and Sullivan (1990) looked into both new and used car markets at the same time to examine the effect of new car model launches on used car prices. But their studies have some limitations in that they employed the average used car prices reported in the NADA Used Car Guide instead of actual transaction prices; some of their conflicting results may be due to this problem in the data. Park (1998) recognized this problem and used actual prices in his study. His work is notable in that he investigated the qualitative effect of new car model launches on the pricing policy of used cars in terms of reinforcement of brand equity. The current work also uses actual prices, like Park (1998), but explores the quantitative aspect of competitive price promotion between new and used cars of the same model. In this study, I develop a model that assumes that the cross elasticity between new and used cars of the same model is higher than that between new and used cars of different models. Specifically, I apply the nested logit model, which assumes car model choice at the first stage and the choice between new and used cars at the second stage. This proposed model is compared to the IIA (Independence of Irrelevant Alternatives) model, which assumes no decision hierarchy, so that new and used cars of different models are all substitutable at the first stage. The data for this study are drawn from the Power Information Network (PIN), an affiliate of J.D. Power and Associates. PIN collects sales transaction data from a sample of dealerships in the major metropolitan areas of the U.S. These are retail transactions, i.e., sales or leases to final consumers, excluding fleet sales and including both new and used car sales. Each observation in the PIN database contains the transaction date, the manufacturer, model year, make, model, trim and other car information, the transaction price, consumer rebates, the interest rate, term, amount financed (when the vehicle is financed or leased), etc. I used data for compact cars sold during January-June 2009. The new and used cars of the top nine selling models are included in the study: Mazda 3, Honda Civic, Chevrolet Cobalt, Toyota Corolla, Hyundai Elantra, Ford Focus, Volkswagen Jetta, Nissan Sentra, and Kia Spectra. These models accounted for 87% of category unit sales. Empirical application of the nested logit model showed that the proposed model outperformed the IIA model in both calibration and holdout samples. The other comparison model, which assumes the choice between new and used cars at the first stage and car model choice at the second stage, turned out to be mis-specified, since the dissimilarity parameter (i.e., the inclusive or category value parameter) was estimated to be greater than 1.
Post hoc analysis based on the estimated parameters was conducted employing the modified Lanczos iterative method. This method is intuitively appealing. For example, suppose a new car offers a certain amount of rebate and gains market share at first. In response to this rebate, the used car of the same model keeps decreasing its price until it regains the lost market share and restores the status quo; the new car settles down to a lowered market share because of the used car's reaction. The method enables us to find the amount of price discount needed to maintain the status quo and the equilibrium market shares of the new and used cars. In the first simulation, I used the Jetta as a focal brand to see how its new and used cars set prices, rebates, or APR interactively, assuming that reacting cars respond to price promotion to maintain the status quo. The simulation results showed that the IIA model underestimates cross elasticities, suggesting a less aggressive used car price discount in response to new cars' rebates than the proposed nested logit model. In the second simulation, I used the Elantra to reconfirm the result for the Jetta and came to the same conclusion. In the third simulation, I had the Corolla offer a $1,000 rebate to see what the best response would be for the Elantra's new and used cars. Interestingly, the Elantra's used car could maintain the status quo by offering a lower price discount ($160) than the new car ($205). Future research might explore the plausibility of alternative nested logit models. For example, the NUB model, which assumes the choice between new and used cars at the first stage and brand choice at the second stage, could be a possibility, even though it was rejected in the current study because of mis-specification (a dissimilarity parameter turned out to be higher than 1). The NUB model may have been rejected because of true mis-specification or because of the data structure transmitted from a typical car dealership, where both new and used cars of the same model are displayed. Because of this, the BNU model, which assumes brand choice at the first stage and the choice between new and used cars at the second stage, may have been favored in the current study, since customers first choose a dealership (brand) and then choose between new and used cars in this market environment. However, if there are dealerships that carry both new and used cars of various models, the NUB model might fit the data as well as the BNU model; which model better describes the data is an empirical question. In addition, it would be interesting to test a probabilistic mixture of the BNU and NUB models on a new data set.
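The two-level nested logit at the core of the study can be sketched directly from its choice probabilities: car model (nest) at the first stage, new vs. used at the second, with a dissimilarity parameter below 1 indicating valid nesting. The utilities below are illustrative numbers, not the paper's estimates:

```python
# Two-level nested logit choice probabilities (sketch).
import numpy as np

def nested_logit_probs(V, lam):
    """V: dict nest -> array of alternative utilities; lam: dissimilarity.

    P(j | nest) = exp(V_j / lam) / sum_k exp(V_k / lam)
    IV_nest     = lam * log(sum_k exp(V_k / lam))
    P(nest)     = exp(IV_nest) / sum_nests exp(IV)
    """
    iv = {n: lam * np.log(np.exp(v / lam).sum()) for n, v in V.items()}
    denom = sum(np.exp(val) for val in iv.values())
    return {n: (np.exp(iv[n]) / denom) * np.exp(v / lam) / np.exp(v / lam).sum()
            for n, v in V.items()}

# Two nests (e.g., Jetta, Elantra), each with [new, used] utilities.
V = {"jetta": np.array([1.0, 0.6]), "elantra": np.array([0.9, 0.8])}
probs = nested_logit_probs(V, lam=0.5)  # lam < 1 => consistent nesting
for nest, p in probs.items():
    print(nest, p.round(3))
```

Raising a new car's utility (a rebate) and re-running shows the used car of the same model losing share fastest, which is exactly the higher within-nest cross elasticity the model is built to capture.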


The Characteristics and Performances of Manufacturing SMEs that Utilize Public Information Support Infrastructure (공공 정보지원 인프라 활용한 제조 중소기업의 특징과 성과에 관한 연구)

  • Kim, Keun-Hwan; Kwon, Taehoon; Jun, Seung-pyo
    • Journal of Intelligence and Information Systems, v.25 no.4, pp.1-33, 2019
  • Small and medium-sized enterprises (hereinafter SMEs) are already at a competitive disadvantage compared to large companies with more abundant resources. Manufacturing SMEs not only need a great deal of information for new product development, sustainable growth, and survival, but also seek networking to overcome their resource limitations; yet they face constraints because of their size. In a new era in which connectivity increases the complexity and uncertainty of the business environment, SMEs are increasingly urged to find information and solve networking problems. To address these problems, government-funded research institutes play an important role in resolving the information asymmetry faced by SMEs. The purpose of this study is to identify the distinguishing characteristics of SMEs that utilize the public information support infrastructure provided to them to enhance their innovation capacity, and to examine how such use contributes to corporate performance. We argue that an infrastructure for providing information support to SMEs is needed as part of the effort to strengthen the role of government-funded institutions; in this study, we specifically identify the target of such a policy and empirically demonstrate the effects of such policy-based efforts. Our goal is to help establish strategies for building the information support infrastructure. To achieve this purpose, we first classified the characteristics of SMEs that have been found to utilize the information support infrastructure provided by government-funded institutions. This allows us to verify whether selection bias appears in the analyzed group, which helps clarify the interpretive limits of our results. Next, we performed mediator and moderator effect analyses for multiple variables to analyze the process through which use of the information support infrastructure improves external networking capabilities and thereby enhances product competitiveness. This analysis helps identify the key factors to focus on when offering indirect support to SMEs through the information support infrastructure, which in turn helps manage research related to SME support policies implemented by government-funded institutions more efficiently. The results of this study are as follows. First, SMEs that used the information support infrastructure showed a significant difference in size compared to domestic R&D SMEs, but no significant difference appeared in the cluster analysis that considered various variables. Based on these findings, we confirmed that SMEs that use the information support infrastructure are larger and include a relatively higher share of companies that transact heavily with large companies, compared to the general population of SMEs. We also found that companies already receiving support from the information infrastructure include a high concentration of companies that need collaboration with government-funded institutions.
Second, among the SMEs that use the information support infrastructure, we found that increasing external networking capabilities contributed to enhancing product competitiveness; this was not an effect of direct assistance but an indirect contribution made by increasing open marketing capabilities, in other words, an indirect-only mediator effect. Also, the number of times a company received additional support in this process, through mentoring related to information utilization, was found to have a mediated moderator effect on improving external networking capabilities and in turn strengthening product competitiveness. The results of this study provide several insights for policy. KISTI's information support infrastructure might suggest that marketing is already well underway, but it intentionally supports groups that are able to achieve good performance; the government should therefore set clear priorities on whether to support underdeveloped companies or to reinforce better performers. Through our research, we have identified how public information infrastructure contributes to product competitiveness, from which some policy implications can be drawn. First, the public information support infrastructure should have the capability to enhance firms' ability to interact with, or find, the experts who provide the required information. Second, if utilization of the public information support (online) infrastructure is effective, it is not necessary to continuously provide informational mentoring as a parallel offline support; rather, offline support such as mentoring should be used as a device for monitoring abnormal symptoms. Third, SMEs should improve their own utilization ability, because the effect of enhancing networking capacity, and through it product competitiveness, via the public information support infrastructure appears in most types of companies rather than only in specific SMEs.
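The indirect-effect analysis described above can be sketched as two regressions plus a bootstrap for the product of paths (infrastructure use → networking capability → product competitiveness). Variable names and data below are illustrative stand-ins, not the study's measures:

```python
# Bootstrap estimate of an indirect (mediation) effect a*b (sketch).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 300
infra_use = rng.normal(size=n)                     # X: infrastructure use
networking = 0.5 * infra_use + rng.normal(size=n)  # M: networking capability
competitiveness = (0.4 * networking + 0.1 * infra_use
                   + rng.normal(size=n))           # Y: product competitiveness

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]  # X -> M path
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]  # M -> Y | X
    return a * b

boot = []
for _ in range(1000):
    idx = rng.integers(0, n, n)  # resample cases with replacement
    boot.append(indirect_effect(infra_use[idx], networking[idx],
                                competitiveness[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
```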

A Study of Equipment Accuracy and Test Precision in Dual Energy X-ray Absorptiometry (골밀도검사의 올바른 질 관리에 따른 임상적용과 해석 -이중 에너지 방사선 흡수법을 중심으로-)

  • Dong, Kyung-Rae; Kim, Ho-Sung; Jung, Woon-Kwan
    • Journal of radiological science and technology, v.31 no.1, pp.17-23, 2008
  • Purpose: Because bone density results differ depending on the equipment environment and on the precision/accuracy of the examiner, quality management must be performed systematically. Equipment failures caused by overload from aging equipment and an increasing number of patients occurred frequently; the resulting replacement of equipment and additional purchase of new bone density equipment caused a compatibility problem in tracking patients. This study examines whether clinical changes in a patient's bone density can be accurately and precisely reflected when new equipment is used interchangeably with the existing equipment after replacement and expansion. Materials and methods: Two GE Lunar Prodigy Advance units (P1 and P2) and a HOLOGIC spine phantom (HSP) were used to measure equipment precision. Each device scanned the phantom 20 times to acquire precision data (Group 1). Examiner precision was measured by scanning the same patient twice, 15 patients on each unit, among 120 women (average age 48.78, 20-60 years old) (Group 2). In addition, examiner precision and cross-calibration data were obtained by scanning the HSP 20 times on each unit, based on the data obtained from daily morning quality control using the aluminum spine phantom (ASP) (Group 3). Finally, each of 120 women (average age 48.78, 20-60 years old) was scanned once on each unit alternately to obtain examiner precision and cross-calibration data (Group 4). Results: The equipment was stable according to the daily QC data, at 0.996 g/cm² with a change value (%CV) of 0.08. In Group 1, the mean±SD and %CV were P1: 1.064±0.002 g/cm², %CV=0.190; P2: 1.061±0.003 g/cm², %CV=0.192. In Group 2, they were P1: 1.187±0.002 g/cm², %CV=0.164; P2: 1.198±0.002 g/cm², %CV=0.163. In Group 3, the average error±2SD and %CV were P1 (spine: 0.001±0.03 g/cm², %CV=0.94; femur: 0.001±0.019 g/cm², %CV=0.96) and P2 (spine: 0.002±0.018 g/cm², %CV=0.55; femur: 0.001±0.013 g/cm², %CV=0.48). In Group 4, the average error±2SD, %CV, and r values were spine: 0.006±0.024 g/cm², %CV=0.86, r=0.995; femur: 0±0.014 g/cm², %CV=0.54, r=0.998. Conclusion: Both the LUNAR ASP %CV and the HOLOGIC spine phantom results fall within the normal error range of ±2% defined by the ISCD. BMD measurements kept relatively constant values, showing excellent repeatability. The phantom has homogeneous characteristics, but it is limited in reflecting clinical factors such as variations in patients' body weight or body fat; even so, quality control using the phantom is useful for detecting mis-calibration of the equipment in use. Comparing the results of Group 3 with Group 4, the values measured twice on one unit and the values cross-measured on the two units were all within the 2SD limits on the Bland-Altman graph, and an r value of 0.99 or higher in linear regression analysis indicated high precision and correlation. Therefore, the two compatible units did not affect patient tracking. Regular testing of the equipment and of examiner capability, followed by appropriate calibration, should be carried out to produce reliable BMD values.
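The precision and agreement figures above reduce to two computations: %CV from repeated scans and Bland-Altman limits for paired measurements on the two units. A minimal sketch on synthetic stand-in values:

```python
# %CV from repeated phantom scans and Bland-Altman limits of agreement.
import numpy as np

rng = np.random.default_rng(7)

# Equipment precision: 20 phantom scans on one unit (BMD in g/cm^2).
phantom = rng.normal(1.064, 0.002, 20)
cv = phantom.std(ddof=1) / phantom.mean() * 100
print(f"%CV = {cv:.3f}")

# Bland-Altman: the same 120 patients measured on units P1 and P2.
p1 = rng.normal(1.0, 0.1, 120)
p2 = p1 + rng.normal(0.005, 0.015, 120)
diff = p2 - p1
bias, sd = diff.mean(), diff.std(ddof=1)
print(f"bias = {bias:.4f}, limits of agreement = "
      f"[{bias - 2 * sd:.4f}, {bias + 2 * sd:.4f}]")
```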


Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu; Alikhanov, Jumabek; Fang, Yang; Ko, Seunghyun; Jo, Geun Sik
    • Journal of Intelligence and Information Systems, v.24 no.1, pp.205-225, 2018
  • The Convolutional Neural Network (ConvNet) is one class of powerful Deep Neural Network that can analyze and learn hierarchies of visual features. The first such neural network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in either industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, however, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, and that breakthrough revived interest in neural networks. The success of the Convolutional Neural Network rests on two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation; the second is the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, gathering a large-scale dataset to train a ConvNet is difficult and requires a lot of effort. Moreover, even with a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (for example, trained on ImageNet) computes the feed-forward activations of an image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, applying features of high dimensional complexity extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Overall, our primary pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and the activation features of the three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple-ConvNet-layer representation, since it carries more information about the image; when the three fully connected layer features are concatenated, the resulting image representation has 9,192 (4096+4096+1000) feature dimensions. However, features extracted from multiple ConvNet layers are redundant and noisy, since they come from the same ConvNet. Thus, in the third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning improves.
To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple-ConvNet-layer representations against single-ConvNet-layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrated the importance of feature selection for the multiple-ConvNet-layer representation. Moreover, our proposed approach achieved 75.6% accuracy compared to 73.9% achieved by the FC7 layer on the Caltech-256 dataset, 73.1% compared to 69.2% achieved by the FC8 layer on the VOC07 dataset, and 52.2% compared to 48.7% achieved by the FC7 layer on the SUN397 dataset. We also showed that our proposed approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively, compared to existing work.
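A minimal sketch of the pipeline described above, assuming torchvision's pretrained AlexNet (torchvision ≥ 0.13 for the weights API) and toy data in place of Caltech-256: hook the three fully connected layers, concatenate their activations into a 9,192-dimensional vector, reduce with PCA, and train a linear classifier. The PCA dimension and classifier choice are illustrative, not the paper's settings:

```python
# Multi-layer AlexNet features + PCA + linear classifier (sketch).
import numpy as np
import torch
import torchvision.models as models
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.eval()

features = {}
def hook(name):
    def fn(module, inp, out):
        features[name] = out.detach().flatten(1)
    return fn

# FC6 and FC7 are indices 1 and 4 of AlexNet's classifier; the final
# 1000-way layer (FC8) is index 6.
model.classifier[1].register_forward_hook(hook("fc6"))
model.classifier[4].register_forward_hook(hook("fc7"))
model.classifier[6].register_forward_hook(hook("fc8"))

def extract(images):
    """Concatenate fc6, fc7, fc8 activations -> 4096+4096+1000 = 9192 dims."""
    with torch.no_grad():
        model(images)
    return torch.cat([features["fc6"], features["fc7"], features["fc8"]],
                     dim=1).numpy()

# Toy stand-in data: 40 random "images", 2 classes.
X = extract(torch.randn(40, 3, 224, 224))
y = np.repeat([0, 1], 20)

# PCA keeps the salient directions before training the classifier.
pca = PCA(n_components=30)  # hypothetical choice; tune on validation data
X_red = pca.fit_transform(X)
clf = LinearSVC().fit(X_red, y)
print(clf.score(X_red, y))
```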