• Title/Summary/Keyword: Data Code

Exploratory Case Study for Key Successful Factors of Product Service System (Product-Service System(PSS) 성공과 실패요인에 관한 탐색적 사례 연구)

  • Park, A-Rum;Jin, Dong-Su;Lee, Kyoung-Jun
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.255-277
    • /
    • 2011
  • Product Service System (PSS), which is an integrated combination of product and service, provides new value to customers and makes companies sustainable as well. The objective of this paper is to draw the Critical Success Factors (CSF) of PSS through a multiple case study. First, we review the various concepts and types in the PSS and platform business literature currently available on this topic. Second, after investigating various cases with the characteristics of PSS and platform business, we select four cases: 'iPod of Apple', 'Kindle of Amazon', 'Zune of Microsoft', and 'e-book reader of Sony'. The four cases are then categorized as successful and failed cases according to the criteria of case selection and PSS classification. We consider two methodologies for the case selection, i.e., 'Strategies for the Selection of Samples and Cases' proposed by Bent (2006) and the seven case selection procedures proposed by Jason and John (2008). For case selection, 'Stratified sample and Paradigmatic cases' is adopted as one of several options for sampling. Then, we use the seven case selection procedures, namely 'typical', 'diverse', 'extreme', 'deviant', 'influential', 'most-similar', and 'most-different', and among them only the three procedures 'diverse', 'most-similar', and 'most-different' are applied for the case selection. For PSS classification, the eight PSS types suggested by Tukker (2004), i.e., 'product related', 'advice and consultancy', 'product lease', 'product renting/sharing', 'product pooling', 'activity management', 'pay per service unit', and 'functional result', are utilized. We categorize the four selected cases as a product-oriented group because the cases not only sell a product but also offer the service needed during the use phase of the product. Then, we analyze the four cases by using the cross-case pattern approach that Eisenhardt (1991) suggested. Eisenhardt (1991) argued that three processes are required to avoid reaching premature or even false conclusions. The first step includes selecting categories or dimensions and finding within-group similarities coupled with intergroup differences. In the second process, pairs of cases are selected and listed; this step forces researchers to find the subtle similarities and differences between cases. The third process is to divide the data by data source. The result of the cross-case pattern analysis indicates that the similarities of iPod and Kindle as successful cases are a convenient user interface, a successful platform strategy, and rich contents. The difference between the successful cases is that, whereas iPod has been recognized as a culture code, Kindle has implemented a low price as its main strategy. Meanwhile, the similarity of Zune and the PRS series as failed cases is the lack of sufficient applications and contents. The difference between the failed cases is that, whereas Zune adopted an undifferentiated strategy, the PRS series pursued a high-price strategy. From the analysis of the cases, we generate three hypotheses. The first hypothesis assumes that a successful PSS system requires a convenient user interface. The second hypothesis assumes that a successful PSS system requires a reciprocal (win/win) business model. The third hypothesis assumes that a successful PSS system requires sufficient quantities of applications and contents. To verify the hypotheses, we use the cross-matching (or pattern matching) methodology, which matches the three key words of the hypotheses (user interface, reciprocal business model, contents) to previous papers related to PSS, digital contents, and Information Systems (IS). Finally, this paper suggests three implications from the analyzed results. A successful PSS system needs to provide differentiated value for customers, such as a convenient user interface, e.g., the simple design of iTunes (iPod) and the provision of a connection to the Kindle Store without any charge. A successful PSS system also requires a mutually beneficial business model, as Apple and Amazon implement a policy that provides reasonable profit sharing for third parties. A successful PSS system requires sufficient quantities of applications and contents.

A Study on Requirement and Degree of the Satisfaction about Cosmeceuticals of Women (우리나라 여성들의 기능성화장품에 대한 요구 및 만족도 연구)

  • Kim Kang-Mi;Kim Ju-Duck
    • Journal of the Society of Cosmetic Scientists of Korea
    • /
    • v.30 no.4 s.48
    • /
    • pp.571-582
    • /
    • 2004
  • Recently, well-being, which is regarded as a new cultural code, has brought a new change to the cosmetic industry. The application of functional products is growing, and many functional cosmetics are now diversifying from skin care into make-up as well as herbal products, so the outlook for the functional cosmetics market is positive. Therefore, cosmetic companies need an approach based on the concept of well-being; that is, they need to understand the needs of customers accurately from the customers' point of view. It is also a crucial issue how to balance the unique characteristics of functional cosmetic products with the development of products based on the concept of well-being. In this study, we attempt to examine the domestic market for functional products, which has advanced with the well-being trend, and to propose a way to advance it further through a customer survey on functional cosmetic products (for example, customers' needs and their satisfaction with the functional products). For this purpose, adult female customers aged 19 to 60 located in Seoul and Gyeonggi province were surveyed, and 379 questionnaires out of 510 were used in the final analysis. The collected data were analyzed using the Statistical Package for the Social Sciences (SPSS) program, which provides information about the general characteristics of the subjects such as frequency and percentage. We used Cronbach's α reliability test, the $\chi^2$ (chi-square) frequency analysis, the t-test, and one-way ANOVA to investigate the customers' needs, their degree of satisfaction with the functional products they use, and the factors of their perception of product quality. We think that the results of our study can serve not only as fundamental data on customers' needs, usage patterns, and degree of satisfaction, but also as important input for planning marketing strategies.
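
As an illustration of the statistical procedures named in this abstract (frequencies, Cronbach's α, chi-square, t-test, one-way ANOVA), the following is a minimal Python sketch using pandas and SciPy in place of SPSS; the column names and generated responses are hypothetical, not the study's data.

```python
# Illustrative sketch of the survey analysis described above, reproduced with
# pandas/SciPy instead of SPSS. Column names and data are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n = 379  # number of usable questionnaires reported in the abstract
survey = pd.DataFrame({
    "age_group": rng.choice(["19-29", "30-39", "40-49", "50-60"], n),
    "uses_cosmeceuticals": rng.choice(["yes", "no"], n),
    "satisfaction_q1": rng.integers(1, 6, n),   # 5-point Likert items
    "satisfaction_q2": rng.integers(1, 6, n),
    "satisfaction_q3": rng.integers(1, 6, n),
})

# General characteristics: frequency and percentage
print(survey["age_group"].value_counts(normalize=True) * 100)

# Cronbach's alpha for the satisfaction scale
items = survey[["satisfaction_q1", "satisfaction_q2", "satisfaction_q3"]]
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))
print("Cronbach's alpha:", round(alpha, 3))

# Chi-square test of independence (age group vs. usage)
chi2, p, dof, _ = stats.chi2_contingency(
    pd.crosstab(survey["age_group"], survey["uses_cosmeceuticals"]))
print("chi-square:", round(chi2, 2), "p =", round(p, 3))

# t-test (users vs. non-users) and one-way ANOVA (across age groups) on satisfaction
total = items.sum(axis=1)
print(stats.ttest_ind(total[survey["uses_cosmeceuticals"] == "yes"],
                      total[survey["uses_cosmeceuticals"] == "no"]))
print(stats.f_oneway(*[total[survey["age_group"] == g]
                       for g in survey["age_group"].unique()]))
```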

Spatio-temporal enhancement of forest fire risk index using weather forecast and satellite data in South Korea (기상 예보 및 위성 자료를 이용한 우리나라 산불위험지수의 시공간적 고도화)

  • KANG, Yoo-Jin;PARK, Su-min;JANG, Eun-na;IM, Jung-ho;KWON, Chun-Geun;LEE, Suk-Jun
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.22 no.4
    • /
    • pp.116-130
    • /
    • 2019
  • In South Korea, forest fire occurrences are increasing in size and duration due to various factors such as the increase in fuel materials and frequent drying conditions in forests. Therefore, it is necessary to minimize the damage caused by forest fires by appropriately providing the probability of forest fire risk. The purpose of this study is to improve the Daily Weather Index (DWI) provided by the current forest fire forecasting system in South Korea. A new Fire Risk Index (FRI) is proposed in this study, which is provided on a 5 km grid through the synergistic use of numerical weather forecast data, satellite-based drought indices, and forest fire-prone areas. The FRI is calculated based on the product of the Fine Fuel Moisture Code (FFMC) optimized for Korea, an integrated drought index, and spatio-temporal weighting approaches. In order to improve the temporal accuracy of forest fire risk, monthly weights were applied based on the forest fire occurrences by month. Similarly, spatial weights were applied using forest fire density information to improve the spatial accuracy of forest fire risk. In the time series analysis of the number of monthly forest fires and the FRI, the relationship between the two was well simulated. In addition, it was possible to provide more spatially detailed information on forest fire risk when using the FRI on the 5 km grid than with the DWI based on administrative units. The research findings from this study can help make appropriate decisions before and after forest fire occurrences.
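
The abstract describes the FRI as the product of a Korea-optimized FFMC, an integrated drought index, and monthly/spatial weights. Below is a minimal sketch of that combination on a hypothetical 5 km grid; the weight values, grid layers, and normalization are illustrative assumptions, not the paper's actual coefficients.

```python
# Illustrative sketch of combining the components described above into a
# gridded Fire Risk Index (FRI). All numbers and weights are hypothetical;
# the paper's actual FFMC optimization and weighting are not reproduced here.
import numpy as np

# Hypothetical 5 km grid layers (e.g., 100 x 100 cells over the study area)
ffmc          = np.random.uniform(60, 95, (100, 100))    # Korea-optimized FFMC
drought_index = np.random.uniform(0.2, 1.0, (100, 100))  # satellite-based, 0-1
fire_density  = np.random.uniform(0.5, 1.5, (100, 100))  # spatial weight from past fires

# Hypothetical monthly weights derived from historical fire occurrences
# (higher in the spring fire season).
monthly_weight = {1: 0.8, 2: 1.0, 3: 1.4, 4: 1.5, 5: 1.1, 6: 0.6,
                  7: 0.5, 8: 0.5, 9: 0.6, 10: 0.8, 11: 0.9, 12: 0.8}

def fire_risk_index(ffmc, drought, density, month):
    """FRI as a product of fuel moisture, drought, and spatio-temporal weights."""
    ffmc_scaled = ffmc / 100.0                 # normalize FFMC to 0-1
    return ffmc_scaled * drought * density * monthly_weight[month]

fri_april = fire_risk_index(ffmc, drought_index, fire_density, month=4)
print("April FRI grid:", fri_april.shape, "max =", fri_april.max().round(3))
```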

The Status of Nursing Ethics Education in Korea 4-year-College of Nursing (간호윤리 교육현황 - 4년제 대학교육을 중심으로 -)

  • Han Sung-Suk;Kim Yong-Soon;Um Young-Rhan;Ahn Sung-Hee
    • The Journal of Korean Academic Society of Nursing Education
    • /
    • v.5 no.2
    • /
    • pp.376-387
    • /
    • 1999
  • Purpose: To provide fundamental data for setting the further direction of Nursing Ethics education by investigating the status of Nursing Ethics education at 4-year colleges of nursing in Korea. Methods: A descriptive survey study; data were collected from 28 universities through a questionnaire examining the status of Nursing Ethics education in Korea. Results: I. Teaching Nursing Ethics as an independent subject: 6 (21.4%) universities. 1) The total educational hours averaged 23.67 hours (2 credits). 2) Teaching methods: theoretical lectures, case study discussion, discussion of related issues, presentation and discussion of video tapes, team teaching, role play, and submission of reports. 3) Educational contents (6 universities): the nursing profession and ethics, the dignity of human life, the necessity of bioethics, ethical theory and refutation, the code for nurses, and ethical issues between nurses and patients, nurses and co-workers, and nurses and nurses. 4) 5 universities also included ethical decision making, artificial insemination, external insemination, artificial abortion, organ transplantation, brain death, human subjects of study, suicide, and euthanasia. II. Teaching Nursing Ethics as an inclusive theme within other subjects: 22 (78.57%) universities. 1) Taught in Introduction to Nursing (14 universities), Nursing Management, Nursing Ethics and Philosophy, Special Nursing, Nursing and Law, and Professional Nursing. 2) Educational course: taught at the freshman level at 14 universities, with an average of 9.32 education hours. Conclusion: The results showed not only that universities that do not operate Nursing Ethics as an independent class assign too much content in comparison with the education hours and are likely to provide only cram-style education, but also that the professors presently in charge whose major is not Nursing Ethics need opportunities to supplement their knowledge and teaching methods.


Corporate Bond Rating Using Various Multiclass Support Vector Machines (다양한 다분류 SVM을 적용한 기업채권평가)

  • Ahn, Hyun-Chul;Kim, Kyoung-Jae
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.157-178
    • /
    • 2009
  • Corporate credit rating is a very important factor in the market for corporate debt. Information concerning corporate operations is often disseminated to market participants through the changes in credit ratings that are published by professional rating agencies, such as Standard and Poor's (S&P) and Moody's Investor Service. Since these agencies generally require a large fee for the service, and the periodically provided ratings sometimes do not reflect the default risk of the company at the time, it may be advantageous for bond-market participants to be able to classify credit ratings before the agencies actually publish them. As a result, it is very important for companies (especially financial companies) to develop a proper model of credit rating. From a technical perspective, credit rating constitutes a typical multiclass classification problem because rating agencies generally have ten or more categories of ratings. For example, S&P's ratings range from AAA for the highest-quality bonds to D for the lowest-quality bonds. The professional rating agencies emphasize the importance of analysts' subjective judgments in the determination of credit ratings. However, in practice, a mathematical model that uses the financial variables of companies plays an important role in determining credit ratings, since it is convenient to apply and cost efficient. These financial variables include the ratios that represent a company's leverage status, liquidity status, and profitability status. Several statistical and artificial intelligence (AI) techniques have been applied as tools for predicting credit ratings. Among them, artificial neural networks are most prevalent in the area of finance because of their broad applicability to many business problems and their preeminent ability to adapt. However, artificial neural networks also have many defects, including the difficulty in determining the values of the control parameters and the number of processing elements in the layer, as well as the risk of over-fitting. Of late, because of their robustness and high accuracy, support vector machines (SVMs) have become popular as a solution for problems that require accurate prediction. An SVM's solution may be globally optimal because SVMs seek to minimize structural risk. On the other hand, artificial neural network models may tend to find locally optimal solutions because they seek to minimize empirical risk. In addition, no parameters need to be tuned in SVMs, barring the upper bound for non-separable cases in linear SVMs. Since SVMs were originally devised for binary classification, however, they are not intrinsically geared for multiclass classification as in credit rating. Thus, researchers have tried to extend the original SVM to multiclass classification. Hitherto, a variety of techniques to extend standard SVMs to multiclass SVMs (MSVMs) have been proposed in the literature; however, only a few types of MSVMs have been tested in prior studies that apply MSVMs to credit rating. In this study, we examined six different MSVM techniques: (1) One-Against-One, (2) One-Against-All, (3) DAGSVM, (4) ECOC, (5) the method of Weston and Watkins, and (6) the method of Crammer and Singer. In addition, we examined the prediction accuracy of some modified versions of conventional MSVM techniques. To find the most appropriate MSVM technique for corporate bond rating, we applied all the MSVM techniques to a real-world case of credit rating in Korea: corporate bond rating, which is the most frequently studied area of credit rating for specific debt issues or other financial obligations. For our study, the research data were collected from National Information and Credit Evaluation, Inc., a major bond-rating company in Korea. The data set comprises the bond ratings for the year 2002 and various financial variables for 1,295 companies from the manufacturing industry in Korea. We compared the results of these techniques with one another and with those of traditional methods for credit rating, such as multiple discriminant analysis (MDA), multinomial logistic regression (MLOGIT), and artificial neural networks (ANNs). As a result, we found that DAGSVM with an ordered list was the best approach for the prediction of bond ratings. In addition, we found that the modified version of the ECOC approach can yield higher prediction accuracy for cases showing clear patterns.
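
As an illustration of two of the MSVM schemes examined in this study (One-Against-One and One-Against-All), the following is a minimal scikit-learn sketch run on synthetic data standing in for the NICE bond-rating set; the features, labels, and kernel settings are assumptions for demonstration only.

```python
# Illustrative comparison of One-Against-One vs. One-Against-All multiclass SVMs
# (two of the six MSVM schemes mentioned above), using synthetic data instead of
# the bond-rating data set used in the paper.
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: financial ratios (leverage, liquidity, profitability, ...)
# and a 5-class rating label for 1,295 hypothetical companies.
X, y = make_classification(n_samples=1295, n_features=10, n_informative=6,
                           n_classes=5, n_clusters_per_class=1, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

base_svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))

for name, clf in [("One-Against-One", OneVsOneClassifier(base_svm)),
                  ("One-Against-All", OneVsRestClassifier(base_svm))]:
    clf.fit(X_tr, y_tr)
    print(f"{name}: accuracy = {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```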

Studies on the Contents of Naturally Occurring Sulfite in Foods (식품중 천연유래 이산화황 함유량에 관한 연구)

  • Kim, Hee-Yun;Lee, Young-Ja;Hong, Ki-Hyoung;Kwon, Yong-Kwan;Ko, Hyun-Sook;Lee, Young-Kyong;Lee, Chul-Won
    • Korean Journal of Food Science and Technology
    • /
    • v.32 no.3
    • /
    • pp.544-549
    • /
    • 2000
  • This study was performed to compare analytical methods for determining naturally occurring sulfur dioxide in foods and to investigate the sulfur dioxide contents of foods, in order to provide fundamental data for distinguishing between added and naturally occurring sulfur dioxide. The sulfur dioxide contents in 180 samples from 20 kinds of foods were determined by the optimized Monier-Williams method, the modified Rankine method, and acid distillation/ion chromatography. The contents of naturally occurring sulfur dioxide determined by the optimized Monier-Williams method ranged from 1.02 to 43.87 ppm; high contents of 43.87, 15.37, 11.50, 11.21, and 10.60 ppm were observed in garlic, platycodon, green onion, cabbage, and onion, respectively, and the others were less than 10.00 ppm. The sulfur dioxide contents in green onion and garlic determined by the modified Rankine method were 2.87 and 6.14 ppm, respectively, and the others were less than 2.50 ppm. The contents determined by acid distillation/ion chromatography were 15.43, 9.82, 5.74, 5.37, 2.14, and 0.49 ppm in garlic, cabbage, green onion, onion, potato, and apple, respectively, and sulfur dioxide was not detected in the others. Green onion, onion, cabbage, and garlic showed higher levels of sulfur dioxide than the other foods because of their naturally occurring sulfur-containing compounds. The optimized Monier-Williams method, which is the official analytical method of the Korean Food Code, was suitable for monitoring sulfur dioxide in most foods, while acid distillation/ion chromatography was thought to be adequate for sulfur-containing foods such as green onion, onion, and cabbage. In order to distinguish between added and naturally occurring sulfur dioxide, fundamental data on the sulfur dioxide contents of sulfite-free foods are needed, and the investigation should be continued.


The Standard of Judgement on Plagiarism in Research Ethics and the Guideline of Global Journals for KODISA (KODISA 연구윤리의 표절 판단기준과 글로벌 학술지 가이드라인)

  • Hwang, Hee-Joong;Kim, Dong-Ho;Youn, Myoung-Kil;Lee, Jung-Wan;Lee, Jong-Ho
    • Journal of Distribution Science
    • /
    • v.12 no.6
    • /
    • pp.15-20
    • /
    • 2014
  • Purpose - In general, researchers try to abide by the code of research ethics, but many of them are not fully aware of plagiarism and unintentionally commit research misconduct when they write a research paper. This research aims to introduce to researchers a clear and easy guideline, presented at a conference, which helps researchers avoid accidental plagiarism by addressing the issue. This research is expected to contribute to building a climate that encourages creative research among scholars. Research design, data, methodology & Results - Plagiarism is considered a sort of research misconduct, along with fabrication and falsification. It is defined as an improper usage of another author's ideas, language, process, or results without giving appropriate credit. Plagiarism has nothing to do with examining the truth or assessing the value of research data, processes, or results. Plagiarism is determined based on whether a research work corresponds to widely used research ethics, containing proper citations. Within academia, plagiarism goes beyond the legal boundary, encompassing any kind of intentional wrongful appropriation of research created by other researchers. In summary, plagiarism is to steal other people's creative ideas, research models, hypotheses, methods, definitions, variables, images, tables, and graphs, and to use them without reasonable attribution to their true sources. There are various types of plagiarism. Some classify plagiarism into idea plagiarism, text plagiarism, mosaic plagiarism, and idea distortion. Others view that plagiarism includes uncredited usage of another person's work without appropriate citations, self-plagiarism (using a part of a researcher's own previous research without proper citations), duplicate publication (publishing a researcher's own previous work with a different title), and unethical citation (using quoted parts of another person's research without proper citations, as if the parts were being cited by the current author). When an author wants to cite a part that was previously drawn from another source, the author is supposed to reveal that the part is re-cited. If it is hard to state all the sources, the author is allowed to mention the original source only. Today, various disciplines are developing their own measures to address these plagiarism issues, especially duplicate publication, by requiring researchers to clearly reveal the true sources when they refer to any other research. Conclusions - Research misconduct, including plagiarism, has broad and unclear boundaries that allow ambiguous definitions and diverse interpretations. It seems difficult for researchers to have a clear understanding of ways to avoid plagiarism and how to cite others' works properly. However, if guidelines are developed to detect and avoid plagiarism considering the characteristics of each discipline (for example, the social sciences and natural sciences might have different standards on plagiarism) and shared among researchers, they will likely reach a consensus and understanding regarding the issue. In particular, since duplicate publication has appeared more frequently than plagiarism, academic institutions will need to provide pre-warning and screening in evaluation processes in order to reduce researchers' mistakes and to prevent duplicate publications. What is critical for researchers is to clearly reveal the true sources based on the common citation rules and to borrow only the necessary amounts of others' research.

Verifying the Classification Accuracy for Korea's Standardized Classification System of Research F&E by using LDA(Linear Discriminant Analysis) (선형판별분석(LDA)기법을 적용한 국가연구시설장비 표준분류체계의 분류 정확도 검증)

  • Joung, Seokin;Sawng, Yeongwha;Jeong, Euhduck
    • Management & Information Systems Review
    • /
    • v.39 no.1
    • /
    • pp.35-57
    • /
    • 2020
  • Recently, research facilities and equipment (F&E) have become very important as tools and means to lead the development of science and technology. The government has been continuously expanding investment budgets for R&D and research F&E, and the need for efficient operation and systematic management of the research F&E built up nationwide has increased. In December 2010, the government developed and completed a standardized classification system for national research F&E. However, the accuracy and trustworthiness of the classification information are in question because the information is collected by a method in which a user (researcher) directly selects and registers a classification code in NTIS. Therefore, in this study we used linear discriminant analysis (LDA) and analysis of variance (ANOVA) to measure the classification accuracy of the standardized classification system (8 major classes, 54 sub-classes, and 410 small classes) of national research facilities and equipment, which was established in 2010 and revised in 2015. For the analysis, we collected and used the information data (50,271 cases) cumulatively registered in NTIS (National Science and Technology Information Service) over the past 10 years. This is the first case of scientifically verifying the standardized classification system of national research facilities and equipment, which was originally based on information from similar classification systems and a small number of expert reviews at home and abroad. As a result of this study, the discriminant accuracy of the major classes, organized hierarchically by sub-classes and small classes, was 92.2%, which is very high. However, in post hoc verification through analysis of variance, the discrimination power of two of the eight major classes was rather low. It is expected that the standardized classification system of national research facilities and equipment will be improved through this study.
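
A minimal sketch of the kind of LDA-based accuracy check described here, using scikit-learn on synthetic features standing in for the NTIS registration records; the feature construction and per-class structure are illustrative assumptions.

```python
# Illustrative sketch of checking how well major classes can be discriminated
# with Linear Discriminant Analysis (LDA), as done in the study for the NTIS
# records. Synthetic features stand in for the actual equipment information.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for 50,271 registered items in 8 major classes.
X, y = make_classification(n_samples=50271, n_features=20, n_informative=12,
                           n_classes=8, random_state=0)

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5, scoring="accuracy")
print("Mean discriminant (classification) accuracy:", scores.mean().round(3))

# Per-class accuracy, to spot major classes with low discrimination power
lda.fit(X, y)
pred = lda.predict(X)
for c in np.unique(y):
    acc = (pred[y == c] == c).mean()
    print(f"major class {c}: accuracy = {acc:.3f}")
```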

Evaluation of Spatial Dose Rate in Working Environment during Non-Destructive Testing using Radioactive Isotopes (방사성동위원소를 이용한 비파괴 검사 시 작업환경 내 공간선량률 평가)

  • Cho, Yong-In;Kim, Jung-Hoon;Bae, Sang-Il
    • Journal of the Korean Society of Radiology
    • /
    • v.16 no.4
    • /
    • pp.373-379
    • /
    • 2022
  • The radiation sources used for non-destructive testing are penetrating and produce scattered radiation through collisions with surrounding materials, which changes the surrounding spatial dose. Therefore, this study attempted to evaluate and analyze the distribution of the spatial dose by source in the working environment during non-destructive testing using Monte Carlo simulation. In this study, the FLUKA simulation code was used to simulate the 60Co, 192Ir, and 75Se sources used in non-destructive testing, and the reliability of the source term was secured by comparing the calculated dose rates with the data of the Health and Physics Association. After that, a non-destructive test in a radiation safety facility (RT room) was designed to evaluate the spatial dose according to the distance from the source. As a result of the spatial dose evaluation, the 75Se source showed the lowest dose distribution at the frontal position, and the 60Co source showed a dose rate about 15 times higher than that of 75Se and about 2 times higher than that of 192Ir. In addition, the spatial dose tends to decrease with increasing distance from the source according to the inverse square law. Exceptionally, the 60Co, 192Ir, and 75Se sources showed a slight increase within 2 m of the source. Based on the results of this study, it is believed that they will be useful as supplementary data for the safety management of workers in radiation safety facilities during non-destructive testing using radioactive isotopes.
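
As a companion to the inverse-square behaviour noted above, the following is a minimal sketch of the unshielded point-source relation D(r) = Γ·A/r². The gamma-constant values are rough illustrative figures rather than reference data, and scattering (which produced the increase observed within 2 m in the paper) is not modeled.

```python
# Illustrative sketch of the inverse-square distance behaviour noted above:
# unshielded point-source dose rate D(r) = Gamma * A / r^2. The gamma constants
# below are rough illustrative values, not reference data, and scattering is ignored.
GAMMA = {      # approximate specific gamma-ray constants, mSv*m^2/(GBq*h), illustrative
    "Co-60": 0.35,
    "Ir-192": 0.13,
    "Se-75": 0.05,
}

def dose_rate(nuclide: str, activity_gbq: float, distance_m: float) -> float:
    """Unshielded point-source dose rate in mSv/h at a given distance."""
    return GAMMA[nuclide] * activity_gbq / distance_m ** 2

activity = 1000.0  # GBq, hypothetical source strength
for r in (1.0, 2.0, 5.0, 10.0):
    rates = {n: round(dose_rate(n, activity, r), 2) for n in GAMMA}
    print(f"r = {r:4.1f} m :", rates)
```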

Optimum Size Selection and Machinery Costs Analysis for Farm Machinery Systems - Programming for Personal Computer - (농기계(農機械) 투입모형(投入模型) 설정(設定) 및 기계이용(機械利用) 비용(費用) 분석연구(分析硏究) - PC용(用) 프로그램 개발(開發) -)

  • Lee, W.Y.;Kim, S.R.;Jung, D.H.;Chang, D.I.;Lee, D.H.;Kim, Y.H.
    • Journal of Biosystems Engineering
    • /
    • v.16 no.4
    • /
    • pp.384-398
    • /
    • 1991
  • A computer program was developed to select the optimum sizes of farm machines and to analyze their operating costs under various farming conditions. It was written in FORTRAN 77 and BASIC and can be run on any personal computer supporting the Korean Standard Complete Type and Korean Language Code. The program was developed to be user-friendly so that users can easily carry out cost analysis for whole-farm work or for individual operations in rice production, and for plowing, rotary tillage, and pest control in upland farming. The program can simultaneously analyze three different machines for plowing and rotary tillage, and two machines for transplanting, pest control, and harvesting operations. The input data are the sizes of arable lands, the possible working days and number of laborers during the optimum working period, and custom rates, which vary depending on regions and individual farming conditions. The outputs include the selected optimum combination of farm machines, the surplus or shortage of working days relative to the planned working period, the capacities of the machines, break-even points by custom rate, monthly fixed costs, and utilization costs per hectare.
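
As an illustration of the kind of cost analysis the program performs, the following is a minimal Python sketch of the ownership-versus-custom-hire break-even calculation; the cost model and figures are hypothetical assumptions, and the original FORTRAN 77/BASIC implementation is not reproduced.

```python
# Illustrative sketch of the ownership-vs-custom-hire break-even analysis the
# program performs. All cost figures are hypothetical examples.
def annual_fixed_cost(price, salvage_ratio=0.1, life_years=8,
                      interest=0.08, tax_ins_housing=0.02):
    """Depreciation (straight line) + interest on average investment + taxes/insurance/housing."""
    salvage = price * salvage_ratio
    depreciation = (price - salvage) / life_years
    interest_cost = (price + salvage) / 2 * interest
    return depreciation + interest_cost + price * tax_ins_housing

def break_even_area(price, variable_cost_per_ha, custom_rate_per_ha, **kwargs):
    """Area (ha/yr) above which owning the machine is cheaper than hiring custom work."""
    fixed = annual_fixed_cost(price, **kwargs)
    return fixed / (custom_rate_per_ha - variable_cost_per_ha)

# Hypothetical tractor-plowing example (monetary units are arbitrary)
price = 20_000_000           # purchase price
variable_cost = 30_000       # fuel, lubricant, and repair cost per hectare
custom_rate = 80_000         # custom (hired) rate per hectare

area = break_even_area(price, variable_cost, custom_rate)
print(f"Annual fixed cost : {annual_fixed_cost(price):,.0f}")
print(f"Break-even area   : {area:,.1f} ha/yr")
```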
