• Title/Summary/Keyword: Management Evaluation System


Exploring Mask Appeal: Vertical vs. Horizontal Fold Flat Masks Using Eye-Tracking (마스크 매력 탐구: 아이트래킹을 활용한 수직 접이형 대 수평 접이형 마스크 비교 분석)

  • Junsik Lee;Nan-Hee Jeong;Ji-Chan Yun;Do-Hyung Park;Se-Bum Park
    • Journal of Intelligence and Information Systems, v.29 no.4, pp.271-286, 2023
  • The global COVID-19 pandemic has transformed face masks from situational accessories to indispensable items in daily life, prompting a shift in public perception and behavior. While the relaxation of mandatory mask-wearing regulations is underway, a significant number of individuals continue to embrace face masks, turning them into a form of personal expression and identity. This phenomenon has given rise to the Fashion Mask industry, which is characterized by unique designs and colors and is experiencing rapid growth in the market. However, existing research on masks is predominantly focused on their efficacy in preventing infection or on attitudes during the pandemic, leaving a gap in understanding consumer preferences for mask design. We address this gap by investigating consumer perceptions and preferences for two prevalent mask designs: horizontal fold flat masks and vertical fold flat masks. Through a comprehensive approach involving surveys and eye-tracking experiments, we aim to unravel the subtle differences in how consumers perceive these designs. Our research questions focus on determining which design is more appealing and exploring the reasons behind any observed differences. The study's findings reveal a clear preference for vertical fold flat masks, which are not only preferred but also perceived as unique, sophisticated, three-dimensional, and lively. The eye-tracking analysis provides insights into the visual attention patterns associated with mask designs, highlighting the pivotal role of the fold line in influencing these patterns. This research contributes to the evolving understanding of masks as a fashion statement and provides valuable insights for manufacturers and marketers in the Fashion Mask industry. The results have implications beyond the pandemic, emphasizing the importance of design elements in sustaining consumer interest in face masks.

A Study on the Impact of SNS Usage Characteristics, Characteristics of Loan Products, and Personal Characteristics on Credit Loan Repayment (SNS 사용특성, 대출특성, 개인특성이 신용대출 상환에 미치는 영향에 관한 연구)

  • Jeong, Wonhoon;Lee, Jaesoon
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship, v.18 no.5, pp.77-90, 2023
  • This study aims to investigate the potential of alternative credit assessment through Social Networking Sites (SNS) as a complementary tool to conventional loan review processes. It seeks to discern the impact of SNS usage characteristics and loan product attributes on credit loan repayment. To achieve this objective, we conducted a binomial logistic regression analysis examining the influence of SNS usage patterns, loan characteristics, and personal attributes on credit loan conditions, utilizing data from Company A's credit loan program, which integrates SNS data into its actual loan review processes. Our findings reveal several noteworthy insights. First, regarding profile photos, which reflect users' personalities and individual characteristics: individuals who upload photos directly connected to their personal lives demonstrate a higher propensity for diligently repaying credit loans. These include images of themselves, of their private circles (e.g., family and friends), and of social activities such as hobbies, which tend to be favored by individuals with extroverted tendencies, as well as character- and humor-themed photos, which are typically favored by individuals with conscientious traits. Conversely, the use of photos such as landscapes or images concealing one's identity did not exhibit a statistically significant causal relationship with loan repayment. Furthermore, a positive correlation was observed between the extent of SNS usage and the likelihood of loan repayment. However, the level of SNS interaction did not exert a significant effect on the probability of loan repayment. This observation may be attributed to the passive nature of the interaction variable, which primarily involves expressing sympathy for other users' comments rather than generating original content.
The study also unveiled the statistical significance of loan duration and the number of loans, key characteristics of loan portfolios, in influencing credit loan repayment. This underscores the importance of considering loan duration and the quantity of loans as crucial determinants in the design of microcredit products. Among the personal characteristic variables examined, only gender emerged as a significant factor. This implies that the loan program scrutinized in this analysis does not exhibit substantial discrimination based on age and credit scores, as its customer base predominantly consists of individuals in their twenties and thirties with low credit scores, who encounter challenges in securing loans from traditional financial institutions. This research stands out from prior studies by empirically exploring the relationship between SNS usage and credit loan repayment while incorporating variables not typically addressed in existing credit rating research, such as profile pictures. It underscores the significance of harnessing subjective, unstructured information from SNS for loan screening, offering the potential to mitigate the financial disadvantages faced by borrowers with low credit scores or those ensnared in short-term liquidity constraints due to limited credit history, a group often referred to as "thin filers." By utilizing such information, these individuals can potentially reduce their credit costs, whereas under the conventional credit assessment system they would first have to accrue a more substantial financial history through credit transactions.
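The binomial logistic regression at the core of this study can be sketched as follows. This is a minimal illustration on synthetic data: the variable names (profile photo type, SNS usage, loan duration, number of loans, gender) are assumptions drawn from the abstract, not Company A's actual feature set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Synthetic predictors mirroring the study's variable groups (assumed names).
X = np.column_stack([
    rng.integers(0, 2, n),   # personal_photo: 1 if profile photo shows self/family/hobby
    rng.normal(0, 1, n),     # sns_usage: standardized extent of SNS use
    rng.normal(0, 1, n),     # loan_duration: standardized loan duration
    rng.poisson(2, n),       # n_loans: number of loans held
    rng.integers(0, 2, n),   # gender
])

# Synthetic outcome: repayment (1) vs. default (0), loosely tied to the predictors.
logit = 0.8 * X[:, 0] + 0.5 * X[:, 1] - 0.4 * X[:, 2] - 0.3 * X[:, 3]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)
odds_ratios = np.exp(model.coef_[0])  # e^beta: multiplicative effect on the odds of repayment
```

Exponentiated coefficients are the usual reporting unit for such a model: an odds ratio above 1 for the photo-type dummy would correspond to the abstract's finding that personal-life photos are associated with a higher propensity to repay.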


Re-evaluation of Cultural Heritage Preservation Committee Activities in 1961 (1961년 문화재보존위원회 활동 재평가)

  • OH Chunyoung
    • Korean Journal of Heritage: History & Science, v.57 no.2, pp.144-166, 2024
  • The Cultural Heritage Committee is an important organization that has deliberated on major matters related to the preservation of cultural properties in the Republic of Korea for more than 60 years, since 1962. Its predecessor, the Cultural Heritage Preservation Committee, was active in 1961 for only about a year, but the minutes prepared at the time confirm that its work was significant in the following respects. First, legally, it was significant in that the concepts of cultural property and intangible cultural property were used for the first time in Korea, in laws and regulations that also governed the term of office of professional members. These matters became the basis for the operation of the current Cultural Heritage Protection Act and the Cultural Heritage Committee. Second, the minutes confirm that, contrary to what was previously known, the committee remained active despite the political upheaval of the time. In spite of the rapid regime change, the committee's membership did not change, and its meetings continued without interruption. At the time, there was an exclusive relationship between the different groups involved in the preservation of cultural heritage, and the minutes confirm that this relationship disappeared with the establishment of the Cultural Heritage Management Bureau, which integrated these groups. Finally, the form of the minutes reflects the documentation practices of the period, showing the traditional documentation format changing into a new form; they can therefore serve as good research material for modern and contemporary bibliography. As discussed above, the Cultural Heritage Preservation Committee of 1961 has historical significance in terms of both its legal basis and its actual activities. The reason the committee's activities have been undervalued is presumably that the minutes and related documents prepared at the time were not well organized, owing to the lack of a related administrative system.
The minutes of the Cultural Heritage Preservation Committee record various facts about the cultural heritage policies and decisions of that time. Analysis and research on these contents can therefore reveal more about the cultural heritage policies and perceptions of the period.

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems, v.27 no.1, pp.83-102, 2021
  • The government recently announced various policies for developing the big-data and artificial intelligence fields, providing the public a great opportunity through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea, and the company is strongly committed to backing export companies with various systems. Nevertheless, there are still few realized business models based on big-data analyses. In this situation, this paper aims to develop a new business model that can be applied to ex-ante prediction of the likelihood of credit guarantee insurance accidents. We utilize internal data from KSURE, which supports export companies in Korea, and apply machine learning models. We then compare performance among the predictive models, including Logistic Regression, Random Forest, XGBoost, LightGBM, and DNN (Deep Neural Network). For decades, many researchers have tried to find better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. Research on predicting financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), which was based on multiple discriminant analysis and remains widely used in both research and practice. The Z-score model utilizes five key financial ratios to predict the probability of bankruptcy in the next two years. Ohlson (1980) introduced a logit model to complement some limitations of previous models. Furthermore, Elmer and Borowski (1988) developed and examined a rule-based, automated system that conducts financial analysis of savings and loans.
Since the 1980s, researchers in Korea have examined the prediction of financial distress or bankruptcy. Kim (1987) analyzed financial ratios and developed a prediction model. Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques, including artificial neural networks. Yang (1996) introduced multiple discriminant analysis and a logit model, and Kim and Kim (2001) utilized artificial neural network techniques for ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with models such as Random Forest or SVM. One major distinction of our research from previous work is that we focus on examining the predicted probability of default for each sample case, not only the classification accuracy of each model over the entire sample. Most predictive models in this paper achieve a classification accuracy of about 70% on the entire sample: LightGBM shows the highest accuracy, 71.1%, and the Logit model the lowest, 69%. However, these aggregate figures are open to multiple interpretations. In a business context, more emphasis must be placed on minimizing type 2 errors, which cause more harmful operating losses for the guaranty company. Thus, we also compare classification accuracy by splitting the predicted probability of default into ten equal intervals. When we examine the classification accuracy for each interval, the Logit model has the highest accuracy, 100%, for the 0~10% interval of predicted default probability, but a relatively low accuracy of 61.5% for the 90~100% interval.
On the other hand, Random Forest, XGBoost, LightGBM, and DNN show more desirable results, since they achieve higher accuracy for both the 0~10% and 90~100% intervals of predicted default probability but lower accuracy around the 50% interval. As for the distribution of samples across predicted default probabilities, both the LightGBM and XGBoost models place a relatively large number of samples in the 0~10% and 90~100% intervals. Although the Random Forest model has an advantage in classification accuracy for a small number of cases, LightGBM or XGBoost could be a more desirable model since they classify a large number of cases into the two extreme intervals of predicted default probability, even allowing for their relatively low classification accuracy. Considering the importance of type 2 errors and total prediction accuracy, XGBoost and DNN show superior performance, followed by Random Forest and LightGBM, while logistic regression performs worst. However, each predictive model has a comparative advantage under different evaluation standards; for instance, the Random Forest model shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, we can construct more comprehensive ensemble models that contain multiple machine learning classifiers and conduct majority voting to maximize overall performance.
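The interval-wise evaluation described above, accuracy computed separately within each 10%-wide band of predicted default probability, can be sketched as follows. The predictions and outcomes here are synthetic placeholders, not KSURE's data or the paper's actual models.

```python
import numpy as np

def accuracy_by_decile(y_true, p_pred):
    """Classification accuracy and sample count within each 10%-wide band
    of predicted default probability (band 9 includes p == 1.0)."""
    band = np.minimum((p_pred * 10).astype(int), 9)  # decile index 0..9
    out = {}
    for d in range(10):
        mask = band == d
        if not mask.any():
            out[d] = (None, 0)
            continue
        y_hat = (p_pred[mask] >= 0.5).astype(int)    # default threshold of 0.5
        out[d] = (float((y_hat == y_true[mask]).mean()), int(mask.sum()))
    return out

rng = np.random.default_rng(1)
p = rng.random(1000)                    # synthetic predicted default probabilities
y = (rng.random(1000) < p).astype(int)  # synthetic outcomes consistent with p
bands = accuracy_by_decile(y, p)
```

Well-calibrated models concentrate accuracy, and ideally also sample mass, in the two extreme bands, which is the pattern the abstract credits to LightGBM and XGBoost.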

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference, 1995.02a, pp.101-113, 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS is not able to project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, including two stateline areas, one county, and five cities, are analyzed and the zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are applied to the Gravity Model (GM) for comparison with comparable TLFs from the GM. The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed.
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. Three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration data base. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. However, for this research, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM.
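The gravity model at the heart of this calibration distributes each zone's productions across attraction zones in proportion to attractiveness weighted by a friction factor. A minimal sketch follows; the zone data, exponential friction form, and decay parameter are illustrative assumptions, not the Wisconsin network or the paper's calibrated curves.

```python
import numpy as np

def gravity_model(productions, attractions, friction):
    """Trip distribution: T_ij = P_i * A_j * F_ij / sum_k(A_k * F_ik),
    so each origin row of T sums exactly to its production P_i."""
    w = attractions[None, :] * friction               # A_j * F_ij per origin-destination pair
    return productions[:, None] * w / w.sum(axis=1, keepdims=True)

P = np.array([100.0, 200.0, 150.0])                   # zonal truck trip productions
A = np.array([120.0, 180.0, 150.0])                   # zonal attractions
cost = np.array([[1, 3, 5], [3, 1, 4], [5, 4, 1]], dtype=float)  # zone-to-zone travel cost
F = np.exp(-0.5 * cost)                               # assumed exponential friction factor curve

T = gravity_model(P, A, F)                            # 3x3 trip table
```

Calibration, as described above, amounts to adjusting the friction factor curve (here the decay parameter) per trip type until the model's trip length frequency distribution matches the observed OD TLF.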
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted by using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. But more importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained.
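The core SELINK step, computing a link adjustment factor as ground count over total assigned volume and propagating it to every zone whose trips use the selected link, can be sketched as follows. The zone identifiers and volumes are hypothetical, and the multiplicative propagation is an assumed simple form of the adjustment.

```python
def selink_adjustment(ground_count, assigned_volume, zone_factors, link_zones):
    """Compute one selected link's adjustment factor (ground count / assigned
    volume) and apply it multiplicatively to every O-D zone using that link."""
    factor = ground_count / assigned_volume
    for z in link_zones:
        zone_factors[z] = zone_factors.get(z, 1.0) * factor
    return factor, zone_factors

# A selected link carries 1,200 assigned trips against a ground count of 1,500,
# so every zone sending trips over it is scaled up by 1.25:
f, zf = selink_adjustment(1500.0, 1200.0, {}, ["zone_12", "zone_34"])
```

Iterating this over all selected links, then re-running the assignment, is what the text refers to as a SELINK adjustment cycle; stability means repeated cycles stop changing the factors materially.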
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible by using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the least value while that for the STH shows the largest value. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As for the screenline and volume range analyses, the %RMSE is inversely related to average link volume.
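The %RMSE statistic used throughout this evaluation can be sketched as below. The exact normalization is not stated in the abstract; this sketch assumes RMSE expressed as a percentage of the mean ground count, which is consistent with the inverse relation to average volume noted above. The check-point volumes are illustrative, not the Wisconsin data.

```python
import numpy as np

def pct_rmse(assigned, ground):
    """Percent root-mean-square error of assigned link volumes vs. ground
    counts, normalized by the mean ground count (assumed convention)."""
    assigned = np.asarray(assigned, dtype=float)
    ground = np.asarray(ground, dtype=float)
    rmse = np.sqrt(np.mean((assigned - ground) ** 2))
    return 100.0 * rmse / ground.mean()

# Illustrative check-point volumes at four screenline crossings:
assigned = [900, 1500, 2100, 3100]
ground = [1000, 1400, 2000, 3200]
err = pct_rmse(assigned, ground)
```

Because the numerator is an absolute error while the denominator grows with volume, low-volume screenlines naturally show larger %RMSE, the pattern reported for the North area.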
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production rate (total adjusted productions/total population) and a new trip attraction rate. Revised zonal production and attraction adjustment factors can then be developed that only reflect the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond just population are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not available currently, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8, while the productions for a relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable.
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic is I-I trips, while 80% of total collector truck traffic is I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted by using the GM truck forecasting model with four scenarios. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes.
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.


An Intervention Study on Integration of Family Planning and Maternal/Infant Care Services in Rural Korea (가족계획과 모자보건 통합을 위한 조산원의 투입효과 분석 -서산지역의 개입연구 평가보고-)

  • Bang, Sook;Han, Seung-Hyun;Lee, Chung-Ja;Ahn, Moon-Young;Lee, In-Sook;Kim, Eun-Shil;Kim, Chong-Ho
    • Journal of Preventive Medicine and Public Health, v.20 no.1 s.21, pp.165-203, 1987
  • This project was a service-cum-research effort with a quasi-experimental study design to examine the health benefits of an integrated Family Planning (FP)/Maternal & Child Health (MCH) service approach that provides crucial factors missing in the present on-going programs. The specific objectives were: 1) To test the effectiveness of trained nurse/midwives (MW) assigned as change agents in the Health Sub-Center (HSC) to bring about changes in eight FP/MCH indicators, namely: (i) FP/MCH contacts between field workers and their clients, (ii) the use of effective FP methods, (iii) the inter-birth interval and/or open interval, (iv) prenatal care by medically qualified personnel, (v) medically supervised deliveries, (vi) the rate of induced abortion, (vii) maternal and infant morbidity, and (viii) perinatal & infant mortality. 2) To measure the integrative linkage (contacts) between MW & HSC workers and between HSC and clients. 3) To examine the organizational or administrative factors influencing integrative linkage between health workers. Study design: The above objectives called for a quasi-experimental design setting up a study and control area with and without a midwife. An active intervention program (FP/MCH minimum 'package' program) was conducted for a 2-year period from June 1982 to July 1984 in Seosan County, and 'before and after' surveys were conducted to measure the change. Service input: This study was undertaken by Soonchunhyang University in collaboration with WHO. After a baseline survey in 1981, trained nurse/midwives were introduced into two health sub-centers in a rural setting (Seosan County) for a 2-year period from 1982 to 1984. A major service input was the establishment of midwifery services in the existing health delivery system, with emphasis on the nurse/midwife's role as the link between health workers (nurse aides) and village health workers, and the referral of risk patients to the private physician (OBGY specialist).
An evaluation survey was made in August 1984 to assess the effectiveness of this alternative integrated approach in the study areas in comparison with the control area, which had normal government services. Method of evaluation: a. In this study, the primary objective was first to examine to what extent the FP/MCH package program brought about changes in the predetermined eight indicators (outcome and impact measures), and this relationship was analyzed first. b. Nevertheless, this project did not automatically accept the assumption that if two or more activities were integrated, the results would automatically be better than a non-integrated or categorical program. There is a need to assess the 'integration process' itself within the package program. The process of integration was measured in terms of interactive linkages, or the quantity & quality of contacts between workers & clients and among workers. Integrative linkages were hypothesized to be influenced by organizational factors at the HSC clinic level, including HSC goals, structure, authority, leadership style, resources, and personal characteristics of HSC staff. The extent or degree of integration, as measured by the intensity of integrative linkages, was in turn presumed to influence programme performance. Thus, organizational factors constituted the independent variables, integration the intervening variable, and programme performance with respect to family planning and health services the dependent variable. Concerning organizational factors, however, due to the limited number of HSCs (2 in the study area and 3 in the control area), they were studied by participatory observation by an anthropologist who was independent of the project. In this observation, we examined whether the assumed integration process actually occurred or not, and if not, what the constraints were in producing an effective integration process.
Summary of findings: A) Program effects and impact. 1. Effects on FP use: During this 2-year action period, FP acceptance increased from 58% in 1981 to 78% in 1984 in both the study and control areas. This increase in both areas was mainly due to the new family planning campaign driven by the Government over the same period. Therefore, there was no increment in the FP acceptance rate attributable to the additional input of MW to the on-going FP program. But in the study area, quality aspects of FP were somewhat improved, with a better continuation rate of IUDs & pills and more use of effective contraceptive methods in comparison with the control area. 2. Effects on use of MCH services: Between the study and control areas, however, there was a significant difference in maternal and child health care. For example, the coverage of prenatal care increased from 53% for the 1981 birth cohort to 75% for the 1984 birth cohort in the study area, while in the control area it increased from 41% (1981) to 65% (1984). It is noteworthy that almost two-thirds of the recent birth cohort received prenatal care even in the control area, indicating a growing demand for MCH care as the family size norm becomes smaller. 3. There was a substantive increase in delivery care by medical professionals in the study area, with an annual increase rate of 10% due to the midwives' input. The project had about a two times greater effect on postnatal care (68% vs. 33%) and delivery care (45.2% vs. 26.1%). 4. The study area had better reproductive efficiency (wanted pregnancies with FP practice & healthy live births surviving to one year old) than the control area, especially among women under 30 (14.1% vs. 9.6%). The proportion of women who preferred the 1st trimester for their first prenatal care rose significantly in the study area as compared to the control area (24% vs. 13%). B) Effects on interactive linkage: 1.
This project took several useful steps in the direction of service integration, namely: i) the health workers became familiar with procedures for working together with each other (especially with a midwife) in carrying out their FP/MCH work, and ii) the health workers came to appreciate the usefulness of family health records (statistical integration) in identifying targets in their own work and in caring for family health. 2. On the other hand, because of a lack of the required organizational factors, complete linkage was not obtained as the project intended. i) With regard to the government health workers' home-visiting activities, there was not much difference between the study and control areas, though the MW did more home visiting than the government health workers. ii) In assessing the service performance of the MW and health workers, the midwives balanced their workload between 40% FP, 40% MCH, and 20% other activities (mainly immunization). However, 85~90% of the services provided by the health workers were other than FP/MCH, mainly immunizations such as the encephalitis campaign. In the control area, a similar pattern was observed: over 75% of their service was other than FP/MCH. The pattern therefore shows that the health workers are a long way from becoming multipurpose workers, even though the government is pushing in this direction. 3. Villagers were much more likely to visit the health sub-center clinic in the study area than in the control area (58% vs. 31%) and to receive more combined care (45% vs. 23%). C) Organizational factors (administrative integration issues) 1. When MW (new workers with higher qualifications) were introduced to the HSC, there were conflicts between the existing HSC workers (nurse aides, with lower qualifications than the MW) and the MW during the beginning period of the project.
The cause of the conflict was studied by an anthropologist, who pointed out that these functional integration problems stemmed from structural inadequacies of the health sub-center organization, as indicated below: i) There is still no general consensus about the objectives and goals of the project between the project staff and the existing health workers. ii) There is no formal linkage between the responsibilities of each member's job in the health sub-center. iii) There is still little chance for midwives to play a catalytic role or to establish communicative networks between workers in order to link various knowledge and skills to provide better FP/MCH services in the health sub-center. 2. Based on the above findings, the project recommended the following to the County Chief (who has the power to control the administrative and technical staff in his county): i) In order to resolve the conflicts between individual roles and functions in performing health care activities, there must be goals agreed upon by both sides. ii) The health sub-center must function as an autonomous organization to undertake the integrated health project. To do so, it is necessary to provide administrative support and to establish a communication system for supervision and control of the health sub-centers. iii) The administrative organization, tentatively, must be organized to bind the health workers', midwives', and director's jobs in an organic relationship in order to achieve the integrative system under the leadership of the health sub-center director. After this observation report was submitted, there was better understanding, through frequent meetings and communication between HW and MW in FP/MCH work, as the program developed. Lessons learned from the Seosan Project (on issues of FP/MCH integration in Korea): 1) A majority, about 80%, of couples are now practicing FP.
As indicated by the study, there is a growing demand from clients for the health system to provide more MCH services than FP in order to maintain the small family size achieved through FP practice. It is fortunate that the government is now formulating an MCH policy for the year 2000 and revising MCH laws and regulations to emphasize MCH care for achieving a small family size through family planning practice. 2) Goal consensus on FP/MCH should be reached among health workers and administrators, especially to emphasize the need to care for the 'wanted' child. But there is a long way to go to realize 'real' integration of FP into MCH in Korea unless there is structural integration of FP/MCH, because categorical FP still takes first priority, to reduce the rate of population growth for economic reasons, and not yet for health/welfare reasons in practice. 3) There should be more financial allocation: (i) a midwife should be made available to help promote the MCH program and coordinate services, and (ii) there should be a health sub-center director who can provide leadership training for managing the integrated program. There is a need for 'organizational support' if the decision to integrate is made in order to obtain benefit from both FP and MCH. In other words, costs should be paid equally to both FP and MCH; without the commitment to pay such costs, the integration slogan itself is powerless. 4) The need for management training of middle-level health personnel is all the more acute because the Government has already constructed 90 MCH centers attached to County Health Centers, but without adequate manpower, facilities, and guidelines for integrating the work of both FP and MCH. 5) The local government still considers these MCH centers only as delivery centers to take care of visiting maternity cases.
The MCH center should instead be a center for the management of all pregnancies occurring in the community and for the promotion of FP, with a systematic and effective linkage of the resources available in the county, such as Village Health Workers, Community Health Practitioners, Health Sub-center physicians and health workers, doctors and midwives in the MCH center, and OB/GYN specialists in clinics and hospitals, as practiced by the Seosan project at the primary health care level.

  • PDF

Requirement and Perception of Parents on the Subject of Home Economics in Middle School (중학교 가정교과에 대한 학부모의 인식 및 요구도)

  • Shin Hyo-Shick;Park Mi-Soog
    • Journal of Korean Home Economics Education Association
    • /
    • v.18 no.3 s.41
    • /
    • pp.1-22
    • /
    • 2006
  • The purpose of this study is to identify desirable directions for home economics education by examining the requirements and perceptions of parents whose children have completed the middle school home economics course. Questionnaires were distributed to about 600 parents across Seoul, Busan, Gangwon, Gyeonggi, Chungcheong, Gyeongsang, Jeolla, and Jeju provinces; of these, 560 judged suitable for analysis were used. The questionnaire comprised 61 items on requirements for home economics content, items on the perception of the aims of home economics, and 11 items on the perception of home economics courses and their management. Responses were analyzed by frequency, percentage, mean, standard deviation, and t-test using the SAS program. The findings are summarized as follows. 1. Agreement was high that boys and girls should learn together the ideas of healthy living, desirable character formation, and related knowledge. 2. Among the teaching purposes of home economics, the item on scientific principles and knowledge for the improvement of home life was rated below average (15.7%). 3. Home economics was perceived as highly related to real life; as for the curriculum, class periods and content in the home economics field were seen as lacking; and for the content taught, ratings of accomplishment and applicability were high regardless of sex. 4. The area that should be emphasized most in the subject of home economics was the family unit. 5. Among first-year content requirements, the middle unit on growth and development scored highest (4.11), while basic principles and practice scored lowest (3.70). 6. For second-year content, parents rated the middle unit on youth and consumer life highest (4.09). 7. For third-year content, the middle unit on career choice and work ethics scored highest (4.16), while meal preparation and evaluation scored lowest (3.50).
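The analysis the abstract describes (frequency, mean, standard deviation, and t-test, run there in SAS) can be sketched in Python; the data, group split, and variable names below are hypothetical, purely to illustrate the procedure.

```python
# Hypothetical sketch of the survey analysis: descriptive statistics and an
# independent-samples t-test on simulated 5-point Likert responses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated requirement scores for one content area, split by child's sex
scores_boys = rng.integers(1, 6, size=30).astype(float)
scores_girls = rng.integers(1, 6, size=30).astype(float)

# Descriptive statistics of the kind reported in the study
print(f"boys:  mean={scores_boys.mean():.2f}, sd={scores_boys.std(ddof=1):.2f}")
print(f"girls: mean={scores_girls.mean():.2f}, sd={scores_girls.std(ddof=1):.2f}")

# Independent-samples t-test, as the study applied (there via SAS)
t_stat, p_value = stats.ttest_ind(scores_boys, scores_girls)
print(f"t={t_stat:.3f}, p={p_value:.3f}")
```

In practice the study ran such comparisons across 61 items; the same call pattern repeats per item.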

  • PDF

A Two-Stage Learning Method of CNN and K-means RGB Cluster for Sentiment Classification of Images (이미지 감성분류를 위한 CNN과 K-means RGB Cluster 이-단계 학습 방안)

  • Kim, Jeongtae;Park, Eunbi;Han, Kiwoong;Lee, Junghyun;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.139-156
    • /
    • 2021
  • The biggest reason for using a deep learning model in image classification is that it can consider the relationships between regions by extracting each region's features from the overall information of the image. However, the CNN model may not be well suited to emotional image data that lacks distinctive regional features. To address the difficulty of classifying emotion images, researchers propose CNN-based architectures tailored to emotion images each year. Studies on the relationship between color and human emotion have also been conducted, finding that different emotions are induced by different colors. Among studies using deep learning, some have applied color information to image sentiment classification: using the image's color information in addition to the image itself, rather than training the classification model on the image alone, improves the accuracy of classifying image emotions. This study proposes two ways to increase accuracy by adjusting the result value after the model classifies an image's emotion. Both methods improve accuracy by modifying the result value based on statistics derived from the picture's colors. The most prevalent two-color combinations are found for all training data; at test time, the most prevalent two-color combination is found for each test image, and the result values are corrected according to the color-combination distribution. This correction weights the result value obtained after the model classifies an image's emotion, using expressions based on the logarithmic and exponential functions. Emotion6, classified into six emotions, and ArtPhoto, classified into eight categories, were used as image data. Densenet169, Mnasnet, Resnet101, Resnet152, and Vgg19 architectures were used for the CNN model, and performance was compared before and after applying the two-stage learning to each CNN model.
Inspired by color psychology, which deals with the relationship between colors and emotions, we studied how to improve accuracy by modifying the result values based on color when creating a model that classifies an image's sentiment. Sixteen colors were used, each carrying its own meaning: red, orange, yellow, green, blue, indigo, purple, turquoise, pink, magenta, brown, gray, silver, gold, white, and black. Using scikit-learn's clustering, the seven colors most prevalent in the image are identified. The RGB coordinates of these colors are then compared with the RGB coordinates of the 16 colors above; that is, each is converted to the closest palette color. If combinations of three or more colors were selected, too many combinations would occur and the distribution would become scattered, so that each combination would have too little influence on the result value. To avoid this, two-color combinations were found and weighted into the model. Before training, the most prevalent color combination was found for every training image, and the distribution of color combinations for each class was stored in a Python dictionary for use during testing. During the test, the most prevalent two-color combination is found for each test image; we then checked how that combination was distributed over the training data and corrected the result accordingly. We devised several equations to weight the model's result value based on the extracted colors, as described above. The data set was randomly split 80:20, and the model was verified using the 20% as a test set. The remaining 80% was split into five folds for 5-fold cross-validation, and the model was trained five times with different validation sets. Finally, performance was checked on the previously separated test set.
Adam was used as the optimizer, and the learning rate was set to 0.01. Training ran for up to 20 epochs, and if the validation loss did not decrease for five epochs, the experiment was stopped; early stopping was set to restore the model with the best validation loss. Classification accuracy was better when the information extracted from color properties was used together with the CNN than when only the CNN architecture was used.
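The two-stage idea described above, clustering an image's pixels, snapping the dominant clusters to a fixed palette, and reweighting the CNN's class probabilities by how often that color pair occurred per class in training, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the palette subset, the function names, and the log-based weighting formula are assumptions.

```python
# Illustrative sketch of the two-stage color correction (assumed details noted).
import numpy as np
from sklearn.cluster import KMeans

# A subset of the paper's 16 reference colors (RGB); the full palette lists all 16.
PALETTE = {
    "red": (255, 0, 0), "green": (0, 128, 0), "blue": (0, 0, 255),
    "yellow": (255, 255, 0), "black": (0, 0, 0), "white": (255, 255, 255),
}

def dominant_color_pair(image_rgb, n_clusters=7):
    """Cluster pixels (the paper checks seven colors), then snap the two
    largest clusters to the nearest palette color by RGB distance."""
    pixels = image_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    sizes = np.bincount(km.labels_, minlength=n_clusters)
    top2 = km.cluster_centers_[np.argsort(sizes)[-2:]]
    names = [min(PALETTE, key=lambda n: np.linalg.norm(c - np.array(PALETTE[n])))
             for c in top2]
    return tuple(sorted(names))

def reweight(probs, pair, class_color_freq):
    """Scale class probabilities by how often this color pair occurred for each
    class in training (a log-based weighting; the paper's exact formula may differ)."""
    weights = np.array([np.log1p(class_color_freq[c].get(pair, 0))
                        for c in range(len(probs))])
    adjusted = probs * (1.0 + weights)
    return adjusted / adjusted.sum()

# Toy usage: a half-red, half-blue image and a two-class model output.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:2, :, 0] = 255   # top half red
img[2:, :, 2] = 255   # bottom half blue
pair = dominant_color_pair(img, n_clusters=2)
freq = {0: {pair: 10}, 1: {pair: 1}}  # hypothetical training counts per class
print(pair, reweight(np.array([0.5, 0.5]), pair, freq))
```

The correction shifts probability mass toward the class in whose training images this color pair was more common, which is the core of the proposed second stage.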

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.63-86
    • /
    • 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensor networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet the danger of personal information overexposure is increasing, because the data retrieved by the sensors usually contains private information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns have also grown, such as over the unrestricted availability of context information. Those privacy concerns are consistently regarded as a critical issue for the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy need to be revised according to the characteristics of new technologies. However, previous studies of information privacy factors for context-aware applications have at least two shortcomings. First, there has been little overview of the technical characteristics of context-aware computing: existing studies have focused on only a small subset of them, so there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications.
Second, user surveys have been widely used to identify information privacy factors in most studies, despite the limits of users' knowledge of and experience with context-aware computing technology. To date, since context-aware services have not yet been widely deployed on a commercial scale, very few people have prior experience with context-aware personalized services. It is difficult to build users' knowledge about context-aware technology even by aiding their understanding in various ways: scenarios, pictures, flash animations, etc. A survey that assumes the participants have sufficient experience with or understanding of the technologies shown may therefore not be valid. Moreover, some surveys rest on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concern in context-aware personalized services is therefore needed. Hence, the purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-ordered list of those factors. We consider the overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce a rank-ordered list; it therefore lends itself well to obtaining a set of universal information privacy concern factors and their priority. An international panel of researchers and practitioners with expertise in privacy and context-aware systems was involved in our research. The Delphi rounds faithfully followed the procedure proposed by Okoli and Pawlowski.
This involved three general rounds: (1) brainstorming for important factors; (2) narrowing the original list down to the most important ones; and (3) ranking the list of important factors. For the first (brainstorming) round only, experts were treated as individuals, not panels. Adapted from Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of mutually exclusive factors for information privacy concern in context-aware personalized services. In the first round, respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to aid them, some of the main factors found in the literature were presented. The second round took the main factors provided in the first round and fleshed them out with relevant sub-factors drawn from the literature survey; respondents were requested to evaluate each sub-factor's suitability against the corresponding main factor to determine the final sub-factors from the candidates. The final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. In analyzing the data, we focused on group consensus rather than individual insistence. To do so, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, the experts reported that context data collection and a highly identifiable level of identical data are the most important main factor and sub-factor, respectively.
Additional important sub-factors included the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from those of existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information; our study helped to clarify these sometimes vague issues by determining which privacy concern issues are viable given the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected the specific characteristics with higher potential to increase users' privacy concerns. Second, this study considered privacy issues in terms of service delivery and display, which were largely overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with professionals' opinions on the extent to which users have privacy concerns. The traditional questionnaire method was not chosen because users of context-aware personalized services largely lack understanding of and experience with the new technology. Regarding users' privacy concerns, the professionals in the Delphi questionnaire process selected context data collection, tracking and recording, and the sensor network as the most important technological characteristics of context-aware personalized services.
In the creation of context-aware personalized services, this study demonstrates the importance and relevance of determining an optimal methodology, and which technologies are needed and in what sequence, to acquire particular types of users' context information. Most studies, presupposing the development of context-aware technology, focus on which services and systems should be provided and developed by utilizing context information. However, the results of this study show that, in terms of users' privacy, greater attention must be paid to the activities that acquire context information. To follow up on the sub-factor evaluation results, additional studies would be necessary on approaches to reducing users' privacy concerns about technological characteristics such as the highly identifiable level of identical data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which relates to output. The results show that delivery and display, which present services to users in context-aware personalized services under the anywhere-anytime-any-device concept, are regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase the service success rate and, hopefully, user acceptance. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
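The concordance analysis mentioned above, which checks whether the experts' rankings agree across Delphi rounds, is commonly operationalized as Kendall's coefficient of concordance W; whether the authors used exactly this statistic is an assumption, and the expert rankings below are hypothetical. A minimal sketch for untied ranks:

```python
# Kendall's W: agreement among m experts ranking n items, from 0 (none) to 1 (full).
import numpy as np

def kendalls_w(rankings):
    """rankings: (m experts) x (n items) array of ranks 1..n, no ties."""
    rankings = np.asarray(rankings, dtype=float)
    m, n = rankings.shape
    rank_sums = rankings.sum(axis=0)                 # R_i for each item
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()  # deviation of rank sums
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Three hypothetical experts ranking five privacy-concern factors
panel = [
    [1, 2, 3, 4, 5],
    [1, 3, 2, 4, 5],
    [2, 1, 3, 5, 4],
]
print(round(kendalls_w(panel), 3))
```

In a Delphi study, W is typically recomputed after each ranking round; rising W across rounds is the evidence of growing group consensus that the abstract's concordance analysis refers to.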