• Title/Summary/Keyword: Amount of Service Information (서비스 정보의 양)

Search Results: 926

The Influence of Ring-Back-Tone(RBT) on Evaluation of the Phone-call Receiver's Personality II -a comparison study between unknown and known people as the receivers- (통화 연결 음악이 통화 상대자의 개성 판단에 끼치는 영향 II -통화상대자를 알고 있는 경우와 모르는 경우에 대한 비교를 중심으로-)

  • Jeong, Sang-Hoon; Suk, Hyeon-Jeong
    • Science of Emotion and Sensibility / v.11 no.3 / pp.313-324 / 2008
  • The purpose of this study is to investigate the influence of Ring-Back-Tone (RBT) music on the evaluation of a phone-call receiver's personality along the dimensions of Openness, Extroversion, and Neuroticism. In a preliminary test, subjects listened to 17 RBT music stimuli in random order and assessed the personality associated with each piece of music as well as their liking for it (N=15). Among the 17 RBTs, three were selected for use in Experiments I and II; they were clearly distinguished from one another on the three personality dimensions (p<0.001). In Experiment I, subjects were divided into four groups and asked to make a call to interview an unknown receiver (N=60), with different RBT music installed depending on the group to which each subject belonged. Different RBT music was found to influence the caller's evaluation of the receiver's personality, supporting Hypothesis 1 (p<0.001). Moreover, the ratings of the receivers were highly correlated with those of the RBT music stimuli in terms of Openness (r=0.722, p<0.001) and Extroversion (r=0.753, p<0.001). In Experiment II, an identical experimental design was applied to a new group of subjects who were acquainted with the receiver (N=40), under the hypothesis that prior knowledge about a person would weaken the RBT effect. The results showed that the RBT exerted no effect on the evaluation of the receiver's personality when the caller knew the receiver. It was also found that 12 personality traits, four describing each of the three personality dimensions, facilitated assessment of the character of the RBT music as well as of the personality of the receiver.
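A minimal sketch of the correlation analysis reported above, with hypothetical rating vectors standing in for the study's stimulus and receiver ratings:

```python
# Pearson correlation between personality ratings of the RBT music stimuli
# and ratings of the phone-call receivers, for one dimension (Openness).
# The rating values below are hypothetical placeholders, not the study's data.
from scipy.stats import pearsonr

rbt_openness      = [3.1, 4.5, 2.2, 3.8, 4.1]  # hypothetical stimulus ratings
receiver_openness = [3.4, 4.2, 2.5, 3.6, 4.4]  # hypothetical receiver ratings

r, p = pearsonr(rbt_openness, receiver_openness)
print(f"Openness: r = {r:.3f}, p = {p:.4f}")   # abstract reports r=0.722, p<0.001
```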


Cognition and Satisfaction of Customer in Home-delivered Meal (가정배달급식에 대한 고객의 인식 및 만족도 조사)

  • Kim, Hye-Young; Ryu, Si-Hyun
    • Korean Journal of Food and Cookery Science / v.19 no.4 / pp.529-538 / 2003
  • The objectives of this study were to measure customers' cognition of and overall satisfaction with home-delivered meals, and to identify the attributes most important to overall satisfaction. Questionnaires were distributed to 243 customers. Statistical analyses were performed with χ²-tests, ANOVA, factor analysis, reliability analysis, and regression analysis using SPSS version 10. 56.6% of customers obtained information from the internet, with 31.3% of these using this method at least once a week, but 72.9% of customers used it less than once per year. The major reasons for ordering home-delivered meals were being tired of cooking, economy, and lack of time to cook; these results differed significantly by age, occupation, and monthly income. The major reasons for hesitating to order home-delivered meals were the belief that meals should be prepared in the household, concerns about sanitation, and the use of too many artificial flavors; these results differed significantly by gender, age, and monthly income (p<0.01). The most preferred kinds of home-delivered meals were Korean soup (guk), stew, soup (tang), speciality dishes, and party dishes. Among the cognition items, kindness of the delivery staff was rated highest and food temperature lowest. Food and service level factors were derived from a factor analysis of customers' cognition of home-delivered meals. Customers' cognition of food taste, food quantity, kindness of delivery staff, and packaging container shape differed significantly according to frequency and period of use. Packaging method, sanitation, kindness of delivery staff, price, and taste were the attributes most important to overall satisfaction with home-delivered meals.
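The analysis pipeline described above (χ²-tests, ANOVA, factor analysis, and regression in SPSS 10) could be approximated in Python along the following lines; all column names and values are synthetic placeholders, not the study's data:

```python
# Hypothetical re-creation of the SPSS analysis pipeline on synthetic survey data.
import numpy as np
import pandas as pd
from scipy import stats
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 243  # number of questionnaires in the study
df = pd.DataFrame({
    "age_group": rng.choice(["20s", "30s", "40s+"], n),
    "order_reason": rng.choice(["tired_of_cooking", "economy", "no_time"], n),
    "income_bracket": rng.choice(["low", "mid", "high"], n),
    "satisfaction": rng.integers(1, 6, n).astype(float),
})
for i in range(6):  # six hypothetical cognition items on a 5-point scale
    df[f"cognition_{i}"] = rng.integers(1, 6, n).astype(float)

# Chi-square test: ordering reason vs. age group.
chi2, p, dof, _ = stats.chi2_contingency(pd.crosstab(df["order_reason"], df["age_group"]))

# One-way ANOVA: overall satisfaction across monthly-income brackets.
f_stat, p_anova = stats.f_oneway(*(g["satisfaction"] for _, g in df.groupby("income_bracket")))

# Factor analysis on cognition items (two factors, e.g. food vs. service level),
# then regression of overall satisfaction on the factor scores.
factors = FactorAnalysis(n_components=2, random_state=0).fit_transform(df.filter(like="cognition_"))
reg = LinearRegression().fit(factors, df["satisfaction"])
print(f"chi2={chi2:.2f} (p={p:.3f}), F={f_stat:.2f} (p={p_anova:.3f}), betas={reg.coef_}")
```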

Creation of Actual CCTV Surveillance Map Using Point Cloud Acquired by Mobile Mapping System (MMS 점군 데이터를 이용한 CCTV의 실질적 감시영역 추출)

  • Choi, Wonjun; Park, Soyeon; Choi, Yoonjo; Hong, Seunghwan; Kim, Namhoon; Sohn, Hong-Gyoo
    • Korean Journal of Remote Sensing / v.37 no.5_3 / pp.1361-1371 / 2021
  • Among smart city services, the crime and disaster prevention sector accounted for the largest share, 24%, in 2018. The most important platform for providing real-time situational information is CCTV (Closed-Circuit Television), so constructing the actual CCTV surveillance coverage is essential to maximizing the usability of CCTV. However, more than one million CCTV units are installed in Korea, including those operated by local governments, and manual identification of CCTV coverage is a time-consuming and inefficient process. This study proposed a method to efficiently construct CCTV's actual surveillance coverage and reduce the time required for decision-makers to manage a situation. For this purpose, first, the exterior orientation parameters and focal lengths of pre-installed CCTV cameras, which are difficult to access, were calculated using point cloud data from an MMS (Mobile Mapping System), and the FOV (Field of View) was calculated accordingly. Second, using the FOV from the first step, the actual surveillance coverage was constructed on grids of 1 m, 2 m, 3 m, 5 m, and 10 m intervals, considering the regions occluded by buildings. When our approach was applied to 5 CCTV images located in Uljin-gun, Gyeongsangbuk-do, the average re-projection error was about 9.31 pixels, and the difference between the calculated CCTV positions and the locations obtained from the MMS was about 1.688 m on average. With a 3 m grid, the surveillance coverage calculated by our method matched the actual coverage obtained from visual inspection with a minimum of 70.21% and a maximum of 93.82%.
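As an illustrative aid to the FOV and grid steps, a pinhole-model FOV calculation and a coarse wedge visibility test for a grid-cell center might look like the sketch below; every numeric value is an assumption, and the building-occlusion handling that the paper derives from the MMS point cloud is omitted:

```python
# Illustrative sketch: horizontal FOV from focal length and sensor width
# (pinhole camera model), then a coarse test of whether a grid-cell center
# lies inside the camera's FOV wedge. Occlusion by buildings is NOT handled.
import math

def horizontal_fov(focal_length_mm: float, sensor_width_mm: float) -> float:
    """Horizontal field of view in degrees."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

def cell_in_fov(cam_xy, heading_deg, fov_deg, cell_xy, max_range_m=50.0) -> bool:
    """True if a grid-cell center falls within the camera's FOV wedge."""
    dx, dy = cell_xy[0] - cam_xy[0], cell_xy[1] - cam_xy[1]
    dist = math.hypot(dx, dy)
    bearing = math.degrees(math.atan2(dy, dx))
    off_axis = (bearing - heading_deg + 180) % 360 - 180  # signed angle off axis
    return dist <= max_range_m and abs(off_axis) <= fov_deg / 2

fov = horizontal_fov(focal_length_mm=6.0, sensor_width_mm=4.8)  # ~43.6 degrees
print(fov, cell_in_fov((0, 0), 90.0, fov, (3, 40)))             # cell is visible
```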

The Origin of Records and Archives in the United States and the Formation of Archival System: Focusing on the Period from the Early 17th Century to the Mid-20th Century (미국의 기록(records) 및 아카이브즈(archives)의 역사적 기원과 관리·보존의 역사 17세기 초부터 20세기 중반까지를 중심으로)

  • Lee, Seon Ok
    • The Korean Journal of Archival Studies / no.80 / pp.43-88 / 2024
  • The National Archives and Records Administration (NARA) is a relatively quiet latecomer among the traditional archives of the Western world. Although the United States lacks a long history of organized public records and archives management, it has developed a modern system optimized for the American historical context, one focused on the systematic management and preservation of the vast volume of modern records produced and collected during the tumultuous 20th century. The U.S. public records and archives management system rests on the principle that records and archives are the property of the American people and belong to the public. This concept originated during the British colonial era, when records were used to safeguard the rights of the colonists as self-governing citizens. For Americans, records and archives have long been a symbol of the nation's identity, serving as a means of protecting individual freedoms, rights, and democracy throughout the country's history. It is natural, therefore, that American life and history should be documented, and that the recorded past should be managed and preserved for the nation's present and future. The U.S. system is the result of a convergence of theories, practices, lessons learned, and ideas shaped by the country's history, philosophies, and values concerning records, together with its unique experience of records management. This paper traces the origins of records and archives in the United States in historical context in order to understand the organic relationship between American life and records, and it examines the formation of a modern public records management system that is both uniquely American and universal, without falling into either of the two traditional forms.

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung; Joo, Jihwan; Han, Ingoo
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.83-102 / 2021
  • The government recently announced various policies for developing the big-data and artificial-intelligence fields, providing the public a great opportunity through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea and is strongly committed to backing export companies through various systems; nevertheless, there are still few cases of realized business models based on big-data analysis. In this situation, this paper aims to develop a new business model that can be applied to ex-ante prediction of the likelihood of credit guarantee insurance accidents. We utilize internal data from KSURE, which supports export companies in Korea, apply machine learning models, and then compare the performance of the predictive models, including Logistic Regression, Random Forest, XGBoost, LightGBM, and a DNN (Deep Neural Network). For decades, researchers have tried to find better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. The prediction of financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), based on multiple discriminant analysis and still widely used in both research and practice; it uses five key financial ratios to predict the probability of bankruptcy within the next two years. Ohlson (1980) introduced a logit model to complement some limitations of previous models, and Elmer and Borowski (1988) developed and examined a rule-based, automated system for the financial analysis of savings and loans. Since the 1980s, researchers in Korea have also examined the prediction of financial distress or bankruptcy: Kim (1987) analyzed financial ratios and developed a prediction model; Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques including artificial neural networks; Yang (1996) introduced multiple discriminant analysis and a logit model; and Kim and Kim (2001) applied artificial neural network techniques to the ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with models such as Random Forest or SVM. One major distinction of our research from previous work is that we examine the predicted probability of default for each sample case, not only the classification accuracy of each model over the entire sample. Most predictive models in this paper achieve a classification accuracy of about 70% on the entire sample: the LightGBM model shows the highest accuracy, 71.1%, and the logit model the lowest, 69%. However, these results are open to multiple interpretations. In a business context, more emphasis must be placed on minimizing type 2 errors, which cause more harmful operating losses for the guarantee company. Thus, we also compare classification accuracy after splitting the predicted probability of default into ten equal intervals.
When we examine the classification accuracy for each interval, the logit model has the highest accuracy, 100%, for the 0~10% interval of predicted default probability, but a relatively low accuracy, 61.5%, for the 90~100% interval. Random Forest, XGBoost, LightGBM, and the DNN give more desirable results: higher accuracy in both the 0~10% and 90~100% intervals, but lower accuracy around the 50% interval. As for the distribution of samples across intervals, both the LightGBM and XGBoost models place relatively large numbers of samples in the 0~10% and 90~100% intervals. Although the Random Forest model has an advantage in classification accuracy on a small number of cases, LightGBM or XGBoost could be more desirable models, since they classify large numbers of cases into the two extreme intervals, even allowing for their relatively lower classification accuracy. Considering the importance of type 2 errors and total prediction accuracy, XGBoost and the DNN show superior performance; Random Forest and LightGBM come next, while logistic regression shows the worst performance. Each predictive model nevertheless has a comparative advantage under particular evaluation standards: for instance, the Random Forest model shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, one could construct more comprehensive ensemble models that contain multiple machine learning classifiers and conduct majority voting to maximize overall performance.
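A minimal sketch of this interval-wise evaluation, using synthetic data and sklearn's logistic regression and random forest as stand-ins for the paper's proprietary KSURE data and its XGBoost/LightGBM/DNN models:

```python
# Train classifiers, then compare classification accuracy within ten equal
# bins of the predicted default probability, as described in the abstract.
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the guarantee-accident data.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "Logit": LogisticRegression(max_iter=1000),
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]       # predicted default probability
    pred = (proba >= 0.5).astype(int)
    bins = pd.cut(proba, bins=np.linspace(0, 1, 11), include_lowest=True)
    per_bin = pd.Series(pred == y_te).groupby(bins, observed=True).mean()
    print(name, "overall accuracy:", round((pred == y_te).mean(), 3))
    print(per_bin)                                # accuracy per probability interval
```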

A Study on Industries' Leading at the Stock Market in Korea - Gradual Diffusion of Information and Cross-Asset Return Predictability - (산업의 주식시장 선행성에 관한 실증분석 - 자산간 수익률 예측 가능성 -)

  • Kim, Jong-Kwon
    • Proceedings of the Safety Management and Science Conference / 2004.11a / pp.355-380 / 2004
  • I test the hypothesis that the gradual diffusion of information across asset markets leads to cross-asset return predictability in Korea. Using thirty-six industry portfolios and the broad market index as test assets, I establish several key results. First, a number of industries, such as semiconductors, electronics, metals, and petroleum, lead the stock market by up to one month; in contrast, the market, which is widely followed, leads only a few industries. Importantly, an industry's ability to lead the market is correlated with its propensity to forecast various indicators of economic activity, such as industrial production growth. Consistent with the hypothesis, these findings indicate that the market reacts with a delay to information in industry returns about its fundamentals, because information diffuses only gradually across asset markets. Traditional theories of asset pricing assume that investors have unlimited information-processing capacity, but this assumption does not hold for many traders, even the most sophisticated ones. Many economists recognize that investors are better characterized as only boundedly rational (see Shiller (2000), Sims (2001)). Even from casual observation, few traders can pay attention to all sources of information, much less understand their impact on the prices of the assets they trade. Indeed, a large literature in psychology documents the extent to which attention is a precious cognitive resource (see, e.g., Kahneman (1973), Nisbett and Ross (1980), Fiske and Taylor (1991)). A number of papers have explored the implications of limited information-processing capacity for asset prices; I review this literature in Section II. For instance, Merton (1987) develops a static model of multiple stocks in which investors have information about only a limited number of stocks and trade only those; related models of limited market participation include Brennan (1975) and Allen and Gale (1994). As a result, stocks that are less recognized by investors have a smaller investor base (neglected stocks) and trade at a greater discount because of limited risk sharing. More recently, Hong and Stein (1999) develop a dynamic model of a single asset in which information gradually diffuses across the investment public and investors are unable to perform the rational-expectations trick of extracting information from prices. My hypothesis is that the gradual diffusion of information across asset markets leads to cross-asset return predictability. It relies on two key assumptions. The first is that valuable information originating in one asset reaches investors in other markets only with a lag, i.e., news travels slowly across markets. The second is that, because of limited information-processing capacity, many (though not necessarily all) investors may not pay attention to, or be able to extract information from, the asset prices of markets in which they do not participate. Taken together, these two assumptions lead to cross-asset return predictability. The hypothesis appears very plausible for a few reasons. To begin with, as pointed out by Merton (1987) and the subsequent literature on segmented markets and limited market participation, few investors trade all assets; limited participation is a pervasive feature of financial markets. Indeed, even among equity money managers there is specialization along industries, such as sector or market-timing funds. Some reasons for this limited market participation include tax, regulatory, or liquidity constraints. More plausibly, investors have to specialize because they have their hands full trying to understand the markets in which they do participate.
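A minimal sketch of the lead-lag test described above: regress this month's market return on last month's industry portfolio return, so that a significant slope indicates the industry leads the market. The data are synthetic, not the thirty-six Korean industry portfolios used in the paper:

```python
# Cross-asset lead-lag test: does last month's industry portfolio return
# forecast this month's market return? Synthetic monthly returns below.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
industry = rng.normal(0.01, 0.05, 240)                     # monthly industry returns
market = 0.3 * np.roll(industry, 1) + rng.normal(0.005, 0.04, 240)

y = market[1:]                        # market return at month t
X = sm.add_constant(industry[:-1])    # industry return at month t-1
res = sm.OLS(y, X).fit()
print(res.params, res.tvalues)        # significant slope => industry leads the market
```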
