• Title/Summary/Keyword: Analysis of application


Analysis on Factors Influencing Welfare Spending of Local Authority : Implementing the Detailed Data Extracted from the Social Security Information System (지방자치단체 자체 복지사업 지출 영향요인 분석 : 사회보장정보시스템을 통한 접근)

  • Kim, Kyoung-June;Ham, Young-Jin;Lee, Ki-Dong
    • Journal of Intelligence and Information Systems / v.19 no.2 / pp.141-156 / 2013
  • Research on the welfare services of local governments in Korea has tended to focus on isolated issues such as disability, childcare, and aging (Kang, 2004; Jung et al., 2009). Lately, however, local officials have come to realize that they need more comprehensive welfare services for all residents, not just for the focus groups mentioned above. Nevertheless, studies taking the focus-group approach have remained the main research stream for various reasons (Jung et al., 2009; Lee, 2009; Jang, 2011). The Social Security Information System is an information system that comprehensively manages 292 welfare benefits provided by 17 ministries and 40 thousand welfare services provided by 230 local authorities in Korea; its purpose is to improve the efficiency of the social welfare delivery process. Studies of local government expenditure have been on the rise over the last few decades since the reintroduction of local autonomy, but they have faced limitations in data collection. Measurement of a local government's welfare effort (spending) has primarily relied on per-capita expenditures or budgets set aside for welfare. This practice of using a per-capita monetary value as a "proxy" for welfare effort rests on the assumption that expenditure is directly linked to welfare effort (Lee et al., 2007). This expenditure/budget approach commonly uses the total welfare amount or a percentage figure as the dependent variable (Wildavsky, 1985; Lee et al., 2007; Kang, 2000). However, using the actual amount spent or a percentage figure as a dependent variable has limitations: since the budget or expenditure is greatly influenced by the total budget of a local government, relying on such monetary values may inflate or deflate the true "welfare effort" (Jang, 2012). In addition, government budgets usually contain a large amount of administrative cost, e.g., salaries for local officials, which is largely unrelated to actual welfare expenditure (Jang, 2011). This paper used local government welfare service data from the detailed data sets linked to the Social Security Information System. Its purpose is to analyze the factors that affected the social welfare spending of 230 local authorities in 2012. The paper applied a multiple-regression-based model to the pooled financial data from the system, and the following factors affecting self-funded welfare spending were identified. In our research model, we use the welfare budget as a share of the total budget (%) of a local government as the measure of its welfare effort (spending). In doing so, we exclude central government subsidies and support used for local welfare services, because central government welfare support does not truly reflect the welfare effort (spending) of a locality. The dependent variable of this paper is the volume of welfare spending, and the independent variables fall into three categories: socio-demographic factors, the local economy, and the financial capacity of the local government. The paper categorized local authorities into three groups, namely districts, cities, and suburban areas, and used a dummy variable as a control for the local political factor. This paper demonstrated that the volume of spending on welfare services is commonly influenced by the ratio of the welfare budget to the total local budget, the infant population, the self-reliance ratio, and the level of unemployment. Interestingly, the influential factors differ by the size of the local government. In analyzing the determinants of local governments' self-funded welfare spending, we found significant effects of local government financial characteristics (the degree of financial independence and the share of the social welfare budget), of the regional economy (the job openings-to-applications ratio), and of population characteristics (the share of infants). These results mean that local authorities should adopt differentiated welfare strategies according to their conditions and circumstances. The contribution of this paper is that it identifies the significant factors influencing the welfare spending of local governments in Korea.
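
As a rough illustration of the regression setup described in this abstract, the following is a minimal sketch with hypothetical variable names and synthetic data standing in for the 230 local authorities; it is not the paper's actual specification.

```python
# Minimal sketch of a multiple-regression model of self-funded welfare spending.
# Column names and data are hypothetical stand-ins for the study's variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 230  # number of local authorities in the study
df = pd.DataFrame({
    # Dependent variable: self-funded welfare budget / total budget (%)
    "welfare_budget_ratio": rng.uniform(1, 15, n),
    # Socio-demographic, local-economy, and fiscal predictors (hypothetical)
    "infant_ratio": rng.uniform(2, 8, n),
    "unemployment_rate": rng.uniform(1, 6, n),
    "self_reliance_ratio": rng.uniform(10, 80, n),
    "political_dummy": rng.integers(0, 2, n),  # political control variable
})

model = smf.ols(
    "welfare_budget_ratio ~ infant_ratio + unemployment_rate"
    " + self_reliance_ratio + C(political_dummy)",
    data=df,
).fit()
print(model.summary())
```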

A Study on People Counting in Public Metro Service using Hybrid CNN-LSTM Algorithm (Hybrid CNN-LSTM 알고리즘을 활용한 도시철도 내 피플 카운팅 연구)

  • Choi, Ji-Hye;Kim, Min-Seung;Lee, Chan-Ho;Choi, Jung-Hwan;Lee, Jeong-Hee;Sung, Tae-Eung
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.131-145 / 2020
  • In line with the trend of industrial innovation, IoT technology, utilized in a variety of fields, is emerging as a key element in the creation of new business models and the provision of user-friendly services through its combination with big data. The data accumulated from Internet-of-Things (IoT) devices are being used in many ways to build convenience-oriented smart systems, as they enable customized intelligent systems through analysis of user environments and patterns. Recently, IoT has been applied to innovation in the public domain, for example in smart cities and smart transportation, such as solving traffic and crime problems using CCTV. In particular, when planning underground services or establishing a passenger-flow control information system to enhance the convenience of citizens and commuters amid the congestion of public transportation such as subways and urban railways, it is necessary to consider comprehensively both the ease of securing real-time service data and the stability of security. However, previous studies that utilize image data suffer degraded object-detection performance under privacy constraints and abnormal conditions. The IoT sensor data used in this study are free from privacy issues because they do not identify individuals, and so can be effectively utilized to build intelligent public services for unspecified people. We utilize IoT-based infrared sensor devices for an intelligent pedestrian tracking system in a metro service that many people use daily, with the temperature data measured by the sensors transmitted in real time. The experimental environment collected real-time sensor data at the equally spaced midpoints of a 4×4 grid in the ceiling of subway entrances where actual passenger movement is high, measuring the temperature change for objects entering and leaving the detection spots. The measured data went through preprocessing in which reference values for the 16 areas were set and the differences between the temperatures in the 16 areas and their reference values per unit of time were calculated; this methodology maximizes the signal of movement within the detection area. In addition, the data were scaled up by a factor of 10 to reflect temperature differences by area more sensitively: for example, if the temperature collected from a sensor at a given time was 28.5℃, the value was changed to 285 for analysis. The data collected from the sensors thus have the characteristics of both time-series data and image data with 4×4 resolution. Reflecting these characteristics, we propose a hybrid algorithm, CNN-LSTM (Convolutional Neural Network-Long Short-Term Memory), that combines a CNN, which offers superior performance for image classification, with an LSTM, which is especially suitable for analyzing time-series data. In this study, the CNN-LSTM algorithm is used to predict the number of people passing through one of the 4×4 detection areas. We validated the proposed model by comparing its performance with other artificial intelligence algorithms: Multi-Layer Perceptron (MLP), Long Short-Term Memory (LSTM), and RNN-LSTM (Recurrent Neural Network-Long Short-Term Memory). In the experiments, the proposed CNN-LSTM hybrid model showed the best predictive performance. By utilizing the proposed devices and models, various metro services are expected to be provided without legal issues concerning personal information, such as real-time monitoring of public transport facilities and congestion-based emergency response services. However, the data were collected from only one side of the entrances, and data collected over a short period were applied to the prediction, so verification in other environments remains a limitation. In the future, the reliability of the proposed model is expected to improve if experimental data are collected in a greater variety of environments or if the training data are extended with measurements from other sensors.
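
As a rough illustration of the architecture this abstract describes, here is a minimal Keras sketch; the layer sizes, sequence length, and synthetic data are my own assumptions, not the paper's configuration.

```python
# Minimal sketch of a hybrid CNN-LSTM over sequences of 4x4 temperature frames.
import numpy as np
from tensorflow.keras import layers, models

seq_len = 30  # hypothetical number of time steps per sample

model = models.Sequential([
    # CNN stage: extract spatial features from each 4x4 frame independently.
    layers.TimeDistributed(
        layers.Conv2D(16, (2, 2), activation="relu", padding="same"),
        input_shape=(seq_len, 4, 4, 1),
    ),
    layers.TimeDistributed(layers.Flatten()),
    # LSTM stage: model the temporal dynamics of the frame features.
    layers.LSTM(32),
    layers.Dense(1),  # predicted number of passing persons
])
model.compile(optimizer="adam", loss="mse")

# Preprocessing as described in the abstract: scale by 10 (28.5 C -> 285),
# then subtract per-area reference values (synthetic stand-in data here).
raw = np.random.rand(100, seq_len, 4, 4, 1) * 10 + 280
reference = raw.mean(axis=(0, 1), keepdims=True)
x = raw - reference
y = np.random.poisson(3, size=(100, 1)).astype("float32")
model.fit(x, y, epochs=2, batch_size=16, verbose=0)
```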

The Process of Establishing a Japanese-style Garden and Embodying Identity in Modern Japan (일본 근대 시기 일본풍 정원의 확립과정과 정체성 구현)

  • An, Joon-Young;Jun, Da-Seul
    • Journal of the Korean Institute of Traditional Landscape Architecture / v.41 no.3 / pp.59-66 / 2023
  • This study examines the process by which the Japanese-style garden was established in the modern period, through the perspectives of garden designers, spatial composition, spatial components, and the materials used in their works, with the aim of providing data for embodying the identity of the Korean garden. The results are as follows. First, in incorporating elements associated with national character into modern garden culture, Korea and Japan differ in location, presence, and subjectivity; this reflects Japan's relatively seamless cultural continuity compared with Korea's cultural disconnection during the modern period. Second, prior to the modern period, Japan's garden culture spread and developed throughout the country without significant interruption. During the modern period, however, the Meiji government promoted the policy of 'civilization and enlightenment (Bunmei-kaika, 文明開化)' and introduced European and American civilization, leading to the popularity of Western architectural techniques; the rapid introduction of Western culture caused traditional Japanese culture to be overshadowed. In 1879, the British architect Josiah Conder, who trained Japanese architects, brought traditional Japanese garden design into atelier practice. The garden style of Ogawa Jihei VII, a garden designer in Kyoto during the Meiji and Taisho periods, was embraced by influential political and business leaders who sought to preserve Japan's traditional culture, and a protection system for gardens was established through various laws and regulations. Third, a comprehensive analysis of Japanese modern gardens, covering garden designers, components, materials, elements, and the Japanese style, showed that Yamagata Aritomo, Ogawa Jihei VII, and Mirei Shigemori were the representative garden designers who preserved the Japanese style in their gardens. They introduced features such as the Daejicheon (大池泉) garden, which places a large pond on spacious ground, as well as the naturalistic borrowed-scenery method and flowing water. Key components of the Japanese-style garden include the use of turf, winding garden paths, and variation in plant species. Fourth, an analysis of the Japanese-style elements in the target sites revealed that flowing water had the highest occurrence among the individual elements of spatial composition, at 47.06%; Daejicheon and naturalistic borrowed scenery also appeared. Turf and winding paths appeared in 65.88% and 78.82% of sites, respectively, while the alteration of tree species was relatively less common at 28.24%. Fifth, it is essential to discover more gardens from the modern period and to meticulously document their creators or owners, spatial composition, spatial components, and materials; this information will be invaluable in uncovering the identity of Korean gardens. This study analyzed the process of establishing the Japanese style during Japan's modern period using examples of garden designers and gardens. While it has limitations, such as the absence of more case studies and of analysis of specific techniques, it sets the stage for future exploration.

Comparative analysis of status of safety accidents and importance-performance analysis (IPA) about precautions of safety accidents by employment type of industry foodservices in Jeonbuk area (전북지역 산업체급식소 조리종사자의 고용형태에 따른 안전사고 실태 및 안전사고 예방관리에 대한 중요도와 수행도 분석)

  • So, Hee;Rho, Jeong Ok
    • Journal of Nutrition and Health / v.50 no.4 / pp.402-414 / 2017
  • Purpose: The purpose of this study was to evaluate the status of safety accidents and to conduct an importance-performance analysis (IPA) comparing regular and non-regular employees in industry foodservices. Methods: The participants were regular employees (n = 119) and non-regular employees (n = 163) in industry foodservices in the Jeonbuk area. Demographic characteristics, status of safety accidents, safety education, and importance and performance status were assessed using a self-administered questionnaire. Results: Approximately 66.4% of regular employees and 53.4% of non-regular employees had experienced safety accidents (p < 0.05). For both groups, the most common type of safety accident was burns, and the most common cause was the employee's own negligence. Approximately 98.3% of regular employees and 95.1% of non-regular employees had received safety education, and approximately 88.9% of regular employees and 96.8% of non-regular employees received that education from dietitians. Approximately 41.9% of regular employees and 50.0% of non-regular employees had difficulty applying the contents of safety education due to lack of time during work. According to the IPA, regular and non-regular employees were aware of the importance of, and performed well on, the following items: 'Clean the floor of the work place', 'Arrange in the work area', 'Wear safety shoes', 'Check for heater cord', and 'Safety cooking when using oil'. On the other hand, they were not aware of the importance of, and performed insufficiently on: 'Check for the MSDS', 'Aware of chemical signs', 'Wear protection gloves etc.', 'Do stretching exercise', and 'Using ancillary tools'. Conclusion: It is therefore necessary to raise dietitians' awareness so that safety education content can be applied effectively, and to develop educational content, especially regarding the MSDS and related topics.
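
For readers unfamiliar with IPA, the sketch below shows the standard quadrant construction on made-up items and ratings (not the study's data): each item is classified by comparing its mean importance and performance scores against the grand means of the two axes.

```python
# Minimal sketch of an importance-performance analysis (IPA) quadrant split.
# Items and scores are hypothetical illustrations, not the survey results.
import pandas as pd

df = pd.DataFrame({
    "item": ["Clean the floor", "Check the MSDS", "Wear safety shoes", "Do stretching"],
    "importance": [4.6, 3.2, 4.5, 3.1],   # hypothetical mean ratings (1-5)
    "performance": [4.4, 2.9, 4.3, 2.8],
})

# The IPA grid is centered on the grand means of the two axes.
imp_mid, perf_mid = df["importance"].mean(), df["performance"].mean()

def quadrant(row):
    if row.importance >= imp_mid:
        return "Keep up the good work" if row.performance >= perf_mid else "Concentrate here"
    return "Possible overkill" if row.performance >= perf_mid else "Low priority"

df["quadrant"] = df.apply(quadrant, axis=1)
print(df)
```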

Application of LCA on Lettuce Cropping System by Bottom-up Methodology in Protected Cultivation (시설상추 농가를 대상으로 하는 bottom-up 방식 LCA 방법론의 농업적 적용)

  • Ryu, Jong-Hee;Kim, Kye-Hoon;Kim, Gun-Yeob;So, Kyu-Ho;Kang, Kee-Kyung
    • Korean Journal of Soil Science and Fertilizer / v.44 no.6 / pp.1195-1206 / 2011
  • This study was conducted to apply life cycle assessment (LCA) methodology to lettuce (Lactuca sativa L.) production systems in Namyangju as a case study. Five lettuce-growing farms with three different farming systems (two organic farms, one farm without agricultural chemicals, and two conventional farms) were selected in Namyangju city, Gyeonggi province, Korea. The input data for the LCA were collected by interviewing the farmers. The system boundary was set to a cropping season without heating and cooling systems to reduce uncertainty in data collection and calculation. Sensitivity analysis was carried out to determine the effect of the type and amount of fertilizer and energy use on greenhouse gas (GHG) emissions. The gate-to-gate (GTG) inventory revealed that fertilizer and energy were the largest inputs for producing 1 kg of lettuce, and pesticides the smallest. Electricity input was the largest at all farms except farm 1, which purchased seedlings from outside. The direct field emissions of CO₂, CH₄, and N₂O were 6.79E-03 (farm 1), 8.10E-03 (farm 2), 1.82E-02 (farm 3), 7.51E-02 (farm 4), and 1.61E-02 (farm 5) kg per kg of lettuce, respectively. According to the LCI analysis focused on GHG, CO₂ emissions were 2.92E-01 (farm 1), 3.76E-01 (farm 2), 4.11E-01 (farm 3), 9.40E-01 (farm 4), and 5.37E-01 (farm 5) kg CO₂ per kg of lettuce, respectively. Carbon dioxide contributed the most to GHG emissions. It was mainly emitted in the process of energy production, which accounted for 67~91% of CO₂ emissions across the production processes of the five farms. Owing to the higher share of CO₂ emissions from compound fertilizer production in the conventional system, the conventional system had a lower share of CO₂ emissions from energy production than the organic system did. With increasing inorganic fertilizer input, the cultivation process accounted for a higher share of N₂O emissions: 87% of total N₂O emissions at farms 1 and 2, and 64% at farm 3. The carbon footprints were 3.40E-01 (farm 1), 4.31E-01 (farm 2), 5.32E-01 (farm 3), 1.08E+00 (farm 4), and 6.14E-01 (farm 5) kg CO₂-eq. per kg of lettuce, respectively. Sensitivity analysis revealed that soybean meal was the most sensitive among the four types of fertilizer, and compound fertilizer the least sensitive of all fertilizer inputs. Electricity showed the largest sensitivity with respect to CO₂ emissions, while the variation in N₂O was almost zero.
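
The carbon-footprint figures combine the three gases into CO₂-equivalents. Below is a minimal sketch of that step, using IPCC AR4 100-year GWP factors; only the CO₂ value is taken from the abstract (farm 1), while the CH₄ and N₂O values are assumed for illustration.

```python
# Minimal sketch: converting per-gas emissions to a carbon footprint (CO2-eq).
GWP = {"CO2": 1.0, "CH4": 25.0, "N2O": 298.0}  # IPCC AR4 100-year factors

# kg of each gas emitted per kg of lettuce; CO2 from the abstract (farm 1),
# CH4 and N2O are illustrative assumptions, not the paper's inventory data.
emissions = {"CO2": 2.92e-01, "CH4": 1.0e-04, "N2O": 1.5e-04}

carbon_footprint = sum(amount * GWP[gas] for gas, amount in emissions.items())
print(f"{carbon_footprint:.3f} kg CO2-eq per kg lettuce")
```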

Intelligent Brand Positioning Visualization System Based on Web Search Traffic Information : Focusing on Tablet PC (웹검색 트래픽 정보를 활용한 지능형 브랜드 포지셔닝 시스템 : 태블릿 PC 사례를 중심으로)

  • Jun, Seung-Pyo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.19 no.3 / pp.93-111 / 2013
  • As the Internet and information technology (IT) continue to develop and evolve, big data has moved to the foreground of scholarly and industrial attention. Big data is generally defined as data that exceed the range that can be collected, stored, managed, and analyzed by conventional information systems; the term also refers to the new technologies designed to extract value effectively from such data. With the widespread dissemination of IT systems, continual efforts have been made in fields such as R&D, manufacturing, and finance to collect and analyze immense quantities of data in order to extract meaningful information and use it to solve various problems. As IT has converged with many industries, digital data are being generated at a remarkably accelerating rate, while developments in state-of-the-art technology have continually enhanced system performance. The types of big data currently receiving the most attention include information available within companies, such as information on consumer characteristics, purchase records, logistics, and logs of product and service usage, as well as information accumulated outside companies, such as the web search traffic of online users, social network information, and patent information. Among these, web searches performed by online users constitute one of the most effective and important sources of information for marketing purposes, because consumers search for information on the Internet in order to make efficient and rational choices. Recently, Google has provided public access to its information on the web search traffic of online users through a service named Google Trends. Research that uses this web search traffic information to analyze the information-search behavior of online users is now receiving much attention in academia and industry. Studies using web search traffic information can be broadly classified into two fields. The first consists of empirical demonstrations that web search information can be used to forecast social phenomena, the purchasing power of consumers, the outcomes of political elections, and so on. The other field focuses on using web search traffic information to observe consumer behavior, for example identifying the product attributes that consumers regard as important or tracking changes in consumers' expectations, but relatively little research has been completed in this field. In particular, to the extent of our knowledge, hardly any brand-related studies have attempted to use web search traffic information to analyze the factors that influence consumers' purchasing activities. This study aims to demonstrate that consumers' web search traffic information can be used to derive the relations among brands and the relations between an individual brand and product attributes. When consumers input search words on the web, they may use a single keyword, but they also often input multiple keywords to seek related information (referred to as simultaneous searching). A consumer performs a simultaneous search either to compare two product brands and obtain information on their similarities and differences, or to acquire more in-depth information about a specific attribute of a specific brand. Web search traffic data show that the volume of simultaneous searches for certain keywords increases as the relation between them is closer in the consumer's mind, and the relations among keywords can be derived by collecting this relational data and subjecting it to network analysis. Accordingly, this study proposes a method of analyzing how brands are positioned by consumers, and what relationships exist between product attributes and an individual brand, using simultaneous search traffic information. It also presents case studies demonstrating the actual application of this method, focusing on tablet PCs, which belong to an innovative product group.
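
A minimal sketch of the network construction this abstract proposes, with hypothetical brands, attributes, and simultaneous-search volumes standing in for data such as Google Trends exports:

```python
# Minimal sketch: build a co-search network whose edge weights are
# simultaneous-search volumes, then inspect keyword centrality.
# All brands, attributes, and volumes below are hypothetical.
import networkx as nx

co_search_volume = {
    ("iPad", "Galaxy Tab"): 80,       # brand-brand comparison searches
    ("iPad", "battery life"): 35,     # brand-attribute searches
    ("Galaxy Tab", "battery life"): 25,
    ("iPad", "price"): 50,
}

G = nx.Graph()
for (a, b), weight in co_search_volume.items():
    G.add_edge(a, b, weight=weight)

# Centrality over the network suggests how prominently each keyword is
# positioned relative to the others in consumers' minds.
print(nx.degree_centrality(G))
```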

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.29-45 / 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. Statistical techniques, including multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis, have traditionally been used in bond rating. However, one major drawback is that they rest on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited the application of traditional statistics to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). SVM in particular is recognized as a promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories. It is simple enough to be analyzed mathematically and achieves high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound on the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many data samples for training, since it builds prediction models using only representative samples near the boundaries, called support vectors. A number of experimental studies have shown that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for binary-class classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class problems as much as SVM does in binary classification. Second, approximation algorithms (e.g., decomposition methods, the sequential minimal optimization algorithm) can be used to reduce the computation time of multi-class problems, but they can deteriorate classification performance. Third, multi-class prediction is complicated by the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers that in another class. Such data sets often produce a default classifier with a skewed boundary, reducing classification accuracy. SVM ensemble learning is one way to cope with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms, and AdaBoost is one of the most widely used ensemble learning techniques. It constructs a composite classifier by sequentially training classifiers while increasing the weight on misclassified observations through iterations: observations that are incorrectly predicted by previous classifiers are chosen more often than those that are correctly predicted. Boosting thus attempts to produce new classifiers that better predict the examples on which the current ensemble performs poorly, reinforcing the training of misclassified observations in the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to address the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process can take account of geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. Ten-fold cross-validation was performed three times with different random seeds to ensure that the comparison among the three classifiers did not happen by chance. For each 10-fold cross-validation, the entire data set was first partitioned into ten equal-sized sets, and each set was in turn used as the test set while the classifier trained on the other nine; cross-validated folds were thus tested independently for each algorithm. Through these steps, we obtained results for the classifiers in each of the 30 experiments. In terms of arithmetic mean-based prediction accuracy, MGM-Boost (52.95%) outperformed both AdaBoost (51.69%) and SVM (49.47%). In terms of geometric mean-based prediction accuracy, MGM-Boost (28.12%) also outperformed AdaBoost (24.65%) and SVM (15.42%). A t-test was used to examine whether the performance of the classifiers across the 30 folds differed significantly; the results indicate that the performance of MGM-Boost differs significantly from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
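
To illustrate the geometric-mean notion that MGM-Boost builds on (this is my illustration of the general idea, not the paper's algorithm): unlike the arithmetic mean, the geometric mean of per-class accuracies collapses toward zero if any single class is predicted poorly, so it rewards balanced multiclass performance.

```python
# Minimal sketch of geometric mean-based accuracy over per-class recalls.
import numpy as np

def geometric_mean_accuracy(y_true, y_pred):
    classes = np.unique(y_true)
    # Recall (accuracy) computed separately for each class.
    per_class_recall = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.prod(per_class_recall) ** (1.0 / len(classes)))

y_true = np.array([0, 0, 0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 0, 0, 0, 1, 0, 2, 2])
# Raw accuracy is 87.5%, but the geometric mean is ~0.79 because the
# minority class 1 is only half correct.
print(geometric_mean_accuracy(y_true, y_pred))
```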

Optimization and Development of Prediction Model on the Removal Condition of Livestock Wastewater using a Response Surface Method in the Photo-Fenton Oxidation Process (Photo-Fenton 산화공정에서 반응표면분석법을 이용한 축산폐수의 COD 처리조건 최적화 및 예측식 수립)

  • Cho, Il-Hyoung;Chang, Soon-Woong;Lee, Si-Jin
    • Journal of Korean Society of Environmental Engineers / v.30 no.6 / pp.642-652 / 2008
  • The aim of our research was to apply experimental design methodology to optimize the conditions of photo-Fenton oxidation of residual livestock wastewater after the coagulation process. The photo-Fenton oxidation reactions were described mathematically as a function of the amount of Fe(II) (x₁), H₂O₂ (x₂), and pH (x₃), modeled with the Box-Behnken method, which fits second-order response surface models and is an alternative to central composite designs. Applying RSM with the Box-Behnken design yielded the following regression equation, an empirical relationship between the removal (%) of livestock wastewater COD and the test variables in coded units: Y = 79.3 + 15.61x₁ - 7.31x₂ - 4.26x₃ - 18x₁² - 10x₂² - 11.9x₃² + 2.49x₁x₂ - 4.4x₂x₃ - 1.65x₁x₃. The model predictions agreed well with the experimentally observed results (R² = 0.96). The results show that the removal (%) in photo-Fenton oxidation of livestock wastewater was significantly and synergistically affected by the linear terms (Fe(II) (x₁), H₂O₂ (x₂), pH (x₃)), whereas the quadratic terms Fe(II) × Fe(II) (x₁²), H₂O₂ × H₂O₂ (x₂²), and pH × pH (x₃²) had significant antagonistic effects. The cross-product term H₂O₂ × pH (x₂x₃) also had an antagonistic effect. By canonical analysis, the estimated ridge of the expected maximum response and the optimal conditions for Y were 84 ± 0.95% at Fe(II) = 0.0146 mM, H₂O₂ = 0.0867 mM, and pH = 4.70. The optimal Fe/H₂O₂ ratio was 0.17 at pH 4.7.
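
The fitted surface can be evaluated directly. The sketch below encodes the regression equation from the abstract in coded units; the sample evaluation points are my own choices for illustration.

```python
# Minimal sketch: evaluate the second-order response surface from the abstract.
def removal_percent(x1, x2, x3):
    """Predicted COD removal (%) for coded Fe(II) (x1), H2O2 (x2), pH (x3)."""
    return (79.3 + 15.61 * x1 - 7.31 * x2 - 4.26 * x3
            - 18.0 * x1**2 - 10.0 * x2**2 - 11.9 * x3**2
            + 2.49 * x1 * x2 - 4.4 * x2 * x3 - 1.65 * x1 * x3)

# At the center point of the design (all coded variables zero): 79.3%.
print(removal_percent(0.0, 0.0, 0.0))
# Near the stationary point the response approaches the reported ~84% maximum.
print(removal_percent(0.4, -0.4, -0.1))
```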

A Study on Chinese Traditional Auspicious Fish Pattern Application in Corporate Identity Design (중국 전통 길상 어(魚)문양을 응용한 중국 기업의 아이덴티티 디자인 동향)

  • ZHANG, JINGQIU
    • Cartoon and Animation Studies / s.50 / pp.349-382 / 2018
  • China is a great civilization formed from various ethnic groups over a long and changing history. As an important component of traditional culture, the auspicious pattern has passed through the ideological upheavals of Chinese history and has become an important stimulus of national feeling, forming the basis, even the core, of the rich traditional culture built up over the long history of the Chinese nation. The auspicious pattern is a way of expressing Chinese history and national emotion; it is an important part of people's living habits, emotions, and cultural background, carries the value of surname-totem beliefs, and also serves to transmit information, with symbols created from auspicious shapes to indicate their conceptual content. There are many kinds of auspicious patterns, and researching all of them in depth has its limits, so this study focuses on the auspicious fish pattern. The fish pattern is among the earliest auspicious shapes created by the Chinese nation, dating back about 6,000 years; its distinctive form and auspicious meaning embody the inherent culture of the Chinese nation, and it is an important component of Chinese traditional culture. Research on the traditional fish pattern has focused on its historical continuity and pattern recognition, and has seldom carried its meaning into modern design practice. By examining the auspicious meanings and forms of the fish pattern, this study therefore aims to analyze the real value of the fish pattern in modern corporate identity design. The method is to trace the historical development, evolution, and auspicious meanings of the traditional fish pattern in order to analyze its symbolic and cultural meanings at the levels of nation, culture, art, and daily life, and then, using numerous real examples of corporate identity designs based on the traditional fish pattern, to analyze how the pattern works positively for enterprises whose image is grounded in trust. In modern Chinese corporate identity design, the auspicious image is being reinterpreted in a modern way. A questionnaire survey of consumers' perceptions and their recognition of corporate images was conducted; from the results, one can infer improved consumer recognition and the potential for developing traditional concepts in design. The traditional fish pattern remains an important core of modern design.

Pareto Ratio and Inequality Level of Knowledge Sharing in Virtual Knowledge Collaboration: Analysis of Behaviors on Wikipedia (지식 공유의 파레토 비율 및 불평등 정도와 가상 지식 협업: 위키피디아 행위 데이터 분석)

  • Park, Hyun-Jung;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.19-43 / 2014
  • The Pareto principle, also known as the 80-20 rule, states that for many events, including natural phenomena, roughly 80% of the effects come from 20% of the causes. It has been recognized as a golden rule in business, with wide application of findings such as 20 percent of customers generating 80 percent of total sales. On the other hand, the Long Tail theory, which points out that "the trivial many" produce more value than "the vital few," has gained popularity in recent times thanks to the tremendous reduction of distribution and inventory costs enabled by the development of ICT (Information and Communication Technology). This study set out to illuminate how these two primary business paradigms, the Pareto principle and the Long Tail theory, relate to the success of virtual knowledge collaboration. The importance of virtual knowledge collaboration is soaring in this era of globalization and virtualization, which transcends geographical and temporal constraints. Many previous studies on knowledge sharing have focused on the factors that affect knowledge sharing, seeking to boost individual knowledge sharing and to resolve the social dilemma caused by the fact that rational individuals are likely to consume rather than contribute knowledge. Knowledge collaboration can be defined as the creation of knowledge not only by sharing knowledge, but also by transforming and integrating it. From this perspective, the relative distribution of knowledge sharing among participants can count as much as the absolute amount of individual knowledge sharing. In particular, whether a greater contribution by the upper 20 percent of participants enhances the efficiency of overall knowledge collaboration is an issue of interest. This study examines the effect of this distribution of knowledge sharing on the efficiency of knowledge collaboration and extends the analysis to reflect work characteristics. All analyses were conducted on actual behavioral data instead of self-reported questionnaire surveys. More specifically, we analyzed the collaborative behaviors of the editors of 2,978 English Wikipedia featured articles, the highest quality grade of articles in English Wikipedia. We adopted the Pareto ratio, the ratio of the number of knowledge contributions made by the upper 20 percent of participants to the total number of knowledge contributions in an article group, to examine the effect of the Pareto principle. In addition, the Gini coefficient, which represents inequality of income within a group of people, was applied to reveal the effect of inequality of knowledge contribution. Hypotheses were set up based on the assumption that a higher ratio of knowledge contribution by more highly motivated participants leads to higher collaboration efficiency, but that if the ratio becomes too high, collaboration efficiency deteriorates because overall informational diversity is threatened and the knowledge contributions of less motivated participants are discouraged. Cox regression models were formulated for each of the focal variables, the Pareto ratio and the Gini coefficient, with seven control variables such as the number of editors involved in an article, the average time between successive edits of an article, and the number of sections a featured article has. The dependent variable of the Cox models is the time from article initiation to promotion to featured article status, indicating the efficiency of knowledge collaboration. To examine whether the effects of the focal variables vary with the characteristics of the group task, we classified the 2,978 featured articles into two categories, academic and non-academic, where academic articles are those citing at least one paper published in an SCI, SSCI, A&HCI, or SCIE journal. We assumed that academic articles are more complex, entail more information processing and problem solving, and thus require more skill variety and expertise. The analysis results indicate the following. First, the Pareto ratio and the inequality of knowledge sharing relate in a curvilinear fashion to collaboration efficiency in an online community, promoting it up to an optimal point and undermining it thereafter. Second, this curvilinear effect is more pronounced for academic tasks.
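
A minimal sketch of the two focal measures, implemented from their standard definitions and applied to hypothetical per-editor edit counts (not the Wikipedia data):

```python
# Minimal sketch: Pareto ratio and Gini coefficient of contribution counts.
import numpy as np

def pareto_ratio(contributions):
    """Share of total contributions made by the top 20% of contributors."""
    c = np.sort(np.asarray(contributions, dtype=float))[::-1]
    top_k = max(1, int(round(0.2 * len(c))))
    return c[:top_k].sum() / c.sum()

def gini(contributions):
    """Gini coefficient of the contribution distribution (0 = equal)."""
    c = np.sort(np.asarray(contributions, dtype=float))  # ascending
    n = len(c)
    index = np.arange(1, n + 1)
    return (2 * (index * c).sum()) / (n * c.sum()) - (n + 1) / n

edits = [120, 40, 15, 9, 7, 5, 3, 2, 2, 1]  # hypothetical edits per editor
print(pareto_ratio(edits))  # ~0.78: top 2 of 10 editors did ~78% of the work
print(gini(edits))
```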