• Title/Summary/Keyword: Attempts

Search Result 6,217, Processing Time 0.036 seconds

A Study on the Standard Land Price and Just Compensation (공공수용 적정보상지가에 관한 분석)

  • LEE, Hojun;KIM, Hyungtai;JEONG, Dongho
    • KDI Journal of Economic Policy
    • /
    • v.34 no.3
    • /
    • pp.1-29
    • /
    • 2012
  • Based on spatial and land price data for Korea's innovation cities and their peripheral areas, this study examines the degree and timing of land price changes associated with innovation city projects. The results confirm that the current system is inconsistent with the principle of restitution of development gains, and the study therefore seeks improvements that would bring the system closer to that principle. The analysis reveals that most innovation cities, excluding Sinseo-dong in Daegu and Ujeong-dong in Ulsan, recorded statistically significant increases in land prices relative to their neighboring areas from 2005 onward, indicating that information related to the innovation city projects was reflected in land prices from 2005. However, the standard land price under Article 70 of the Land Compensation Act is the officially assessed land price released on January 1, 2007, and this price was actually applied in the compensation process. Estimating compensation for land expropriation on this basis therefore contradicts the principle of restitution of development gains: although development-related information was already reflected in the land prices of innovation cities from 2005 to the end of 2006, the compensation process was carried out without institutional arrangements or efforts to exclude that effect. To solve this problem, this study makes two suggestions. First, it is necessary to move beyond the limitation that only the officially assessed land prices retroactively applicable under Paragraph 5 of Article 70 of the Land Compensation Act may be used, and instead to apply the most recent land price deemed free of development gains.
If the compensation amount based on this revised standard land price is then corrected by the average inflation rate and the average rate of increase in land prices over the period up to the time of the recognized land price, the resulting amount would better satisfy the principle of restitution of development gains. Second, the standards for judging whether development gains are reflected in the land price should be clearly stipulated in secondary legislation. Under the current system, it is highly likely that an appraiser's arbitrary interpretation of development gains enters the calculation of compensation for land expropriation. In this regard, the standards for determining whether development gains are reflected should be improved, drawing on the results of this academic research and the existing guidelines for appraisal of compensation for land expropriation published by the Korea Association of Property Appraisers.

  • PDF
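One plausible reading of the proposed correction can be illustrated with a small calculation. The sketch below uses entirely hypothetical figures (the base price, rates, and period are illustrative, not the paper's data) and assumes the two adjustment rates compound multiplicatively:

```python
# Hypothetical figures; the paper's actual base prices and rates differ
base_price = 100_000        # assessed price per m^2 (KRW) before development news
years = 2                   # e.g. from the 2005 assessment to the 2007 valuation
avg_inflation = 0.03        # assumed average annual inflation rate
avg_land_increase = 0.02    # assumed average annual increase in comparable land prices

# Compound the pre-announcement price forward to the valuation date
corrected = base_price * ((1 + avg_inflation) * (1 + avg_land_increase)) ** years
print(round(corrected, 2))  # → 110376.04
```

Because the base price predates the reflection of development information, the compounded figure excludes development gains while still crediting the owner with general price growth.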

Technical Inefficiency in Korea's Manufacturing Industries (한국(韓國) 제조업(製造業)의 기술적(技術的) 효율성(效率性) : 산업별(産業別) 기술적(技術的) 효율성(效率性)의 추정(推定))

  • Yoo, Seong-min;Lee, In-chan
    • KDI Journal of Economic Policy
    • /
    • v.12 no.2
    • /
    • pp.51-79
    • /
    • 1990
  • Research on technical efficiency, an important dimension of market performance, received little attention from industrial organization empiricists until recently, largely because traditional microeconomic theory simply assumed away any form of inefficiency in production. Recently, however, a growing number of studies have addressed questions such as: To what extent do technical inefficiencies exist in the production activities of firms and plants? What factors account for the level of inefficiency found, and for the interindustry differences in technical inefficiency? Are there significant international differences in levels of technical efficiency and, if so, how can these be reconciled with the observed pattern of international trade? As the first in a series of studies on the technical efficiency of Korea's manufacturing industries, this paper attempts to answer some of these questions. Since the estimation of technical efficiency requires plant-level data for each of the five-digit KSIC industries available from the Census of Manufactures, the findings of this paper may be construed as empirical evidence on technical efficiency in Korea's manufacturing industries at the most disaggregated level. We start by clarifying the relationships among the various concepts of efficiency: allocative efficiency, factor-price efficiency, technical efficiency, Leibenstein's X-efficiency, and scale efficiency. It then becomes clear that unless certain ceteris paribus assumptions are satisfied, our estimates of technical inefficiency are in fact related to factor-price inefficiency as well. The empirical model employed is a stochastic frontier production function, which divides the stochastic term into two components: a symmetrically distributed component for pure white noise and an asymmetrically distributed component for technical inefficiency.
A translog production function is assumed for the functional relationship between inputs and output, and is estimated by the corrected ordinary least squares method. The second and third sample moments of the regression residuals are then used to yield estimates of four different measures of technical (in)efficiency. The manufacturing industries can be divided into two groups, depending on whether or not the distribution of estimated regression residuals allows a successful estimation of technical efficiency. The regression equation employing value added as the dependent variable gives a greater number of "successful" industries than the one using gross output. The correlation among estimates of the different measures of efficiency appears to be high, while estimates based on different regression equations seem almost uncorrelated. Thus, in the subsequent analysis of the determinants of interindustry variation in technical efficiency, the choice of regression equation in the previous stage significantly affects the outcome.

  • PDF
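The moment-based estimation sketched in this abstract can be illustrated on simulated data. The sketch below simplifies the authors' translog frontier to a Cobb-Douglas form and assumes a normal/half-normal error decomposition; all coefficients and the sample size are hypothetical:

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)
n = 5000
# Simulated plant data; a Cobb-Douglas stands in for the paper's translog form
logK = rng.normal(3.0, 0.5, n)
logL = rng.normal(2.0, 0.5, n)
v = rng.normal(0.0, 0.2, n)            # symmetric noise component
u = np.abs(rng.normal(0.0, 0.4, n))    # half-normal technical inefficiency
logY = 1.0 + 0.4 * logK + 0.6 * logL + v - u

# Corrected OLS, step 1: ordinary least squares on the frontier regressors
X = np.column_stack([np.ones(n), logK, logL])
beta, *_ = np.linalg.lstsq(X, logY, rcond=None)
resid = logY - X @ beta

# Step 2: second and third central moments of the residuals
m2 = np.mean((resid - resid.mean()) ** 2)
m3 = np.mean((resid - resid.mean()) ** 3)

# Step 3: method-of-moments recovery under the normal/half-normal assumption.
# For half-normal u, mu3(u) = sigma_u^3 * sqrt(2/pi) * (4/pi - 1); the composed
# error v - u flips its sign, so a negative m3 signals inefficiency.
sigma_u = (-m3 / (np.sqrt(2 / np.pi) * (4 / np.pi - 1))) ** (1 / 3)
sigma_v = np.sqrt(m2 - (1 - 2 / np.pi) * sigma_u ** 2)

# Mean technical efficiency E[exp(-u)] for half-normal u
Phi = 0.5 * (1.0 + erf(float(sigma_u) / 2 ** 0.5))   # standard normal CDF
mean_te = 2.0 * (1.0 - Phi) * float(np.exp(sigma_u ** 2 / 2))
print(round(float(sigma_u), 2), round(mean_te, 2))
```

With the true inefficiency scale set to 0.4, the recovered `sigma_u` and the implied mean efficiency come out close to their theoretical values; the same moments computed on real residuals are what classify an industry as "successful" or not.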

A Study on the Effect of Using Sentiment Lexicon in Opinion Classification (오피니언 분류의 감성사전 활용효과에 대한 연구)

  • Kim, Seungwoo;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.133-148
    • /
    • 2014
  • Recently, with the advent of various information channels, the volume of available data has continued to grow. The main cause of this phenomenon is the significant increase in unstructured data, as smart devices enable users to create data in the form of text, audio, images, and video. Among the various types of unstructured data, users' opinions and a variety of other information are most clearly expressed in text data such as news, reports, papers, and articles. Thus, active attempts have been made to create new value by analyzing these texts. The representative techniques used in text analysis are text mining and opinion mining. These share certain important characteristics: both use text documents as input data, and both rely on natural language processing techniques such as filtering and parsing. Opinion mining is therefore usually regarded as a sub-concept of text mining, and in many cases the two terms are used interchangeably in the literature. Suppose the purpose of a classification analysis is to predict whether a document expresses a positive or a negative opinion. If we focus on the classification process, the analysis can be regarded as a traditional text mining case; if we focus on the fact that the target of the analysis is a positive or negative opinion, it can be regarded as a typical example of opinion mining. In other words, both methods are available for opinion classification, and a precise definition of each is needed to distinguish them. In this paper, we found it very difficult to distinguish the two methods clearly with respect to the purpose of analysis or the type of results. We conclude that the most definitive criterion for distinguishing text mining from opinion mining is whether the analysis utilizes a sentiment lexicon.
We first established two prediction models, one based on opinion mining and the other on text mining, compared the main processes used by the two models, and then compared their prediction accuracy on 2,000 movie reviews. The results revealed that the prediction model based on opinion mining showed higher average prediction accuracy than the text mining model. Moreover, in the lift chart generated by the opinion mining based model, prediction accuracy for documents classified with strong certainty was higher than for documents with weak certainty. Above all, opinion mining has the advantage of dramatically reduced learning time, because a sentiment lexicon, once generated, can be reused in similar application domains; in addition, the classification results can be clearly explained in terms of the lexicon. This study has two limitations. First, the experimental results cannot be generalized, mainly because the experiment is limited to a small number of movie reviews. Second, various parameters in the parsing and filtering steps of the text mining may have affected the accuracy of the prediction models. Nevertheless, this research contributes a performance comparison of text mining and opinion mining for opinion classification. In future research, a more precise evaluation of the two methods should be made through intensive experiments.
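A lexicon-based opinion classifier of the kind compared in this study can be sketched in a few lines. The mini-lexicon and example reviews below are hypothetical; real sentiment lexicons contain thousands of weighted entries:

```python
# Hypothetical mini-lexicon; real resources assign weights to thousands of terms
LEXICON = {"excellent": 2, "moving": 1, "enjoyable": 1,
           "boring": -2, "predictable": -1, "disappointing": -2}

def classify(review: str) -> str:
    """Lexicon-based polarity: sum matched term scores, sign decides the class."""
    tokens = review.lower().split()
    score = sum(LEXICON.get(t.strip(".,!?"), 0) for t in tokens)
    return "positive" if score >= 0 else "negative"

print(classify("An excellent and moving film"))            # → positive
print(classify("Boring, predictable and disappointing."))  # → negative
```

Because the lexicon is built once and reused, no per-domain training pass is needed, which is the learning-time advantage the abstract reports.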

A Spatial Statistical Approach to Migration Studies: Exploring the Spatial Heterogeneity in Place-Specific Distance Parameters (인구이동 연구에 대한 공간통계학적 접근: 장소특수적 거리 패러미터의 추출과 공간적 패턴 분석)

  • Lee, Sang-Il
    • Journal of the Korean association of regional geographers
    • /
    • v.7 no.3
    • /
    • pp.107-120
    • /
    • 2001
  • This study is concerned with providing a reliable procedure for calibrating a set of place-specific distance parameters and with applying it to U.S. inter-state migration flows between 1985 and 1990. It conforms to recent advances in quantitative geography characterized by the integration of ESDA (exploratory spatial data analysis) and local statistics. ESDA aims to detect spatial clustering and heterogeneity by visualizing and exploring spatial patterns. A local statistic is a statistically processed value assigned to each location, as opposed to a global statistic, which captures only an average trend across the whole study region. Whereas a global distance parameter estimates an averaged level of the friction of distance, place-specific distance parameters capture spatially varying effects of distance. It is shown that a Poisson regression with an adequately specified design matrix yields a set of either origin- or destination-specific distance parameters. A case study demonstrates that the proposed model is a reliable device for measuring the spatial dimension of migration, and that place-specific distance parameters are spatially heterogeneous as well as spatially clustered.

  • PDF
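The idea of obtaining origin-specific distance parameters from a single Poisson regression with a suitably specified design matrix can be sketched as follows. The flows are simulated, the fitting loop is a generic Poisson GLM via iteratively reweighted least squares (not the author's implementation), and all dimensions and coefficients are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n_orig, n_dest = 4, 30
true_beta = np.array([-0.5, -1.0, -1.5, -2.0])   # origin-specific distance decay
dist = rng.uniform(1.0, 5.0, (n_orig, n_dest))

# Design matrix: origin dummies plus an origin-specific log-distance column each
rows, y = [], []
for i in range(n_orig):
    for j in range(n_dest):
        x = np.zeros(2 * n_orig)
        x[i] = 1.0                           # origin fixed effect
        x[n_orig + i] = np.log(dist[i, j])   # distance term active for origin i only
        y.append(rng.poisson(np.exp(3.0 + true_beta[i] * x[n_orig + i])))
        rows.append(x)
X, y = np.array(rows), np.array(y, dtype=float)

# Poisson regression (log link) by iteratively reweighted least squares
beta, *_ = np.linalg.lstsq(X, np.log(y + 1.0), rcond=None)   # warm start
for _ in range(50):
    mu = np.exp(X @ beta)
    z = X @ beta + (y - mu) / mu                              # working response
    beta = np.linalg.solve(X.T @ (X * mu[:, None]), X.T @ (mu * z))

place_params = beta[n_orig:]    # one distance parameter per origin
print(np.round(place_params, 2))
```

Because each origin gets its own log-distance column, one regression recovers a separate friction-of-distance estimate per place, which can then be mapped and tested for clustering.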

Solid Waste Disposal Site Selection in Rural Area: Youngyang-Gun, Kyungpook (농촌지역 쓰레기 매립장 입지선정에 관한 연구 -경상북도 영양군을 사례로-)

  • Park, Soon-Ho
    • Journal of the Korean association of regional geographers
    • /
    • v.3 no.1
    • /
    • pp.63-80
    • /
    • 1997
  • This study attempts to establish criteria for selecting a solid waste disposal site, to determine optimal disposal sites using those criteria, and to examine the suitability of the selected sites. The Multi-Criteria Evaluation (MCE) module in Idrisi is used to determine optimal sites for solid waste disposal. MCE combines information from several criteria measured on interval and/or ratio scales into a single evaluation index without degrading the data to an ordinal scale. The study can be summarized as follows. First, the criteria were selected by reviewing the literature and the availability of data: slope, fault lines, bedrock characteristics, major residential areas, water-supply reservoirs, rivers, inundated areas, roads, and tourist resorts. Second, criteria maps for the nine factors were developed. Each factor map is standardized and multiplied by its weight, and the results are summed. After all factors have been incorporated, the resulting suitability map is multiplied by each of the constraints in turn to "zero out" unsuitable areas. Unsuitable areas are found in the urban district and its surroundings and in mountainous regions, as well as along rivers, roads, resort areas, and their adjacent districts. Third, twenty-five districts in Youngyang-gun emerge as potential sites for waste disposal facilities: five in Subi-myun Sinam-ri, nine in Chunggi-myun Haehwa-ri and Moojin-ri, and eleven in Sukbo-myun Posan-ri. The highest suitability score is found at district eleven in Chunggi-myun Moojin-ri, followed by district twenty-one in Sukbo-myun Posan-ri, district nine in Chunggi-myun Haehwa-ri, districts seventeen and twenty-three in Sukbo-myun Posan-ri, and district two in Subi-myun Sinam-ri.
The lowest score is found at district six in Chunggi-myun Haehwa-ri, and the second lowest at district five in Subi-myun Sinam-ri. Finally, a Geographic Information System (GIS) helps to select optimal sites more objectively and to minimize conflict in the determination of waste disposal sites. It is important to present several potential sites based on objective criteria and to characterize each of them, so that the final sites can be determined with residents' views taken into account. This study is limited in its criteria by the restricted availability of data on, for example, groundwater, soil texture and mineralogy, and residents' opinions. To improve the selection of optimal sites for a waste disposal facility, a wider range of spatial and non-spatial databases should be constructed.

  • PDF
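The weighted-overlay procedure described above (standardize each factor, multiply by its weight, sum, then multiply by boolean constraints) can be sketched on a toy raster. The layer names and weights below are hypothetical, and this numpy sketch stands in for the Idrisi MCE module:

```python
import numpy as np

rng = np.random.default_rng(2)
shape = (5, 5)   # toy raster; real suitability maps cover the whole county

# Factor layers (higher = more suitable), e.g. slope score, distance to rivers
factors = {"slope": rng.uniform(0, 100, shape),
           "river_dist": rng.uniform(0, 100, shape),
           "road_dist": rng.uniform(0, 100, shape)}
weights = {"slope": 0.5, "river_dist": 0.3, "road_dist": 0.2}   # sum to 1

# Step 1: standardize each factor to 0-1, multiply by its weight, and sum
suitability = sum(w * (factors[k] - factors[k].min())
                  / (factors[k].max() - factors[k].min())
                  for k, w in weights.items())

# Step 2: boolean constraint layers "zero out" excluded cells
constraints = rng.random(shape) > 0.3   # 1 = allowed, 0 = e.g. inundated area
suitability *= constraints
print(suitability)
```

Cells excluded by any constraint score exactly zero, while the remaining cells keep a continuous 0-1 suitability index that can be ranked to shortlist candidate districts.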

A Study on the Artistic Techniques of Early Chinese Cartoons -Focusing on the Lian Huan Hua(連環畵) Works Shan Xiang Ju Bian(山鄕巨變) and Bai Mao Nü(白毛女)- (중국 초기 만화 예술기법 연구 - 연환화 작품 <산향거변>과 <백모녀>를 중심으로-)

  • Lurenjing, Lurenjing
    • Cartoon and Animation Studies
    • /
    • s.39
    • /
    • pp.451-472
    • /
    • 2015
  • Lian Huan Hua (連環畵), which arose in the history of Chinese cartoons, initially developed as a unique literary form combining Chinese painting and narrative. In the 1920s it took shape as a highly creative fused cartoon style, also referred to as the 'chain cartoon.' In the 1950s and 1960s, Chinese Lian Huan Hua matured into a 'golden age,' developing independently and exhibiting unique formal features. The works of this period show dramatic narrative expression, more realistic representation techniques, and major breakthroughs in art forms such as portraiture, achieving a markedly higher level of artistic maturity. Shan Xiang Ju Bian (山鄕巨變) by He You Zhi (賀友直) and Bai Mao Nü (白毛女) by Hua San Chuan (華三川) are masterpieces representing the artistry and maturity of the period. Chapter 2 examines the origin and development of Chinese Lian Huan Hua, showing that it achieved new progress each time content and form were newly combined, and that the works of 1950-1960 attained the greatest achievements in artistry and maturity. Chapter 3 then analyzes, from multiple angles, how these two 'golden age' masterpieces achieved the concrete artistry of Chinese Lian Huan Hua through dramatic narrative expression and realistic portraiture. The analysis covers the depiction of figures and nature, landscape composition, the use of highlights and backgrounds, and production methods. The purpose of this research is to identify the conditions of great Lian Huan Hua through concrete analysis of the two works, including painting techniques such as framing and directing, and to investigate the influence of Lian Huan Hua's artistic achievements on early Chinese comics.
These two works focused on a realistic view of life and attempted new production techniques to represent information more effectively, which gives them their significance. They are also remarkable for absorbing the aesthetic strengths of both Eastern and Western painting while completing new styles marked by the artists' own compelling personalities. Although the two works differ in type and painting technique, they share a common depiction of the atmosphere of Chinese life.

A Study on the Impact Factors of Contents Diffusion in Youtube using Integrated Content Network Analysis (일반영향요인과 댓글기반 콘텐츠 네트워크 분석을 통합한 유튜브(Youtube)상의 콘텐츠 확산 영향요인 연구)

  • Park, Byung Eun;Lim, Gyoo Gun
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.3
    • /
    • pp.19-36
    • /
    • 2015
  • Social media is an emerging issue in content services and in the current business environment, and YouTube is the most representative social media service in the world. YouTube differs from conventional content services in its open user participation and content creation methods. To promote content on YouTube, it is important to understand the diffusion phenomena of contents and the structural characteristics of the content network. Most previous studies analyzed impact factors of content diffusion in terms of general behavioral factors, while some recent researchers use network structure factors; however, these two approaches have been used separately. This study analyzes the general impact factors on the view count together with content-based network structures. In addition, when building the content network, this study forms the network structure by analyzing user comments on 22,370 YouTube contents, rather than building a network based on individual users. From this study, we statistically re-verified the causal relations between the view count and both general and network factors. By analyzing this integrated research model, we found that the factors affect the view count of YouTube in the following order: uploader followers, video age, betweenness centrality, comments, closeness centrality, clustering coefficient, and rating; degree centrality and eigenvector centrality, however, affect the view count negatively. From this research, some strategic points for utilizing content diffusion are as follows. First, general factors such as the number of uploader followers or subscribers, the video age, the number of comments, and average rating points need to be managed. The impact of average rating points is less important than previously thought; rather, it is necessary to increase the number of uploader followers strategically and to sustain the contents in the service as long as possible.
Second, attention should be paid to the impacts of betweenness centrality and closeness centrality among the network factors. Users seem to search for related subjects or similar contents after watching a content, so it pays to shorten the distance to other popular contents in the service. In other words, this study showed that view counts benefit from decreasing the number of search attempts needed to reach a content and from increasing its similarity with many other contents, which is consistent with the result of the clustering coefficient analysis. Third, it is important to note the negative impact of degree centrality and eigenvector centrality on the view count. If the number of connections with other contents grows too large, it means there are many similar contents, which eventually disperses the view counts. Moreover, very high eigenvector centrality means the content is connected to popular contents around it, and it may lose view counts to those popular contents; it may be better to avoid connections with overly powerful popular contents. In this study we analyzed the phenomenon and verified the diffusion factors of YouTube contents using an integrated model consisting of general factors and network structure factors. In terms of social contribution, this study may provide useful information to the music and movie industries and other content vendors for effective content services, and it offers basic schemes that can be applied strategically in online content marketing. One limitation of this study is that it formed a content-based network for the network structure analysis, which is an indirect way to view the content network structure; more direct methods could be used to establish the content network. Further research includes more detailed analyses, for example by type or domain of the contents, or by characteristics of the contents or users.
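The comment-based content network described in this study can be sketched as follows: videos become nodes, and an edge links two videos that share at least one commenter, after which centrality measures can be computed. All video and user identifiers below are hypothetical, and only degree and closeness centrality are shown:

```python
from collections import defaultdict, deque

# Hypothetical data: video id -> set of users who commented on it
comments = {"v1": {"a", "b", "c"}, "v2": {"b", "c"}, "v3": {"c", "d"},
            "v4": {"d", "e"}, "v5": {"e"}}

# Content network: edge when two videos share at least one commenter
videos = list(comments)
adj = defaultdict(set)
for i, u in enumerate(videos):
    for v in videos[i + 1:]:
        if comments[u] & comments[v]:
            adj[u].add(v)
            adj[v].add(u)

def closeness(node):
    """Closeness centrality: (n-1) / sum of BFS shortest-path distances."""
    dist = {node: 0}
    q = deque([node])
    while q:
        cur = q.popleft()
        for nxt in adj[cur]:
            if nxt not in dist:
                dist[nxt] = dist[cur] + 1
                q.append(nxt)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0

degree = {v: len(adj[v]) for v in videos}
print(degree)
print({v: round(closeness(v), 2) for v in videos})
```

In a study like this one, such centrality values per video (together with general factors like uploader followers and video age) would then enter a regression on the view count.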

The Individual Discrimination Location Tracking Technology for Multimodal Interaction at the Exhibition (전시 공간에서 다중 인터랙션을 위한 개인식별 위치 측위 기술 연구)

  • Jung, Hyun-Chul;Kim, Nam-Jin;Choi, Lee-Kwon
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.19-28
    • /
    • 2012
  • After the internet era, we are moving toward a ubiquitous society. People are now interested in multimodal interaction technology, which enables an audience to interact naturally with the computing environment at exhibition spaces such as galleries, museums, and parks. There are also attempts to provide additional services based on the location information of the audience, or to improve and deploy interaction between exhibits and audience by analyzing people's usage patterns. In order to provide multimodal interaction services to an exhibition audience, it is important to distinguish individuals and trace their locations and routes. For outdoor location tracking, GPS is widely used: it can obtain the real-time location of fast-moving subjects, making it one of the important technologies in fields requiring location tracking services. However, because GPS tracks location using satellites, it cannot be used indoors, where the satellite signal cannot be received. For this reason, studies on indoor location tracking are under way using very short range communication technologies such as ZigBee, UWB, and RFID, as well as mobile communication networks and wireless LAN. However, these technologies have shortcomings: the audience needs to carry an additional sensor device, and the system becomes difficult and expensive as the density of the target area increases. In addition, the usual exhibition environment contains many obstacles for the network, which degrades system performance. Above all, interaction methods based on these older technologies cannot provide natural service to users, and because such systems rely on sensor recognition, every user must be equipped with a device. There is therefore a limit to the number of users who can use the system simultaneously.
To make up for these shortcomings, this study proposes a technology that obtains exact user location information through location mapping using smartphone Wi-Fi and 3D cameras. We use the signal strength of wireless LAN access points to develop an indoor location tracking system at low cost: APs are cheaper than the devices used in other tracking techniques, and by installing software on the user's mobile device, the device itself serves as part of the tracking system. For the 3D camera we used the Microsoft Kinect sensor, which can discriminate depth and human information within the shooting area and is therefore appropriate for extracting users' body, vector, and acceleration information at low cost. We confirm the location of an audience member using the cell ID obtained from the Wi-Fi signal. By using smartphones as the basic device for the location service, we remove the need for an additional tagging device and provide an environment in which multiple users can receive the interaction service simultaneously. The 3D cameras located in each cell area obtain the exact location and status information of the users: they are connected to the Camera Client, which calculates the mapping information aligned to each cell and derives the users' exact locations as well as the status and pattern information of the audience. The location mapping technique of the Camera Client decreases the error rate of indoor location services, increases the accuracy of individual discrimination through body information, and establishes a foundation for multimodal interaction technology at exhibitions. The calculated data and information enable users to receive appropriate interaction services through the main server.
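The cell-ID step (assigning a user to a cell from Wi-Fi signal strength) can be sketched with a strongest-AP rule. The AP-to-cell table and RSSI values below are hypothetical; the paper's system additionally refines positions within each cell using the Kinect cameras:

```python
# Hypothetical AP-per-cell reference table; a real deployment is calibrated on site
CELL_AP = {"cell_A": "ap_01", "cell_B": "ap_02", "cell_C": "ap_03"}

def locate(scan: dict) -> str:
    """Assign the user to the cell whose AP shows the strongest RSSI (dBm)."""
    strongest = max(scan, key=scan.get)   # RSSI is negative; closer to 0 = stronger
    for cell, ap in CELL_AP.items():
        if ap == strongest:
            return cell
    return "unknown"

print(locate({"ap_01": -71, "ap_02": -54, "ap_03": -80}))  # → cell_B
```

Coarse cell assignment like this keeps the Wi-Fi side cheap; the per-cell 3D camera then handles fine-grained position and individual discrimination.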

A Case Study on Forecasting Inbound Calls of Motor Insurance Company Using Interactive Data Mining Technique (대화식 데이터 마이닝 기법을 활용한 자동차 보험사의 인입 콜량 예측 사례)

  • Baek, Woong;Kim, Nam-Gyu
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.99-120
    • /
    • 2010
  • Due to the widespread use of frequent non-face-to-face customer services, there have been many attempts to improve customer satisfaction using the huge amounts of data accumulated through non-face-to-face channels. A call center is usually regarded as one of the most representative non-face-to-face channels, so it is important that a call center have enough agents to offer a high level of customer satisfaction. However, employing too many agents increases a call center's operational costs through higher labor costs. Predicting and calculating the appropriate size of a call center's human resources is therefore one of the most critical success factors of call center management. For this reason, most call centers currently establish a WFM (Workforce Management) department to estimate the appropriate number of agents, and direct much effort toward predicting the volume of inbound calls. In real-world applications, inbound call prediction is usually performed based on the intuition and experience of a domain expert: the expert typically predicts the call volume by calculating the average calls of some periods and adjusting the average according to his or her subjective estimation. However, this kind of approach has fundamental limitations in that the prediction may be strongly affected by the expert's personal experience and competence. It is often the case that one domain expert predicts inbound calls quite differently from another if the two hold different opinions on the influential variables and their priorities. Moreover, it is almost impossible to logically clarify the process of an expert's subjective prediction. Currently, to overcome these limitations, most call centers are adopting a WFMS (Workforce Management System) package in which experts' best practices are systemized.
With a WFMS, a user can predict the call volume by calculating the average calls for each day of the week, excluding eventful days. However, a WFMS requires substantial capital during the early stage of system establishment, and it is hard to reflect new information in the system when factors affecting the call volume change. In this paper, we devise a new model for predicting inbound calls that is not only theoretically grounded but also easily applicable to real-world settings. Our model is mainly based on the interactive decision tree technique, one of the most popular techniques in data mining. We therefore expect that our model can predict inbound calls automatically from historical data while utilizing experts' domain knowledge during tree construction. To analyze the accuracy of our model, we performed intensive experiments on a real case from one of the largest car insurance companies in Korea. In the case study, the prediction accuracy of the two devised models and a traditional WFMS is analyzed with respect to various allowable error rates. The experiments reveal that our two data mining-based models outperform the WFMS in predicting the volume of accident calls and fault calls in most of the experimental situations examined.
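The kind of day-of-week split a regression tree would discover in call history can be sketched with a depth-1 tree (a decision stump) chosen by variance reduction. The call-volume data below are simulated and all effect sizes hypothetical; the authors' interactive approach grows deeper trees with expert input at each split:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic history: day of week (0=Mon .. 6=Sun), event-day flag, daily call volume
dow = rng.integers(0, 7, 500)
event = rng.integers(0, 2, 500)
calls = 300 + 80 * (dow == 0) - 150 * (dow >= 5) + 60 * event + rng.normal(0, 20, 500)

def best_stump(feature, y):
    """Depth-1 regression tree: pick the split threshold minimizing total SSE."""
    best = (None, np.inf)
    for t in np.unique(feature)[:-1]:
        left, right = y[feature <= t], y[feature > t]
        sse = left.var() * len(left) + right.var() * len(right)
        if sse < best[1]:
            best = (t, sse)
    return best[0]

t = best_stump(dow, calls)
weekday_mean = calls[dow <= t].mean()
weekend_mean = calls[dow > t].mean()
print(t, round(weekday_mean), round(weekend_mean))
```

With these simulated effects the stump recovers the weekday/weekend boundary, and the leaf means play the role of the per-group averages a WFMS would compute; an interactive tree lets the expert accept, veto, or refine each such split.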

Design of MAHA Supercomputing System for Human Genome Analysis (대용량 유전체 분석을 위한 고성능 컴퓨팅 시스템 MAHA)

  • Kim, Young Woo;Kim, Hong-Yeon;Bae, Seungjo;Kim, Hag-Young;Woo, Young-Choon;Park, Soo-Jun;Choi, Wan
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.2
    • /
    • pp.81-90
    • /
    • 2013
  • During the past decade, many changes have taken place in the computing area, and attempts to develop new technologies continue. The "brick wall" in computing, especially the power wall, has shifted the computing paradigm from hardware, including processors and system architecture, toward programming environments and application usage. The high performance computing (HPC) area in particular has experienced dramatic changes and is now considered a key to national competitiveness. In the late 2000s, many leading countries rushed to develop exascale supercomputing systems, and as a result systems of tens of PetaFLOPS are now prevalent. Korea's ICT sector is well developed and the country is considered one of the leading countries in the world, but not in the supercomputing area. In this paper, we describe the architectural design of the MAHA supercomputing system, which aims at a 300 TeraFLOPS system for bioinformatics applications such as human genome analysis and protein-protein docking. The MAHA supercomputing system consists of four major parts: computing hardware, file system, system software, and bio-applications. It is designed to utilize heterogeneous computing accelerators (co-processors such as GPGPUs and MICs) to obtain better performance/$, performance/area, and performance/power. To provide high-speed data movement and large capacity, the MAHA file system is designed as an asymmetric cluster architecture consisting of a metadata server, data servers, and client file systems on top of SSD and MAID storage servers. The MAHA system software is designed to be user-friendly and easy to use, based on integrated system management components for bio-workflow management, integrated cluster management, and heterogeneous resource management. The MAHA supercomputing system was first installed in December 2011, with a theoretical performance of 50 TeraFLOPS and a measured performance of 30.3 TeraFLOPS on 32 computing nodes. The system will be upgraded to 100 TeraFLOPS in January 2013.