
A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.1-19 / 2019
  • Artificial intelligence (AI) is one of the main driving forces of the Fourth Industrial Revolution. AI technologies have already demonstrated abilities equal to or better than those of humans in many fields, including image and speech recognition. In particular, considerable effort has been devoted to identifying current technology trends and analyzing directions of development, because AI technologies can be utilized in a wide range of fields, including medicine, finance, manufacturing, services, and education. Major platforms for developing complex AI algorithms for learning, reasoning, and recognition have been opened to the public as open source projects, and technologies and services that utilize them have increased rapidly; this has been confirmed as one of the major reasons for the fast development of AI technologies. The spread of the technology is also greatly indebted to open source software developed by major global companies that supports natural language processing, speech recognition, and image recognition. Therefore, this study aimed to identify the practical trend of AI technology development by analyzing AI-related OSS projects developed through the online collaboration of many parties. We searched and collected a list of major AI-related projects generated on GitHub from 2000 to July 2018, and examined the development trends of major technologies in detail by applying text mining techniques to the topic information that characterizes the collected projects and their technical fields. The results showed that the number of software development projects was less than 100 per year until 2013, but increased to 229 projects in 2014 and 597 projects in 2015. The number of AI-related open source projects then increased rapidly in 2016 (2,559 OSS projects). 
It was confirmed that 14,213 projects were initiated in 2017, almost four times the total number of projects generated from 2009 to 2016 (3,555 projects), and 8,737 projects were initiated from January to July 2018. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of AI-related OSS projects. The results showed that natural language processing remained at the top in all years, implying that such OSS has been developed continuously. Until 2015, the programming languages Python, C++, and Java were listed among the top ten most frequent topics. After 2016, however, programming languages other than Python disappeared from the top ten; instead, platforms supporting the development of AI algorithms, such as TensorFlow and Keras, showed high appearance frequency. Reinforcement learning algorithms and convolutional neural networks, which have been used in various fields, also appeared frequently as topics. Topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency. The main difference was that the visualization and medical imaging topics were found at the top of the degree centrality list from 2009 to 2012, although they were not at the top of the appearance frequency list; this indicates that OSS was being developed in the medical field in order to utilize AI technology. Moreover, although computer vision was in the top 10 of the appearance frequency list from 2013 to 2015, it was not in the top 10 by degree centrality. Otherwise, the topics at the top of the degree centrality list were similar to those at the top of the appearance frequency list, with the convolutional neural network and reinforcement learning topics changing slightly in rank. 
The trend of technology development was examined using the appearance frequency of topics and degree centrality. Machine learning showed the highest frequency and the highest degree centrality in all years. Notably, although the deep learning topic showed low frequency and low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015, and in recent years both technologies have had high appearance frequency and degree centrality. TensorFlow first appeared in the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018, placing it at the top of both lists after deep learning and Python. Computer vision and reinforcement learning did not show abrupt increases or decreases, and had relatively low appearance frequency and degree centrality compared with the above-mentioned topics. Based on these results, it is possible to identify the fields in which AI technologies are actively developed, and the results of this study can be used as a baseline dataset for more empirical analysis of future technology trends and their convergence.
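The two measures used throughout this analysis, topic appearance frequency and degree centrality in a topic co-occurrence network, can be sketched in a few lines. The project topic lists below are hypothetical stand-ins for GitHub topic data, and the network construction is a minimal illustration rather than the paper's exact procedure:

```python
from collections import Counter
from itertools import combinations

# Hypothetical topic lists for a handful of GitHub projects
projects = [
    ["machine-learning", "deep-learning", "tensorflow"],
    ["machine-learning", "natural-language-processing"],
    ["deep-learning", "tensorflow", "keras"],
    ["machine-learning", "computer-vision"],
]

# Appearance frequency: how often each topic occurs across projects
frequency = Counter(t for topics in projects for t in topics)

# Undirected co-occurrence network: topics listed on the same project
# are linked; degree centrality here is the number of distinct neighbors.
neighbors = {}
for topics in projects:
    for a, b in combinations(set(topics), 2):
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
degree_centrality = {t: len(n) for t, n in neighbors.items()}

print(frequency.most_common(3))
print(sorted(degree_centrality.items(), key=lambda x: -x[1])[:3])
```

In the real study the network is built from thousands of projects; a graph library such as NetworkX provides this and the richer centrality measures out of the box.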

A Methodology of Customer Churn Prediction based on Two-Dimensional Loyalty Segmentation (이차원 고객충성도 세그먼트 기반의 고객이탈예측 방법론)

  • Kim, Hyung Su;Hong, Seung Woo
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.111-126 / 2020
  • Most industries have recently become aware of the importance of customer lifetime value as they are exposed to a competitive environment. As a result, preventing customer churn is becoming a more important business issue than securing new customers, because retaining existing customers is far more economical: the acquisition cost of a new customer is known to be five to six times the cost of retaining an existing one. Companies that effectively prevent customer churn and improve retention rates not only increase profitability but also improve their brand image through higher customer satisfaction. Customer churn prediction, long conducted as a sub-area of CRM research, has recently become more important as a big-data-based performance marketing theme due to the development of business machine learning technology. Until now, research on churn prediction has been carried out actively in sectors such as mobile telecommunications, finance, distribution, and gaming, which are highly competitive and where churn management is urgent. However, these studies focused on improving the performance of the churn prediction model itself, for example by comparing the performance of various models, exploring features effective in forecasting churn, or developing new ensemble techniques, and they were limited in practical utility because most treated the entire customer base as a single group when developing a predictive model. In short, the main purpose of existing research was to improve the predictive model itself, and relatively little research has sought to improve the overall churn prediction process. 
In fact, customers in a business show different behavioral characteristics due to heterogeneous transaction patterns, and their churn rates differ accordingly, so it is unreasonable to treat all customers as a single group. To carry out effective churn prediction in heterogeneous industries, it is therefore desirable to segment customers according to classification criteria such as loyalty and to operate an appropriate churn prediction model for each segment. Some studies have indeed subdivided customers using clustering techniques and applied a churn prediction model to each group. Although this can produce better predictions than a single model for the entire customer population, there is still room for improvement: clustering is a mechanical, exploratory grouping technique that computes distances from the inputs and does not reflect a firm's strategic intent, such as loyalty. This study proposes a customer churn prediction process based on two-dimensional loyalty segmentation (CCP/2DL: Customer Churn Prediction based on Two-Dimensional Loyalty segmentation), assuming that successful churn management is better achieved through improvements in the overall process than through the performance of the model itself. CCP/2DL segments customers along two loyalty dimensions, quantitative and qualitative, conducts a secondary grouping of the segments according to churn patterns, and then independently applies a separate churn prediction model to each churn pattern group. Performance was compared with the most commonly applied general churn prediction process and with the clustering-based churn prediction process to assess the relative merit of the proposed process. 
The general churn prediction process used in this study refers to predicting churn for the entire customer base with a single machine learning model, the most common approach. The clustering-based churn prediction process first segments customers with clustering techniques and then implements a churn prediction model for each group. In an experiment conducted in cooperation with a global NGO, the proposed CCP/2DL showed better churn prediction performance than the other methodologies. The proposed process is not only effective in predicting churn but can also serve as a strategic basis for obtaining a variety of customer insights and carrying out other performance marketing activities.
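As a rough illustration of the segmentation idea behind CCP/2DL, the sketch below splits customers on two loyalty dimensions and fits one predictor per segment. The customer tuples, the 0.5 thresholds, and the majority-class "model" (standing in for a real classifier) are all hypothetical, not the paper's implementation:

```python
from collections import Counter

customers = [
    # (quantitative_loyalty, qualitative_loyalty, churned) - toy data
    (0.9, 0.8, 0), (0.8, 0.2, 1), (0.2, 0.9, 0),
    (0.1, 0.1, 1), (0.7, 0.9, 0), (0.3, 0.2, 1),
]

def segment(quant, qual, threshold=0.5):
    """Assign one of four segments from the two loyalty dimensions."""
    return ("high" if quant >= threshold else "low",
            "high" if qual >= threshold else "low")

# Group customers into segments, then fit one model per segment.
by_segment = {}
for quant, qual, churned in customers:
    by_segment.setdefault(segment(quant, qual), []).append(churned)

# Stand-in "model": predict the majority churn label seen in each segment.
# A real pipeline would train a separate classifier here instead.
models = {seg: Counter(labels).most_common(1)[0][0]
          for seg, labels in by_segment.items()}

print(models)
```

The secondary grouping by churn pattern described in the abstract would merge segments with similar churn behavior before the per-group models are trained.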

Aspect-Based Sentiment Analysis Using BERT: Developing Aspect Category Sentiment Classification Models (BERT를 활용한 속성기반 감성분석: 속성카테고리 감성분류 모델 개발)

  • Park, Hyun-jung;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.1-25 / 2020
  • Sentiment Analysis (SA) is a Natural Language Processing (NLP) task that analyzes, from written texts, the sentiments consumers or the public feel about an object. Aspect-Based Sentiment Analysis (ABSA) is a fine-grained analysis of the sentiments towards each aspect of an object. Because it has more practical value for business, ABSA is drawing attention from both academia and industry. Given a review that says "The restaurant is expensive but the food is really fantastic", for example, general SA evaluates the overall sentiment towards the restaurant as 'positive', while ABSA identifies the restaurant's 'price' aspect as 'negative' and its 'food' aspect as 'positive'. ABSA thus enables more specific and effective marketing strategies. To perform ABSA, it is necessary to identify the aspect terms or aspect categories included in the text and judge the sentiments towards them. Accordingly, there are four main areas in ABSA: aspect term extraction, aspect category detection, Aspect Term Sentiment Classification (ATSC), and Aspect Category Sentiment Classification (ACSC). ABSA is usually conducted either by extracting aspect terms and performing ATSC to analyze sentiments for them, or by extracting aspect categories and performing ACSC to analyze sentiments for them. An aspect category is expressed in one or more aspect terms, or is inferred indirectly from other words. In the example sentence, 'price' and 'food' are both aspect categories, and the aspect category 'food' is expressed by the aspect term 'food' in the review. If a review includes 'pasta', 'steak', or 'grilled chicken special', these can all be aspect terms for the aspect category 'food'. An aspect category referred to by one or more specific aspect terms is called an explicit aspect. 
On the other hand, an aspect category like 'price', which has no specific aspect term but can be inferred indirectly from an emotional word such as 'expensive', is called an implicit aspect. So far, the term 'aspect category' has been used to avoid confusion with 'aspect term'; from now on, we treat 'aspect category' and 'aspect' as the same concept and use 'aspect' for convenience. Note that ATSC analyzes sentiment towards given aspect terms and thus deals only with explicit aspects, while ACSC handles implicit aspects as well as explicit ones. This study seeks answers to the following issues, ignored in previous studies applying the BERT pre-trained language model to ACSC, and derives superior ACSC models. First, is it more effective to reflect the output vectors of the aspect category tokens than to use only the final output vector of the [CLS] token as the classification vector? Second, is there any performance difference between the QA (Question Answering) and NLI (Natural Language Inference) types of sentence-pair input configuration? Third, does the order of the sentence containing the aspect category matter in the QA or NLI sentence-pair configuration? To address these questions, we implemented 12 ACSC models and conducted experiments on 4 English benchmark datasets. As a result, ACSC models were derived whose performance exceeds that of existing studies without expanding the training dataset. It was found that reflecting the output vector of the aspect category token is more effective than using only the [CLS] output vector as the classification vector, that QA-type input generally provides better performance than NLI, and that in the QA type the order of the sentence containing the aspect category does not affect performance. 
There may be some differences depending on the characteristics of the dataset, but when using NLI-type sentence-pair input, placing the sentence containing the aspect category second seems to provide better performance. The methodology for designing ACSC models used in this study could be applied similarly to other tasks such as ATSC.
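The QA and NLI sentence-pair configurations compared in the study can be illustrated roughly as follows; the exact auxiliary-sentence templates are assumptions for illustration, not the paper's wording:

```python
# Illustrative construction of QA- and NLI-type sentence pairs for ACSC.
review = "The restaurant is expensive but the food is really fantastic"
aspect = "price"

def qa_pair(review, aspect):
    # QA type: the auxiliary sentence is a question about the aspect
    # (hypothetical template).
    return (review, f"What do you think of the {aspect}?")

def nli_pair(review, aspect):
    # NLI type: the auxiliary sentence is a short pseudo-hypothesis,
    # here simply the aspect word itself (hypothetical template).
    return (review, aspect)

def to_bert_input(pair):
    # A BERT tokenizer joins a sentence pair as
    # [CLS] sentence_a [SEP] sentence_b [SEP]
    a, b = pair
    return f"[CLS] {a} [SEP] {b} [SEP]"

print(to_bert_input(qa_pair(review, aspect)))
print(to_bert_input(nli_pair(review, aspect)))
```

Swapping the order of the two sentences in either pair yields the order variants whose effect the third research question examines.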

A Study on the Effect of Network Centralities on Recommendation Performance (네트워크 중심성 척도가 추천 성능에 미치는 영향에 대한 연구)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.23-46 / 2021
  • Collaborative filtering, often used in personalized recommendation, is recognized as a very useful technique for finding similar customers and recommending products to them based on their purchase histories. However, the traditional collaborative filtering technique has difficulty calculating similarity for new customers or products, because similarities are calculated from direct connections and common features among customers. For this reason, hybrid techniques that additionally use content-based filtering have been designed. Meanwhile, efforts have also been made to solve these problems by applying the structural characteristics of social networks, calculating similarities indirectly through the similar customers placed between two customers. That is, a customer network is created from purchase data, and the similarity between two customers is calculated from the features of the network that indirectly connects them. Such similarity can be used as a measure to predict whether a target customer will accept a recommendation, and the centrality metrics of networks can be utilized for this calculation. Different centrality metrics are significant in that they may have different effects on recommendation performance; furthermore, the effect of these metrics may vary across recommender algorithms. In addition, recommendation techniques using network analysis can be expected to increase recommendation performance not only for new customers or products but for the entire customer or product base. By treating a customer's purchase of an item as a link between the customer and the item on the network, predicting whether the customer will accept a recommendation becomes a prediction of whether a new link will be created between them. 
Because classification models fit the binary problem of whether a link is formed or not, decision tree, k-nearest neighbors (KNN), logistic regression, artificial neural network, and support vector machine (SVM) models were selected for the research. Performance was evaluated on order data collected from an online shopping mall over four years and two months: the first three years and eight months of data were organized into the social network, and the next four months' records were used to train and evaluate the recommender models. Experiments applying the centrality metrics to each model show that the recommendation acceptance rates of the metrics differ across algorithms at a meaningful level. This work analyzed four commonly used centrality metrics: degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. Eigenvector centrality recorded the lowest performance in all models except the support vector machine. Closeness centrality and betweenness centrality showed similar performance across all models. Degree centrality ranked in the middle across models, while betweenness centrality always ranked higher than degree centrality. Finally, closeness centrality showed distinct performance differences according to the model: it ranked first with numerically high performance in the logistic regression, artificial neural network, and decision tree models, but recorded very low rankings and low performance in the support vector machine and k-nearest neighbors models. As the experimental results reveal, network centrality metrics over the subnetwork connecting two nodes can, in a classification model, effectively predict the connectivity between the two nodes in a social network, and each metric performs differently depending on the classification model type. 
This result implies that choosing appropriate metrics for each algorithm can lead to higher recommendation performance. In general, betweenness centrality can guarantee a high level of performance in any model, and introducing closeness centrality could be considered to obtain higher performance in certain models.
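The link prediction framing described above can be sketched as follows. The toy purchase records are hypothetical, only degree centrality is computed (the study also uses betweenness, closeness, and eigenvector centrality), and the resulting features would be fed to one of the five classifiers listed:

```python
# Recommendation as link prediction on a customer-item purchase network.
purchases = [("c1", "i1"), ("c1", "i2"), ("c2", "i1"),
             ("c3", "i2"), ("c3", "i3")]

# Adjacency of the bipartite customer-item network
adj = {}
for c, i in purchases:
    adj.setdefault(c, set()).add(i)
    adj.setdefault(i, set()).add(c)

n = len(adj)  # number of nodes in the network

def degree_centrality(node):
    """Degree divided by the maximum possible degree (n - 1)."""
    return len(adj[node]) / (n - 1)

def link_features(customer, item):
    """Centrality-based feature vector for a candidate link."""
    return [degree_centrality(customer), degree_centrality(item)]

# These features would be passed to a binary classifier (decision tree,
# logistic regression, SVM, ...) predicting whether the link forms.
print(link_features("c2", "i3"))
```

A library such as NetworkX supplies the remaining three centrality measures, so each can be swapped in as a feature to reproduce the per-metric comparison.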

A Study on the Identifying OECMs in Korea for Achieving the Kunming-Montreal Global Biodiversity Framework - Focusing on the Concept and Experts' Perception - (쿤밍-몬트리올 글로벌 생물다양성 보전목표 성취를 위한 우리나라 OECM 발굴방향 연구 - 개념 고찰 및 전문가 인식을 중심으로 -)

  • Hag-Young Heo;Sun-Joo Park
    • Korean Journal of Environment and Ecology / v.37 no.4 / pp.302-314 / 2023
  • This study explores how Korea can respond effectively to Target 3 (30by30), the core of the Kunming-Montreal Global Biodiversity Framework (K-M GBF) of the Convention on Biological Diversity (CBD), by seeking a direction for systematic, national-level identification of OECMs (Other Effective area-based Conservation Measures) through a review of the global concept and a survey of expert perceptions. Specifically, it examined ① the use of Korean terms related to OECM, ② the derivation of determination criteria reflecting global standards, ③ the derivation of types of potential OECM candidates in Korea, and ④ considerations for OECM identification and reporting, in order to identify OECMs at the national level in a way that complies with global standards and reflects the Korean context. First, there was consensus on using Korean terminology that reflects the concept of OECM rather than a simple translation; "nature coexistence area" was the most preferred term (12 respondents) and is consistent with the CBD 2050 Vision of 'living in harmony with nature'. The study suggests applying four criteria reflecting OECM's core characteristics (1. not a protected area, 2. geographic boundaries, 3. governance/management, 4. biodiversity value) in the first-stage selection process, carrying out a consensus-building process with the relevant agencies in stage 2, and adding two criteria (3-1. effectiveness and sustainability of governance and management, 4-1. long-term conservation) for an in-depth diagnosis in stage 3 (full assessment for reporting). The 28 types examined were generally compatible with the OECM concept (4.45-6.21/7 points, mean 5.24). In particular, 'Conservation Properties' (6.21 points) and 'Conservation Agreements' (6.07 points), which are administered by the National Nature Trust, were found to be most in line with the OECM concept. 
They were followed by 'Buffer zone of World Natural Heritage' (5.77 points), 'Temple Forest' (5.73 points), 'Green-belt (restricted development zones)' (5.63 points), 'DMZ' (5.60 points), and 'Buffer zone of biosphere reserve' (5.50 points) as having high potential. In the case of 'Uninhabited Islands under Absolute Conservation', agreement that they conform to protected areas (5.83/7 points) was higher than their OECM compatibility (5.52/7 points), so it would be preferable in the future to promote listing these islands in the Korea Database on Protected Areas (KDPA) together with their surrounding waters (1 km). Based on the global OECM standard review and the expert perception survey, 10 items were suggested as considerations when identifying OECMs in the Korean context. Continuous research is needed to identify potential OECMs through site-level assessment of these considerations and to establish an effective in-situ conservation system at the national level by linking existing protected area systems with identified OECMs.

Effects of climate change on biodiversity and measures for them (생물다양성에 대한 기후변화의 영향과 그 대책)

  • An, Ji Hong;Lim, Chi Hong;Jung, Song Hie;Kim, A Reum;Lee, Chang Seok
    • Journal of Wetlands Research / v.18 no.4 / pp.474-480 / 2016
  • In this study, the formation background of biodiversity and its changes over geologic history, together with the effects of climate change on biodiversity and humans, were discussed, and alternatives for reducing the effects of climate change were suggested. Biodiversity is 'the variety of life' and refers collectively to variation at all levels of biological organization. That is, biodiversity encompasses genes, species, and ecosystems and their interactions. It provides the basis for ecosystems and the services on which all people fundamentally depend. Nevertheless, biodiversity today is increasingly threatened, usually as the result of human activity. The diverse organisms on Earth, estimated at 10 to 30 million species, are the result of adaptation and evolution to various environments through the four-billion-year history since the birth of life. The countless organisms composing biodiversity each have specific characteristics and are interrelated through diverse relationships. The environment of the Earth on which we live has likewise been created over long ages through the extensive relationships and interactions of those organisms. We humans also live as organisms in interrelationship with other organisms and cannot live without them. Even so, in recent years human activity has accelerated the mean extinction rate to about 1,000 times that of the past. We must conserve biodiversity for the plentiful life of future generations and are responsible for its sustainable use. Korea has achieved faster economic growth than almost any other country in the world. Korea was originally rich in biodiversity, as it is a peninsula stretching from north to south and surrounded by sea on three sides, but much of that biodiversity has disappeared in the process of rapid economic growth. 
Koreans have created a distinctive culture through coexistence with nature over a long history of agriculture, forestry, and fishery. In recent years, however, the relationship between Koreans and nature has grown distant through the introduction of Western culture and the development of science and technology, and the distinctive natural features born from the harmonious combination of nature and culture are disappearing. Korea's population is expected to shrink, in contrast with the continuously growing world population. At this point, we need to restore the biodiversity damaged during rapid population growth and economic development, in concert with the recovery of natural ecosystems as the population decreases. There have been five grand extinction events since the birth of life on Earth. The modern extinction is very rapid, and human activity is its major cause; in these respects it is distinguished from past extinctions. Climate change is real, and biodiversity is very vulnerable to it. If organisms cannot find a way to survive in the changed environment, such as adaptation through evolution or movement to another place where they can exist, they will go extinct. In this respect, if climate change continues, biodiversity will be damaged greatly. Furthermore, climate change will also influence human life and the socio-economic environment through changes in biodiversity. Therefore, we need to grasp more actively the effects of climate change on biodiversity and prepare alternatives to reduce the damage. Changes in phenology, changes in distribution range including vegetation shifts, disharmony in interactions among organisms, reduction of reproduction and growth rates due to disrupted food chains, and degradation of coral reefs have emerged as effects of climate change on biodiversity. 
Expansion of infectious diseases, reduction of food production, changes in the cultivation ranges of crops, and changes in fishing grounds and seasons appear as effects on humans. To solve the climate change problem, we first need to mitigate climate change by reducing the discharge of greenhouse gases. But even if we stopped discharging greenhouse gases now, climate change would be expected to continue for some time, so preparing an adaptive strategy for climate change is more realistic. Continuous monitoring of the effects of climate change on biodiversity, and the establishment of a monitoring system, must precede all else. Securing diverse ecological spaces where biodiversity can establish itself, assisted migration, and the establishment of horizontal (south-to-north) and vertical (lowland-to-upland) ecological networks are recommended as alternatives to aid the adaptation of biodiversity to the changing climate.

Environmental Interpretation on soil mass movement spot and disaster dangerous site for precautionary measures -in Peong Chang Area- (산사태발생지(山沙汰發生地)와 피해위험지(被害危險地)의 환경학적(環境學的) 해석(解析)과 예방대책(豫防對策) -평창지구(平昌地區)를 중심(中心)으로-)

  • Ma, Sang Kyu
    • Journal of Korean Society of Forest Science / v.45 no.1 / pp.11-25 / 1979
  • Extensive soil mass movement occurred on many mountainsides in the Peongchang area of Kangwon province under the influence of heavy rainfall on August 4-5, 1979. This study was carried out based on facts observed through field survey and on the findings of former researchers. The results are as follows: 1. Heavy-rainfall areas, with more than 200 mm per day and maximum intensities of more than 60 mm per hour during the past 6 years, are distributed on the western side of the line connecting Hoeng Seong, Weonju, Yeongdong, Muju, Namweon, and Suncheon, and along the southern coast of Keongsang Nam-do. The heavy rain in these areas appears to be influenced by the mountain ranges and the moving direction of depressions. 2. Peaks of heavy rainfall always occur during the night and appear to directly cause mass movement and serious damage. 3. Soil mass movement in Peongchang broke out from the coarse sandy loam soils of the granite group and the clay soils of limestone and shale. Earth moved along the surface of the bedrock in both cases, and also along the hardpan in the limestone area. 4. Infiltration appears to be rapid in both bedrock soils: in the former because of soil texture, and in the latter because of crumb structure, high humus content, and a dense root system in the surface soil. 5. The topographic pattern of mass movement spots is mostly a concave slope at a valley head or at the upper part of a middle slope, where run-off can easily gather from the surrounding slopes. Soil profiles at mass movement spots show wet soil in the limestone area and loose or deep soil in the granite area. 6. The dominant slope of soil mass movement sites is steep, mostly more than 25 degrees, and the slope position where mass movement starts is mostly in the range from the middle slope line to the ridge line. 7. 
Areas suffering soil mass movement are mostly fire-field (shifting cultivation) agriculture areas, abandoned grassland, young plantations made on fire fields, poor forest on erosion control sites, and non-forest land composed mainly of grasses and shrubs. Earth slides are very rare in stands of big trees, occurring mostly on thin-soil sites over unweathered bedrock. 8. The danger of soil mass movement and landslides can be estimated from several environmental factors, namely vegetation cover, slope degree, slope shape and position, bedrock, and soil profile characteristics. 9. House destruction mostly happens on the following sites: colluvial cones and fans, talus, the foot of concave slopes, and small terraces or colluvial soils between valleys and at small riversides. Houses endangered by mass movement can be identified from aerial photographs with reference to the site conditions surrounding houses and villages in the mountain area. 10. As a countermeasure for preventing mass movement damage, techniques for risk diagnosis and field survey should be developed, and prevention and control of mass movement should be started with government support as soon as possible. Precautionary measures to protect houses and villages from mass movement damage should be prepared and executed, including the establishment of protection forests around houses and villages. 11. The danger or safety of houses and villages with respect to mass movement and flood damage should be identified and communicated to village people in mountain areas through forest extension work. 12. Clear cutting on steep granite sites, fire-field making on steep slopes, house or village construction on dangerous sites, and fuel collection in eroded or steep forest land should be strictly prohibited. When making management plans, mass movement, soil erosion, and flood problems should be considered, and methods of disaster prevention should be included.

  • PDF

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • The Convolutional Neural Network (ConvNet) is one class of powerful Deep Neural Network that can analyze and learn hierarchies of visual features. The first such neural network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in either industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, however, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network. That breakthrough revived people's interest in neural networks. The success of the Convolutional Neural Network rests on two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation. The second is the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, it is difficult and requires great effort to gather a large-scale dataset to train a ConvNet. Moreover, even with a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome by transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases. The first uses the ConvNet as a fixed feature extractor; the second fine-tunes the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (e.g., trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only.
However, applying features of high dimensional complexity extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from multiple ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Our primary pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and activation features are extracted from its three fully connected layers. Second, the activation features of the three layers are concatenated to obtain a multiple-layer representation, because it carries more information about the image. When the three fully connected layer features are concatenated, the resulting image representation has 9192 (4096+4096+1000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in a third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning is improved. To evaluate the proposed method, experiments are conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet layer representations against single ConvNet layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrate the importance of feature selection for the multiple ConvNet layer representation.
Moreover, our proposed approach achieved 75.6% accuracy compared to the 73.9% accuracy of the FC7 layer on the Caltech-256 dataset, 73.1% compared to the 69.2% of the FC8 layer on the VOC07 dataset, and 52.2% compared to the 48.7% of the FC7 layer on the SUN397 dataset. We also showed that our proposed approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on the Caltech-256, VOC07, and SUN397 datasets, respectively, compared to existing work.

DEVELOPMENT OF STATEWIDE TRUCK TRAFFIC FORECASTING METHOD BY USING LIMITED O-D SURVEY DATA (한정된 O-D조사자료를 이용한 주 전체의 트럭교통예측방법 개발)

  • 박만배
    • Proceedings of the KOR-KST Conference
    • /
    • 1995.02a
    • /
    • pp.101-113
    • /
    • 1995
  • The objective of this research is to test the feasibility of developing a statewide truck traffic forecasting methodology for Wisconsin by using Origin-Destination surveys, traffic counts, classification counts, and other data that are routinely collected by the Wisconsin Department of Transportation (WisDOT). Development of a feasible model will permit estimation of future truck traffic for every major link in the network. This will provide the basis for improved estimation of future pavement deterioration. Pavement damage rises exponentially as axle weight increases, and trucks are responsible for most of the traffic-induced damage to pavement. Consequently, forecasts of truck traffic are critical to pavement management systems. The Pavement Management Decision Supporting System (PMDSS) prepared by WisDOT in May 1990 combines pavement inventory and performance data with a knowledge base consisting of rules for evaluation, problem identification, and rehabilitation recommendation. Without a reasonable truck traffic forecasting methodology, PMDSS cannot project pavement performance trends in order to make assessments and recommendations for future years. However, none of WisDOT's existing forecasting methodologies has been designed specifically for predicting truck movements on a statewide highway network. For this research, the Origin-Destination survey data available from WisDOT, covering two stateline areas, one county, and five cities, are analyzed and zone-to-zone truck trip tables are developed. The resulting Origin-Destination Trip Length Frequency (OD TLF) distributions by trip type are compared with comparable TLFs from the Gravity Model (GM). The gravity model is calibrated to obtain friction factor curves for the three trip types: Internal-Internal (I-I), Internal-External (I-E), and External-External (E-E). Both "macro-scale" calibration and "micro-scale" calibration are performed.
The comparison of the statewide GM TLF with the OD TLF for the macro-scale calibration does not provide suitable results because the available OD survey data do not represent an unbiased sample of statewide truck trips. For the "micro-scale" calibration, "partial" GM trip tables that correspond to the OD survey trip tables are extracted from the full statewide GM trip table. These "partial" GM trip tables are then merged and a partial GM TLF is created. The GM friction factor curves are adjusted until the partial GM TLF matches the OD TLF. The three friction factor curves, one for each trip type, resulting from the micro-scale calibration produce a reasonable GM truck trip model. A key methodological issue for GM calibration involves the use of multiple friction factor curves versus a single friction factor curve for each trip type in order to estimate truck trips with reasonable accuracy. A single friction factor curve for each of the three trip types was found to reproduce the OD TLFs from the calibration database. Given the very limited trip generation data available for this research, additional refinement of the gravity model using multiple friction factor curves for each trip type was not warranted. In traditional urban transportation planning studies, the zonal trip productions and attractions and region-wide OD TLFs are available. For this research, however, the information available for the development of the GM model is limited to Ground Counts (GC) and a limited set of OD TLFs. The GM is calibrated using the limited OD data, but the OD data are not adequate to obtain good estimates of truck trip productions and attractions. Consequently, zonal productions and attractions are estimated using zonal population as a first approximation. Then, Selected Link based (SELINK) analyses are used to adjust the productions and attractions and possibly recalibrate the GM.
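The gravity model underlying this calibration distributes trips so that the flow between zones is proportional to productions, attractions, and a friction factor that decays with separation, balanced so that row and column totals are honored. A minimal doubly-constrained sketch follows; the zone counts, productions, attractions, and friction values are invented for illustration and are not Wisconsin data:

```python
import numpy as np

def gravity_model(productions, attractions, friction, iters=100):
    # Doubly-constrained gravity model: T[i, j] is proportional to
    # productions[i] * attractions[j] * friction[i, j], iteratively
    # balanced so row sums match productions and column sums match
    # attractions (iterative proportional fitting).
    T = np.outer(productions, attractions) * friction
    for _ in range(iters):
        T *= (productions / T.sum(axis=1))[:, None]  # balance rows
        T *= (attractions / T.sum(axis=0))[None, :]  # balance columns
    return T

P = np.array([100.0, 200.0])             # zonal truck trip productions
A = np.array([150.0, 150.0])             # zonal truck trip attractions
F = np.array([[1.0, 0.5],
              [0.5, 1.0]])               # friction factors (decay with distance)
T = gravity_model(P, A, F)
print(T.round(1))
```

Calibration in the study amounts to adjusting the friction factor curve (here the matrix `F`) per trip type until the trip length frequency of `T` matches the observed OD TLF.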
The SELINK adjustment process involves identifying the origins and destinations of all truck trips that are assigned to a specified "selected link" as the result of a standard traffic assignment. A link adjustment factor is computed as the ratio of the actual volume for the link (ground count) to the total assigned volume. This link adjustment factor is then applied to all of the origin and destination zones of the trips using that "selected link". Selected link based analyses are conducted using both 16 selected links and 32 selected links. The SELINK analysis using 32 selected links provides the least %RMSE in the screenline volume analysis. In addition, the stability of the GM truck estimating model is preserved by using 32 selected links with three SELINK adjustments; that is, the GM remains calibrated despite substantial changes in the input productions and attractions. The coverage of zones provided by 32 selected links is satisfactory. Increasing the number of repetitions beyond four is not reasonable because the stability of the GM model in reproducing the OD TLF reaches its limits. The total volume of truck traffic captured by 32 selected links is 107% of total trip productions. More importantly, SELINK adjustment factors for all of the zones can be computed. Evaluation of the travel demand model resulting from the SELINK adjustments is conducted by using screenline volume analysis, functional class and route specific volume analysis, area specific volume analysis, production and attraction analysis, and Vehicle Miles of Travel (VMT) analysis. Screenline volume analysis using four screenlines with 28 check points is used to evaluate the adequacy of the overall model. The total trucks crossing the screenlines are compared to the ground count totals. LV/GC ratios of 0.958 using 32 selected links and 1.001 using 16 selected links are obtained.
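The link adjustment factor at the heart of the SELINK process is simply the ratio of observed to assigned link volume, applied back to the production values of the zones whose trips use that link. A minimal sketch (zone names and volumes are invented; the example ratio is chosen to equal the 0.958 reported for 32 selected links):

```python
def link_adjustment_factor(ground_count, assigned_volume):
    # Ratio of the actual (counted) volume on the selected link
    # to the volume assigned to it by the travel demand model.
    return ground_count / assigned_volume

def adjust_zones(productions, zones_using_link, factor):
    # Scale the productions of every zone whose trips were assigned
    # to the selected link; other zones are left unchanged.
    return {zone: trips * factor if zone in zones_using_link else trips
            for zone, trips in productions.items()}

factor = link_adjustment_factor(ground_count=958.0, assigned_volume=1000.0)
productions = {"zone_1": 100.0, "zone_2": 250.0, "zone_3": 80.0}
adjusted = adjust_zones(productions, {"zone_1", "zone_3"}, factor)
print(round(factor, 3), adjusted)
```

Repeating this for each selected link, then re-running the gravity model with the adjusted productions and attractions, gives the iterative recalibration described above.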
The %RMSE for the four screenlines is inversely proportional to the average ground count totals by screenline. The magnitude of %RMSE for the four screenlines resulting from the fourth and last GM run using 32 and 16 selected links is 22% and 31% respectively. These results are similar to the overall %RMSE achieved for the 32 and 16 selected links themselves, 19% and 33% respectively. This implies that the SELINK analysis results are reasonable for all sections of the state. Functional class and route specific volume analysis is possible using the available 154 classification count check points. The truck traffic crossing the Interstate highways (ISH) with 37 check points, the US highways (USH) with 50 check points, and the State highways (STH) with 67 check points is compared to the actual ground count totals. The magnitude of the overall link volume to ground count ratio by route does not show any specific pattern of over- or underestimation. However, the %RMSE for the ISH shows the smallest value while that for the STH shows the largest. This pattern is consistent with the screenline analysis and the overall relationship between %RMSE and ground count volume groups. Area specific volume analysis provides another broad statewide measure of the performance of the overall model. The truck traffic in the North area with 26 check points, the West area with 36 check points, the East area with 29 check points, and the South area with 64 check points is compared to the actual ground count totals. The four areas show similar results. No specific patterns in the LV/GC ratio by area are found. In addition, the %RMSE is computed for each of the four areas. The %RMSEs for the North, West, East, and South areas are 92%, 49%, 27%, and 35% respectively, whereas the average ground counts are 481, 1383, 1532, and 3154 respectively. As in the screenline and volume range analyses, the %RMSE is inversely related to average link volume.
The SELINK adjustments of productions and attractions resulted in a very substantial reduction in the total in-state zonal productions and attractions. The initial in-state zonal trip generation model can now be revised with a new trip production trip rate (total adjusted productions/total population) and a new trip attraction trip rate. Revised zonal production and attraction adjustment factors can then be developed that reflect only the impact of the SELINK adjustments that cause increases or decreases from the revised zonal estimates of productions and attractions. Analysis of the revised production adjustment factors is conducted by plotting the factors on the state map. The east area of the state, including the counties of Brown, Outagamie, Shawano, Winnebago, Fond du Lac, and Marathon, shows comparatively large values of the revised adjustment factors. Overall, both small and large values of the revised adjustment factors are scattered around Wisconsin. This suggests that more independent variables beyond population alone are needed for the development of the heavy truck trip generation model. More independent variables, including zonal employment data (office employees and manufacturing employees) by industry type, zonal private trucks owned, and zonal income data, which are not currently available, should be considered. A plot of the frequency distribution of the in-state zones as a function of the revised production and attraction adjustment factors shows the overall adjustment resulting from the SELINK analysis process. Overall, the revised SELINK adjustments show that the productions for many zones are reduced by a factor of 0.5 to 0.8 while the productions for relatively few zones are increased by factors from 1.1 to 4, with most of the factors in the 3.0 range. No obvious explanation for the frequency distribution could be found. The revised SELINK adjustments overall appear to be reasonable.
The heavy truck VMT analysis is conducted by comparing the 1990 heavy truck VMT forecasted by the GM truck forecasting model, 2.975 billion, with the WisDOT computed data. This gives an estimate that is 18.3% less than the WisDOT computation of 3.642 billion VMT. The WisDOT estimates are based on sampling the link volumes for USH, STH, and CTH, which implies potential error in sampling the average link volume. The WisDOT estimate of heavy truck VMT cannot be tabulated by the three trip types, I-I, I-E (E-I), and E-E. In contrast, the GM forecasting model shows that the proportion of E-E VMT out of total VMT is 21.24%. In addition, tabulation of heavy truck VMT by route functional class shows that the proportion of truck traffic traversing the freeways and expressways is 76.5%. Only 14.1% of total freeway truck traffic consists of I-I trips, while 80% of total collector truck traffic consists of I-I trips. This implies that freeways are traversed mainly by I-E and E-E truck traffic while collectors are used mainly by I-I truck traffic. Other tabulations, such as average heavy truck speed by trip type, average travel distance by trip type, and the VMT distribution by trip type, route functional class, and travel speed, are useful information for highway planners to understand the characteristics of statewide heavy truck trip patterns. Heavy truck volumes for the target year 2010 are forecasted using the GM truck forecasting model under four scenarios. For better forecasting, ground count-based segment adjustment factors are developed and applied. ISH 90 & 94 and USH 41 are used as example routes. The forecasting results using the ground count-based segment adjustment factors are satisfactory for long range planning purposes, but additional ground counts would be useful for USH 41. Sensitivity analysis provides estimates of the impacts of the alternative growth rates, including information about changes in the trip types using key routes.
The network-based GM can easily model scenarios with different rates of growth in rural versus urban areas, small versus large cities, and in-state zones versus external stations.

  • PDF

A Study on the Costume Style of Civil Servants' Stone Images Erected at Tombs of the Kings for Yi-dynasty (조선왕조(朝鮮王朝) 왕릉(王陵) 문인석상(文人石像)의 복식형태(服飾形態)에 관한 연구)

  • Kwon, Yong-Ok
    • Journal of the Korean Society of Costume
    • /
    • v.4
    • /
    • pp.87-114
    • /
    • 1981
  • A costume reveals the social characteristics of the era in which it is worn; thus the history of change of the costume is the history of change of the living culture of the era. Since the Three States era, the costume structure of this country had been affected by the costume systems of China's historical dynasties, granted therefrom because of geographical conditions. This influence was conspicuous for the bureaucrat class, particularly including but not limited to the Kings' families. Such a grant of costume for the bureaucrat class (i.e., official uniform) was first given by the Dang-dynasty at the age of Queen Jinduck, the 28th ruler of the Shilla-dynasty. Since then, the costume for the bureaucrats had consecutively been affected through the ages, from the unified Shilla to the Koryo and to the Yi-dynasty. The full costumes officially used by government officials (generally called "Baek Gwan") in the Yi-dynasty were the Jo-bok, Gong-bok, and Sang-bok. Of these official costumes, the Gong-bok was worn when conducting official affairs of the dynasty, making a respectful visit to express thanks, or meeting diplomatic missions of foreign countries. It appears that no study has yet been made of the Gong-bok, while studies on the Jo-bok and the Sang-bok have been made.
Therefore, this article, by studying the styles of costumes of civil servants' stone images erected at the Kings' tombs of the Yi-dynasty, aims to help the persons concerned understand the Gong-bok, one of the official costumes for Baek Gwan of that age, and further purports to specifically identify the styles and changes of the Gong-bok worn by Baek Gwan during the Yi-dynasty, consisting of the Bok-doo (a hat, four-angled and two-storied with a flat top), Po (gown), Dae (belt), Hol (a small, thin plate officially held in hand by government officials, showing courtesy to and for writing brief memorandums before the King), and Hwa (shoes). For that purpose, I investigated by actually visiting the tombs of the Kings of the Yi-dynasty, including the Geonwon-neung, the tomb of the first King Tae-jo, and the You-neung, the tomb of the 27th King Soon-jong, as well as the tombs of the lawful wives and concubines of various Kings, totalling 29 tombs, and made reference to relevant books and records. Pursuant to this study, of the 29 Kings' tombs, the costume styles of the civil servants' stone images erected at 26 tombs are those of the Gong-bok for Baek Gwan of the Yi-dynasty: wearing the Bok-doo as a hat and a Ban-ryeong or Dan-ryeong Po as a gown with a Dae, holding a Hol in hand, and wearing shoes. Apart from those 26 tombs, the costume styles of the Ryu-neung, the tomb of Moon-jo, the first son of the 23rd King Soon-jo who was given the King's title after he died, and of the You-neung, the tomb of the 27th King Soon-jong, are those of the Jo-bok with the Yang-gwan (a sort of hat having erected stripes, different from the Bok-doo), and that of the Hong-neung, the tomb of the 26th King Go-jong, shows an exceptional one wearing the Yang-gwan and a Ban-ryeong Po; these costume styles other than the Gong-bok remain a subject for further study.
The Gong-bok, which is the costume style of the civil servants' stone images at most of the Kings' tombs, did not change in its basic structure during the roughly 500 years of the Yi-dynasty; as in the Koryo, it was categorized by the class of officials according to the color of the Po and the materials of the Dae and Hol. A summary of this costume style follows: (1) Gwan-mo (hat). The Gwan-mo style of the civil servants' stone images of the 26 Kings' tombs (other than the Ryu-neung, Hong-neung, and You-neung, which have the Yang-gwan) reveals the Bok-doo with a four-angled top, having fore part and back part divided. The back part of the Bok-doo is double the fore part in height. The expression of the Gak (wings of the Bok-doo) varies: the Gyo-gak Bok-doo, in which the Gaks, roundly risen toward the top, are crossed with each other (tombs of the King Tae-jong); the downward-style Jeon-gak Bok-doo, in which soft Gaks hang on the shoulders (tombs of the Kings Joong-jong and Seong-jong); and other types of Jeon-gak Bok-doo having Gaks which rise steeply or roundly toward the top and whose ends are treated in a rounded or straight-line form. At the lower edge, one protrusive line distinctly appears. Exceptionally, there appears an 11 Yang-gwan (a gwan having 11 erected stripes) at the Ryu-neung of the King Moon-jo, a 9 Yang-gwan at the Hong-neung of the King Go-jong, and an 11 Yang-gwan at the You-neung of the King Soon-jong; noting that the Yang-gwan of Baek Gwan granted by the Myeong-dynasty of China during the Yi-dynasty was in the shape of a 5 Yang-gwan for the first Poom (class), based on the principle of "Yideung Chaegang" (gradual degrading for the secondary level), the above-mentioned Yang-gwans are quite contrary to that principle; I do not touch this issue in this study, leaving it for further study. (2) Po (gown). (a) Git (collar).
The collar style of the Po was the Ban-ryeong (round collar) with a small neck-line in the early stage and was changed to the Dan-ryeong (round collar with a deep neck-line) in the middle of the dynasty. In the Dan-ryeong style of the middle era (shown at the tomb of the King Young-jo), a thin line such as a bias is shown around the internal side edge, and the width of the collar became a little wider. It is particularly noted that the Ryu-neung, established in the middle stage, and the You-neung, in the later stage, show civil servants in the Jo-bok with the Jik-ryeong (straight collar) Po, and in the case of the Hong-neung, the tomb of the King Go-jong, the civil servants, although they wear the Yang-gwan, are in the Ban-ryeong Po with Hoo-soo (back embroidery) and Dae and wear shoes as used in the Jo-bok style. As I could not make clear the theoretical basis of why the civil servants' costume styles revealed at these tombs differ from those of other tombs, I left this issue for further study. It is also noted that all the civil servants' stone images show the shape of a triangled collar revealed over the Godae-git of the Po. This triangled collar, I believe, would be the collar of the Cheom-ri, which was worn between the Po and the underwear. (b) Sleeve. The sleeve was in the Gwan-soo (wide sleeve) style, having a width of over 100 centimeters from the early stage to the later stage, and in the Doo-ri sleeve style with the edge slightly rounded; we can recognize that it was a long sleeve in view of the block-fold-shaped protrusive lines expressed on the arms. At the age of the King Young-jo, the sleeve-end became slightly narrow and, as a result, the lower line of the sleeve was shaped curved. We can see another shape of narrow sleeve inside the wide sleeve-end, which should be the sleeve of the Cheom-ri worn under the Gong-bok. (c) Moo.
The Moo appeared on the Po of the civil servants' stone images at the age of the King Sook-jong, coming into the middle era. Initially the top of the Moo was expressed flat, but the Moo was gradually changed to a triangled shape with an acute top. In certain cases, the top or lower part of the Moo is not revealed because of wear and tear. (d) Yeomim. The Yeomim (folding) of the Po was first expressed on the civil servants' stone images of the Won-neung, the tomb of the King Young-jo, and we can see a more delicate expression of the Yeomim and Goreum (a stripe folding and fixing the lapel of the Po) at the tomb of the Jeongseong-wanghoo, the wife of the King Young-jo. At the age of the King Soon-jo, we can see a shape of Goreum similar to a string rather than the Goreum, and the upper part of the Goreum, which fixes the Yeomim, was expressed on the right sleeve. (3) Dae. The Dae fixed on the Po was placed at half of the length of the Po from the shoulders in the early stage. Thereafter, at the age of the King Hyeon-jong, it was shown slightly higher, placed at around one third of the length of the Po. With regard to the design of the Dae, all the civil servants' stone images of the Kings' tombs, other than those of the Geonwon-neung of the King Tae-jo, show a single or double protrusive line expressed at the edge of the Dae, and in the middle of such lines, cloud patterns, dangcho (a grass) patterns, chrysanthemum patterns, or other various types of flowery patterns were designed. The remaining portion of the waist Dae was hung on the back, initially expressed as directed from the left to the right but thereafter expressed without orderly fashion, to the direction of the left from the right and vice versa. The Dae was in the shape of a Yaja Dae. In this regard, the issue of when or where such a disorderly fashion of the direction of the remaining portion of the waist Dae originated is also presented to be clarified.
In the case of the Ryu-neung, Hong-neung, and You-neung, which have civil servants' stone images wearing the exceptional costume (Jo-bok), the waist Dae of the Ryu-neung and Hong-neung are designed in a mixture of dual-crane patterns, consecutive beaded patterns, and chrysanthemum patterns, and that of the You-neung is designed in a cloud pattern. (4) Hol. Although the materials of the Hol held in the hands of the civil servants' stone images are not identifiable, they should be ivory Hol, as all the Baek Gwan erected as stone images should be high-class officials. In the styles, no significant changes were found; however, the Hol expressed on the civil servants' stone images of the Yi-dynasty were shaped with a round top and angled bottom or with a round top and bottom. Particularly, at the age of the King Young-jo, the Hol was expressed in a peculiar type with all four angles cut off. (5) Hwa (shoes). As the shoes expressed on the civil servants' stone images are covered by the lower edges of the Po, their styles are not exactly identifiable. However, reading the statement "black leather shoes for the first class (1 Poom) to the ninth class (9 Poom)" recorded in the Gyeongkook Daejon, we can believe that shoes were worn. As the ages went on, the front tips of the shoes were raised, and particularly at the Hong-neung of the King Go-jong the shoes were obviously expressed with a modern sense, as the country had become civilized.

  • PDF