• Title/Summary/Keyword: 이용복잡성 (usage complexity)

Search Results: 3,987

Studies on the Interpretative Classification of Paddy Soils in Korea I : A Study on the Classification of Sandy Paddy Soils (우리나라 답토양(畓土壌)의 실용적분류(実用的分類)에 관(関)한 연구(硏究) -제1보(第一報) 사질답(砂質畓) 분류(分類)에 관(関)하여)

  • Jung, Yeun-Tae;Yang, Euy-Seog;Park, Rae-Kyung
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.15 no.2
    • /
    • pp.128-140
    • /
    • 1982
  • The distribution and practical classification of sandy paddy soils, which occupy the largest acreage among low-productivity paddy soils in Korea and show distinctive improvement effects, were studied in order to propose a tentative new classification system for sandy-textured paddy soils as a means of improving the "Paddy Soil Type Classification" scheme currently in use. The results are summarized as follows: 1. The potential productivity of sandy-textured paddy soils was about 86% of that of normal paddy, and the coefficient of variation was relatively high, indicating that the properties of the soils included were not sufficiently homogeneous. 2. Because poorly drained and halomorphic (E.C. > 16 mmhos/cm at 25°C) sandy soils are not included in the "Sandy Soil" type under the criteria of the "Soil Type Classification", the recommendation of "adding clay earth" becomes complicated, and the soil type has to change when the salts are washed away or the ground water table fluctuates. 3. Under the tentative criteria proposed here, coarse-textured soils are entirely included in the "Sandy Soils", and the sandy soils are subdivided into four sub-types, namely "Oxidized leaching sandy paddy", "Red-ox. intergrading sandy paddy", "Reduced accumulating sandy paddy" and "Reduced halomorphic sandy paddy". The proposed system of sandy soil classification consists of the following categories: Type (Sandy paddy) - Sub-type (4) - Texture family (5) - Soil series (48) (see the sketch after this entry). 4. The variation of productivity under the proposed scheme was more homogeneous than under the present one. 5. The total extent of sandy paddy soils was 409,902 ha (32.3% of total paddy) according to the present classification system, but reached 492,983 ha (38.9%) under the proposed system. The provinces of Gyeong-gi (88,923 ha), Jeon-bug (69,717 ha) and Gyeong-bug (55,390 ha) have extensive acreages of sandy paddy soils, and the provinces with a high ratio of sandy paddy soils were Gang-weon (58.9%), Gyeong-gi (50.5%), Chung-bug (48.5%), Jeon-bug (41.0%), etc. The ratio increased under the proposed scheme, e.g. to 71.4% in the case of Gang-weon province. 6. According to the suitability grouping of paddy soils, the sandy soils mostly belong to class 3 (69.1%) and class 4 (29.2%). The coarse loamy (59.2%) and coarse silty (16.1%) textural families were dominant. 7. Under the proposed scheme, the "Red-ox. intergrading" sub-type accounts for 49.6% (245,012 ha) of sandy paddy, the "Oxidized leaching" sub-type reaches 33.5% (64,890 ha), and the remaining 16.9% (83,081 ha) belongs to the "Reduced accumulating" (14.0%) and "Reduced halomorphic" (2.9%) sub-types.

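To make the proposed categorical structure concrete, the following is a minimal Python sketch of the four-level hierarchy (Type - Sub-type - Texture family - Soil series) described in item 3 above. The class names, field names, and the example record are illustrative assumptions, not part of the original classification document.

```python
from dataclasses import dataclass
from enum import Enum


class SandySubType(Enum):
    """The four sub-types proposed for the 'Sandy paddy' type."""
    OXIDIZED_LEACHING = "Oxidized leaching sandy paddy"
    REDOX_INTERGRADING = "Red-ox. intergrading sandy paddy"
    REDUCED_ACCUMULATING = "Reduced accumulating sandy paddy"
    REDUCED_HALOMORPHIC = "Reduced halomorphic sandy paddy"


@dataclass
class SandyPaddySoil:
    """One soil series placed in the Type > Sub-type > Texture family > Series hierarchy."""
    soil_series: str         # one of the 48 soil series
    texture_family: str      # one of the 5 texture families, e.g. "coarse loamy"
    sub_type: SandySubType   # one of the 4 sub-types above
    soil_type: str = "Sandy paddy"


# Hypothetical example record (names are illustrative, not taken from the paper):
example = SandyPaddySoil(soil_series="Example-series",
                         texture_family="coarse loamy",
                         sub_type=SandySubType.REDOX_INTERGRADING)
print(example)
```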

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.63-83
    • /
    • 2019
  • Investors prefer to look for trading points based on the shapes shown in charts rather than on complex analyses such as corporate intrinsic-value analysis or technical indicator analysis. However, pattern analysis is difficult, and it has been computerized far less than users need. In recent years there have been many studies of stock-price patterns using machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, advances in IT have made it easier to analyze huge amounts of chart data to find patterns that can predict stock prices. Although short-term price forecasting power has improved, long-term forecasting power remains limited, so such models are used for short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier techniques could not recognize, but this can be fragile in practice because whether the patterns found are actually suitable for trading is a separate question. Such studies find a meaningful pattern, locate a point that matches it, and then measure performance after n days, assuming a purchase was made at that point. Since this approach computes virtual returns, it can diverge considerably from reality. Whereas existing methods try to discover patterns with predictive power, this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M&W wave patterns published by Merrill (1980) are simple because each can be distinguished by five turning points. Although some of these patterns have been reported to have price predictability, no performance reports from actual markets were available. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of improving pattern-recognition accuracy. In this study, the 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be implemented easily in a system, and only one pattern with a high success rate per group is selected for trading. Patterns that had a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The evaluation is closer to a real situation because performance is measured assuming that both the buy and the sell were actually executed. We tested three ways to calculate turning points (the third is sketched in the code after this entry). The first, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and then computes the vertices. In the second, the high-low-line zig-zag method, a high price that meets the n-day high-price line is taken as a peak, and a low price that meets the n-day low-price line is taken as a valley. In the third, the swing-wave method, a central high price that is higher than the n high prices to its left and right is taken as a peak, and a central low price that is lower than the n low prices to its left and right is taken as a valley. The swing-wave method was superior to the other methods in our tests, which we interpret as meaning that trading after confirming the completion of a pattern is more effective than trading while the pattern is still unfinished. Because the number of cases was too large to search exhaustively in this simulation, genetic algorithms (GA) were the most suitable way to find patterns with high success rates. We also ran the simulation using walk-forward analysis (WFA), which separates the test period from the application period, so that the system could respond appropriately to market changes. We optimize at the level of the stock portfolio, because optimizing variables for each individual stock risks over-optimization; we therefore set the number of constituent stocks to 20 to gain the effect of diversified investment while avoiding over-fitting. We tested the KOSPI market by dividing it into six categories. The small-cap portfolio was the most successful, and the high-volatility portfolio was the second best. This shows that some price volatility is needed for patterns to take shape, but that higher volatility is not always better.
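
As a concrete illustration of the swing-wave turning-point rule described above (a center bar whose high exceeds the n highs on each side is a peak; a center bar whose low is below the n lows on each side is a valley), here is a minimal Python sketch. The function name and the synthetic price series are assumptions for illustration; the paper's actual implementation, pattern grouping, and GA/WFA machinery are not reproduced.

```python
from typing import List, Tuple


def swing_wave_turning_points(highs: List[float], lows: List[float], n: int = 3) -> List[Tuple[int, str]]:
    """Return (index, 'peak'/'valley') turning points using the swing-wave rule.

    Bar i is a peak if highs[i] is strictly greater than the n highs on its left
    and the n highs on its right; it is a valley if lows[i] is strictly lower
    than the n lows on both sides.
    """
    points = []
    for i in range(n, len(highs) - n):
        left_h, right_h = highs[i - n:i], highs[i + 1:i + 1 + n]
        left_l, right_l = lows[i - n:i], lows[i + 1:i + 1 + n]
        if highs[i] > max(left_h) and highs[i] > max(right_h):
            points.append((i, "peak"))
        elif lows[i] < min(left_l) and lows[i] < min(right_l):
            points.append((i, "valley"))
    return points


# Tiny synthetic example (not market data); five such turning points would outline one M/W-style wave.
highs = [10, 11, 13, 12, 11, 12, 15, 14, 12, 11, 13, 16, 15, 14]
lows  = [ 9, 10, 12, 11, 10, 11, 14, 13, 11, 10, 12, 15, 14, 13]
print(swing_wave_turning_points(highs, lows, n=3))
```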

The Research on Recommender for New Customers Using Collaborative Filtering and Social Network Analysis (협력필터링과 사회연결망을 이용한 신규고객 추천방법에 대한 연구)

  • Shin, Chang-Hoon;Lee, Ji-Won;Yang, Han-Na;Choi, Il Young
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.19-42
    • /
    • 2012
  • Consumer consumption patterns are shifting rapidly as buyers migrate from offline markets to e-commerce channels such as TV shopping channels and internet shopping malls. In offline markets consumers go shopping, inspect the items, and choose among them, whereas recently they tend to buy at shopping sites free of constraints of time and place. However, as e-commerce markets continue to expand, customers complain that shopping online is becoming a bigger hassle: shoppers have very limited information on the products, and the delivered product can differ from what they wanted, which often results in purchase cancellations. Because this happens frequently, consumers are likely to rely on other consumers' reviews, and companies should pay attention to the consumer's voice. E-commerce is a very important marketing tool for suppliers: it can recommend products to customers and connect them directly with suppliers with a single click. Recommender systems have been studied in various forms, the more prominent being recommendation based on best-sellers and demographics, content filtering, and collaborative filtering. However, these systems share two weaknesses: they cannot recommend products to consumers at a personal level, and they cannot recommend products to new consumers with no purchase history. These problems can be mitigated with information collected from questionnaires about demographics and preference ratings, but consumers find such questionnaires a burden and are unlikely to provide accurate information. This study therefore combines collaborative filtering with the centrality measures of social network analysis; centrality provides the information needed to infer the preferences of new consumers from the shopping history of existing and previous ones. Whereas past research focused only on existing consumers with similar shopping patterns, this study tries to improve recommendation accuracy using all shopping information, including dissimilar as well as similar patterns. The data used in this study, the MovieLens data, were built by the GroupLens Research Project team at the University of Minnesota to study movie recommendation with collaborative filtering; they contain preference ratings from 943 respondents on 1,684 movies. The 100,000 ratings were ordered by time, with the first 50,000 treated as existing customers and the latter 50,000 as new customers. The proposed recommender consists of three systems: a [+] group recommender, a [-] group recommender, and an integrated recommender (a minimal sketch of the idea follows this entry). The [+] group recommender treats customers with similar buying patterns as 'neighbors', whereas the [-] group recommender treats customers with opposite buying patterns as 'contraries'. The integrated recommender uses both and recommends the movies that both systems pick. Studying the three systems lets us find the one that best balances accuracy and customer satisfaction. Our analysis showed that the integrated recommender is the best of the three, followed by the [-] group recommender and the [+] group recommender. This result agrees with the intuition that recommendation accuracy can be improved by using all the relevant information. We provide contour maps and graphs to make it easy to compare the accuracy of each recommender system. Although the integrated recommender improved accuracy, this research is based on static data with no live customers; in other words, consumers did not actually watch the movies recommended by the system. The approach may also not work well with products other than movies, so recommendation systems need calibration for specific product and customer types.
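
The following is a minimal Python sketch of the [+]/[-] group idea described above: neighbors are users with positive rating correlation, contraries are users with negative correlation (whose ratings are used with inverted sign), and the integrated recommendation combines the items both candidate lists agree on. Function names, the toy rating matrix, and the neighborhood sizes are illustrative assumptions; the paper's centrality-based handling of new customers is not reproduced here.

```python
import numpy as np

# Toy user-item rating matrix (rows: users, cols: items); 0 means "not rated".
R = np.array([
    [5, 4, 2, 1, 0],
    [4, 5, 1, 0, 1],
    [1, 0, 5, 4, 5],
    [0, 1, 4, 5, 4],
], dtype=float)
target = np.array([5, 0, 0, 1, 0], dtype=float)  # the user we recommend for


def pearson(u, v):
    """Pearson correlation over items rated by both users (0 = unrated)."""
    mask = (u > 0) & (v > 0)
    if mask.sum() < 2:
        return 0.0
    a, b = u[mask], v[mask]
    if a.std() == 0 or b.std() == 0:
        return 0.0
    return float(np.corrcoef(a, b)[0, 1])


def group_recommend(R, target, sign=+1, top_k=2):
    """Score unrated items from the k most positively (sign=+1, 'neighbors') or
    most negatively (sign=-1, 'contraries') correlated users; contraries'
    ratings are used with the 1..5 scale inverted."""
    sims = np.array([pearson(target, r) for r in R])
    order = np.argsort(-sign * sims)[:top_k]          # most similar or most opposite users
    scores = {}
    for j in np.where(target == 0)[0]:                # unrated items only
        num = den = 0.0
        for i in order:
            if R[i, j] > 0 and sign * sims[i] > 0:
                rating = R[i, j] if sign > 0 else (6 - R[i, j])
                num += abs(sims[i]) * rating
                den += abs(sims[i])
        if den > 0:
            scores[j] = num / den
    return scores


plus = group_recommend(R, target, sign=+1)            # [+] group recommender
minus = group_recommend(R, target, sign=-1)           # [-] group recommender
integrated = {j: (plus[j] + minus[j]) / 2 for j in plus.keys() & minus.keys()}
print("[+] group:", plus, "[-] group:", minus, "integrated:", integrated)
```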

Implementation of integrated monitoring system for trace and path prediction of infectious disease (전염병의 경로 추적 및 예측을 위한 통합 정보 시스템 구현)

  • Kim, Eungyeong;Lee, Seok;Byun, Young Tae;Lee, Hyuk-Jae;Lee, Taikjin
    • Journal of Internet Computing and Services
    • /
    • v.14 no.5
    • /
    • pp.69-76
    • /
    • 2013
  • The incidence of globally infectious and pathogenic diseases such as H1N1 (swine flu) and avian influenza (AI) has recently increased. An infectious disease is a disease caused by a pathogen that can be passed from an infected person to a susceptible host. Pathogens of infectious diseases, such as bacilli, spirochaetes, rickettsiae, viruses, fungi and parasites, cause various symptoms such as respiratory disease, gastrointestinal disease, liver disease, and acute febrile illness, and they can be spread through food, water, insects, breathing, and contact with other persons. Most countries around the world now use mathematical models to predict and prepare for the spread of infectious diseases. In modern society, however, infectious diseases spread quickly and in complicated ways because of the rapid development of transportation (both ground and underground), so there is not enough time to predict their spread. A new system that can help prevent the spread of infectious diseases by predicting their pathway therefore needs to be developed. In this study, an integrated monitoring system that can track and predict the pathway of infectious diseases for real-time monitoring and control is developed. The system is based on the conventional mathematical model known as the Susceptible-Infectious-Recovered (SIR) model (a minimal multi-city sketch follows this entry). The proposed model considers both inter- and intra-city modes of transportation, including bus, train, car and airplane, to express interpersonal contact (i.e., migration flow). In addition, real data modified according to the geographical characteristics of Korea are employed to reflect realistic circumstances of possible disease spread in Korea, and the model's parameters can be controlled to predict where and when vaccination needs to be performed. The simulation includes several assumptions and scenarios. Using data from Statistics Korea, five major cities assumed to have the largest population migration were chosen: Seoul, Incheon (Incheon International Airport), Gangneung, Pyeongchang and Wonju. It was assumed that the cities were connected in one network and that the infectious disease spread through the denoted transportation methods only. Daily traffic volume was obtained from the Korean Statistical Information Service (KOSIS), the population of each city from Statistics Korea, data on H1N1 (swine flu) from the Korea Centers for Disease Control and Prevention, and air transport statistics from the Aeronautical Information Portal System. These data were adjusted in consideration of current conditions in Korea and of several realistic assumptions and scenarios. Three scenarios (occurrence of H1N1 at Incheon International Airport with no vaccination in any city, with vaccination in Seoul, and with vaccination in Pyeongchang) were simulated, and the number of days taken for the number of infected to reach its peak and the proportion of Infectious (I) were compared. Without vaccination, the peak was reached fastest in Seoul (37 days) and slowest in Pyeongchang (43 days), and the proportion of I was highest in Seoul and lowest in Pyeongchang. With vaccination in Seoul, the peak was again fastest in Seoul (37 days) and slowest in Pyeongchang (43 days), and the proportion of I was highest in Gangneung and lowest in Pyeongchang. With vaccination in Pyeongchang, the peak was likewise fastest in Seoul (37 days) and slowest in Pyeongchang (43 days), and the proportion of I was highest in Gangneung and lowest in Pyeongchang. These results confirm that, upon first occurrence, H1N1 spreads in proportion to the traffic volume of each city. Because the infection pathway depends on the traffic volume of each city, it is possible to devise preventive measures against infectious disease by tracking and predicting its pathway through the analysis of traffic volume.
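
As a rough illustration of the modelling idea (not the authors' implementation), the sketch below couples standard SIR dynamics in several cities through a daily travel-volume matrix, so that infection in one city seeds infection in connected cities in proportion to traffic. City names follow the paper; populations, rates, and the travel matrix are made-up placeholders, not KOSIS or KCDC data.

```python
import numpy as np

cities = ["Seoul", "Incheon", "Gangneung", "Pyeongchang", "Wonju"]
N = np.array([9_700_000, 2_900_000, 210_000, 43_000, 350_000], dtype=float)  # placeholder populations

# Placeholder daily travellers from row-city to column-city.
T = np.array([
    [0, 80_000, 9_000, 3_000, 12_000],
    [70_000, 0, 2_000, 1_000, 3_000],
    [8_000, 2_000, 0, 4_000, 5_000],
    [3_000, 1_000, 4_000, 0, 2_000],
    [11_000, 3_000, 5_000, 2_000, 0],
], dtype=float)

beta, gamma = 0.4, 0.2            # placeholder transmission / recovery rates (per day)
S, I, R = N.copy(), np.zeros(5), np.zeros(5)
S[1] -= 10; I[1] += 10            # scenario: initial cases appear at Incheon (airport)

days, history = 120, []
for day in range(days):
    # Within-city SIR dynamics (discrete daily steps).
    new_inf = beta * S * I / N
    new_rec = gamma * I
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec

    # Between-city mixing: travellers carry each compartment in proportion to its share.
    for X in (S, I, R):
        flow = T * (X / N)[:, None]          # people in compartment X moving i -> j
        X += flow.sum(axis=0) - flow.sum(axis=1)
    history.append(I / N)                    # proportion of Infectious per city

peak_day = np.argmax(np.array(history), axis=0)
for c, d in zip(cities, peak_day):
    print(f"{c}: infection peaks around day {d}")
```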

Changes in Serum IGF-I and Spermatogenesis Analysed by Flow Cytometry in Growing Male Rabbit (성장 중인 수토끼에서 혈청 IGF-I 수준과 Flow Cytometry 측정에 의한 정자 형성의 변화)

  • Lee J. H.;Kim C. K.;Chang Y. M.;Ryu J. W.;Park M. Y.;Chung Y. C.;Pang M. G.
    • Reproductive and Developmental Biology
    • /
    • v.29 no.3
    • /
    • pp.163-168
    • /
    • 2005
  • The aim of this study was to investigate the changes in serum insulin-like growth factor-I (IGF-I) and growth hormone (GH), to quantify spermatogenesis, and to examine the relationships among these measurements during the pubertal period in New Zealand White male rabbits. To investigate age-related testicular changes in the DNA content of spermatogenic cells, fine-needle testicular biopsies from males aged 10 to 28 wks were evaluated by flow cytometry (FCM). Body weight increased significantly between the ages of 12 and 20 wks (P<0.05) and reached 3.4 kg at 28 wks of age. The highest serum IGF-I level (451.3 ng/mL) was observed at 20 wks of age (P<0.05), after which it remained stable at low levels. The serum GH level at 18 wks of age was 183.3 pg/mL, significantly higher than at the other ages (P<0.05), and the rise in serum GH tended to occur somewhat earlier than that of IGF-I. The relative percentage of 1C cells in the testicular cell compartments was 48.2% at 18 wks of age, significantly higher than at 16 wks (P<0.05), and thereafter increased with age to 68%. The percentage of 2C cells in the testis was 26.8% at 18 wks of age, significantly lower than the 54.3% observed at 16 wks (P<0.05). The percentage of 4C cells was maintained constantly at 2-6%, except for 9.9% at 18 wks of age. In conclusion, the results suggest that puberty onset occurred at about 18 wks of age, that serum IGF-I and GH showed age- and growth-specific changes during the pubertal period, and that these changes might be related to spermatogenesis. DNA FCM combined with fine-needle testicular biopsy can offer a very sensitive method for monitoring the quantitative spermatogenic events related to puberty onset.

Usefulness of Troponin-I, Lactate, C-reactive protein as a Prognostic Markers in Critically Ill Non-cardiac Patients (비 순환기계 중환자의 예후 인자로서의 Troponin-I, Lactate, C-reactive protein의 유용성)

  • Cho, Yu Ji;Ham, Hyeon Seok;Kim, Hwi Jong;Kim, Ho Cheol;Lee, Jong Deok;Hwang, Young Sil
    • Tuberculosis and Respiratory Diseases
    • /
    • v.58 no.6
    • /
    • pp.562-569
    • /
    • 2005
  • Background: A severity scoring system is useful for predicting the outcome of critically ill patients, but such systems are quite complicated and cost-ineffective. Simple serologic markers, including troponin-I, lactate and C-reactive protein (CRP), have been proposed to predict outcome. The aim of this study was to evaluate the prognostic value of troponin-I, lactate and CRP in critically ill non-cardiac patients. Methods: From September 2003 to June 2004, 139 patients (age 63.3±14.7 years, M:F = 88:51) admitted to the MICU with non-cardiac critical illness at Gyeongsang National University Hospital were enrolled. Severity of illness and multi-organ failure scores (Acute Physiologic and Chronic Health Evaluation II, Simplified Acute Physiologic Score II and Sequential Organ Failure Assessment) were evaluated, and troponin-I, lactate and CRP were measured within 24 hours of MICU admission. Each value was compared between survivors and non-survivors at the 10th and 30th day after ICU admission, the mortality rate was compared between the normal and abnormal groups at the 10th and 30th day, and the correlations between each value and the severity scores were assessed. Results: Troponin-I and CRP levels, but not lactate, were significantly higher in the non-survivors than in the survivors at the 10th day (1.018±2.58 ng/ml and 98.48±69.24 mg/L vs. 4.208±10.23 ng/ml and 137.69±70.18 mg/L) (p<0.05). Troponin-I, lactate and CRP levels were all significantly higher in the non-survivors than in the survivors at the 30th day (0.99±2.66 ng/ml, 8.02±9.54 ng/dl and 96.87±68.83 mg/L vs. 3.36±8.74 ng/ml, 15.42±20.57 ng/dl and 131.28±71.23 mg/L) (p<0.05). The mortality rate was significantly higher in the abnormal groups of troponin-I, lactate and CRP than in the corresponding normal groups at the 10th day (28.1%, 31.6%, 18.9% vs. 11.0%, 15.8%, 0%) and the 30th day (38.6%, 47.4%, 25.8% vs. 15.9%, 21.7%, 14.3%) (p<0.05). Troponin-I and lactate were significantly correlated with the SAPS II score (r² = 0.254 and 0.365, respectively; p<0.05). Conclusion: Measuring troponin-I, lactate and CRP levels on admission may be useful for predicting the outcome of critically ill non-cardiac patients.

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • A Convolutional Neural Network (ConvNet) is a powerful class of deep neural network that can analyze and learn hierarchies of visual features. An early precursor, the Neocognitron, was introduced in the 1980s, but at that time neural networks were not broadly used in industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, which revived interest in neural networks. The success of Convolutional Neural Networks rests on two main factors: the emergence of advanced hardware (GPUs) for sufficient parallel computation, and the availability of large-scale datasets such as ImageNet (ILSVRC) for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains it is difficult and laborious to gather a large-scale dataset to train a ConvNet, and even with such a dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be addressed with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning settings: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first setting, a pre-trained ConvNet (for example, trained on ImageNet) is used to compute feed-forward activations of an image, and activation features are extracted from specific layers. In the second setting, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus only on using multiple ConvNet layers as a fixed feature extractor. However, applying the high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of an image, which means a better representation could be obtained by finding the optimal combination of multiple layers. Based on that observation, we propose to employ representations from multiple ConvNet layers for transfer learning instead of a single-layer representation. Our primary pipeline has three steps (a rough sketch follows this entry). First, images from the target task are fed forward through a pre-trained AlexNet, and the activation features of its three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple-layer representation, which carries more information about the image; when the three fully connected layer features are concatenated, the resulting image representation has 9,192 (4,096+4,096+1,000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy, so in the third step we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately and the performance of transfer learning improves. To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple-layer representations against single-layer representations, using PCA for feature selection and dimensionality reduction. Our experiments demonstrated the importance of feature selection for the multiple-layer representation. Moreover, our proposed approach achieved 75.6% accuracy compared to 73.9% for the FC7 layer on Caltech-256, 73.1% compared to 69.2% for the FC8 layer on VOC07, and 52.2% compared to 48.7% for the FC7 layer on SUN397. We also showed that our approach achieved superior performance, with accuracy improvements of 2.8%, 2.1% and 3.1% on Caltech-256, VOC07, and SUN397, respectively, compared to existing work.
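
Below is a rough Python (PyTorch) sketch of the fixed-feature-extractor pipeline described above: forward a batch through a pre-trained AlexNet, capture the activations of the three fully connected layers with hooks, concatenate them into a 9,192-dimensional representation, and reduce it with PCA. It assumes a recent torchvision (weights enum API) and uses a random placeholder batch instead of Caltech-256/VOC07/SUN397 images; the layer indices, component count, and preprocessing are illustrative, not the authors' exact configuration.

```python
import torch
import torchvision
from sklearn.decomposition import PCA

# Load a pre-trained AlexNet (ImageNet weights) and put it in inference mode.
model = torchvision.models.alexnet(weights=torchvision.models.AlexNet_Weights.IMAGENET1K_V1)
model.eval()

# Capture the outputs of the three fully connected layers (FC6, FC7, FC8)
# via forward hooks; indices follow torchvision's AlexNet.classifier layout.
captured = {}
def save_to(name):
    def hook(module, inputs, output):
        captured[name] = output.detach()
    return hook

model.classifier[1].register_forward_hook(save_to("fc6"))  # Linear(9216 -> 4096)
model.classifier[4].register_forward_hook(save_to("fc7"))  # Linear(4096 -> 4096)
model.classifier[6].register_forward_hook(save_to("fc8"))  # Linear(4096 -> 1000)

def multi_layer_features(images):
    """Return the concatenated FC6+FC7+FC8 activations (4096+4096+1000 = 9192 dims)."""
    with torch.no_grad():
        model(images)
    return torch.cat([captured["fc6"], captured["fc7"], captured["fc8"]], dim=1)

# Placeholder batch standing in for preprocessed target-task images.
images = torch.randn(16, 3, 224, 224)
features = multi_layer_features(images).numpy()   # shape (16, 9192)

# Reduce the redundant, noisy concatenated features with PCA before training a classifier.
pca = PCA(n_components=8)                         # small only because the toy batch is tiny
reduced = pca.fit_transform(features)
print(features.shape, "->", reduced.shape)
```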