• Title/Summary/Keyword: Informative Cluster Size


Function Approximation for Reinforcement Learning using Fuzzy Clustering (퍼지 클러스터링을 이용한 강화학습의 함수근사)

  • Lee, Young-Ah;Jung, Kyoung-Sook;Chung, Tae-Choong
    • The KIPS Transactions:PartB / v.10B no.6 / pp.587-592 / 2003
  • Many real-world control problems have continuous states and actions. When the state space is continuous, reinforcement learning problems involve a very large state space and suffer from the memory and time required to learn all individual state-action values. These problems need function approximators that infer actions for new states from previously experienced states. We introduce Fuzzy Q-Map, a function approximator for 1-step Q-learning based on fuzzy clustering. Fuzzy Q-Map groups similar states, then chooses an action and looks up the Q value according to membership degree. The centroid and Q value of the winner cluster are updated using the membership degree and the TD (temporal difference) error. We applied Fuzzy Q-Map to the mountain car problem and achieved accelerated learning speed.
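The winner-cluster update the abstract describes (membership-weighted TD update of the winner's Q value and centroid) can be sketched roughly as below; the class name, learning rates, and the fuzzifier `m` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

class FuzzyQMap:
    """Minimal sketch of a fuzzy-clustering function approximator for 1-step Q-learning."""

    def __init__(self, centroids, n_actions, alpha=0.1, beta=0.05, gamma=0.99, m=2.0):
        self.centroids = np.asarray(centroids, dtype=float)  # one centroid per cluster
        self.q = np.zeros((len(self.centroids), n_actions))  # Q value per cluster/action
        self.alpha, self.beta, self.gamma, self.m = alpha, beta, gamma, m

    def memberships(self, state):
        # Fuzzy c-means style membership degrees: closer centroids score higher.
        d = np.linalg.norm(self.centroids - state, axis=1) + 1e-12
        w = d ** (-2.0 / (self.m - 1.0))
        return w / w.sum()

    def q_values(self, state):
        # The Q estimate for a state is the membership-weighted sum over clusters.
        return self.memberships(state) @ self.q

    def update(self, state, action, reward, next_state):
        mu = self.memberships(state)
        winner = int(np.argmax(mu))
        td_error = (reward + self.gamma * self.q_values(next_state).max()
                    - self.q_values(state)[action])
        # Winner cluster: Q value and centroid both move, scaled by membership degree.
        self.q[winner, action] += self.alpha * mu[winner] * td_error
        self.centroids[winner] += self.beta * mu[winner] * (state - self.centroids[winner])
        return td_error
```

For the mountain car problem, `state` would be the (position, velocity) pair and the centroids an initial grid or sample of visited states.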

Stability Assessment of FKP System by NGII using Long-term Analysis of NTRIP Correction Signal (NTRIP 보정신호 분석을 통한 국토지리정보원 FKP NRTK 시스템 안정성 평가)

  • Kim, Min-Ho;Bae, Tae-Suk
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.31 no.4 / pp.321-329 / 2013
  • Despite the advantage of unlimited access, studies on the accuracy and stability of FKP are insufficient, which blocks the spread of the system to various applications. We therefore performed a long-term analysis of continuous real-time positioning and investigated the error characteristics depending on network size and the surrounding environment. FKP shows significant changes in positioning accuracy at different times of day: accuracy during daytime is worse than at nighttime. In addition, the size and deviation of the FKP correction may change with ionospheric conditions, and a high correlation between the ambiguity resolution rate and the deviation of the correction was observed. The receivers continuously request correction information in order to cope with sudden ionospheric variability; under stable ionospheric conditions, on the other hand, the correction information was not received for up to an hour. It is noteworthy that FKP outliers are clustered in position with some biases. Since errors of several meters can occur in kinematic positioning with FKP, appropriate preparation is necessary for real-time applications.

Managing the Reverse Extrapolation Model of Radar Threats Based Upon an Incremental Machine Learning Technique (점진적 기계학습 기반의 레이더 위협체 역추정 모델 생성 및 갱신)

  • Kim, Chulpyo;Noh, Sanguk
    • The Journal of Korean Institute of Next Generation Computing / v.13 no.4 / pp.29-39 / 2017
  • Various electronic warfare situations drive the need to develop an integrated electronic warfare simulator that can perform electronic warfare modeling and simulation of radar threats. In this paper, we analyze the components of a simulation system that reversely models radar threats emitting electromagnetic signals based on electronic-intelligence parameters, and propose a method to incrementally maintain the reverse extrapolation model of RF threats. In the experiments, we evaluate the effectiveness of the incremental model update and also assess methods of integrating the reverse extrapolation models. The individual models of RF threats are constructed using a decision tree, a naive Bayesian classifier, an artificial neural network, and clustering algorithms based on Euclidean distance and cosine similarity, respectively. Experimental results show that the accuracy of the reverse extrapolation models improves as the size of the threat sample increases. In addition, we use voting, weighted voting, and the Dempster-Shafer algorithm to integrate the results of the five models of RF threats. The final decision of reverse extrapolation through the Dempster-Shafer algorithm shows the best accuracy.
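Of the three integration schemes named in the abstract, Dempster's rule of combination is the least obvious. A minimal sketch follows, with hypothetical singleton threat classes as the frame of discernment; it is an illustration of the general rule, not the paper's implementation.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the same frame with Dempster's rule.

    m1, m2: dicts mapping frozenset hypotheses to masses that each sum to 1.
    Masses on intersecting hypotheses multiply and accumulate; mass assigned
    to disjoint pairs is conflict, removed by renormalization.
    """
    combined, conflict = {}, 0.0
    for h1, v1 in m1.items():
        for h2, v2 in m2.items():
            inter = h1 & h2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
    if conflict >= 1.0:
        raise ValueError("total conflict: the sources are incompatible")
    norm = 1.0 - conflict
    return {h: v / norm for h, v in combined.items()}
```

To fuse five classifiers, each model's per-class confidence would be turned into a mass function and the rule applied pairwise; the final decision is the hypothesis with the largest combined mass.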

A Study on a Quantified Structure Simulation Technique for Product Design Based on Augmented Reality (제품 디자인을 위한 증강현실 기반 정량구조 시뮬레이션 기법에 대한 연구)

  • Lee, Woo-Hun
    • Archives of design research / v.18 no.3 s.61 / pp.85-94 / 2005
  • Most product designers use a 3D CAD system as an indispensable design tool nowadays, and many new products are developed through a concurrent engineering process. However, it is very difficult for novice designers to get a sense of reality from modeling objects shown on computer screens. This intangibility problem comes from the lack of haptic interaction and of contextual information about the real space, because designers tend to do 3D modeling work only in the virtual space of a 3D CAD system. To address this problem, this research investigates the possibility of interactive quantified structure simulation for product design using AR (augmented reality), which can register a 3D CAD model in the real space. We built a quantified structure simulation system based on AR and conducted a series of experiments to measure how accurately humans perceive and adjust the size of virtual objects under varied conditions in the AR environment. The participants adjusted a virtual cube to a reference real cube within 1.3% relative error (5.3% relative standard deviation), strong evidence that participants can perceive the size of a virtual object very accurately. Furthermore, we found that it is easier to perceive the size of a virtual object when plenty of real reference objects are present rather than few, and when using an LCD panel rather than an HMD. As a case study exploring potential applications, we applied the simulation system to identify preference characteristics for the appearance design of a home-service robot. There was significant variance in participants' preferred robot appearance, presumably due to the lack of typicality of the robot image, and several characteristic groups were segmented by cluster analysis.
On the other hand, it was an interesting finding that participants showed significantly different preferences for a robot with arms versus an armless robot, and there was a very strong correlation between robot height and arm length, as in the human body.

Research Status and Future Subjects to Predict Pest Occurrences in Agricultural Ecosystems Under Climate Change (기후변화에 따른 농업생태계 내 해충 발생 예측을 위한 연구 현황 및 향후 과제)

  • Jung, Jong-Kook;Lee, Hyoseok;Lee, Joon-Ho
    • Korean Journal of Agricultural and Forest Meteorology / v.16 no.4 / pp.368-383 / 2014
  • Climate change is expected to affect the population density, phenology, distribution, morphological traits, reproduction, and genetics of insects, and even to drive insect extinctions. To develop novel research subjects for predicting climate change effects, basic biological and ecological data on insect species should be compiled and reviewed. For this reason, this study was conducted to collect the biological information on insect pests that is essential for predicting the potential damage they may cause in future environments. In addition, we compared domestic and foreign research trends regarding climate change effects and suggested future research subjects. Domestic studies were rather narrow in subject and were mostly based on short-term monitoring data used to determine relationships between insects and environmental variables. Foreign studies, on the other hand, covered various subjects in analyzing the effects of climate change, such as changes in insect distribution using long-term monitoring data, predictions using population parameters and models, and monitoring of changes in insect community structure. In conclusion, further research is needed to determine changes in the phenology, distribution, overwintering characteristics, and genetic structure of insects under climate change through the development of monitoring techniques. Development of population models for major or potential pests is also important for predicting climate change effects.

A Study on the Application of the AMOEBA Technique for Delineating the Unique Primary Zones for the DIF Zoning Regulation (기반시설부담구역제도 제1단계 유일범역 도출과정에서의 AMOEBA 기법 적용에 관한 모의실험 연구)

  • Lee, Seok-Jun;Choei, Nae-Young
    • Journal of Cadastre & Land InformatiX / v.47 no.2 / pp.5-18 / 2017
  • The AMOEBA approach in this study supplements the Hotspot method, which had not been fully capable of dealing with ecotone issues in designating Development Impact Fee (DIF) zones, as seen in the preceding study by Kim and Choei (2017). The AMOEBA procedure shares the common Getis-Ord statistic with the Hotspot technique but is better suited to delineating ecotones. For comparison, simulations are run with both methods for a series of scenarios over different analytic spatial units (here, square grids) from 100 m up to 400 m, and the zonal outcomes of the two methods are compared using a set of evaluative indicators. In terms of numerical scores, the performances of the two methods are largely comparable, except that the former is slightly superior in avoiding oversized spread of the selected zones, while the latter is slightly superior in ease of infrastructure installation. The causes and meanings of these differences in zonal determination remain to be investigated in extended studies that include in-depth field surveys.

An Expert System for the Estimation of the Growth Curve Parameters of New Markets (신규시장 성장모형의 모수 추정을 위한 전문가 시스템)

  • Lee, Dongwon;Jung, Yeojin;Jung, Jaekwon;Park, Dohyung
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.17-35 / 2015
  • Demand forecasting is the activity of estimating the quantity of a product or service that consumers will purchase over a certain period of time. Developing precise forecasting models is considered important, since corporations can make strategic decisions on new markets based on the future demand the models estimate. Many studies have developed market growth curve models, such as the Bass, Logistic, and Gompertz models, which estimate future demand when a market is in its early stage. Among them, the Bass model, which explains demand through two types of adopters, innovators and imitators, has been widely used in forecasting. Such models require sufficient demand observations to ensure qualified results; in the beginning of a new market, however, the observations are not sufficient for precise estimation. As an alternative, demand inferred from the most adjacent markets is often used as a reference. Reference markets can be those whose products are developed with the same categorical technologies: a market's demand may be expected to follow a pattern similar to a reference market's when adoption of the product is determined mainly by its underlying technology. However, this process does not always ensure satisfactory results, because the similarity between markets is judged by intuition and/or experience. There are two major drawbacks that human experts cannot effectively handle in this approach: the abundance of candidate reference markets to consider, and the difficulty of calculating the similarity between markets. First, there can be too many markets to consider when selecting reference markets. Markets in the same category of an industrial hierarchy are the usual candidates because they tend to be based on similar technologies,
but markets can be classified into different categories even when they rest on the same generic technologies, so markets in other categories also need to be considered as potential candidates. Second, even domain experts cannot consistently calculate the similarity between markets with their own qualitative standards. This inconsistency implies missing adjacent reference markets, which may lead to imprecise estimation of future demand; and even without missing reference markets, the new market's parameters can hardly be estimated from the reference markets without quantitative standards. For these reasons, this study proposes a case-based expert system that helps experts overcome both drawbacks in discovering reference markets. First, the system uses the Euclidean distance measure to calculate the similarity between markets. Based on their similarities, markets are grouped into clusters, and missing markets with the characteristics of each cluster are searched for; potential candidate reference markets are extracted and recommended to users. After iterating these steps, definite reference markets are determined according to the user's selection among the candidates, and finally the new market's parameters are estimated from the reference markets. Two techniques are used in this procedure: the clustering data-mining technique and the content-based filtering of recommender systems. The implemented system can determine the most adjacent markets based on whether a user accepts the candidate markets. Experiments were conducted with five ICT experts to validate the usefulness of the system. The experts were given a list of 16 ICT markets whose growth-curve parameters were to be estimated; for each market, they estimated the parameters first by intuition and then with the system.
A comparison of the results shows that the parameter estimates are more accurate when the experts use the system than when they guess without it.
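The two quantitative pieces of the proposed procedure, Euclidean distance for market similarity and the Bass growth curve whose parameters (p, q, m) would be transferred from the chosen reference market, can be sketched as below; the function names and example figures are illustrative, not taken from the paper.

```python
import numpy as np

def bass_adoptions(p, q, m, periods):
    """Discrete Bass diffusion: new adopters per period.

    p: innovation coefficient, q: imitation coefficient, m: market potential.
    Each period, n(t) = (p + q * N/m) * (m - N), where N is cumulative adoption.
    """
    cumulative, out = 0.0, []
    for _ in range(periods):
        n_t = (p + q * cumulative / m) * (m - cumulative)
        cumulative += n_t
        out.append(n_t)
    return np.array(out)

def nearest_reference_market(target, candidates):
    """Pick the candidate whose feature vector is closest in Euclidean distance."""
    target = np.asarray(target, dtype=float)
    dists = {name: np.linalg.norm(np.asarray(f, dtype=float) - target)
             for name, f in candidates.items()}
    return min(dists, key=dists.get)
```

A new market's early demand could then be projected with `bass_adoptions` using the (p, q) estimated on whichever reference market `nearest_reference_market` returns, scaled to the new market's potential m.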

Cellular Imaging of Gold Nanoparticles Using a Compact Soft X-Ray Microscope (연 X-선 현미경을 이용한 금 나노입자 세포영상)

  • Kwon, Young-Man;Kim, Han-Kyong;Kim, Kyong-Woo;Kim, Sun-Hee;Yin, Hong-Hua;Chon, Kwon-Su;Kang, Sung-Hoon;Park, Seong-Hoon;Juhng, Seon-Kwan;Yoon, Kwon-Ha
    • Applied Microscopy / v.38 no.3 / pp.235-243 / 2008
  • A compact soft x-ray microscope operating in the 'water window' wavelength region (2.3~4.4 nm) was used to observe cells with nano-scale spatial resolution, in order to obtain cellular images of colloidal gold nanoparticles. In this wavelength region, the colloidal gold nanoparticles showed higher contrast, with more than 7 times lower transmission than cellular protein. The structure and thickness of the cell membrane of Coscinodiscus oculoides (a diatom) and of red blood cells were seen clearly. Gold nanoparticles within HT1080 and MDA-MB 231 cells were also clearly visible under the soft x-ray microscope; the nanoparticles were aggregated within vesicles by endocytosis.

Validation of Suitable Zooplankton Enumeration Method for Species Diversity Study Using Rarefaction Curve and Extrapolation (종 다양성 평가를 위한 호소 생태계 동물플랑크톤 조사 방법 연구: 희박화 분석(rarefaction analysis)을 이용한 적정 시료 농축 정도 및 부차 시료 추출량의 검증)

  • Hye-Ji Oh;Yerim Choi;Hyunjoon Kim;Geun-Hyeok Hong;Young-Seuk Park;Yong-Jae Kim;Kwang-Hyeon Chang
    • Korean Journal of Ecology and Environment / v.55 no.4 / pp.274-284 / 2022
  • Through sample-size-based rarefaction analyses, we sought to suggest the appropriate degree of sample concentration and sub-sample extraction, so as to estimate zooplankton species diversity more accurately in biodiversity assessments. When we collected zooplankton from three reservoirs with different environmental characteristics, the estimated species richness (S) and Shannon's H' values showed different patterns of change depending on the amount of sub-sample extracted from the whole sample in each reservoir; in all cases, however, the diversity indices were highest when the largest sub-sample was extracted. Rarefaction analysis of sample coverage showed that, in a deep eutrophic reservoir (Juam) with high zooplankton species and individual numbers, only 1 mL of sub-sample from 100 mL of concentrated sample represented 99.8% of the whole sample. In Soyang reservoir, by contrast, which had very low species and individual numbers, the representation was relatively low at 97% even when 10 mL of sub-sample was extracted from the same amount of concentrated sample. Thus, the representativeness of a sub-sample of the whole zooplankton sample varies with the individual density of the field-collected sample. If the degree of sample concentration and the amount of sub-sample extraction are adjusted according to the collected individual density, errors in comparing species numbers and diversity indices among different water bodies can be minimized.
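Sample-size-based rarefaction of the kind used here has a closed hypergeometric form: the expected species richness in a random subsample of n individuals drawn from the counts. A minimal sketch (not the authors' code):

```python
from math import comb

def rarefied_richness(counts, n):
    """Expected species richness in a random subsample of n individuals.

    counts: individuals per species in the full sample (classic hypergeometric
    rarefaction). A species is 'missed' with probability C(N - Ni, n) / C(N, n),
    so its expected contribution is one minus that.
    """
    N = sum(counts)
    if not 0 < n <= N:
        raise ValueError("subsample size must be between 1 and the total count")
    return sum(1.0 - comb(N - Ni, n) / comb(N, n) for Ni in counts)
```

Plotting `rarefied_richness` against n gives the rarefaction curve; comparing reservoirs at a common n is what makes their richness values comparable despite different densities.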

A Study on Developing a VKOSPI Forecasting Model via GARCH Class Models for Intelligent Volatility Trading Systems (지능형 변동성트레이딩시스템개발을 위한 GARCH 모형을 통한 VKOSPI 예측모형 개발에 관한 연구)

  • Kim, Sun-Woong
    • Journal of Intelligence and Information Systems / v.16 no.2 / pp.19-32 / 2010
  • Volatility plays a central role in both academic and practical applications, especially in pricing financial derivatives and trading volatility strategies. This study presents a mechanism based on generalized autoregressive conditional heteroskedasticity (GARCH) models that can enhance the performance of intelligent volatility trading systems by predicting Korean stock market volatility more accurately. In particular, we embed the concept of volatility asymmetry, documented widely in the literature, into our model. The newly developed Korean stock market volatility index of the KOSPI 200, the VKOSPI, is used as a volatility proxy. It is the price of a linear portfolio of KOSPI 200 index options and measures the effect of the expectations of dealers and option traders on stock market volatility over 30 calendar days. The KOSPI 200 index options market started in 1997 and has become the most actively traded index options market in the world, with a trading volume of more than 10 million contracts a day. Analyzing the VKOSPI is therefore of great importance in understanding the volatility inherent in option prices and can suggest trading ideas for futures and option dealers. Using the VKOSPI as a volatility proxy also avoids the statistical estimation problems associated with other measures of volatility, since the VKOSPI is the model-free expected volatility of market participants, calculated directly from transacted option prices. This study estimates symmetric and asymmetric GARCH models for the KOSPI 200 index from January 2003 to December 2006 by maximum likelihood. The asymmetric GARCH models include the GJR-GARCH model of Glosten, Jagannathan and Runkle, the exponential GARCH model of Nelson, and the power autoregressive conditional heteroskedasticity (ARCH) model of Ding, Granger and Engle; the symmetric model is the basic GARCH(1, 1).
Tomorrow's forecasted value and change direction of stock market volatility are obtained by recursive GARCH specifications from January 2007 to December 2009 and compared with the VKOSPI. Empirical results indicate that negative unanticipated returns increase volatility more than positive return shocks of equal magnitude decrease it, confirming the existence of volatility asymmetry in the Korean stock market. The point value and change direction of tomorrow's VKOSPI are forecasted by the GARCH models, and a volatility trading system is developed from the forecasted change direction: if the VKOSPI is expected to rise tomorrow, a long straddle or strangle position is established; if it is expected to fall, a short straddle or strangle position is taken. Total profit is calculated as the cumulative sum of VKOSPI percentage changes: if the forecasted direction is correct, the absolute value of the VKOSPI percentage change is added to the trading profit; otherwise it is subtracted. For the in-sample period, the power ARCH model fits best on the statistical metric of Mean Squared Prediction Error (MSPE), while the exponential GARCH model shows the highest Mean Correct Prediction (MCP). The power ARCH model also fits best for the out-of-sample period and provides the highest probability of correctly predicting the direction of tomorrow's VKOSPI change; overall, it shows the best fit for the VKOSPI. All the GARCH models generate trading profits for the volatility trading system, and the exponential GARCH model shows the best in-sample performance, an annual profit of 197.56%. All models except the exponential GARCH model remain profitable out of sample, where the power ARCH model shows the largest annual trading profit of 38%.
The volatility clustering and asymmetry found in this research reflect volatility non-linearity. This further suggests that combining asymmetric GARCH models with artificial neural networks could significantly enhance the performance of the suggested volatility trading system, since artificial neural networks have been shown to model nonlinear relationships effectively.
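The basic symmetric GARCH(1, 1) recursion that anchors the model family above can be sketched as follows; the parameter values in the test are illustrative, and the asymmetric variants (GJR, EGARCH, power ARCH) add leverage terms to the same recursion.

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance path for GARCH(1, 1):

        sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}

    Returns the in-sample variance path plus the one-step-ahead forecast
    as the final element. Requires alpha + beta < 1 for stationarity.
    """
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty(len(r) + 1)
    sigma2[0] = omega / (1.0 - alpha - beta)  # unconditional variance as the seed
    for t in range(len(r)):
        sigma2[t + 1] = omega + alpha * r[t] ** 2 + beta * sigma2[t]
    return sigma2
```

A direction signal of the kind the trading system uses would compare `sigma2[-1]` (the one-step-ahead forecast) with the current level: an expected rise maps to a long straddle or strangle, an expected fall to a short one.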