• Title/Summary/Keyword: NN techniques


Speaker Detection and Recognition for a Welfare Robot

  • Sugisaka, Masanori; Fan, Xinjian / Proceedings of the Institute of Control, Robotics and Systems Conference / 2003.10a / pp.835-838 / 2003
  • Computer vision and natural-language dialogue play an important role in friendly human-machine interfaces for service robots. In this paper we describe an integrated face detection and face recognition system for a welfare robot, which has also been combined with the robot's speech interface. Our approach to face detection combines a neural network (NN) and a genetic algorithm (GA): the NN serves as a face filter while the GA is used to search the image efficiently. Once a face is detected, an embedded Hidden Markov Model (EHMM) is used to determine its identity. A real-time system has been created by combining the face detection and recognition techniques. When triggered by the speaker's voice commands, it takes an image from the camera, finds the face in the image, and recognizes it. Experiments in an indoor environment with complex backgrounds showed that a recognition rate of more than 88% can be achieved.

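The entry above combines an NN face filter with a GA image search. The sketch below illustrates only that search loop in Python: a GA evolves candidate windows (x, y, scale) and a stand-in neural filter scores them. The filter weights are random placeholders rather than the trained network of the paper, and the EHMM recognition stage is omitted; all names and parameters here are illustrative assumptions.

```python
# Minimal sketch: a GA searches (x, y, scale) candidate windows while an NN
# "face filter" scores each window. The filter weights are random placeholders,
# not a trained face detector as in the paper.
import numpy as np

rng = np.random.default_rng(0)
IMG = rng.random((240, 320))           # stand-in for a grayscale camera frame
WIN = 20                               # filter input size (assumption)

# Placeholder NN face filter: one hidden layer with random weights.
W1, b1 = rng.standard_normal((WIN * WIN, 16)), np.zeros(16)
W2, b2 = rng.standard_normal(16), 0.0

def face_score(x, y, s):
    """Crop a square window, resize it to WIN x WIN, run the toy filter."""
    h = int(WIN * s)
    patch = IMG[y:y + h, x:x + h]
    if patch.shape != (h, h):
        return -np.inf                 # window falls outside the image
    idx = np.linspace(0, h - 1, WIN).astype(int)   # crude nearest-neighbour resize
    patch = patch[np.ix_(idx, idx)].ravel()
    hidden = np.tanh(patch @ W1 + b1)
    return float(hidden @ W2 + b2)     # higher = more "face-like"

def random_individual():
    return np.array([rng.integers(0, 300), rng.integers(0, 220), rng.uniform(1.0, 4.0)])

def ga_search(pop_size=30, generations=25):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        fitness = np.array([face_score(int(p[0]), int(p[1]), p[2]) for p in pop])
        order = np.argsort(fitness)[::-1]
        parents = [pop[i] for i in order[:pop_size // 2]]       # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.choice(len(parents), 2, replace=False)
            child = (parents[a] + parents[b]) / 2               # arithmetic crossover
            child = np.clip(child + rng.normal(0, [5, 5, 0.1]), # mutation + bounds
                            [0, 0, 1.0], [300, 220, 4.0])
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda p: face_score(int(p[0]), int(p[1]), p[2]))

print("best window (x, y, scale):", ga_search())
```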

Study on Continuous Nearest Neighbor Query on Trajectory of Moving Objects (이동객체의 궤적에 대한 연속 최근접 질의에 관한 연구)

  • Jeong, Ji-Mun / Proceedings of the Society of Digital Policy and Management Conference / 2005.06a / pp.517-530 / 2005
  • Research on nearest neighbor (NN) queries, which are often used in LBS systems, has been conducted for some time. However, conventional NN query processing techniques are usually of little use in a moving-object management system for LBS, since their results may be invalidated as soon as the query and data objects move. To solve this problem, in this paper we propose a new nearest neighbor query processing technique, called CTNN, which supports continuous nearest neighbor queries over trajectories. The proposed technique consists of an Approximate CTNN technique, which has a quick response time, and an Exact CTNN technique, which makes it possible to search for nearest neighbor objects accurately. Experimental results using GSTD datasets showed that the Exact CTNN technique has high accuracy but a somewhat slower response time, and that the Approximate CTNN technique has lower accuracy than the Exact CTNN but responds more quickly.

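As an illustration of the continuous trajectory NN idea described above, the following Python sketch implements only a simple time-sampled approximation: at each sampled timestamp it finds the moving object nearest to the query trajectory, then merges consecutive timestamps with the same answer into intervals. It is not the paper's Approximate or Exact CTNN algorithm; the trajectories are synthetic and all names are illustrative.

```python
# Minimal sketch of a time-sampled (approximate) continuous NN query over
# trajectories: at each sampled timestamp, find the moving object nearest to
# the query trajectory, then report intervals where the answer stays the same.
import numpy as np

rng = np.random.default_rng(1)
T = np.linspace(0, 10, 101)                        # sampled timestamps

def make_trajectory():
    """Linear 2-D trajectory sampled at the shared timestamps."""
    start, end = rng.uniform(0, 100, 2), rng.uniform(0, 100, 2)
    return np.outer(1 - T / 10, start) + np.outer(T / 10, end)

query = make_trajectory()
objects = {f"obj{i}": make_trajectory() for i in range(5)}

def approximate_ctnn(query, objects):
    """Return a list of (t_start, t_end, object_id) intervals."""
    names = list(objects)
    dists = np.stack([np.linalg.norm(objects[n] - query, axis=1) for n in names])
    nearest = dists.argmin(axis=0)                 # nearest object at each timestamp
    intervals, start = [], 0
    for i in range(1, len(T)):
        if nearest[i] != nearest[start]:
            intervals.append((T[start], T[i - 1], names[nearest[start]]))
            start = i
    intervals.append((T[start], T[-1], names[nearest[start]]))
    return intervals

for t0, t1, obj in approximate_ctnn(query, objects):
    print(f"[{t0:4.1f}, {t1:4.1f}] nearest: {obj}")
```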

Optimal k-Nearest Neighborhood Classifier Using Genetic Algorithm (유전알고리즘을 이용한 최적 k-최근접이웃 분류기)

  • Park, Chong-Sun; Huh, Kyun / Communications for Statistical Applications and Methods / v.17 no.1 / pp.17-27 / 2010
  • Feature selection and feature weighting are useful techniques for improving the classification accuracy of the k-Nearest Neighbor (k-NN) classifier. The main purpose of feature selection and feature weighting is to reduce the number of features by eliminating irrelevant and redundant features, while simultaneously maintaining or enhancing classification accuracy. In this paper, a novel hybrid approach based on a genetic algorithm is proposed for simultaneous feature selection, feature weighting, and choice of k in the k-NN classifier. The results indicate that the proposed algorithm is comparable with, and often superior to, existing classifiers with or without feature selection and feature weighting capability.
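A minimal sketch of the kind of hybrid the entry above describes: a GA jointly evolves per-feature weights (a weight near zero effectively deselects a feature) and the neighborhood size k, scoring each chromosome by cross-validated k-NN accuracy. The GA operators, parameters, and the Iris dataset are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch: each chromosome holds one weight per feature plus a k value;
# fitness is the cross-validated accuracy of k-NN on the weighted features.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = load_iris(return_X_y=True)
n_features = X.shape[1]

def fitness(chrom):
    weights, k = chrom[:-1], int(round(chrom[-1]))
    k = max(1, min(k, 15))
    Xw = X * weights                                   # weighted (and soft-selected) features
    clf = KNeighborsClassifier(n_neighbors=k)
    return cross_val_score(clf, Xw, y, cv=5).mean()

def evolve(pop_size=20, generations=15):
    pop = np.column_stack([rng.random((pop_size, n_features)),
                           rng.integers(1, 16, pop_size)])
    for _ in range(generations):
        scores = np.array([fitness(c) for c in pop])
        parents = pop[np.argsort(scores)[::-1][:pop_size // 2]]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(n_features + 1) < 0.5, a, b)  # uniform crossover
            child[:-1] = np.clip(child[:-1] + rng.normal(0, 0.1, n_features), 0, 1)
            child[-1] += rng.integers(-1, 2)                          # mutate k by -1/0/+1
            children.append(child)
        pop = np.vstack([parents, children])
    best = max(pop, key=fitness)
    return best[:-1], int(round(best[-1])), fitness(best)

weights, k, acc = evolve()
print("feature weights:", np.round(weights, 2), "k:", k, "CV accuracy:", round(acc, 3))
```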

A Study on Optimal Design of Composite Materials using Neural Networks and Genetic Algorithms (신경회로망과 유전자 알고리즘을 이용한 복합재료의 최적설계에 관한 연구)

  • 김민철; 주원식; 장득열; 조석수 / Proceedings of the Korean Society of Precision Engineering Conference / 1997.04a / pp.501-507 / 1997
  • Composite materials have excellent mechanical properties, including high tensile strength and specific strength. Impact loads, in particular, can be expected in many of their engineering applications. The suitability of a composite material for such applications is determined not only by the usual parameters but also by its impact energy-absorbing properties. A composite material under impact load shows poor mechanical behavior, so its structure needs to be tailored. A genetic algorithm (GA) is a probabilistic optimization technique based on the principles of natural genetics and natural selection, and a neural network (NN) is useful for making predictions from learned data. Therefore, this study presents optimization techniques based on genetic algorithms and neural networks for the minimum-stiffness design of laminated composite materials.

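A minimal sketch of the NN-surrogate-plus-GA idea in the entry above: a neural network is fitted to sampled design points and a GA then searches the surrogate for a good laminate lay-up. A toy analytic function stands in for the real structural/impact analysis, so every function and parameter here is an assumption for illustration only.

```python
# Minimal sketch: fit an NN surrogate on sampled (ply angles -> response) data,
# then let a GA minimise the surrogate prediction. The "true" response is a toy
# function, not a laminate impact/stiffness analysis.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
N_PLIES = 4

def true_response(angles_deg):
    """Placeholder for the real structural analysis (e.g. FE impact simulation)."""
    a = np.asarray(angles_deg, dtype=float)
    return np.sum((a - 45.0) ** 2) / 1000.0 + 0.1 * np.sin(a).sum()

# 1) Sample designs and train the surrogate NN on them.
X = rng.uniform(0, 90, size=(300, N_PLIES))            # ply angles in degrees
y = np.array([true_response(x) for x in X])
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                         random_state=0).fit(X, y)

# 2) GA minimises the surrogate prediction.
def ga_minimise(pop_size=40, generations=30):
    pop = rng.uniform(0, 90, size=(pop_size, N_PLIES))
    for _ in range(generations):
        pred = surrogate.predict(pop)
        parents = pop[np.argsort(pred)[:pop_size // 2]]           # keep the best half
        pairs = parents[rng.integers(len(parents), size=(pop_size - len(parents), 2))]
        children = pairs.mean(axis=1) + rng.normal(0, 2.0, (len(pairs), N_PLIES))
        pop = np.clip(np.vstack([parents, children]), 0, 90)
    return pop[np.argmin(surrogate.predict(pop))]

best = ga_minimise()
print("best lay-up (deg):", np.round(best, 1),
      "true response:", round(true_response(best), 4))
```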

Study on Continuous Nearest Neighbor Query on Trajectory of Moving Objects (이동객체의 궤적에 대한 연속 최근접 질의에 관한 연구)

  • Chung, Ji-Moon / Journal of Digital Convergence / v.3 no.1 / pp.149-163 / 2005
  • Research on nearest neighbor (NN) queries, which are often used in LBS systems, has been conducted for some time. However, conventional NN query processing techniques are usually of little use in a moving-object management system for LBS, since their results may be invalidated as soon as the query and data objects move. To solve this problem, in this paper we propose a new nearest neighbor query processing technique, called CTNN, which supports continuous nearest neighbor queries over trajectories. The proposed technique consists of an Approximate CTNN technique, which has a quick response time, and an Exact CTNN technique, which makes it possible to search for nearest neighbor objects accurately. Experimental results using GSTD datasets showed that the Exact CTNN technique has high accuracy but a somewhat slower response time, and that the Approximate CTNN technique has lower accuracy than the Exact CTNN but responds more quickly.


Short-term Electric Load Forecasting Using Data Mining Technique

  • Kim, Cheol-Hong; Koo, Bon-Gil; Park, June-Ho / Journal of Electrical Engineering and Technology / v.7 no.6 / pp.807-813 / 2012
  • In this paper, we introduce data mining techniques for short-term load forecasting (STLF). First, we use the K-means algorithm to classify historical load data by season into four patterns. Second, we use the k-NN algorithm to divide the classified data into four patterns for Mondays, other weekdays, Saturdays, and Sundays. The classified data are used to develop a time series forecasting model. We then forecast the hourly load on weekdays and weekends, excluding special holidays. The historical load data are used as inputs for load forecasting. We compare our results with the KEPCO hourly record for 2008 and conclude that our approach is effective.
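A minimal sketch of the two-stage idea in the entry above: K-means groups historical daily load profiles into patterns, and a nearest-neighbor step then forecasts the next day from historically similar days within the same cluster. The synthetic load data and the neighbor-averaging forecast rule are illustrative assumptions, not the paper's KEPCO data or exact model.

```python
# Minimal sketch: K-means clusters daily 24-hour load profiles; k-NN then finds
# historically similar days in the same cluster and averages their "next day"
# profiles as the forecast. Load data are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
days = 365
hours = np.arange(24)
# Synthetic hourly loads: seasonal level x daily shape + noise.
season = 1.0 + 0.3 * np.sin(2 * np.pi * np.arange(days) / 365)
profiles = season[:, None] * (100 + 30 * np.sin((hours - 6) / 24 * 2 * np.pi)) \
           + rng.normal(0, 3, (days, 24))

# Stage 1: cluster days into 4 load patterns.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(profiles)

# Stage 2: forecast day d+1 from days that were historically similar to day d.
def forecast_next_day(d, k=5):
    query = profiles[d]
    cluster = kmeans.predict(query[None, :])[0]
    members = np.where(kmeans.labels_ == cluster)[0]
    members = members[(members != d) & (members + 1 < days)]    # need a 'next day'
    nn = NearestNeighbors(n_neighbors=min(k, len(members))).fit(profiles[members])
    _, idx = nn.kneighbors(query[None, :])
    return profiles[members[idx[0]] + 1].mean(axis=0)           # average following days

pred = forecast_next_day(200)
actual = profiles[201]
mape = np.mean(np.abs((actual - pred) / actual)) * 100
print(f"next-day MAPE: {mape:.2f}%")
```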

CNN-based Adaptive K for Improving Positioning Accuracy in W-kNN-based LTE Fingerprint Positioning

  • Kwon, Jae Uk; Chae, Myeong Seok; Cho, Seong Yun / Journal of Positioning, Navigation, and Timing / v.11 no.3 / pp.217-227 / 2022
  • In order to provide location-based services both indoors and outdoors, it is important to provide position information for the terminal regardless of where it is located. Among the wireless/mobile communication resources used for this purpose, the Long Term Evolution (LTE) signal is a representative infrastructure that can overcome spatial limitations, but positioning based on the location of the base station has the disadvantage of low accuracy. Therefore, the fingerprinting technique, a pattern recognition technology, has been widely used. The simplest yet most widely applied algorithm among fingerprint positioning technologies is k-Nearest Neighbors (kNN). In the kNN algorithm, however, it is difficult to find the optimal K value with the lowest positioning error for each location to be estimated, so an appropriate K value is generally fixed and used. Since the optimal K value cannot be applied to each estimated location, the accuracy of the overall estimated location information is lowered. Considering this problem, this paper proposes a technique for adaptively varying the K value by using a Convolutional Neural Network (CNN) model, one of the Artificial Neural Network (ANN) techniques. First, using the signal information of the measurements obtained in the service area, an image is created according to the Physical Cell Identity (PCI) and band combination, and an answer label for supervised learning is created. Then, a CNN is modeled to classify K values from the image information of the measurements. The performance of the proposed technique is verified on actual data measured in the testbed. The results show that the proposed technique improves positioning performance compared to using a fixed K value.
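A minimal sketch of W-kNN fingerprint positioning in which the neighborhood size K is supplied per query by a separate model, as the entry above proposes. The CNN that maps PCI/band measurement images to K is replaced here by a placeholder function, and the fingerprint database is synthetic; all names and constants are illustrative assumptions.

```python
# Minimal sketch: weighted k-NN fingerprint positioning where K comes from an
# external model. A placeholder stands in for the paper's CNN over PCI/band
# images, and the RSRP fingerprint database is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_ref, n_cells = 400, 8
ref_pos = rng.uniform(0, 100, (n_ref, 2))                      # reference point coordinates (m)
cells = rng.uniform(0, 100, (n_cells, 2))                      # synthetic cell sites

def rsrp(p):
    """Path-loss-like RSRP vector measured at position p."""
    d = np.linalg.norm(cells - p, axis=1) + 1.0
    return -60 - 30 * np.log10(d) + rng.normal(0, 2, n_cells)

db = np.array([rsrp(p) for p in ref_pos])                      # fingerprint database

def predict_k(measurement):
    """Placeholder for the CNN that maps a measurement image to a K value."""
    return 4                                                   # fixed stand-in

def wknn_position(measurement, k):
    d = np.linalg.norm(db - measurement, axis=1)               # signal-space distances
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)                                  # inverse-distance weights
    return (ref_pos[idx] * w[:, None]).sum(axis=0) / w.sum()

true_pos = np.array([37.0, 62.0])
meas = rsrp(true_pos)
est = wknn_position(meas, predict_k(meas))
print("estimated:", np.round(est, 1),
      "error (m):", round(np.linalg.norm(est - true_pos), 2))
```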

Assessment of Forest Biomass using k-Neighbor Techniques - A Case Study in the Research Forest at Kangwon National University - (k-NN기법을 이용한 산림바이오매스 자원량 평가 - 강원대학교 학술림을 대상으로 -)

  • Seo, Hwanseok; Park, Donghwan; Yim, Jongsu; Lee, Jungsoo / Journal of Korean Society of Forest Science / v.101 no.4 / pp.547-557 / 2012
  • This study aimed to estimate forest biomass using the k-Nearest Neighbor (k-NN) algorithm. Multiple data sources were used for the analysis, such as a forest type map, field survey data, and Landsat TM data. The accuracy of the biomass estimates was evaluated with respect to forest stratification, horizontal reference area (HRA), and spatial filtering. Forests were divided into three types: coniferous, broadleaved, and Korean pine (Pinus koraiensis) forests. The applied radii of the HRA were 4 km, 5 km, and 10 km. The estimated biomass and mean bias for coniferous forests were 222 t/ha and 1.8 t/ha when k=8, the HRA radius was 4 km, and a 5×5 modal filter was applied. The estimated forest biomass of Korean pine was 245 t/ha when k=8 and the HRA radius was 4 km. The estimated mean biomass and mean bias for broadleaved forests were 251 t/ha and -1.6 t/ha, respectively, when k=6 and the HRA radius was 10 km. The total forest biomass estimated by the k-NN method was 799,000 t, or 237 t/ha. The mean biomass estimated by the k-NN method was about 1 t/ha higher than that derived from the field survey data.
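A minimal sketch of k-NN biomass imputation as described above: each target pixel receives the distance-weighted mean biomass of the k field plots that are most similar in spectral space, restricted to plots within a horizontal reference area (HRA) radius. The spectral values and plot biomasses are synthetic stand-ins for the Landsat TM and field-survey data of the study.

```python
# Minimal sketch of k-NN biomass imputation with an HRA restriction: biomass of
# a target pixel is the distance-weighted mean of its k spectrally nearest
# field plots within the HRA radius. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_plots = 200
plot_xy = rng.uniform(0, 20_000, (n_plots, 2))                 # plot coordinates (m)
plot_spec = rng.uniform(0, 255, (n_plots, 6))                  # 6 Landsat-like bands
plot_biomass = 50 + 0.5 * plot_spec[:, 3] + rng.normal(0, 10, n_plots)   # t/ha

def knn_biomass(pixel_xy, pixel_spec, k=8, hra_radius=4_000):
    """Estimate biomass for one pixel from spectrally similar plots within the HRA."""
    in_hra = np.linalg.norm(plot_xy - pixel_xy, axis=1) <= hra_radius
    cand_spec, cand_bio = plot_spec[in_hra], plot_biomass[in_hra]
    if len(cand_bio) == 0:
        return np.nan                                          # no plots inside the HRA
    d = np.linalg.norm(cand_spec - pixel_spec, axis=1)          # spectral distance
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)                                   # inverse-distance weights
    return float((cand_bio[idx] * w).sum() / w.sum())

pixel_xy = np.array([10_000.0, 10_000.0])
pixel_spec = rng.uniform(0, 255, 6)
print("estimated biomass (t/ha):", round(knn_biomass(pixel_xy, pixel_spec), 1))
```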

Estimation of the Input Wave Height of the Wave Generator for Regular Waves by Using Artificial Neural Networks and Gaussian Process Regression (인공신경망과 가우시안 과정 회귀에 의한 규칙파의 조파기 입력파고 추정)

  • Oh, Jung-Eun; Oh, Sang-Ho / Journal of Korean Society of Coastal and Ocean Engineers / v.34 no.6 / pp.315-324 / 2022
  • The experimental data obtained in a wave flume were analyzed using machine learning techniques to establish a model that predicts the input wave height of the wavemaker from waves that have undergone wave shoaling, and to verify the performance of the established model. For this purpose, an artificial neural network (NN), the most representative machine learning technique, and Gaussian process regression (GPR), a non-parametric regression method, were applied. The predictive performance of the two models was then compared. The analysis was performed independently for the case of using all the data at once and for the case of classifying the data with a criterion related to the occurrence of wave breaking. When the data were not classified, the error between the input wave height at the wavemaker and the measured value was relatively large for both the NN and GPR models. On the other hand, when the data were divided into non-breaking and breaking conditions, the accuracy of predicting the input wave height was greatly improved. Of the two models, the overall performance of the GPR model was better than that of the NN model.
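A minimal sketch of the comparison described above: an artificial neural network and a Gaussian process regression model are fitted to predict the wavemaker input wave height from measured wave parameters, and their errors are compared. The data and input variables here are synthetic assumptions; the study used wave-flume measurements split by breaking condition.

```python
# Minimal sketch: fit an NN and a GPR model to predict the wavemaker input wave
# height from measured wave height and period, then compare test errors.
# The data and the underlying relation are synthetic placeholders.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 300
measured_H = rng.uniform(0.05, 0.3, n)          # measured wave height after shoaling (m)
period_T = rng.uniform(1.0, 3.0, n)             # wave period (s)
# Toy relation between measurements and the wavemaker input height.
input_H = measured_H / (1.0 + 0.3 * measured_H / period_T) + rng.normal(0, 0.005, n)

X = np.column_stack([measured_H, period_T])
X_tr, X_te, y_tr, y_te = train_test_split(X, input_H, test_size=0.3, random_state=0)

nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000,
                  random_state=0).fit(X_tr, y_tr)
gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), alpha=1e-4,
                               normalize_y=True).fit(X_tr, y_tr)

for name, model in [("NN", nn), ("GPR", gpr)]:
    rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
    print(f"{name} RMSE: {rmse:.4f} m")
```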

The Effect of Data Size on the k-NN Predictability: Application to Samsung Electronics Stock Market Prediction (데이터 크기에 따른 k-NN의 예측력 연구: 삼성전자주가를 사례로)

  • Chun, Se-Hak / Journal of Intelligence and Information Systems / v.25 no.3 / pp.239-251 / 2019
  • Statistical methods such as moving averages, Kalman filtering, exponential smoothing, regression analysis, and ARIMA (autoregressive integrated moving average) have been used for stock market prediction. However, these statistical methods have not produced superior performance. In recent years, machine learning techniques have been widely used in stock market prediction, including artificial neural networks, SVM, and genetic algorithms. In particular, a case-based reasoning method known as k-nearest neighbor is also widely used for stock price prediction. Case-based reasoning retrieves several similar cases from previous cases when a new problem occurs and combines the class labels of the similar cases to create a classification for the new problem. However, case-based reasoning has some problems. First, it tends to search for a fixed number of neighbors in the observation space and always selects the same number of neighbors rather than the best similar neighbors for the target case, so it may have to take more cases into account even when fewer applicable cases are available. Second, it may select neighbors that are far away from the target case. Thus, case-based reasoning does not guarantee an optimal pseudo-neighborhood for various target cases, and predictability can be degraded by deviation from the desired similar neighbors. This paper examines how the size of the learning data affects stock price predictability through k-nearest neighbor and compares the predictability of k-nearest neighbor with that of the random walk model according to the size of the learning data and the number of neighbors. In this study, Samsung Electronics stock prices were predicted by dividing the learning dataset into two types. For the prediction of the next day's closing price, we used four variables: the opening value, daily high, daily low, and daily close. In the first experiment, data from January 1, 2000 to December 31, 2017 were used for the learning process. In the second experiment, data from January 1, 2015 to December 31, 2017 were used. The test data cover January 1, 2018 to August 31, 2018 for both experiments. We compared the performance of k-NN with the random walk model using the two learning datasets. The mean absolute percentage error (MAPE) was 1.3497 for the random walk model and 1.3570 for k-NN in the first experiment, when the learning data were small. However, in the second experiment, when the learning data were large, the MAPE for the random walk model was 1.3497 and that for k-NN was 1.2928. These results show that predictive power is higher when more learning data are used than when less learning data are used. This paper also shows that k-NN generally produces better predictive power than the random walk model for larger learning datasets, but not when the learning dataset is relatively small. Future studies need to consider macroeconomic variables related to stock price forecasting in addition to the opening, low, high, and closing prices. Also, to produce better results, it is recommended that the k-nearest neighbor method find nearest neighbors using a second-step filtering method that considers fundamental economic variables as well as a sufficient amount of learning data.
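A minimal sketch of the comparison in the entry above: a k-NN regressor predicts the next day's closing price from today's open/high/low/close, and its MAPE is compared with a random-walk baseline (tomorrow's close equals today's close). A synthetic random walk stands in for the Samsung Electronics series, and k and the train/test split are illustrative assumptions.

```python
# Minimal sketch: k-NN regression on today's OHLC to predict tomorrow's close,
# evaluated by MAPE against a random-walk baseline. Prices are synthetic.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
n = 1000
close = 50_000 * np.exp(np.cumsum(rng.normal(0, 0.01, n)))        # synthetic closes
open_ = close * (1 + rng.normal(0, 0.003, n))
high = np.maximum(open_, close) * (1 + np.abs(rng.normal(0, 0.003, n)))
low = np.minimum(open_, close) * (1 - np.abs(rng.normal(0, 0.003, n)))

X = np.column_stack([open_, high, low, close])[:-1]               # today's OHLC
y = close[1:]                                                     # tomorrow's close
split = int(0.8 * len(X))
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

knn = KNeighborsRegressor(n_neighbors=5, weights="distance").fit(X_tr, y_tr)

def mape(actual, pred):
    return np.mean(np.abs((actual - pred) / actual)) * 100

print("k-NN MAPE:        ", round(mape(y_te, knn.predict(X_te)), 4))
print("random-walk MAPE: ", round(mape(y_te, X_te[:, 3]), 4))     # predict = today's close
```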