• Title/Summary/Keyword: vector computer


Scalable Collaborative Filtering Technique based on Adaptive Clustering (적응형 군집화 기반 확장 용이한 협업 필터링 기법)

  • Lee, O-Joun;Hong, Min-Sung;Lee, Won-Jin;Lee, Jae-Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.73-92
    • /
    • 2014
  • An Adaptive Clustering-based Collaborative Filtering Technique was proposed to solve the fundamental problems of collaborative filtering, such as the cold-start problem, the scalability problem, and the data sparsity problem. Previous collaborative filtering techniques make recommendations based on the predicted preference of the user for a particular item, using a similar-item subset and a similar-user subset composed from the preferences of users for items. For this reason, if the density of the user preference matrix is low, the reliability of the recommendation system decreases rapidly, and creating a similar-item subset and a similar-user subset becomes more difficult. In addition, as the scale of the service increases, the time needed to create these subsets grows geometrically, and the response time of the recommendation system increases accordingly. To solve these problems, this paper suggests a collaborative filtering technique that actively adapts conditions to the model and adopts concepts from context-based filtering. The technique consists of four major methodologies. First, the items and users are clustered according to their feature vectors, and an inter-cluster preference between each item cluster and user cluster is then estimated. With this method, the run time for creating a similar-item or similar-user subset is saved, the reliability of the recommendation system is higher than when only user preference information is used to create those subsets, and the cold-start problem can be partially solved. Second, recommendations are made using the previously composed item and user clusters and the inter-cluster preference between each item cluster and user cluster.
In this phase, a list of items is made for a user by examining the item clusters in descending order of the inter-cluster preference of the user cluster to which the user belongs, and selecting and ranking the items according to the predicted or recorded user preference information. With this method, the model-creation phase bears the highest load of the recommendation system, which minimizes the load at run time. Therefore, the scalability problem is alleviated, and a large-scale recommendation system can perform collaborative filtering with high reliability. Third, missing user preference information is predicted using the item and user clusters. This mitigates the problem caused by the low density of the user preference matrix. Existing studies used either an item-based prediction or a user-based prediction. In this paper, Hao Ji's idea, which uses both an item-based and a user-based prediction, was improved: the reliability of the recommendation service is improved by combining the predictive values of both techniques according to the condition of the recommendation model. By predicting user preferences based on the item or user clusters, the time required for prediction is reduced, and missing user preferences can be predicted at run time. Fourth, the item and user feature vectors are updated by learning from subsequent user feedback. In this phase, normalized user feedback is applied to the item and user feature vectors. This mitigates the problems caused by adopting concepts from context-based filtering, such as building the item and user feature vectors from the user profile and item properties. These problems stem from the limitation of quantifying the qualitative features of items and users.
Therefore, the elements of the user and item feature vectors are matched one to one, and when user feedback on a particular item is obtained, it is applied to the opposite feature vector. This method was verified by comparing its performance with existing hybrid filtering techniques, using two measures: MAE (Mean Absolute Error) and response time. In terms of MAE, the technique was confirmed to improve the reliability of the recommendation system; in terms of response time, it was found to be suitable for a large-scale recommendation system. This paper suggested an Adaptive Clustering-based Collaborative Filtering Technique with high reliability and low time complexity, but it has some limitations: because the technique focuses on reducing time complexity, a large improvement in reliability was not expected. Future work will improve the technique with rule-based filtering.
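The first phase (clustering plus an inter-cluster preference matrix) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the ratings, cluster counts, and cluster assignments below are made up, and in the actual technique the assignments come from clustering the item and user feature vectors.

```python
import numpy as np

# R: user-item rating matrix; 0 marks a missing rating.
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
], dtype=float)
user_cluster = np.array([0, 0, 1, 1])   # assumed, from user feature vectors
item_cluster = np.array([0, 0, 1, 1])   # assumed, from item feature vectors

# Inter-cluster preference: mean of the observed ratings in each block.
n_uc, n_ic = 2, 2
pref = np.zeros((n_uc, n_ic))
for uc in range(n_uc):
    for ic in range(n_ic):
        block = R[np.ix_(user_cluster == uc, item_cluster == ic)]
        rated = block[block > 0]
        pref[uc, ic] = rated.mean() if rated.size else 0.0

# Recommend by scanning item clusters in descending inter-cluster preference
# of the target user's cluster (here, user 0).
order = np.argsort(-pref[user_cluster[0]])
```

Because the expensive work (clustering and building `pref`) happens at model-creation time, the run-time step is just a lookup and a sort, which matches the paper's motivation for low response time.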

VKOSPI Forecasting and Option Trading Application Using SVM (SVM을 이용한 VKOSPI 일 중 변화 예측과 실제 옵션 매매에의 적용)

  • Ra, Yun Seon;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.177-192
    • /
    • 2016
  • Machine learning is a field of artificial intelligence. It refers to an area of computer science concerned with giving machines the ability to perform their own data analysis, decision making, and forecasting. One representative machine learning model is the artificial neural network, a statistical learning algorithm inspired by biological neural networks. Other machine learning models include the decision tree, naive Bayes, and SVM (support vector machine) models. Among these, we use the SVM model in this study because it is mainly used for the classification and regression analysis that fits our study well. The core principle of SVM is to find a reasonable hyperplane that separates different groups in the data space. Given information about the data in any two groups, the SVM model judges to which group new data belongs based on the hyperplane obtained from the given data set. Thus, the greater the amount of meaningful data, the better the machine learning ability. In recent years, many financial experts have focused on machine learning, seeing the possibility of combining it with the financial field, where vast amounts of financial data exist. Machine learning techniques have proved powerful in describing non-stationary and chaotic stock price dynamics, and much research has successfully forecast stock prices using machine learning algorithms. Recently, financial companies have begun to provide the Robo-Advisor service, a compound of Robot and Advisor, which can perform various financial tasks through advanced algorithms using rapidly changing, huge amounts of data. A Robo-Advisor's main tasks are to advise investors according to their personal investment propensity and to manage their portfolios automatically.
In this study, we propose a method of forecasting the Korean volatility index, VKOSPI, using the SVM model and applying it to real option trading to increase trading performance. VKOSPI is a measure of the future volatility of the KOSPI 200 index based on KOSPI 200 index option prices; it is similar to the VIX index in the United States, which is based on S&P 500 option prices. The Korea Exchange (KRX) calculates and announces the real-time VKOSPI index. Like ordinary volatility, VKOSPI affects option prices: the direction of VKOSPI and option prices shows a positive relation regardless of the option type (call and put options with various strike prices). If volatility increases, every call and put option premium increases, because the probability of the option being exercised increases. The investor can observe in real time how much the option price rises for a given rise in volatility through Vega, the Black-Scholes measure of an option's sensitivity to changes in volatility. Therefore, accurate forecasting of VKOSPI movements is one of the important factors that can generate profit in option trading. In this study, we verified on real option data that an accurate forecast of VKOSPI can yield a large profit in real option trading. To the best of our knowledge, there have been no studies on predicting the direction of VKOSPI with machine learning and applying the prediction to actual option trading. We predicted daily VKOSPI changes with the SVM model and then took an intraday option strangle position, which profits as option prices fall, only on days when VKOSPI was expected to decline during the daytime. We analyzed the results and tested whether the strategy is applicable to real option trading based on the SVM's predictions.
The results showed that the prediction accuracy for VKOSPI was 57.83% on average, and the number of position entries was 43.2, less than half of the benchmark (100). A small number of trades is an indicator of trading efficiency. In addition, the experiment showed that the trading performance was significantly higher than the benchmark.
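The trading rule in the study's last step can be illustrated with a toy profit-and-loss calculation for a short strangle (sell an out-of-the-money call and put, buy both back intraday). The premiums below are illustrative numbers, not data from the paper.

```python
def strangle_pnl(entry_call, entry_put, exit_call, exit_put):
    """P&L of a short strangle: collect both premiums at entry,
    pay the remaining premiums to close the position."""
    return (entry_call - exit_call) + (entry_put - exit_put)

# If the predicted volatility decline materializes, both premiums shrink
# and the short strangle profits; if volatility rises instead, it loses.
pnl_win = strangle_pnl(1.20, 1.10, 0.90, 0.85)   # 0.30 + 0.25 = 0.55
pnl_loss = strangle_pnl(1.20, 1.10, 1.40, 1.35)  # negative
```

This is why the strategy enters only on days the SVM predicts VKOSPI to fall: the position's sign of profit tracks the direction of volatility.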

Content-based Recommendation Based on Social Network for Personalized News Services (개인화된 뉴스 서비스를 위한 소셜 네트워크 기반의 콘텐츠 추천기법)

  • Hong, Myung-Duk;Oh, Kyeong-Jin;Ga, Myung-Hyun;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.57-71
    • /
    • 2013
  • Over a billion people in the world generate news minute by minute. Some news can be anticipated, but most arises from unexpected events such as natural disasters, accidents, and crimes. People spend much time watching the huge amount of news delivered by many media outlets because they want to understand what is happening now, predict what might happen in the near future, and share and discuss the news. People make better daily decisions by obtaining useful information from the news they watch. However, it is difficult for people to choose news suited to them and to obtain useful information from it, because there are so many news media, such as portal sites and broadcasters, and most news articles consist of gossip and breaking news. User interest also changes over time, and many people have no interest in outdated news; a personalized news service therefore needs to reflect users' recent interests, which means it should manage user profiles dynamically. In this paper, a content-based news recommendation system is proposed to provide such a personalized news service. A personalized service necessarily requires the user's personal information, and a social network service is used to extract it. The proposed system constructs a dynamic user profile based on recent user information from Facebook, one of the major social network services. The user information contains personal information, recent articles, and Facebook Page information. Facebook Pages are used by businesses, organizations, and brands to share their content and connect with people, and Facebook users can add a Page to indicate their interest in it. The proposed system uses this Page information to create the user profile and to match user preferences to news topics.
However, some Pages are not directly matched to a news topic because a Page deals with individual objects and does not provide topic information suited to news. Freebase, a large collaborative database of well-known people, places, and things, is used to match a Page to a news topic by using the hierarchy information of its objects. By using recent Page information and articles of Facebook users, the proposed system maintains a dynamic user profile. The generated user profile is used to measure user preferences on news. To generate a news profile, the news categories predefined by the news media are used, and keywords of each news article are extracted after analyzing its contents, including the title, category, and scripts. The TF-IDF technique, which reflects how important a word is to a document in a corpus, is used to identify the keywords of each article. The same format is used for user profiles and news profiles so that the similarity between user preferences and news can be measured efficiently. The proposed system calculates all similarity values between user profiles and news profiles. Existing similarity calculations in the vector space model do not cover synonyms, hypernyms, or hyponyms because they only handle the given words; the proposed system applies WordNet to the similarity calculation to overcome this limitation. The top-N news articles with high similarity values for a target user are recommended to that user. To evaluate the proposed news recommendation system, user profiles were generated from Facebook accounts with the participants' consent, and we implemented a Web crawler to extract news information from PBS, a non-profit public broadcasting television network in the United States, and constructed news profiles. We compare the performance of the proposed method with that of two benchmark algorithms: a traditional method based on TF-IDF, and the 6Sub-Vectors method, which divides the points used to get keywords into six parts.
Experimental results demonstrate that, in terms of the prediction error of recommended news, the proposed system provides useful news to users by applying the user's social network information and WordNet functions.
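The keyword-weighting and matching steps can be sketched with a minimal TF-IDF and cosine-similarity computation. This is a toy corpus with made-up tokens, and the paper's full pipeline additionally expands terms with WordNet synonyms, hypernyms, and hyponyms, which is omitted here.

```python
import math
from collections import Counter

docs = {
    "user":  "election debate policy economy",     # stand-in user profile
    "news1": "election debate candidates policy",  # topically related
    "news2": "football match score goal",          # unrelated
}
tokens = {k: v.split() for k, v in docs.items()}

# Document frequency per term, for the IDF weight.
df = Counter(w for toks in tokens.values() for w in set(toks))
N = len(tokens)

def tfidf(toks):
    tf = Counter(toks)
    return {w: (c / len(toks)) * math.log(N / df[w]) for w, c in tf.items()}

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

vecs = {k: tfidf(t) for k, t in tokens.items()}
sim1 = cosine(vecs["user"], vecs["news1"])  # shares terms -> positive
sim2 = cosine(vecs["user"], vecs["news2"])  # no shared terms -> 0.0
```

The limitation the paper addresses is visible here: `sim2` is exactly zero even if "football" were a synonym of a profile term, which is why WordNet-based expansion is applied before the similarity calculation.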

An Efficient Hardware-Software Co-Implementation of an H.263 Video Codec (하드웨어 소프트웨어 통합 설계에 의한 H.263 동영상 코덱 구현)

  • 장성규;김성득;이재헌;정의철;최건영;김종대;나종범
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.25 no.4B
    • /
    • pp.771-782
    • /
    • 2000
  • In this paper, an H.263 video codec is implemented by adopting the concept of hardware-software co-design. Each module of the codec is investigated to find whether a hardware or software approach is better for achieving real-time processing speed as well as flexibility. The hardware portion includes the motion-related engines, such as motion estimation and compensation, and a memory control part. The remaining portion of the H.263 video codec is implemented in software using a RISC processor. This paper also introduces efficient design methods for the hardware and software modules. In hardware, an area-efficient architecture for the motion estimator of a multi-resolution block matching algorithm using multiple candidates and spatial correlation in motion vector fields (MRMCS) is suggested to reduce the chip size. Software optimization techniques are also explored by using the statistics of transformed coefficients and the minimum sum of absolute differences (SAD) obtained from the motion estimator.
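The SAD cost that drives the motion estimator can be sketched as follows. This is a plain software full search for illustration, not the paper's area-efficient MRMCS hardware, and the block size and search range are made-up parameters.

```python
import numpy as np

def sad(block, candidate):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(block.astype(int) - candidate.astype(int)).sum())

def best_motion_vector(cur, ref, bx, by, bs=8, rng=2):
    """Full search over a small window around (bx, by); a multi-resolution
    scheme like MRMCS would instead narrow the search hierarchically."""
    block = cur[by:by+bs, bx:bx+bs]
    best = (0, 0, sad(block, ref[by:by+bs, bx:bx+bs]))
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            y, x = by + dy, bx + dx
            if 0 <= y and 0 <= x and y+bs <= ref.shape[0] and x+bs <= ref.shape[1]:
                c = sad(block, ref[y:y+bs, x:x+bs])
                if c < best[2]:
                    best = (dx, dy, c)
    return best  # (dx, dy, minimum SAD)
```

The minimum SAD returned here is the same quantity the paper's software side reuses for its optimization based on coefficient statistics.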


Development of an Adult Image Classifier using Skin Color (피부색상을 이용한 유해영상 분류기 개발)

  • Yoon, Jin-Sung;Kim, Gye-Young;Choi, Hyung-Il
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.4
    • /
    • pp.1-11
    • /
    • 2009
  • Computer vision techniques for classifying and filtering adult images have recently been actively investigated because of the rapid increase in the amount of adult images accessible on the Internet. In this paper, we develop a tool for filtering adult images using a skin color model. The tool consists of two steps. In the first step, a skin color classifier extracts skin color regions from an image. In the second step, a region feature classifier determines whether the image is an adult image, depending on the extracted skin color regions. Using a histogram color model, the skin color classifier is trained on the RGB color values of adult and non-adult images. Using an SVM, the region feature classifier is trained on the skin color ratios of 29 regions of adult images. Experimental results show that the suggested classifier achieves a detection rate of 92.80% with 6.73% false positives.
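The two-stage structure can be sketched as follows. The per-pixel rule below is a crude fixed RGB threshold standing in for the paper's trained histogram color model, and a 2x2 grid stands in for the paper's 29 regions; the resulting ratio vector is what would be fed to the SVM.

```python
import numpy as np

def skin_mask(img):
    """Per-pixel skin test on an RGB uint8 image (illustrative thresholds)."""
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)

def region_skin_ratios(img, grid=(2, 2)):
    """Skin-pixel ratio per grid cell: the region feature vector."""
    mask = skin_mask(img)
    h, w = mask.shape
    gh, gw = h // grid[0], w // grid[1]
    return [mask[i*gh:(i+1)*gh, j*gw:(j+1)*gw].mean()
            for i in range(grid[0]) for j in range(grid[1])]
```
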

A Study on the Sensorless Speed Control of Induction Motor using Direct Torque Control (직접토크 제어를 이용한 유도전동기의 센서리스 속도제어에 관한 연구)

  • Yoon, Kyoung-Kuk;Oh, Sae-Gin;Kim, Jong-Su;Kim, Yoon-Sik;Lee, Sung-Gun;Kim, Sung-Hwan
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.33 no.8
    • /
    • pp.1261-1267
    • /
    • 2009
  • Direct Torque Control (DTC) controls torque and flux by restricting the flux and torque errors within respective hysteresis bands; motor torque and flux are controlled by the stator voltage space vector using an optimum inverter switching table. The Current Error Compensation method is based on compensating the current difference between the induction motor and its numerical model: the identical stator voltage is supplied to both the actual motor and the model so that the gap between the stator currents of the two decays to zero as time proceeds. Consequently, the rotor speed approaches the model speed, namely the setting value, and the system can control the motor speed precisely. This paper proposes a new sensorless speed control of an induction motor using DTC and Current Error Compensation, which requires neither a shaft encoder, a speed estimator, nor PI controllers. The effectiveness of the proposed method is confirmed through computer simulation.
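The hysteresis-band idea at the core of DTC can be illustrated with a three-level torque comparator. This is a generic textbook sketch, not the paper's controller: the error enters one of three states that index into the inverter switching table.

```python
def torque_hysteresis(err, band, prev_state):
    """Three-level hysteresis comparator for the torque error.
    Returns +1 (raise torque), -1 (lower torque), or the previous
    state while the error stays inside the +/- band."""
    if err > band:
        return +1   # pick a voltage vector that increases torque
    if err < -band:
        return -1   # pick a voltage vector that decreases torque
    return prev_state  # inside the band: keep the current selection
```

The flux error is handled by an analogous comparator, and the pair of comparator states (plus the flux sector) selects a row of the optimum switching table.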

Modification of Discharge Mechanism of Binder Harvesters (바인더수확기(收穫期)의 방출구조(放出構造) 개선(改善)에 관한 연구(硏究))

  • Park, Keum Joo;Chung, Chang Joo;Ryu, Kwan Hee
    • Journal of Biosystems Engineering
    • /
    • v.8 no.2
    • /
    • pp.26-38
    • /
    • 1983
  • Binder harvesters introduced to Korea were originally designed for Japonica varieties, which are highly resistant to shattering. To improve the performance of the binder for Indica varieties, which shatter easily and have shorter stems, mechanical modifications of the binder are inevitable. Shattering losses of the binder can be classified into two major parts: those incurred before and those after the binding operation. The latter has been evaluated to be as great as the former. Previous studies indicated that the high discharge losses resulted from the great impact force of the discharge arm on the rice bundle during the discharge process. This study was intended to theoretically analyze the four-bar-linkage discharge mechanism. For this purpose, two commercially available binder harvesters having a four-bar linkage as a discharge mechanism were analyzed. Using the results of the motion analysis and the other structural constraints of the machines, they were modified and experimentally compared with unmodified machines to see whether any decrease in grain losses was obtained. The results obtained in this study are summarized as follows: 1. The path, velocity, and acceleration of the discharge arm were analyzed by computer using vector analysis. Using the results of the analysis and the intrinsic constraints of the binder, the discharge mechanism was modified to reduce the impact force of the discharge arm on the bundle within the range where the discharge performance did not deteriorate. This modification could be done with the aid of a four-bar-linkage synthesis technique. As a result, the average velocity and acceleration of the discharge arm during the discharge process were reduced by 19 percent and 33 percent, respectively, for binder A, and by 17 percent and 35 percent for binder B. 2.
Through the modification of the discharge mechanism, discharge losses of binder A were reduced by 42-56 percent for Milyang 23, Poongsan and Hangang chal, and discharge losses of binder B were reduced by 13-20 percent for Milyang 23 and Poongsan. 3. Discharge losses were decreased as the bundle size became larger and the size effect on the decrease rate appeared more significant in the binders with modifications than in those without modifications.
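The vector analysis of the four-bar discharge mechanism can be sketched numerically: solve the linkage loop by circle intersection and differentiate for velocity. The link lengths below are illustrative (a crank-rocker configuration), not those of the binders studied.

```python
import math

def rocker_pin(theta2, a=1.0, b=3.0, c=2.0, d=3.0):
    """Position of the rocker pin for crank angle theta2.
    a = crank, b = coupler, c = rocker, d = ground link; the pin is the
    intersection of a circle of radius b about the crank pin and a circle
    of radius c about the ground pivot at (d, 0)."""
    ax, ay = a * math.cos(theta2), a * math.sin(theta2)   # crank pin
    dx, dy = d - ax, -ay                                  # toward ground pivot
    L = math.hypot(dx, dy)
    m = (b*b - c*c + L*L) / (2*L)
    h = math.sqrt(max(b*b - m*m, 0.0))
    px, py = ax + m*dx/L, ay + m*dy/L
    return (px - h*dy/L, py + h*dx/L)   # upper assembly branch

def pin_velocity(theta2, omega=1.0, eps=1e-5):
    """Velocity of the rocker pin by central finite differences,
    scaled by the crank angular speed omega."""
    x1, y1 = rocker_pin(theta2 - eps)
    x2, y2 = rocker_pin(theta2 + eps)
    return ((x2 - x1) / (2*eps) * omega, (y2 - y1) / (2*eps) * omega)
```

Sweeping `theta2` through a full revolution and repeating the difference for acceleration reproduces, in spirit, the path/velocity/acceleration curves the study used to resynthesize the mechanism.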


CNN-Based Novelty Detection with Effectively Incorporating Document-Level Information (효과적인 문서 수준의 정보를 이용한 합성곱 신경망 기반의 신규성 탐지)

  • Jo, Seongung;Oh, Heung-Seon;Im, Sanghun;Kim, Seonho
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.9 no.10
    • /
    • pp.231-238
    • /
    • 2020
  • With the large number of documents appearing on the web, document-level novelty detection has become important, since it can reduce the effort of finding novel documents by discarding documents that share redundant information already seen. A recent work proposed a convolutional neural network (CNN)-based novelty detection model with significant performance improvements. We observed that this model makes only restricted use of document-level information in determining novelty, whereas we assume that document-level information is the more important signal. As a solution, this paper proposes two methods of effectively incorporating document-level information into a CNN-based novelty detection model. Our methods focus on constructing a feature vector for the target document to be classified by extracting relative information between the target document and the source documents given as evidence. A series of experiments showed the superiority of our methods on a standard benchmark collection, TAP-DLND 1.0.
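The relative-information idea can be illustrated with a toy feature constructor. The specific features below (maximum overlap with any source, mean overlap, and novel-term ratio) are hypothetical choices for illustration, not the authors' actual design.

```python
from collections import Counter

def overlap(a, b):
    """Fraction of target tokens also present in a source document."""
    ca, cb = Counter(a), Counter(b)
    inter = sum((ca & cb).values())
    return inter / max(len(a), 1)

def relative_features(target, sources):
    """Describe the target *relative* to the source documents:
    [max overlap, mean overlap, ratio of terms seen in no source]."""
    sims = [overlap(target, s) for s in sources]
    seen = set().union(*map(set, sources))
    novel = len(set(target) - seen) / max(len(set(target)), 1)
    return [max(sims), sum(sims) / len(sims), novel]
```

A high novel-term ratio together with low overlaps points toward novelty; a document-level classifier consumes such vectors instead of sentence-level matches alone.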

Facial Gaze Detection by Estimating Three Dimensional Positional Movements (얼굴의 3차원 위치 및 움직임 추정에 의한 시선 위치 추적)

  • Park, Gang-Ryeong;Kim, Jae-Hui
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.3
    • /
    • pp.23-35
    • /
    • 2002
  • Gaze detection is locating the position on a monitor screen where a user is looking. In our work, we implement it with a computer vision system: a single camera is set above a monitor, and the user moves (rotates and/or translates) his face to gaze at different positions on the monitor. To detect the gaze position, we locate the facial region and facial features (both eyes, the nostrils, and the lip corners) automatically in 2D camera images. From the feature points detected in the starting images, we compute the initial 3D positions of those features by camera calibration and a parameter estimation algorithm. Then, when the user moves (rotates and/or translates) his face to gaze at a position on the monitor, the moved 3D positions of those features are computed by 3D rotation and translation estimation and an affine transform. Finally, the gaze position on the monitor is computed from the normal vector of the plane determined by the moved 3D positions of the features. In experiments, we obtain the gaze position on a 19-inch monitor, and the RMS error between the computed gaze positions and the real ones is about 2.01 inches.
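The final step can be sketched geometrically: form the normal of the plane through three feature points and intersect the gaze line with the monitor plane, taken here as z = 0. The coordinates are made-up examples, and the real system uses more feature points and calibrated camera coordinates.

```python
import numpy as np

def gaze_point(p1, p2, p3):
    """Intersect the facial-plane normal (through the feature centroid)
    with the monitor plane z = 0."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)      # facial-plane normal vector
    centroid = (p1 + p2 + p3) / 3.0
    t = -centroid[2] / n[2]             # solve (centroid + t*n).z == 0
    return centroid + t * n
```

For a face held parallel to the monitor, the predicted gaze point lands directly in front of the feature centroid, as expected.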

Video analysis using re-constructing of motion vectors on MPEG compressed domain (압축영역에서 움직임 벡터의 재추정을 이용한 비디오 해석 기법)

  • Kim, Nak-U;Kim, Tae-Yong;Gang, Eung-Gwan;Choe, Jong-Su
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.3
    • /
    • pp.78-87
    • /
    • 2002
  • A macroblock (MB) in the MPEG coded domain can have zero, one, or two motion vectors depending on its frame type and prediction direction (forward, backward, or bi-directional). In this paper, we propose a method that converts these motion vectors in the MPEG coded domain into a uniform set, independent of the frame type and the direction of prediction, and directly utilizes the re-analyzed motion vectors for understanding video content. Using this frame-type-independent motion vector, we also propose novel methods for detecting and tracking moving objects on the compressed domain with frame-based detection accuracy. These algorithms operate directly on the MPEG bitstreams after VLC decoding, with little time consumption. Experimental results show the validity and strong performance of our methods.
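The conversion to a uniform, frame-type-independent motion set can be sketched as follows. The exact rule here, dividing by the temporal distance to the reference frame and negating backward vectors so that everything reads as forward motion per frame, is an illustrative assumption, not necessarily the paper's formulation.

```python
def normalize_mv(mv, ref_distance, backward=False):
    """Map a macroblock motion vector to forward motion per frame.
    ref_distance: temporal distance (in frames) to the reference frame.
    backward: True for a backward-predicted vector, which points toward
    a future reference and is therefore negated."""
    dx, dy = mv
    s = -1.0 if backward else 1.0
    return (s * dx / ref_distance, s * dy / ref_distance)
```

After this normalization, vectors from I-, P-, and B-frame macroblocks live in one consistent field, which is what object detection and tracking on the compressed domain consume.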