• Title/Summary/Keyword: trend algorithm

Search results: 438 (processing time: 0.04 seconds)

The Parallel Corpus Approach to Building the Syntactic Tree Transfer Set in the English-to-Vietnamese Machine Translation

  • Dien Dinh;Ngan Thuy;Quang Xuan;Nam Chi
    • Proceedings of the IEEK Conference
    • /
    • summer
    • /
    • pp.382-386
    • /
    • 2004
  • Recently, with the machine learning trend, most machine translation systems around the world use two syntactic tree sets of the two languages involved to learn syntactic tree transfer rules. However, for the English-Vietnamese language pair, this approach has not been possible because, until now, there has been no Vietnamese syntactic tree set corresponding to the English one. Building a very large corresponding Vietnamese syntactic tree set (thousands of trees) requires a great deal of work and the involvement of linguistics specialists. To take advantage of our available English-Vietnamese Corpus (EVC), which is annotated with word alignments, we choose the SITG (Stochastic Inversion Transduction Grammar) model to construct English-Vietnamese syntactic tree sets automatically. This model parses the two languages at the same time and then carries out the syntactic tree transfer. This English-Vietnamese bilingual syntactic tree set is the basic training data for learning to transfer English syntactic trees to Vietnamese ones automatically with machine learning models. We tested the syntactic analysis on over 10,000 of the 500,000 sentences in our English-Vietnamese bilingual corpus, and the first stage produced encouraging results (about 80% analyzed) [5]. We have used the TBL (Transformation-Based Learning) algorithm to carry out automatic transformation from English syntactic trees to Vietnamese ones based on this parallel syntactic tree transfer set [6] (a toy sketch of rule-based tree transfer follows this entry).

  • PDF
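As a rough illustration of the transfer step described in the abstract above (not the authors' SITG/TBL system), the following Python sketch applies hand-written, TBL-style transfer rules that reorder the children of English parse-tree nodes into Vietnamese order. The Node and TransferRule classes, the rule trigger, and the example tree are all hypothetical.

```python
# Toy sketch (not the authors' system): apply TBL-style transfer rules that
# reorder the children of English parse-tree nodes into Vietnamese order.

from dataclasses import dataclass, field

@dataclass
class Node:
    label: str                      # e.g. "NP", "JJ", "NN"
    children: list = field(default_factory=list)
    word: str | None = None         # leaf word, if any

@dataclass
class TransferRule:
    parent: str                     # label of the node to rewrite
    pattern: tuple                  # tuple of child labels that triggers the rule
    order: tuple                    # new ordering, as indices into the children

def apply_rules(node: Node, rules: list) -> Node:
    """Recursively apply the first matching reordering rule at every node."""
    for child in node.children:
        apply_rules(child, rules)
    labels = tuple(c.label for c in node.children)
    for rule in rules:
        if node.label == rule.parent and labels == rule.pattern:
            node.children = [node.children[i] for i in rule.order]
            break
    return node

if __name__ == "__main__":
    # "green book" -> Vietnamese order "book green" (noun before adjective)
    np_node = Node("NP", [Node("JJ", word="green"), Node("NN", word="book")])
    rules = [TransferRule("NP", ("JJ", "NN"), (1, 0))]
    apply_rules(np_node, rules)
    print([c.word for c in np_node.children])   # ['book', 'green']
```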

Proposal of keyword extraction method based on morphological analysis and PageRank in Twitter (트위터에서 형태소 분석과 PageRank 기반 화제단어 추출 방법 제안)

  • Lee, Won-Hyung;Cho, Sung-Il;Kim, Dong-Hoi
    • Journal of Digital Contents Society
    • /
    • v.19 no.1
    • /
    • pp.157-163
    • /
    • 2018
  • People who use social networking services post their diverse ideas every day, and the data posted on SNS contains many people's thoughts and opinions. In particular, Twitter's popular keywords are compiled by counting frequently appearing words in user posts and ranking them. However, because it simply counts repeated words, this method is sensitive to unnecessary data. The proposed method determines the ranking from the topical importance of each word using a word relationship graph, so it is less affected by unnecessary data and extracts topic keywords more stably. For performance comparison, we measured the ratio of meaningless keywords among the top 20 keywords for the proposed scheme, which is based on morphological analysis and PageRank, and for the existing scheme, which is based on appearance counts. The proposed scheme and the existing scheme included 55% and 70% meaningless keywords among the top 20, respectively, so the proposed scheme improves on the existing scheme by about 15 percentage points.
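A minimal sketch of the graph-based ranking idea, assuming a morphological analyser has already reduced each tweet to a list of nouns: build a word co-occurrence graph and score words with PageRank. The co-occurrence window (the whole tweet), damping factor, and iteration count are generic assumptions, not the paper's settings.

```python
# Sketch: rank candidate topic words by PageRank over a co-occurrence graph.

from collections import defaultdict
from itertools import combinations

def build_graph(token_lists):
    """Undirected graph: words that co-occur in the same tweet share an edge."""
    graph = defaultdict(set)
    for tokens in token_lists:
        for a, b in combinations(set(tokens), 2):
            graph[a].add(b)
            graph[b].add(a)
    return graph

def pagerank(graph, damping=0.85, iters=50):
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {}
        for n in nodes:
            incoming = sum(rank[m] / len(graph[m]) for m in graph[n] if graph[m])
            new[n] = (1 - damping) / len(nodes) + damping * incoming
        rank = new
    return rank

if __name__ == "__main__":
    # Hypothetical output of a morphological analyser (nouns per tweet).
    tweets = [["election", "vote", "poll"],
              ["election", "debate"],
              ["vote", "election", "turnout"]]
    scores = pagerank(build_graph(tweets))
    for word, s in sorted(scores.items(), key=lambda kv: -kv[1])[:5]:
        print(f"{word}\t{s:.3f}")
```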

Noise removal of video sequences with 3-D anisotropic diffusion equation (3차원 이방성확산 방정식을 이용한 동영상의 영상잡음제거)

  • Lee, Seok-Ho;Choe, Eun-Cheol;Gang, Mun-Gi
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.2
    • /
    • pp.79-86
    • /
    • 2002
  • Nowadays there is a trend to apply diffusion equations to image processing. The anisotropic diffusion equation is highly favoured as a noise removal algorithm because it can remove noise while enhancing edges. However, if the two-dimensional anisotropic diffusion equation is applied to noise removal in video sequences, flickering artifacts arise from luminance differences between frames and ghost artifacts from filtering across frames. In this paper the two-dimensional anisotropic diffusion equation is extended along the sequence (temporal) axis. The proposed three-dimensional anisotropic diffusion equation removes noise more efficiently than the two-dimensional equation, and furthermore removes the flickering and ghost artifacts as well.
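A rough sketch of Perona-Malik-style anisotropic diffusion extended along the frame axis, in the spirit of the paper; the conduction function and the parameters kappa, lam, and iters are generic choices rather than the authors' values.

```python
# Sketch: anisotropic diffusion over (frames, height, width), i.e. the 3-D case.

import numpy as np

def anisotropic_diffusion_3d(video, iters=10, kappa=30.0, lam=0.1):
    """video: float array of shape (frames, height, width)."""
    u = video.astype(np.float64).copy()
    for _ in range(iters):
        total = np.zeros_like(u)
        for axis in (0, 1, 2):                     # t, y, x neighbours
            fwd = np.roll(u, -1, axis=axis) - u
            bwd = np.roll(u, 1, axis=axis) - u
            # Edge-stopping conduction: small where gradients are large.
            total += np.exp(-(fwd / kappa) ** 2) * fwd
            total += np.exp(-(bwd / kappa) ** 2) * bwd
        u += lam * total
        # (np.roll wraps at the borders; a real implementation would pad instead.)
    return u

if __name__ == "__main__":
    noisy = np.random.rand(8, 64, 64)              # stand-in for a noisy clip
    print(anisotropic_diffusion_3d(noisy).shape)
```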

A Study on Optimal Configuration Method of Hybrid ESS using Lead-acid and Lithium-ion Batteries for Supply of Variation Loads (변동부하 공급을 위한 하이브리드 ESS의 연축전지와 리튬이온전지의 최적구성방안에 관한 연구)

  • Rho, Dea-seok;Choi, Seong-sik;Lee, Hu-dong;Chang, Byunh-hoon;Kim, Su-yeol
    • KEPCO Journal on Electric Power and Energy
    • /
    • v.2 no.1
    • /
    • pp.49-54
    • /
    • 2016
  • Large-scale lead-acid batteries are widely used for efficient operation of photovoltaic systems on many islands. However, lithium-ion batteries are now being introduced to mitigate wind power fluctuation and to replace lead-acid batteries. Because lithium-ion batteries are still costly at the present stage, a hybrid ESS (energy storage system) that combines lithium-ion and lead-acid batteries is required. In this context, this paper presents an optimal algorithm that determines the composition rate of the hybrid ESS by considering fixed and variable costs so as to maximize the advantages of each battery type. By minimizing the total cost, including fixed and variable costs, the optimal composition rate can be calculated for various scenarios such as load variation, life cycle, and cost trend. Simulation results confirm that the proposed algorithm is an effective tool for producing an optimal composition rate.
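The sketch below only illustrates the mechanics of choosing a composition rate by minimising a total (fixed plus variable) cost with a grid search. All cost figures are hypothetical, and the quadratic wear term standing in for faster lead-acid degradation under a fluctuating load is an assumption; the paper derives its rates from load-variation, life-cycle, and cost-trend scenarios instead.

```python
# Illustrative sketch: grid search over the lithium-ion share of a hybrid ESS.

def total_cost(li_share, capacity_kwh=1000.0, cycled_kwh=300_000.0,
               fixed=(600.0, 250.0),      # $/kWh installed: (li-ion, lead-acid), hypothetical
               variable=(0.05, 0.12),     # $/kWh cycled:    (li-ion, lead-acid), hypothetical
               wear_penalty=400_000.0):   # stand-in for extra lead-acid wear under variable load
    pb_share = 1.0 - li_share
    fixed_cost = capacity_kwh * (li_share * fixed[0] + pb_share * fixed[1])
    var_cost = cycled_kwh * (li_share * variable[0] + pb_share * variable[1])
    wear_cost = wear_penalty * pb_share ** 2
    return fixed_cost + var_cost + wear_cost

def optimal_share(step=0.01):
    """Evaluate the total cost on a grid of composition rates and keep the cheapest."""
    shares = [i * step for i in range(int(1 / step) + 1)]
    return min(shares, key=total_cost)

if __name__ == "__main__":
    s = optimal_share()
    print(f"optimal lithium-ion share: {s:.2f}, total cost: {total_cost(s):,.0f}")
```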

Performance Evaluation of RSSI-based Various Trilateration Localization (RSSI기반에서 다양한 삼변측량 위치인식 기법들의 성능평가)

  • Kim, Sun-Gwan;Kim, Tae-Hoon;Tak, Sung-Woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2011.10a
    • /
    • pp.493-496
    • /
    • 2011
  • With the development of wireless communication technology, interest in location-based services has been growing, and as a result the importance of location information is increasing. Several methods have been proposed to calculate location information; among them, trilateration is representative. Trilateration computes a position from the distances to three beacon nodes. When estimating the distance from a beacon node, however, obstacles and the surrounding environment introduce distance errors, so the exact location cannot be obtained. A variety of algorithms have been proposed to reduce the location error caused by these distance errors, but a systematic analysis of them has not yet been carried out. This paper analyzes localization techniques and provides a systematic and empirical comparison of various algorithms that reduce the location error caused by distance errors (a basic trilateration sketch follows this entry).

  • PDF
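The sketch below shows the basic pipeline such evaluations build on: convert RSSI to distance with a log-distance path-loss model, then trilaterate from three beacons by solving the linearised circle equations. The path-loss parameters (tx_power_dbm, path_loss_n) and the beacon layout are assumptions; the error-reduction algorithms compared in the paper are not reproduced here.

```python
# Sketch: RSSI -> distance (log-distance model), then linearised trilateration.

import numpy as np

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_n=2.0):
    """Log-distance model: RSSI = TxPower - 10 * n * log10(d)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_n))

def trilaterate(beacons, distances):
    """beacons: (3, 2) array of (x, y); distances: length-3 array."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    d1, d2, d3 = distances
    # Subtracting the circle equations pairwise gives a linear system A p = b.
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
    true_pos = np.array([4.0, 3.0])
    dists = np.linalg.norm(beacons - true_pos, axis=1)   # noise-free distances
    print(trilaterate(beacons, dists))                   # ~[4. 3.]
```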

Optimization of Stock Trading System based on Multi-Agent Q-Learning Framework (다중 에이전트 Q-학습 구조에 기반한 주식 매매 시스템의 최적화)

  • Kim, Yu-Seop;Lee, Jae-Won;Lee, Jong-Woo
    • The KIPS Transactions: Part B
    • /
    • v.11B no.2
    • /
    • pp.207-212
    • /
    • 2004
  • This paper presents a reinforcement learning framework for stock trading systems. Trading system parameters are optimized by the Q-learning algorithm, and neural networks are adopted for value approximation. In this framework, cooperative multiple agents are used to efficiently integrate global trend prediction and local trading strategy to obtain better trading performance. Agents communicate with each other by sharing training episodes and learned policies, while keeping the overall scheme of conventional Q-learning. Experimental results on KOSPI 200 show that a trading system based on the proposed framework outperforms the market average and makes appreciable profits. Furthermore, in view of risk management, the system is superior to a system trained by supervised learning.
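As a minimal illustration of the core update the framework builds on, here is a single-agent, tabular Q-learning sketch for a buy/hold/sell policy. The paper itself uses multiple cooperating agents with neural-network value approximation; the state discretisation, reward, and toy price series below are hypothetical.

```python
# Sketch: tabular Q-learning over a coarse trend state for a toy trading policy.

import random
from collections import defaultdict

ACTIONS = ("buy", "hold", "sell")

def discretise(price_return):
    """Map a one-step return onto a coarse trend state."""
    if price_return > 0.01:
        return "up"
    if price_return < -0.01:
        return "down"
    return "flat"

def train(prices, episodes=200, alpha=0.1, gamma=0.95, eps=0.1):
    Q = defaultdict(float)                       # (state, action) -> value
    for _ in range(episodes):
        position = 0                             # 0 = flat, 1 = long
        for t in range(1, len(prices) - 1):
            s = discretise(prices[t] / prices[t - 1] - 1)
            a = (random.choice(ACTIONS) if random.random() < eps
                 else max(ACTIONS, key=lambda x: Q[(s, x)]))
            if a == "buy":
                position = 1
            if a == "sell":
                position = 0
            reward = position * (prices[t + 1] / prices[t] - 1)
            s_next = discretise(prices[t + 1] / prices[t] - 1)
            best_next = max(Q[(s_next, x)] for x in ACTIONS)
            Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
    return Q

if __name__ == "__main__":
    prices = [100 + i + random.gauss(0, 1) for i in range(60)]   # toy price series
    Q = train(prices)
    print({k: round(v, 3) for k, v in list(Q.items())[:6]})
```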

Ordering of Project Priorities for Open Market Portfolio (오픈마켓 포트폴리오 관리를 위한 프로젝트 우선순위결정)

  • Lee, Yong-Hee;Lee, Gun-Ho
    • The KIPS Transactions: Part D
    • /
    • v.18D no.4
    • /
    • pp.299-308
    • /
    • 2011
  • In recent years, a variety of projects have been conducted to enhance the competitiveness of leading businesses and their followers in the market. Accordingly, the importance of project portfolio management has risen in the open market industry. Project portfolio management refers to the crucial decision-making processes that aim to maximize benefits by selecting the projects most suitable for a strategic objective among multiple projects with limited resources. In this study, the trend of project portfolio management research is introduced. The study also presents a mathematical model of the problem, which aims to maximize project value, feasibility, and similarity between projects under limited resources. We use a genetic algorithm to obtain the priority order of projects. To verify the approach, we compare its results with the existing schedules of the E-open market in South Korea. The approach ultimately reduces project risk and improves development efficiency and task continuity by properly ordering projects and assigning developers to them.
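A small sketch of a permutation-encoded genetic algorithm for project priority ordering, assuming a much simpler fitness (project value weighted by how early the project is scheduled) than the paper's value/feasibility/similarity objective; the order crossover and swap mutation are generic choices.

```python
# Sketch: GA over permutations of project indices to produce a priority order.

import random

def fitness(order, values):
    n = len(order)
    # Earlier slots carry more weight, so high-value projects should come first.
    return sum(values[p] * (n - slot) for slot, p in enumerate(order))

def order_crossover(p1, p2):
    """OX crossover: copy a slice from p1, fill the rest in p2's order."""
    a, b = sorted(random.sample(range(len(p1)), 2))
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    rest = [g for g in p2 if g not in child[a:b]]
    idx = 0
    for i in range(len(child)):
        if child[i] is None:
            child[i] = rest[idx]
            idx += 1
    return child

def ga_order(values, pop_size=40, gens=100, p_mut=0.1):
    n = len(values)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda o: -fitness(o, values))
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            c = order_crossover(*random.sample(parents, 2))
            if random.random() < p_mut:             # swap mutation
                i, j = random.sample(range(n), 2)
                c[i], c[j] = c[j], c[i]
            children.append(c)
        pop = parents + children
    return max(pop, key=lambda o: fitness(o, values))

if __name__ == "__main__":
    project_values = [3.0, 9.0, 1.5, 7.0, 5.0]      # hypothetical project values
    print(ga_order(project_values))                  # high-value projects first
```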

Swell Correction of Shallow Marine Seismic Reflection Data Using Genetic Algorithms

    • Park, Sung-Hoon;Kong, Young-Sae;Kim, Hee-Joon;Lee, Byung-Gul
    • Journal of the Korean Society of Oceanography
    • /
    • v.32 no.4
    • /
    • pp.163-170
    • /
    • 1997
  • Some CMP gathers acquired from a shallow marine seismic reflection survey offshore Korea do not show the hyperbolic moveout trend. This originates from the so-called swell effect of the source and streamer, which are towed beneath a rough sea surface during data acquisition. The observed time deviations of NMO-corrected traces can be entirely ascribed to the swell effect. To correct these time deviations, a residual statics correction using Genetic Algorithms (GA) is introduced for the swell correction. GA is a new class of global optimization methods developed in the field of artificial intelligence that resembles the genetic evolution of biological systems. The basic idea in using GA as an optimization method is to represent a population of possible solutions or models in a chromosome-type encoding and to manipulate these encoded models through simulated reproduction, crossover, and mutation. The GA parameters used in this paper are as follows: population size Q = 40, multiple-point crossover probability Pc = 0.6, mutation probability Pm varying linearly from 0.002 to 0.004, and Gray code representation. The number of models participating in tournament selection (nt) is 3, and the expected number of copies of the best population member under fitness scaling is 1.5. With these parameters, the optimization run was iterated for 101 generations; this combination of parameters was found to be optimal for the convergence of the algorithm. The resulting reflection events in every NMO-corrected CMP gather show good alignment, and the stack section shows enhanced quality (a small GA sketch for residual statics follows this entry).

  • PDF
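The following sketch illustrates the residual-statics idea behind the swell correction: a small GA searches for per-trace time shifts that maximise the stack power of an NMO-corrected CMP gather, using tournament selection with nt = 3 as in the abstract. The synthetic gather, integer-sample shifts, single-point crossover, and the remaining parameters are assumptions (the paper uses Gray-coded chromosomes, multiple-point crossover, and fitness scaling).

```python
# Sketch: GA residual statics by maximising stack power of a CMP gather.

import numpy as np

def stack_power(gather, shifts):
    """Sum of squares of the stacked trace after applying integer-sample shifts."""
    stacked = np.zeros(gather.shape[1])
    for trace, s in zip(gather, shifts):
        stacked += np.roll(trace, -s)
    return float(np.sum(stacked ** 2))

def ga_statics(gather, max_shift=5, pop_size=40, gens=60, p_mut=0.05):
    rng = np.random.default_rng(0)
    n_traces = gather.shape[0]
    pop = rng.integers(-max_shift, max_shift + 1, size=(pop_size, n_traces))
    for _ in range(gens):
        fit = np.array([stack_power(gather, ind) for ind in pop])
        new = []
        for _ in range(pop_size):
            # Tournament selection with nt = 3.
            i = rng.choice(pop_size, size=3, replace=False)
            p1 = pop[i[np.argmax(fit[i])]]
            i = rng.choice(pop_size, size=3, replace=False)
            p2 = pop[i[np.argmax(fit[i])]]
            cut = rng.integers(1, n_traces)              # single-point crossover
            child = np.concatenate([p1[:cut], p2[cut:]])
            mut = rng.random(n_traces) < p_mut
            child[mut] = rng.integers(-max_shift, max_shift + 1, size=mut.sum())
            new.append(child)
        pop = np.array(new)
    fit = np.array([stack_power(gather, ind) for ind in pop])
    return pop[int(np.argmax(fit))]

if __name__ == "__main__":
    # Synthetic gather: a flat event at sample 50, perturbed by random "swell" shifts.
    rng = np.random.default_rng(1)
    true_shifts = rng.integers(-4, 5, size=12)
    gather = np.zeros((12, 200))
    for k, s in enumerate(true_shifts):
        gather[k, 50 + s] = 1.0
    est = ga_statics(gather)
    # Note: residual statics are only determined up to a common shift of all traces.
    print("true :", true_shifts)
    print("est  :", est)
```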

The Study for ENHPP Software Reliability Growth Model Based on Kappa(2) Coverage Function (Kappa(2) 커버리지 함수를 이용한 ENHPP 소프트웨어 신뢰성장모형에 관한 연구)

  • Kim, Hee-Cheul
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.12
    • /
    • pp.2311-2318
    • /
    • 2007
  • Finite-failure NHPP models presented in the literature exhibit constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. Accurate prediction of software release times and estimation of the reliability and availability of a software product require accounting for a critical element of the software testing process: test coverage. A model that incorporates coverage is called an enhanced non-homogeneous Poisson process (ENHPP). In this paper, the exponential coverage and S-shaped models are reviewed, and a Kappa(2) coverage model is proposed as an efficient model for software reliability. Parameters are estimated by maximum likelihood using the bisection method, and model selection is based on the SSE statistic and the Kolmogorov distance. Numerical examples using a real data set are given for the proposed Kappa coverage model. The failure data analysis compares the Kappa coverage model with existing models using arithmetic and Laplace trend tests and bias tests.
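The Kappa(2) coverage likelihood is not reproduced here; the sketch below only illustrates the estimation machinery the abstract describes, maximum likelihood solved by bisection, for the exponential-coverage ENHPP, whose mean value function reduces to the Goel-Okumoto form m(t) = a(1 - exp(-b t)). The failure-time data are hypothetical.

```python
# Sketch: bisection on the likelihood score for the exponential-coverage ENHPP.

import math

def score_b(b, times, T):
    """d(logL)/db after substituting the closed-form MLE of a = n / (1 - exp(-bT))."""
    n = len(times)
    return n / b - sum(times) - n * T * math.exp(-b * T) / (1 - math.exp(-b * T))

def mle_bisection(times, T, lo=1e-6, hi=10.0, tol=1e-10):
    """Assumes the root of score_b is bracketed by [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if score_b(lo, times, T) * score_b(mid, times, T) <= 0:
            hi = mid
        else:
            lo = mid
    b = (lo + hi) / 2
    a = len(times) / (1 - math.exp(-b * T))
    return a, b

if __name__ == "__main__":
    # Hypothetical failure times (hours) observed over T = 250 hours of testing.
    times = [9, 21, 32, 36, 43, 45, 50, 58, 63, 70, 71, 77, 78, 87, 91, 92,
             95, 98, 104, 105, 116, 149, 156, 247, 249, 250]
    a, b = mle_bisection(times, T=250.0)
    print(f"a = {a:.2f}, b = {b:.5f}")
```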

The Study for NHPP Software Reliability Growth Model based on Burr Distribution (Burr 분포를 이용한 NHPP소프트웨어 신뢰성장모형에 관한 연구)

  • Kim, Hee-Cheul;Park, Jong-Goo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.3
    • /
    • pp.514-522
    • /
    • 2007
  • Finite-failure NHPP models presented in the literature exhibit constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. In this paper, the Goel-Okumoto and Yamada-Ohba-Osaki models are reviewed, and a Burr distribution reliability model is proposed as an efficient model for software reliability. Parameters are estimated by maximum likelihood using the bisection method, and model selection is based on SSE and AIC statistics and the Kolmogorov distance. Failure analysis using a real data set is carried out for the proposed shape parameter of the Burr distribution. The failure data analysis compares the Burr distribution model with existing models using arithmetic and Laplace trend tests and bias tests.
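Both of the software-reliability abstracts above mention Laplace trend tests. The sketch below computes the Laplace trend factor for time-truncated failure data; values well below zero indicate reliability growth (decreasing failure intensity). The failure times and observation period are hypothetical.

```python
# Sketch: Laplace trend factor for failure times observed over (0, T].

import math

def laplace_factor(times, T):
    """u = (mean(t) - T/2) / (T * sqrt(1 / (12 n)))."""
    n = len(times)
    return (sum(times) / n - T / 2) / (T * math.sqrt(1.0 / (12 * n)))

if __name__ == "__main__":
    # Hypothetical failure times clustering early in the test interval.
    times = [3, 7, 12, 18, 25, 33, 45, 60, 80, 110]
    u = laplace_factor(times, T=200.0)
    print(f"Laplace factor: {u:.2f}")   # strongly negative -> reliability growth
```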