• Title/Summary/Keyword: Recursive Algorithm (재귀적 알고리즘)

A Versatile Reed-Solomon Decoder for Continuous Decoding of Variable Block-Length Codewords (가변 블록 길이 부호어의 연속 복호를 위한 가변형 Reed-Solomon 복호기)

  • 송문규;공민한
    • Journal of the Institute of Electronics Engineers of Korea TC / v.41 no.3 / pp.29-38 / 2004
  • In this paper, we present an efficient architecture for a versatile Reed-Solomon (RS) decoder which can be programmed to decode RS codes continuously with any message length k as well as any block length n. This unique feature eliminates the need to insert zeros when decoding shortened RS codes. Moreover, the values of the parameters n and k, and hence the error-correcting capability t, can be altered at every codeword block. The decoder permits 3-step pipelined processing based on the modified Euclid's algorithm (MEA). Since each step can be driven by a separate clock, the decoder can operate as a 2-step pipeline by employing a faster clock in step 2 and/or step 3. The decoder can also be used when the input clock differs from the output clock. Each step is designed to have a structure suitable for decoding RS codes with varying block length. A new architecture for the MEA is designed for variable values of t: the operating length of the shift registers in the MEA block is shortened by one, and it can be varied according to the value of t. To maintain the throughput rate with less circuitry, the MEA block uses both a recursive technique and an over-clocking technique. The decoder can decode codewords received not only in burst mode but also in continuous mode, and its versatility makes it suitable for a wide range of applications. An adaptive RS decoder over GF(2^8) with an error-correcting capability of up to 10 has been designed in VHDL and successfully synthesized into an FPGA chip.
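The core of the MEA is a recursive extended-Euclidean division solving the RS key equation. The following minimal Python sketch illustrates that recursion over GF(2^8); it is a software model only, not the paper's pipelined VHDL design, and the primitive polynomial 0x11D and the list-based polynomial representation are assumptions for illustration.

```python
# Sketch: Sugiyama-style Euclidean recursion behind an MEA decoder.
PRIM = 0x11D  # assumed primitive polynomial x^8 + x^4 + x^3 + x^2 + 1

def gf_mul(a, b):
    """Multiply two GF(2^8) elements (carry-less, reduced modulo PRIM)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= PRIM
        b >>= 1
    return r

def gf_inv(a):
    """Inverse via a^254 (Fermat's little theorem in GF(2^8))."""
    r, e = 1, 254
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

# Polynomials are coefficient lists, lowest degree first.
def deg(p):
    d = len(p) - 1
    while d > 0 and p[d] == 0:
        d -= 1
    return d

def padd(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) ^ (q[i] if i < len(q) else 0)
            for i in range(n)]

def pmul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        if a:
            for j, b in enumerate(q):
                r[i + j] ^= gf_mul(a, b)
    return r

def pdivmod(a, b):
    """Polynomial long division over GF(2^8): returns (quotient, remainder)."""
    a, b = a[:], b[:deg(b) + 1]
    q = [0] * len(a)
    inv = gf_inv(b[-1])
    while any(a) and deg(a) >= deg(b):
        d = deg(a)
        c = gf_mul(a[d], inv)
        q[d - deg(b)] = c
        for i, bi in enumerate(b):
            a[d - deg(b) + i] ^= gf_mul(c, bi)
    return q, a

def key_equation(S, t):
    """Recurse on (x^(2t), S(x)) until the remainder degree drops below t.
    Returns (error locator sigma, error evaluator omega), up to scaling."""
    r0, r1 = [0] * (2 * t) + [1], S[:2 * t]
    u0, u1 = [0], [1]
    while any(r1) and deg(r1) >= t:
        quo, rem = pdivmod(r0, r1)
        r0, r1 = r1, rem
        u0, u1 = u1, padd(u0, pmul(quo, u1))
    return u1, r1
```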

Finding the K Least Fare Routes In the Distance-Based Fare Policy (거리비례제 요금정책에 따른 K요금경로탐색)

  • Lee, Mi-Yeong;Baek, Nam-Cheol;Mun, Byeong-Seop;Gang, Won-Ui
    • Journal of Korean Society of Transportation / v.23 no.1 / pp.103-114 / 2005
  • The transit fare resulting from the renovation of the public transit system in Seoul is determined according to the distance-based fare policy (DFP). Under DFP, the total fare a passenger pays is calculated by a basic-transfer-premium fare decision rule. The fixed basic fare is imposed when a passenger boards a mode and covers travel within the basic travel distance. The transfer fare is additionally imposed when a passenger switches from one mode to another whose fare is higher than the former's. The premium fare is imposed once the travel distance exceeds the basic travel distance, and it increases in proportion to the distance traveled beyond that. The purpose of this study is to propose an algorithm for finding K paths, sorted in increasing order of total transit fare, under DFP. For this purpose, a link-mode expansion technique is proposed to reduce the notation associated with travel modes, so that existing K shortest path algorithms designed for uni-modal network analysis become applicable to inter-modal transportation networks. An optimality condition for finding the K least fare routes is derived and a corresponding algorithm is developed. Case studies demonstrate that the proposed algorithm can play an important role in providing diverse public transit information covering fare, travel distance, travel time, and number of transfers.
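The paper's algorithm is fare-specific, but the underlying K-path search can be illustrated generically. The sketch below is a standard priority-queue search that expands each node at most K times (not the authors' method); the toy network and its costs are hypothetical stand-ins for fares.

```python
import heapq
from collections import defaultdict

def k_least_cost_routes(graph, src, dst, k):
    """Return up to k (cost, path) pairs from src to dst in increasing cost.
    graph: {node: [(neighbor, edge_cost), ...]}. Each node may be expanded
    at most k times, so the k cheapest walks are found; loop-free variants
    (e.g. Yen's algorithm) would be used for strictly simple paths."""
    pops = defaultdict(int)
    heap = [(0, src, [src])]
    routes = []
    while heap and len(routes) < k:
        cost, node, path = heapq.heappop(heap)
        if pops[node] >= k:
            continue
        pops[node] += 1
        if node == dst:
            routes.append((cost, path))
            continue
        for nxt, w in graph.get(node, []):
            heapq.heappush(heap, (cost + w, nxt, path + [nxt]))
    return routes

# Toy network: edge costs play the role of fares.
net = {'A': [('B', 2), ('C', 5)], 'B': [('C', 1), ('D', 4)],
       'C': [('D', 1)], 'D': []}
print(k_least_cost_routes(net, 'A', 'D', 3))
# [(4, ['A','B','C','D']), (6, ['A','B','D']), (6, ['A','C','D'])]
```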

Automatic Extraction of Roof Components from LiDAR Data Based on Octree Segmentation (LiDAR 데이터를 이용한 옥트리 분할 기반의 지붕요소 자동추출)

  • Song, Nak-Hyeon;Cho, Hong-Beom;Cho, Woo-Sug;Shin, Sung-Woong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.25 no.4 / pp.327-336 / 2007
  • 3D building modeling is one of the crucial components in building 3D geospatial information. Existing methods for 3D building modeling depend mainly on manual photogrammetric processes by a stereoplotter compiler, which take a great amount of time and effort. In addition, the automatic methods proposed in research papers and experimental trials are limited in describing the details of buildings and lack geometric accuracy; for a truly automatic approach, the boundary and shape of buildings must be extracted by a sophisticated algorithm. In recent years, airborne LiDAR data representing the earth's surface in 3D have been utilized in many different fields, but extracting clean and correct building boundaries from them without human intervention remains technically difficult. Airborne LiDAR data are much more feasible for reconstructing the roof tops of buildings whose boundary lines can be taken from existing digital maps. This paper proposes a method to reconstruct building roof tops using airborne LiDAR data together with building boundary lines from a digital map. The primary process is to perform octree-based segmentation on the airborne LiDAR data recursively in 3D space until no more points remain to be segmented. Once the octree-based segmentation is complete, the segmented patches are merged based on geometric spatial characteristics. The experimental results showed that the proposed method was capable of extracting various building roof components such as planes, gables, polyhedric shapes, and curved surfaces.
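The recursive octree segmentation step can be sketched compactly. The code below is an illustrative reconstruction, not the authors' implementation: the planarity test (smallest singular value of the centered patch) and the thresholds are assumptions, and the subsequent patch-merging stage is not shown.

```python
import numpy as np

def octree_segment(points, min_points=30, fit_tol=0.05, depth=0, max_depth=8):
    """Recursively split LiDAR points (n x 3 array) into octants until each
    cell is nearly planar or too small; returns a list of point patches."""
    if len(points) < min_points or depth >= max_depth:
        return [points] if len(points) else []
    centered = points - points.mean(axis=0)
    # Smallest singular value ~ RMS distance to the best-fit plane.
    rms = np.linalg.svd(centered, compute_uv=False)[-1] / np.sqrt(len(points))
    if rms < fit_tol:
        return [points]          # patch is planar enough: stop recursing
    mid = points.mean(axis=0)
    patches = []
    for code in range(8):        # 8 octants from the sign pattern around mid
        mask = np.ones(len(points), dtype=bool)
        for axis in range(3):
            if (code >> axis) & 1:
                mask &= points[:, axis] >= mid[axis]
            else:
                mask &= points[:, axis] < mid[axis]
        patches += octree_segment(points[mask], min_points,
                                  fit_tol, depth + 1, max_depth)
    return patches
```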

Super High-Resolution Image Style Transfer (초-고해상도 영상 스타일 전이)

  • Kim, Yong-Goo
    • Journal of Broadcast Engineering / v.27 no.1 / pp.104-123 / 2022
  • Style transfer based on neural networks provides very high quality results by reflecting the high-level structural characteristics of images, and has therefore recently attracted great attention. This paper deals with the resolution limitation imposed by GPU memory when performing such neural style transfer. Because the receptive field of the network has a fixed size, the gradient operation for style transfer computed on a partial image can be expected to produce the same result as the gradient operation using the entire image. Based on this idea, each component of the style transfer loss function is analyzed to obtain the conditions required for partitioning and padding, and to identify, among the information required for gradient calculation, the parts that depend on the entire input. By structuring this information so it can serve as an auxiliary constant input to partition-based gradient calculation, this paper develops a recursive algorithm for super high-resolution image style transfer. Since the proposed method performs style transfer by partitioning the input image into pieces a GPU can handle, it avoids the input-resolution limit that GPU memory would otherwise impose. With such super high-resolution support, the proposed method can render the unique style characteristics of detailed areas that can only be appreciated at super high resolution.
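The core observation, that a fixed receptive field lets partition-based computation match whole-image computation, can be demonstrated with a toy operator. In the sketch below a local-mean filter stands in for one network evaluation; tiles are processed with a halo equal to the receptive-field radius and stitched, and the assertion checks that the result equals the whole-image computation. The operator, padding mode, and tile size are illustrative assumptions.

```python
import numpy as np

def receptive_op(x, r):
    """Toy operator with receptive-field radius r (a local mean), standing
    in for one gradient evaluation of the style-transfer network."""
    h, w = x.shape
    out = np.empty((h - 2 * r, w - 2 * r))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = x[i:i + 2 * r + 1, j:j + 2 * r + 1].mean()
    return out

def tiled_apply(x, r, tile=128):
    """Process x tile by tile with a halo of r pixels per side, then stitch.
    Because the operator sees at most r pixels away, the stitched result
    equals the whole-image computation exactly."""
    h, w = x.shape
    xp = np.pad(x, r, mode='reflect')   # border handling is an assumption
    out = np.zeros((h, w))
    for i in range(0, h, tile):
        for j in range(0, w, tile):
            hi, wj = min(tile, h - i), min(tile, w - j)
            halo = xp[i:i + hi + 2 * r, j:j + wj + 2 * r]
            out[i:i + hi, j:j + wj] = receptive_op(halo, r)
    return out

# The partition-based result matches the full-image result.
img = np.random.rand(260, 300)
full = receptive_op(np.pad(img, 4, mode='reflect'), 4)
assert np.allclose(full, tiled_apply(img, 4, tile=128))
```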

A Hybrid Randomizing Function Based on Elias and Peres Method (일라이어스와 페레즈의 방식에 기반한 하이브리드 무작위화 함수)

  • Pae, Sung-Il;Kim, Min-Su
    • Journal of the Korea Society of Computer and Information / v.17 no.12 / pp.149-158 / 2012
  • Proposed is a hybrid randomizing function using two asymptotically optimal randomizing functions: the Elias function and the Peres function. A randomizing function is a mathematical abstraction of producing uniform random bits from a biased source of randomness. It is known that the output rates of the Elias and Peres functions approach the information-theoretic upper bound. In particular, for each fixed input length, the Elias function is optimal. However, its computation is relatively complicated and depends on the input length. By contrast, the Peres function is defined by a simple recursion, so its computation is much simpler, uniform over input lengths, and runs with a small memory footprint. In view of this tradeoff between computational complexity and output efficiency, we propose a hybrid randomizing function that combines the strengths of the two randomizing functions, and we analyze it.
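The simple recursion defining the Peres function is short enough to show directly. The sketch below follows the standard published definition (von Neumann extraction, then recursion on the XOR sequence and on the discarded equal pairs); it illustrates the Peres half only, not the authors' hybrid construction.

```python
def peres(bits):
    """Peres extractor: emit the von Neumann bits of each pair, then
    recursively extract from the pairwise XORs and from the first bits
    of the discarded equal pairs."""
    if len(bits) < 2:
        return []
    pairs = list(zip(bits[::2], bits[1::2]))
    out = [a for a, b in pairs if a != b]      # von Neumann step
    xors = [a ^ b for a, b in pairs]           # one bit per pair
    equal = [a for a, b in pairs if a == b]    # leftover randomness
    return out + peres(xors) + peres(equal)

# Usage: bits from a biased coin come out nearly unbiased.
import random
biased = [1 if random.random() < 0.7 else 0 for _ in range(1 << 14)]
unbiased = peres(biased)
print(len(unbiased), sum(unbiased) / len(unbiased))  # mean near 0.5
```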

A Non-annotated Recurrent Neural Network Ensemble-based Model for Near-real Time Detection of Erroneous Sea Level Anomaly in Coastal Tide Gauge Observation (비주석 재귀신경망 앙상블 모델을 기반으로 한 조위관측소 해수위의 준실시간 이상값 탐지)

  • LEE, EUN-JOO;KIM, YOUNG-TAEG;KIM, SONG-HAK;JU, HO-JEONG;PARK, JAE-HUN
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY / v.26 no.4 / pp.307-326 / 2021
  • Real-time sea level observations from tide gauges include missing and erroneous values; the latter can be classified as abnormal values by a quality control procedure. Although the 3𝜎 (three standard deviations) rule is generally applied to eliminate them, it is difficult to apply to sea level data, where extreme values can occur due to weather events and erroneous values can occur even within the 3𝜎 range. The artificial intelligence model set designed in this study consists of non-annotated recurrent neural networks and ensemble techniques that do not require pre-labeling of the abnormal values. The developed model can identify an erroneous value within 20 minutes of the tide gauge recording an abnormal sea level. The validated model separates normal and abnormal values well, both in normal times and during weather events. It was also confirmed that abnormal values can be detected even in years whose sea level data were not used for training. The artificial neural network algorithm utilized in this study is not limited to coastal sea level, and can hence be extended to detecting erroneous values in various oceanic and atmospheric data.
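The unannotated ensemble idea, flagging a point when the ensemble's prediction residual is abnormally large, can be sketched without any labels. In the sketch below, simple least-squares autoregressors are an explicit stand-in for the paper's recurrent networks, and a robust MAD threshold replaces whatever criterion the authors used; all parameters are illustrative.

```python
import numpy as np

def make_windows(series, lag):
    X = np.stack([series[i:i + lag] for i in range(len(series) - lag)])
    return X, series[lag:]

def fit_predictor(X, y):
    """Least-squares autoregressive predictor (a stand-in for an RNN)."""
    w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    return lambda Z: np.c_[Z, np.ones(len(Z))] @ w

def flag_anomalies(series, lags=(6, 12, 24), k=5.0):
    """Unannotated ensemble detection: each member predicts the next sample
    from a different history length; a point is flagged when the median
    absolute residual across members exceeds k robust sigmas."""
    series = np.asarray(series, dtype=float)
    max_lag = max(lags)
    residuals = []
    for lag in lags:
        X, y = make_windows(series, lag)
        r = np.abs(y - fit_predictor(X, y)(X))
        residuals.append(r[max_lag - lag:])   # align to a common time base
    med = np.median(np.stack(residuals), axis=0)
    sigma = 1.4826 * np.median(np.abs(med - np.median(med)))
    return np.where(med > k * sigma)[0] + max_lag  # indices into series

# Synthetic tide-like signal with one injected erroneous spike.
rng = np.random.default_rng(0)
t = np.arange(3000)
sea = np.sin(2 * np.pi * t / 149) + 0.05 * rng.standard_normal(3000)
sea[1500] += 2.0
print(flag_anomalies(sea))   # includes index 1500
```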

Efficient Processing of Transitive Closure Queries in Ontology using Graph Labeling (온톨로지에서의 그래프 레이블링을 이용한 효율적인 트랜지티브 클로저 질의 처리)

  • Kim Jongnam;Jung Junwon;Min Kyeung-Sub;Kim Hyoung-Joo
    • Journal of KIISE:Databases / v.32 no.5 / pp.526-535 / 2005
  • An ontology is a methodology for describing specific concepts and their relationships, and it is considered increasingly important as the semantic web and a variety of knowledge management systems gain attention. An ontology uses the relationships among concepts to represent the concrete semantics of a specific concept. When we want to extract useful information from an ontology, we frequently have to process transitive relationships, because most relationships among concepts are transitive. Processing such transitive closure queries typically requires recursive calls, which are costly. This paper describes an efficient technique for processing transitive closure queries in an ontology. To this end, we examine how current systems approach transitive closure queries, and we propose a technique based on a graph labeling scheme. Assuming a large ontology, we show that our approach yields relative efficiency in processing transitive closure queries.
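The abstract does not detail the labeling scheme, but interval labeling is one common way to answer transitive closure queries without recursion at query time. In this hedged sketch, a DFS assigns (pre, post) numbers over a concept hierarchy so that subsumption becomes two integer comparisons; it applies directly to trees (or a DAG's spanning tree), and is an illustration rather than the authors' scheme.

```python
def label_tree(children, root):
    """DFS assigns (pre, post) interval labels; recursion depth equals the
    hierarchy depth. children: {concept: [subconcept, ...]}."""
    pre, post, clock = {}, {}, [0]
    def dfs(u):
        pre[u] = clock[0]; clock[0] += 1
        for c in children.get(u, []):
            dfs(c)
        post[u] = clock[0]; clock[0] += 1
    dfs(root)
    return pre, post

def subsumes(v, u, pre, post):
    """True if concept v transitively subsumes u: two integer comparisons
    replace the recursive closure query."""
    return pre[v] <= pre[u] and post[u] <= post[v]

taxonomy = {'thing': ['animal', 'artifact'], 'animal': ['dog', 'cat']}
pre, post = label_tree(taxonomy, 'thing')
print(subsumes('thing', 'dog', pre, post))        # True
print(subsumes('animal', 'artifact', pre, post))  # False
```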

Finding a Minimum Fare Route in the Distance-Based System (거리비례제 요금부과에 따른 최소요금경로탐색)

  • Lee, Mee-Young;Baik, Nam-Cheol;Nam, Doo-Hee;Shin, Seon-Gil
    • Journal of Korean Society of Transportation / v.22 no.6 / pp.101-108 / 2004
  • The new transit fare in the Seoul metropolitan area is determined according to the distance-based fare system (DBFS). The total fare in DBFS consists of three parts: (1) basic fare, (2) transfer fare, and (3) extra fare. The fixed basic fare for each mode is charged when a passenger boards, and it covers travel within the basic travel distance. The transfer fare may be added when a passenger switches from the present mode to another. The extra fare is imposed if the total travel distance exceeds the basic travel distance, growing with distance according to the extra-fare-charging rule. This study proposes an algorithm for finding the minimum fare route in DBFS. It first exploits a link-label-based searching method, which lets shortest path algorithms be implemented without network expansion at junction nodes in inter-modal transit networks. Moreover, a link-expansion technique is adopted so that each mode's travel is treated as duplicated links having the same start and end nodes but different link features. Notation associated with modes can therefore be reduced, and the existing link-based shortest path algorithm remains applicable without loss of generality. For fare calculation, a mathematical formula is proposed that embraces the fare-charging process using the search over two adjacent links, starting from the origin. A shortest path algorithm for finding a minimum fare route is derived by converting the formula into a recursive form. The implementation of the algorithm is evaluated on a simple test network.
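A link-label search for a fare-minimal route can be sketched as a Dijkstra-like loop whose labels live on links rather than nodes, so mode-dependent charges need no junction-node expansion. The fare parameters, the toy network, and the coarse distance binning below are illustrative assumptions, not the paper's formula; because the extra fare depends on cumulative distance, the domination test here is a simplification.

```python
import heapq
from itertools import count

# Hypothetical fare parameters, purely for illustration.
BASIC_FARE, BASIC_DIST, EXTRA_RATE, TRANSFER_FARE = 1250, 10.0, 100, 200

def fare_step(dist_before, link, prev_mode):
    """Incremental fare for traversing `link` after `dist_before` km,
    following a simplified basic/transfer/extra decision rule."""
    fare = BASIC_FARE if prev_mode is None else (
        TRANSFER_FARE if link['mode'] != prev_mode else 0)
    dist_after = dist_before + link['dist']
    extra_km = max(0.0, dist_after - BASIC_DIST) - max(0.0, dist_before - BASIC_DIST)
    return fare + int(round(extra_km)) * EXTRA_RATE

def min_fare_route(links, origin, dest):
    """Link-label search: states carry (node, last mode, distance) so the
    fare recursion can be evaluated per link; binning the distance keeps
    the state space finite, at some loss of exactness."""
    out = {}
    for l in links:
        out.setdefault(l['from'], []).append(l)
    tie = count()                      # heap tiebreaker
    heap = [(0, next(tie), 0.0, origin, None)]
    best = {}
    while heap:
        fare, _, dist, node, mode = heapq.heappop(heap)
        if node == dest:
            return fare
        key = (node, mode, round(dist, 1))
        if best.get(key, float('inf')) <= fare:
            continue
        best[key] = fare
        for l in out.get(node, []):
            heapq.heappush(heap, (fare + fare_step(dist, l, mode), next(tie),
                                  dist + l['dist'], l['to'], l['mode']))
    return None

# Toy inter-modal network (modes and distances are made up).
links = [
    {'from': 'O', 'to': 'A', 'mode': 'bus', 'dist': 6.0},
    {'from': 'A', 'to': 'D', 'mode': 'subway', 'dist': 7.0},
    {'from': 'O', 'to': 'D', 'mode': 'bus', 'dist': 14.0},
]
print(min_fare_route(links, 'O', 'D'))  # 1650: direct bus beats transferring
```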

Prediction of Dissolved Oxygen in Jindong Bay Using Time Series Analysis (시계열 분석을 이용한 진동만의 용존산소량 예측)

  • Han, Myeong-Soo;Park, Sung-Eun;Choi, Youngjin;Kim, Youngmin;Hwang, Jae-Dong
    • Journal of the Korean Society of Marine Environment & Safety / v.26 no.4 / pp.382-391 / 2020
  • In this study, we used artificial intelligence algorithms to predict dissolved oxygen in Jindong Bay. Missing values in the observational data were imputed with the Bidirectional Recurrent Imputation for Time Series (BRITS) deep learning algorithm. The Auto-Regressive Integrated Moving Average (ARIMA) model, a widely used time series analysis method, and the Long Short-Term Memory (LSTM) deep learning method were then used to predict dissolved oxygen, and their accuracies were compared. BRITS determined the missing values with high accuracy in the surface layer, but its accuracy was low in the lower layers and unstable in the middle layer due to the experimental conditions. In the middle and bottom layers, the LSTM model showed higher accuracy than the ARIMA model, whereas the ARIMA model showed superior performance in the surface layer.
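The ARIMA side of such a comparison is straightforward to reproduce with statsmodels. The sketch below fits an ARIMA model and scores a held-out segment by RMSE; the order (2, 1, 2), the holdout length, and the file name are placeholders, not the settings tuned in the paper.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def arima_forecast(series, steps, order=(2, 1, 2)):
    """Fit an ARIMA model and forecast `steps` values ahead."""
    fit = ARIMA(series, order=order).fit()
    return fit.forecast(steps=steps)

def rmse(a, b):
    return float(np.sqrt(np.mean((np.asarray(a) - np.asarray(b)) ** 2)))

# Hypothetical usage: hold out the last 24 observations of a DO series.
# do = np.loadtxt("jindong_do.csv")        # placeholder file name
# train, test = do[:-24], do[-24:]
# print(rmse(arima_forecast(train, 24), test))
```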

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems / v.16 no.3 / pp.77-97 / 2010
  • Market timing is an investment strategy used to obtain excess return from the financial market. In general, detecting market timing means determining when to buy and sell so as to earn excess return from trading. In many market timing systems, trading rules serve as the engine that generates trade signals. Some researchers have instead proposed rough set analysis as a proper tool for market timing, because its control function keeps it from generating a trade signal when the market pattern is uncertain. Data for rough set analysis must be discretized from numeric values, because rough sets only accept categorical data. Discretization searches for proper "cuts" in the numeric data that determine intervals; all values lying within an interval are transformed into the same value. In general, there are four methods for data discretization in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes a number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples fall into each interval. Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, gathered through literature review or interviews. Minimum entropy scaling recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization finds candidate cuts by naïve scaling of the data, then selects the optimized discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there is little research on how the various data discretization methods affect trading performance. In this study, we compare stock market timing models using rough set analysis with various data discretization methods. The research data are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market; it is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and status in their industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. This study uses popular technical indicators as independent variables. The experimental results show that the most profitable method on the training sample is naïve and Boolean reasoning-based discretization, but expert's knowledge-based discretization is the most profitable on the validation sample; the latter also produced robust performance across both the training and validation samples. We also compared rough set analysis with a decision tree, experimenting with C4.5 for this purpose. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
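For the minimum entropy scaling described above, a recursive partitioning sketch in the Fayyad-Irani spirit follows; the size/depth stopping rule replaces the usual MDL criterion, and the parameters and data are illustrative, not the study's setup.

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def min_entropy_cuts(values, labels, min_size=20, depth=0, max_depth=4):
    """Recursively pick the cut minimizing size-weighted class entropy,
    then recurse on each side; returns the sorted list of cut points."""
    values, labels = np.asarray(values), np.asarray(labels)
    order = np.argsort(values)
    v, y = values[order], labels[order]
    n = len(v)
    if n < 2 * min_size or depth >= max_depth or entropy(y) == 0.0:
        return []
    best_i, best_e = None, entropy(y)
    for i in range(min_size, n - min_size):
        if v[i] == v[i - 1]:
            continue                 # no cut between equal values
        e = (i * entropy(y[:i]) + (n - i) * entropy(y[i:])) / n
        if e < best_e:
            best_i, best_e = i, e
    if best_i is None:
        return []
    cut = (v[best_i - 1] + v[best_i]) / 2
    return sorted(
        min_entropy_cuts(v[:best_i], y[:best_i], min_size, depth + 1, max_depth)
        + [cut]
        + min_entropy_cuts(v[best_i:], y[best_i:], min_size, depth + 1, max_depth))

# Two-class synthetic feature: the recovered cut lands near the boundary.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(4, 1, 200)])
y = np.array([0] * 200 + [1] * 200)
print(min_entropy_cuts(x, y))
```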