• Title/Abstract/Keyword: Runs

1,521 search results (processing time: 0.03 s)

Syndrome Check aided Fast-SSCANL Decoding Algorithm for Polar Codes

  • Choangyang Liu;Wenjie Dai;Rui Guo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 18, No. 5
    • /
    • pp.1412-1430
    • /
    • 2024
  • The soft cancellation list (SCANL) decoding algorithm for polar codes runs L soft cancellation (SCAN) decoders with different decoding factor graphs. Although it achieves better decoding performance than the SCAN algorithm, it has high latency. In this paper, a fast simplified SCANL (Fast-SSCANL) algorithm that runs L independent Fast-SSCAN decoders is proposed. In the Fast-SSCANL decoder, special nodes in each factor graph are first identified, and corresponding low-latency decoding approaches for each special node are proposed. Then, a syndrome check aided Fast-SSCANL (SC-Fast-SSCANL) algorithm is put forward: ordinary nodes that satisfy the syndrome check execute a hard decision directly without traversing the factor graph, reducing the decoding latency further. Simulation results show that the Fast-SSCANL and SC-Fast-SSCANL algorithms achieve the same BER performance as the SCANL algorithm with lower latency. The Fast-SSCANL algorithm reduces latency by more than 83% compared with SCANL, and the SC-Fast-SSCANL algorithm by more than 85%, regardless of code length and code rate.
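The syndrome check that gates the hard decision can be illustrated generically: for a binary linear code with parity-check matrix H, a hard-decision vector is accepted when its GF(2) syndrome is all-zero. This is a minimal sketch of that idea only; the toy (7,4) Hamming code below is an assumption for illustration, not a polar code or the paper's construction.

```python
def syndrome(bits, H):
    """Compute the GF(2) syndrome s = H * bits^T. An all-zero syndrome
    means the hard decision is already a valid codeword, so further
    factor-graph traversal can be skipped."""
    return [sum(h * b for h, b in zip(row, bits)) % 2 for row in H]

def passes_check(bits, H):
    """True if the hard-decision vector satisfies every parity check."""
    return all(s == 0 for s in syndrome(bits, H))

# Parity-check matrix of the (7,4) Hamming code (illustrative only;
# columns are the binary representations of 1..7)
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
```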

Comprehensive evaluation of baseball player's offensive ability by use of simulation

  • 김남기;김선호
    • Journal of the Korean Data and Information Science Society
    • /
    • Vol. 26, No. 4
    • /
    • pp.865-874
    • /
    • 2015
  • In this study, simulation is used to comprehensively evaluate a batter's offensive ability, that is, the run-production ability encompassing both hitting ability as a batter and base-running ability as a runner. To this end, a scoring index is computed for each batter, where the scoring index is the expected runs per game when all batters in a team's lineup are the same single player. Data from the 2014 Korean professional baseball season were used as simulation input; as main outputs, the scoring indices of the top ten batters, of the nine clubs, and of the 2014 Korean professional league as a whole are presented. The scoring index obtained in this way can be used not only for comprehensive evaluation of the offensive ability of batters and teams, but also for selecting national-team and starting players and for setting player salaries.
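The scoring-index idea above can be sketched with a small Monte Carlo simulation: fill every lineup slot with the same batter, play nine innings per game, and average the runs. This is a minimal sketch with hypothetical event probabilities and deliberately naive base-running (every runner advances exactly as many bases as the batter), not the paper's 2014 KBO data or its simulation model.

```python
import random

# Hypothetical per-plate-appearance outcome probabilities for one batter
# (illustrative only, not taken from the paper's data).
PROBS = {"out": 0.65, "walk": 0.08, "single": 0.17,
         "double": 0.06, "triple": 0.01, "hr": 0.03}

def simulate_half_inning(rng):
    """Play one half-inning with the same batter in every lineup slot."""
    outs, runs, bases = 0, 0, [0, 0, 0]   # occupancy of 1B/2B/3B
    while outs < 3:
        r, acc = rng.random(), 0.0
        for event, p in PROBS.items():    # sample one outcome
            acc += p
            if r < acc:
                break
        if event == "out":
            outs += 1
        elif event == "walk":
            # advance forced runners only
            if bases[0]:
                if bases[1]:
                    if bases[2]:
                        runs += 1
                    bases[2] = 1
                bases[1] = 1
            bases[0] = 1
        else:
            advance = {"single": 1, "double": 2, "triple": 3, "hr": 4}[event]
            # naive base-running: every runner moves up exactly `advance` bases
            for base in (2, 1, 0):
                if bases[base]:
                    bases[base] = 0
                    if base + advance >= 3:
                        runs += 1
                    else:
                        bases[base + advance] = 1
            if advance >= 4:
                runs += 1                 # batter scores on a home run
            else:
                bases[advance - 1] = 1
    return runs

def scoring_index(n_games=2000, seed=42):
    """Expected runs per 9-inning game for the all-same-batter lineup."""
    rng = random.Random(seed)
    total = sum(simulate_half_inning(rng) for _ in range(9 * n_games))
    return total / n_games
```

With enough simulated games the average stabilizes, mirroring how the paper derives an expected runs-per-game figure for each batter.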

River Flow Forecasting Model for the Youngsan Estuary Reservoir Operations (I) - Estimation of Runoff Hydrographs at Naju Station -

  • 박창언;박승우
    • 한국농공학회지
    • /
    • Vol. 36, No. 4
    • /
    • pp.95-102
    • /
    • 1994
  • This series of papers consists of three parts describing the development, calibration, and applications of flood forecasting models for the Youngsan Estuarine Dam located at the mouth of the Youngsan River. This paper discusses the hydrologic model for inflow simulation at Naju station, which constitutes 64 percent of the drainage basin of 3,521.6 km$^2$ in area. A simplified TANK model was formulated to simulate hourly runoff from rainfall; the model parameters were optimized using historical storm data and validated against the records. The results of this paper are summarized as follows. 1. The simplified TANK model conceptualizes the hourly rainfall-runoff relationship at a watershed with four tanks in series having five runoff outlets. The runoff from each outlet was assumed to be proportional to the storage exceeding a threshold value, and each tank was linked to the one above it by a drainage hole. 2. Fifteen storm events were selected from four years of records, 1984 to 1987, varying from 81 to 289 mm. Watershed-averaged hourly rainfall data were determined from fifteen raingaging stations using the Thiessen method. Missing and unrealistic records at a few stations were estimated or replaced with values determined from adjacent stations using a reciprocal distance square method. 3. A univariate scheme was adopted to calibrate the model parameters against historical records. Some of the calibrated parameters were statistically related to antecedent precipitation. The model simulated streamflow close to the observed, with a mean coefficient of determination of 0.94 over all storm events. 4. The simulated streamflow was in good agreement with the historical records for ungaged-condition simulation runs, with a mean coefficient of determination of 0.93, nearly the same as for the calibration runs. This indicates that the model performs very well in flood forecasting situations for the watershed.
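A tank model of the kind described, four tanks in series with side outlets whose discharge is proportional to storage above a threshold, can be sketched as follows. The outlet heights, coefficients, and rainfall series below are illustrative assumptions, not the paper's calibrated parameters for the Naju watershed.

```python
def tank_model(rainfall, outlets, drains):
    """Simulate hourly runoff (mm/h) from hourly rainfall (mm/h).

    rainfall : list of hourly rainfall depths
    outlets  : per tank, list of (threshold, coefficient) side outlets
    drains   : per tank, bottom-drain coefficient feeding the tank below
    """
    n = len(outlets)
    s = [0.0] * n                      # storage in each tank
    hydrograph = []
    for r in rainfall:
        s[0] += r                      # rain enters the top tank
        q = 0.0
        for i in range(n):
            # side outlets discharge storage above their threshold
            for h, c in outlets[i]:
                if s[i] > h:
                    out = c * (s[i] - h)
                    s[i] -= out
                    q += out
            # bottom drain percolates to the next tank down
            d = drains[i] * s[i]
            s[i] -= d
            if i + 1 < n:
                s[i + 1] += d
        hydrograph.append(q)
    return hydrograph

# Illustrative geometry: five side outlets across four tanks
# (NOT the paper's calibrated parameters).
OUTLETS = [[(15.0, 0.10), (40.0, 0.15)],   # top tank: two outlets
           [(10.0, 0.05)],
           [(5.0, 0.03)],
           [(0.0, 0.01)]]
DRAINS = [0.10, 0.05, 0.02, 0.0]           # bottom tank has no drain
```

Because every outlet and drain only removes water that is already in storage, the simulated runoff can never exceed the total rainfall, which is a quick mass-balance sanity check on any parameter set.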


Run-to-Run Fault Detection of Reactive Ion Etching Using Support Vector Machine

  • 박영국;홍상진;한승수
    • 한국정보통신학회논문지
    • /
    • Vol. 10, No. 5
    • /
    • pp.962-969
    • /
    • 2006
  • In today's high-density semiconductor manufacturing environment, detecting malfunctioning process equipment is critical to maximizing the productivity of the reactive ion etching process. To demonstrate the importance of fault detection in production, a Support Vector Machine (SVM) was used to judge process faults in real time. Reactive ion etch tool data were obtained from semiconductor process equipment and consist of 59 variables, each sampled at 10 data points per second. To model 11 parameters of the etch runs, an SVM model was constructed from baseline-run data and then verified with both normal and abnormal run data. Adopting the control limits commonly used in statistical process control, SVM-model-based control limits reflecting the random variation inherent in normal data were established, and fault detection was performed against those limits. Using the SVM, run-to-run fault detection for RIE demonstrated a 0% error rate on normal run data.
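The control-limit step described above can be sketched generically: derive mean ± k·sigma limits from baseline-run scores, then flag any run whose score falls outside them. This is a minimal sketch of the SPC-style limit logic only; the generic per-run scores below stand in for the paper's SVM decision values, which are not reproduced here.

```python
import statistics

def control_limits(baseline_scores, k=3.0):
    """Derive lower/upper control limits from baseline-run scores,
    as in statistical process control (mean +/- k * sigma)."""
    mu = statistics.fmean(baseline_scores)
    sigma = statistics.stdev(baseline_scores)
    return mu - k * sigma, mu + k * sigma

def detect_faults(run_scores, limits):
    """Flag each run whose score falls outside the control limits."""
    lo, hi = limits
    return [not (lo <= s <= hi) for s in run_scores]
```

Because the limits are fitted to baseline variation, normal runs stay inside them (the paper's 0% error rate on normal data), while a run with an anomalous score is flagged.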

Development of a Computer-Assisted Microbiological Quality Assurance Program for Hospital Foodservice Operations

  • 곽동경;장혜자;주세영
    • 한국식품조리과학회지
    • /
    • Vol. 8, No. 2
    • /
    • pp.137-145
    • /
    • 1992
  • A computer-assisted microbiological quality assurance program was developed based on HACCP data obtained from a 500-bed general hospital by assessing time and temperature conditions and the microbiological quality of six categories of menu items according to the process of food product flow. The purpose of the study was to develop a computer-assisted microbiological quality assurance program in order to simplify the assessment procedures and to provide maximum assurance to foodservice personnel and the public. A 16-bit personal computer compatible with the IBM-PC/AT was used. The database files and processing programs were created using the dBASE III Plus package. The contents of the computerized system are summarized as follows: 1. When the input program for hazard analysis runs, a series of questions is asked to determine hazards and assess their severity and risks. Critical control points (CCPs) and monitoring methods for CCPs are identified and saved in the master file. 2. The output and search programs for hazard analysis comprise the recipe data file lists for the six categories, a code identification list, and the HACCP identification of a specific menu item. 3. When the user selects a specific recipe category from the six presented on the screen and runs the data file list, a menu item list, a CCP list, and a monitoring methods list are generated. When the code search program runs, menu names, ingredients, amounts, and a series of codes are generated. 4. When the user types in a menu item and an identification code, the critical control points and monitoring methods are generated for that menu item.


DEEP-South: Automated Scheduler and Data Pipeline

  • Yim, Hong-Suh;Kim, Myung-Jin;Roh, Dong-Goo;Park, Jintae;Moon, Hong-Kyu;Choi, Young-Jun;Bae, Young-Ho;Lee, Hee-Jae;Oh, Young-Seok
    • 천문학회보
    • /
    • Vol. 41, No. 1
    • /
    • pp.54.3-55
    • /
    • 2016
  • The DEEP-South Scheduling and Data reduction System (DS SDS) consists of two separate software subsystems: Headquarters (HQ) at the Korea Astronomy and Space Science Institute (KASI), and SDS Data Reduction (DR) at the Korea Institute of Science and Technology Information (KISTI). HQ runs the DS Scheduling System (DSS), the DS database (DB), and Control and Monitoring (C&M), designed to monitor and manage overall SDS actions. DR hosts the Moving Object Detection Program (MODP), the Asteroid Spin Analysis Package (ASAP), and Data Reduction Control & Monitor (DRCM). MODP and ASAP conduct data analysis, while DRCM checks whether they are working properly. The functions of the SDS are three-fold: (1) DSS plans schedules for the three KMTNet stations, (2) DR performs data analysis, and (3) C&M checks whether DSS and DR function properly. DSS prepares a list of targets, aids users in deciding observation priority, calculates exposure times, schedules nightly runs, and archives data using a Database Management System (DBMS). MODP is designed to discover moving objects on CCD images, while ASAP performs photometry and reconstructs their lightcurves. Based on ASAP lightcurve analysis and/or MODP astrometry, DSS schedules follow-up runs to be conducted with a part of, or all three, KMTNet telescopes.


Analysis and Distribution of Esculetin in Plasma and Tissues of Rats after Oral Administration

  • Kim, Ji-Sun;Ha, Tae-Youl;Ahn, Jiyun;Kim, Suna
    • Preventive Nutrition and Food Science
    • /
    • Vol. 19, No. 4
    • /
    • pp.321-326
    • /
    • 2014
  • In this study, we developed a method to quantify esculetin (6,7-dihydroxycoumarin) in plasma and tissues using HPLC coupled with ultraviolet detection and measured the level of esculetin in rat plasma after oral administration. The calibration curve for esculetin was linear in the range of 4.8 ng/mL to 476.2 ng/mL, with a correlation coefficient (r²) of 0.996, a limit of detection of 33.2 ng/mL, and a limit of quantification of 100.6 ng/mL. Recovery rates for the 95.2 ng/mL and 190.5 ng/mL samples were 95.2% and 100.3% within-run, and 104.8% and 101.0% between-run, respectively. The relative standard deviation was less than 7% for both runs. In the pharmacokinetic analysis, the peak plasma esculetin level was reached 5 min after administration (Cmax = 173.3 ng/mL; T1/2 = 45 min; AUC0-180 min = 5,167.5 ng·min/mL). At 180 min post-administration (i.e., after euthanasia), esculetin was only detectable in the liver (30.87 ± 11.33 ng/g) and the kidney (20.29 ± 7.02 ng/g).
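Quantification against a linear calibration curve like the one above amounts to an ordinary least-squares fit and an inversion of the fitted line. This is a minimal sketch of that standard procedure; the standard concentrations and responses below are hypothetical, not the paper's esculetin data.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b for a calibration curve
    (x = standard concentration, y = detector response)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx
    b = my - a * mx
    return a, b

def back_calculate(response, a, b):
    """Invert the calibration line to get a concentration from a
    measured detector response."""
    return (response - b) / a
```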

Development of Computer Vision System for Individual Recognition and Feature Information of Cow (I) - Individual recognition using the speckle pattern of cow -

  • 이종환
    • Journal of Biosystems Engineering
    • /
    • Vol. 27, No. 2
    • /
    • pp.151-160
    • /
    • 2002
  • Cow image processing techniques are useful not only for recognizing individuals but also for establishing image databases and analyzing the shape of cows. A Holstein cow usually has a unique speckle pattern. In this study, individual recognition of cows was carried out using the speckle pattern and a content-based image retrieval technique. Sixty images of 16 cows were captured under outdoor illumination; these were complicated images due to shadows, obstacles, and the walking posture of the cows. Sixteen images were selected as the reference image for each cow, and 44 query images were used to evaluate the efficiency of individual recognition by matching against each reference image. Run-lengths and positions of runs across the speckle area were calculated from 40 horizontal line profiles over the ROI (region of interest) in each cow body image after three passes of 5×5 median filtering. A similarity measure for recognizing individual cows was calculated using the Euclidean distance of the normalized G-frame histogram (GH), normalized speckle run-length (BRL), and normalized x and y positions (BRX, BRY) of speckle runs. The study evaluated the efficiency of individual recognition using Recall (success rate) and AVRR (average rank of relevant images). The success rate of individual recognition was 100% when GH, BRL, BRX, and BRY were used as image query indices. It was concluded that the histogram as a global property and the speckle-run information as local properties are good image features for individual recognition, and that the developed recognition system is reliable.
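The run-based features described above can be sketched as follows: extract the start position and length of each speckle run from a binarized horizontal line profile, then compare feature vectors with a Euclidean distance. The threshold and the profile values are illustrative assumptions; this is not the paper's exact feature pipeline or its normalization.

```python
def speckle_runs(profile, threshold=128):
    """Return (start, length) of each dark (speckle) run in a
    horizontal line profile of grey values."""
    runs, start = [], None
    for i, v in enumerate(profile):
        if v < threshold and start is None:
            start = i                        # a speckle run begins
        elif v >= threshold and start is not None:
            runs.append((start, i - start))  # the run ends
            start = None
    if start is not None:                    # run extends to the edge
        runs.append((start, len(profile) - start))
    return runs

def euclidean(u, v):
    """Euclidean distance between two equal-length feature vectors,
    as used for the GH/BRL/BRX/BRY similarity measure."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
```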

Approximate Periods of Strings based on Distance Sum for DNA Sequence Analysis

  • 정주희;김영호;나중채;심정섭
    • 정보처리학회논문지:소프트웨어 및 데이터공학
    • /
    • Vol. 2, No. 2
    • /
    • pp.119-122
    • /
    • 2013
  • Repetitive structures in strings, such as periods, are studied in diverse fields including data compression, computer-assisted music analysis, and bioinformatics. In bioinformatics, periods are closely related to tandem repeats, in which a genomic sequence appears repeatedly, and this in turn relates to research on approximate periods using approximate string matching. In this paper, we define distance-sum-based approximate periods, which complement the existing definition of approximate periods, and present results on them. Given strings p and x of lengths m and n, respectively, we present algorithms that compute the minimum distance-sum-based approximate period distance of p with respect to x in O(mn²) time for the weighted edit distance, O(mn) time for the edit distance, and O(n) time for the Hamming distance.
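The distance-sum idea for the Hamming case can be illustrated with a simplified sketch: partition x into consecutive length-m blocks (comparing the trailing partial block against the corresponding prefix of p) and sum the per-block Hamming distances, which takes O(n) time. This fixed-partition scheme is an illustrative assumption, not the paper's exact definition or algorithm.

```python
def hamming(u, v):
    """Hamming distance between two equal-length strings."""
    return sum(a != b for a, b in zip(u, v))

def distance_sum_period(p, x):
    """Sum of Hamming distances between p and each consecutive
    length-m block of x; the trailing partial block is compared
    against the matching prefix of p. Runs in O(n) time."""
    m = len(p)
    total = 0
    for i in range(0, len(x), m):
        block = x[i:i + m]
        total += hamming(p[:len(block)], block)
    return total
```

A distance sum of 0 means p is an exact period of x under this partition; small positive values indicate an approximate period, which is how tandem repeats with mutations show up in DNA sequences.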