• Title/Summary/Keyword: large pipeline

An 8b 220 MS/s 0.25 um CMOS Pipeline ADC with On-Chip RC-Filter Based Voltage References (온-칩 RC 필터 기반의 기준전압을 사용하는 8b 220 MS/s 0.25 um CMOS 파이프라인 A/D 변환기)

  • 이명진;배현희;배우진;조영재;이승훈;김영록
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.41 no.10
    • /
    • pp.69-75
    • /
    • 2004
  • This work proposes an 8b 220 MS/s 230 mW 3-stage pipeline CMOS ADC with on-chip filters for temperature- and power-insensitive voltage references. The proposed RC low-pass filters improve switching noise performance and reduce reference settling time under heavy R and C loads without the conventional large off-chip bypass capacitors. The prototype ADC, fabricated in a 0.25 um CMOS process, occupies an active die area of 2.25 $\textrm{mm}^2$ and shows a measured maximum DNL and INL of 0.43 LSB and 0.82 LSB, respectively. The ADC maintains an SNDR of 43 dB and 41 dB up to a 110 MHz input at 200 MS/s and 220 MS/s, respectively, while the SNDR at a 500 MHz input is degraded by only 3 dB relative to the SNDR at the 110 MHz input.
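
As a rough, hypothetical illustration of the trade-off this abstract describes, the sketch below computes the noise bandwidth and 0.5-LSB settling time of a first-order RC reference filter; the R and C values are arbitrary assumptions for illustration, not figures from the paper.

```python
# Illustrative sketch (not from the paper): how an on-chip RC low-pass filter
# sets the reference noise bandwidth and the first-order settling time.
import math

R = 2e3      # assumed on-chip resistance, ohms (illustrative)
C = 50e-12   # assumed on-chip capacitance, farads (illustrative)

tau = R * C                           # time constant of the reference filter
f_3db = 1.0 / (2.0 * math.pi * tau)   # -3 dB noise bandwidth
t_settle_8b = tau * math.log(2 ** 9)  # time to settle within 0.5 LSB at 8 bits

print(f"tau = {tau * 1e9:.1f} ns, f_3dB = {f_3db / 1e6:.2f} MHz, "
      f"0.5-LSB settling = {t_settle_8b * 1e9:.1f} ns")
```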

NEW PHOTOMETRIC PIPELINE TO EXPLORE TEMPORAL AND SPATIAL VARIABILITY WITH KMTNET DEEP-SOUTH OBSERVATIONS

  • Chang, Seo-Won;Byun, Yong-Ik;Shin, Min-Su;Yi, Hahn;Kim, Myung-Jin;Moon, Hong-Kyu;Choi, Young-Jun;Cha, Sang-Mok;Lee, Yongseok
    • Journal of The Korean Astronomical Society
    • /
    • v.51 no.5
    • /
    • pp.129-142
    • /
    • 2018
  • The DEEP-South (the Deep Ecliptic Patrol of the Southern Sky) photometric census of small Solar System bodies produces massive time-series data of variable, transient, or moving objects as a by-product. To fully investigate unexplored variable phenomena, we present an application of multi-aperture photometry and FastBit indexing techniques for faster access to a portion of the DEEP-South year-one data. Our new pipeline is designed to perform automated point source detection, robust high-precision photometry, and calibration of non-crowded fields that overlap with previously surveyed areas. In this paper, we show some examples of catalog-based variability searches to find new variable stars and to recover targeted asteroids. We discover 21 new periodic variables with periods ranging between 0.1 and 31 days, including four eclipsing binary systems (detached, over-contact, and ellipsoidal variables), one white dwarf/M dwarf pair candidate, and rotating variable stars. We also recover astrometry (< ${\pm}1-2$ arcsec level accuracy) and photometry of two targeted near-Earth asteroids, 2006 DZ169 and 1996 SK, along with the small- (~0.12 mag) and relatively large-amplitude (~0.5 mag) variations of their dominant rotational signals in the R-band.
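
As a hedged illustration of the kind of catalog-based period search mentioned above (not the actual DEEP-South code), the sketch below runs a Lomb-Scargle search over the 0.1-31 day range with astropy; the light-curve file name and column layout are assumptions.

```python
# Minimal period-search sketch; the input file and its columns are hypothetical.
import numpy as np
from astropy.timeseries import LombScargle

# toy light curve: time in days, magnitude, magnitude error
t, mag, mag_err = np.loadtxt("lightcurve.dat", unpack=True)  # hypothetical file

frequency, power = LombScargle(t, mag, mag_err).autopower(
    minimum_frequency=1.0 / 31.0,   # longest period searched: 31 days
    maximum_frequency=1.0 / 0.1)    # shortest period searched: 0.1 days

best_period = 1.0 / frequency[np.argmax(power)]
print(f"best candidate period: {best_period:.4f} days")
```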

QoS Guarantee in Partial Failure of Clustered VOD Server (클러스터 VOD 서버의 부분적 장애에서 QoS 보장)

  • Lee, Joa-Hyoung;Jung, In-Bum
    • The KIPS Transactions:PartC
    • /
    • v.16C no.3
    • /
    • pp.363-372
    • /
    • 2009
  • For large-scale VOD service, cluster servers are in the spotlight for their high performance and low cost. A cluster server usually consists of a front-end node and multiple back-end nodes. Although increasing the number of back-end nodes can provide more QoS streams to clients, the probability of a back-end node failure increases proportionally. Such a failure causes not only the interruption of all streaming services but also the loss of the current playing positions. In this paper, recovery mechanisms are studied to support uninterrupted streaming service when a back-end node enters a failed state. For an actual VOD service environment, we implement a cluster-based VOD server composed of general PCs and adopt parallel processing for MPEG movies. On the implemented VOD server, a video block recovery mechanism is designed based on parity algorithms. However, applying this basic technique without considering the architecture of a cluster-based VOD server causes a performance bottleneck in the internal network during recovery and also results in inefficient CPU usage of the back-end nodes. To address these problems, we propose a new failure recovery mechanism based on the pipeline computing concept.
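
To make the parity-based recovery idea concrete, here is a minimal, hypothetical sketch of XOR parity over a stripe of video blocks; it illustrates the general technique the abstract builds on, not the paper's pipelined mechanism.

```python
# Minimal sketch of parity-based block recovery; the block layout is assumed.
from functools import reduce

def parity_block(blocks: list[bytes]) -> bytes:
    """XOR a stripe of equal-sized video blocks into one parity block."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def recover_block(surviving_blocks: list[bytes], parity: bytes) -> bytes:
    """Rebuild the block lost on a failed back-end node from the survivors."""
    return parity_block(surviving_blocks + [parity])

# stripe distributed across three back-end nodes plus one parity block
stripe = [b"node0-data", b"node1-data", b"node2-data"]
parity = parity_block(stripe)
assert recover_block([stripe[0], stripe[2]], parity) == stripe[1]
```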

Memory Reduction Method of Radix-2^2 MDF IFFT for OFDM Communication Systems (OFDM 통신시스템을 위한 radix-2^2 MDF IFFT의 메모리 감소 기법)

  • Cho, Kyung-Ju
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.13 no.1
    • /
    • pp.42-47
    • /
    • 2020
  • In OFDM-based very high-speed communication systems, the FFT/IFFT processor should have low area and low power consumption as well as high throughput and low processing latency. Thus, radix-2^k MDF (multipath delay feedback) architectures, which adopt pipelining and parallel processing, are suitable. In the MDF architecture, the feedback memory, which increases in proportion to the input signal word length, accounts for a large area and power consumption. This paper presents a feedback memory size reduction method for a radix-2^2 MDF IFFT processor for OFDM applications. The proposed method focuses on reducing the feedback memory size in the first two stages of the MDF architecture, since the first two stages occupy about 75% of the total feedback memory. In OFDM transmission, the IFFT input signals are composed of modulated data together with pilot and null signals. In order to reduce the IFFT input word length, an integer mapping is proposed that generates mapped data composed of two signed integers corresponding to the modulated data and the pilot/null signals. Simulations show that the proposed method achieves a feedback memory reduction of up to 39% compared to the conventional approach.
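
The 75% figure follows directly from the standard delay-feedback FIFO sizing of N/2, N/4, ..., 1 words per stage. The back-of-the-envelope sketch below checks it for an assumed 4096-point transform (the transform size is an illustrative assumption, not taken from the paper).

```python
# Why the first two stages dominate the feedback memory in a delay-feedback
# FFT/IFFT; a 4096-point transform is assumed purely for illustration.
N = 4096                                          # assumed IFFT size
stage_mem = [N >> (s + 1) for s in range(N.bit_length() - 1)]  # N/2, N/4, ..., 1

total = sum(stage_mem)                            # = N - 1 words overall
first_two = sum(stage_mem[:2])                    # N/2 + N/4

print(f"total feedback memory: {total} words")
print(f"first two stages: {first_two} words "
      f"({100.0 * first_two / total:.1f}% of the total)")   # ~75%
```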

Design of Extended Real-time Data Pipeline System Architecture (확장형 실시간 데이터 파이프라인 시스템 아키텍처 설계)

  • Shin, Hoseung;Kang, Sungwon;Lee, Jihyun
    • Journal of KIISE
    • /
    • v.42 no.8
    • /
    • pp.1010-1021
    • /
    • 2015
  • Big data systems are widely used to collect large-scale log data, so it is very important for these systems to operate with a high level of performance. However, the current Hadoop-based big data system architecture has a problem in that its performance is low as a result of redundant processing. This paper solves this problem by improving the design of the Hadoop system architecture. The proposed architecture uses the batch-based data collection of the existing architecture in combination with a single processing method. A high level of performance can be achieved by analyzing the collected data directly in memory to avoid redundant processing. The proposed architecture guarantees system expandability, which is an advantage of using the Hadoop architecture. This paper confirms that the proposed architecture is approximately 30% to 35% faster in analyzing and processing data than existing architectures and that it is also extendable.
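
As a toy illustration of the single-pass, in-memory analysis idea (not the proposed architecture itself), the sketch below aggregates log records as they are collected instead of writing them out and re-reading them; the record format and the aggregation are hypothetical.

```python
# Toy sketch: analyze collected log records directly in memory in one pass.
from collections import Counter
from typing import Iterable

def collect_and_analyze(log_lines: Iterable[str]) -> Counter:
    """Aggregate records as they arrive; nothing is written out and re-read."""
    status_counts = Counter()
    for line in log_lines:
        status = line.split()[-1]      # hypothetical "... STATUS" record layout
        status_counts[status] += 1
    return status_counts

sample = ["GET /index 200", "GET /missing 404", "POST /api 200"]
print(collect_and_analyze(sample))     # Counter({'200': 2, '404': 1})
```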

Study for Operation Method of Underwater Cable and Pipeline Burying ROV Trencher using Barge and Its Application in Real Construction

  • Kim, Min-Gyu;Kang, Hyungjoo;Lee, Mun-Jik;Cho, Gun Rae;Li, Ji-Hong;Yoon, Tae-Sagm;Ju, Jaeheung;Kwak, Han-Wan
    • Journal of Ocean Engineering and Technology
    • /
    • v.34 no.5
    • /
    • pp.361-370
    • /
    • 2020
  • We developed a heavy-duty work-class ROV trencher named URI-T (Underwater Robot It's Trencher) that can conduct burial and maintenance tasks for underwater cables and small-diameter pipelines. Because of its dimensions (6.5 m × 5.0 m × 4.5 m, 20 t) and the tough working environment, it requires various supporting systems to perform burial tasks, including a dynamic positioning (DP) vessel, a launch and recovery system (LARS), an A-frame, and a winch. However, operating a DP vessel has disadvantages: it is expensive to rent and operate, and it is difficult to adjust the working schedule for some domestic coastal construction cases. In this paper, we propose a method that uses a barge instead of a DP vessel to avoid these disadvantages. Although burying cables and pipelines using a barge has lower working efficiency than using a DP vessel, it saves construction expenses and does not require a large crew. The proposed method was applied over two months to the water supply construction at Yokji-do, and the results were verified.

Analysis on Flexural Behavior of Spiral Steel Pipe Considering Residual Stress Developed by Pipe Manufacturing (조관에 의한 잔류 응력을 고려한 스파이럴 강관의 휨 거동 분석)

  • Kim, Kyuwon;Kim, Jeongsoo;Kang, Dongyoon;Kim, Moon Kyum
    • Journal of the Korean Institute of Gas
    • /
    • v.23 no.4
    • /
    • pp.65-73
    • /
    • 2019
  • Spiral steel pipes have been used more widely as structural members as well as transport pipelines because they can be manufactured continuously and are consequently more economical than conventional UOE pipes. As improved pipe manufacturing technology gives spiral pipes higher strength and larger diameters, they have recently been used as large-diameter, long-distance transport pipelines, and strain-based design is thus required to maintain the structural integrity and cost effectiveness of the spiral pipe. However, design codes for spiral pipes have not been completely established yet, and the structural behaviors of a spiral pipe are not clearly understood for strain-based design. In this paper, the effects of residual stresses caused by the spiral pipe manufacturing process on the flexural behavior of the pipe are investigated. Finite element analyses were conducted to estimate the residual stresses due to the manufacturing process for pipes with different forming angles, thicknesses, and strengths. The results were then used as initial conditions for flexural analyses of the pipes to numerically investigate their flexural behavior.

IMSNG: Automatic Data Reduction Pipeline gppy for heterogeneous telescopes

  • Paek, Gregory S.H.;Im, Myungshin;Chang, Seo-won;Choi, Changsu;Lim, Gu;Kim, Sophia;Jung, Mankeun;Hwang, Sungyong;Kim, Joonho;Sung, Hyun-il
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.46 no.2
    • /
    • pp.53.4-54
    • /
    • 2021
  • Although the era of very large telescopes has arrived, small telescopes still have advantages for fast follow-up and long-term monitoring observations. The Intensive Monitoring Survey of Nearby Galaxies (IMSNG) aims to understand the nature of supernovae (SNe) by catching their early light curves with a network of small telescopes, from 0.4 m to 1.0 m, around the world. To achieve the scientific goals with heterogeneous facilities, three factors are important. First, automated processing as soon as data are uploaded increases efficiency and shortens the turnaround time. Second, a transient search is necessary to handle newly emerged transients for fast follow-up observation. Finally, an integrated process for different telescopes gives homogeneous output, which eventually makes connection with the database easy. Here, we introduce the Python-based integrated pipeline 'gppy' for more than 10 facilities with various configurations, and its performance. The processing steps consist of image pre-processing, photometry, image alignment, image combination, combined-image photometry, and transient search. In the connected database, the homogeneous output is summarized and further analyzed to filter transient candidates with light curves. This talk will suggest future work to improve the performance and usability for other projects: the gravitational wave electromagnetic wave counterpart in Korea Observatory (GECKO) and the Small Telescope Network of Korea (SOMANGNET).
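
For orientation only, here is a schematic sketch of how the stages listed above could be chained per uploaded batch; the function names are placeholders and do not reflect the actual gppy API.

```python
# Schematic pipeline skeleton; every stage body is a placeholder stub.
from pathlib import Path

def preprocess(img: Path) -> Path:
    return img                              # placeholder: bias/dark/flat correction

def photometry(img: Path) -> None:
    pass                                    # placeholder: source detection + photometry

def align_and_combine(imgs: list[Path]) -> Path:
    return imgs[0]                          # placeholder: alignment and stacking

def search_transients(img: Path) -> list:
    return []                               # placeholder: transient candidate search

def run_pipeline(new_images: list[Path]) -> list:
    """Run the stages in the order the abstract lists them, per uploaded batch."""
    reduced = [preprocess(img) for img in new_images]
    for img in reduced:
        photometry(img)                     # per-frame photometry
    stacked = align_and_combine(reduced)    # deeper combined image
    photometry(stacked)                     # photometry on the combined image
    return search_transients(stacked)       # candidates feed the connected database
```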

Machine-assisted Semi-Simulation Model (MSSM): Predicting Galactic Baryonic Properties from Their Dark Matter Using A Machine Trained on Hydrodynamic Simulations

  • Jo, Yongseok;Kim, Ji-hoon
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.44 no.2
    • /
    • pp.55.3-55.3
    • /
    • 2019
  • We present a pipeline to estimate baryonic properties of a galaxy inside a dark matter (DM) halo in DM-only simulations using a machine trained on high-resolution hydrodynamic simulations. As an example, we use the IllustrisTNG hydrodynamic simulation of a $(75 h^{-1} \textrm{Mpc})^3$ volume to train our machine to predict e.g., stellar mass and star formation rate in a galaxy-sized halo based purely on its DM content. An extremely randomized tree (ERT) algorithm is used together with multiple novel improvements we introduce here such as a refined error function in machine training and two-stage learning. Aided by these improvements, our model demonstrates a significantly increased accuracy in predicting baryonic properties compared to prior attempts --- in other words, the machine better mimics IllustrisTNG's galaxy-halo correlation. By applying our machine to the MultiDark-Planck DM-only simulation of a large $(1 h^{-1} \textrm{Gpc})^3$ volume, we then validate the pipeline that rapidly generates a galaxy catalogue from a DM halo catalogue using the correlations the machine found in IllustrisTNG. We also compare our galaxy catalogue with the ones produced by popular semi-analytic models (SAMs). Our so-called machine-assisted semi-simulation model (MSSM) is shown to be largely compatible with SAMs, and may become a promising method to transplant the baryon physics of galaxy-scale hydrodynamic calculations onto a larger-volume DM-only run. We discuss the benefits that machine-based approaches like this entail, as well as suggestions to raise the scientific potential of such approaches.
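
As a minimal, hedged sketch of the core regression step, the code below fits an extremely randomized tree model mapping halo features to a baryonic property with scikit-learn; the features and data are toy placeholders, and the paper's refined error function and two-stage learning are not reproduced.

```python
# Toy ERT regression sketch: DM-halo features -> a baryonic property.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# toy features standing in for, e.g., log halo mass, concentration, spin
X = rng.normal(size=(n, 3))
y = 0.8 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(scale=0.1, size=n)  # toy "log M*"

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
ert = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"R^2 on held-out haloes: {ert.score(X_test, y_test):.3f}")
```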

A Study on the Prediction of Ship Collision Based on Semi-Supervised Learning (준지도 학습 기반 선박충돌 예측에 대한 연구)

  • Ho-June Seok;Seung Sim;Jeong-Hun Woo;Jun-Rae Cho;Deuk-Jae Cho;Jong-Hwa Baek;Jaeyong Jung
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2023.05a
    • /
    • pp.204-205
    • /
    • 2023
  • This study investigated a prediction model for issuing collision alarms for small fishing boats based on semi-supervised learning (SSL). Supervised learning (SL) requires a large amount of labeled data, but the labeling process takes considerable resources and time. This study used service data collected through a data pipeline linked to the 'intelligent maritime traffic information service' as well as data collected from real-sea experiments. The model accuracy was improved by learning jointly from the real-sea experiment data, labeled according to actual user satisfaction, and the unlabeled service data.
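
As a hedged sketch of the semi-supervised setup described above (not the study's actual model or data), the code below trains a self-training classifier on a few labeled samples plus many unlabeled ones marked with -1.

```python
# Toy SSL sketch: a few labeled samples plus many unlabeled samples (label = -1).
import numpy as np
from sklearn.semi_supervised import SelfTrainingClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# toy features standing in for, e.g., distance and time to closest approach
X = rng.normal(size=(1000, 2))
y_true = (X[:, 0] + X[:, 1] < 0).astype(int)       # 1 = collision alarm (toy rule)

y = np.full(1000, -1)                              # -1 marks unlabeled service data
labeled = rng.choice(1000, size=50, replace=False) # a few "experiment" labels
y[labeled] = y_true[labeled]

model = SelfTrainingClassifier(SVC(probability=True), threshold=0.9)
model.fit(X, y)                                    # learns from labeled + unlabeled
print(f"accuracy on all samples: {model.score(X, y_true):.3f}")
```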
