• Title/Summary/Keyword: 데이터 처리량 (data throughput)


Analysis of CO2 Emission Intensity per Industry using the Input-Output Tables 2003 (산업연관표(2003년)를 활용한 산업별 CO2 배출 원단위 분석)

  • Park, Pil-Ju;Kim, Mann-Young;Yi, Il-Seuk
    • Environmental and Resource Economics Review / v.18 no.2 / pp.279-309 / 2009
  • Greenhouse gas emissions must be forecast precisely in order to reduce emissions from industrial production processes. This study calculated the direct and indirect $CO_2$ emission intensities of 401 industries using the Input-Output Tables 2003 and statistical data on energy use. The findings carry some limitations: overseas data were used where domestic data were lacking, and further issues include oil distribution in the refinery sector, the need to re-examine carbon neutrality, and insufficient consideration of waste treatment. Nonetheless, the study is meaningful in that the direct and indirect $CO_2$ emission intensities of all 401 industries were calculated, and, for the first time in Korea, the effects of by-product gases (coke gas and gas from the steel industry), which attract worldwide interest, were considered from a zero-waste perspective. According to the results, typical industries with high indirect $CO_2$ emission intensity include crude steel making, ready-mixed concrete (remicon), steel wire rods and track rail, cast iron, and iron reinforcing rods and bar steel; these industries make products from raw materials produced in sectors whose own $CO_2$ emission intensity is high. Representative industries with high direct $CO_2$ emission intensity include cement, pig iron, lime and plaster products, and coal-based compounds; these industries extract raw ore from nature and refine it into raw materials used by other industries. The findings can be used to estimate target $CO_2$ reduction levels that reflect each sector's characteristics, to calculate the reduction potential of individual $CO_2$ abatement policies, to identify a firm's $CO_2$ emission level, and to set emission reduction targets. They can also be applied widely in fields such as the System of Integrated Environmental and Economic Accounting (SEEA) and Material Flow Analysis (MFA), which are current research topics in Korea. (An illustrative sketch of the intensity calculation follows this entry.)

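The direct-plus-indirect intensity computation described in the abstract above is conventionally carried out with the Leontief inverse of the input-output coefficient matrix. The sketch below only illustrates that standard calculation; the 3-sector matrix `A` and the direct-intensity vector `e_direct` are made-up values, not the 401-sector data used in the paper.

```python
import numpy as np

# Illustrative 3-sector input-output coefficient matrix (NOT the paper's 401-sector table).
# A[i, j]: input from sector i required per unit of output of sector j.
A = np.array([
    [0.10, 0.30, 0.05],
    [0.20, 0.10, 0.10],
    [0.05, 0.15, 0.20],
])

# Hypothetical direct CO2 emission intensities per unit of output of each sector.
e_direct = np.array([0.80, 0.30, 0.10])

# Leontief inverse: one unit of final demand in sector j induces outputs L[:, j]
# across all sectors, so the total (direct + indirect) intensity is e_direct @ L.
L = np.linalg.inv(np.eye(A.shape[0]) - A)
e_total = e_direct @ L
e_indirect = e_total - e_direct

print("total intensity   :", np.round(e_total, 3))
print("indirect intensity:", np.round(e_indirect, 3))
```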

A Design of PRESENT Crypto-Processor Supporting ECB/CBC/OFB/CTR Modes of Operation and Key Lengths of 80/128-bit (ECB/CBC/OFB/CTR 운영모드와 80/128-비트 키 길이를 지원하는 PRESENT 암호 프로세서 설계)

  • Kim, Ki-Bbeum;Cho, Wook-Lae;Shin, Kyung-Wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.6 / pp.1163-1170 / 2016
  • A hardware implementation of the ultra-lightweight block cipher PRESENT, standardized for lightweight cryptography in ISO/IEC 29192-2, is described. The PRESENT crypto-processor supports two key lengths, 80 and 128 bits, as well as four modes of operation: ECB, CBC, OFB, and CTR. It has an on-the-fly key scheduler with a master key register, so consecutive blocks of plaintext/ciphertext can be processed without reloading the master key. To achieve a lightweight implementation, the key scheduler was optimized to share circuits between the 80-bit and 128-bit key lengths. The round block was designed with a 64-bit data-path, so that one round transformation for encryption/decryption is processed per clock cycle. The crypto-processor was verified on a Virtex5 FPGA device. Synthesized with a $0.18{\mu}m$ CMOS cell library, it occupies 8,100 gate equivalents (GE), and the estimated throughput is about 908 Mbps at a maximum operating clock frequency of 454 MHz. (A software sketch of the supported modes of operation follows this entry.)
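The four modes of operation named in the abstract differ only in how the 64-bit block primitive is chained. The sketch below shows that chaining in software under the assumption of a placeholder `block_encrypt` core (a toy, non-invertible stand-in, so only the encryption direction is meaningful for ECB/CBC); it is not the paper's hardware design.

```python
import hashlib

BLOCK = 8  # PRESENT block size: 64 bits

def block_encrypt(key: bytes, block: bytes) -> bytes:
    """Toy, NON-invertible stand-in for the PRESENT-80/128 core, used only so the
    sketch runs; a real PRESENT implementation would plug in here."""
    return hashlib.sha256(key + block).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def ecb_encrypt(key, blocks):
    # ECB: every block is encrypted independently.
    return [block_encrypt(key, b) for b in blocks]

def cbc_encrypt(key, iv, blocks):
    # CBC: each plaintext block is XORed with the previous ciphertext block first.
    out, prev = [], iv
    for b in blocks:
        prev = block_encrypt(key, xor(b, prev))
        out.append(prev)
    return out

def ofb_encrypt(key, iv, blocks):
    # OFB: the cipher output is fed back to form a keystream (encrypt == decrypt).
    out, fb = [], iv
    for b in blocks:
        fb = block_encrypt(key, fb)
        out.append(xor(b, fb))
    return out

def ctr_encrypt(key, nonce, blocks):
    # CTR: a counter block is encrypted to produce the keystream; like OFB, only the
    # forward direction of the core is ever needed.
    out = []
    for i, b in enumerate(blocks):
        ctr = (int.from_bytes(nonce, "big") + i) % (1 << 64)
        out.append(xor(b, block_encrypt(key, ctr.to_bytes(BLOCK, "big"))))
    return out

key = bytes(10)                          # 80-bit all-zero key (toy value)
blocks = [b"ABCDEFGH", b"IJKLMNOP"]      # two 64-bit plaintext blocks
print(ctr_encrypt(key, bytes(BLOCK), blocks))
```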

Impact of Tropospheric Delays on the GPS Positioning with Double-difference Observables (대류권 지연이 이중차분법을 이용한 GPS 측위에 미치는 영향)

  • Hong, Chang-Ki
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.31 no.5 / pp.421-427 / 2013
  • In short-baseline GPS data processing it is generally assumed that tropospheric effects are removed by the double-differencing technique. High-accuracy positioning can then be obtained, because various error sources are eliminated and the number of unknowns in the adjustment computation is reduced. Consequently, short-baseline processing is widely used in fields such as deformation monitoring that require precise positioning. However, it is limited in achieving high positioning accuracy when the height difference between the reference and rover stations is significant. In this study, the effect of tropospheric delays on short-baseline solutions is analyzed as a function of the baseline orientation. GPS measurements including tropospheric effects and measurement noise are generated by simulation, and rover coordinates are then computed with the short-baseline processing technique. The residuals of the rover coordinates are analyzed to interpret the tropospheric effect on the positioning. The results show that the magnitude of the bias in the coordinate residuals increases with baseline length, at a computed rate of 0.07 cm per meter of baseline length. Therefore, tropospheric effects should be considered carefully in short-baseline data processing whenever a significant height difference between reference and rover is present. (A sketch of how double-difference observables are formed follows this entry.)
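For readers unfamiliar with double differencing, the sketch below shows how between-receiver, between-satellite double differences are formed and why a residual tropospheric term can survive when the two stations differ in height. The station and satellite values are hypothetical; this is not the simulation set-up used in the paper.

```python
def single_difference(obs_rover: dict, obs_ref: dict, sat: str) -> float:
    """Between-receiver single difference for one satellite (cancels the satellite clock error)."""
    return obs_rover[sat] - obs_ref[sat]

def double_difference(obs_rover: dict, obs_ref: dict, sat: str, ref_sat: str) -> float:
    """Double difference w.r.t. a reference satellite (also cancels receiver clock errors).

    Anything NOT common to both stations -- e.g. the differential tropospheric delay
    caused by a large height offset between reference and rover -- remains in this value.
    """
    return (single_difference(obs_rover, obs_ref, sat)
            - single_difference(obs_rover, obs_ref, ref_sat))

# Hypothetical pseudorange-like observables in metres for three satellites (illustration only).
rover = {"G01": 20_000_123.41, "G07": 21_500_456.02, "G12": 22_000_789.77}
ref   = {"G01": 20_000_120.15, "G07": 21_500_452.88, "G12": 22_000_786.10}

dd = {sat: double_difference(rover, ref, sat, ref_sat="G01") for sat in ("G07", "G12")}
print(dd)
```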

Design and Analysis of Efficient Operation Sequencing in FMC Robot Using Simulation and Sequential Patterns (시뮬레이션과 순차 패턴을 이용한 FMC 로봇의 효율적 작업 순서 설계 및 분석)

  • Kim, Sun-Gil;Kim, Youn-Jin;Lee, Hong-Chul
    • Journal of the Korea Academia-Industrial cooperation Society / v.11 no.6 / pp.2021-2029 / 2010
  • This paper suggests a method for designing and analyzing an FMC robot's dispatching rule using simulation and sequential patterns. First, the FMC is built in simulation, and the signals with which the facilities call the robot are extracted and saved as logs. Second, the robot's optimal path is built using sequential pattern mining on the analyzed logs and on the relationship between machine and robot actions. Last, the method is applied to company A's manufacturing line to verify its performance. As a result of applying the new dispatching rule in the FMC, total throughput increases and total flow time decreases, owing to reduced material loss time and increased robot utilization. Furthermore, because the method can be applied to any manufacturing plant using simulation, it can contribute to improving overall FMC efficiency. (A rough sketch of sequential pattern counting over such call logs follows this entry.)
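As a rough illustration of mining patterns from machine-to-robot call logs, the snippet below counts frequent contiguous call sequences with a simple sliding window. The log contents and the `length` and `min_support` values are hypothetical, and this is a simplification of sequential pattern mining rather than the authors' procedure.

```python
from collections import Counter
from itertools import islice

# Hypothetical log: the machines that called the robot, in time order.
call_log = ["M1", "M3", "M2", "M1", "M3", "M2", "M4", "M1", "M3", "M2"]

def frequent_sequences(log, length=3, min_support=2):
    """Count contiguous call sequences of a given length and keep the frequent ones.

    This sliding-window count is a simplification of sequential pattern mining,
    used here only to show how recurring service orders emerge from the log."""
    windows = zip(*(islice(log, i, None) for i in range(length)))
    counts = Counter(windows)
    return {seq: n for seq, n in counts.items() if n >= min_support}

# Frequent 3-step sequences suggest a recurring order in which the robot is called,
# which can then inform its dispatching rule.
print(frequent_sequences(call_log, length=3, min_support=2))
```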

Energy-Efficient Routing Protocol based on Interference Awareness for Transmission of Delay-Sensitive Data in Multi-Hop RF Energy Harvesting Networks (다중 홉 RF 에너지 하베스팅 네트워크에서 지연에 민감한 데이터 전송을 위한 간섭 인지 기반 에너지 효율적인 라우팅 프로토콜)

  • Kim, Hyun-Tae;Ra, In-Ho
    • The Journal of the Korea Contents Association / v.18 no.3 / pp.611-625 / 2018
  • With innovative advances in wireless communication technology, much research on maximally extending network lifetime by using energy harvesting has been carried out in areas such as network resource optimization, QoS-guaranteed transmission, and energy-intelligent routing. As is well known, it is very hard to guarantee end-to-end network delay in multi-hop RF (radio frequency) energy harvesting wireless networks because of uncertainty in the amount of harvested energy. To minimize end-to-end delay in such networks, this paper proposes an interference-aware, energy-efficient routing metric and protocol that account for the various delays caused by co-channel interference, energy harvesting time, and queuing at relay nodes. The proposed method maximizes end-to-end throughput by avoiding the packet congestion that causes load imbalance, reducing waiting time due to energy exhaustion, and restraining delay from co-channel interference. Simulation results with the ns-3 simulator show that the proposed method outperforms existing methods in throughput, end-to-end delay, and energy consumption. (A toy illustration of such a delay-based link metric combined with shortest-path routing follows this entry.)
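To make the idea of a delay-based, interference-aware link metric concrete, the toy sketch below combines transmission, queuing, harvesting-wait, and interference terms into a per-link cost and runs plain Dijkstra over it. The cost components, weights, and topology are invented for illustration and do not reproduce the metric proposed in the paper.

```python
import heapq

def link_cost(tx_delay, queue_delay, harvest_wait, interference_penalty):
    """Toy per-link cost: the time a packet is expected to spend crossing this hop.

    The terms stand in for the delays discussed above -- transmission time, queuing at
    the relay, waiting for harvested energy, and backoff caused by co-channel
    interference. Equal weighting is an arbitrary illustrative choice."""
    return tx_delay + queue_delay + harvest_wait + interference_penalty

def shortest_path(graph, src, dst):
    """Plain Dijkstra over per-link costs (graph: node -> {neighbor: cost})."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, c in graph[u].items():
            nd = d + c
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# Hypothetical 4-node multi-hop topology; each cost comes from link_cost().
graph = {
    "S": {"A": link_cost(1.0, 0.5, 2.0, 0.3), "B": link_cost(1.0, 0.2, 0.5, 1.5)},
    "A": {"D": link_cost(1.0, 0.4, 0.8, 0.2)},
    "B": {"D": link_cost(1.0, 1.2, 0.1, 0.4)},
    "D": {},
}
print(shortest_path(graph, "S", "D"))   # the low-delay route S -> B -> D is chosen
```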

Design and Implementation of ASTERIX Parsing Module Based on Pattern Matching for Air Traffic Control Display System (항공관제용 현시시스템을 위한 패턴매칭 기반의 ASTERIX 파싱 모듈 설계 및 구현)

  • Kim, Kanghee;Kim, Hojoong;Yin, Run Dong;Choi, SangBang
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.3 / pp.89-101 / 2014
  • Recently, as domestic air traffic has increased dramatically, the need for ATC (air traffic control) systems has grown for safe and efficient ATM (air traffic management). For smooth ATC it is especially important to guarantee the performance of the display system, which must show the complete air traffic situation in the FIR (Flight Information Region) without additional latency. In this paper, we design an ASTERIX (All purpose STructured Eurocontrol suRveillance Information eXchange) parsing module that promotes stable ATC by minimizing system load, i.e., by reducing the overhead that arises when parsing ASTERIX messages. Our pattern-matching-based parsing module creates patterns by analyzing received ASTERIX data and handles subsequent ASTERIX data with the pre-defined procedure associated with each pattern. Unlike existing parsing modules that contain unnecessary parsing steps, it minimizes display errors by rapidly extracting only the information needed for display, enabling controllers to operate stable ATC. A comparison with an existing general bit-level ASTERIX parsing module shows that the pattern-matching-based module has shorter processing delay, higher throughput, and lower CPU usage. (A sketch of the pattern-caching idea follows this entry.)
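One plausible reading of the pattern-matching approach is to key a cached field layout on each record's FSPEC so that repeated records skip re-deriving which data items are present. The sketch below illustrates only that caching idea; the UAP fragment is a hypothetical example and no actual data-item decoding is performed.

```python
from functools import lru_cache

# Hypothetical UAP (User Application Profile) fragment: FSPEC bit position (FRN) -> data item.
UAP = {1: "I048/010", 2: "I048/140", 3: "I048/020", 4: "I048/040",
       5: "I048/070", 6: "I048/090", 7: "I048/130"}

def read_fspec(record: bytes) -> bytes:
    """Read the FSPEC octets at the start of a record; the LSB of each octet is the FX
    (field extension) bit that signals another FSPEC octet follows."""
    i = 0
    while record[i] & 0x01:
        i += 1
    return record[:i + 1]

@lru_cache(maxsize=None)
def fields_for_fspec(fspec: bytes) -> tuple:
    """Derive the ordered list of present data items once per distinct FSPEC ("pattern").

    Thanks to the cache, every later record carrying an already-seen FSPEC skips this
    analysis entirely, which is the core of the pattern-matching idea."""
    present = []
    for octet_idx, octet in enumerate(fspec):
        for bit in range(7):                 # bits 8..2 of each octet carry presence flags
            if octet & (0x80 >> bit):
                present.append(octet_idx * 7 + bit + 1)
    return tuple(UAP.get(frn, f"FRN{frn}") for frn in present)

def parse_record(record: bytes) -> tuple:
    return fields_for_fspec(read_fspec(record))

# A record whose single-octet FSPEC announces items 1-4, 6 and 7 (payload bytes omitted).
print(parse_record(bytes([0b11110110])))
```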

Experiment and Simulation for Evaluation of Jena Storage Plug-in Considering Hierarchical Structure (계층 구조를 고려한 Jena Plug-in 저장소의 평가를 위한 실험 및 시뮬레이션)

  • Shin, Hee-Young;Jeong, Dong-Won;Baik, Doo-Kwon
    • Journal of the Korea Society for Simulation / v.17 no.2 / pp.31-47 / 2008
  • Since OWL (Web Ontology Language) was selected as a standard ontology description language by the W3C, many ontologies have been built and developed in OWL. Jena, developed by HP as an Application Programming Interface (API), provides various APIs for developing inference engines as well as storages, and it is widely used for system development. However, the Jena2 storage model stores most OWL data in a single table, which is not acceptable, and it shows low processing performance for large ontology data sets. Above all, the Jena2 storage model does not consider the hierarchical structure of classes and properties, and its query processing over hierarchies is slow because of the many join operations required. To solve these issues, this paper proposes a relational database model for OWL ontologies. The proposed model semantically classifies and stores information such as classes, properties, and instances, and it improves query processing performance by managing hierarchical information in a separate table. The paper also describes the implementation, the experiments and evaluation, and a comparative analysis of the results, which show that the proposal performs markedly better than Jena2. (An illustrative sketch of keeping hierarchy information in a separate table follows this entry.)

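The gain from keeping hierarchy information in a separate table can be illustrated with a small relational sketch: if the subclass hierarchy is stored as a pre-expanded closure table, a hierarchical instance query becomes a single join. The schema and data below are illustrative assumptions, not the schema proposed in the paper.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Separate tables for classes, instances, and the pre-expanded class hierarchy, in the
# spirit of the model described above; the concrete schema here is only illustrative.
cur.executescript("""
CREATE TABLE class         (uri TEXT PRIMARY KEY);
CREATE TABLE instance_of   (instance TEXT, class_uri TEXT);
-- transitive closure of rdfs:subClassOf, so hierarchy queries need no recursive joins
CREATE TABLE class_closure (ancestor TEXT, descendant TEXT);
""")

cur.executemany("INSERT INTO class VALUES (?)",
                [("Animal",), ("Mammal",), ("Dog",)])
cur.executemany("INSERT INTO class_closure VALUES (?, ?)",
                [("Animal", "Animal"), ("Animal", "Mammal"), ("Animal", "Dog"),
                 ("Mammal", "Mammal"), ("Mammal", "Dog"), ("Dog", "Dog")])
cur.executemany("INSERT INTO instance_of VALUES (?, ?)",
                [("rex", "Dog"), ("flipper", "Mammal")])

# "All instances of Animal, including instances of its subclasses" is a single join:
cur.execute("""
SELECT i.instance
FROM instance_of AS i
JOIN class_closure AS c ON c.descendant = i.class_uri
WHERE c.ancestor = ?
""", ("Animal",))
print(cur.fetchall())   # both 'rex' and 'flipper' are returned
```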

Application of Side Scan Sonar to Disposed Material Analysis at the Bottom of Coastal Water and River (해저 및 하저 폐기물의 분석을 위한 양방향음파탐사기의 적용)

  • 안도경;이중우
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2002.11a / pp.147-153 / 2002
  • Owing to population growth and industrial development in coastal cities, the need for effective control of the wastes discharged into coastal waters and rivers has increased greatly. The amount of material disposed of in these waters has grown rapidly, and it is necessary to keep track of it in order to keep the water clean. Investigations and research related to water quality in this region have been conducted continuously, but systematic surveys of the wastes disposed of on the bottom have been neglected or minor. In this study we surveyed the distribution of disposed waste on the bottom of coastal waters and a river from scanned images. The intensity of sound received from the sea floor by the side scan sonar tow vehicle provides information on the general distribution and characteristics of the superficial wastes. The port and starboard scanned images, produced by a transducer on a tow fish connected by tow cable to a tug boat, cover a swath of 22 m∼112 m per side, or a total band of 44 m∼224 m. All data are displayed in real time on a high-resolution color display (1280 ${\times}$ 1024 pixels) together with position information from DGPS. From the field measurements and analysis of the recorded images, we could map the location and distribution of bottom disposals. Furthermore, we built a database system that can serve as a basis for planning a waste reception and process control system. (A sketch of the basic slant-range geometry used in such imaging follows this entry.)

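A basic step in turning raw side scan returns into plan-view imagery is slant-range correction. The sketch below shows only that flat-seabed geometry with made-up numbers; it is not taken from the survey processing described above.

```python
import math

def ground_range(slant_range_m: float, towfish_altitude_m: float) -> float:
    """Convert a slant range measured by the sonar into across-track ground range,
    assuming a flat seabed below the towfish (illustrative geometry only)."""
    if slant_range_m <= towfish_altitude_m:
        return 0.0                      # return is at or above the nadir point
    return math.sqrt(slant_range_m ** 2 - towfish_altitude_m ** 2)

# Example: a return at 30 m slant range with the towfish 10 m above the seabed.
print(round(ground_range(30.0, 10.0), 1))   # about 28.3 m across-track
```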

Structural Segmentation for 3-D Brain Image by Intensity Coherence Enhancement and Classification (명암도 응집성 강화 및 분류를 통한 3차원 뇌 영상 구조적 분할)

  • Kim, Min-Jeong;Lee, Joung-Min;Kim, Myoung-Hee
    • The KIPS Transactions:PartA / v.13A no.5 s.102 / pp.465-472 / 2006
  • Recently, many image segmentation methods have been suggested for extracting human organs or disease-affected areas from huge medical image datasets. However, images of organs such as the brain, which contain multiple structures with ambiguous structural borders, are difficult to segment structurally. To address this problem, clustering techniques that classify voxels into a finite number of clusters are often employed; their drawback is sensitivity to noise, since they operate voxel by voxel. Applying an image-enhancement method to minimize the influence of noise and to sharpen borders therefore allows more robust structural segmentation. This research proposes an efficient structural segmentation method based on filtering followed by clustering to extract detailed structures such as white matter, gray matter, and cerebrospinal fluid from brain MR images. First, coherence-enhancing diffusion filtering is adopted to sharpen the borders between structures and to reduce noise within them. Fuzzy c-means clustering is then applied to the enhanced images, performing structural segmentation by assigning the corresponding cluster index to the structure containing each voxel. Compared with existing approaches that cluster after Gaussian or general anisotropic diffusion filtering, the suggested method showed higher accuracy, measured by agreement with manual segmentation results. Moreover, by providing fine segmentation of border areas with reproducible results and minimal manual work, it offers efficient diagnostic support for morphological abnormalities of the brain. (A compact sketch of the fuzzy c-means update used in the classification step follows this entry.)
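The classification step named above is fuzzy c-means. The sketch below implements the standard FCM membership and centroid updates on a toy 1-D intensity array standing in for filtered MR voxels; the coherence-enhancing diffusion prefilter and all 3-D handling are omitted, and the cluster count and data are illustrative.

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=3, m=2.0, n_iter=50, seed=0):
    """Standard fuzzy c-means on a flat intensity array x (shape: [n_voxels])."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    u = rng.random((x.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)                   # memberships sum to 1 per voxel
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]  # membership-weighted centroids
        dist = np.abs(x - centers.T) + 1e-12            # voxel-to-centre distances
        inv = dist ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)        # standard membership update
    return u, centers.ravel()

# Toy 1-D "intensities" standing in for filtered MR voxels (three tissue-like modes).
rng = np.random.default_rng(1)
voxels = np.concatenate([rng.normal(mu, 5.0, 200) for mu in (30.0, 90.0, 150.0)])

u, centers = fuzzy_c_means(voxels, n_clusters=3)
labels = u.argmax(axis=1)        # assign each voxel to its highest-membership cluster
print(np.sort(centers))          # should land near the three modes
```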

A Study on Field Seismic Data Processing using Migration Velocity Analysis (MVA) for Depth-domain Velocity Model Building (심도영역 속도모델 구축을 위한 구조보정 속도분석(MVA) 기술의 탄성파 현장자료 적용성 연구)

  • Son, Woohyun;Kim, Byoung-yeop
    • Geophysics and Geophysical Exploration / v.22 no.4 / pp.225-238 / 2019
  • Migration velocity analysis (MVA) for building optimal depth-domain velocities in seismic imaging was applied to marine long-offset multi-channel data, and the effectiveness of the MVA approach was demonstrated in combination with conventional data processing procedures. Time-domain images generated by a conventional time-processing scheme have so far been considered sufficient for seismic stratigraphic interpretation. However, when the purpose of seismic imaging shifts to hydrocarbon exploration, especially geologic modeling of oil and gas play or lead areas, drilling prognosis, and in-place hydrocarbon volume estimation, the seismic images should be converted to the depth domain, or depth processing should be applied in the processing phase. CMP-based velocity analysis, which relies on several approximations in the data domain, inherently contains errors and thus high uncertainties. The MVA, on the other hand, provides efficient and approximately true-scale (in depth) images even when no well-log data are available. In this study, marine long-offset multi-channel seismic data were optimally processed in the time domain to establish the best-qualified dataset for iterative MVA. The depth-domain velocity profile was then updated several times, and the final velocity-in-depth was used to generate depth images (CRP gathers and stacks), which were compared with the images obtained from the velocity-in-time. The results confirm that the depth-domain results are more reasonable than the time-domain results. The spurious local minima that can occur during full waveform inversion can also be reduced when the MVA result is used as the initial velocity model. (A minimal sketch of vertical time-to-depth conversion follows this entry.)
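The difference between time-domain and depth-domain images ultimately rests on converting two-way traveltime to depth with a velocity model. The minimal 1-D vertical-stretch sketch below illustrates only that conversion with hypothetical interval velocities; actual MVA and depth migration handle lateral velocity variation and are far more involved.

```python
import numpy as np

def time_to_depth(twt_s, interval_velocity_ms):
    """1-D vertical time-to-depth conversion.

    Each interval contributes a thickness of v * dt / 2 (dt is the two-way-time step,
    so only half of it is one-way travel). Real depth imaging/MVA also handles lateral
    velocity variation, which this sketch ignores."""
    twt_s = np.asarray(twt_s, dtype=float)
    v = np.asarray(interval_velocity_ms, dtype=float)
    dt = np.diff(twt_s, prepend=0.0)
    return np.cumsum(v * dt / 2.0)

# Hypothetical two-way times (s) and interval velocities (m/s) down a single trace.
twt = [0.2, 0.5, 0.9, 1.4]
vint = [1500.0, 1800.0, 2200.0, 2600.0]
print(time_to_depth(twt, vint))   # cumulative depth (m) to each interface
```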