• Title/Summary/Keyword: structural design optimization (구조 설계 최적화)


Studies on Miniaturization and Notched Wi-Fi Bandwidth for UWB Antenna Using a Wide Radiating Slot (넓은 방사 슬롯을 이용한 초광대역 안테나의 소형화와 Wi-Fi 대역의 노치에 관한 연구)

  • Beom, Kyeong-Hwa;Kim, Ki-Chan;Jo, Se-Young;Ko, Young-Ho
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.22 no.2
    • /
    • pp.265-274
    • /
    • 2011
  • In this paper, the miniaturization of a wide-radiating-slot antenna for ultra-wideband (UWB) technologies is studied, together with a notch structure that prevents interference between UWB systems and existing wireless systems using the IEEE 802.11 a/n Wi-Fi service. By shortening the wide slot from $\lambda/2$ to $\lambda/4$ at the resonant frequency, the proposed antenna is reduced in size by 72 % compared with the conventional antenna, and an optimized T-shaped CPW-fed stub satisfies the 3.0~11.8 GHz UWB bandwidth. A 2nd-order Hilbert-curve slot line formed in the patch area of the stub then rejects the 4.9~5.6 GHz band centered at 5 GHz. The designed antenna, fabricated on FR4-epoxy, has dimensions of $20{\times}15\;mm^2$. Measurements show a return loss below -10 dB over 3.2~11.8 GHz except in the Wi-Fi band, a linear phase characteristic, a stable group delay, and omnidirectional radiation patterns.
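As a rough sanity check on the slot dimensions described above (not the authors' design procedure), a quarter-wave slot length can be estimated from the frequency and an assumed effective permittivity for FR4; the 5 GHz figure below is simply the notch-band center quoted in the abstract.

```python
# Back-of-the-envelope quarter-wave slot length estimate. The effective
# permittivity model (eps_eff = (eps_r + 1) / 2) and eps_r = 4.4 for FR4 are
# assumptions for illustration, not values from the paper.
C0 = 299_792_458.0  # speed of light in vacuum [m/s]

def quarter_wave_slot_length_mm(freq_hz: float, eps_r: float = 4.4) -> float:
    """Approximate physical length of a quarter-wavelength slot, in millimetres."""
    eps_eff = (eps_r + 1.0) / 2.0                      # crude slot-line approximation
    guided_wavelength = C0 / (freq_hz * eps_eff ** 0.5)
    return guided_wavelength / 4.0 * 1e3

print(f"{quarter_wave_slot_length_mm(5.0e9):.1f} mm")  # ~9.1 mm near the 5 GHz notch
```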

ASIC Design of Lifting Processor for Motion JPEG2000 (Motion JPEG2000을 위한 리프팅 프로세서의 ASIC 설계)

  • Seo Young-Ho;Kim Dong-Wook
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.7C
    • /
    • pp.647-657
    • /
    • 2005
  • In this paper, we propose a new lifting architecture for JPEG2000 and implement it as an ASIC. Exploiting the fact that lifting repeats the same arithmetic structure, we design a new cell that performs the unit lifting calculation and compose the whole lifting operation by expanding it. After analyzing the operational sequence of the lifting arithmetic in detail and imposing causality for hardware implementation, the unit cell is optimized. A new lifting kernel is organized by simply expanding the unit cell, and a lifting processor for Motion JPEG2000 is implemented using it. The implemented lifting kernel accommodates tile sizes up to 1024$\times$1024 and supports both lossy compression with the (9,7) filter and lossless compression with the (5,3) filter. It also has the same output rate as its input rate and can continuously output the four types of wavelet coefficients (LL, LH, HL, HH) at the same time. The lifting processor was carried through an ASIC flow using a SAMSUNG 0.35$\mu$m CMOS library; it occupies about 90,000 gates and operates stably at about 150MHz, with some variation depending on the macro cell used for the multiplier. Finally, its performance is assessed in comparison with previous research and commercial IPs.
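For readers unfamiliar with lifting, the sketch below shows the predict/update steps of the reversible (5,3) lifting transform in plain Python. It illustrates the repeated unit calculation that the paper maps onto a hardware cell; it is not the paper's ASIC architecture.

```python
import numpy as np

def lifting_53_forward(x):
    """One level of the reversible LeGall (5,3) lifting transform (1-D).

    A plain software illustration of the predict and update steps, with
    symmetric boundary extension; not the hardware-optimized unit cell
    described in the paper.
    """
    x = np.asarray(x, dtype=np.int64)
    n = len(x)

    def x_at(i):                       # whole-sample mirror extension of x
        if i < 0:
            i = -i
        if i >= n:
            i = 2 * (n - 1) - i
        return int(x[i])

    # Predict step: detail (high-pass) coefficients at odd positions.
    d = [x_at(2 * k + 1) - ((x_at(2 * k) + x_at(2 * k + 2)) >> 1)
         for k in range(n // 2)]

    def d_at(k):                       # mirror extension of the detail signal
        if k < 0:
            k = -k - 1
        if k >= len(d):
            k = 2 * len(d) - k - 1
        return d[k]

    # Update step: approximation (low-pass) coefficients at even positions.
    s = [x_at(2 * k) + ((d_at(k - 1) + d_at(k) + 2) >> 2)
         for k in range((n + 1) // 2)]
    return np.array(s), np.array(d)

low, high = lifting_53_forward([10, 12, 15, 20, 26, 33, 41, 50])
print(low, high)   # low-pass and high-pass subbands
```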

Applicability Estimation of Ballast Non-exchange-type Quick-hardening Track Using a Layer Separation Pouring Method (층 분리주입을 이용한 도상자갈 무교환방식 급속경화궤도의 적용성 평가)

  • Lee, Il Wha;Jung, Young Ho;Lee, Min Soo
    • Journal of the Korean Society for Railway
    • /
    • v.18 no.6
    • /
    • pp.543-551
    • /
    • 2015
  • Quick-hardening track (QHT) is a construction method used to convert old ballast track to concrete track. Sufficient construction time is critical, as the work must be done during operational breaks at night, and most of that time is spent exchanging the ballast layer. If a ballast non-exchange type of quick-hardening track can be applied, it is far more effective in reducing construction time and cost. In this paper, pouring materials with high permeability are suggested, and a construction method involving a layer-separation pouring process that considers the void condition is introduced in order to develop a ballast non-exchange type of QHT. The separate pouring method can secure the required strength because optimized materials are poured into the upper layer and the lower layer according to the void ratio condition of each. To support this process, a rheology analysis was conducted for the design of the pouring materials according to aggregate size, aggregate distribution, void ratio, void size, tortuosity, and permeability. A polymer series was used as the pouring material of the lower layer to secure void-filling capacity and adhesion to the fine-grained layer, while magnesium-phosphate ceramic (MPC) was used as the pouring material of the upper layer to secure void-filling capacity and adhesion to the coarse-grained layer. Mechanical tests of the materials showed satisfactory performance, comparable to that of the existing quick-hardening track.
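Purely to illustrate how void ratio and aggregate size drive permeability (the paper's own rheology and permeability model is not reproduced here), a textbook Kozeny-Carman estimate might look like this:

```python
def kozeny_carman_permeability(porosity: float, d_particle_m: float,
                               kozeny_const: float = 180.0) -> float:
    """Intrinsic permeability [m^2] from the Kozeny-Carman relation.

    k = phi^3 * d^2 / (C * (1 - phi)^2). A standard textbook relation used
    only to show the dependence on void fraction and grain size; the input
    values below are illustrative, not from the paper.
    """
    phi = porosity
    return phi ** 3 * d_particle_m ** 2 / (kozeny_const * (1.0 - phi) ** 2)

# Example: ballast-sized aggregate (~30 mm) with 40 % voids (illustrative values).
print(f"{kozeny_carman_permeability(0.40, 0.030):.2e} m^2")
```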

A Study on the Element Technologies in Flame Arrester of End Line (선박의 엔드라인 폭연방지기의 요소기술에 관한 연구)

  • Pham, Minh-Ngoc;Choi, Min-Seon;Kim, Bu-Gi
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.25 no.4
    • /
    • pp.468-475
    • /
    • 2019
  • An end-line flame arrester allows free venting in combination with flame protection for vertical vent applications. End-line flame arresters are employed in various fields, especially in shipping. In flame arresters, springs are essential parts because the spring load and the spring's elasticity determine the hood opening moment. In addition, the spring has to work under high-temperature conditions because of the burning gas flame. Therefore, it is necessary to analyze the mechanical load and elasticity of the spring when the flame starts to appear. Based on simulations of the working process of a specific end-line flame arrester, a thermal and structural analysis of the spring is performed. A three-dimensional model of the burned spring is built using computational fluid dynamics (CFD) simulation, and the results of the CFD analysis are fed into a finite element method simulation to analyze the spring structure. The research team focused on three cases of spring loads, 43, 93, and 56 kg, each at 150 mm of spring deflection. The spring load was found to be reduced by 10 kg after 5 min under a $1,000^{\circ}C$ heat condition. The simulation results can be used to predict and estimate the spring's load and elasticity as the burning time varies. Moreover, the obtained outcome can provide the industry with references for optimizing the design of the spring as well as that of the flame arrester.
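Using only the figures quoted in the abstract, a simple linear spring-rate calculation (an illustrative check, not the paper's CFD/FEM analysis) looks like this:

```python
# Linear spring-rate arithmetic from the quoted figures: loads at 150 mm
# deflection and an approximately 10 kgf load drop after 5 min at 1,000 degC.
G = 9.81  # gravitational acceleration [m/s^2], to convert kgf to N

def spring_rate_n_per_mm(load_kgf: float, deflection_mm: float) -> float:
    """Linear spring rate k = F / x, in N/mm."""
    return load_kgf * G / deflection_mm

for load_kgf in (43.0, 56.0, 93.0):
    k = spring_rate_n_per_mm(load_kgf, 150.0)
    loss_pct = 10.0 / load_kgf * 100.0   # relative load loss after heating
    print(f"{load_kgf:5.1f} kgf -> k = {k:.2f} N/mm, heat-induced load loss ~{loss_pct:.0f} %")
```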

A 13b 100MS/s 0.70㎟ 45nm CMOS ADC for IF-Domain Signal Processing Systems (IF 대역 신호처리 시스템 응용을 위한 13비트 100MS/s 0.70㎟ 45nm CMOS ADC)

  • Park, Jun-Sang;An, Tai-Ji;Ahn, Gil-Cho;Lee, Mun-Kyo;Go, Min-Ho;Lee, Seung-Hoon
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.3
    • /
    • pp.46-55
    • /
    • 2016
  • This work proposes a 13b 100MS/s 45nm CMOS ADC with a high dynamic performance for IF-domain high-speed signal processing systems, based on a four-step pipeline architecture to optimize the operating specifications. The SHA employs a wideband high-speed sampling network to properly process high-frequency input signals exceeding the sampling frequency. The SHA and MDACs adopt a two-stage amplifier with a gain-boosting technique to obtain the required high DC gain and wide signal-swing range, while the amplifier and bias circuits use the same unit-size devices repeatedly to minimize device mismatch. Furthermore, a separate analog power supply voltage for the on-chip current and voltage references minimizes performance degradation caused by undesired noise and interference from adjacent functional blocks during high-speed operation. The proposed ADC occupies an active die area of $0.70mm^2$, based on various process-insensitive layout techniques that minimize the effects of physical process imperfections. The prototype ADC in a 45nm CMOS demonstrates a measured DNL and INL within 0.77LSB and 1.57LSB, with a maximum SNDR and SFDR of 64.2dB and 78.4dB at 100MS/s, respectively. The ADC is implemented with long-channel devices rather than the minimum channel-length devices available in this CMOS technology, in order to process the wide input range of $2.0V_{PP}$ required by the system and to obtain a high dynamic performance in IF-domain input signal bands. The ADC consumes 425.0mW with a single analog supply voltage of 2.5V and two digital supply voltages of 2.5V and 1.1V.
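From the quoted measurements, standard converter metrics such as the effective number of bits (ENOB) and the Walden figure of merit can be derived with the usual conversion formulas. The FoM value is not stated in the abstract; it is simply computed here from the quoted power, SNDR, and sampling rate as an illustration.

```python
def enob(sndr_db: float) -> float:
    """Effective number of bits: ENOB = (SNDR - 1.76 dB) / 6.02 dB."""
    return (sndr_db - 1.76) / 6.02

def fom_walden_pj(power_w: float, sndr_db: float, fs_hz: float) -> float:
    """Walden figure of merit, P / (2^ENOB * fs), in pJ per conversion step."""
    return power_w / (2 ** enob(sndr_db) * fs_hz) * 1e12

# Figures quoted in the abstract: 64.2 dB SNDR, 100 MS/s, 425 mW.
print(f"ENOB ~ {enob(64.2):.2f} bits")                   # ~10.4 bits
print(f"FoM  ~ {fom_walden_pj(0.425, 64.2, 100e6):.1f} pJ/step")
```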

A review on the design requirement of temperature in high-level nuclear waste disposal system: based on bentonite buffer (고준위폐기물처분시스템 설계 제한온도 설정에 관한 기술현황 분석: 벤토나이트 완충재를 중심으로)

  • Kim, Jin-Seop;Cho, Won-Jin;Park, Seunghun;Kim, Geon-Young;Baik, Min-Hoon
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.21 no.5
    • /
    • pp.587-609
    • /
    • 2019
  • This paper reviews the short- and long-term stability of bentonite, the favored buffer material in geological repositories for high-level waste, together with alternative buffer design concepts intended to mitigate the thermal load from the decay heat of spent fuel (SF) and further increase disposal efficiency. It is generally reported that irreversible changes in structure, hydraulic behavior, and swelling capacity are produced by temperature increase and vapor flow between $150{\sim}250^{\circ}C$. Provided that the maximum temperature of the bentonite remains below $150^{\circ}C$, however, the effects of temperature on its material, structural, and mineralogical stability appear to be minor. The maximum temperature in the disposal system constrains the amount of waste that can be disposed of per unit area and is therefore an important design parameter influencing the availability of a disposal site. Thus, it is necessary to identify the effects of high temperature on buffer performance and to consider a thermal constraint greater than $100^{\circ}C$. In addition, the development of a high-performance EBS (Engineered Barrier System), such as a composite bentonite buffer mixed with graphite or silica, or a multi-layered buffer (i.e., a highly thermally conductive layer or an insulating layer), should be considered to enhance disposal efficiency, in parallel with the development of a multilayer repository. This will contribute to increasing the reliability of, and securing public acceptance for, high-level waste disposal.

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult to realize flexible storage expansion for processing a massive amount of unstructured log data and to execute the considerable number of functions needed to categorize and analyze the stored unstructured log data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to handle with the existing computing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, including the ability to flexibly expand resources such as storage space and memory when storage must be extended or log data increase rapidly. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions so that it can continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have strict schemas that are inappropriate for processing unstructured log data; moreover, such strict schemas make it difficult to expand nodes when the stored data must be distributed across various nodes as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified into Key-Value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema structure makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data is rapidly increasing, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies them according to the type of log data and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis performed by the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation is carried out against a log data processing system that uses only MySQL, measuring log insert and query performance; this evaluation demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through the MongoDB log data insert performance evaluation for various chunk sizes.
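As a minimal sketch of the flexible-schema idea behind the MongoDB module (the connection URI, collection, and field names below are assumptions for illustration, not the paper's actual schema), the log collector's routing of a classified log record into MongoDB might look like this:

```python
# Minimal sketch: store one classified, unstructured log line as a schema-free
# MongoDB document. Database, collection, and field names are hypothetical.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumed connection URI
logs = client["banklogs"]["unstructured_logs"]       # assumed db/collection names

def store_log(raw_line: str, log_type: str) -> None:
    """Insert one unstructured log line without a fixed relational schema."""
    logs.insert_one({
        "type": log_type,                            # e.g. "transaction", "auth", "batch"
        "raw": raw_line,                             # original unparsed text
        "collected_at": datetime.now(timezone.utc),  # collection timestamp
    })

store_log("2013-06-01 09:12:03 TRX ok acct=...", "transaction")
```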

A Prototype for Real-time Indoor Evacuation Simulation System using Indoor IR Sensor Information (적외선 센서정보기반 실시간 실내 대피시뮬레이션 시스템 프로토타입)

  • Nam, Hyun-Woo;Kwak, Su-Yeong;Jun, Chul-Min
    • Spatial Information Research
    • /
    • v.20 no.2
    • /
    • pp.155-164
    • /
    • 2012
  • Indoor fire simulators have been used to analyze building safety in emergency evacuation events. These applications focus mainly on simulating evacuation behavior to check building structural problems in advance rather than on supporting real-time situations. Therefore, they have limitations in handling real-time evacuation events for the following reasons. First, the existing models mostly simulate artificial situations using randomly generated evacuees, whereas real-world use requires actual occupant data. Second, they take too long to run to produce real-time results. Third, they do not produce optimal results that can be used for rescue or evacuation guidance. To overcome these limitations, we suggest a method for building an evacuation simulation system that can be used in real-world emergency situations. The system performs numerous simulations in advance for varying distributions of occupants, and the resulting data are stored in a DBMS. The actual occupant data captured by the infrared sensor network are then compared with the simulation data in the DBMS, and the most closely matching result is provided to the user. The developed system is tested on a campus building, and the suggested processes are illustrated.
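A conceptual sketch of the matching step described above, with a hypothetical data layout: the occupant counts observed by the IR sensors are compared against pre-computed scenarios, and the nearest scenario's stored result would be returned to the user.

```python
# Hypothetical lookup: match observed per-zone occupant counts to the closest
# pre-computed simulation scenario (Euclidean distance). Data are illustrative.
import numpy as np

# Pre-computed scenarios: initial occupant count per zone, keyed by result id.
scenarios = {
    "sim_001": np.array([12,  3, 40,  5]),
    "sim_002": np.array([ 2, 30, 10, 25]),
    "sim_003": np.array([20, 20, 20, 20]),
}

def closest_scenario(observed_counts):
    """Return the scenario id whose stored distribution is nearest to the observation."""
    obs = np.asarray(observed_counts, dtype=float)
    return min(scenarios, key=lambda sid: np.linalg.norm(scenarios[sid] - obs))

print(closest_scenario([18, 22, 19, 17]))   # -> "sim_003"
```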

Vibration Analysis of Combined Deck Structure-Car System of Car Carriers (자동차운반선(自動車運搬船)의 갑판-차량(甲板-車輛) 연성계(聯成系)의 진동해석(振動解析))

  • Han, S.Y.;Kim, K.C.
    • Bulletin of the Society of Naval Architects of Korea
    • /
    • v.27 no.2
    • /
    • pp.63-77
    • /
    • 1990
  • The combined deck structure-car system of a car carrier is especially sensitive to hull girder vibrations due to mechanical excitations and wave loads. For the free and forced vibration analysis of the system, analytical methods based on the receptance method, together with two schemes for their efficient application, are presented. The methods are especially relevant to dynamic reanalysis of the system subject to design modification or to dynamic optimization. The deck-car system is modelled as a combined system consisting of a stiffened plate representing the deck, the primary structure, and attached subsystems such as pillars, additional stiffeners, and damped spring-mass systems representing cars/trucks. For response calculations of the system subjected to displacement excitations along the boundaries, a support-displacement transfer ratio, conceptually similar to the receptance, is introduced. To verify the accuracy and computational efficiency of the proposed methods, numerical and experimental investigations are carried out.
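For orientation, the textbook receptance-coupling relation for two subsystems rigidly joined at a single coordinate is shown below; the paper itself uses a multi-coordinate formulation with support-displacement transfer ratios, so this is only the simplest special case.

```latex
% Single-coordinate receptance coupling (simplest textbook case).
% \alpha_{11}, \beta_{11}: driving-point receptances of the deck structure and
% the attached subsystem; \gamma_{11}: receptance of the coupled system.
\[
  \gamma_{11}(\omega) = \frac{\alpha_{11}(\omega)\,\beta_{11}(\omega)}
                             {\alpha_{11}(\omega) + \beta_{11}(\omega)}
\]
```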


Prediction of Shore Tide level using Artificial Neural Network (인공신경망을 이용한 해안 조위예측)

  • Rhee Kyoung Hoon;Moon Byoung Seok;Kim Tae Kyoung;Oh Jong Yang
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2005.05b
    • /
    • pp.1068-1072
    • /
    • 2005
  • Tide is the slow, periodic rise and fall of the sea surface, usually occurring about twice a day, though in some places only once a day. The term also covers somewhat irregular oscillations with periods of several days and fairly regular oscillations with semi-annual or annual periods; however, oscillations with periods of a few minutes to tens of minutes, characteristic of individual harbors, are not treated as tide. Tide is the most predictable of all ocean phenomena because it is tied to the motion of celestial bodies: it can be regarded as the superposition of the regular individual tides attributable to innumerable hypothetical bodies moving along the equator at fixed distances from the Earth, each with its own speed, and each such individual tide is called a constituent. The neural network model used here is a black-box model consisting of inputs and outputs; because it can represent a system in a parallel, nonlinear way, it has previously been applied to analyzing and modelling rainfall-runoff processes in river basins. In this study, tide levels are predicted using an artificial neural network instead of the conventional harmonic analysis. Through the optimization process of learning, the network absorbs and accumulates complex natural phenomena as they are, which gives it a strong ability to reproduce them, and its associative memory makes it well suited to uncertain tidal curves that cannot be expressed mathematically. The purpose of this study is to predict the tidal curve from readily understood meteorological factors (sea-level pressure, wind direction, wind speed, lunar date, etc.) using a neural network model, applied to the tides of the Yeosu area, and to compare and analyze the results.
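A minimal sketch of the idea in the abstract, on synthetic data (the network size, features, and data below are placeholders, not the study's actual configuration): a feed-forward neural network maps the readily observed meteorological inputs to a tide level.

```python
# Illustrative sketch with synthetic data: train a small MLP to map
# (sea-level pressure, wind direction, wind speed, lunar day) to tide level.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(1013, 5, n),        # sea-level pressure [hPa]
    rng.uniform(0, 360, n),        # wind direction [deg]
    rng.uniform(0, 15, n),         # wind speed [m/s]
    rng.integers(1, 30, n),        # lunar day
])
# Synthetic "tide level" [cm], constructed only so the example runs end to end.
y = (200 * np.sin(2 * np.pi * X[:, 3] / 29.5)
     + 0.5 * (1013 - X[:, 0])
     + rng.normal(0, 5, n))

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X[:400], y[:400])
print("R^2 on held-out samples:", round(model.score(X[400:], y[400:]), 3))
```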
