• Title/Summary/Keyword: 비정형구조 (free-form / unstructured structures)


Development of the Free-formed Concrete Structure Construction Technologies using 3D Digital Design (3차원 디지털 설계를 통한 비정형 콘크리트 구조물의 구현기술 개발)

  • Park, Young-Mi;Jo, Seong-Joon;Kim, Sung-Jin
    • Proceedings of the Korean Institute of Building Construction Conference / 2012.05a / pp.205-208 / 2012
  • Recently, free-formed architecture has become a trend with the development of digital equipment and technologies. New construction methods based on digital technology are required for free-formed structures, because conventional construction methods are limited in their ability to shorten the construction period and to ensure construction quality. In particular, developing a new method for free-formed concrete structures is important. In this study, a method using T-shaped lightweight steel members fabricated with CNC equipment was developed that can control the geometry of a free-formed concrete structure based on the digital design. The new method also ensures construction precision and is more economical than conventional construction methods.


Development of an XML Schema for Non-formal Technical Documents (비정형 기술문서에 대한 XML 스키마 개발)

  • Jeong Seong-Yun;Kim Seong-Jin
    • Annual Conference of KIPS / 2004.11a / pp.89-92 / 2004
  • With the development of electronic commerce technology, electronic documents in various formats began to be distributed over the Internet. This raised problems such as a lack of compatibility between different document formats and difficulties in exchanging document data, and as a result many electronic documents came to be written and distributed in XML format. However, most XML electronic documents have been developed for short, fixed-format form documents, and there has been little research and development on XML electronic documents for technical documents that are large in volume and have a non-standard (unstructured) structure. This study analyzed the information elements that can be commonly represented when such technical documents are converted into XML electronic documents, together with the constituent items that have value as information, and developed an XML schema for 35 common information elements.

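The entry above describes an XML schema built from 35 common information elements for large, loosely structured technical documents. As a rough, hypothetical illustration of that idea (the element names below are invented for the sketch and are not the 35 elements defined in the paper), the following Python snippet assembles such a document with the standard library:

    # Hypothetical sketch: a loosely structured technical document as XML.
    # Element names (techDoc, metadata, section, ...) are illustrative only,
    # not the common information elements defined in the cited paper.
    import xml.etree.ElementTree as ET

    doc = ET.Element("techDoc")
    meta = ET.SubElement(doc, "metadata")
    ET.SubElement(meta, "title").text = "Bridge Inspection Report"
    ET.SubElement(meta, "author").text = "Hong, Gil-Dong"
    ET.SubElement(meta, "issuedDate").text = "2004-11-01"

    body = ET.SubElement(doc, "body")
    sec = ET.SubElement(body, "section", attrib={"id": "1", "title": "Overview"})
    ET.SubElement(sec, "paragraph").text = "Free-form text of arbitrary length."
    ET.SubElement(sec, "figure", attrib={"src": "fig1.png", "caption": "Site plan"})

    print(ET.tostring(doc, encoding="unicode"))

A companion XML Schema (XSD) would then constrain which of these elements may appear and how often, while leaving the free-text bodies unconstrained.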

Recognition of numeral strings with broken digits (획의 일부분이 손상된 숫자가 포함된 필기체 숫자 열의 인식)

  • Kim, Kye-Kyung;Kim, Jin-Ho;Cho, Soo-Hyun;Chi, Soo-Young;Chung, Yun-Koo
    • Annual Conference of KIPS / 2001.10a / pp.503-506 / 2001
  • This paper proposes a method for recognizing handwritten numeral strings that contain non-standard digits such as broken digits (digits with partially damaged strokes) and touching digits. In a pre-segmentation stage, the non-standard digits are separated from well-formed isolated digits and classified as broken or touching digits using structural feature information. Recognition is then attempted after merging the separated parts of broken digits and splitting touching digits. To demonstrate the validity of the proposed method, simulations were performed on the NIST SD19 database.

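To make the pre-segmentation step above more concrete, the sketch below labels connected components of a numeral-string image as isolated, broken, or touching using simple geometric cues. The features and thresholds are illustrative assumptions, not the structural features actually used in the paper:

    # Illustrative pre-segmentation: classify connected components of a
    # numeral-string image as 'isolated', 'broken' (a fragment to be merged),
    # or 'touching' (to be split).  Thresholds are arbitrary assumptions.
    def classify_components(boxes, mean_width, mean_height):
        labels = []
        for (x, y, w, h) in boxes:              # bounding boxes of components
            if w > 1.6 * mean_width:            # unusually wide: touching digits
                labels.append("touching")
            elif w < 0.4 * mean_width or h < 0.4 * mean_height:
                labels.append("broken")         # small fragment of a damaged stroke
            else:
                labels.append("isolated")
        return labels

    boxes = [(0, 0, 12, 20), (15, 14, 4, 5), (22, 0, 30, 20)]
    print(classify_components(boxes, mean_width=14, mean_height=20))
    # -> ['isolated', 'broken', 'touching']

Broken fragments would then be merged with a neighbouring component, and touching components split, before individual digit recognition, following the flow described in the abstract.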

Wavelet transform-based hierarchical active shape model for object tracking (객체추적을 위한 웨이블릿 기반 계층적 능동형태 모델)

  • Kim Hyunjong;Shin Jeongho;Lee Seong-won;Paik Joonki
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.11C / pp.1551-1563 / 2004
  • This paper proposes a hierarchical approach to the active shape model (ASM) using the wavelet transform. Local structure model fitting in the ASM plays an important role in model-based pose and shape analysis. The proposed algorithm can robustly find good solutions in complex images by using wavelet decomposition. We also propose an effective method that estimates and corrects an object's movement by using a wavelet transform-based hierarchical motion estimation scheme for ASM-based, real-time video tracking. The proposed algorithm has been tested on various sequences containing human motion to demonstrate the improved performance of the proposed object tracking.
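
The coarse-to-fine idea above can be sketched as follows: build a wavelet approximation pyramid of the frame, fit the shape model on the coarsest approximation, and propagate the estimate to finer levels. The use of PyWavelets and the placeholder fit_asm routine are assumptions for illustration, not the authors' implementation:

    # Coarse-to-fine ASM fitting sketch (not the authors' code).
    import numpy as np
    import pywt

    def fit_asm(image, shape):
        """Placeholder for one ASM local-structure fitting pass at one level."""
        return shape  # a real implementation would update the landmark positions

    def hierarchical_asm(frame, init_shape, levels=2, wavelet="haar"):
        # Approximation pyramid: pyramid[0] is the original frame, deeper
        # entries are coarser approximation (LL) subbands.
        pyramid = [np.asarray(frame, dtype=float)]
        for _ in range(levels):
            cA, _ = pywt.dwt2(pyramid[-1], wavelet)   # keep only the LL subband
            pyramid.append(cA)

        shape = np.asarray(init_shape, dtype=float) / (2 ** levels)
        for img in reversed(pyramid):                 # coarsest -> finest
            shape = fit_asm(img, shape)               # refine at this resolution
            shape = shape * 2                         # scale up to the next level
        return shape / 2                              # undo the last doubling

    landmarks = np.array([[30.0, 40.0], [50.0, 42.0], [45.0, 70.0]])
    print(hierarchical_asm(np.random.rand(128, 128), landmarks))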

Telemetering Service in OpenStack (오픈스택 텔레메터링 서비스(Ceilometer))

  • Baek, D.M.;Lee, B.C.
    • Electronics and Telecommunications Trends / v.29 no.6 / pp.102-112 / 2014
  • Recently, a telemetering service that monitors and meters the individual components of an OpenStack cloud for billing, benchmarking, scalability, and statistical purposes was added as an official project under the code name Ceilometer. It has evolved from monitoring only the elements essential for billing into a multi-purpose service that watches resource states to support orchestration functions such as auto-scaling of cloud resources. In particular, it provides important hints for data analysis, including big data analytics. Based on source-code analysis, this article introduces the data collection structure of Ceilometer, the key concepts of Ceilometer monitoring, MongoDB as the database for unstructured data, and the API (Application Programming Interface) and CLI (Command Line Interface) commands that serve as external interfaces. The conclusion offers an overall assessment of the Ceilometer architecture.

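As a hedged illustration of the external interface mentioned above, the snippet below queries the Ceilometer v2 REST API for the meter list and for hourly statistics of one meter. The endpoint, port, and token are placeholders, and the exact response fields may differ between OpenStack releases:

    # Illustrative queries against the Ceilometer v2 REST API.  The endpoint
    # and token are placeholders; field names may vary between releases.
    import requests

    CEILOMETER_ENDPOINT = "http://controller:8777"   # assumed default API port
    HEADERS = {"X-Auth-Token": "<keystone-token>", "Accept": "application/json"}

    # List the available meters (e.g. cpu_util, disk.read.bytes, ...).
    meters = requests.get(f"{CEILOMETER_ENDPOINT}/v2/meters", headers=HEADERS).json()
    for m in meters[:10]:
        print(m.get("name"), m.get("unit"), m.get("resource_id"))

    # Hourly statistics for one meter over the stored samples.
    stats = requests.get(f"{CEILOMETER_ENDPOINT}/v2/meters/cpu_util/statistics",
                         headers=HEADERS, params={"period": 3600}).json()
    for s in stats:
        print(s.get("period_start"), s.get("avg"), s.get("max"))

The equivalent CLI calls would be along the lines of "ceilometer meter-list" and "ceilometer statistics -m cpu_util", again subject to the release in use.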

Distribution of Member Forces Due To Lost Member (기둥의 제거에 의한 부재력 분포)

  • Han, Saem;Park, Seung-Hee;Kim, Jin-Koo;Park, Jong-Yeol
    • Proceedings of the Computational Structural Engineering Institute Conference / 2010.04a / pp.289-292 / 2010
  • In this study, member forces after the removal of a column were calculated using the linear static analysis method, and their distribution was identified using a probabilistic neural network. When an interior column on the first story was removed, the member forces in the other members were the largest. Identifying and estimating the distribution of member forces with a probabilistic neural network was shown to reduce the time and effort needed to select and identify the members critical to progressive collapse in tall or irregularly shaped (free-form) buildings.

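The probabilistic neural network mentioned above is, in essence, a Parzen-window classifier. A minimal sketch of such a classifier is given below; the training patterns are made-up placeholders standing in for member-force results from linear static analyses with a column removed, not data from the study:

    # Minimal probabilistic neural network (Parzen-window classifier) sketch.
    # The training patterns are made-up placeholders for member-force vectors
    # from column-removal analyses; they are not data from the cited study.
    import numpy as np

    def pnn_classify(x, patterns, labels, sigma=0.5):
        """Return the class whose summed Gaussian kernel response is largest."""
        x = np.asarray(x, dtype=float)
        scores = {}
        for p, lab in zip(patterns, labels):
            d2 = float(np.sum((x - np.asarray(p, dtype=float)) ** 2))
            scores[lab] = scores.get(lab, 0.0) + np.exp(-d2 / (2.0 * sigma ** 2))
        return max(scores, key=scores.get)

    # Demand-to-capacity ratios of three neighbouring members (illustrative).
    train_patterns = [[0.4, 0.3, 0.2], [0.5, 0.4, 0.3],
                      [1.3, 1.1, 0.9], [1.5, 1.2, 1.0]]
    train_labels = ["safe", "safe", "critical", "critical"]

    print(pnn_classify([1.2, 1.0, 0.8], train_patterns, train_labels))  # critical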

Performance Evaluation of Steel Moment Frame and Connection including Inclined Column (경사기둥을 포함한 철골모멘트 골조 및 접합부의 성능평가)

  • Kim, Yong-Wan;Kim, Taejin;Kim, Jongho
    • Journal of the Computational Structural Engineering Institute of Korea / v.26 no.3 / pp.173-182 / 2013
  • The building design projects being carried out nowadays pursue complex and varied shapes, departing from the traditional, regular shape of buildings. In this new architectural trend, which abandons the orthogonality of frames, there is a demand for structural engineering research on the effective realization of such complex-shaped buildings. A distinguishing characteristic of complex-shaped buildings is that inclined columns are frequently included in the structural frame. An inclined column induces additional axial force and bending moment at the beam-column connection, so it is necessary to assess these effects on the structural behavior of the frame and the connection by experiment or analysis. However, compared with studies on ordinary beam-column connections, connections with inclined columns have not been studied sufficiently. Therefore, this study evaluated beam-column connections with an inclined column using nonlinear and finite element analysis methods. In this paper, steel moment frames with inclined columns were analyzed by nonlinear pushover analysis to check the global behavior, and beam-column connection models were analyzed by finite element analysis to check the buckling behavior and fracture potential.
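
To give a concrete sense of the statement above that an inclined column induces additional axial force and bending moment at the connection, the short sketch below resolves a vertical column load along and across the inclined axis. The angle, load, and lever arm are arbitrary example values, and the moment figure is only a crude single-member estimate, not a result from the cited study:

    # Resolve a vertical load on a column inclined at 'theta' from the vertical
    # into axial and transverse components; the transverse component produces
    # additional shear and bending demand at the beam-column connection.
    # Numbers are arbitrary example values, not data from the cited study.
    import math

    P = 1000.0                  # vertical load carried by the column, kN
    theta = math.radians(15.0)  # inclination from the vertical axis
    lever_arm = 4.0             # m, assumed distance to the connection

    axial = P * math.cos(theta)              # component along the column axis
    transverse = P * math.sin(theta)         # component pushed into the frame
    extra_moment = transverse * lever_arm    # crude estimate of the added moment

    print(f"axial = {axial:.0f} kN, transverse = {transverse:.0f} kN, "
          f"extra moment ~ {extra_moment:.0f} kN*m")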

Seismic Performance-based Design using Computational Platform for Structural Design of Complex-shaped Tall Building (전산플랫폼을 이용한 비정형 초고층 건축물 성능기반 내진설계기술의 실무적용)

  • Lee, Dong-Hun;Cho, Chang-Hee;Youn, Wu-Seok;Kang, Dae-Eon;Kim, Taejin;Kim, Jong-Ho
    • Journal of the Computational Structural Engineering Institute of Korea / v.26 no.1 / pp.59-67 / 2013
  • A complex-shaped tall building poses many structural challenges owing to structural characteristics such as inclined members and its complex shape. This paper is aimed at the development of a design process using a computational platform, which is an effective design tool for responding to frequent design changes, particularly in overseas projects. StrAuto, a parametric structural modeling and optimization system, provides optimized alternatives according to the design intent and enables a swift process for converting the series of structural information needed into nonlinear analytical models. The process was applied to a 45-story hotel building in Ulaanbaatar, Mongolia, adopting shear walls and special moment frames with outrigger systems. To investigate the safety of the lateral-force-resisting system against the maximum considered earthquake (MCE), nonlinear response history analysis was conducted using StrAuto.

A Study on Parametric Modeler to Generate Structural Analysis Model (구조해석모델 생성을 위한 파라메트릭 모델러의 적용성 연구)

  • Kim, Chee-Kyeong;Lee, Sang-Su;Choi, Hyun-Chul
    • Proceedings of the Computational Structural Engineering Institute Conference / 2010.04a / pp.247-250 / 2010
  • Unlike the many architects who have recently taken an interest in parametric design tools and are experimenting with diverse building forms, structural engineers have not responded adequately to this trend. The main reason is that there is currently no interface that enables data exchange between parametric models and structural analysis programs. Enabling structural engineers to freely create models that exploit the advantages of the parametric methodology and to use them directly for structural analysis is therefore a task facing the structural engineering community today. In this study, to examine the practical applicability of an interface that automatically generates a structural analysis model from a parametric model, the characteristics and extensibility of parametric design tools were reviewed and a Rhino-based plug-in for generating structural analysis models was developed.

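As a rough illustration of the interface idea above, the sketch below converts a list of parametric member center-lines into a plain analysis model (a node table plus an element table). It is a conceptual stand-in written for this listing, not the Rhino-based plug-in developed in the paper, and the coordinates and section names are arbitrary:

    # Conceptual sketch: convert parametric member center-lines into an
    # analysis model (node table + element table).  Not the Rhino plug-in
    # from the paper; coordinates and section names are arbitrary.
    def build_analysis_model(centerlines, tol=1e-6):
        nodes, elements = [], []

        def node_id(pt):
            for i, n in enumerate(nodes):           # merge coincident points
                if all(abs(a - b) < tol for a, b in zip(n, pt)):
                    return i
            nodes.append(tuple(pt))
            return len(nodes) - 1

        for start, end, section in centerlines:
            elements.append({"i": node_id(start), "j": node_id(end),
                             "section": section})
        return {"nodes": nodes, "elements": elements}

    centerlines = [((0, 0, 0), (0, 0, 4), "H-400x200"),
                   ((0, 0, 4), (6, 0, 4), "H-588x300"),
                   ((6, 0, 4), (6, 0, 0), "H-400x200")]
    model = build_analysis_model(centerlines)
    print(len(model["nodes"]), "nodes,", len(model["elements"]), "elements")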

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computing environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, including the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions so that the system can continue operating after recovering from a malfunction. Finally, by establishing a distributed database using NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, databases with strict schemas, like relational databases, cannot easily expand nodes when the stored data must be distributed across multiple nodes as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases may provide, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with an appropriate structure for processing unstructured data. NoSQL data models are usually classified into key-value, column-oriented, and document-oriented types. Of these, MongoDB, a representative document-oriented database with a free schema structure, is used in the proposed system. MongoDB is introduced to the proposed system because it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to the type of log data and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the log analysis results of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation of log data insertion and query performance is carried out against a log data processing system that uses only MySQL, and it demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through the log data insert performance evaluation of MongoDB for various chunk sizes.
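
As a small illustration of the MongoDB side of the architecture described above, the snippet below stores schema-free log documents with pymongo and aggregates them per hour and type, roughly what a log-graph module might plot. The connection string, database, and field names are assumptions made for this sketch, not the proposed system's actual schema:

    # Illustrative pymongo usage: schema-free log storage and per-hour
    # aggregation.  Connection URI, database, collection, and field names
    # are assumptions for this sketch, not the proposed system's schema.
    from datetime import datetime
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")   # placeholder URI
    logs = client["bank_logs"]["transactions"]

    logs.insert_many([
        {"ts": datetime(2013, 11, 1, 9, 15), "branch": "A01", "type": "transfer",
         "elapsed_ms": 120, "raw": "free-form message text ..."},
        {"ts": datetime(2013, 11, 1, 9, 40), "branch": "A01", "type": "inquiry",
         "elapsed_ms": 35},                 # documents need not share one schema
    ])

    pipeline = [
        {"$group": {"_id": {"hour": {"$hour": "$ts"}, "type": "$type"},
                    "count": {"$sum": 1},
                    "avg_elapsed": {"$avg": "$elapsed_ms"}}},
        {"$sort": {"_id.hour": 1}},
    ]
    for row in logs.aggregate(pipeline):
        print(row)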