• Title/Summary/Keyword: statistical reduction (통계적 축소)


A Study on the Standard of Cost Estimation for Road Pavement Construction and Maintenance (도로포장 및 유지공사 표준품셈 개정 방법에 대한 연구)

  • Jung, Dae-Kwon;Tae, Yong-Ho;Ahn, Bang-Ryul;Cho, Yoon-Ho
    • International Journal of Highway Engineering
    • /
    • v.11 no.1
    • /
    • pp.85-94
    • /
    • 2009
  • In construction cost estimation, several methods are used in Korea, including quantity-per-unit costing, job costing, unit cost estimation, and lump-sum estimation. Among them, the quantity-per-unit costing method serves as the standard of cost estimation in public and private works. This paper presents a realistic job-costing method for all road construction tasks, developed through statistical analyses of field survey data, to resolve the problems induced by the existing quantity-per-unit costing method. A case study comparing the construction costs produced by the two methods showed that the newly developed job-costing method yields a simpler costing procedure and a more realistic construction cost estimate. The two methods were compared in a case study on sub-base work: the job-costing method shortened the estimating process by about 50% and compensates for the current standard's weakness in handling equipment. Job costing can also depict the progress of work, so it can serve as input to construction management planning, as the toy contrast below illustrates.
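
As a purely illustrative contrast between the two estimating approaches, the sketch below prices the same sub-base quantity both ways; every quantity, productivity, and rate in it is hypothetical, not a value from the paper or from the standard.

```python
# Toy contrast between the two estimating approaches discussed above.
# All quantities, productivities, and rates are hypothetical, chosen only
# to illustrate the difference in procedure, not actual standard values.
work_quantity_m3 = 1200.0  # sub-base volume to place

# Quantity-per-unit costing: per-unit labor/equipment inputs from a
# standard table, each priced separately and summed.
unit_inputs = {"labor_hr": 0.05, "grader_hr": 0.01, "roller_hr": 0.008}
unit_rates = {"labor_hr": 20.0, "grader_hr": 90.0, "roller_hr": 70.0}
qpu_cost = work_quantity_m3 * sum(unit_inputs[k] * unit_rates[k] for k in unit_inputs)

# Job costing: a crew's surveyed daily output and all-in daily rate give
# the cost in a single step, which is why the procedure is shorter.
daily_output_m3, crew_day_rate = 400.0, 1100.0
job_cost = (work_quantity_m3 / daily_output_m3) * crew_day_rate

print(f"quantity-per-unit: {qpu_cost:.0f}, job costing: {job_cost:.0f}")
```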


Clustering Analysis of Science and Engineering College Students' Understanding of Probability and Statistics (Robust PCA를 활용한 이공계 대학생의 확률 및 통계 개념 이해도 분석)

  • Yoo, Yongseok
    • Journal of Convergence for Information Technology
    • /
    • v.12 no.3
    • /
    • pp.252-258
    • /
    • 2022
  • In this study, we propose a method for analyzing students' understanding of probability and statistics in small lectures at universities. A computer-based test for probability and statistics was performed on 95 science and engineering college students. After dividing the students' responses into 7 clusters using the Robust PCA and the Gaussian mixture model, the achievement of each subject was analyzed for each cluster. High-ranking clusters generally showed high achievement on most topics except for statistical estimation, and low-achieving clusters showed strengths and weaknesses on different topics. Compared to the widely used PCA-based dimension reduction followed by clustering analysis, the proposed method showed each group's characteristics more clearly. The characteristics of each cluster can be used to develop an individualized learning strategy.
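
A minimal sketch of the pipeline this abstract describes, with scikit-learn's ordinary PCA standing in for the paper's Robust PCA; the response matrix is synthetic, and the component and cluster counts are illustrative.

```python
# Dimensionality reduction followed by Gaussian-mixture clustering of
# item responses, as in the study above. Standard PCA stands in for the
# paper's Robust PCA; the 95 x 30 response matrix is synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
responses = rng.integers(0, 2, size=(95, 30)).astype(float)  # 95 students x 30 items

scores = PCA(n_components=5).fit_transform(responses)  # low-dimensional scores
gmm = GaussianMixture(n_components=7, random_state=0).fit(scores)
labels = gmm.predict(scores)

# Per-cluster achievement: mean correctness within each cluster.
for k in range(7):
    mask = labels == k
    print(f"cluster {k}: n={mask.sum()}, mean score={responses[mask].mean():.2f}")
```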

Real-time Status Monitoring and Prediction System for Buildings Using Artificial Intelligence to Analyze IoT Sensor Data (건축물 IoT 센서 데이터를 분석하여 인공지능을 활용한 건축물 실시간 상태감시 및 예측 시스템)

  • Seo, Ji-min;Kim, Jung-jip;Gwon, Eun-hye;Jung, Heokyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.533-535
    • /
    • 2021
  • The differences between this study and previous studies are as follows. First, by building a cloud-based system using IoT technology, the system was built to monitor the status of buildings in real time from anywhere with an internet connection. Second, a model for predicting the future was developed using artificial intelligence (LSTM) and statistical (ARIMA) methods for the measured time series sensor data, and the effectiveness of the proposed prediction model was experimentally verified using a scaled-down building model. Third, a method to analyze the condition of a building more three-dimensionally by visualizing the structural deformation of a building by convergence of multiple sensor data was proposed, and the effectiveness of the proposed method was demonstrated through the case of an actual earthquake-damaged building.
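
A minimal sketch of the statistical (ARIMA) half of the prediction model, using statsmodels on a synthetic sensor series; the (2, 1, 2) order is illustrative rather than the paper's fitted model, and the LSTM half is omitted for brevity.

```python
# ARIMA forecasting of a time-series sensor signal, as in the statistical
# half of the model above. The series is synthetic (trend + seasonality +
# noise) and the (p, d, q) order is an illustrative choice.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
t = np.arange(500)
sensor = 0.01 * t + np.sin(2 * np.pi * t / 50) + rng.normal(0, 0.1, 500)

train, test = sensor[:450], sensor[450:]
fit = ARIMA(train, order=(2, 1, 2)).fit()
forecast = fit.forecast(steps=len(test))  # 50-step-ahead prediction

rmse = np.sqrt(np.mean((forecast - test) ** 2))
print(f"RMSE over the held-out window: {rmse:.3f}")
```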


A Competitive Advantage Analysis of Construction Duration through the Comparison of Actual Data of Domestic Construction Firms - Focused on Mixed-Use Residential and Officetel Buildings - (건설사별 공기비교를 통한 공기경쟁력 분석 - 주상복합 및 오피스텔 건물을 중심으로 -)

  • Ryu, Han-Guk;Kim, Sun-Kuk;Lee, Hyun-Soo
    • Korean Journal of Construction Engineering and Management
    • /
    • v.7 no.1 s.29
    • /
    • pp.138-147
    • /
    • 2006
  • Construction companies have become interested in construction duration, which strongly affects the performance and success of construction projects, in light of systemic changes such as the five-day work week, the introduction of the duration-reduction bidding system, and the post-sale system. Properly estimating and forecasting construction duration is also very important as companies compete for projects in a shrinking construction market under the lowest-bid system. Recognizing this importance, studies comparing, analyzing, or estimating construction duration have been performed. However, comparative studies of construction duration in Korea have been limited to apartment and office buildings. Many studies have forecast construction duration through stochastic analysis and simulation, but little research has addressed the comparative analysis of actual construction durations for mixed-use residential and officetel buildings, building types that emerged from changes in building requirements. Therefore, the objective of this study is to compare and analyze the actual and hypothetical construction durations of domestic companies' mixed-use residential and officetel buildings. Moreover, we select the most competitive construction company to identify its strengths and analyze the companies' competitive advantages with respect to construction duration.

A Study on Estimating Method for Actual Unit Cost Based on Bid Prices in Public Construction Projects (시설공사 입찰단가를 활용한 실적단가의 산정 방안에 관한 연구)

  • Kang, Sang-Hyeok;Park, Won-Young;Song, Soon-Ho;Seo, Jong-Won
    • Korean Journal of Construction Engineering and Management
    • /
    • v.7 no.5
    • /
    • pp.159-166
    • /
    • 2006
  • The Korean Standard of Estimate, which has been used as the only basis for cost estimation of public construction projects, was found to have side effects such as jerry-built construction and over-estimation because it failed to reflect current prices and state-of-the-art construction methods in a changing construction environment. Therefore, the government decided to gradually introduce historical construction cost into the cost estimation of public construction projects from 2004. This paper presents analytic criteria and a process model for deriving more current and reasonable historical construction costs for contract items, drawing not only on previous contract prices but also on all of the other bid prices that were not contracted. The procedure for estimating actual unit cost proposed in this paper focuses on the removal of abnormal values, including strategically low or high prices, and on time correction; a minimal sketch of these two steps follows. In addition, basic research is conducted on the correction of actual unit cost through analysis of bid-price fluctuation depending on bidding types and rates of successful bids. Effective use of the proposed process model is anticipated to make cost estimation more current and reasonable.
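
A minimal sketch of those two steps under assumed data: the bid prices and price index below are hypothetical, time correction restates each bid at a common index level, and an IQR rule stands in for the paper's abnormal-value screening criteria.

```python
# Deriving an actual unit cost from bid prices: time-correct each bid to
# a common point in time, then trim abnormal (strategically low or high)
# values. Bid data and price index are hypothetical placeholders.
import numpy as np

bid_prices = np.array([98, 102, 100, 40, 105, 97, 180, 101], dtype=float)
price_index = np.array([1.00, 1.00, 1.02, 1.02, 1.04, 1.04, 1.05, 1.05])

# Time correction: restate every bid at the latest index level.
corrected = bid_prices * price_index[-1] / price_index

# Abnormal-value removal: drop bids outside 1.5 * IQR of the sample.
q1, q3 = np.percentile(corrected, [25, 75])
iqr = q3 - q1
kept = corrected[(corrected >= q1 - 1.5 * iqr) & (corrected <= q3 + 1.5 * iqr)]

print(f"actual unit cost estimate: {kept.mean():.1f} "
      f"(from {len(kept)} of {len(bid_prices)} bids)")
```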

A Study on the Application of Urban Spatial Models Using SpatioTemporal GIS: Focused on Population Distribution Modeling (SpatioTemporal GIS를 활용한 도시공간모형 적용에 관한 연구 / 인구분포모델링을 중심으로)

  • 남광우;이성호;김영섭;최철옹
    • Proceedings of the Korean Association of Geographic Information Studies Conference
    • /
    • 2002.03b
    • /
    • pp.127-141
    • /
    • 2002
  • In a GIS environment, applying an urban model with socio-economic data cannot be done efficiently with a snapshot model alone, which stores the situation at a single point in time, because urban phenomena are complex and variable. Moreover, in applying an urban model, the space, attributes, and time that GIS targets can be defined differently depending on the purpose of the analysis, and different definitions can lead to different results. This study built a Temporal GIS incorporating the time dimension to observe the dynamic change of Busan's population distribution over 30 years, and applied population density and accessibility models to it, in order to propose a GIS approach capable of producing more efficient and diverse results. The data processing required to quantify spatial phenomena and apply statistical techniques can introduce many errors; resolving this requires, first of all, data definitions suited to the analysis purpose, validation of the model to be applied, selection of an appropriate unit of analysis, and an objective approach to interpreting results. In addition, a methodology for efficiently handling time-series data is needed to capture change. That is, to maximize the efficiency and effectiveness of applying urban models in a GIS environment, the data model and the spatial DB construction method must suit the analysis purpose, the types of data that can be analyzed must be fully considered, and a cyclical decision-making process is needed that verifies in advance the factors that can significantly affect analysis results.
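
As one concrete example of the population-density models such studies apply, the sketch below fits Clark's classic negative exponential model, D(r) = D0·exp(-b·r); the distance and density samples are synthetic, not Busan census data, and the paper's actual model specification may differ.

```python
# Illustrative fit of Clark's negative exponential density model,
# D(r) = D0 * exp(-b * r), a classic form of population-density model.
# The distance/density samples below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def clark(r, d0, b):
    return d0 * np.exp(-b * r)

r_km = np.array([1, 2, 4, 6, 8, 10, 12], dtype=float)            # distance from CBD
density = np.array([180, 150, 95, 60, 40, 28, 18], dtype=float)  # persons/ha

(d0, b), _ = curve_fit(clark, r_km, density, p0=(200.0, 0.2))
print(f"D0 = {d0:.1f} persons/ha, density gradient b = {b:.3f} per km")
```

Fitting the same model at successive census years would expose the kind of dynamic change in the density gradient that the Temporal GIS is built to observe.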


Reliability-Based Design Optimization of 130m Class Fixed-Type Offshore Platform (신뢰성 기반 최적설계를 이용한 130m급 고정식 해양구조물 최적설계 개발)

  • Kim, Hyun-Seok;Kim, Hyun-Sung;Park, Byoungjae;Lee, Kangsu
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.34 no.5
    • /
    • pp.263-270
    • /
    • 2021
  • In this study, a reliability-based design optimization of a 130-m class fixed-type offshore platform to be installed in the North Sea was carried out, considering environmental, material, and manufacturing uncertainties, to enhance its structural safety and economics. For the reliability analysis and the reliability-based design optimization of structural integrity, the unity check values (the ratios of working to allowable stress for axial, bending, and shear stresses) of the platform's members were treated as constraints. The weight of the supporting jacket structure was minimized to reduce the platform's manufacturing cost. Statistical characteristics of the uncertainties were defined from observed and measured data references. Reliability analysis and reliability-based design optimization of a jacket-type offshore structure are computationally burdensome because of the large number of members; we therefore suggest a method of variable screening, based on the importance of the variables to the output responses, to reduce the dimension of the problem (a minimal sketch follows). Furthermore, a deterministic design optimization was carried out prior to the reliability-based design optimization to improve overall computational efficiency. Finally, the optimal design obtained was compared with the conventional rule-based offshore platform design in terms of safety and cost.
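
A minimal sketch of output-importance screening under stated assumptions: the response function below is a stand-in for the platform's unity checks, and the correlation-based importance measure and cutoff are illustrative choices, not the paper's exact procedure.

```python
# Variable screening by output importance: sample the design variables,
# rank them by how strongly each drives the response, and keep only the
# influential ones for the expensive reliability loop. The response is a
# hypothetical stand-in for a member's unity check.
import numpy as np

rng = np.random.default_rng(2)
n_vars, n_samples = 20, 500
x = rng.normal(0.0, 1.0, size=(n_samples, n_vars))

# Stand-in response: only the first three variables matter strongly.
unity_check = 0.9 * x[:, 0] + 0.5 * x[:, 1] + 0.3 * x[:, 2] + 0.05 * x.sum(axis=1)

# Importance via absolute correlation between each variable and the output.
importance = np.abs([np.corrcoef(x[:, j], unity_check)[0, 1] for j in range(n_vars)])
screened = np.argsort(importance)[::-1][:5]  # keep the 5 most influential
print("variables kept for RBDO:", sorted(screened.tolist()))
```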

Statistical Techniques to Detect Sensor Drifts (센서드리프트 판별을 위한 통계적 탐지기술 고찰)

  • Seo, In-Yong;Shin, Ho-Cheol;Park, Moon-Ghu;Kim, Seong-Jun
    • Journal of the Korea Society for Simulation
    • /
    • v.18 no.3
    • /
    • pp.103-112
    • /
    • 2009
  • In a nuclear power plant (NPP), periodic sensor calibrations are required to ensure that sensors operate correctly, yet only a few of the calibrated sensors are actually found to be faulty. For safe NPP operation and a reduction in unnecessary calibration, on-line calibration monitoring is needed. In this paper, principal component-based auto-associative support vector regression (PCSVR) is proposed for sensor signal validation in an NPP. It combines the merits of principal component analysis (PCA), which extracts predominant feature vectors, with those of AASVR, which easily represents complicated processes that are difficult to capture with analytical and mechanistic models. Using real plant startup data from Kori Nuclear Power Plant Unit 3, the SVR hyperparameters were optimized by response surface methodology (RSM). Statistical techniques are then integrated with PCSVR for failure detection: the residuals between the estimated and measured signals are tested with the Shewhart control chart, the exponentially weighted moving average (EWMA), the cumulative sum (CUSUM), and the generalized likelihood ratio test (GLRT) to decide whether a sensor has drifted; two of these residual tests are sketched below. The study shows that the GLRT is a strong candidate for detecting sensor drift.
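
Two of the residual tests named above, the EWMA chart and a one-sided tabular CUSUM, sketched on a synthetic residual series with an injected drift; the smoothing constant, reference value, and thresholds are illustrative, and the GLRT is omitted.

```python
# EWMA and tabular CUSUM applied to model-minus-measurement residuals,
# with a small drift injected at t = 150. Constants are illustrative.
import numpy as np

rng = np.random.default_rng(3)
residuals = rng.normal(0.0, 1.0, 300)
residuals[150:] += 0.8  # injected sensor drift

# EWMA chart: z_t = lam * r_t + (1 - lam) * z_{t-1}; alarm at 3-sigma limit.
lam, z, ewma_alarm = 0.2, 0.0, None
limit = 3.0 * np.sqrt(lam / (2 - lam))  # asymptotic 3-sigma EWMA limit
for t, r in enumerate(residuals):
    z = lam * r + (1 - lam) * z
    if abs(z) > limit and ewma_alarm is None:
        ewma_alarm = t

# One-sided tabular CUSUM with reference value k and decision interval h.
k, h, s, cusum_alarm = 0.5, 5.0, 0.0, None
for t, r in enumerate(residuals):
    s = max(0.0, s + r - k)
    if s > h and cusum_alarm is None:
        cusum_alarm = t

print(f"drift injected at 150; EWMA alarm at {ewma_alarm}, CUSUM alarm at {cusum_alarm}")
```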

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.105-122
    • /
    • 2019
  • Dimensionality reduction is one way to handle big data in text mining, and the density of the data strongly influences sentence-classification performance: high-dimensional data demand heavy computation and can lead to high computational cost and overfitting, so a dimension-reduction step is needed to improve model performance. Proposed methods range from merely reducing noise in the data, such as misspellings and informal text, to incorporating semantic and syntactic information. Moreover, how text features are expressed and selected affects classifier performance in sentence classification, one of the fields of natural language processing. The common goal of dimension reduction is to find a latent space that represents the raw data from the observation space. Existing methods use various algorithms, such as feature extraction and feature selection, as well as word embeddings, which learn low-dimensional vector-space representations of words that capture semantic and syntactic information. To improve performance, recent studies have modified the word dictionary according to positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations: once a feature selection algorithm marks certain words as unimportant, we expect words similar to them to likewise have no impact on sentence classification. This study proposes two ways to achieve more accurate classification, both of which eliminate words under specific rules and construct word embeddings based on Word2Vec. To find words of low importance in the text, information gain measures importance and cosine similarity finds similar words. First, words with comparatively low information gain are eliminated from the raw text before the word embedding is formed. Second, words similar to the low-information-gain words are additionally eliminated before the embedding is formed. The filtered text and word embeddings are then fed to deep learning models: a convolutional neural network and an attention-based bidirectional LSTM. The study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets. Reviews with more than five helpful votes and a helpful-vote ratio over 70% were classified as helpful; since Yelp shows only the number of helpful votes, 100,000 reviews with more than five helpful votes were extracted by random sampling from 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters, was applied to each dataset. To evaluate the proposed methods, their performance was compared with Word2Vec and GloVe embeddings built on all words; one of the proposed methods outperformed the all-word embeddings, showing that removing unimportant words improves performance, although removing too many words lowered it again. A minimal sketch of the two-step elimination follows.
For future research, diverse preprocessing and in-depth analysis of word co-occurrence for measuring similarity should be considered. Also, the proposed method was applied only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo could be combined with the proposed elimination methods.
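
A minimal sketch of the proposed two-step elimination on a toy corpus: information gain is approximated with scikit-learn's mutual_info_classif, gensim's Word2Vec supplies the similarity search, and the corpus, labels, and thresholds are placeholders, not the paper's settings.

```python
# Two-step word elimination: (1) drop words with low information gain,
# (2) also drop words whose embedding vectors are close to the dropped
# ones, then rebuild the filtered text. Toy corpus and thresholds.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif
from gensim.models import Word2Vec

docs = ["great fast delivery", "terrible slow service", "great service",
        "slow terrible delivery", "fast great product", "terrible product"]
labels = [1, 0, 1, 0, 1, 0]

vec = CountVectorizer()
X = vec.fit_transform(docs)
words = vec.get_feature_names_out()
gain = mutual_info_classif(X, labels, discrete_features=True, random_state=0)

low_ig = {w for w, g in zip(words, gain) if g < 0.05}  # step 1: low-gain words

w2v = Word2Vec([d.split() for d in docs], vector_size=16, min_count=1, seed=0)
similar = {s for w in low_ig if w in w2v.wv
           for s, sim in w2v.wv.most_similar(w, topn=2) if sim > 0.9}

removed = low_ig | similar  # step 2: add their near neighbors
filtered = [" ".join(t for t in d.split() if t not in removed) for d in docs]
print(removed, filtered, sep="\n")
```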

Evaluation of Oil Spill Detection Models by Oil Spill Distribution Characteristics and CNN Architectures Using Sentinel-1 SAR Data (Sentinel-1 SAR 영상을 활용한 유류 분포특성과 CNN 구조에 따른 유류오염 탐지모델 성능 평가)

  • Park, Soyeon;Ahn, Myoung-Hwan;Li, Chenglei;Kim, Junwoo;Jeon, Hyungyun;Kim, Duk-jin
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.5_3
    • /
    • pp.1475-1490
    • /
    • 2021
  • Detecting oil spill areas from the statistical characteristics of SAR images has limitations: the classification algorithms are complicated and strongly affected by outliers. To overcome these limitations, recent studies have used neural networks to classify oil spills, but few have evaluated whether a model detects consistently across various oil spill cases. Therefore, in this study, two CNNs (convolutional neural networks) with basic structures, a Simple CNN and a U-net, were used to determine whether detection performance differs with the CNN structure and with the distribution characteristics of the oil spill. Using the method proposed in this study, the Simple CNN, which has a contracting path only, detected oil spills with an F1 score of 86.24%, while the U-net, which has both contracting and expansive paths, reached an F1 score of 91.44%. Both models detected oil spills successfully, but the U-net performed better. Additionally, to compare model accuracy across oil spill cases, the cases were classified into four categories according to the spatial distribution of the spill (presence of land near the spill area) and the clarity of the border between oil and seawater; the per-category evaluation is sketched below. The Simple CNN scored F1 values of 85.71%, 87.43%, 86.50%, and 85.86% across the categories, a maximum difference of 1.71%; the U-net scored 89.77%, 92.27%, 92.59%, and 92.66%, a maximum difference of 2.90%. Neither model, therefore, showed significant differences in detection performance across oil spill distribution characteristics. However, detection tendencies differed with model structure: in all four categories the Simple CNN tended to overestimate the oil spill area and the U-net tended to underestimate it, and these tendencies were emphasized when the border between oil and seawater was unclear.
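
A minimal sketch of the per-category evaluation: pixel-level F1 scores computed separately for each case category with scikit-learn. The masks are random placeholders standing in for model output and SAR ground truth, and the category names are paraphrases of the four cases described above.

```python
# Per-category F1 evaluation of an oil-spill segmentation model.
# Ground-truth and predicted masks are random placeholders; category
# names paraphrase the four spill cases described in the abstract.
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(4)
categories = ["land nearby / clear border", "land nearby / unclear border",
              "open sea / clear border", "open sea / unclear border"]

for name in categories:
    truth = rng.integers(0, 2, size=64 * 64)   # ground-truth oil mask (flattened)
    noise = rng.random(64 * 64) < 0.1          # 10% simulated disagreement
    pred = np.where(noise, 1 - truth, truth)   # simulated model output
    print(f"{name}: F1 = {f1_score(truth, pred):.3f}")
```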