• Title/Summary/Keyword: large-scale systems

Search Results: 1,879

Efficiency Analysis of Project Management Offices Using Bootstrap DEA (부트스트랩 자료포락분석을 이용한 프로젝트 관리 조직의 효율성 분석)

  • Ko, Joong-Hoon;Park, Sung-Hun;Bae, Eun-Song;Kim, Dae-Cheol
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.41 no.3
    • /
    • pp.61-71
    • /
    • 2018
  • The purpose of this study is to analyze the efficiency of project management offices (PMOs) in large information system construction projects using data envelopment analysis (DEA). In addition, we estimated confidence intervals for those efficiencies using bootstrap DEA to give them statistical meaning. Under the CCR model, eight PMOs are fully efficient and 22 are inefficient, whereas under the BCC model 15 PMOs are fully efficient and 15 are inefficient. The scale-efficiency results show that, of the inefficient PMOs, 13 are scale-inefficient; their inefficiency under the CCR model can be eliminated by improving project performance. The remaining nine PMOs owe their inefficiency to pure technical inefficiency, and these organizations should pursue improvements such as refining their project execution systems and project management processes. Examination of the potential improvements available to the inefficient PMOs shows that, to become efficient, they must make greater efforts to stay on budget and on schedule.
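
The input-oriented CCR efficiency described above is a linear program per decision-making unit (DMU), and a bootstrap resamples the reference set to put an interval around each score. A minimal sketch with hypothetical PMO data follows; note this uses a naive resampling bootstrap for illustration, not the smoothed (Simar-Wilson) bootstrap such studies typically apply:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0.
    X: (m, n) inputs, Y: (s, n) outputs for n DMUs."""
    m, n = X.shape
    s = Y.shape[0]
    # variables: theta, lambda_1..lambda_n; minimize theta
    c = np.zeros(1 + n)
    c[0] = 1.0
    A_ub = np.zeros((m + s, 1 + n))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, j0]   # sum_j lambda_j x_ij <= theta * x_i0
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y         # sum_j lambda_j y_rj >= y_r0
    b_ub[m:] = -Y[:, j0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.fun

def bootstrap_ci(X, Y, j0, B=100, alpha=0.05, seed=0):
    """Naive percentile CI: resample the reference set, keep DMU j0."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    stats = []
    for _ in range(B):
        idx = rng.integers(0, n, n)
        Xb = np.column_stack([X[:, j0], X[:, idx]])
        Yb = np.column_stack([Y[:, j0], Y[:, idx]])
        stats.append(ccr_efficiency(Xb, Yb, 0))
    return tuple(np.quantile(stats, [alpha / 2, 1 - alpha / 2]))

# toy data (hypothetical): 2 inputs, 1 output, 4 PMOs
X = np.array([[2.0, 4.0, 3.0, 5.0],
              [3.0, 1.0, 2.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
scores = [ccr_efficiency(X, Y, j) for j in range(4)]
lo_b, hi_b = bootstrap_ci(X, Y, 3)
```

The naive interval for an inefficient DMU sits at or above its point estimate, because dropping frontier DMUs from the resample can only make the target look better; this upward bias is exactly what the smoothed bootstrap corrects.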

Diurnal Variation of Atmospheric Pollutant Concentrations Affected by Development of Windstorms along the Lee Side of Coastal Mountain Area

  • Choi, Hyo
    • International Union of Geodesy and Geophysics Korean Journal of Geophysical Research
    • /
    • v.24 no.1
    • /
    • pp.29-45
    • /
    • 1996
  • Before (March 26, 1994) and after (March 29) the occurrence of a downslope windstorm, NO, $NO_2$, and $SO_2$ at ground level in Kangnung city showed high concentrations in the afternoon, especially near 1600-1800 LST and 2000-2100 LST, due to the large amount of gases emitted by motor-vehicle combustion and heating apparatus; at night, concentrations were low, reflecting the small consumption of vehicle and heating fuels. When moderate westerly synoptic-scale winds flow over Mt. Taegwallyang and meet the easterly mesoscale sea breeze during the day, atmospheric pollutants are trapped between the two wind systems, producing the higher afternoon concentrations at Kangnung city. At night, the combination of the westerly synoptic wind and the land breeze produces relatively strong winds, and the resulting dissipation drives the already-low concentrations lower and lower as the night goes on. From March 27 through 28, an enhanced localized windstorm was produced along the lee side of the mountain near Kangnung, generating westerly internal gravity waves with hydraulic-jump motions. The sea breeze moving inland was apparently confined to the bottom of the eastern side of the mountain by the interruption of the violent eastward internal gravity waves. As the windstorm moved down toward the ground, the encounter point of the two opposing winds approached Kangnung, and a great amount of NO and $NO_2$ was removed by the strong surface winds; their maximum concentrations were therefore found near 18 and 20 LST, and 17 and 21 LST, respectively. In the nighttime, the further-developed storm produced very strong surface winds, and NO and $NO_2$ were easily dissipated elsewhere. The $SO_2$ concentration had no maximum value, remaining almost constant all day long owing to its removal by the strong surface winds.
Notably, the CO concentrations were slightly lower during the storm period than either before or after it, but they remained nearly constant through both daytime and nighttime.

POLLUTION PREVENTION : ENGINEERING DESIGN AT MACRO-, MESO-, AND MICROSCALES

  • Allen, David T.
    • Clean Technology
    • /
    • v.2 no.2
    • /
    • pp.51-59
    • /
    • 1996
  • Billions of tons of industrial waste are generated annually in industrialized countries. Managing and legally disposing of these wastes costs tens to hundreds of billions of dollars each year, and these costs have been rising rapidly. The escalation is likely to continue as emission standards become even more stringent around the world. In the face of these rapidly rising costs and rapidly increasing performance standards, traditional end-of-pipe approaches to waste management have become less attractive. In many cases the most economical waste management alternatives have become recycling of the waste or the redesign of chemical processes and products so that wastes are prevented or put to productive use. These strategies of recycling or reducing waste at the source have collectively come to be known as pollution prevention. The engineering challenges associated with pollution prevention are substantial. This presentation categorizes the challenges at three levels. At the most macroscopic level, the flow of materials in our industrial economy, from natural resource extraction to consumer product disposal, can be redesigned. Currently, most of our raw materials are virgin natural resources that are used once and then discarded. Studies in what has come to be called industrial ecology examine the material efficiency of large-scale industrial systems and attempt to improve that efficiency. A second level of engineering challenges is found at the scale of individual industrial facilities, where chemical processes and products can be redesigned so that waste is reduced. Finally, at the molecular level, chemical synthesis pathways, combustion reaction pathways, and other material fabrication procedures can be redesigned to reduce emissions of pollution and unwanted by-products. All of these design activities, shown in Figure 1, have the potential to prevent pollution. All involve the tools of engineering, and in particular, chemical engineering.

Improvements on the Three-Dimensional Positioning of High Resolution Stereo Satellite Imagery (고해상도 스테레오 위성영상의 3차원 정확도 평가 및 향상)

  • Jeong, In-Jun;Lee, Chang-Kyung;Yun, Kong-Hyun
    • Korean Journal of Remote Sensing
    • /
    • v.30 no.5
    • /
    • pp.617-625
    • /
    • 2014
  • The Rational Function Model (RFM) has been used as a replacement sensor model in most commercial photogrammetric systems because it can maintain the accuracy of physical sensor models. Although satellite images with rational polynomial coefficients have been used to determine three-dimensional position, their accuracy is limited for large-scale topographic mapping. In this study, high-resolution stereo satellite images from QuickBird-2 were used to investigate how much three-dimensional positioning accuracy is affected by the number of ground control points (GCPs), the polynomial order, and the distribution of GCPs. The results confirm that these experiments satisfy the horizontal and height accuracy requirements for 1:25,000-scale mapping.
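
The RFM maps ground coordinates to image coordinates as ratios of cubic polynomials. A simplified sketch follows; real RPC files (e.g., the NITF RPC00B layout) fix a specific 20-term ordering and apply offset/scale normalization, so the term ordering and toy coefficients here are assumptions for illustration only:

```python
import numpy as np

def cubic_terms(x, y, z):
    """All 20 monomials of (x, y, z) up to total degree 3."""
    return np.array([x**i * y**j * z**k
                     for i in range(4)
                     for j in range(4 - i)
                     for k in range(4 - i - j)])

def rfm_image_coords(lon, lat, h, num_s, den_s, num_l, den_l):
    """Image (sample, line) as polynomial ratios in normalized
    ground coordinates: sample = P1/P2, line = P3/P4."""
    t = cubic_terms(lon, lat, h)
    return t @ num_s / (t @ den_s), t @ num_l / (t @ den_l)

# toy coefficients (hypothetical): sample = lon, line = lat
num_s = np.zeros(20); den_s = np.zeros(20)
num_l = np.zeros(20); den_l = np.zeros(20)
den_s[0] = den_l[0] = 1.0   # denominators = constant 1
num_s[10] = 1.0             # pure-lon monomial (i=1, j=k=0)
num_l[4] = 1.0              # pure-lat monomial (j=1, i=k=0)
s, l = rfm_image_coords(0.25, -0.5, 0.1, num_s, den_s, num_l, den_l)
```

Bias-compensating such a model with GCPs, as the study investigates, amounts to fitting a low-order polynomial correction on top of these ratios.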

Development and Application of a Physics-based Soil Erosion Model (물리적 표토침식모형의 개발과 적용)

  • Yu, Wansik;Park, Junku;Yang, JaeE;Lim, Kyoung Jae;Kim, Sung Chul;Park, Youn Shik;Hwang, Sangil;Lee, Giha
    • Journal of Soil and Groundwater Environment
    • /
    • v.22 no.6
    • /
    • pp.66-73
    • /
    • 2017
  • Empirical erosion models such as the Universal Soil Loss Equation (USLE) have been widely used to produce spatially distributed soil erosion vulnerability maps. Although these models detect vulnerable sites relatively well by exploiting large datasets on climate, geography, geology, land use, etc. within the study domain, they do not adequately describe the physical process of soil erosion on the ground surface caused by rainfall or overland flow. In other words, such models remain powerful tools for distinguishing erosion-prone areas at large scale, but physics-based models are necessary to better analyze soil erosion and deposition as well as eroded-particle transport. In this study, a physics-based soil erosion modeling system was developed that produces both runoff and sediment yield time series at the watershed scale and reflects them in erosion and deposition maps. The developed modeling system consists of three sub-systems: a rainfall pre-processor, a geography pre-processor, and the main modeling processor. To validate the system, we applied it to various erosion cases, in particular rainfall-runoff-sediment yield simulation and estimation of the probable maximum sediment (PMS) correlated with the probable maximum precipitation (PMP). The system performed acceptably in both applications.
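
For contrast with the physics-based approach, the empirical USLE baseline the abstract mentions is a single multiplicative equation per map cell. A minimal sketch with hypothetical factor values (the study's actual parameters are not given):

```python
def usle_soil_loss(R, K, LS, C, P):
    """USLE: A = R * K * LS * C * P, the annual soil loss per unit area
    as a product of rainfall erosivity (R), soil erodibility (K),
    slope length-steepness (LS), cover (C), and practice (P) factors."""
    return R * K * LS * C * P

# illustrative (hypothetical) factor values for one hillslope cell
A = usle_soil_loss(R=4500, K=0.25, LS=1.2, C=0.1, P=0.8)
```

Because every factor is an empirical index rather than a state variable, the equation yields an annual average only; it carries no time series of runoff or sediment yield, which is precisely the gap the physics-based system above fills.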

Effect of Pretreatment of Biogenic Titanium Dioxide on Photocatalytic Transformation of Chloroform (Biogenic TiO2 나노입자 전처리가 클로로포름 광분해에 미치는 영향)

  • Kwon, Sooyoul;Rorrer, Greg;Semprini, Lewis;Kim, Young
    • Journal of Korean Society on Water Environment
    • /
    • v.27 no.1
    • /
    • pp.98-103
    • /
    • 2011
  • Photocatalysis using UV light and catalysts is an attractive low-temperature, non-energy-intensive method for remediating a wide range of chemical contaminants such as chloroform (CF). The development of environmentally friendly and sustainable catalytic systems is needed before such catalysts can be routinely applied to large-scale remediation or drinking-water treatment. Titanium dioxide is a candidate material, since it is stable, highly reactive, and inexpensive. Diatoms are photosynthetic, single-celled algae that build a microscale silica shell with nanoscale features, and they can biologically fabricate $TiO_2$ nanoparticles into this shell in a process that parallels nanoscale silica mineralization. We cultivated diatoms, metabolically deposited titanium into the shell using a two-stage photobioreactor, and used this biogenic $TiO_2$ in the present study. We evaluated how effectively biogenic $TiO_2$ nanoparticles transform CF compared with chemically synthesized $TiO_2$ nanoparticles, and the effect of pretreating the diatom-produced $TiO_2$ nanoparticles on the photocatalytic transformation of CF. The rate of CF transformation by diatom-$TiO_2$ particles was a factor of 3 slower than by the chemically synthesized particles; chloride-ion production was correlated with CF transformation, and 79~91% CF mineralization was observed for both $TiO_2$ particles. Neither the period of sonication nor mass transfer due to particle size, evaluated from differences in oxygen tension, affected the CF transformation. Based on XRD analysis, we conclude that the slower CF transformation by diatom-$TiO_2$ might be due to incomplete annealing to the anatase form.

A Study on the Improvement of Design For Safety through the Analysis of Overseas Cases (국내 설계 안전성 검토 및 해외 사례 분석을 통한 개선방안 연구)

  • Yeom, Seong-Jun;Kim, Jun-HO;Lee, Donghoon
    • Journal of the Korea Institute of Construction Safety
    • /
    • v.3 no.1
    • /
    • pp.25-31
    • /
    • 2020
  • While the disaster rate across all industries has declined steadily over the past decade, the accident rate in the construction industry has been rising, and improvement is urgent. The purpose of this study is to compare and analyze the domestic design-for-safety review system with overseas cases, derive the problems of the domestic system, and find ways to improve it. As in the U.K. and Singapore, the review should begin at the start of working design, and safety reviews should be conducted not only for large-scale construction but also for small-scale construction in order to reduce construction industry disasters. In particular, a system needs to be prepared where related systems are lacking. The design-for-safety system is meaningful in that it broadens the spectrum beyond operator-centered safety management, and it requires substantial improvement and research in the future.

CNN based data anomaly detection using multi-channel imagery for structural health monitoring

  • Shajihan, Shaik Althaf V.;Wang, Shuo;Zhai, Guanghao;Spencer, Billie F. Jr.
    • Smart Structures and Systems
    • /
    • v.29 no.1
    • /
    • pp.181-193
    • /
    • 2022
  • Data-driven structural health monitoring (SHM) of civil infrastructure can be used to continuously assess the state of a structure, allowing preemptive safety measures to be carried out. Long-term monitoring of large-scale civil infrastructure often involves data-collection using a network of numerous sensors of various types. Malfunctioning sensors in the network are common, which can disrupt the condition assessment and even lead to false-negative indications of damage. The overwhelming size of the data collected renders manual approaches to ensure data quality intractable. The task of detecting and classifying an anomaly in the raw data is non-trivial. We propose an approach to automate this task, improving upon the previously developed technique of image-based pre-processing on one-dimensional (1D) data by enriching the features of the neural network input data with multiple channels. In particular, feature engineering is employed to convert the measured time histories into a 3-channel image comprised of (i) the time history, (ii) the spectrogram, and (iii) the probability density function representation of the signal. To demonstrate this approach, a CNN model is designed and trained on a dataset consisting of acceleration records of sensors installed on a long-span bridge, with the goal of fault detection and classification. The effect of imbalance in anomaly patterns observed is studied to better account for unseen test cases. The proposed framework achieves high overall accuracy and recall even when tested on an unseen dataset that is much larger than the samples used for training, offering a viable solution for implementation on full-scale structures where limited labeled-training data is available.
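
The 3-channel feature engineering described above can be sketched as follows. The exact rasterization, window lengths, and resampling the paper uses are not given in the abstract, so those choices here are assumptions:

```python
import numpy as np
from scipy.signal import spectrogram

def to_three_channel_image(x, fs, size=64):
    """Convert a 1D acceleration record into a (size, size, 3) image:
    channel 1 = time history, channel 2 = spectrogram, channel 3 = PDF."""
    # channel 1: rasterize the normalized time history onto a grid
    xn = (x - x.min()) / (x.max() - x.min() + 1e-12)
    cols = np.minimum((np.linspace(0, 1, x.size) * (size - 1)).astype(int),
                      size - 1)
    rows = np.minimum((xn * (size - 1)).astype(int), size - 1)
    ch1 = np.zeros((size, size))
    ch1[rows, cols] = 1.0
    # channel 2: log-scaled spectrogram, nearest-neighbour resampled
    _, _, S = spectrogram(x, fs=fs, nperseg=min(256, x.size))
    S = np.log1p(S)
    ri = np.linspace(0, S.shape[0] - 1, size).astype(int)
    ci = np.linspace(0, S.shape[1] - 1, size).astype(int)
    ch2 = S[np.ix_(ri, ci)]
    ch2 = ch2 / (ch2.max() + 1e-12)
    # channel 3: amplitude probability density, tiled across the time axis
    pdf, _ = np.histogram(x, bins=size, density=True)
    ch3 = np.tile((pdf / (pdf.max() + 1e-12))[:, None], (1, size))
    return np.stack([ch1, ch2, ch3], axis=-1)

rng = np.random.default_rng(1)
t = np.linspace(0, 4, 2048)
sig = np.sin(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal(t.size)
img = to_three_channel_image(sig, fs=512)
```

Stacking the three views lets a standard image CNN see temporal, spectral, and distributional signatures of a fault at once, which is the idea behind enriching the 1D input with multiple channels.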

Prediction of Plant Operator Error Mode (원자력발전소 운전원의 오류모드 예측)

  • Lee, H.C.;E. Hollnagel;M. Kaarstad
    • Proceedings of the ESK Conference
    • /
    • 1997.04a
    • /
    • pp.56-60
    • /
    • 1997
  • The study of human erroneous actions has traditionally proceeded along two different lines of approach. One has been concerned with finding and explaining the causes of erroneous actions, as in studies of the psychology of "error". The other has been concerned with the qualitative and quantitative prediction of possible erroneous actions, exemplified by the field of human reliability analysis (HRA). A further distinction is that the former approach has been dominated by an academic point of view, emphasising theories, models, and experiments, while the latter has been of a more pragmatic nature, placing greater emphasis on data and methods. We have been developing a method to make predictions about error modes. The input to the method is a detailed task description of a set of scenarios for an experiment. This description is analysed to characterise the nature of the individual task steps, as well as the conditions under which they must be carried out. The task steps are expressed in terms of a predefined set of cognitive activity types. Each task step is then examined against a systematic classification of possible error modes, and the likely error modes are identified. This effectively constitutes a qualitative analysis of the possibilities for erroneous action in a given task. To evaluate the accuracy of the predictions, data from a large-scale experiment were analysed. The experiment used the full-scale nuclear power plant simulator in the Halden Man-Machine Systems Laboratory (HAMMLAB) with six crews; the data comprised systematic performance observations by experts using a predefined task description, as well as audio and video recordings. The purpose of the analysis was to determine how well the predictions matched the actually observed performance failures. The results indicated a very acceptable rate of accuracy.
The emphasis in this experiment has been on developing a practical method for qualitative performance prediction, i.e., a method that does not require too many resources or specialised human factors knowledge. If such methods are to become practical tools, it is important that they are valid, reliable, and robust.
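
The prediction step is essentially a table-driven mapping from cognitive activity types to candidate error modes. The abstract does not give the actual taxonomy, so both the activity types and the error modes below are hypothetical placeholders; only the mapping structure is taken from the description:

```python
# Hypothetical mini-taxonomy; the paper's real classification is not
# given in the abstract and would be considerably richer.
ERROR_MODES = {
    "observe":   ["observation missed", "wrong identification"],
    "interpret": ["wrong diagnosis", "delayed interpretation"],
    "plan":      ["wrong plan", "priority error"],
    "execute":   ["wrong action", "action on wrong object", "timing error"],
}

def predict_error_modes(task_steps):
    """Map each (step, cognitive activity type) pair to likely error modes."""
    return {step: ERROR_MODES.get(activity, [])
            for step, activity in task_steps}

# a toy scenario description (hypothetical task steps)
scenario = [
    ("read reactor pressure", "observe"),
    ("diagnose leak location", "interpret"),
    ("close isolation valve", "execute"),
]
predictions = predict_error_modes(scenario)
```

Validation then reduces to checking observed performance failures against each step's predicted list, which is how the HAMMLAB data were scored.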

Spatial Computation on Spark Using GPGPU (GPGPU를 활용한 스파크 기반 공간 연산)

  • Son, Chanseung;Kim, Daehee;Park, Neungsoo
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.5 no.8
    • /
    • pp.181-188
    • /
    • 2016
  • Recently, as the amount of spatial information has increased, interest in spatial information processing has grown. Spatial database systems extended from traditional relational database systems have difficulty handling large data sets because of limited scalability. SpatialHadoop, extended from the Hadoop system, has low performance because its spatial computations require many write operations of intermediate results to disk, degrading performance. In this paper, Spatial Computation Spark (SC-Spark), an in-memory distributed processing framework, is proposed. SC-Spark extends Spark to efficiently perform spatial operations on large-scale data. In addition, a GPGPU-based SC-Spark is developed to further improve performance. SC-Spark exploits Spark's ability to hold intermediate results in memory, and GPGPU-based SC-Spark performs spatial operations in parallel using the many processing elements of a GPU. To verify the proposed work, experiments on a single AMD system were performed using SC-Spark and GPGPU-based SC-Spark for point-in-polygon and spatial-join operations. The experimental results showed that SC-Spark and GPGPU-based SC-Spark were up to 8 times faster than SpatialHadoop.
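
The point-in-polygon test that each GPU processing element would evaluate in parallel is the standard ray-casting algorithm; the abstract does not give the paper's kernel code, so this is a plain-Python sketch of the per-point logic only:

```python
def point_in_polygon(px, py, poly):
    """Ray-casting test: shoot a horizontal ray from (px, py) to the
    right and count how many polygon edges it crosses; an odd count
    means the point is inside."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > py) != (y2 > py):          # edge spans the ray's height
            # x-coordinate where the edge crosses the ray
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if px < x_cross:
                inside = not inside
    return inside

square = [(0.0, 0.0), (4.0, 0.0), (4.0, 4.0), (0.0, 4.0)]
```

On a GPU, one thread runs this loop per point (or per point-edge pair), which is what makes the operation embarrassingly parallel across millions of points.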