• Title/Summary/Keyword: optimization problems


Design and Implementation of A Distributed Information Integration System based on Metadata Registry (메타데이터 레지스트리 기반의 분산 정보 통합 시스템 설계 및 구현)

  • Kim, Jong-Hwan;Park, Hea-Sook;Moon, Chang-Joo;Baik, Doo-Kwon
    • The KIPS Transactions:PartD
    • /
    • v.10D no.2
    • /
    • pp.233-246
    • /
    • 2003
  • The mediator-based approach integrates heterogeneous information systems in a flexible manner, but it gives little attention to query optimization, especially query reuse, and it does not use standardized metadata for schema matching. To address these two issues, we propose a mediator-based Distributed Information Integration System (DIIS) that uses query caching for performance and an ISO/IEC 11179 metadata registry for standardization. The DIIS is designed to provide decision-making support by logically integrating distributed heterogeneous business information systems in a Web environment. We designed the system as a three-layer architecture using the layered pattern to improve reusability and facilitate maintenance. The functionality and flow of the core components of the three-layer architecture are expressed with process line diagrams and assembly line diagrams of the Eriksson-Penker Extension Model (EPEM), an extension of UML. For the implementation, the Supply Chain Management (SCM) domain is used, with a Web-based user interface. The DIIS supports query caching and query reuse through the Query Function Manager (QFM) and Query Function Repository (QFR), enhancing query processing speed and reusability by caching frequently used queries and optimizing query cost. The DIIS resolves diverse heterogeneity problems through mappings between a MetaData Registry (MDR) based on ISO/IEC 11179 and a Schema Repository (SCR).
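The query caching and reuse described for the QFM and QFR can be sketched as a small LRU cache keyed on normalized query text. This is a minimal illustration under assumed semantics, not the paper's implementation; the class and method names are invented:

```python
from collections import OrderedDict

class QueryCache:
    """LRU cache for query results -- a sketch of the kind of query
    caching the QFM/QFR pair is described as providing. All names
    here are illustrative, not taken from the paper."""

    def __init__(self, capacity=128):
        self.capacity = capacity
        self._store = OrderedDict()  # normalized query -> cached result

    @staticmethod
    def _normalize(query):
        # Collapse whitespace and case so equivalent queries share an entry.
        return " ".join(query.lower().split())

    def get(self, query):
        key = self._normalize(query)
        if key in self._store:
            self._store.move_to_end(key)  # mark as most recently used
            return self._store[key]
        return None

    def put(self, query, result):
        key = self._normalize(query)
        self._store[key] = result
        self._store.move_to_end(key)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used

cache = QueryCache(capacity=2)
cache.put("SELECT * FROM orders", [("o1",), ("o2",)])
hit = cache.get("select  *  from ORDERS")  # normalization makes this a hit
```

Normalizing the query text before lookup is what lets syntactically different but equivalent queries reuse one cached result, which is the reuse behavior the abstract attributes to the QFM/QFR.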

Vibration Analysis of Large Structures by the Component-Mode Synthesis (부분구조진동형 합성방법에 의한 대형구조계의 진동해석)

  • B.H. Kim;T.Y. Chung;K.C. Kim
    • Journal of the Society of Naval Architects of Korea
    • /
    • v.30 no.3
    • /
    • pp.116-126
    • /
    • 1993
  • The finite element method (FEM) has been commonly used for structural dynamic analysis. However, direct global application of FEM to large complex structures such as ships and offshore structures requires considerable computational effort, and remarkably more in structural dynamic optimization problems. Adopting the component-mode synthesis method is an efficient means of overcoming this difficulty. Among the three classes of component-mode synthesis methods, the free-interface mode method is recognized as having the advantages of better computational efficiency and easier incorporation of substructures' experimental results, but the disadvantage of lower accuracy in analytical results. In this paper, an advanced method to improve accuracy in applying the free-interface mode method to the vibration analysis of large complex structures is presented. To compensate for the truncation effect of the higher modes of substructures in the synthesis process, both residual inertia and stiffness effects are taken into account, and a frequency-shifting technique is introduced in the formulation of the residual compliance of substructures. The introduction of the frequency shift not only excludes cumbersome manipulation of singular matrices for semi-definite substructural systems but also gives more accurate results around the specified shifting frequency. Numerical examples of typical structural models, including a ship-like two-dimensional finite element model, show that the results of the presented method are well competitive in accuracy with those obtained by direct global FEM analysis for frequencies lower than the highest one employed in the synthesis, with remarkably higher computational efficiency, and that the presented method is more efficient and accurate than the fixed-interface mode method.
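The frequency-shifted residual compliance the abstract describes can be written, in standard free-interface component-mode synthesis notation (the paper's own symbols may differ), as:

```latex
% Residual compliance of a substructure with kept, mass-normalized
% free-interface modes \phi_i (i = 1..m) and shifting frequency \omega_s.
\[
  G_r(\omega_s) \;=\; \left(K - \omega_s^2 M\right)^{-1}
  \;-\; \sum_{i=1}^{m} \frac{\phi_i \phi_i^{\mathsf T}}{\omega_i^2 - \omega_s^2}
\]
% For a free-free (semi-definite) substructure, K itself is singular, but
% (K - \omega_s^2 M) is invertible whenever \omega_s avoids the natural
% frequencies -- which is why the shift removes the singular-matrix issue
% and concentrates accuracy around the chosen shifting frequency.
```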


Optimal Management of Mackerel in Korea: A Maximum Entropy Approach (최대 엔트로피 기법을 이용한 한국 연근해 고등어 최적 관리에 관한 연구)

  • Park, Yunsun;Kwon, Oh-Sang
    • Environmental and Resource Economics Review
    • /
    • v.28 no.2
    • /
    • pp.277-306
    • /
    • 2019
  • Mackerel is one of the most widely consumed aquatic products in Korea. Concerns about the depletion of stocks have also arisen as the catch has decreased. The primary purpose of this study is to estimate the mackerel stock and derive the optimal level of catch in Korea. We apply a generalized maximum entropy econometric method to estimate the mackerel growth function, which does not require the steady state assumption. We incorporate a bootstrapping approach to derive the significance levels of parameter estimates. We found that the average ratio of catch to the estimated total stock was less than 30% before the 1990s but exceeded 40% in the 1990s. After 2000, it dropped back to about 36%. This finding indicates that mackerel may have been over-fished in the 1990s, but the government regulations introduced in the 2000s alleviated over-fishing problems. Nevertheless, our dynamic optimization analysis suggests that the total allowable catch may need to be carefully controlled to achieve socially optimal management of resources.
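The paper's generalized maximum entropy estimation is beyond a short sketch, but the surplus-production dynamics underlying such stock assessments can be illustrated with a simple logistic (Schaefer) model. The growth rate, carrying capacity, and starting stock below are invented round numbers, not the paper's estimates; only the ~36% harvest ratio echoes a figure from the abstract:

```python
def schaefer_step(stock, catch, r=0.4, carrying_capacity=1_000_000.0):
    """One-year logistic (Schaefer) surplus-production update:
    next = stock + r * stock * (1 - stock / K) - catch.
    r and K are illustrative placeholders, not GME estimates."""
    grown = stock + r * stock * (1.0 - stock / carrying_capacity) - catch
    return max(grown, 0.0)

# Hold the harvest ratio at ~36% of stock (the post-2000 ratio reported)
# and watch the stock trajectory under these assumed parameters.
stock = 500_000.0
ratios = []
for _ in range(10):
    catch = 0.36 * stock
    ratios.append(catch / stock)
    stock = schaefer_step(stock, catch)
```

With these parameters a constant 36% harvest ratio exceeds the surplus production at the starting stock, so the simulated stock declines toward a lower equilibrium, illustrating why the abstract warns that total allowable catch must be controlled carefully.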

Optimization of Electrolytic Oxidant OCl- Production for Malodorous VOCs Removal (악취성 VOCs 제거를 위한 전해 산화제 OCl-의 생산 최적화)

  • Yang, Woo Young;Lee, Tae Ho;Ryu, Hee Wook
    • Clean Technology
    • /
    • v.27 no.2
    • /
    • pp.152-159
    • /
    • 2021
  • Volatile organic compounds (VOCs) occur in indoor and outdoor industrial and urban areas and cause environmental problems. Malodorous VOCs, along with aesthetic discomfort, can have serious effects on the human body. Compared with existing methods of reducing malodorous VOCs, a wet scrubbing method using an electrolytic oxidant has the advantages of reducing pollutants and regenerating the oxidant. This study investigated the optimal conditions for producing OCl-, a chlorine-based oxidant. Experiments were conducted by changing the anode and cathode electrode materials, the type and concentration of the electrolyte, and the current density. With Ti/IrO2 as the anode and Ti as the cathode, OCl- production was highest and most stable. Although OCl- production was similar with KCl and NaCl, NaCl is preferable because it is cheap and easy to obtain. The effects of NaCl concentration and current density were examined; the OCl- production rate and concentration were highest at 0.75 M NaCl and 0.03 A cm-2. However, considering the cost of electric power, OCl- production under the conditions of 1.00 M NaCl and 0.01 A cm-2 was the most effective among the conditions examined. It is desirable to produce OCl- by adjusting the current density according to the concentration and characteristics of the pollutants.
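The dependence of oxidant output on current density follows from Faraday's law of electrolysis. The sketch below computes a theoretical OCl- production rate assuming two electrons per OCl- ion (via 2Cl- → Cl2 → OCl-); the electrode area is an invented placeholder, and the current efficiency is set to 1 for illustration, whereas a real cell like the one studied runs below that:

```python
FARADAY = 96485.0  # C per mole of electrons

def ocl_rate_mol_per_s(current_density_a_cm2, area_cm2, efficiency=1.0):
    """Theoretical OCl- production rate (mol/s) from Faraday's law,
    assuming 2 electrons consumed per OCl- ion produced.
    `efficiency` is the current efficiency (1.0 = ideal)."""
    current_a = current_density_a_cm2 * area_cm2
    return efficiency * current_a / (2.0 * FARADAY)

area = 10.0  # cm^2 -- an assumed electrode area, not from the paper
high = ocl_rate_mol_per_s(0.03, area)  # rate at 0.03 A cm-2
low = ocl_rate_mol_per_s(0.01, area)   # rate at 0.01 A cm-2
```

At ideal efficiency the rate scales linearly with current density, which is consistent with the abstract's trade-off: tripling the current density triples the theoretical output but raises power cost more than proportionally once cell voltage rises.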

Verification of Weight Effect Using Actual Flight Data of A350 Model (A350 모델의 비행실적을 이용한 중량 효과 검증)

  • Jang, Sungwoo;Yoo, Jae Leame;Yo, Kwang Eui
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.50 no.1
    • /
    • pp.13-20
    • /
    • 2022
  • Aircraft weight is an important factor affecting performance and fuel efficiency. In the conceptual design stage of an aircraft, cost and weight are balanced using empirical formulas, such as fuel consumption cost per unit weight, when estimating element weights. In addition, when an airline operates an aircraft, it pursues fuel efficiency improvement, fuel saving, and carbon reduction through weight management activities. The relationship between a change in aircraft weight and the resulting change in fuel consumption is called the cost of weight, and it is used to evaluate the effect of adding or removing weight on fuel consumption. In this study, the problems of the existing cost-of-weight calculation method are identified, and a new calculation method is introduced to solve them. Using Breguet's range equation and actual flight data of the A350-900 aircraft, two costs of weight are calculated, one based on take-off weight and one based on landing weight. In conclusion, it was suggested that it is reasonable to use the cost of weight based on either the take-off weight or the landing weight depending on the purpose. In particular, the cost of weight based on the landing weight can be used as an empirical formula for estimating element weights and optimizing cost and weight in the conceptual design stage of similar aircraft.
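The two costs of weight can be sketched directly from the jet form of Breguet's range equation, R = (V/c)(L/D) ln(W_to/W_ldg). The cruise speed, thrust-specific fuel consumption, and L/D values below are illustrative placeholders, not A350-900 data, and only the functional form is taken as given:

```python
import math

def breguet_factor(range_km, v_kmh=900.0, tsfc_per_h=0.55, l_over_d=19.0):
    """Exponent x = ln(W_to / W_ldg) from the jet Breguet range equation
    R = (V / c) * (L / D) * ln(W_to / W_ldg). Parameter values are
    illustrative assumptions, not A350-900 figures."""
    return range_km * tsfc_per_h / (v_kmh * l_over_d)

def cost_of_weight_landing(range_km, **kw):
    """Extra trip fuel per extra kilogram carried to landing: exp(x) - 1,
    since trip fuel = W_ldg * (exp(x) - 1)."""
    return math.exp(breguet_factor(range_km, **kw)) - 1.0

def cost_of_weight_takeoff(range_km, **kw):
    """Extra trip fuel per extra kilogram at take-off: 1 - exp(-x),
    since trip fuel = W_to * (1 - exp(-x))."""
    return 1.0 - math.exp(-breguet_factor(range_km, **kw))

cw_ldg = cost_of_weight_landing(8000.0)  # kg fuel per kg of landing weight
cw_to = cost_of_weight_takeoff(8000.0)   # kg fuel per kg of take-off weight
```

The two derivatives differ because a kilogram added at take-off is partly burned off en route, while a kilogram that must still be aboard at landing has to be carried (and paid for in fuel) over the whole flight, which is why the landing-weight-based cost is always the larger of the two.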

Utilizing cell-free DNA to validate targeted disruption of MYO7A in rhesus macaque pre-implantation embryos

  • Junghyun Ryu;Fernanda C. Burch;Emily Mishler;Martha Neuringer;Jon D. Hennebold;Carol Hanna
    • Journal of Animal Reproduction and Biotechnology
    • /
    • v.37 no.4
    • /
    • pp.292-297
    • /
    • 2022
  • Direct injection of CRISPR/Cas9 into zygotes enables the production of genetically modified nonhuman primates (NHPs) essential for modeling specific human diseases, such as Usher syndrome, and for developing novel therapeutic strategies. Usher syndrome is a rare genetic disease that causes loss of hearing, retinal degeneration, and problems with balance, and is attributed to a mutation in MYO7A, a gene that encodes an uncommon myosin motor protein expressed in the inner ear and retinal photoreceptors. To produce an Usher syndrome type 1B (USH1B) rhesus macaque model, we disrupted the MYO7A gene in developing zygotes. Identification of appropriately edited MYO7A embryos for knockout embryo transfer requires sequence analysis of material recovered from a trophectoderm (TE) cell biopsy. However, the TE biopsy procedure is labor intensive and could adversely impact embryo development. Recent studies have reported using cell-free DNA (cfDNA) from embryo culture media to detect aneuploid embryos in human in vitro fertilization (IVF) clinics. The cfDNA is released from the embryo during cell division or cell death, suggesting that cfDNA may be a viable resource for sequence analysis. Moreover, cfDNA collection is not invasive to the embryo and does not require special tools or expertise. We hypothesized that selection of appropriate edited embryos could be performed by analyzing cfDNA for MYO7A editing in embryo culture medium, and that this method would be advantageous for the subsequent generation of genetically modified NHPs. The purpose of this experiment is to determine whether cfDNA can be used to identify the target gene mutation of CRISPR/Cas9 injected embryos. In this study, we were able to obtain and utilize cfDNA to confirm the mutagenesis of MYO7A, but the method will require further optimization to obtain better accuracy before it can replace the TE biopsy approach.

Computer Vision-based Continuous Large-scale Site Monitoring System through Edge Computing and Small-Object Detection

  • Kim, Yeonjoo;Kim, Siyeon;Hwang, Sungjoo;Hong, Seok Hwan
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.1243-1244
    • /
    • 2022
  • In recent years, the growing interest in off-site construction has led to factories scaling up their manufacturing and production processes in the construction sector. Consequently, continuous large-scale site monitoring in low-variability environments, such as prefabricated components production plants (precast concrete production), has gained increasing importance. Although many studies on computer vision-based site monitoring have been conducted, challenges for deploying this technology for large-scale field applications still remain. One of the issues is collecting and transmitting vast amounts of video data. Continuous site monitoring systems are based on real-time video data collection and analysis, which requires excessive computational resources and network traffic. In addition, it is difficult to integrate various object information with different sizes and scales into a single scene. Various sizes and types of objects (e.g., workers, heavy equipment, and materials) exist in a plant production environment, and these objects should be detected simultaneously for effective site monitoring. However, with the existing object detection algorithms, it is difficult to simultaneously detect objects with significant differences in size because collecting and training massive amounts of object image data with various scales is necessary. This study thus developed a large-scale site monitoring system using edge computing and a small-object detection system to solve these problems. Edge computing is a distributed information technology architecture wherein the image or video data is processed near the originating source, not on a centralized server or cloud. By inferring information from the AI computing module equipped with CCTVs and communicating only the processed information with the server, it is possible to reduce excessive network traffic. 
Small-object detection is an innovative method for detecting objects of different sizes by cropping the raw image, with the number of rows and columns of the split set according to the target object size. This enables small objects to be detected in the cropped and magnified images, and the detected objects can then be expressed in the coordinates of the original image. For inference, this study used the YOLO-v5 algorithm, known for its fast processing speed and widely used for real-time object detection. This method effectively detected large and even small objects that were difficult to detect with existing object detection algorithms. When the large-scale site monitoring system was tested, it performed well in detecting small objects, such as workers in a large-scale view of a construction site, which existing algorithms detected inaccurately. Our next goal is to incorporate various safety monitoring and risk analysis algorithms into this system, such as collision risk estimation based on the time-to-collision concept, enabling the optimization of safety routes by accumulating workers' paths and inferring risky areas from workers' trajectory patterns. Through such developments, this continuous large-scale site monitoring system can guide a construction plant's safety management system more effectively.
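The row/column image-splitting step can be sketched in a few lines. The detector itself (YOLO-v5 in the study) is stubbed out here, and the image resolution, tile counts, and example box are all illustrative:

```python
def split_into_tiles(width, height, rows, cols):
    """Pixel rectangles (x0, y0, x1, y1) for cropping the raw image into
    the described grid of rows x cols tiles (edges truncated by //)."""
    tiles = []
    tile_w, tile_h = width // cols, height // rows
    for r in range(rows):
        for c in range(cols):
            x0, y0 = c * tile_w, r * tile_h
            tiles.append((x0, y0, x0 + tile_w, y0 + tile_h))
    return tiles

def map_box_to_original(box, tile_origin):
    """Shift a detection box from tile-local coordinates back into the
    full-image frame, so tiled detections can be drawn on one scene."""
    x0, y0 = tile_origin
    bx0, by0, bx1, by1 = box
    return (bx0 + x0, by0 + y0, bx1 + x0, by1 + y0)

tiles = split_into_tiles(1920, 1080, rows=3, cols=4)  # 12 crops
# Suppose a detector (YOLO-v5 in the study, stubbed out here) finds a
# worker in tile 5; map its box back to full-image coordinates.
det_in_tile = (10, 20, 60, 120)
global_box = map_box_to_original(det_in_tile, tiles[5][:2])
```

Each crop is effectively a magnified view, so a worker occupying few pixels in the full frame occupies many more in its tile, which is what lets a standard detector find objects that are too small in the original resolution.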


Performance Evaluation of Loss Functions and Composition Methods of Log-scale Train Data for Supervised Learning of Neural Network (신경 망의 지도 학습을 위한 로그 간격의 학습 자료 구성 방식과 손실 함수의 성능 평가)

  • Donggyu Song;Seheon Ko;Hyomin Lee
    • Korean Chemical Engineering Research
    • /
    • v.61 no.3
    • /
    • pp.388-393
    • /
    • 2023
  • The analysis of engineering data using neural networks based on supervised learning has been utilized in various engineering fields, such as optimization of chemical engineering processes, prediction of particulate matter concentrations, prediction of thermodynamic phase equilibria, and prediction of physical properties in transport phenomena systems. Supervised learning requires training data, and its performance is affected by the composition and configuration of the given training data. Among frequently observed engineering data, many quantities, such as DNA lengths and analyte concentrations, are given on a log scale. In this study, for widely distributed log-scaled training data of virtual 100×100 images, the available loss functions were quantitatively evaluated in terms of (i) the confusion matrix, (ii) the maximum relative error, and (iii) the mean relative error. As a result, the mean-absolute-percentage-error and mean-squared-logarithmic-error loss functions were optimal for the log-scaled training data. Furthermore, we found that uniformly selected training data led to the best prediction performance. The optimal loss functions and the training data composition method studied in this work could be applied to engineering problems such as evaluating DNA length, analyzing biomolecules, and predicting the concentration of colloidal suspensions.
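The two loss functions the study found optimal can be written down directly. The toy data below is invented for illustration; it shows why relative-error losses suit log-scaled targets, while plain MSE is dominated by the largest values:

```python
import math

def mape(y_true, y_pred):
    """Mean absolute percentage error: mean(|y - yhat| / |y|)."""
    return sum(abs(t - p) / abs(t) for t, p in zip(y_true, y_pred)) / len(y_true)

def msle(y_true, y_pred):
    """Mean squared logarithmic error: mean((log1p(y) - log1p(yhat))^2)."""
    return sum((math.log1p(t) - math.log1p(p)) ** 2
               for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    """Plain mean squared error, for contrast."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Invented targets spanning four decades; every prediction is off by the
# same factor of two, i.e. all samples are equally wrong in relative terms.
y = [1.0, 10.0, 100.0, 1000.0]
yhat = [2.0, 20.0, 200.0, 2000.0]

rel_err = mape(y, yhat)   # identical contribution from every sample
log_err = msle(y, yhat)   # per-sample contribution stays bounded
# Under plain MSE the largest target dominates the total loss.
big_share = (1000.0 - 2000.0) ** 2 / (mse(y, yhat) * len(y))
```

Here the largest target accounts for about 99% of the MSE while contributing exactly one quarter of the MAPE, which mirrors the study's conclusion that MAPE- and MSLE-type losses weight widely distributed log-scaled samples more evenly.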

The Effect of Engineering Design Based Ocean Clean Up Lesson on STEAM Attitude and Creative Engineering Problem Solving Propensity (공학설계기반 오션클린업(Ocean Clean-up) 수업이 STEAM태도와 창의공학적 문제해결성향에 미치는 효과)

  • DongYoung Lee;Hyojin Yi;Younkyeong Nam
    • Journal of the Korean earth science society
    • /
    • v.44 no.1
    • /
    • pp.79-89
    • /
    • 2023
  • The purpose of this study was to investigate the effects of engineering design-based ocean cleanup classes on STEAM attitudes and creative engineering problem-solving dispositions. Furthermore, during this process, we tried to identify the points of interest that students encountered in engineering design-based classes. For this study, a science class with six lessons based on engineering design was developed and reviewed by a professor who majored in engineering design, along with five engineering design experts with a master's degree or higher. The subject of the class was the design and implementation of scientific and engineering measures to reduce marine pollution, based on the method used in the actual Ocean Clean-up Project. The engineering design process utilized the engineering design model presented by the NGSS (2013) and was configured so that students experience redesign through an optimization process. To verify effectiveness, the STEAM attitude questionnaire developed by Park et al. (2019) and the creative engineering problem-solving propensity test tool developed by Kang and Nam (2016) were used, with pre- and post-test t-tests for statistical analysis. In addition, learners' descriptive responses about the points they found interesting were transcribed, analyzed, and visualized through degree centrality analysis. Results confirmed that engineering design in science classes had a positive effect on both STEAM attitude and creative engineering problem-solving disposition (p < .05). In addition, unstructured data analysis showed that science and engineering knowledge, engineering experience, and cooperation and collaboration were the factors learners found interesting, confirming that engineering experience was the main factor.

Optimal Sensor Placement for Improved Prediction Accuracy of Structural Responses in Model Test of Multi-Linked Floating Offshore Systems Using Genetic Algorithms (다중연결 해양부유체의 모형시험 구조응답 예측정확도 향상을 위한 유전알고리즘을 이용한 센서배치 최적화)

  • Kichan Sim;Kangsu Lee
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.37 no.3
    • /
    • pp.163-171
    • /
    • 2024
  • Structural health monitoring for ships and offshore structures is important in various respects. Ships and offshore structures are continuously exposed to various environmental conditions, such as waves, wind, and currents. In the event of an accident, immense economic losses, environmental pollution, and safety problems can occur, so structural damage or defects must be detected early. In this study, structural response data of multi-linked floating offshore structures under various wave load conditions were calculated by performing fluid-structure coupled analysis. Furthermore, a model-order reduction method based on distortion base modes was applied to the structures to predict the structural response from the numerical analysis results. The distortion-base-mode reduction method can predict the structural response of a desired area with high accuracy, but its prediction performance is affected by the sensor arrangement. Optimization based on a genetic algorithm was performed to search for the optimal sensor arrangement and improve the prediction performance of the distortion-base-mode reduced-order model. Consequently, based on the root mean squared error, the prediction performance evaluation index, a sensor arrangement was derived that predicts the structural response with about 84.0% less error than the initial sensor arrangement, while the computational cost was reduced by about a factor of 8 compared to exhaustively evaluating the prediction performance of reduced-order models for all 43,758 sensor arrangement combinations.
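A genetic algorithm over k-of-n sensor subsets can be sketched as follows. The operators (truncation selection with elitism, union crossover, single-swap mutation) and the toy fitness function are illustrative assumptions, not the paper's configuration, where fitness would be the RMSE of the reduced-order model's prediction:

```python
import random

def ga_sensor_placement(n_sensors, k, fitness, pop_size=30, gens=40, seed=0):
    """Minimal genetic algorithm over k-of-n sensor subsets.
    `fitness` maps a sensor index tuple to an error (lower is better)."""
    rng = random.Random(seed)

    def random_ind():
        return tuple(sorted(rng.sample(range(n_sensors), k)))

    def crossover(a, b):
        # Child draws its k sensors from the union of both parents.
        pool = list(set(a) | set(b))
        return tuple(sorted(rng.sample(pool, k)))

    def mutate(ind):
        # Swap one chosen sensor for one currently unused sensor.
        ind = list(ind)
        ind[rng.randrange(k)] = rng.choice(
            [s for s in range(n_sensors) if s not in ind])
        return tuple(sorted(ind))

    pop = [random_ind() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]  # elitism: keep the better half
        children = [mutate(crossover(*rng.sample(survivors, 2)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return min(pop, key=fitness)

# Toy surrogate for the RMSE evaluation: pretend sensors 0, 3, and 7 are
# the informative locations; error counts how many are missing.
target = {0, 3, 7}
err = lambda ind: len(target - set(ind))
best = ga_sensor_placement(n_sensors=10, k=3, fitness=err)
```

The efficiency argument in the abstract comes from exactly this structure: the GA evaluates only pop_size × gens candidate arrangements instead of all 43,758 combinations, trading exhaustive certainty for a large reduction in fitness evaluations.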