• Title/Summary/Keyword: Real-time inspection system

Search Result 354

Quantitative Deterioration and Maintenance Profiles of Typical Steel Bridges based on Response Surface Method (응답면 기법을 이용한 강교의 열화 및 보수보강 정량화 이력 모델)

  • Park, Seung-Hyun;Park, Kyung Hoon;Kim, Hee Joong;Kong, Jung-Sik
    • KSCE Journal of Civil and Environmental Engineering Research / v.28 no.6A / pp.765-778 / 2008
  • Performance profiles are essential for predicting the performance variation of bridges over time in a bridge management system (BMS) based on risk management. In general, condition profiles based on expert opinion and/or visual inspection records have been used widely because obtaining profiles based on real performance is not easy. However, such condition profiles are often inconsistent with the actual safety of bridges, causing practical problems for effective bridge management. The accuracy of performance evaluation is directly related to the accuracy of the BMS, and the reliability of the evaluation is important for producing an optimal solution that distributes the maintenance budget reasonably. However, conventional methods of bridge assessment are not suitable for a more sophisticated decision-making procedure. In this study, a method to compute quantitative performance profiles is proposed to overcome the limitations of these conventional models. In bridge management systems, the main role of performance profiles is to compute and predict the performance of bridges subject to lifetime activities under uncertainty. Therefore, the computation time for obtaining an optimal maintenance scenario is closely related to the efficiency of the performance profile. In this study, a Response Surface Method (RSM) based on independent and important design variables is developed for rapid computation. Steel box bridges are investigated because the number of independent design variables can be reduced significantly owing to the high dependency between design variables.
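The core idea of a response surface surrogate can be sketched in a few lines: fit a cheap polynomial through a handful of sampled analysis results, then evaluate the polynomial instead of the expensive structural model. The sketch below is illustrative only; the variable names and sample values are hypothetical, not the paper's actual design variables.

```python
# Illustrative sketch of a one-variable quadratic response surface:
# fit y = a + b*x + c*x^2 through three sampled design points, then use
# the polynomial as a cheap surrogate for the expensive analysis.

def fit_quadratic(points):
    """Quadratic through three points via divided differences."""
    (x1, y1), (x2, y2), (x3, y3) = points
    d1 = (y2 - y1) / (x2 - x1)
    d2 = (y3 - y2) / (x3 - x2)
    c = (d2 - d1) / (x3 - x1)
    b = d1 - c * (x1 + x2)
    a = y1 - b * x1 - c * x1 * x1
    return a, b, c

# Hypothetical samples: (deterioration ratio, performance index)
# obtained from three runs of the full structural analysis.
samples = [(0.0, 1.00), (0.5, 0.85), (1.0, 0.40)]
a, b, c = fit_quadratic(samples)

def surrogate(x):
    return a + b * x + c * x * x

# The surrogate reproduces the sampled responses exactly.
print(round(surrogate(0.5), 2))  # 0.85
```

Because the surrogate is a closed-form polynomial, it can be evaluated millions of times inside a maintenance-scenario optimization at negligible cost.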

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data generated by banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult to realize flexible storage expansion for a massive amount of unstructured log data and to execute the considerable number of functions needed to categorize and analyze the stored data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process with the existing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, including storage space and memory, under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data.
Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that allow it to continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have strict schemas that are inappropriate for processing unstructured log data; moreover, such strict schemas prevent node expansion when the stored data must be distributed across various nodes as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the representative document-oriented database, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates node expansion when the amount of data is growing rapidly, and it provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module.
When the log data generated over each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis from the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation of log data insertion and query performance against a system that uses only MySQL demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a MongoDB insert performance evaluation over various chunk sizes.
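The collector module's routing step, classifying each incoming record by type and dispatching it to the store that handles that type, can be sketched as below. The log type names are hypothetical, and the MongoDB and MySQL back ends are stood in for by in-memory lists so the sketch is self-contained.

```python
# Minimal sketch of the log collector's routing logic: records whose
# type needs real-time graphs go to the MySQL path; everything else is
# aggregated in the MongoDB path for batch (Hadoop) analysis.
# Type names are illustrative placeholders, not the paper's actual types.

REALTIME_TYPES = {"transaction", "login"}   # served in real time -> MySQL
mongo_store, mysql_store = [], []           # stand-ins for the real stores

def collect(record):
    """Route one log record according to its 'type' field."""
    if record.get("type") in REALTIME_TYPES:
        mysql_store.append(record)
    else:
        mongo_store.append(record)

for rec in [{"type": "transaction", "msg": "transfer ok"},
            {"type": "batch", "msg": "nightly settlement"},
            {"type": "login", "msg": "user login"}]:
    collect(rec)

print(len(mysql_store), len(mongo_store))  # 2 1
```

In the real system the two append calls would be inserts into the MySQL module and the MongoDB module, respectively.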

Timely Sensor Fault Detection Scheme based on Deep Learning (딥 러닝 기반 실시간 센서 고장 검출 기법)

  • Yang, Jae-Wan;Lee, Young-Doo;Koo, In-Soo
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.20 no.1 / pp.163-169 / 2020
  • Recently, research on automation and unmanned operation of machines in the industrial field has been conducted with the advent of AI, big data, and the IoT, the core technologies of the Fourth Industrial Revolution. The machines in these automation processes are controlled based on data collected from the sensors attached to them, and the processes are managed accordingly. Conventionally, sensor abnormalities are checked and managed periodically. However, due to various environmental factors and situations in the industrial field, periodic inspections can be missed or failures can go undetected, so damage due to sensor failure is not prevented. In addition, even when a failure occurs, it is not detected immediately, which increases process losses. Therefore, in order to prevent damage caused by a sudden sensor failure, it is necessary to identify sensor failure in an embedded system in real time, diagnose the failure, and determine its type for a quick response. In this paper, a deep neural network-based fault diagnosis system is designed and implemented using a Raspberry Pi to classify typical sensor fault types: erratic, hard-over, spike, and stuck faults. To diagnose sensor failure, the network is constructed using the inverted residual block structure proposed in Google's MobileNetV2. The proposed scheme reduces memory usage and improves on the performance of conventional CNN techniques in classifying sensor faults.
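The four fault types named above have simple signal signatures, which is what makes labelled training data easy to synthesize. The sketch below injects each fault into a clean reading stream; the fault magnitudes and rail value are hypothetical, not taken from the paper.

```python
import random

# Illustrative sketch of injecting the four sensor fault types
# (erratic, hard-over, spike, stuck) into a clean signal, e.g. to
# generate labelled training data. Magnitudes are hypothetical.

def inject_fault(signal, kind, at, rng):
    out = list(signal)
    if kind == "erratic":            # noise variance jumps sharply
        for i in range(at, len(out)):
            out[i] += rng.gauss(0.0, 5.0)
    elif kind == "hard-over":        # output saturates at a rail value
        for i in range(at, len(out)):
            out[i] = 100.0
    elif kind == "spike":            # isolated large excursion
        out[at] += 50.0
    elif kind == "stuck":            # value freezes at the last reading
        for i in range(at, len(out)):
            out[i] = out[at]
    return out

rng = random.Random(0)
clean = [1.0] * 10
stuck = inject_fault(clean, "stuck", 4, rng)
spike = inject_fault(clean, "spike", 4, rng)
print(stuck[9], spike[4])  # 1.0 51.0
```

Windows of such signals, labelled with the injected fault type, are the kind of input a lightweight classifier (such as one built from inverted residual blocks) would be trained on.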

Impact Monitoring of Composite Structures using Fiber Bragg Grating Sensors (광섬유 브래그 격자 센서를 이용한 복합재 구조물의 충격 모니터링 기법 연구)

  • Jang, Byeong-Wook;Park, Sang-Oh;Lee, Yeon-Gwan;Kim, Chun-Gon;Park, Chan-Yik;Lee, Bong-Wan
    • Composites Research / v.24 no.1 / pp.24-30 / 2011
  • Low-velocity impact can cause various types of damage that are mostly hidden inside the laminate or occur on the far side, so they cannot be easily detected by visual inspection or conventional NDT systems. If such damage occurs between scheduled NDT inspections, the possibility of extensive damage or structural failure increases. For these reasons, built-in NDT systems such as real-time impact monitoring will be required in the near future. In this paper, we studied an impact monitoring system consisting of impact-location detection and damage-assessment techniques for composite flat and stiffened panels. To acquire the impact-induced acoustic signals, four multiplexed FBG sensors and a high-speed FBG interrogator were used. Neural networks and wavelet transforms were adopted to detect impacts and damage occurrence. Finally, these algorithms were implemented in MATLAB and LabVIEW with a user-friendly interface.
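To illustrate the impact-location problem, the sketch below locates an impact from arrival-time differences at four sensors by a coarse grid search. This is not the paper's neural-network method, and the sensor layout, wave speed, and grid spacing are all assumed values chosen for the example.

```python
import math

# Illustrative sketch: locate an impact on a 1 m x 1 m plate from
# arrival-time differences at four corner sensors, minimizing the
# mismatch over a coarse grid. All parameters are hypothetical.

SENSORS = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]  # metres
SPEED = 1500.0  # assumed group velocity of the acoustic wave, m/s

def arrival_deltas(pt):
    """Arrival times relative to the first-hit sensor."""
    t = [math.dist(pt, s) / SPEED for s in SENSORS]
    t0 = min(t)
    return [ti - t0 for ti in t]

def locate(measured, step=0.05):
    """Grid search for the point whose deltas best match the data."""
    best, best_err = None, float("inf")
    n = int(round(1.0 / step))
    for i in range(n + 1):
        for j in range(n + 1):
            p = (i * step, j * step)
            err = sum((a - b) ** 2
                      for a, b in zip(arrival_deltas(p), measured))
            if err < best_err:
                best, best_err = p, err
    return best

true_pt = (0.30, 0.70)
est = locate(arrival_deltas(true_pt))
print(round(est[0], 2), round(est[1], 2))  # 0.3 0.7
```

A trained neural network plays the same role as the grid search here, mapping measured signal features to an impact location, but without requiring an explicit wave-speed model.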

The Development of XML Message for Status Tracking the Importing Agrifoods During Transport by UBL (UBL 기반 수입농수산물 운송 중 상태 모니터링을 위한 XML 메시지 개발)

  • Ahn, Kyeong Rim;Ryu, Heeyoung;Lee, Hochoon;Park, Chankwon
    • The Journal of Society for e-Business Studies / v.23 no.3 / pp.159-171 / 2018
  • Imported foods sold domestically are increasing every year, and the scale is expected to grow further, including the processing of imported raw materials. Although the origin of raw materials is indicated when declaring cargo for finished agricultural products, standardization of the inspection-information management system for raw materials is insufficient. In addition, there is growing concern about residual pesticides or radioactivity in raw materials and products, and customers want to know production history information when purchasing agrifoods. Hazard analysis of imported agricultural products is performed, but most concerns are global issues such as microorganisms, residual pesticides, food additives, and allergenic components. Therefore, the related data need to be shared among the logistics entities across the entire transportation process, which requires designing an architecture and standardizing the business model. This paper defines the architecture and the workflow among the business processes for collecting and processing status-tracking information for imported agricultural products at each step, and develops XML messages based on UBL and the extracted conceptual information model. The defined standard model will make it easy to exchange and share information among logistics entities and will enable visibility, reliability, safety, and freshness management for the transportation of agricultural products that require real-time management.
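A status-tracking message of the kind described above might look like the sketch below. The element and attribute names are simplified placeholders in the spirit of UBL status documents, not the actual UBL 2.x schema or the paper's message definitions.

```python
import xml.etree.ElementTree as ET

# Illustrative sketch of a transport-status XML message for imported
# agrifoods. Element names are hypothetical simplifications, not the
# normative UBL schema.

def build_status_message(consignment_id, temp_c, humidity_pct, location):
    root = ET.Element("TransportationStatus")
    ET.SubElement(root, "ConsignmentID").text = consignment_id
    cond = ET.SubElement(root, "Condition")
    ET.SubElement(cond, "Temperature", unitCode="CEL").text = str(temp_c)
    ET.SubElement(cond, "Humidity", unitCode="P1").text = str(humidity_pct)
    ET.SubElement(root, "StatusLocation").text = location
    return ET.tostring(root, encoding="unicode")

msg = build_status_message("CON-2018-001", 4.5, 85, "Busan Port")
print(msg)
```

Each logistics entity along the route would emit such a message, so that freshness-critical conditions like temperature can be tracked end to end.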

A Study on the Current State of the Library's AI Service and the Service Provision Plan (도서관의 인공지능(AI) 서비스 현황 및 서비스 제공 방안에 관한 연구)

  • Kwak, Woojung;Noh, Younghee
    • Journal of Korean Library and Information Science Society / v.52 no.1 / pp.155-178 / 2021
  • In the era of the Fourth Industrial Revolution, public libraries need a strategy for promoting intelligent library services in order to respond actively to changes in the external environment such as artificial intelligence. Therefore, in this study, based on the concept of artificial intelligence and an analysis of domestic and foreign AI-related trends, policies, and cases, we proposed future directions for introducing and developing artificial intelligence services in the library. Currently, libraries operate reference information services that automatically provide answers through artificial intelligence technologies such as deep learning and natural language processing, and have developed big data-based AI book recommendation and automatic book inspection systems to increase business utilization and provide customized services for users. In industry, both domestic and overseas, companies are developing and offering services such as AI-based autonomous driving and personalization, which produce optimal results by self-learning from information using deep learning. Accordingly, in the future, libraries should utilize artificial intelligence to recommend personalized books based on users' usage records, recommend reading and culture programs, and, for book delivery services, introduce real-time delivery through transport methods such as autonomous drones and cars; development of such services should be promoted.

Establishment of a deep learning-based defect classification system for optimizing textile manufacturing equipment

  • YuLim Kim;Jaeil Kim
    • Journal of the Korea Society of Computer and Information / v.28 no.10 / pp.27-35 / 2023
  • In this paper, we propose a process for increasing productivity by applying a deep learning-based defect detection and classification system to the prepreg fiber manufacturing process, which is in high demand in the production of composite materials. To apply it to tow prepreg manufacturing equipment, which requires a solution because a large number of defects occur under various conditions, the optimal environment was first established by selecting the cameras and lights necessary for building the defect detection and classification model. In addition, the data necessary for producing the multi-class classification model were collected and labeled as normal or defective. The classification model is based on a CNN and applies pre-trained models such as VGGNet, MobileNet, and ResNet to compare performance and identify improvement directions from accuracy and loss graphs. Data augmentation and dropout were applied to address overfitting, identified as the main problem. To evaluate the model, a performance evaluation was conducted using the confusion matrix as the performance indicator, and accuracy of more than 99% was confirmed. In addition, the model was applied to the actual process to check whether classification results for images acquired in real time are derived accurately.
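The confusion-matrix evaluation step can be sketched as below: tally predictions per class, then derive accuracy and per-class recall from the matrix. The class names and counts are hypothetical examples, not the paper's data.

```python
# Minimal sketch of confusion-matrix evaluation for a multi-class
# defect classifier. Class names and label counts are hypothetical.

CLASSES = ["normal", "defect_a", "defect_b"]

def confusion_matrix(y_true, y_pred):
    idx = {c: i for i, c in enumerate(CLASSES)}
    m = [[0] * len(CLASSES) for _ in CLASSES]
    for t, p in zip(y_true, y_pred):
        m[idx[t]][idx[p]] += 1   # rows: true class, cols: predicted
    return m

def accuracy(m):
    correct = sum(m[i][i] for i in range(len(m)))
    total = sum(sum(row) for row in m)
    return correct / total

def recall(m, i):
    """Fraction of true class-i samples the model caught."""
    row = sum(m[i])
    return m[i][i] / row if row else 0.0

y_true = ["normal"] * 6 + ["defect_a"] * 2 + ["defect_b"] * 2
y_pred = ["normal"] * 5 + ["defect_a"] * 3 + ["defect_b"] * 2
m = confusion_matrix(y_true, y_pred)
print(accuracy(m))  # 0.9
```

Per-class recall matters here because missing a defect (low recall on a defect class) is usually costlier on a production line than a false alarm.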

Creation of Actual CCTV Surveillance Map Using Point Cloud Acquired by Mobile Mapping System (MMS 점군 데이터를 이용한 CCTV의 실질적 감시영역 추출)

  • Choi, Wonjun;Park, Soyeon;Choi, Yoonjo;Hong, Seunghwan;Kim, Namhoon;Sohn, Hong-Gyoo
    • Korean Journal of Remote Sensing / v.37 no.5_3 / pp.1361-1371 / 2021
  • Among smart city services, the crime and disaster prevention sector accounted for the highest share, 24%, in 2018. The most important platform for providing real-time situational information is CCTV (Closed-Circuit Television). Therefore, it is essential to map the actual CCTV surveillance coverage to maximize the usability of CCTV. However, the number of CCTV cameras installed in Korea exceeds one million units, including those operated by local governments, and manual identification of CCTV coverage is a time-consuming and inefficient process. This study proposed a method to efficiently construct CCTV's actual surveillance coverage and reduce the time required for decision-makers to manage situations. For this purpose, first, the exterior orientation parameters and focal lengths of pre-installed CCTV cameras, which are difficult to access, were calculated using point cloud data from an MMS (Mobile Mapping System), and the FOV (Field of View) was calculated accordingly. Second, using the FOV results from the first step, the actual CCTV surveillance coverage was constructed with 1 m, 2 m, 3 m, 5 m, and 10 m grid intervals, considering the occluded regions caused by buildings. As a result of applying our approach to 5 CCTV images located in Uljin-gun, Gyeongsangbuk-do, the average re-projection error was about 9.31 pixels. The difference between the calculated CCTV position and the location obtained from the MMS was about 1.688 m on average. When the grid length was 3 m, the surveillance coverage calculated through our research matched the actual coverage obtained from visual inspection with a minimum of 70.21% and a maximum of 93.82%.
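The step from a recovered focal length to an FOV follows directly from the pinhole camera model, which can be sketched in one formula. The sensor width used in the example is an assumed value, not a parameter from the study.

```python
import math

# Minimal sketch of the FOV step: given a focal length recovered for a
# pre-installed camera, the horizontal field of view follows from the
# pinhole model. The sensor width here is a hypothetical value.

def horizontal_fov_deg(focal_mm, sensor_width_mm):
    """FOV = 2 * atan(w / (2f)) for a pinhole camera."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_mm)))

# e.g. an assumed 6.4 mm-wide sensor with a 3.2 mm focal length:
print(round(horizontal_fov_deg(3.2, 6.4), 1))  # 90.0
```

The resulting view cone, anchored at the estimated camera position and orientation, is then clipped against building footprints to obtain the actual (occlusion-aware) surveillance polygon.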

A Development of Facility Web Program for Small and Medium-Sized PSM Workplaces (중·소규모 공정안전관리 사업장의 웹 전산시스템 개발)

  • Kim, Young Suk;Park, Dal Jae
    • Korean Chemical Engineering Research / v.60 no.3 / pp.334-346 / 2022
  • There is a lack of knowledge and information on the understanding and application of the Process Safety Management (PSM) system, recognized as a major cause of industrial accidents in small- and medium-sized workplaces. Hence, it is necessary to prepare a protocol to secure practical and continuous levels of PSM implementation and to eliminate human errors through tracking management; however, insufficient research has been conducted on this. Therefore, this study investigated and analyzed the various violations found in administrative measures, based on the regulations announced by the Ministry of Employment and Labor, in approximately 200 small- and medium-sized PSM workplaces with fewer than 300 employees across Korea. This study intended to contribute to the prevention of major industrial accidents by developing a facility maintenance web program that removes human errors in small- and medium-sized workplaces. The major results are summarized as follows. First, the program is accessed on the web via a QR code on a smart device, allowing users to check equipment specifications, causes of failure, and photos, and to request inspection and maintenance in real time. Second, it links the identification of change targets, risk assessment, worker training, and pre-operation inspection within the program, allowing the administrator to track all procedures from start to finish. Third, it makes it possible to predict equipment life and verify reliability based on the data accumulated through registering photos of improvements and repairs, time required, cost, etc. after work is completed. These results should be helpful for the practical and systematic operation of small- and medium-sized PSM workplaces. In addition, the program can be usefully applied to the development and dissemination of facility maintenance web programs when establishing future smart factories in small- and medium-sized PSM workplaces under government direction.

A Validation Study on the Drive Ability Cognitive Assessment Tool of Elderly Drivers (고령자 운전능력 인지 검사 도구의 타당화 연구)

  • Cheong, Moon Joo;Lee, Young Mi;Seo, Puluna
    • The Journal of the Korea Contents Association / v.20 no.3 / pp.298-308 / 2020
  • This study was designed to verify the reliability and validity of the driving ability test tool for elderly drivers aged 65 or older, which was improved in 2018 and is currently administered by the Korea Highway Traffic Authority. Only those aged 65 or older who voluntarily applied to the elderly driving ability evaluation system implemented by the Seoul branch of the Korea Highway Traffic Authority were included. The research was conducted for about 50 days, ending on Aug. 31, 2018, starting with the registration and testing of the first study subjects. For the analysis, correlation analyses with the existing tool and a cognitive testing tool (MMSE-K) were performed to determine the validity and reliability of the tool as improved in 2018. As a result, first, the speed-distance, visuospatial memory, and divided-attention sub-components of the old version showed statistically significant positive correlations with the corresponding sub-factors of the current version. Persistence, on the other hand, was not significantly correlated with the current version. The limitations of this study were as follows. Most of the study participants were highly educated residents of the metropolitan area, so the results of the MMSE-K, which checks cognitive and judgment skills, may have been inflated. Also, cognitive tools measured by computer are likely to produce measurement errors for generations unfamiliar with computers. Therefore, improvement and development of tools that address these limitations in the field and assess actual driving ability will be required.
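The validation analysis above rests on Pearson correlations between old-version and current-version subscales, which can be sketched as follows. The score vectors are hypothetical examples, not the study's data.

```python
import math

# Minimal sketch of the validation step: Pearson correlation between an
# old-version subscale and the corresponding current-version subscale.
# Scores below are hypothetical.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

old_scores = [10, 12, 14, 15, 18]   # old-version subscale
new_scores = [22, 25, 27, 29, 33]   # current-version subscale
r = pearson(old_scores, new_scores)
print(round(r, 3))
```

A strong positive r, as the study found for most subscales, supports the claim that the improved tool measures the same underlying ability as the old one.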