• Title/Summary/Keyword: Solution algorithm


A Study on Application of Improved Tunnel Water-Sealing Grouting Construction Process and the Inverse Analysis Material Selection Method Using the Injection Processing Results (개선된 터널 차수그라우팅 시공 프로세스 적용 및 그 주입시공결과를 이용한 역해석 재료선정방법 연구)

  • Kim, Jin Chun;Yoo, Byung Sun;Kang, Hee Jin;Choi, Gi Sung;Kim, Seok Hyun
    • Journal of Korean Society of Disaster and Security / v.15 no.3 / pp.101-113 / 2022
  • This study aims to develop a systematic construction process, grounded in scientific and engineering theory, for the water-sealing grouting applied during tunnel excavation for urban underground traffic networks, so that the construction quality of domestic tunnel water-sealing grouting, which has lagged behind, is improved and maintained consistently regardless of who performs the work. The main contents of the improved process comprise the classification of tunnel water-sealing grouting applications and the definition of grouting materials, correlation analysis between groundwater pressure conditions and groundwater inflow, study of the characteristic factors of the bedrock, and the element technologies and injection-management techniques required for grouting construction. Globally, theory-based scientific and engineering grouting research is most active in the Nordic countries (Sweden, Finland, Norway, etc.), Japan, Germany, and the United States. Accordingly, this study establishes an algorithm through theoretical analysis of the elements of tunnel water-sealing grouting techniques and provides an integrated solution, including a construction process, for executing such grouting effectively.

Parallel Computation on the Three-dimensional Electromagnetic Field by the Graph Partitioning and Multi-frontal Method (그래프 분할 및 다중 프론탈 기법에 의거한 3차원 전자기장의 병렬 해석)

  • Kang, Seung-Hoon;Song, Dong-Hyeon;Choi, JaeWon;Shin, SangJoon
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.50 no.12 / pp.889-898 / 2022
  • In this paper, a parallel computing method for the three-dimensional electromagnetic field is proposed. The electromagnetic scattering analysis is based on the time-harmonic vector wave equation and the finite element method, using edge-based elements and a second-order absorbing boundary condition. Elemental numerical integration and matrix assembly are parallelized by allocating a partitioned finite element subdomain to each processor; the graph partitioning library METIS is employed for subdomain generation. The large sparse matrix computation is carried out by MUMPS, a parallel library based on the multifrontal method. The accuracy of the program is validated against the Mie-series analytical solution and results from ANSYS HFSS, and scalability is verified by measuring the speed-up as the number of processors increases. The analysis is performed for a perfect electric conductor sphere, isotropic/anisotropic dielectric spheres, and a missile configuration. The algorithm will be extended to the finite element tearing and interconnecting method, aiming for further parallel computing performance.
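The subdomain-allocation step described above can be sketched in a few lines. The following toy partitioner is a stand-in for METIS (not the authors' code): it grows subdomains over an element-adjacency graph by breadth-first search, so that each "processor" would assemble only the elements of its own subdomain.

```python
from collections import deque

def partition_elements(adjacency, n_parts):
    """Greedy BFS partitioning of an element graph into n_parts subdomains.

    A toy stand-in for METIS: each processor assembles only the finite
    elements belonging to its own subdomain.
    """
    n = len(adjacency)
    target = -(-n // n_parts)  # ceil(n / n_parts) elements per subdomain
    part = [-1] * n
    next_seed = 0
    for p in range(n_parts):
        # seed each subdomain at the first unassigned element
        while next_seed < n and part[next_seed] != -1:
            next_seed += 1
        if next_seed == n:
            break
        queue, size = deque([next_seed]), 0
        while queue and size < target:
            e = queue.popleft()
            if part[e] != -1:
                continue
            part[e] = p
            size += 1
            for nb in adjacency[e]:
                if part[nb] == -1:
                    queue.append(nb)
    # sweep up any unassigned stragglers
    for e in range(n):
        if part[e] == -1:
            part[e] = n_parts - 1
    return part

# 1D chain of 8 elements split across 2 "processors"
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < 8] for i in range(8)}
parts = partition_elements(adj, 2)
```

A real partitioner also minimizes the edge cut between subdomains to reduce interprocess communication, which is exactly what METIS optimizes.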

Performance Analysis of DoS/DDoS Attack Detection Algorithms using Different False Alarm Rates (False Alarm Rate 변화에 따른 DoS/DDoS 탐지 알고리즘의 성능 분석)

  • Jang, Beom-Soo;Lee, Joo-Young;Jung, Jae-Il
    • Journal of the Korea Society for Simulation / v.19 no.4 / pp.139-149 / 2010
  • The Internet was designed for network scalability and best-effort service, which leaves every connected host vulnerable to attack. Many detection algorithms have been proposed against IP-spoofing and DoS/DDoS attacks. Because the purpose of a DoS/DDoS attack is achieved shortly after the attack begins, such attacks must be detected as early as possible. The false alarm behavior of a detection algorithm is characterized by two rates, the false negative rate and the false positive rate, which are important metrics for evaluating attack detection. In this paper, we analyze by simulation how variations in the false negative and false positive rates affect normal traffic and attack traffic. We find that the number of attack packets passed is proportional to the false negative rate, while the number of normal packets passed is inversely related to the false positive rate. We also analyze the limits of attack detection imposed by the relation between the two rates. Finally, we propose a solution that minimizes these limits by defining the network state from the ratio between the number of packets classified as attack and the number classified as normal, and we find that detection performance improves when the packets classified as attacks are passed under this scheme.
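The proportionality findings above can be illustrated with a minimal expected-count model (an illustration of the stated relations, not the paper's simulator): missed attacks scale with the false negative rate, and surviving normal traffic scales with one minus the false positive rate.

```python
def detector_outcome(n_attack, n_normal, fnr, fpr):
    """Expected packet counts for a detector with given error rates.

    Illustrates the paper's observation: passed attack packets grow in
    proportion to the false negative rate (fnr), while passed normal
    packets shrink as the false positive rate (fpr) grows.
    """
    passed_attack = n_attack * fnr          # attacks the detector misses
    passed_normal = n_normal * (1.0 - fpr)  # normal traffic not falsely dropped
    return passed_attack, passed_normal

# hypothetical traffic mix: 10,000 attack and 50,000 normal packets
pa, pn = detector_outcome(10_000, 50_000, fnr=0.02, fpr=0.05)
```

Doubling `fnr` doubles the attack packets that slip through, while raising `fpr` only erodes legitimate traffic, which is why the two rates must be traded off jointly.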

Validity of Linear Combination Approach based on Net Damping Analysis of Cable-Damper System (케이블-댐퍼 시스템의 전체감쇠비 해석을 통한 선형조합 접근법의 유효성)

  • Kim, Hyeon Kyeom;Hwang, Jae Woong;Lee, Myeong Jae
    • KSCE Journal of Civil and Environmental Engineering Research / v.29 no.5A / pp.467-475 / 2009
  • Existing studies have proposed the Universal Curve only for the supplemental damping provided by the damper, so the net damping has been estimated by arithmetic summation of the cable's intrinsic damping, its aerodynamic damping, and the damper's supplemental damping. This linear combination approach, however, lacks sufficient theoretical grounding, and its validity should be verified before engineers rely on it to design cable-damper systems. This study establishes a governing differential equation that accounts for intrinsic, aerodynamic, and supplemental damping together, and solves it by combining Muller's method with successive iteration. The analysis verifies the validity of the linear combination approach, but only within limits: it is acceptable for low damper stiffness near the optimum damping coefficient, a short distance from the support to the damper, lower vibration modes, low aerodynamic damping, and ordinary wind conditions. Under the opposite conditions the method of this study remains effective, whereas the linear combination approach of existing studies incurs growing error. The significance of this study is that it presents an exact solution for the net damping of a cable-damper system and uses that solution to assess the linear combination approach. In the future, if the optimum damping coefficient against aerodynamic damping can be monitored in real time, the algorithm of this study could be applied to the control of cables with semi-active dampers such as magneto-rheological dampers.
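Muller's method, one half of the solution technique named above, fits a parabola through three iterates and follows it to a root; because it works in the complex plane, it suits characteristic equations of damped systems whose roots are complex. A generic sketch (the cable model and parameters below are illustrative, not the paper's):

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-12, max_iter=100):
    """Muller's method: root finding by quadratic interpolation.

    Handles complex roots, so the damping ratio of an underdamped mode
    can be read off as -Re(root) / |root|.
    """
    for _ in range(max_iter):
        h1, h2 = x1 - x0, x2 - x1
        d1 = (f(x1) - f(x0)) / h1
        d2 = (f(x2) - f(x1)) / h2
        a = (d2 - d1) / (h2 + h1)
        b = a * h2 + d2
        c = f(x2)
        disc = cmath.sqrt(b * b - 4 * a * c)
        # pick the sign that maximizes |denominator| for stability
        denom = b + disc if abs(b + disc) > abs(b - disc) else b - disc
        x3 = x2 - 2 * c / denom
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x2

# illustrative: s^2 + 2*zeta*wn*s + wn^2 with zeta = 0.02, wn = 10 rad/s
root = muller(lambda s: s * s + 2 * 0.02 * 10 * s + 100, 0.0, 1.0, 2.0)
zeta = -root.real / abs(root)
```

Starting from purely real guesses, the iteration still lands on the complex root pair, which a real-valued method such as Newton's on the real line would miss.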

The Study of Digitalization of Analog Gauge using Image Processing (이미지 처리를 이용한 아날로그 게이지 디지털화에 관한 연구)

  • Seon-Deok Kim;Cherl-O Bae;Kyung-Min Park;Jae-Hoon Jee
    • Journal of the Korean Society of Marine Environment & Safety / v.29 no.4 / pp.389-394 / 2023
  • In recent years, the use of machine automation in industry has been rising, and ships likewise obtain equipment condition information from sensors in digital form. On board, however, crew members still make regular rounds of the engine room to check equipment condition through analog gauges, a time-consuming and tedious process that exposes them to safety risks. Engine room surveillance by an autonomous mobile robot is therefore being actively explored as a solution, since it can reduce time, cost, and risk to the crew. For a robot to read an analog gauge, the gauge value must be digitized, and in this study image processing techniques were applied to do so. Analog gauge images were preprocessed to remove noise and highlight features, and the center point, indicator (needle) point, minimum-value mark, and maximum-value mark were detected. From the straight lines connecting these points, the angle from the minimum to the maximum mark and the angle from the minimum mark to the needle were obtained, and the needle angle was converted through a formula into the value currently indicated by the gauge. Experiments confirmed that the digitization was successful, yielding the value actually shown by the gauge. Applied to surveillance robots, this algorithm can minimize the safety risks and the time and opportunity costs of engine room rounds.
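The angle-to-value conversion described above amounts to linear interpolation over the gauge's sweep. A minimal sketch, assuming a linear scale (the paper does not state its exact formula):

```python
def gauge_value(theta_min_to_max, theta_min_to_needle, v_min, v_max):
    """Map a needle angle to a physical reading on a linear-scale gauge.

    theta_min_to_max   : angle swept from the minimum to the maximum mark
    theta_min_to_needle: angle swept from the minimum mark to the needle
    """
    return v_min + (theta_min_to_needle / theta_min_to_max) * (v_max - v_min)

# hypothetical 0-10 bar gauge spanning 270 degrees, needle at 135 degrees
reading = gauge_value(270.0, 135.0, 0.0, 10.0)
```

The angles themselves would come from the detected center, needle, and min/max points, e.g. via `atan2` on the connecting lines.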

Development of a Portable-Based Smart Structural Response Monitoring System and Evaluation of Field Applicability (포터블 기반 스마트 구조 응답 모니터링 시스템 개발 및 현장 적용성 평가)

  • Sangki Park;Dong-Woo Seo;Ki-Tae Park;Hojin Kim;Thanh Bui-Tien;Lan Nguyen-Ngoc
    • Journal of Korean Society of Disaster and Security / v.16 no.4 / pp.147-156 / 2023
  • Because the behavior of cable bridges is dominated by dynamic response and is relatively complex, short- and long-term field monitoring is often required to evaluate bridge condition. When a permanent structural health monitoring system (SHMS) is not installed, a portable monitoring system is needed to check the bridge's condition, but it can be difficult to operate one under the power and communication constraints imposed by the bridge's location and type. In this study, a portable smart structural response monitoring system is developed that can be used effectively for short- and long-term monitoring of cable bridges in Korea and Southeast Asia. The developed system is a multi-channel portable data acquisition and analysis unit that can operate in the field for extended periods on its own power supply, and it includes an automated algorithm that identifies the dynamic characteristics of cable bridges from real-time data. To evaluate field applicability, demonstrations were conducted on cable bridges in Korea and Vietnam, confirming the reliability and efficiency of the system in field operation and, additionally, its potential in overseas cable bridge monitoring markets.
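The automated identification of dynamic characteristics typically starts with peak picking on a frequency spectrum. A toy, stdlib-only sketch of that step (the paper's actual algorithm is not specified here), using a naive DFT on a synthetic acceleration record:

```python
import cmath
import math

def dominant_frequency(signal, fs):
    """Pick the dominant frequency of a response record via a naive DFT.

    A toy version of the peak-picking step an automated modal-analysis
    routine might run on real-time bridge acceleration data.
    """
    n = len(signal)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):          # skip DC, keep positive frequencies
        x_k = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                  for t in range(n))
        if abs(x_k) > best_mag:
            best_k, best_mag = k, abs(x_k)
    return best_k * fs / n              # convert bin index to Hz

# synthetic 2 Hz "natural frequency" sampled at 50 Hz for 2 seconds
fs = 50.0
sig = [math.sin(2 * math.pi * 2.0 * t / fs) for t in range(100)]
freq = dominant_frequency(sig, fs)
```

A production system would use an FFT and a full modal-identification method (e.g. frequency domain decomposition) rather than this O(n²) loop, but the bin-to-hertz conversion `k * fs / n` is the same.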

Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.63-83 / 2019
  • Investors prefer to look for trading points in the shapes shown on charts rather than in complex analyses such as corporate intrinsic value or technical auxiliary indices. Pattern analysis, however, is difficult and has been computerized less than users need. In recent years many studies have examined stock price patterns with machine learning techniques, including neural networks, and advances in IT have made it easier to search huge volumes of chart data for patterns that can predict prices. Although short-term forecasting performance has improved, long-term forecasting power remains limited, so these methods are used in short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier techniques could not recognize, but whether a discovered pattern is suitable for actual trading is a separate question, which makes such approaches vulnerable in practice. A typical procedure finds a meaningful pattern, locates points that match it, and measures performance n days later on the assumption that a purchase was made at the matching point; because this computes virtual returns, it can diverge considerably from reality. Whereas existing research tries to discover patterns with predictive power, this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M&W wave patterns published by Merrill (1980) are simple because each can be distinguished by five turning points. Although some of these patterns have been reported to have price predictability, no performance results from the actual market have been published. The simplicity of a five-turning-point pattern also reduces the cost of raising pattern recognition accuracy.
In this study, the 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be implemented easily in a system, and only the pattern with the highest success rate in each group is selected for trading, on the premise that patterns with a high probability of success in the past are likely to succeed in the future. Performance is measured assuming that both the buy and the sell were actually executed, so it reflects a realistic situation. Three ways of calculating turning points were tested. The first, the minimum change rate zig-zag method, removes price movements below a certain percentage before computing vertices. The second, the high-low line zig-zag method, takes a high that meets the n-day high line as a peak and a low that meets the n-day low line as a valley. The third, the swing wave method, takes a central high that is higher than the n highs on its left and right as a peak, and a central low that is lower than the n lows on its left and right as a valley. The swing wave method was superior in the tests, which we interpret to mean that trading after a pattern is confirmed complete is more effective than trading while the pattern is still forming. Because the number of cases was far too large to search exhaustively in this simulation, genetic algorithms (GA) were the most suitable optimization tool. We also ran the simulation with walk-forward analysis (WFA), which separates the test period from the application period, allowing the system to respond appropriately to market changes. We optimize at the level of the stock portfolio, because optimizing the variables for each individual stock risks over-optimization.
We therefore set the number of constituent stocks to 20 to gain the benefit of diversification while avoiding over-optimization, and tested the KOSPI market divided into six categories. The small-cap portfolio was the most successful and the high-volatility portfolio the second best. This suggests that some price volatility is needed for patterns to take shape, but that more volatility is not always better.
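The swing wave method described above, the best performer of the three, can be sketched directly from its definition (a minimal illustration, not the authors' implementation; the price series below is hypothetical):

```python
def swing_turning_points(prices, n):
    """Swing-wave method: mark peaks and valleys using n bars on each side.

    A bar is a peak if it is strictly higher than the n bars to its left
    and right, and a valley if strictly lower; consecutive turning points
    are the five vertices that define an M&W pattern candidate.
    """
    points = []
    for i in range(n, len(prices) - n):
        window = prices[i - n:i] + prices[i + 1:i + n + 1]
        if prices[i] > max(window):
            points.append((i, prices[i], "peak"))
        elif prices[i] < min(window):
            points.append((i, prices[i], "valley"))
    return points

# hypothetical daily closes
prices = [10, 12, 15, 13, 11, 14, 18, 16, 12, 13]
tps = swing_turning_points(prices, 2)
```

Note that a turning point can only be confirmed n bars after it occurs, which is exactly why trading on completed patterns differs from trading on patterns still in formation.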

Performance Characteristics of 3D GSO PET/CT Scanner (Philips GEMINI PET/CT) (3차원 GSO PET/CT 스캐너(Philips GEMINI PET/CT)의 특성 평가)

  • Kim, Jin-Su;Lee, Jae-Sung;Lee, Byeong-Il;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine / v.38 no.4 / pp.318-324 / 2004
  • Purpose: The Philips GEMINI is a newly introduced whole-body GSO PET/CT scanner. In this study, scanner performance, including spatial resolution, sensitivity, scatter fraction, and noise equivalent count rate (NECR), was measured using the NEMA NU2-2001 standard protocol and compared with the performance of LSO- and BGO-crystal scanners. Methods: The GEMINI combines the Philips ALLEGRO PET scanner and the MX8000 D multi-slice CT scanner. The PET scanner has 28 detector segments, each an array of 29 by 22 GSO crystals (4 × 6 × 20 mm), covering an axial FOV of 18 cm. PET data for spatial resolution, sensitivity, scatter fraction, and NECR were acquired in 3D mode according to the NEMA NU2 protocols (coincidence window: 8 ns, energy window: 409~664 keV). For the spatial resolution measurement, images were reconstructed both with FBP using a ramp filter and with an iterative reconstruction algorithm, 3D RAMLA. Sensitivity data were acquired using the NEMA sensitivity phantom filled with F-18 solution and surrounded by 1~5 aluminum sleeves, after confirming that dead time loss did not exceed 1%. To measure NECR and scatter fraction, 1110 MBq of F-18 solution was injected into a NEMA scatter phantom 70 cm in length, and a dynamic scan with 20-min frame duration was acquired over 7 half-lives. Oblique sinograms were collapsed into transaxial slices by single-slice rebinning, and the true-to-background (scatter + random) ratio was estimated for each slice and frame. The scatter fraction was determined by averaging the true-to-background ratios of the last three frames, in which the dead time loss was below 1%. Results: Transverse and axial resolutions at 1 cm radius were (1) 5.3 and 6.5 mm (FBP) and (2) 5.1 and 5.9 mm (3D RAMLA). Transverse radial, transverse tangential, and axial resolutions at 10 cm were (1) 5.7, 5.7, and 7.0 mm (FBP) and (2) 5.4, 5.4, and 6.4 mm (3D RAMLA).
Attenuation-free sensitivity was 3,620 counts/sec/MBq at the center of the transaxial FOV and 4,324 counts/sec/MBq at 10 cm offset from the center. The scatter fraction was 40.6%, and the peak true count rate and NECR were 88.9 kcps @ 12.9 kBq/mL and 34.3 kcps @ 8.84 kBq/mL. These characteristics are better than those of the ECAT EXACT PET scanner with BGO crystals. Conclusion: The results of this field test demonstrate the high resolution, sensitivity, and count rate performance of this 3D PET/CT scanner with GSO crystals. The data provided here will be useful for comparative studies with other 3D PET/CT scanners using BGO or LSO crystals.
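The two figures of merit reported above have compact standard definitions, which can be written out as follows (the rates used here are illustrative, not the paper's measured values; the NEMA NU2 variant with delayed-window randoms subtraction uses 2R in the NECR denominator):

```python
def scatter_fraction(true_rate, scatter_rate):
    """SF = S / (S + T): fraction of non-random coincidences that scattered."""
    return scatter_rate / (scatter_rate + true_rate)

def necr(true_rate, scatter_rate, random_rate):
    """Noise equivalent count rate, NECR = T^2 / (T + S + R).

    Expresses how much of the measured count rate is statistically
    equivalent to noise-free true counts.
    """
    return true_rate ** 2 / (true_rate + scatter_rate + random_rate)

# illustrative rates in kcps (not the GEMINI measurements)
t, s, r = 60.0, 41.0, 19.0
sf = scatter_fraction(t, s)
rate = necr(t, s, r)
```

Because NECR grows as T² but only linearly in the denominator, it peaks at a finite activity concentration, which is why the paper reports peak NECR at a specific kBq/mL.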

Export Control System based on Case Based Reasoning: Design and Evaluation (사례 기반 지능형 수출통제 시스템 : 설계와 평가)

  • Hong, Woneui;Kim, Uihyun;Cho, Sinhee;Kim, Sansung;Yi, Mun Yong;Shin, Donghoon
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.109-131 / 2014
  • As worldwide demand for nuclear power plant equipment grows, the importance of handling nuclear strategic materials is also increasing. The number of cases submitted for export of nuclear-power commodities and technologies has risen dramatically, yet the preadjudication (prescreening, for short) of strategic materials has so far been done by experts with long experience and extensive field knowledge. Such experts are in severe shortage, and developing one takes a long time; because human experts must manually evaluate every document submitted for export permission, the current practice of nuclear material export control is neither time-efficient nor cost-effective. To reduce the reliance on costly human experts, this research proposes a system designed to help field experts make decisions more effectively and efficiently. The proposed system is built on case-based reasoning, which in essence extracts key features from existing cases, compares them with the features of a new case, and derives a solution for the new case by referencing similar cases and their solutions. This research proposes a framework for a case-based reasoning system, designs such a system for the control of nuclear material exports, and evaluates the performance of alternative keyword extraction methods (fully automatic, fully manual, and semi-automatic); keyword extraction is an essential component of the system because it supplies the key features of the cases. The fully automatic method used TF-IDF, the widely used de facto standard for representative keyword extraction in text mining.
TF (term frequency) counts how often a term occurs within a document, indicating how important the term is to that document, while IDF (inverse document frequency) is based on how rarely the term occurs across the document set, indicating how uniquely the term represents the document. The results show that the semi-automatic approach, based on collaboration between machine and human, is the most effective regardless of whether the human is a field expert or a student majoring in nuclear engineering. Moreover, we propose a new way of computing nuclear document similarity within a new document analysis framework. The proposed similarity considers both document-to-document similarity (α) and document-to-nuclear-system similarity (β) to derive a final score (γ) for deciding whether the presented case concerns strategic material. The final score γ represents the document similarity between past cases and the new case; it is induced not only from conventional TF-IDF but also from a nuclear system similarity score that takes the context of the nuclear system domain into account. Finally, the system retrieves the top-3 documents in the case base considered most similar to the new case and presents them with a degree of credibility. With the final score and the credibility score, it becomes easier for a user to see which documents in the case base are worth looking up, and to make a proper decision at relatively low cost. The system was evaluated by developing a prototype and testing it with field data, and the workflows and outcomes were verified by field experts.
This research is expected to contribute to the growth of the knowledge service industry by proposing a system that can effectively reduce the burden of relying on costly human experts for the export control of nuclear materials, and that can be considered a meaningful example of a knowledge service application.
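The TF-IDF weighting and the α/β/γ combination described above can be sketched in pure Python. The corpus, the cosine form of α, and the linear blend for γ are illustrative assumptions; the paper does not publish its exact combination rule.

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Build TF-IDF vectors for a tokenized corpus (a toy stand-in for
    the system's document-to-document similarity term, alpha)."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        # tf * log(N / df): frequent-in-doc, rare-in-corpus terms score high
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm = math.sqrt(sum(x * x for x in u.values())) * \
           math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def final_score(alpha, beta, w=0.5):
    """gamma: a hypothetical linear blend of document similarity (alpha)
    and nuclear-system similarity (beta); the actual rule is the paper's."""
    return w * alpha + (1 - w) * beta

# hypothetical tokenized case documents
docs = [["reactor", "coolant", "pump"],
        ["reactor", "control", "rod"],
        ["export", "permit", "form"]]
vecs = tf_idf_vectors(docs)
```

Under this sketch, the two reactor-related cases score higher against each other than against the administrative document, which is the behavior the case retrieval step relies on.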

The way to make training data for deep learning model to recognize keywords in product catalog image at E-commerce (온라인 쇼핑몰에서 상품 설명 이미지 내의 키워드 인식을 위한 딥러닝 훈련 데이터 자동 생성 방안)

  • Kim, Kitae;Oh, Wonseok;Lim, Geunwon;Cha, Eunwoo;Shin, Minyoung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.1-23 / 2018
  • Since the start of the 21st century, various high-quality services have emerged with the growth of the internet and information and communication technologies, and the e-commerce industry, in which Amazon and eBay stand out, has exploded in scale. As e-commerce grows, customers can easily find and compare products because more products are registered at online shopping malls, but the flood of products has also made it hard for customers to find what they really need: a generalized keyword returns too many results, while detailed queries return few, because concrete product attributes are rarely registered. Automatically recognizing text in images with a machine can be a solution. Because the bulk of product details is written in catalogs in image format, most product information cannot be found by the current text-based search systems; if the information in images were converted to text, customers could search by product details and shop more conveniently. Existing OCR (optical character recognition) programs can recognize text in images, but they are hard to apply to catalogs because they fail in certain circumstances, such as when the text is too small or the fonts are inconsistent. This research therefore proposes a way to recognize keywords in catalogs with deep learning, the state of the art in image recognition since the 2010s.
The Single Shot MultiBox Detector (SSD), a well-regarded model for object detection, can be used with its structure redesigned to account for the differences between text and general objects. However, like other supervised deep learning models, SSD needs a large amount of labeled training data. Manually labeling the location and class of every text region in catalogs raises many problems: keywords can be missed through human error, collection is too time-consuming at the required scale or too costly if many workers are hired to shorten the time, and images containing specific keywords that need to be trained can be difficult to find. To solve this data issue, this research developed a program that creates training data automatically: it composes catalog-like images containing various keywords and pictures while saving the location information of each keyword. With this program, not only can data be collected efficiently, but the performance of the SSD model also improves; the model achieved an 81.99% recognition rate with 20,000 images created by the program. This research also tested how properties of the data affect the performance of recognizing text in images, finding that the number of labeled keywords, the addition of overlapping keyword labels, the presence of unlabeled keywords, the spacing between keywords, and the variety of background images are all related to SSD performance. These findings can guide performance improvement of SSD or other deep-learning text recognizers through higher-quality data.
The SSD model redesigned to recognize text in images and the program developed for creating training data are expected to improve search systems in e-commerce: suppliers can spend less time registering keywords for products, and customers can search for products using the details written in catalogs.
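The automatic training-data generator described above can be sketched as a pure-geometry routine: place keyword strings at random positions on a virtual canvas and record each bounding box as a label. All names, sizes, and keywords below are hypothetical; a real generator would also render the text (e.g. with PIL) onto background images.

```python
import random

def make_labeled_sample(keywords, canvas=(600, 800), char_w=12, char_h=20,
                        rng=None):
    """Generate one synthetic 'catalog' layout: keyword strings placed at
    random positions, with bounding boxes recorded as SSD-style labels.

    Box width is approximated as char_w * len(word); a renderer with real
    font metrics would replace this.
    """
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    width, height = canvas
    labels = []
    for word in keywords:
        w = char_w * len(word)
        x = rng.randrange(0, width - w)
        y = rng.randrange(0, height - char_h)
        labels.append({"text": word, "bbox": (x, y, x + w, y + char_h)})
    return labels

# hypothetical product-attribute keywords
sample = make_labeled_sample(["cotton", "stretch", "waterproof"])
```

Because the generator knows exactly where it placed each word, every label is complete and consistent, sidestepping the missed-keyword and cost problems of manual annotation.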