• Title/Summary/Keyword: reliability-based optimization (신뢰도 기반 최적화)


A Tree-Based Routing Algorithm Considering An Optimization for Efficient Link-Cost Estimation in Military WSN Environments (무선 센서 네트워크에서 링크 비용 최적화를 고려한 감시·정찰 환경의 트리 기반 라우팅 알고리즘에 대한 연구)

  • Kong, Joon-Ik;Lee, Jae-Ho;Kang, Ji-Heon;Eom, Doo-Seop
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.8B / pp.637-646 / 2012
  • Recently, Wireless Sensor Networks (WSNs) have been used in many applications. When sensor nodes are deployed in areas that are difficult for humans to enter, the nodes form the network topology themselves, and users can obtain environmental information through them. Because of limited battery capacity, the energy consumption of sensor nodes in WSNs must be managed efficiently. In certain applications (e.g., intrusion detection), intrusions tend to occur unexpectedly, so an energy-efficient algorithm is strongly required. In this paper, we propose a tree-based routing algorithm for such intrusion-detection applications. To reduce traffic concentration, the proposed algorithm provides an enhanced method that considers link cost and load balance and establishes efficient links among the sensor nodes; at the same time, parent and child nodes are (re-)defined by the proposed scheme. Furthermore, efficient routing-table management improves energy efficiency under the limited power source. To reflect a realistic military environment, we design three scenarios according to the intruder's moving direction: (1) the intruder passes along a path where sensor nodes have already been deployed; (2) the intruders cross the path; (3) the intruders, moving as in scenario (1), deviate from the path at its midpoint. Through simulation, we obtain and analyze performance results in terms of latency and energy consumption, and we show that our algorithm adapts well to such application environments.
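A minimal sketch of the kind of load-balanced parent selection the abstract describes, assuming a link cost composed of hop depth plus a load penalty; the weights and node fields are illustrative, not the paper's definitions:

```python
# Load-balanced parent selection in a routing tree (illustrative sketch).
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    depth: int            # hops to the sink
    child_count: int = 0  # current routing load

def link_cost(candidate: Node, alpha: float = 1.0, beta: float = 0.5) -> float:
    """Lower is better: prefer short paths, but penalize already-loaded parents."""
    return alpha * candidate.depth + beta * candidate.child_count

def choose_parent(neighbors: list[Node]) -> Node:
    parent = min(neighbors, key=link_cost)
    parent.child_count += 1   # (re-)defines the parent/child relationship
    return parent

neighbors = [Node(1, depth=2), Node(2, depth=2, child_count=3), Node(3, depth=3)]
print(choose_parent(neighbors).node_id)   # picks node 1: same depth, less load
```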

Geographical Name Denoising by Machine Learning of Event Detection Based on Twitter (트위터 기반 이벤트 탐지에서의 기계학습을 통한 지명 노이즈제거)

  • Woo, Seungmin;Hwang, Byung-Yeon
    • KIPS Transactions on Software and Data Engineering / v.4 no.10 / pp.447-454 / 2015
  • This paper proposes geographical-name denoising with machine learning for Twitter-based event detection. Recently, the increasing number of smartphone users has driven the growth of SNS. In particular, the short-message format (under 140 characters) and the follow service give Twitter the power to convey and diffuse information quickly; these characteristics and its mobile-optimized design make Twitter fast at spreading news of disasters and other events. Related research has used individual Twitter users as sensors to detect events occurring in the real world, employing geographical names as keywords based on the observation that an event occurs in a specific place. However, that work ignored the noise arising from homographs of geographical names, which became an important factor lowering the accuracy of event detection. In this paper, we apply two denoising methods: removal and forecasting. First, after a filtering step built on a noise-related database, we determine whether a token is a geographical name using Naive Bayesian classification. Finally, using the experimental data, we obtained the probability values from machine learning; based on the forecasting technique proposed in this paper, the reliability of the denoising technique turned out to be 89.6%.
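A minimal sketch of the Naive Bayesian keep/remove decision, assuming bag-of-words context features; the example tweets, place name, and labels are invented for illustration, not the paper's corpus:

```python
# Classify whether a token is used as a real place name (1) or homograph noise (0).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

tweets = [
    "heavy rain flooding downtown Gangnam station area",     # place name
    "traffic accident near Gangnam intersection this morning",
    "listening to Gangnam Style on repeat all day",          # homograph noise
    "Gangnam Style dance cover video is out",
]
labels = [1, 1, 0, 0]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(tweets, labels)

# P(place | context) drives the keep/remove decision in the denoising step.
print(clf.predict_proba(["power outage reported in Gangnam district"]))
```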

Boosting the Performance of Python-based Geodynamic Code using the Just-In-Time Compiler (Just-In-Time 컴파일러를 이용한 파이썬 기반 지구동역학 코드 가속화 연구)

  • Park, Sangjin;An, Soojung;So, Byung-Dal
    • Geophysics and Geophysical Exploration / v.24 no.2 / pp.35-44 / 2021
  • As the execution speed of Python is slower than those of other programming languages (e.g., C, C++, and FORTRAN), Python is not considered efficient for writing numerical geodynamic code that requires numerous iterations. Recently, many computational techniques, such as the Just-In-Time (JIT) compiler, have been developed to enhance the calculation speed of Python. Here, we developed two-dimensional (2D) numerical geodynamic code, based on Python, that is optimized for the JIT compiler. Our code simulates mantle convection by combining the Particle-In-Cell (PIC) scheme and the finite element method (FEM), both commonly used in geodynamic modeling. We benchmarked well-known mantle convection problems to evaluate the reliability of our code, confirming that the root-mean-square velocity and Nusselt number obtained from our numerical modeling were consistent with those of the benchmark problems. The matrix assembly and PIC processes in our code, when run with the JIT compiler, achieved speed-ups of 30× and 258×, respectively, over the non-JIT versions. Our Python-based FEM-PIC code shows the high potential of Python for geodynamic modeling cases that require complex computations.
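A toy illustration of the technique, assuming Numba (a widely used JIT compiler for Python; the abstract does not name which JIT was used) and a particle-advection loop standing in for the paper's PIC kernel:

```python
# JIT-compiling a plain-Python numerical loop with Numba.
import numpy as np
from numba import njit

@njit(cache=True)
def advect_particles(x, y, vx, vy, dt, steps):
    # Nested Python loops like this are slow in CPython but compile to
    # machine code under Numba's JIT, giving large speed-ups.
    for _ in range(steps):
        for i in range(x.size):
            x[i] += vx[i] * dt
            y[i] += vy[i] * dt
    return x, y

n = 100_000
x, y = np.random.rand(n), np.random.rand(n)
vx, vy = np.random.rand(n), np.random.rand(n)
advect_particles(x, y, vx, vy, 1e-3, 100)  # first call compiles; later calls are fast
```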

Analytical Methods for the Analysis of Structural Connectivity in the Mouse Brain (마우스 뇌의 구조적 연결성 분석을 위한 분석 방법)

  • Im, Sang-Jin;Baek, Hyeon-Man
    • Journal of the Korean Society of Radiology / v.15 no.4 / pp.507-518 / 2021
  • Magnetic resonance imaging (MRI) is a key technology that is seeing increasing use in studying the structural and functional inner workings of the brain. Analyzing the variability of the brain connectome through tractography has increased our understanding of disease pathology in humans. However, analysis methods for small animals such as mice lack standardization, and there is no scientific consensus on accurate preprocessing strategies or atlas-based neuroinformatics for such images. In addition, it is difficult to acquire high-resolution images of mice because a mouse brain is significantly smaller than a human brain. In this study, we present an Allen Mouse Brain Atlas-based image data analysis pipeline for structural connectivity analysis, involving structural region segmentation using mouse brain structural images and diffusion tensor images. Each analysis step uses reliable software that has already been validated on human and mouse image data. In addition, the pipeline presented in this study is optimized for efficient data processing by organizing the functions necessary for mouse tractography out of otherwise complex analysis processes.
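A hypothetical sketch of the diffusion-tensor-fitting step of such a pipeline, using DIPY as one example of validated software (the abstract does not name specific tools); the volume and gradient scheme below are placeholders:

```python
# Voxel-wise diffusion tensor fit and FA map (illustrative only).
import numpy as np
from dipy.core.gradients import gradient_table
from dipy.reconst.dti import TensorModel

# Assumed acquisition: one b0 plus six diffusion directions.
bvals = np.array([0, 1000, 1000, 1000, 1000, 1000, 1000], dtype=float)
bvecs = np.array([[0, 0, 0],
                  [1, 0, 0], [0, 1, 0], [0, 0, 1],
                  [0.7071, 0.7071, 0], [0.7071, 0, 0.7071], [0, 0.7071, 0.7071]])
gtab = gradient_table(bvals, bvecs=bvecs)

dwi = np.random.rand(32, 32, 16, 7)   # placeholder diffusion-weighted volume
tenfit = TensorModel(gtab).fit(dwi)   # fit a tensor in every voxel
fa = tenfit.fa                        # fractional anisotropy map for tractography
print(fa.shape)                       # (32, 32, 16)
```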

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer-system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling clients' business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing a client's business, a separate log data processing system needs to be established. However, realizing flexible storage expansion for a massive amount of unstructured log data, and executing the considerable number of functions needed to categorize and analyze the stored unstructured log data, is difficult in existing computing environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources, including the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that allow it to continue operating after recovering from a malfunction. Finally, by establishing a distributed database using NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data; moreover, their strict schemas make it hard to expand nodes when rapidly increasing data must be distributed across nodes. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the representative document-oriented model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is adopted because its flexible schema makes it easy to process unstructured log data, it facilitates flexible node expansion when data grow rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over each bank's entire client business process are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis performed by the MongoDB module, the Hadoop-based analysis module, and the MySQL module, per analysis time and per type of aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted as graphs according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation of log-insert and query performance against a system that uses only MySQL demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a MongoDB insert-performance evaluation over various chunk sizes.
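A minimal sketch of the schema-free storage and per-unit-time aggregation that MongoDB enables, using PyMongo; the database name, fields, and log types are illustrative assumptions, not the paper's schema:

```python
# Store heterogeneous log documents and aggregate counts per type.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumed cluster address
logs = client["bank_logs"]["unstructured"]          # schema-free collection

# Unstructured entries with different fields coexist in one collection.
logs.insert_many([
    {"ts": datetime.now(timezone.utc), "type": "transfer", "amount": 1200},
    {"ts": datetime.now(timezone.utc), "type": "login", "client_ip": "10.0.0.7"},
])

# Aggregated counts per log type, analogous to the log graph generator's input.
per_type = logs.aggregate([{"$group": {"_id": "$type", "n": {"$sum": 1}}}])
print(list(per_type))
```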

A Study on Load-carrying Capacity Design Criteria of Jack-up Rigs under Environmental Loading Conditions (환경하중을 고려한 Jack-up rig의 내하력 설계 기준에 대한 연구)

  • Park, Joo Shin;Ha, Yeon Chul;Seo, Jung Kwan
    • Journal of the Korean Society of Marine Environment & Safety / v.26 no.1 / pp.103-113 / 2020
  • Jack-up drilling rigs are widely used in the offshore oil and gas exploration industry. Although originally designed for use in shallow water, trends in the energy industry have led to growing demand for their use in deep seas and harsh environmental conditions. To extend the operating range of jack-up units, their design must be based on reliable analysis while eliminating excessive conservatism. In current industrial practice, jack-up drilling rigs are designed using the working (or allowable) stress design (WSD) method. Recently, classification societies have developed specific regulations based on the load and resistance factor design (LRFD) method, which emphasizes reliability. This statistical method adopts the concept of limit-state design and uses factored loads and resistance factors to account for uncertainty in the loads and in the computed strength of the leg components of a jack-up drilling rig. The key differences between the LRFD and WSD methods must be identified to enable appropriate use of the LRFD method for designing jack-up rigs. Therefore, the aim of this study is to compare and quantitatively investigate actual jack-up lattice leg structures designed by the WSD and LRFD methods under different environmental load-to-dead-load ratios, thereby delineating the load-to-capacity ratios of rigs designed using these methods under these environmental conditions. The comparison shows that the unity-check (UC) values of jack-up rigs designed using the WSD and LRFD methods differ by approximately 31% with respect to the API-RP code basis, results that are significantly advantageous for leg design. It can be observed that the LRFD method is more advantageous for structural optimization than the WSD method.
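For reference, the two design philosophies can be written in their standard textbook form (symbols are generic, not taken from the paper):

```latex
% WSD: the combined (unfactored) load effect must stay below an allowable
% stress, i.e. the yield stress reduced by a single safety factor SF.
\sigma_{\text{dead}} + \sigma_{\text{env}} \;\le\; \sigma_{\text{allow}}
  = \frac{\sigma_{\text{yield}}}{SF}

% LRFD: each load class carries its own partial factor and the nominal
% resistance R_n is reduced by a resistance factor \phi; the unity check
% (UC) is the ratio of factored demand to factored capacity.
\gamma_D Q_D + \gamma_E Q_E \;\le\; \phi\, R_n,
\qquad
UC = \frac{\gamma_D Q_D + \gamma_E Q_E}{\phi\, R_n} \;\le\; 1
```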

A Study on The Security Vulnerability Analysis of Open an Automatic Demand Response System (개방형 자동 수요 반응 시스템 보안 취약성 분석에 관한 연구)

  • Chae, Hyeon-Ho;Lee, June-Kyoung;Lee, Kyoung-Hak
    • Journal of Digital Convergence / v.14 no.5 / pp.333-339 / 2016
  • Technologies that optimize the use and supply of electric power between consumers and suppliers have been on the rise in the smart grid power market, with demand management increasingly based on the Internet. OpenADR 2.0b is the Open Automated Demand Response protocol that can deliver the demand-response signals needed for power demand management to the electricity supplier, the system supplier, and even the end user. This paper uses the EPRI open-source implementation, the most credible and most widely deployed one, and analyzes the variety of security vulnerabilities that VEN and VTN systems built from it may have. Using a simulator for attacking the openADR protocol, we attacked the EPRI open-source VEN/VTN implementation in a variety of ways. The analysis shows that the VEN/VTN system is vulnerable to parameter-tampering attacks and service-flow-falsification attacks. In conclusion, anyone implementing an openADR 2.0b system in an open or two-way-communication smart grid network should consider these security vulnerabilities and be sure to adopt appropriate security technologies and services.
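A hypothetical sketch of a parameter-tampering probe in the spirit of the paper's attack simulator; the VTN endpoint path, the XML fields, and the VEN identifier are illustrative assumptions, not taken from the EPRI code or the paper:

```python
# Send a well-formed request whose venID is replaced with another party's
# identifier; an unvalidated VTN may return that VEN's event data.
import requests

VTN_URL = "http://vtn.example.com/OpenADR2/Simple/2.0b/EiEvent"  # assumed endpoint

tampered_payload = """<?xml version="1.0" encoding="UTF-8"?>
<oadrPayload>
  <oadrRequestEvent>
    <venID>OTHER-VEN-001</venID>  <!-- tampered parameter -->
  </oadrRequestEvent>
</oadrPayload>"""

resp = requests.post(
    VTN_URL,
    data=tampered_payload,
    headers={"Content-Type": "application/xml"},
    timeout=10,
)
# A 200 response carrying another VEN's events would indicate the
# parameter-tampering weakness the paper reports.
print(resp.status_code, resp.text[:200])
```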

Color Component Analysis For Image Retrieval (이미지 검색을 위한 색상 성분 분석)

  • Choi, Young-Kwan;Choi, Chul;Park, Jang-Chun
    • The KIPS Transactions: Part B / v.11B no.4 / pp.403-410 / 2004
  • Recently, studies of image analysis as a preprocessing stage for medical image analysis or image retrieval have been actively carried out. This paper proposes a way of utilizing color components for image retrieval: retrieval is based on color components, and the CLCM (Color Level Co-occurrence Matrix) and statistical techniques are used for color analysis. The CLCM proposed in this paper projects color components onto a 3D space through a geometric rotation transform and then interprets the distribution produced by their spatial relationships. The CLCM is a 2D histogram formed in a color model that is created by geometrically rotating the original color model, and a statistical technique is used to analyze it. Like the CLCM, the GLCM (Gray Level Co-occurrence Matrix) [1] and Invariant Moments [2,3] use 2D distribution charts and basic statistical techniques to interpret 2D data. However, even though the GLCM and Invariant Moments are optimized for their respective domains, they cannot fully interpret irregular data in spatial coordinates; because they rely only on basic statistical techniques, the reliability of the extracted features is low. To interpret the spatial relationships and weights of the data, this study uses Principal Component Analysis [4,5], a multivariate statistical method. To increase accuracy, we propose projecting color components onto 3D space, rotating them, and then extracting data features from all angles.
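A minimal sketch of the rotate-then-PCA feature extraction the abstract outlines, assuming RGB pixels and a single z-axis rotation; the angle and feature layout are illustrative choices, not the paper's:

```python
# Rotate color components in 3D, then extract PCA-based retrieval features.
import numpy as np
from sklearn.decomposition import PCA

rgb = np.random.randint(0, 256, size=(5000, 3)).astype(float)  # placeholder pixels

# Geometric rotation of the color space about one axis (assumed z-axis, 45°),
# analogous to the rotate transform used to build the CLCM.
theta = np.pi / 4
rot = np.array([[np.cos(theta), -np.sin(theta), 0],
                [np.sin(theta),  np.cos(theta), 0],
                [0,              0,             1]])
rotated = rgb @ rot.T

# PCA captures the spatial relationships and weights of the rotated data:
# component directions and explained variance serve as retrieval features.
pca = PCA(n_components=3).fit(rotated)
features = np.hstack([pca.components_.ravel(), pca.explained_variance_ratio_])
print(features.shape)  # (12,) feature vector per image
```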

Development Status of Ion Sources for the Neutral Beam Injection System for KSTAR Tokamak Plasma Heating (KSTAR 토카막 플라즈마 가열을 위한 중성 입자빔 입사장치용 이온원 개발 현황)

  • Kim, Tae-Seong;Jeong, Seung-Ho;Jang, Du-Hui;Lee, Gwang-Won;In, Sang-Yeol;O, Byeong-Hun;Jang, Dae-Sik;Jin, Jeong-Tae;Song, U-Seop
    • Proceedings of the Korean Vacuum Society Conference / 2013.02a / pp.559-559 / 2013
  • KSTAR (Korea Superconducting Tokamak Advanced Research) is a mid-scale tokamak experimental device developed to establish the scientific and technological basis for a fusion reactor, one of the next-generation energy sources. Its main goals are to extend the tokamak operating regime and secure stability, to study methods for reaching steady-state operation, and to realize optimized plasma conditions and continuous operation. To this end, the plasma must be heated to conditions close to fusion ignition, and additional external heating is essential beyond the ohmic heating of the tokamak itself. The neutral beam injection (NBI) system is the most reliable auxiliary heating system currently used in tokamaks, and the Korea Atomic Energy Research Institute (KAERI) has been developing the NBI system for the KSTAR tokamak since 1997. The NBI system consists largely of an ion source, a vacuum vessel, a calorimeter, vacuum pumps, a neutralizer, an ion dump, and bending magnets; among these, the ion source is the key component that determines the performance of the neutral beam. Recently, KAERI completed development of an ion source for a 2 MW NBI system and installed it on KSTAR; as of 2013, two ion sources are mounted on KSTAR, together injecting more than about 3 MW of deuterium neutral beam power and contributing greatly to the achievement of H-mode and to operation-scenario studies in KSTAR experiments. The first ion source developed at KAERI was based on the US LPIS (Long Pulse Ion Source) used in the TFTR device. An ion source consists of a plasma generator, which produces the plasma, and an accelerator, which extracts and accelerates the ions. In the development process, the cooling circuits of the plasma generator and accelerator were first redesigned to match the heat loads required for KSTAR's long-pulse operation. The plasma generator was then revised and improved with regard to discharge duration and stability, plasma density uniformity, rated operation, and discharge efficiency. For the accelerator, to overcome the limitations of domestic fabrication technology, the beam extraction grid was changed from the slit-type grid of the TFTR US LPIS model to a circular-aperture type; subsequently, the grid size, gap, and shape were modified to address the high-voltage holding problems of the accelerating electrodes, the beam extraction current and power, the optical quality of the extracted beam, and stability over the beam extraction period. This paper reviews, in chronological order, the ion sources developed at KAERI and discusses the achievements to date, remaining problems, and future development directions.


A Study on Regionalization of Parameters for Sacramento Continuous Rainfall-Runoff Model Using Watershed Characteristics (유역특성인자를 활용한 Sacramento 장기유출모형의 매개변수 지역화 기법 연구)

  • Kim, Tae-Jeong;Jeong, Ga-In;Kim, Ki-Young;Kwon, Hyun-Han
    • Journal of Korea Water Resources Association / v.48 no.10 / pp.793-806 / 2015
  • The simulation of natural streamflow at ungauged basins is one of the fundamental challenges in the hydrology community. The key to runoff simulation in ungauged basins generally lies in reliable parameter estimation for a rainfall-runoff model; however, such estimation is a complex issue because hydrologic data are insufficient. This study aims to regionalize the parameters of a continuous rainfall-runoff model in conjunction with a Bayesian statistical technique, so as to account more precisely for the uncertainty associated with the parameters. First, we employed a Bayesian Markov chain Monte Carlo scheme to estimate the parameters of the Sacramento rainfall-runoff model: the model was calibrated against observed daily runoff data, and the posterior density functions of the parameters were derived. Second, we applied a multiple linear regression model to the parameter set and the watershed characteristics to obtain functional relationships between pairs of variables. The proposed model was validated on gauged watersheds using efficiency criteria such as the Nash-Sutcliffe efficiency, the index of agreement, and the correlation coefficient.
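A minimal sketch of the regionalization step, assuming calibrated (posterior-mean) parameters for gauged basins and a handful of watershed descriptors; all names and data below are placeholders, not the paper's dataset:

```python
# Regionalize one rainfall-runoff parameter from watershed characteristics.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_basins = 30

# Watershed characteristics (e.g., area, slope, mean rainfall) per gauged basin.
X = rng.normal(size=(n_basins, 3))
# One calibrated Sacramento parameter per basin (e.g., posterior mean from MCMC).
y = 0.8 * X[:, 0] - 0.3 * X[:, 2] + rng.normal(scale=0.1, size=n_basins)

reg = LinearRegression().fit(X, y)           # regional transfer function
x_ungauged = np.array([[0.5, -1.0, 0.2]])    # descriptors of an ungauged basin
print("regionalized parameter:", reg.predict(x_ungauged))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency, used to validate on gauged watersheds."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```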