• Title/Summary/Keyword: dynamic analysis method


A Study on Ransomware Detection Methods in Actual Cases of Public Institutions (공공기관 실제 사례로 보는 랜섬웨어 탐지 방안에 대한 연구)

  • Yong Ju Park;Huy Kang Kim
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.3
    • /
    • pp.499-510
    • /
    • 2023
  • Recently, intelligent and advanced cyber attacks have increasingly targeted the computer networks of public institutions using files containing malicious code, or have leaked information, and the resulting damage is growing. Even public institutions equipped with various information protection systems can detect known attacks, but existing signature-based or static-analysis-based detection of malware and ransomware files remains vulnerable to unknown, dynamic, and encrypted attacks. The detection method proposed in this study extracts the detection result data of systems capable of detecting malicious code and ransomware among the information protection systems actually used by public institutions, derives various attributes by combining them, and applies machine learning classification algorithms. Experiments show how the derived attributes are classified and which attributes significantly affect the classification result and its accuracy. Although the effect of including or excluding a specific attribute differs across algorithms, training with certain attributes increases accuracy, and the results are expected to be useful for attribute selection when building algorithms that detect malicious code, ransomware files, and abnormal behavior in information protection systems.
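
As an illustration of the attribute-selection experiment described above (not the authors' actual pipeline), the sketch below trains a simple nearest-centroid classifier on hypothetical detection-log attributes and compares accuracy with and without one attribute; all attribute names and records are invented for illustration.

```python
# Illustrative sketch, not the paper's pipeline: measure how dropping one
# detection-log attribute changes the accuracy of a simple classifier.
# Hypothetical attribute vectors: [entropy_high, mass_file_writes, signature_hit]

def centroid(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def predict(x, c0, c1):
    d0 = sum((a - b) ** 2 for a, b in zip(x, c0))
    d1 = sum((a - b) ** 2 for a, b in zip(x, c1))
    return 0 if d0 <= d1 else 1

def accuracy(train, labels, test, test_labels, keep):
    proj = lambda r: [r[i] for i in keep]   # keep: attribute indices used
    c0 = centroid([proj(r) for r, y in zip(train, labels) if y == 0])
    c1 = centroid([proj(r) for r, y in zip(train, labels) if y == 1])
    hits = sum(predict(proj(x), c0, c1) == y for x, y in zip(test, test_labels))
    return hits / len(test)

train = [[1, 1, 0], [1, 1, 0], [1, 0, 0],   # ransomware-like samples
         [0, 0, 1], [0, 1, 0], [0, 0, 0]]   # benign samples
labels = [1, 1, 1, 0, 0, 0]
test, test_labels = [[1, 0, 0], [0, 1, 0]], [1, 0]

acc_all = accuracy(train, labels, test, test_labels, keep=[0, 1, 2])
acc_no_entropy = accuracy(train, labels, test, test_labels, keep=[1, 2])
print(acc_all, acc_no_entropy)  # dropping the entropy attribute hurts accuracy
```

The same with/without comparison, run per attribute, is how an attribute's effect on accuracy can be quantified.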

A Study on the Calculation of Optimal Compensation Capacity of Reactive Power for Grid Connection of Offshore Wind Farms (해상풍력단지 전력계통 연계를 위한 무효전력 최적 보상용량 계산에 관한 연구)

  • Seong-Min Han;Joo-Hyuk Park;Chang-Hyun Hwang;Chae-Joo Moon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.19 no.1
    • /
    • pp.65-76
    • /
    • 2024
  • With the recent activation of the offshore wind power industry, power plants exceeding 400MW in scale, comparable to traditional thermal power plants, are being developed. Renewable generation is characterized by intermittency that depends on the energy source, and modern renewable power generation facilities are structured around controllable inverter technology. As the integration of renewable energy sources into the grid expands, the grid codes for power system connection are becoming progressively better defined, leading to active discussion and evaluation in this area. In this paper, we propose a method for selecting the optimal reactive power compensation capacity when multiple offshore wind farms are integrated and connected through a shared interconnection facility in compliance with grid codes. Based on the grid code requirements, we analyze the reactive power compensation and transient stability of the 400MW wind power generation site under development in the southwest sea of Jeonbuk. The analysis constructs a generation site database in PSS/E (Power System Simulation for Engineering), incorporating turbine layouts and cable data, calculates the reactive power due to charging current in the internal and external network cables, and determines the reactive power compensation capacity at the interconnection point. Static and dynamic stability assessments are also conducted against the power system database.
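
The cable-charging calculation the abstract describes can be sketched with the standard three-phase relation Q = ωCU²L. The cable parameters below are illustrative placeholders, not the Jeonbuk site data.

```python
import math

# Hedged sketch of the cable-charging reactive power that must be offset
# at the interconnection point. All cable parameters are illustrative.

def cable_charging_mvar(f_hz, c_uf_per_km, u_kv, length_km):
    """Three-phase charging reactive power, Q = omega * C * U^2 * L (Mvar)."""
    omega = 2 * math.pi * f_hz
    c = c_uf_per_km * 1e-6          # per-phase capacitance, F/km
    u = u_kv * 1e3                  # line-to-line voltage, V
    return omega * c * u * u * length_km / 1e6

# Hypothetical example: 154 kV export cable, 0.2 uF/km, 20 km
q_export = cable_charging_mvar(60, 0.2, 154, 20)
# Hypothetical internal 33 kV array cables, 0.25 uF/km, 40 km total
q_array = cable_charging_mvar(60, 0.25, 33, 40)
compensation = q_export + q_array   # candidate shunt reactor sizing
print(round(q_export, 1), round(q_array, 1), round(compensation, 1))
```

In practice the compensation capacity would be refined with PSS/E load flow and stability runs rather than this closed-form estimate.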

Analysis of the Impact of Reflected Waves on Deep Neural Network-Based Heartbeat Detection for Pulsatile Extracorporeal Membrane Oxygenator Control (반사파가 박동형 체외막산화기 제어에 사용되는 심층신경망의 심장 박동 감지에 미치는 영향 분석)

  • Seo Jun Yoon;Hyun Woo Jang;Seong Wook Choi
    • Journal of Biomedical Engineering Research
    • /
    • v.45 no.3
    • /
    • pp.128-137
    • /
    • 2024
  • A pulsatile Extracorporeal Membrane Oxygenator (p-ECMO) with counter-pulsation control (CPC), which ejects blood during the diastolic rather than the systolic phase of the heart, needs to be developed because conventional ECMO is known to cause fatal complications such as ventricular dilation and pulmonary edema. A promising method to simultaneously detect the pulsations of the heart and p-ECMO is to analyze blood pressure waveforms using deep neural network (DNN) technology. However, accurate detection of cardiac rhythms by DNNs is challenging due to various noises such as pulsations from p-ECMO, reflected waves in the vessels, and other dynamic noises. This study evaluates the accuracy of DNNs developed for CPC in p-ECMO using human-like blood pressure waveforms reproduced in an in-vitro experiment. In particular, an experimental setup that reproduces the reflected waves commonly observed in actual patients was developed, and the impact of these waves on DNN judgments was assessed using a multiple DNN (m-DNN) that provides accurate determinations along with a separate index of heartbeat recognition ability. In the setup inducing reflected waves, the blood pressure waveform became increasingly complex, coinciding with an increase in harmonic components evident in the Fast Fourier Transform of the pressure wave. The recognition score (RS) of the DNNs decreased for waveforms with significant harmonic components separate from the frequency components caused by the heart and p-ECMO. Each DNN trained on waveforms without reflected waves showed a low RS when faced with waveforms containing them; however, the accuracy of the final results from the m-DNN remained high even in the presence of reflected waves.
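
The harmonic analysis step can be illustrated with a small discrete Fourier transform on synthetic data (invented here, not the in-vitro measurements): adding a reflected-wave harmonic to a clean pressure waveform produces an extra spectral peak.

```python
import cmath, math

# Illustrative sketch with synthetic waveforms: reflected waves add
# harmonic components that a DFT makes visible as extra spectral peaks.

def dft_magnitudes(x):
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n for k in range(n // 2)]

fs, seconds = 64, 4                  # 64 Hz sampling, 4 s window
n = fs * seconds
heart_hz, harmonic_hz = 1.0, 3.0     # 60 bpm fundamental; reflected-wave harmonic

clean = [math.sin(2 * math.pi * heart_hz * t / fs) for t in range(n)]
reflected = [clean[t] + 0.5 * math.sin(2 * math.pi * harmonic_hz * t / fs)
             for t in range(n)]

bin_hz = fs / n                      # 0.25 Hz per DFT bin
mag_clean = dft_magnitudes(clean)
mag_refl = dft_magnitudes(reflected)
k_harm = int(harmonic_hz / bin_hz)   # bin index of the 3 Hz harmonic
print(mag_clean[k_harm], mag_refl[k_harm])
```

The harmonic bin is essentially empty for the clean waveform but carries a clear peak once the reflected component is added, which mirrors the FFT observation reported above.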

Fundamental Study on Establishing the Subgrade Compaction Control Criteria of DCPT with Laboratory Test and In-situ Tests (실내 및 현장실험를 통한 DCPT의 노상토 다짐관리기준 정립에 관한 기초연구)

  • Choi, Jun-Seong
    • International Journal of Highway Engineering
    • /
    • v.10 no.4
    • /
    • pp.103-116
    • /
    • 2008
  • In this study, the Dynamic Cone Penetration Test (DCPT), an in-situ testing method, is presented to establish new compaction control criteria that use a mechanical property, the elastic modulus, instead of unit weight for field compaction control. Soil chamber tests and in-situ tests were carried out to confirm that DCPT can predict the designed elastic modulus after field compaction, and correlation analyses among DCPT, CBR, and the resilient modulus of the subgrade were performed. A DCPT test spacing criterion for construction sites was also proposed from the literature review. In the laboratory tests, Livneh's equation gave the best correlation between the penetration rate (PR) of DCPT and CBR, and George and Pradesh's equation best predicted the resilient modulus. Against the resilient modulus from FWD, Gudishala's equation estimates slightly larger values, Chen's equation slightly smaller, and the KICT equation smaller still; however, using the results of laboratory resilient modulus tests that consider the deviatoric and confining stresses from moving vehicles, the KICT equation was the best. In the in-situ DCPT tests, variation in PR can occur according to the size distribution at the penetration points, so a DCPT test spacing was proposed to reduce the scatter of PR. The average PR also differed by subgrade material even where the subgrade satisfied the degree of compaction: large-sized materials in particular showed smaller PR, and field water content was found to influence the degree of compaction strongly but the average PR of the DCPT tests only slightly.
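
The PR-to-CBR correlation step can be sketched numerically. A commonly cited form of Livneh's equation is used below; the coefficients (2.20, 0.71) are quoted from the literature and should be verified against the paper's source rather than taken as definitive.

```python
import math

# Sketch of the DCPT PR -> CBR correlation. Coefficients are a commonly
# cited form of Livneh's equation and are assumptions, not the paper's fit.

def cbr_from_pr(pr_mm_per_blow):
    """Estimate CBR (%) from DCPT penetration rate PR (mm/blow)."""
    return 10 ** (2.20 - 0.71 * math.log10(pr_mm_per_blow) ** 1.5)

for pr in (5, 10, 20, 40):
    print(pr, round(cbr_from_pr(pr), 1))
```

As expected, a higher penetration rate (softer subgrade) maps to a lower estimated CBR.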


A Study on the Nature observation and Scientific methodology in Zhōuyì周易 - Focusing on its association with Contemporary Science (『주역(周易)』의 자연관찰과 과학적 방법론에 관한 연구 - 『주역(周易)』에 나타난 현대자연과학적 의미를 중심으로 -)

  • Shin, Jungwon
    • (The)Study of the Eastern Classic
    • /
    • no.71
    • /
    • pp.99-128
    • /
    • 2018
  • Zhōuyì 周易 is intended to explain human affairs by observing the images and workings of all things in the universe, abstracting them into the bāguà 八卦, and calculating processes and inducing outcomes by the method of stalk divination; it is here that this paper finds the origin of the natural scientific thought of Zhōuyì. Zhōuyì's way of thinking about natural science is distinguished from that of the West. In the West, people dismantled objects into parts until they reached the atom and analyzed them by the principle of causality to draw axiomatic truths, whereas Zhōuyì observed and studied the dynamic functions and changes of all things for the convergence of the whole. While Zhōuyì's way of thinking did not contribute to the development of modern science, that of the West overwhelmed Asian development through the period of enlightenment in the 16th and 17th centuries. This paper tries to articulate the points where Zhōuyì can share its theory with contemporary science by finding traces of scientific thought in Zhōuyì, grounded in the methodology of natural science and the scientific statements proposed by Zhōuyì. The essential concepts of Zhōuyì are induced from all things in nature; this can be considered the idea of 法自然 (emulating the patterns and examples of nature). The ancients also observed the images and changes seen in the habits of animals, plants, and human beings to sense and perceive their laws; these are regarded as the methodology of natural science in Zhōuyì. As a book of divination, the stalk divination method is designed to calculate the future using a system of 'numbers'. Tàijí 太極, yīnyáng 陰陽, the four symbols 四象, bāguà 八卦, and wǔxíng 五行 are the essential concepts of Zhōuyì for representing the dynamic phenomena and changes of the natural order.
Among them, bāguà 八卦 is a presentment that explains the structure of the world not by individual analysis of things but by unification of the whole through the contradictions and interchanges among them, reaching new orders. To date, studies of Zhōuyì in Korea have focused on traditional perspectives such as political and ethical philosophy. Some recent studies that interpret Zhōuyì with a scientific inclination have generated the controversy 'Can Zhōuyì be a science?', on which scholars have had a hard time reaching agreement. This paper tries to find the headwaters of contemporary natural science by elaborating the methodology of natural science stated in Zhōuyì.

Analysis and Evaluation of Frequent Pattern Mining Technique based on Landmark Window (랜드마크 윈도우 기반의 빈발 패턴 마이닝 기법의 분석 및 성능평가)

  • Pyun, Gwangbum;Yun, Unil
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.101-107
    • /
    • 2014
  • With the development of online services, recent forms of databases have changed from static database structures to dynamic stream database structures. Previous data mining techniques have been used as decision-making tools in tasks such as establishing marketing strategies and DNA analysis. However, the capability to analyze real-time data more quickly is necessary in areas of recent interest such as sensor networks, robotics, and artificial intelligence. Landmark window-based frequent pattern mining, one of the stream mining approaches, performs mining operations on parts of the database or on each transaction, instead of on all the data. In this paper, we analyze and evaluate two well-known landmark window-based frequent pattern mining algorithms, Lossy counting and hMiner. When Lossy counting mines frequent patterns from a set of new transactions, it performs union operations between the previous and current mining results. hMiner, a state-of-the-art algorithm based on the landmark window model, conducts mining operations whenever a new transaction occurs. Since hMiner extracts frequent patterns as soon as a new transaction is entered, the latest mining results reflect real-time information; for this reason, such algorithms are also called online mining approaches. We evaluate and compare the performance of the primitive algorithm, Lossy counting, and the latest one, hMiner. As the criteria of our performance analysis, we first consider each algorithm's total runtime and average processing time per transaction. In addition, to compare the efficiency of their storage structures, their maximum memory usage is evaluated. Lastly, we show how stably the two algorithms perform mining on databases with gradually increasing numbers of items.
With respect to mining time and transaction processing, hMiner is faster than Lossy counting. Since hMiner stores candidate frequent patterns in a hash structure, it can access them directly, whereas Lossy counting stores them in a lattice and must traverse multiple nodes to reach a candidate pattern. On the other hand, hMiner performs worse than Lossy counting in terms of maximum memory usage: hMiner must keep all of the information for each candidate frequent pattern in its hash buckets, while Lossy counting reduces this information by using the lattice, whose storage can share items concurrently included in multiple patterns, making its memory usage more efficient. However, hMiner shows better scalability for the following reasons: as the number of items increases, the number of shared items decreases, weakening Lossy counting's memory efficiency, and as the number of transactions grows, its pruning effect worsens. From the experimental results, we conclude that landmark window-based frequent pattern mining algorithms are suitable for real-time systems although they require a significant amount of memory; hence, their data structures need to be made more efficient so that they can also be used in resource-constrained environments such as WSNs (wireless sensor networks).
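
The core of the Lossy counting approach discussed above can be sketched compactly. The sketch below is the classic item-level version for brevity; the paper's setting mines patterns from transactions, which adds subset enumeration on top of this, and the stream contents are invented.

```python
import math

# Minimal pure-Python sketch of Lossy counting (item-level version).
# entries maps item -> [count, delta]; entries with count + delta <= current
# bucket index are pruned at every bucket boundary.

def lossy_count(stream, epsilon):
    width = math.ceil(1 / epsilon)          # bucket width
    entries = {}
    for n, item in enumerate(stream, start=1):
        bucket = math.ceil(n / width)
        if item in entries:
            entries[item][0] += 1
        else:
            entries[item] = [1, bucket - 1]  # delta = max possible missed count
        if n % width == 0:                   # bucket boundary: prune
            entries = {k: v for k, v in entries.items()
                       if v[0] + v[1] > bucket}
    return entries

stream = ["a"] * 50 + ["b"] * 30 + ["c"] * 2 + ["a", "b"] * 9
result = lossy_count(stream, epsilon=0.1)
# frequent items survive; the rare item "c" is pruned at a bucket boundary
print(sorted(result))
```

This also shows why memory stays bounded: infrequent entries are repeatedly discarded, at the cost of the usual epsilon-bounded undercount.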

Forest Community Structure of Mt. Bukhan Area (북한산 지역의 삼림군집구조에 관한 연구)

  • 박인협;이경재;조재창
    • Korean Journal of Environment and Ecology
    • /
    • v.1 no.1
    • /
    • pp.1-23
    • /
    • 1987
  • To investigate the forest structure of Mt. Bukhan, ranging from Seoul to Kyongkido, twenty plots were set up by vegetation physiognomy and a vegetation analysis was carried out. According to the leading dominant tree species in the canopy stratum, the forest communities were classified into three large groups, natural, semi-natural, and artificial forest communities, covering 82.64, 7.03, and 5.71% of the Mt. Bukhan area, respectively. Pure or mixed natural communities of Pinus densiflora and Quercus mongolica were the major forest communities, covering 70.8% of the area. The important planted tree species were Robinia pseudoacacia, Pinus rigida, and Alnus hirsuta, mainly planted on the southern slope and along roadsides. The degrees of human disturbance of vegetation of 8, 7, and 6 covered 82.64, 0, and 12.74% of the area, respectively. In terms of forest dimensions, most communities were young forests with a mean DBH of 20cm and a canopy height below 10m; however, a few mature communities of Pinus densiflora or Quercus mongolica were found in small areas. Shannon's species diversity of the major natural forest communities, the pure or mixed communities of Pinus densiflora and Quercus mongolica, ranged from 1.085 to 1.242. According to the stand dynamics analysis by DBH class distribution, the present Quercus mongolica and Robinia pseudoacacia communities are likely to maintain their present structure for a long time, and most other communities are expected to succeed to Quercus mongolica communities; however, a few communities invaded by Robinia pseudoacacia, and the Quercus aliena-Quercus acutissima communities, may succeed to Robinia pseudoacacia and Quercus aliena communities, respectively. DCA was the most effective ordination method in this study. The DCA ordination showed that the successional trend of tree species appears to run, in the upper layer, from Pinus densiflora through Quercus serrata, Prunus sargentii, and
Sorbus alnifolia to Q. mongolica, Fraxinus mandshurica, and F. rhynchophylla, and from Zanthoxylum schinifolium and Lespedeza cyrtobotrya through Rhus trichocarpa, Rh. verniciflua, Rhododendron mucronulatum, and Rh. schlippenbachii to Acer pseudosieboldianum, Magnolia sieboldii, and Euonymus sieboldianus.
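
Shannon's species diversity, the index whose range (1.085~1.242) is reported above, is H' = −Σ pᵢ ln pᵢ over species proportions. The stem counts below are illustrative, not the study's plot data.

```python
import math

# Shannon's diversity index H' = -sum(p_i * ln p_i). Stem counts are
# hypothetical, chosen only to illustrate the calculation.

def shannon_diversity(counts):
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in props)

# e.g. five species in a mixed Pinus densiflora - Quercus mongolica plot
counts = [45, 30, 12, 8, 5]
h = shannon_diversity(counts)
print(round(h, 3))
```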


An Evaluation on Visitor Satisfaction in Waterfront Park (수변공원의 이용 만족도 평가)

  • Chang, Min-Sook;Chang, Byung-Koon
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.38 no.3
    • /
    • pp.41-52
    • /
    • 2010
  • The purpose of this paper is to evaluate visitor satisfaction (VS) in waterfront parks in terms of resources, facilities, embodiment of theme (ET), site composition (SC), relaxation activity space (RAS), and dynamic activity space (DAS), the supply-side components in the planning process of waterfront parks, in order to answer the research question: 'How is visitor satisfaction with waterfront parks determined?' After reviewing the literature on parks and the building process of waterfront parks in Korea, we constructed a conceptual framework and formulated a research hypothesis. We obtained data through a questionnaire survey of 327 visitors at waterfront parks, based on the quota sampling method, and analyzed the data using path analysis. We found that: 1) the direct effects of resources and facilities on VS were 0.273 and 0.306, respectively, while their indirect effects were 0.114 and 0.170; 2) the direct effect of SC on VS was 0.243, while ET had no significant direct effect on VS; the indirect effects of ET and SC on VS were 0.059 and 0.018, respectively; 3) the direct effect of RAS on VS was 0.129, while the indirect effects of RAS and DAS on VS were 0.002 and 0.017, respectively; 4) in decreasing order of causal effect: facilities, resources, SC, RAS, ET, and DAS; 5) resources and facilities, as the park foundation, account for 64.84 percent of the total causal effect, while ET and SC account for 24.04 percent and RAS and DAS for 11.12 percent. These results imply that: 1) existing waterfront parks should be regenerated by embodying water-related themes and improving facilities for RAS and visitor programs and/or facilities for DAS;
2) the relationships among ET, SC, RAS, and DAS should be strengthened for a significant improvement of VS; and 3) the process-oriented approach proved highly useful for developing substantive theory and methodology. It is recommended that a structural equation model of waterfront parks be developed using more empirical data and that this approach be widely applied to test its validity.
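
In path analysis, a component's total causal effect is its direct effect plus its indirect effect. The coefficients below are taken from the abstract (ET's direct effect on VS was not significant, so it is taken as 0 here), and they reproduce both the reported ranking and the 64.84-percent share of resources and facilities.

```python
# Total causal effect = direct + indirect, using the path coefficients
# reported in the abstract above.

effects = {                     # component: (direct, indirect)
    "facilities": (0.306, 0.170),
    "resources":  (0.273, 0.114),
    "SC":         (0.243, 0.018),
    "RAS":        (0.129, 0.002),
    "ET":         (0.0,   0.059),  # direct effect not significant -> 0
    "DAS":        (0.0,   0.017),
}
total = {k: d + i for k, (d, i) in effects.items()}
ranking = sorted(total, key=total.get, reverse=True)
share = (total["facilities"] + total["resources"]) / sum(total.values())
print(ranking)                  # facilities, resources, SC, RAS, ET, DAS
print(round(100 * share, 2))    # share of resources + facilities, in percent
```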

Research Trend Analysis Using Bibliographic Information and Citations of Cloud Computing Articles: Application of Social Network Analysis (클라우드 컴퓨팅 관련 논문의 서지정보 및 인용정보를 활용한 연구 동향 분석: 사회 네트워크 분석의 활용)

  • Kim, Dongsung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.195-211
    • /
    • 2014
  • Cloud computing services provide IT resources as services on demand. This is considered a key concept that will lead a shift from an ownership-based paradigm to a new pay-for-use paradigm, which can reduce the fixed cost of IT resources and improve flexibility and scalability. As IT services, cloud services have evolved from earlier computing concepts such as network computing, utility computing, server-based computing, and grid computing, so research into cloud computing is highly related to and combined with various relevant computing research areas. To identify promising research issues and topics in cloud computing, it is necessary to understand research trends in cloud computing more comprehensively. In this study, we collect bibliographic and citation information for cloud computing related research papers published in major international journals from 1994 to 2012, and analyze macroscopic trends and network changes in the citation relationships among papers and the co-occurrence relationships of keywords using social network analysis measures. Through the analysis, we identify the relationships and connections among research topics in cloud computing related areas and highlight new potential research topics. In addition, we visualize the dynamic changes of research topics relating to cloud computing using a proposed cloud computing 'research trend map'. A research trend map visualizes the positions of research topics in two-dimensional space, with the frequency of keywords (X-axis) and the rate of increase in the degree centrality of keywords (Y-axis) as the two dimensions. Based on these two values, the space of a research trend map is divided into four areas: maturation, growth, promising, and decline.
An area with high keyword frequency but a low rate of increase in degree centrality is defined as a mature technology area; the area where both keyword frequency and the increase rate of degree centrality are high is a growth technology area; the area where keyword frequency is low but the rate of increase in degree centrality is high is a promising technology area; and the area where both are low is a declining technology area. Based on this method, cloud computing research trend maps make it possible to easily grasp the main research trends in cloud computing and to explain the evolution of research topics. According to the analysis of citation relationships, research papers on security, distributed processing, and optical networking for cloud computing rank at the top based on the PageRank measure. From the analysis of keywords in research papers, cloud computing and grid computing showed high centrality in 2009, and keywords dealing with main elemental technologies such as data outsourcing, error detection methods, and infrastructure construction showed high centrality in 2010~2011. In 2012, security, virtualization, and resource management showed high centrality. Moreover, interest in the technical issues of cloud computing was found to increase gradually. From the annual cloud computing research trend maps, it was verified that security is located in the promising area, virtualization has moved from the promising area to the growth area, and grid computing and distributed systems have moved to the declining area. These results indicate that distributed systems and grid computing received much attention as similar computing paradigms in the early stage of cloud computing research.
The early stage of cloud computing research focused on understanding and investigating cloud computing as an emergent technology, linking it to relevant established computing concepts. After this stage, security and virtualization technologies became the main issues, which is reflected in their movement from the promising area to the growth area in the research trend maps. Moreover, this study reveals that current research in cloud computing has rapidly shifted from technical issues to application issues such as SLAs (Service Level Agreements).
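
The two-dimensional quadrant classification behind the research trend map can be sketched directly. The keyword values and cutoffs below are illustrative placeholders, not the paper's measured frequencies or centrality growth rates.

```python
# Sketch of the research trend map classification: keyword frequency (X)
# and rate of increase in degree centrality (Y) split keywords into
# maturation / growth / promising / decline areas. All values are invented.

def trend_area(freq, centrality_growth, freq_cut, growth_cut):
    high_f = freq >= freq_cut
    high_g = centrality_growth >= growth_cut
    if high_f and high_g:
        return "growth"
    if high_f:
        return "maturation"
    if high_g:
        return "promising"
    return "decline"

keywords = {                       # keyword: (frequency, centrality growth)
    "virtualization":  (80, 0.6),
    "security":        (25, 0.7),
    "grid computing":  (30, 0.05),
    "cloud computing": (90, 0.1),
}
areas = {k: trend_area(f, g, freq_cut=50, growth_cut=0.3)
         for k, (f, g) in keywords.items()}
print(areas)
```

With these toy values the quadrants echo the trends reported above: security lands in the promising area, virtualization in growth, and grid computing in decline.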

Glass Dissolution Rates From MCC-1 and Flow-Through Tests

  • Jeong, Seung-Young
    • Proceedings of the Korean Radioactive Waste Society Conference
    • /
    • 2004.06a
    • /
    • pp.257-258
    • /
    • 2004
  • The dose from radionuclides released from high-level radioactive waste (HLW) glasses as they corrode must be taken into account when assessing the performance of a disposal system. In the performance assessment (PA) calculations conducted for the proposed Yucca Mountain, Nevada, disposal system, the release of radionuclides is conservatively assumed to occur at the same rate at which the glass matrix dissolves. A simple model was developed to calculate the glass dissolution rate of HLW glasses in these PA calculations [1]. For the PA calculations conducted for Site Recommendation, it was necessary to identify ranges of parameter values that bounded the dissolution rates of the wide range of HLW glass compositions to be disposed. The values and ranges of the model parameters for the pH and temperature dependencies were extracted from the results of SPFT, static leach, and Soxhlet tests available in the literature, and static leach tests were conducted with a range of glass compositions to measure values for the glass composition parameter. The glass dissolution rate depends on temperature, pH, and the compositions of the glass and solution. The dissolution rate is calculated using Eq. 1: $rate = k_{0}\cdot10^{\eta\,pH}\cdot e^{-E_{a}/RT}\cdot(1-Q/K) + k_{long}$, where $k_{0}$, $\eta$, and $E_{a}$ are the parameters for glass composition, pH, and temperature dependence, respectively, and R is the gas constant. The term $(1-Q/K)$ is the affinity term, where Q is the ion activity product of the solution and K is the pseudo-equilibrium constant for the glass.
Values of the parameters $k_{0}$, $\eta$, and $E_{a}$ are determined under test conditions where the value of Q is maintained near zero, so that the value of the affinity term remains near 1. The dissolution rate under these conditions is referred to as the forward rate; it is the highest dissolution rate that can occur at a particular pH and temperature. The value of the parameter K is determined from experiments in which the ion activity product approaches the value of K, which decreases the affinity term and the dissolution rate. The highly dilute solutions required to measure the forward rate and extract values for $k_{0}$, $\eta$, and $E_{a}$ can be maintained by conducting dynamic tests in which the test solution is removed from the reaction cell and replaced with fresh solution. In the single-pass flow-through (SPFT) test method, this is done by continuously pumping the test solution through the reaction cell. Alternatively, static tests can be conducted with a solution volume large enough that the concentrations of dissolved glass components do not increase significantly during the test. Both SPFT and static tests can be conducted for a wide range of pH values and temperatures, and both have shortcomings. The SPFT test requires analysis of several solutions (typically 6-10) at each of several flow rates to determine the glass dissolution rate at each pH and temperature, and, as will be shown, the rate measured in an SPFT test depends on the solution flow rate. The solutions in static tests will eventually become concentrated enough to affect the dissolution rate. In both methods, a compromise is required between the need to minimize the effects of dissolved components on the dissolution rate and the need to attain solution concentrations high enough to analyze.
In this paper, we compare the results of static leach tests and SPFT tests conducted with a simple 5-component glass to confirm the equivalence of SPFT tests and static tests conducted with pH buffer solutions. Tests were conducted over the range of pH values most relevant for waste glass dissolution in a disposal system, and the glass and temperature were selected to allow direct comparison with SPFT tests conducted previously. The ability to measure parameter values with more than one test method, and an understanding of how the rate measured in each test is affected by various test parameters, provides added confidence in the measured values. The dissolution rate of a simple 5-component glass was measured at pH values of 6.2, 8.3, and 9.6 and at $70^{\circ}C$ using static tests and single-pass flow-through (SPFT) tests. Similar rates were measured with the two methods; however, the measured rates are about 10X higher than the rates measured previously for a glass of the same composition using an SPFT test method. The differences are attributed to the effects of the solution flow rate on the glass dissolution rate and to how the specific surface area of the crushed glass is estimated. This comparison indicates the need to standardize the SPFT test procedure.
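
The rate law above can be evaluated numerically to show the roles of the affinity term and the forward rate. The parameter values below are illustrative placeholders, not fitted HLW-glass values.

```python
import math

# Numerical sketch of the rate law
#   rate = k0 * 10^(eta*pH) * exp(-Ea/RT) * (1 - Q/K) + k_long
# with hypothetical parameter values (not fitted to any real glass).

R = 8.314  # gas constant, J/(mol*K)

def dissolution_rate(k0, eta, ea, temp_k, ph, q_over_k, k_long=0.0):
    return (k0 * 10 ** (eta * ph) * math.exp(-ea / (R * temp_k))
            * (1 - q_over_k) + k_long)

k0, eta, ea = 1.0e7, 0.4, 7.5e4   # hypothetical g/(m^2 d), -, J/mol

# Forward rate: dilute solution keeps the affinity term near 1 (Q/K ~ 0)
forward = dissolution_rate(k0, eta, ea, 343.15, 9.6, 0.0)
# Near saturation the affinity term shrinks, and so does the rate
near_sat = dissolution_rate(k0, eta, ea, 343.15, 9.6, 0.9)
print(forward > near_sat, round(near_sat / forward, 2))
```

With k_long = 0, the ratio of the two rates is exactly the ratio of the affinity terms, which is why keeping Q near zero isolates the forward rate.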
