• Title/Summary/Keyword: Network analysis method


Investigating Key Security Factors in Smart Factory: Focusing on Priority Analysis Using AHP Method (스마트팩토리의 주요 보안요인 연구: AHP를 활용한 우선순위 분석을 중심으로)

  • Jin Hoh;Ae Ri Lee
    • Information Systems Review
    • /
    • v.22 no.4
    • /
    • pp.185-203
    • /
    • 2020
  • With the advent of the 4th industrial revolution, the manufacturing industry is converging with ICT and entering the era of smart manufacturing. In a smart factory, all machines and facilities are connected via ICT, so security must be strengthened further because the factory is exposed to complex security threats that were not previously recognized. To reduce the risk of security incidents and implement smart factories successfully, it is necessary to identify the key security factors to be applied, taking into account the characteristics of the industrial environment of smart factories utilizing ICT. In this study, we propose a 'hierarchical classification model of security factors in smart factory' covering terminal, network, and platform/service categories, and analyze the importance of the security factors to be applied when developing smart factories. We conducted an importance assessment with groups of smart factory and security experts. The relative importance of the security factors was derived using the AHP technique, and the priority among the factors is presented. The results of this research contribute to building smart factories more securely and to establishing the information security required in the era of smart manufacturing.
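
The abstract does not include the authors' computation, but the core of the AHP step it describes is deriving priority weights from a pairwise comparison matrix and checking consistency. Below is a minimal, generic sketch of that calculation in Python; the 3×3 matrix over the terminal/network/platform-service categories is purely hypothetical and not taken from the study.

```python
import numpy as np

# Hypothetical pairwise comparison matrix over the paper's top-level
# categories (terminal, network, platform/service); values are invented
# for illustration only.
A = np.array([
    [1.0, 3.0, 2.0],
    [1/3, 1.0, 1/2],
    [1/2, 2.0, 1.0],
])

# AHP priority weights: normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random index for n = 3..5
cr = ci / ri

print("weights:", np.round(weights, 3), "CR:", round(cr, 3))  # CR < 0.1 is conventionally acceptable
```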

A Study on the Calculation of Optimal Compensation Capacity of Reactive Power for Grid Connection of Offshore Wind Farms (해상풍력단지 전력계통 연계를 위한 무효전력 최적 보상용량 계산에 관한 연구)

  • Seong-Min Han;Joo-Hyuk Park;Chang-Hyun Hwang;Chae-Joo Moon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.19 no.1
    • /
    • pp.65-76
    • /
    • 2024
  • With the recent activation of the offshore wind power industry, power plants exceeding 400 MW, a scale comparable to traditional thermal power plants, are being developed. Renewable generation is characterized by intermittency that depends on the energy source, and modern renewable power facilities are built around controllable inverter technology. As the integration of renewable energy sources into the grid expands, the grid codes for power system connection are becoming progressively more defined, leading to active discussion and evaluation in this area. In this paper, we propose a method for selecting the optimal reactive power compensation capacity when multiple offshore wind farms are integrated and connected through a shared interconnection facility so as to comply with the grid code. Based on the grid-code requirements, we analyze the reactive power compensation and transient stability of the 400 MW offshore wind farm under development in the southwest sea of Jeonbuk. The analysis involves constructing a generation-site database in PSS/E (Power System Simulator for Engineering) that incorporates turbine layouts and cable data. The study calculates the reactive power produced by the charging current of the internal and external network cables and determines the reactive power compensation capacity required at the interconnection point. Additionally, static and dynamic stability assessments are conducted by integrating the site model with the power system database.
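
As a rough illustration of the cable-charging calculation the abstract mentions, the sketch below uses the standard shunt-capacitance approximation Q = ωCV², where C is the total per-phase capacitance of a cable section and V the line-to-line voltage. The voltage levels, capacitances, and lengths are hypothetical and are not the Jeonbuk site data; the actual study performs this with a PSS/E database.

```python
import math

def cable_charging_mvar(v_kv_ll, c_uf_per_km, length_km, freq_hz=60.0):
    """Three-phase charging reactive power of a cable section (MVAr).

    Q = omega * C * V_LL^2, with C the total per-phase capacitance of the
    section. Simple shunt-capacitance approximation, no end effects.
    """
    omega = 2.0 * math.pi * freq_hz
    c_total_f = c_uf_per_km * 1e-6 * length_km        # F (per phase)
    v_ll = v_kv_ll * 1e3                              # V (line-to-line)
    return omega * c_total_f * v_ll**2 / 1e6          # MVAr

# Hypothetical figures: a 154 kV export cable and 22.9 kV array cables.
q_export = cable_charging_mvar(154.0, 0.20, 20.0)
q_array = cable_charging_mvar(22.9, 0.30, 35.0)
print(f"export cable ~{q_export:.1f} MVAr, array cables ~{q_array:.1f} MVAr")
# A shunt reactor or STATCOM at the interconnection point would be sized to
# absorb this surplus so the grid-code reactive power range can be met.
```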

Comparison of Molecular Characterization and Antimicrobial Resistance in Carbapenem-Resistant Klebsiella pneumoniae ST307 and Non-ST307 (Carbapenem 내성 Klebsiella pneumoniae ST307과 Non-ST307의 분자 특성 및 항균제 내성 비교)

  • Hye Hyun Cho
    • Microbiology and Biotechnology Letters
    • /
    • v.51 no.4
    • /
    • pp.500-506
    • /
    • 2023
  • Carbapenem-resistant Klebsiella pneumoniae (CRKP) is emerging as a worldwide public health threat. Recently, Klebsiella pneumoniae carbapenemase-2 (KPC-2)-producing sequence type (ST) 307 was identified as the main clone of CRKP, and dissemination of ST307 has been reported in South Korea. This study examined the molecular characteristics and antimicrobial resistance patterns of 50 CRKP isolates collected from a tertiary hospital in Daejeon from March 2020 to December 2021. Epidemiological relationships were analyzed by multilocus sequence typing (MLST), and antimicrobial susceptibility was determined using the disk-diffusion method. PCR and DNA sequence analysis were performed to identify carbapenemase genes. CRKP infections were significantly more frequent in males and in patients aged ≥ 60 years. Among the 50 CRKP isolates, 46 (92.0%) were multidrug-resistant (MDR) and 44 (88.0%) were carbapenemase-producing K. pneumoniae (CPKP). The major carbapenemase type was KPC-2 (36 isolates, 72.0%), while New Delhi metallo-β-lactamase-1 (NDM-1) and NDM-5 were identified in 7 isolates (14.0%) and 1 isolate (2.0%), respectively. In particular, 88.9% (32/36) of KPC-2-producing K. pneumoniae belonged to ST307, whereas 87.5% (7/8) of NDM-1/-5-producing K. pneumoniae belonged to non-ST307. These results suggest that proper infection control and an effective surveillance network are needed to prevent not only the spread of ST307 but also the emergence of non-ST307 clones.

Research trends in Journal of The Korean Society for School & Community Health Education on Vulnerable Populations from 2000 to 2023: Based on the elderly and people with disabilities (한국학교·지역보건교육학회지 2000년~2023년 취약 계층 연구 동향: 노인과 장애인을 중심으로)

  • Ye-Soon Kim;Young-Hee Nam
    • The Journal of Korean Society for School & Community Health Education
    • /
    • v.25 no.2
    • /
    • pp.71-81
    • /
    • 2024
  • Purpose: This study aims to identify research trends in papers on the elderly and people with disabilities published in the Journal of the Korean Society for School & Community Health Education from 2000 to 2023 and to suggest directions for the journal's future academic development. Method: A total of 26 articles on the elderly and people with disabilities, both vulnerable groups, were analyzed by year in terms of specific subjects, research themes, research design, data collection methods, and keywords. Results: By research subject, studies on the elderly (18 studies) accounted for a larger proportion than studies on people with disabilities (8 studies). Healthy living practices accounted for a high proportion of the research themes in the elderly studies (44.4%), and mental health management accounted for a high proportion in the disability studies (37.5%). The research designs were mostly quantitative, cross-sectional studies, and data collection relied mostly on secondary data. In studies targeting the elderly, keywords appeared in the order 'Health' and 'Elderly'; in studies targeting people with disabilities, they appeared in the order 'Disabilities', 'Health', and 'COVID-19'. Research on both groups has recently shown an increasing trend. Conclusion: Research on the elderly and people with disabilities has been conducted in line with the purpose of the Korean Society for School & Community Health Education. However, for both quantitative expansion and qualitative improvement, research themes, research designs, and data collection methods must be diversified. Additionally, research on vulnerable groups that fits the public health promotion and health education paradigm is needed.

Research on Dispersion Prediction Technology and Integrated Monitoring Systems for Hazardous Substances in Industrial Complexes Based on AIoT Utilizing Digital Twin (디지털트윈을 활용한 AIoT 기반 산업단지 유해물질 확산예측 및 통합관제체계 연구)

  • Min Ho Son;Il Ryong Kweon
    • Journal of the Society of Disaster Information
    • /
    • v.20 no.3
    • /
    • pp.484-499
    • /
    • 2024
  • Purpose: Recently, due to the aging of safety facilities in national industrial complexes, there has been an increase in the frequency and scale of safety accidents, highlighting the need for a shift toward a prevention-centered disaster management paradigm and the establishment of a digital safety network. In response, this study aims to provide an information system that supports more rapid and precise decision-making during disasters by utilizing digital twin-based integrated control technology to predict the spread of hazardous substances, trace the origin of accidents, and offer safe evacuation routes. Method: We considered various simulation results, such as surface diffusion, upper-level diffusion, and combined diffusion, based on the actual characteristics of hazardous substances and weather conditions, addressing the limitations of previous studies. Additionally, we designed an integrated management system to minimize the limitations of spatiotemporal monitoring by utilizing an IoT sensor-based backtracking model to predict leakage points of hazardous substances in spatiotemporal blind spots. Results: We selected two pilot companies in the Gumi Industrial Complex and installed IoT sensors. Then, we operated a living lab by establishing an integrated management system that provides services such as prediction of hazardous substance dispersion, traceback, AI-based leakage prediction, and evacuation information guidance, all based on digital twin technology within the industrial complex. Conclusion: Taking into account the limitations of previous research, we used digital twin-based AI analysis to predict hazardous chemical leaks, detect leakage accidents, and forecast three-dimensional compound dispersion and traceback diffusion.
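
The abstract does not state which dispersion models underlie the surface, upper-level, and combined simulations, so the snippet below is only a generic stand-in: a textbook Gaussian plume estimate with Briggs-style dispersion coefficients and hypothetical leak parameters. It is meant to show the kind of calculation a dispersion-prediction service would expose, not the system's actual model.

```python
import numpy as np

def gaussian_plume(q_g_s, u_m_s, y, z, h_release, sy, sz):
    """Gaussian plume concentration (g/m^3) with ground reflection.

    q_g_s: emission rate, u_m_s: wind speed, h_release: release height,
    sy/sz: lateral and vertical dispersion coefficients at the receptor's
    downwind distance (functions of distance and atmospheric stability).
    """
    coef = q_g_s / (2.0 * np.pi * u_m_s * sy * sz)
    cross = np.exp(-0.5 * (y / sy) ** 2)
    vert = (np.exp(-0.5 * ((z - h_release) / sz) ** 2)
            + np.exp(-0.5 * ((z + h_release) / sz) ** 2))
    return coef * cross * vert

# Hypothetical leak: 50 g/s at 10 m height, 3 m/s wind, receptor 500 m downwind,
# breathing height 1.5 m. Dispersion coefficients use Briggs rural fits for
# stability class D (an assumption for illustration).
x = 500.0
sy = 0.08 * x / np.sqrt(1 + 0.0001 * x)
sz = 0.06 * x / np.sqrt(1 + 0.0015 * x)
c = gaussian_plume(50.0, 3.0, 0.0, 1.5, 10.0, sy, sz)
print(f"~{c * 1e3:.2f} mg/m^3 at the ground receptor")
```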

Automated Data Extraction from Unstructured Geotechnical Report based on AI and Text-mining Techniques (AI 및 텍스트 마이닝 기법을 활용한 지반조사보고서 데이터 추출 자동화)

  • Park, Jimin;Seo, Wanhyuk;Seo, Dong-Hee;Yun, Tae-Sup
    • Journal of the Korean Geotechnical Society
    • /
    • v.40 no.4
    • /
    • pp.69-79
    • /
    • 2024
  • Field geotechnical data are obtained from various field and laboratory tests and are documented in geotechnical investigation reports. For efficient design and construction, digitizing these geotechnical parameters is essential. However, current practices involve manual data entry, which is time-consuming, labor-intensive, and prone to errors. Thus, this study proposes an automatic data extraction method from geotechnical investigation reports using image-based deep learning models and text-mining techniques. A deep-learning-based page classification model and a text-searching algorithm were employed to classify geotechnical investigation report pages with 100% accuracy. Computer vision algorithms were utilized to identify valid data regions within report pages, and text analysis was used to match and extract the corresponding geotechnical data. The proposed model was validated using a dataset of 205 geotechnical investigation reports, achieving an average data extraction accuracy of 93.0%. Finally, a user-interface-based program was developed to enhance the practical application of the extraction model. It allowed users to upload PDF files of geotechnical investigation reports, automatically analyze these reports, and extract and edit data. This approach is expected to improve the efficiency and accuracy of digitizing geotechnical investigation reports and building geotechnical databases.
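
The extraction pipeline the abstract describes (classify report pages, locate valid data regions, then match text to geotechnical fields) can be pictured with the toy sketch below. It substitutes a keyword rule for the paper's image-based deep-learning page classifier and uses an invented page layout and regex, so everything here is illustrative rather than the authors' implementation.

```python
import re

# Hypothetical page texts standing in for parsed pages of a geotechnical
# investigation report; the paper's deep-learning page classifier is replaced
# here by a simple keyword rule for illustration.
pages = [
    "Project overview ... site location map ...",
    "Boring Log BH-1  depth 1.0 m  N-value 8/30 ... depth 2.0 m  N-value 15/30",
    "Boring Log BH-2  depth 1.0 m  N-value 12/30 ... depth 3.0 m  N-value 50/18",
]

def is_boring_log(page_text: str) -> bool:
    # Stand-in for the page classification step.
    return "Boring Log" in page_text

nvalue_pattern = re.compile(r"depth\s+([\d.]+)\s*m\s+N-value\s+(\d+)\s*/\s*(\d+)")

records = []
for page in filter(is_boring_log, pages):
    hole = re.search(r"BH-\d+", page).group(0)
    for depth, blows, penetration in nvalue_pattern.findall(page):
        records.append({"hole": hole, "depth_m": float(depth),
                        "spt": f"{blows}/{penetration}"})

print(records)  # structured rows ready to load into a geotechnical database
```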

Semi-automated Tractography Analysis using the Allen Mouse Brain Atlas: Comparing DTI Acquisition between NEX and SNR (알렌 마우스 브레인 아틀라스를 이용한 반자동 신경섬유지도 분석 : 여기수와 신호대잡음비간의 DTI 획득 비교)

  • Im, Sang-Jin;Baek, Hyeon-Man
    • Journal of the Korean Society of Radiology
    • /
    • v.14 no.2
    • /
    • pp.157-168
    • /
    • 2020
  • Advancements in segmentation methodology have made automatic segmentation of brain structures from structural images accurate and consistent. One such method, which registers atlas information from template space to subject space, requires a high-quality atlas with accurate boundaries for consistent segmentation. The Allen Mouse Brain Atlas, widely accepted as a high-quality reference for the mouse brain, has been used in various segmentation studies and can provide accurate coordinates and boundaries of mouse brain structures for tractography. Through probabilistic tractography, diffusion tensor images can be used to map the comprehensive neuronal network of white matter pathways in the brain. Comparisons between the neural networks of mouse and human brains have shown that various clinical tests on mouse models can simulate the disease pathology of human brains, increasing the importance of clinical mouse brain studies. However, the difference in brain size between humans and mice makes it difficult to achieve the image quality necessary for analysis, and the conditions required for sufficient image quality, such as long scan times, make the use of live samples unrealistic. In order to secure mouse brain images with a sufficient scan time, an ex-vivo experiment on a mouse brain was conducted for this study. Using FSL, a tool for analyzing tensor images, we propose a semi-automated segmentation and tractography analysis pipeline for the mouse brain and apply it to various mouse models. In addition, to determine a useful signal-to-noise ratio for the diffusion tensor images acquired for tractography analysis, images with various numbers of excitations (NEX) were compared.
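
The abstract names FSL as the analysis tool; a minimal sketch of how such a semi-automated pipeline might be scripted is shown below, chaining standard FSL command-line tools (dtifit, flirt, bedpostx, probtrackx2) via subprocess and adding a simple SNR helper for the NEX comparison. All paths, filenames, and the exact option usage are assumptions for illustration, not the authors' pipeline, and FSL is assumed to be installed and on the PATH.

```python
import subprocess
import numpy as np

# Hypothetical subject directory with preprocessed DWI data, bvecs/bvals,
# and a brain mask; atlas label volume assumed to exist as well.
subj = "mouse01"

# 1) Fit the diffusion tensor (FSL dtifit).
subprocess.run(["dtifit", "-k", f"{subj}/data", "-o", f"{subj}/dti",
                "-m", f"{subj}/nodif_brain_mask", "-r", f"{subj}/bvecs",
                "-b", f"{subj}/bvals"], check=True)

# 2) Bring Allen atlas labels into subject space (FSL flirt, nearest neighbour
#    so label values are preserved).
subprocess.run(["flirt", "-in", "allen_atlas_labels", "-ref", f"{subj}/dti_FA",
                "-out", f"{subj}/atlas_in_subj", "-omat", f"{subj}/atlas2subj.mat",
                "-interp", "nearestneighbour"], check=True)

# 3) Sample fibre orientations and run probabilistic tractography from the
#    atlas-defined seed regions.
subprocess.run(["bedpostx", subj], check=True)
subprocess.run(["probtrackx2", "-s", f"{subj}.bedpostX/merged",
                "-m", f"{subj}/nodif_brain_mask",
                "-x", f"{subj}/atlas_in_subj", "--dir", f"{subj}/tracts"], check=True)

def snr(dwi_volume: np.ndarray, signal_mask: np.ndarray, noise_mask: np.ndarray) -> float:
    """SNR = mean signal in a brain ROI / standard deviation of background noise."""
    return dwi_volume[signal_mask].mean() / dwi_volume[noise_mask].std()
```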

Comparison of Reliability and Validity of Three Korean Versions of the 20-Item Toronto Alexithymia Scale (TAS-20의 한국판 3종간의 신뢰도 및 타당도 비교)

  • Chung, Un-Sun;Rim, Hyo-Deog;Lee, Yang-Hyun;Kim, Sang-Heon
    • Korean Journal of Psychosomatic Medicine
    • /
    • v.11 no.1
    • /
    • pp.77-88
    • /
    • 2003
  • Objectives: The purpose of this study was to compare the reliability and validity of three Korean versions of the 20-item Toronto Alexithymia Scale and to identify the most reliable and valid Korean translation for both clinical and research purposes in Korea. The first was the Korean version of the 20-item Toronto Alexithymia Scale developed by Lee YH et al. in 1996, designated here as TAS-20K(1996). This scale had a problem with one item due to the cultural difference between Western and Korean culture regarding the word 'analyzing'. The second was the version revised on that point by Lee YH et al., without validation, designated here as TAS-20K(2003). The third was a 23-item Korean version developed by Sin HG and Won HT in 1997, which differed somewhat from the 20-item Toronto Alexithymia Scale (TAS-20) in the total number of items, the content of some items, and the scoring method; this scale is designated here as S-TAS. Methods: 408 medical students were tested with a single scale composed of all the items from the three versions, randomly arranged. We evaluated goodness-of-fit and Cronbach's α coefficients of the three scales for reliability, and used confirmatory factor analysis to compare validity. Results: TAS-20K(2003) showed better internal consistency than TAS-20K(1996), which implies that the cultural difference should be considered in the Korean translation. Both TAS-20K(2003) and S-TAS replicated the three-factor structure and showed adequate fit, good internal consistency, and acceptable validity. However, S-TAS had one item with poor item-factor correlation and did not show the high correlation between item 2 and factor 1 reported in 1997. Conclusion: Although S-TAS added 3 items and changed the content of two items, it did not show better reliability and validity than TAS-20K(2003). It is therefore proposed that TAS-20K(2003) be used as the Korean version of the 20-item Toronto Alexithymia Scale (TAS-20K) for international communication of alexithymia research results; it has good internal consistency and validity and maintains the original items, construct, and scoring method of the 20-item Toronto Alexithymia Scale.
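
For the reliability analysis, Cronbach's α is the key statistic reported. The snippet below shows the standard computation on a hypothetical response matrix (random 5-point Likert scores), purely to illustrate the formula; it is not the study's data or code, and the confirmatory factor analysis step is not reproduced here.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Hypothetical 5-point Likert responses (10 respondents x 20 items), only to
# demonstrate the internal-consistency calculation.
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(10, 20)).astype(float)
print(round(cronbach_alpha(scores), 3))
```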

Analysis of Q Values on the Crust of the Kimcheon and Mokpo Regions, South Korea (남한 김천.목포 일대 지각의 Q 값 분석)

  • Do, Ji-Young;Lee, Yoon-Joong;Kyung, Jai-Bok
    • Journal of the Korean earth science society
    • /
    • v.27 no.4
    • /
    • pp.475-485
    • /
    • 2006
  • The physical properties of the central and southwestern crust of South Korea were estimated by comparing $Q_P^{-1}$ and $Q_S^{-1}$ values in the Kimcheon and Mokpo areas. To obtain the $Q_P^{-1}$ and $Q_S^{-1}$ values, seismic data were collected from two stations of the KIGAM network (KMC and MUN) and four stations of the KMA network (CPN, KUC, MOP, and WAN), and an extended coda-normalization method was applied to these data. The estimates of $Q_P^{-1}$ and $Q_S^{-1}$ vary with frequency. As the frequency increases from 3 Hz to 24 Hz, the estimates decrease from $(1.4\pm3.9)\times10^{-3}$ to $(2.3\pm3.5)\times10^{-4}$ for $Q_P^{-1}$ and from $(1.8\pm1.3)\times10^{-3}$ to $(1.9\pm1.5)\times10^{-4}$ for $Q_S^{-1}$ in central South Korea, and from $(5.9\pm4.8)\times10^{-3}$ to $(2.2\pm3.8)\times10^{-4}$ for $Q_P^{-1}$ and from $(0.5\pm2.8)\times10^{-3}$ to $(1.8\pm1.6)\times10^{-4}$ for $Q_S^{-1}$ in southwestern South Korea. When a frequency-dependent power law is fitted to the data, the best fits of $Q_P^{-1}$ and $Q_S^{-1}$ are $0.003f^{-0.49}$ and $0.005f^{-1.03}$ in central South Korea, and $0.026f^{-1.47}$ and $0.001f^{-0.49}$ in southwestern South Korea, respectively. These values largely correspond to those of seismically stable regions, although the $Q_P^{-1}$ values of southwestern South Korea are somewhat high owing to the limited data available.
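
The frequency-dependent power law fitted in the study, $Q^{-1}(f) = Q_0 f^{-n}$, can be estimated by linear regression in log-log space. The sketch below does this on a handful of synthetic $(f, Q_P^{-1})$ pairs whose endpoints roughly follow the central-Korea values quoted above; the intermediate points are invented for illustration.

```python
import numpy as np

# Synthetic (frequency, Q^-1) pairs in the 3-24 Hz band; endpoints roughly
# match the central South Korea Q_P^-1 estimates, the rest are invented.
freq = np.array([3.0, 6.0, 9.0, 12.0, 18.0, 24.0])
q_inv = np.array([1.4e-3, 8.0e-4, 5.5e-4, 4.3e-4, 3.0e-4, 2.3e-4])

# Linear regression in log-log space: log Q^-1 = log Q0 - n * log f.
slope, intercept = np.polyfit(np.log(freq), np.log(q_inv), 1)
q0, n = np.exp(intercept), -slope
print(f"Q^-1(f) ~ {q0:.4f} * f^(-{n:.2f})")
```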

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.21-44
    • /
    • 2018
  • In recent years, the rapid development of internet technology and the popularization of smart devices have produced massive amounts of text data, distributed through media platforms such as the World Wide Web, internet news feeds, microblogs, and social media. This enormous amount of easily obtained information lacks organization, which has drawn the interest of many researchers seeking to manage it; it also requires professionals capable of classifying relevant information, which is why text classification was introduced. Text classification is a challenging task in modern data analysis in which a text document must be assigned to one or more predefined categories or classes. Various techniques are available in this field, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machines, Decision Trees, and Artificial Neural Networks. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge: depending on the type of words used in the corpus and the type of features created for classification, the performance of a text classification model can vary. Most previous attempts have proposed a new algorithm or modified an existing one, and this line of research can be said to have reached certain limits for further improvement. In this study, rather than proposing or modifying an algorithm, we focus on modifying the use of the data. It is widely known that classifier performance is influenced by the quality of the training data upon which the classifier is built. Real-world datasets usually contain noise (noisy data), which can affect the decisions made by classifiers built from them. In this study, we consider that data from different domains, that is, heterogeneous data, may carry noise-like characteristics that can be utilized in the classification process. Machine learning algorithms are usually trained under the assumption that the characteristics of the training data and the target data are the same or very similar. However, for unstructured data such as text, the features are determined by the vocabulary of the documents, so if the viewpoints of the training data and the target data differ, the features may differ between the two. We attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Data coming from various sources are likely to be formatted differently, which causes difficulties for traditional machine learning algorithms because they are not designed to handle different types of data representation at once and generalize across them. Therefore, in order to utilize heterogeneous data in the learning process of the document classifier, we apply semi-supervised learning. However, unlabeled data may degrade the performance of the document classifier.
We therefore propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA), which selects only the documents that contribute to improving the classifier's accuracy. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data, and the most confident classification rules are selected and applied for the final decision. In this paper, three different types of real-world data sources were used: news, Twitter, and blogs.
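
RSESLA itself is not specified in enough detail in the abstract to reproduce, so the sketch below shows only the underlying semi-supervised idea in a much-reduced form: train on labeled documents, keep just the unlabeled documents the model is relatively confident about, and retrain on the augmented set. The tiny corpus, the threshold, and the single logistic-regression view are all assumptions for illustration and do not represent the paper's multi-view, rule-selection ensemble.

```python
import numpy as np
from scipy.sparse import vstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny illustrative corpus; the paper's heterogeneous sources are news,
# Twitter, and blogs. Documents and labels here are invented.
labeled_docs = ["stock market rallies on strong earnings",
                "home team wins the championship final",
                "election results announced by the commission",
                "new smartphone model released this week"]
labels = np.array(["economy", "sports", "politics", "tech"])
unlabeled_docs = ["the team celebrates after the final",
                  "commission confirms the election results"]

vec = TfidfVectorizer()
X_lab = vec.fit_transform(labeled_docs)
X_unl = vec.transform(unlabeled_docs)

clf = LogisticRegression(max_iter=1000).fit(X_lab, labels)

# Keep only unlabeled documents the current model is relatively confident
# about, echoing the idea of selecting only material expected to improve
# the classifier rather than degrade it (threshold kept low for toy data).
proba = clf.predict_proba(X_unl)
confident = np.flatnonzero(proba.max(axis=1) >= 0.30)
pseudo_labels = clf.predict(X_unl)[confident]

X_aug = vstack([X_lab, X_unl[confident]])
y_aug = np.concatenate([labels, pseudo_labels])
clf = LogisticRegression(max_iter=1000).fit(X_aug, y_aug)
print(dict(zip(unlabeled_docs, clf.predict(X_unl))))
```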