• Title/Summary/Keyword: Data structure similarity (데이터 구조 유사도)


Low-Cost Elliptic Curve Cryptography Processor Based On Multi-Segment Multiplication (멀티 세그먼트 곱셈 기반 저비용 타원곡선 암호 프로세서)

  • Lee, Dong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.42 no.8 s.338
    • /
    • pp.15-26
    • /
    • 2005
  • In this paper, we propose an efficient $GF(2^m)$ multi-segment multiplier architecture and study its application to elliptic curve cryptography (ECC) processors. The multi-segment ECC datapath uses a very small combinational multiplier to compute partial products, most of its internal data buses are word-sized, and it contains only a single m-bit multiplexer and a single m-bit register. The resource requirements of the proposed datapath therefore shrink as the segment number increases and the word size decreases, making it more resource-efficient than an ECC processor based on digit-serial multiplication. The resource requirement of an ECC processor implementation depends not only on the number of basic hardware components but also on the complexity of the interconnections among them. To show the realistic area efficiency of the proposed approach, we implemented ECC processors based on both the proposed multi-segment multiplication and digit-serial multiplication and compared their FPGA resource usage. The experimental results show that the proposed multi-segment multiplication method yields ECC coprocessors requiring about half the FPGA resources of their digit-serial counterparts.
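The segment-serial idea can be illustrated in software. Below is a minimal Python model of multi-segment multiplication in $GF(2^m)$, with polynomials held as integers (bit i is the coefficient of $x^i$); the field, segment width, and operands are illustrative choices, not the paper's parameters, and the hardware datapath details (word-sized buses, the single multiplexer and register) are abstracted away.

```python
# Hedged sketch: multi-segment multiplication in GF(2^m), polynomials as ints.

def gf2m_multi_segment_mul(a, b, m, poly, seg_bits):
    """Multiply a*b in GF(2^m) by splitting b into seg_bits-wide segments."""
    mask = (1 << seg_bits) - 1
    n_seg = (m + seg_bits - 1) // seg_bits
    result = 0
    # Process segments of b from the most significant end, so each pass only
    # shifts the accumulator by seg_bits (the datapath stays word-sized).
    for i in reversed(range(n_seg)):
        seg = (b >> (i * seg_bits)) & mask
        result <<= seg_bits
        # Small "combinational" carry-less multiply: a * seg over GF(2).
        for bit in range(seg_bits):
            if (seg >> bit) & 1:
                result ^= a << bit
        # Reduce modulo the field polynomial to keep result below 2^m.
        for d in range(result.bit_length() - 1, m - 1, -1):
            if (result >> d) & 1:
                result ^= poly << (d - m)
    return result

# Example in GF(2^8) with the AES polynomial x^8+x^4+x^3+x+1 (0x11B):
print(hex(gf2m_multi_segment_mul(0x57, 0x83, 8, 0x11B, 2)))  # expect 0xc1
```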

Relationship between Digital Informatization Capability, Digital Informatization Accessability and Life Satisfaction of Disabled People: Multigroup Analysis of Perceived Social Support Network (장애인의 디지털정보화역량, 디지털정보화활용 수준, 일상생활만족도 간 관계: 지각된 사회적 지지망 수준에 따른 다집단 분석)

  • Yeon, Eun Mo;Choi, Hyo-Sik
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.20 no.12
    • /
    • pp.636-644
    • /
    • 2019
  • The purpose of this study is to explore practical intervention strategies by identifying the relationships among digital informatization capacity, level of digital informatization accessibility, and life satisfaction of disabled people, and to determine how these relationships differ with the perceived level of social support networks. The participants were 1,639 disabled people from the 2017 digital information gap survey, and the results, based on structural equation modeling and multigroup analysis, are as follows. First, digital informatization capacity has a positive influence on the level of digital informatization accessibility (β=.65) and on life satisfaction (β=.08); the level of digital informatization accessibility also has a positive influence on life satisfaction (β=.44). Second, the mediated effect of digital informatization accessibility level between digital informatization capacity and life satisfaction was significant (β=.29), even greater than the direct effect of digital informatization capacity on life satisfaction. Third, digital informatization capacity and accessibility influence life satisfaction regardless of the perceived level of social support. The findings suggest that creating online environments in which disabled people can enjoy leisure, culture, and social interaction with high accessibility and utility is as important as providing education to improve their digital informatization capacity.
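The reported mediated effect can be sanity-checked with the product-of-paths rule from path analysis, as the small sketch below shows; the total effect printed last is our inference, not a figure reported in the abstract.

```python
# Product-of-paths check of the mediated (indirect) effect: indirect = a * b.
a = 0.65       # capacity -> accessibility (reported)
b = 0.44       # accessibility -> satisfaction (reported)
direct = 0.08  # capacity -> satisfaction (reported direct effect)

indirect = a * b
print(round(indirect, 2))           # 0.29, matching the reported mediated effect
print(round(direct + indirect, 2))  # 0.37, the implied total effect (inferred)
```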

Laboratory Validation of Bridge Finite Model Updating Approach By Static Load Input/Deflection Output Measurements (정적하중입력/변위출력관계를 이용한 단경간 교량의 유한요소모델개선기법: 실내실험검증)

  • Kim, Sehoon;Koo, Ki Young;Lee, Jong-Jae
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.20 no.3
    • /
    • pp.10-17
    • /
    • 2016
  • This paper presents a laboratory validation of a new approach to finite element model updating (FEMU) for short-span bridges that combines ambient vibration measurements with static load input/deflection output measurements. The conventional FEMU approach based on modal parameters requires an assumption on the system mass matrix for the eigenvalue analysis. The proposed approach does not require this assumption and even provides a way to update the mass matrix. It consists of two steps: 1) updating the stiffness matrix using the static input/deflection output measurements, and 2) updating the mass matrix using a few lower natural frequencies. For validation, Young's modulus of the laboratory model was updated by the proposed approach and compared with the value obtained from stress-strain tests in a universal testing machine. The result of the conventional FEMU was also compared with that of the proposed approach. The proposed approach estimated Young's modulus and the mass density reasonably well, while the conventional FEMU showed a large error when used with higher modes. In addition, the FE modeling error is discussed.
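A minimal numerical sketch of the two-step idea, using a cantilever beam as a stand-in for the bridge FE model (all section properties, loads, and measurements below are invented for illustration): step 1 recovers Young's modulus from the static load/deflection relation, and step 2 recovers the mass density from a measured lower natural frequency.

```python
import numpy as np

# Illustrative cantilever "model": length [m], 2nd moment [m^4], area [m^2].
L, I, A = 2.0, 8.0e-6, 1.2e-3

# Step 1: update stiffness (Young's modulus E) from static input/output data,
# using the cantilever tip-load relation delta = P L^3 / (3 E I).
P, delta_measured = 1000.0, 0.00242          # tip load [N], deflection [m]
E_updated = P * L**3 / (3 * I * delta_measured)

# Step 2: update mass (density rho) from a measured lower natural frequency,
# using the first cantilever mode: omega1 = 1.875^2 * sqrt(E I / (rho A L^4)).
f1_measured = 45.0                           # [Hz]
omega1 = 2 * np.pi * f1_measured
rho_updated = E_updated * I / (A * L**4 * (omega1 / 1.875**2) ** 2)

print(f"E   ~ {E_updated:.3e} Pa")
print(f"rho ~ {rho_updated:.1f} kg/m^3")
```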

Research Trend Analysis for Fault Detection Methods Using Machine Learning (머신러닝을 사용한 단층 탐지 기술 연구 동향 분석)

  • Bae, Wooram;Ha, Wansoo
    • Economic and Environmental Geology
    • /
    • v.53 no.4
    • /
    • pp.479-489
    • /
    • 2020
  • A fault is a geological structure that can act as a migration path or a cap rock for hydrocarbons such as oil and gas formed from source rock. Faults are among the main targets of seismic exploration for reservoirs in which hydrocarbons have accumulated. However, conventional fault detection methods that use lateral discontinuity in seismic data, such as semblance, coherence, variance, gradient magnitude, and fault likelihood, require professional interpreters to invest a great deal of time and computational cost. Therefore, many researchers are conducting studies to reduce the cost and time of fault interpretation, and machine learning technologies have recently attracted attention. Among the various machine learning technologies, researchers have applied support vector machines, multi-layer perceptrons, deep neural networks, and convolutional neural networks to fault interpretation. In particular, researchers use not only their own convolutional networks but also networks proven in image processing to predict fault locations and fault information such as strike and dip. In this paper, by investigating and analyzing these studies, we found that convolutional neural networks based on the U-Net from image processing are the most effective for fault detection and interpretation. Further studies can expect better fault detection and interpretation results from convolutional neural networks combined with transfer learning and data augmentation.
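To make the U-Net recommendation concrete, here is a minimal PyTorch sketch of a U-Net-style encoder/decoder that maps a seismic amplitude patch to a per-pixel fault probability map; the depth, channel counts, and input size are illustrative and not taken from any of the surveyed papers.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Two 3x3 convolutions with ReLU, the standard U-Net building block.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = conv_block(1, 16), conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottom = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.head = nn.Conv2d(16, 1, 1)  # per-pixel fault-probability logit

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottom(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

# A 128x128 seismic amplitude patch in, a fault-probability logit map out:
logits = TinyUNet()(torch.randn(1, 1, 128, 128))
print(logits.shape)  # torch.Size([1, 1, 128, 128])
```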

Study on the Emergency Assessment about Seismic Safety of Cable-supported Bridges using the Comparison of Displacement due to Earthquake with Disaster Management Criteria (변위 비교를 통한 케이블지지교량의 긴급 지진 안전성 평가 방법의 고찰)

  • Park, Sung-Woo;Lee, Seung Han
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.22 no.6
    • /
    • pp.114-122
    • /
    • 2018
  • This study presents an emergency assessment method for the seismic safety of cable-supported bridges using seismic acceleration sensors installed on their primary structural elements. The structural model of each bridge is updated iteratively so that its dynamic characteristics match those of the real bridge, based on comparing its natural frequencies with those estimated from acceleration data measured at ordinary times by the seismic acceleration sensors. The displacement at the location of each sensor is derived by seismic analysis using the design earthquake, and the peak value is set in advance as the disaster management criterion. The displacement time history is then calculated by double integration of the acceleration time history recorded at each sensor and filtered by high-cut (low-pass) and low-cut (high-pass) filters. Finally, seismic safety is evaluated by comparing the peak value of the calculated displacement time history with the predetermined disaster management criterion. The applicability of the proposed methodology is verified by performing the seismic safety assessment of 12 cable-supported bridges using acceleration data recorded during the Gyeongju earthquake.
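The displacement-from-acceleration step can be sketched in a few lines of Python with SciPy: band-pass the record (the high-cut/low-cut filtering above), integrate twice, and compare the peak with the criterion. The corner frequencies and the synthetic record below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.integrate import cumulative_trapezoid

fs = 100.0                                   # sampling rate [Hz], assumed
t = np.arange(0, 30, 1 / fs)
acc = 0.5 * np.sin(2 * np.pi * 1.2 * t) * np.exp(-0.1 * t)  # fake record [m/s^2]

# Band-pass = low-cut (high-pass) + high-cut (low-pass) in one filter.
b, a = butter(4, [0.1, 20.0], btype="bandpass", fs=fs)
acc_f = filtfilt(b, a, acc)                  # zero-phase filtering

vel = cumulative_trapezoid(acc_f, t, initial=0.0)
vel = filtfilt(b, a, vel)                    # re-filter to suppress drift
disp = cumulative_trapezoid(vel, t, initial=0.0)

peak = np.max(np.abs(disp))
print(f"peak displacement ~ {peak * 1000:.1f} mm")  # compare with the criterion
```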

An Analytical Study on Encased Steel Composite Columns Fire Resistance According to Axial Force Ratio (화재시 축력비에 따른 매입형 합성기둥의 내화성능에 대한 해석적 연구)

  • Kim, Ye-Som;Choi, Byong-Jeong
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.24 no.1
    • /
    • pp.97-107
    • /
    • 2020
  • In this study, finite element analysis was carried out with the ANSYS program to investigate the fire resistance of encased steel composite columns. Transient heat transfer analysis and static structural analysis were performed under the ASTM E119 heating curve at axial force ratios of 0.7, 0.6, and 0.5, applying temperature-dependent stress-strain curves, and loaded heating experiments were carried out under the same conditions. In addition, the nominal compressive strength of the composite column as a function of heating time was calculated according to the standard (Eurocode 4), expressed as an axial force ratio, and compared with the analytical and experimental values. The finite element analysis gave a fire resistance time of 180 minutes, similar to the experimental value, whereas fire resistance times of 150 minutes and 60 minutes were derived for axial force ratios of 0.6 and 0.7. The fire resistance times computed from the Eurocode 4 equation were generally lower than the actual experimental values, although at the axial force ratio of 0.7 the Eurocode 4 prediction was higher than the experimental value. These results establish the fire resistance characteristics (time versus axial force ratio relationship) of the SRC column at high axial force, and the experimental and analysis data can serve as verification data for the Eurocode-based approach.
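For reference, the thermal boundary condition can be generated from a common closed-form fit to the ASTM E119 curve (Lie's approximation); the sketch below evaluates the furnace temperature at the fire-resistance times discussed above. The fit is a stand-in for the tabulated E119 values, not part of this paper.

```python
import numpy as np

def astm_e119_temp(t_hours, T0=20.0):
    # Lie's closed-form approximation to the ASTM E119 standard fire curve;
    # temperature in deg C, exposure time in hours.
    return (T0
            + 750.0 * (1.0 - np.exp(-3.79553 * np.sqrt(t_hours)))
            + 170.41 * np.sqrt(t_hours))

for minutes in (60, 150, 180):   # the fire-resistance times discussed above
    print(f"{minutes:3d} min -> furnace ~ {astm_e119_temp(minutes / 60):6.0f} C")
```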

Investigation of Elementary Students' Scientific Communication Competence Considering Grammatical Features of Language in Science Learning (과학 학습 언어의 문법적 특성을 고려한 초등학생의 과학적 의사소통 능력 고찰)

  • Maeng, Seungho;Lee, Kwanhee
    • Journal of Korean Elementary Science Education
    • /
    • v.41 no.1
    • /
    • pp.30-43
    • /
    • 2022
  • In this study, elementary students' scientific communication competence was investigated based on the grammatical features expressed in their language use in classroom discourse and science writing. The classes were designed to integrate an evidence-based reasoning framework with the traditional learning cycle and were conducted with fifth graders at an elementary school. Discourse data and writings from eight elementary students were analyzed using lexico-grammatical resource analysis, which examines the content and logical relations of discourse text. The results revealed that the language students used when analyzing data, interpreting evidence, or constructing explanations did not precisely conform to the grammatical features of science language. However, the students provided examples of grammatical metaphor by nominalizing observed events in classroom discourse, and examples of causal relations in their writings. Thus, elementary students can learn to use science language grammatically through science language-use experiences, such as listening to a teacher's instructional discourse or recognizing the grammatical structures of science texts in workbooks. Elementary students should be offered opportunities to experience this language-use model in science learning so that they come to understand appropriate language use in the epistemic context of evidence-based reasoning and develop literacy skills in science.

Performance analysis of Frequent Itemset Mining Technique based on Transaction Weight Constraints (트랜잭션 가중치 기반의 빈발 아이템셋 마이닝 기법의 성능분석)

  • Yun, Unil;Pyun, Gwangbum
    • Journal of Internet Computing and Services
    • /
    • v.16 no.1
    • /
    • pp.67-74
    • /
    • 2015
  • In recent years, frequent itemset mining that considers the importance of each item has been intensively studied as an important issue in the data mining field. According to the strategy used to exploit item importance, such approaches are classified as weighted frequent itemset mining, frequent itemset mining using transactional weights, and utility itemset mining. In this paper, we perform an empirical analysis of frequent itemset mining algorithms based on transactional weights. These algorithms compute transactional weights from the weight of each item in large databases and discover weighted frequent itemsets on the basis of item frequency and the weight of each transaction. Consequently, the importance of a given transaction is visible in the database analysis, because a transaction's weight is higher when it contains many items with high weights. We not only analyze the advantages and disadvantages of the most prominent algorithms in this field but also compare their performance. As a representative of frequent itemset mining using transactional weights, WIS introduced the concept of and strategies for transactional weights. In addition, there are several other state-of-the-art algorithms, WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF, for extracting itemsets with weight information. To mine weighted frequent itemsets efficiently, these three algorithms use a special lattice-like data structure called the WIT-tree. They need no additional database scan after the WIT-tree is constructed, since each node of the WIT-tree holds item information such as the item and its transaction IDs. In particular, traditional algorithms perform many database scans to mine weighted itemsets, whereas the WIT-tree-based algorithms avoid this overhead by reading the database only once. Additionally, the algorithms generate each new itemset of length N+1 from two different itemsets of length N, as sketched below. To discover new weighted itemsets, WIT-FWIs combines itemsets using the information of transactions that contain all the itemsets; WIT-FWIs-MODIFY reduces the operations needed to calculate the frequency of each new itemset; and WIT-FWIs-DIFF uses the difference of two itemsets. To compare and analyze the performance of the algorithms in various environments, we use real datasets of two types (dense and sparse) and measure runtime and maximum memory usage. Moreover, a scalability test evaluates the stability of each algorithm as the database size changes. As a result, WIT-FWIs and WIT-FWIs-MODIFY show the best performance on the dense dataset, while on the sparse dataset WIT-FWIs-DIFF mines more efficiently than the other algorithms. Compared with the WIT-tree-based algorithms, WIS, which is based on the Apriori technique, has the worst efficiency because it requires far more computation than the others on average.
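A toy sketch of the transactional-weight idea described above: each transaction's weight is derived from its items' weights (the mean is one common choice), and an itemset's weighted support is the weight sum over its tidset, the set of IDs of transactions containing it. The database, weights, and threshold below are invented; a real WIT-tree implementation grows length-N+1 itemsets by intersecting the tidsets of length-N itemsets rather than enumerating all combinations as done here.

```python
from itertools import combinations

item_weight = {"a": 0.9, "b": 0.6, "c": 0.4, "d": 0.8}        # invented weights
db = [{"a", "b"}, {"a", "c", "d"}, {"b", "d"}, {"a", "b", "d"}]  # invented DB

# Transaction weight: mean of the weights of the items it contains.
tw = [sum(item_weight[i] for i in t) / len(t) for t in db]
total = sum(tw)

def weighted_support(itemset):
    tids = [k for k, t in enumerate(db) if itemset <= t]  # the "tidset"
    return sum(tw[k] for k in tids) / total

min_ws = 0.3  # invented weighted-support threshold
items = sorted(item_weight)
for size in range(1, len(items) + 1):
    for combo in combinations(items, size):
        ws = weighted_support(set(combo))
        if ws >= min_ws:
            print(set(combo), round(ws, 3))
```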

Decision of Gaussian Function Threshold for Image Segmentation (영상분할을 위한 혼합 가우시안 함수 임계 값 결정)

  • Jung, Yong-Gyu;Choi, Gyoo-Seok;Heo, Go-Eun
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.9 no.5
    • /
    • pp.163-168
    • /
    • 2009
  • Most image segmentation methods represent the observed feature vector at each pixel with an assumed probability model, and these models are used in statistical estimation or likelihood-based clustering of the feature vectors. The EM algorithm has known difficulties in maximum likelihood estimation of unknown parameters from incomplete data and in maximizing the posterior probability distribution: its performance depends on the starting position, and the likelihood function converges to local maxima. To solve these problems, we fit a mixture of Gaussian functions to the histogram over all gray levels of the image and propose a suitable image segmentation method based on the resulting threshold. The proposed algorithm is confirmed to classify most edges clearly across varied images, and it was implemented as an MFC program.
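A minimal sketch of the core idea with scikit-learn: fit a two-component Gaussian mixture to the gray-level distribution and take the threshold where the two components cross (equal responsibilities). The synthetic bimodal "image" is invented for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic bimodal gray-level data standing in for an image histogram.
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(70, 12, 6000),    # dark region
                         rng.normal(170, 18, 4000)])  # bright region
gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels.reshape(-1, 1))

# Pick the gray level between the two means where the component
# responsibilities cross (each component equally likely).
x = np.linspace(0, 255, 256)
resp = gmm.predict_proba(x.reshape(-1, 1))
lo, hi = np.argsort(gmm.means_.ravel())
between = (x > gmm.means_.ravel()[lo]) & (x < gmm.means_.ravel()[hi])
threshold = x[between][np.argmin(np.abs(resp[between, lo] - resp[between, hi]))]
print(f"threshold ~ {threshold:.0f}")  # pixels above/below go to each class
```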


A method for selecting the evaluation index of defence R&D project by AHP (계층분석법에 의한 국방연구개발 평가지표 선정에 관한 연구)

  • Park, Seong;Hong, Yeon-Woong;Na, Joong-Kyung
    • Journal of the Korean Data and Information Science Society
    • /
    • v.23 no.5
    • /
    • pp.961-970
    • /
    • 2012
  • To evaluate companies that participated in defense R&D projects, 27 variables were chosen through a literature survey, a feature analysis of defense R&D, and interviews with military experts. Of these, 17 variables were selected after a factor analysis, applied to reduce the number of variables and detect structure in the relationships among variables measured on Likert-type scales. The 17 variables were then prioritized by the AHP (analytic hierarchy process) method. Communication skill & cooperation strategy, level of technology, and possession of needed technology received high priorities, whereas a protection plan against technology leakage, expertise of subcontractors, and a software development plan received low priorities.
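The AHP prioritization step derives weights from the principal eigenvector of a pairwise comparison matrix. Below is a minimal sketch with an invented 3x3 matrix (the study's actual hierarchy covers 17 variables); Saaty's consistency ratio is included as the usual sanity check.

```python
import numpy as np

# Invented pairwise comparisons for three illustrative criteria.
A = np.array([[1.0, 3.0, 5.0],   # e.g. communication skill & cooperation
              [1/3, 1.0, 3.0],   # e.g. level of technology
              [1/5, 1/3, 1.0]])  # e.g. software development plan

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # priority vector

n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)  # consistency index
CR = CI / 0.58                        # RI = 0.58 for n = 3 (Saaty)
print(np.round(w, 3), f"CR = {CR:.3f}")  # CR < 0.1 -> acceptably consistent
```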