• Title/Summary/Keyword: algebraic approach

Search Results: 175

Convergence of Nonlocal Integral Operator in Peridynamics (비국부 적분 연산기로 표현되는 페리다이나믹 방정식의 수렴성)

  • Jo, Gwanghyun; Ha, Youn Doh
    • Journal of the Computational Structural Engineering Institute of Korea / v.34 no.3 / pp.151-157 / 2021
  • This paper is devoted to a convergence study of the nonlocal integral operator in peridynamics. The implicit formulation can be an efficient approach for obtaining static/quasi-static solutions of crack propagation problems, but implicit methods require costly large-matrix operations, so convergence is important for improving computational efficiency. When the radial influence function is utilized in the nonlocal integral equation, the fractional Laplacian integral equation is obtained. It has been mathematically proved that the condition number of the system matrix is affected by the order of the radial influence function and the nonlocal horizon size. We formulate the static crack problem with peridynamics and utilize Newton-Raphson methods with a preconditioned conjugate gradient scheme to solve this nonlinear stationary system. The convergence behavior and the computational time for solving the implicit algebraic system are studied with respect to the order of the radial influence function and the nonlocal horizon size.
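
A minimal sketch of the solver loop this abstract describes: Newton-Raphson iteration on a nonlinear stationary system, with each linearized step solved by a preconditioned conjugate gradient. The residual and Jacobian below are illustrative stand-ins (a 1-D nonlinear diffusion-type system), not the peridynamic operator, and the Jacobi preconditioner is an assumption.

```python
# Newton-Raphson with a Jacobi-preconditioned conjugate gradient (sketch).
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

def newton_pcg(residual, jacobian, u0, tol=1e-8, max_iter=50):
    """Solve residual(u) = 0, where jacobian(u) returns a sparse SPD matrix."""
    u = u0.copy()
    for _ in range(max_iter):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        J = jacobian(u)
        M = diags(1.0 / J.diagonal())   # Jacobi preconditioner
        du, info = cg(J, -r, M=M)       # preconditioned CG for the Newton step
        if info != 0:
            raise RuntimeError("CG did not converge")
        u += du
    return u

# Toy usage on A u + u^3 = f with a tridiagonal SPD matrix A (illustrative only).
n = 100
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsc()
f = np.ones(n)
u = newton_pcg(lambda u: A @ u + u**3 - f,
               lambda u: (A + diags(3.0 * u**2)).tocsc(),
               np.zeros(n))
```

The condition number of the system matrix J, and hence the CG iteration count, is the quantity the paper studies as a function of the influence-function order and horizon size.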

Cryptanalysis of LILI-128 with Overdefined Systems of Equations (과포화(Overdefined) 연립방정식을 이용한 LILI-128 스트림 암호에 대한 분석)

  • 문덕재; 홍석희; 이상진; 임종인; 은희천
    • Journal of the Korea Institute of Information Security & Cryptology / v.13 no.1 / pp.139-146 / 2003
  • In this paper we demonstrate a cryptanalysis of the stream cipher LILI-128. Our approach to analyzing LILI-128 is to solve an overdefined system of multivariate equations. The LILI-128 keystream generator [8] is an LFSR-based synchronous stream cipher with a 128-bit key. The cipher consists of two parts, the "CLOCK CONTROL" part and the "DATA GENERATION" part; we focus on the latter. This part uses the function $f_d$, which satisfies third-order correlation immunity, high nonlinearity, and balancedness. However, this function does not have a high nonlinear order (i.e., a high degree in its algebraic normal form), and we exploit this property of $f_d$. We reduce the problem of recovering the secret key of LILI-128 to the problem of solving a large overdefined system of multivariate equations of degree K=6. In our best version of the XL-based cryptanalysis we have the parameter D=7. Our fastest cryptanalysis of LILI-128 requires $2^{110.7}$ CPU clocks, and this complexity can be achieved using only $2^{26.3}$ keystream bits.
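
The core linearization idea behind such XL-style attacks can be shown in miniature: every monomial in an overdefined system over GF(2) is renamed to a fresh variable, after which plain Gaussian elimination applies. The toy system below is illustrative only; the actual attack works at degree D=7 on equations derived from $f_d$.

```python
# Toy illustration of the linearization step used by XL-style attacks:
# rename each GF(2) monomial to a fresh variable, then solve linearly.
import numpy as np

def linearize(equations):
    """equations: list of (set_of_monomials, rhs); a monomial is a frozenset
    of variable indices. Returns the GF(2) matrix, rhs, and monomial order."""
    monomials = sorted({m for eq, _ in equations for m in eq}, key=sorted)
    col = {m: j for j, m in enumerate(monomials)}
    A = np.zeros((len(equations), len(monomials)), dtype=np.uint8)
    b = np.zeros(len(equations), dtype=np.uint8)
    for i, (eq, rhs) in enumerate(equations):
        for m in eq:
            A[i, col[m]] = 1
        b[i] = rhs
    return A, b, monomials

def gauss_gf2(A, b):
    """Gaussian elimination over GF(2); free variables are set to zero."""
    A, b = A.copy(), b.copy()
    rows, cols = A.shape
    r, pivots = 0, []
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if A[i, c]), None)
        if pivot is None:
            continue
        A[[r, pivot]], b[[r, pivot]] = A[[pivot, r]], b[[pivot, r]]
        for i in range(rows):
            if i != r and A[i, c]:
                A[i] ^= A[r]
                b[i] ^= b[r]
        pivots.append(c)
        r += 1
    x = np.zeros(cols, dtype=np.uint8)
    for i, c in enumerate(pivots):
        x[c] = b[i]
    return x

# x0*x1 + x0 = 1, x0*x1 + x1 = 0, x0 + x1 = 1  =>  x0 = 1, x1 = 0
eqs = [({frozenset({0, 1}), frozenset({0})}, 1),
       ({frozenset({0, 1}), frozenset({1})}, 0),
       ({frozenset({0}), frozenset({1})}, 1)]
A, b, mons = linearize(eqs)
print(dict(zip(map(tuple, map(sorted, mons)), gauss_gf2(A, b))))
```

XL additionally multiplies each equation by all monomials up to degree D-K before linearizing, so that the expanded system has enough independent rows to solve.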

Image segmentation using fuzzy worm searching and adaptive MIN-MAX clustering based on genetic algorithm (유전 알고리즘에 기반한 퍼지 벌레 검색과 자율 적응 최소-최대 군집화를 이용한 영상 영역화)

  • Ha, Seong-Wook; Kang, Dae-Seong; Kim, Dai-Jin
    • Journal of the Korean Institute of Telematics and Electronics S / v.35S no.12 / pp.109-120 / 1998
  • An image segmentation approach based on fuzzy worm searching and a MIN-MAX clustering algorithm is proposed in this paper. The algorithm deals with fuzzy worm values and min-max nodes at a gross scene level, taking the edge information, including the fuzzy worm action and the spatial relationships of the pixels, as the parameters of its objective function. Conventional segmentation methods for edge extraction generally need mask information for the algebraic model and take long run times for the mask operation, whereas the proposed algorithm needs only a single operation thanks to the active searching of the fuzzy worms. In addition, we propose both genetic fuzzy worm searching and genetic min-max clustering, using a genetic algorithm to perform clustering and fuzzy searching on the grey-level histogram of the image for the optimum solution; this can automatically determine the size of the ranges and is both robust and fast. The simulation results showed that the proposed algorithm adaptively divided the quantized images in the histogram region and performed a single searching method, significantly alleviating the increase in computational load and memory requirements.
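
A minimal sketch of the genetic histogram-clustering idea, assuming a plain within-class-variance fitness over the grey-level histogram; the paper's actual objective additionally involves fuzzy worm values and pixel spatial relationships, which are omitted here.

```python
# Genetic search for k-1 grey-level thresholds on an image histogram (sketch).
import numpy as np

rng = np.random.default_rng(0)

def within_class_variance(hist, thresholds):
    """Weighted variance of the histogram segments cut at the thresholds."""
    edges = [0, *sorted(int(t) for t in thresholds), len(hist)]
    levels = np.arange(len(hist), dtype=float)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = hist[lo:hi].sum()
        if w > 0:
            mean = (levels[lo:hi] * hist[lo:hi]).sum() / w
            total += (((levels[lo:hi] - mean) ** 2) * hist[lo:hi]).sum()
    return total

def ga_thresholds(hist, k=3, pop=40, gens=200, mut=0.2):
    """Evolve k-1 thresholds with tournament selection, uniform crossover,
    and small integer mutation; returns the fittest individual found."""
    n = len(hist)
    population = np.sort(rng.integers(1, n, size=(pop, k - 1)), axis=1)
    for _ in range(gens):
        fit = np.array([within_class_variance(hist, ind) for ind in population])
        a, b = rng.integers(0, pop, (2, pop))
        parents = population[np.where(fit[a] < fit[b], a, b)]   # tournaments
        mask = rng.random(parents.shape) < 0.5                  # crossover
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        hit = rng.random(children.shape) < mut                  # mutation
        children = np.clip(children + hit * rng.integers(-5, 6, children.shape),
                           1, n - 1)
        population = np.sort(children, axis=1)
    return min(population, key=lambda ind: within_class_variance(hist, ind))

# Usage on an 8-bit image's histogram (image is any uint8 array):
# hist = np.bincount(image.ravel(), minlength=256).astype(float)
# thresholds = ga_thresholds(hist, k=3)
```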

The Study on the Analysis of High School Students' Misconception in the Learning of the Conic Sections (이차곡선 학습에서 고등학생들의 오개념 분석)

  • Hong, Seong-Kowan; Park, Cheol-Ho
    • School Mathematics / v.9 no.1 / pp.119-139 / 2007
  • The purpose of this study is to analyze students' misconceptions in the learning of the conic sections from a cognitive and pedagogical point of view. The conic sections are a very important concept in high school geometry. High school students approach the conic sections only from an algebraic or analytic-geometry perspective, so they hold various misconceptions about them. To achieve the purpose of this study, the following questions were investigated: First, what types of misconceptions do students have in the learning of conic sections? Second, what types of errors appear in the problem-solving process related to the conic sections? Through preliminary research, a testing worksheet, and student interviews, the causes of errors and the misconceptions about conic sections were analyzed: First, students lacked experience in constructing and manipulating the conic sections. Second, students did not link the process of construction and the application of conic sections with the equation of the tangent line to the conic sections. The conclusions of this study are: First, students should have experience manipulating and constructing the conic sections so as to understand mathematical formulas instead of memorizing them by rote. Second, as part of the process of mathematising the conic sections, students should use dynamic geometry and the process of construction in learning conic sections, and the process of construction should be linked with the equation of the tangent line to the conic sections. Third, a mathematical misconception is not a conception to be corrected but a basic conception to be developed toward a precise one.
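
For reference, the tangent-line result the abstract alludes to, stated here for the ellipse in the standard form taught at this level (the parabola and hyperbola cases are analogous):

```latex
% Tangent to the ellipse x^2/a^2 + y^2/b^2 = 1 at a point (x_0, y_0) on it:
% differentiate implicitly, then substitute the point.
\[
\frac{2x}{a^2} + \frac{2y\,y'}{b^2} = 0
\;\Longrightarrow\;
y' = -\frac{b^2 x_0}{a^2 y_0},
\qquad
\text{tangent: } \frac{x\,x_0}{a^2} + \frac{y\,y_0}{b^2} = 1.
\]
```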

Multi-Vector Document Embedding Using Semantic Decomposition of Complex Documents (복합 문서의 의미적 분해를 통한 다중 벡터 문서 임베딩 방법론)

  • Park, Jongin; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.19-41 / 2019
  • According to the rapidly increasing demand for text data analysis, research and investment in text mining are being actively conducted not only in academia but also in various industries. Text mining is generally conducted in two steps. In the first step, the text of the collected documents is tokenized and structured to convert the original documents into a computer-readable form. In the second step, tasks such as document classification, clustering, and topic modeling are conducted according to the purpose of the analysis. Until recently, text mining studies focused on applications in the second step, such as document classification, clustering, and topic modeling. However, with the discovery that the text structuring process substantially influences the quality of the analysis results, various embedding methods have been actively studied to improve the quality of analysis results by preserving the meaning of words and documents when representing text data as vectors. Unlike structured data, which can be directly fed into a variety of operations and traditional analysis techniques, unstructured text must first be structured into a form that a computer can understand. Mapping arbitrary objects into a space of a fixed dimension while maintaining their algebraic properties, for the purpose of structuring text data, is called "embedding." Recently, attempts have been made to embed not only words but also sentences, paragraphs, and entire documents in various respects. In particular, as the demand for document embedding increases rapidly, many algorithms have been developed to support it. Among them, Doc2Vec, which extends Word2Vec and embeds each document into one vector, is the most widely used. However, the traditional document embedding method represented by Doc2Vec generates a vector for each document using all the words contained in the document, with the result that the document vector is affected not only by core words but also by miscellaneous words. Additionally, traditional document embedding schemes usually map each document to a single vector, so it is difficult to accurately represent a complex document with multiple subjects as a single vector using the traditional approach. In this paper, we propose a new multi-vector document embedding method to overcome these limitations of the traditional document embedding methods. This study targets documents that explicitly separate body content and keywords; for a document without keywords, the method can be applied after extracting keywords through various analysis methods, but since this is not the core subject of the proposed method, we describe the process of applying the method to documents whose keywords are predefined in the text. The proposed method consists of (1) Parsing, (2) Word Embedding, (3) Keyword Vector Extraction, (4) Keyword Clustering, and (5) Multiple-Vector Generation. The specific process is as follows. All the text in a document is tokenized, and each token is represented as a vector with N-dimensional real values through word embedding. Then, to overcome the limitation of the traditional document embedding methods of being affected not only by core words but also by miscellaneous words, the vectors corresponding to the keywords of each document are extracted to form a set of keyword vectors for each document. Next, clustering is conducted on the set of keywords of each document to identify the multiple subjects included in the document. Finally, a multi-vector is generated from the vectors of the keywords constituting each cluster. Experiments on 3,147 academic papers revealed that the single-vector-based traditional approach cannot properly map complex documents because of interference among the subjects in each vector. With the proposed multi-vector-based method, we ascertained that complex documents can be vectorized more accurately by eliminating the interference among subjects.
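
A minimal sketch of steps (2)-(5) of the proposed pipeline, assuming gensim's Word2Vec for the word embedding and scikit-learn's KMeans for the keyword clustering; the toy corpus, keyword lists, and cluster count are illustrative stand-ins, not the paper's data or settings.

```python
# Multi-vector document embedding via keyword clustering (sketch).
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

# (1) Parsing: tokenized documents with their author-supplied keywords.
docs = [["nonlocal", "integral", "operator", "crack", "propagation"],
        ["stream", "cipher", "keystream", "crack", "cryptanalysis"]]
keywords = [["integral", "crack"], ["cipher", "crack"]]

# (2) Word embedding trained over the whole corpus.
model = Word2Vec(docs, vector_size=16, min_count=1, seed=0)

def multi_vectors(doc_keywords, n_clusters=2):
    # (3) Keyword-vector extraction: embed only the document's keywords.
    vecs = np.array([model.wv[w] for w in doc_keywords])
    k = min(n_clusters, len(vecs))
    # (4) Keyword clustering to separate the document's subjects.
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(vecs)
    # (5) Multi-vector generation: one mean vector per keyword cluster.
    return [vecs[labels == c].mean(axis=0) for c in range(k)]

for kw in keywords:
    print([v.shape for v in multi_vectors(kw)])
```

Averaging within each keyword cluster keeps each subject's vector free of the miscellaneous body words that a single whole-document vector would absorb, which is the interference effect the experiments measure.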