3-Dimensional data structuring


Financial Performance Evaluation using Self-Organizing Maps: The Case of Korean Listed Companies (자기조직화 지도를 이용한 한국 기업의 재무성과 평가)

  • 민재형;이영찬
    • Journal of the Korean Operations Research and Management Science Society / v.26 no.3 / pp.1-20 / 2001
  • The amount of financial information in large, sophisticated databases is huge and makes interfirm performance comparisons very difficult, or at least very time consuming. The purpose of this paper is to investigate whether neural networks in the form of self-organizing maps (SOM) can be successfully employed to manage this complexity for competitive financial benchmarking. The SOM is known to be very effective for visualizing results by projecting multi-dimensional financial data onto a two-dimensional output space. Using the SOM, we avoid the problems of finding an appropriate underlying distribution and functional form of the data when structuring and analyzing a large database, and we show an efficient procedure for competitive financial benchmarking by clustering firms on a two-dimensional visual space according to their respective financial competitiveness. For the empirical study, we analyze the database of annual reports of 100 Korean listed companies over the years 1998, 1999, and 2000.

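The abstract above describes projecting multi-dimensional financial data onto a two-dimensional SOM grid for visual benchmarking. A minimal NumPy sketch of that idea follows; the grid size, learning schedule, and the stand-in "financial ratio" data are illustrative assumptions, not the paper's actual settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, rows=5, cols=5, iters=500, lr0=0.5, sigma0=2.0):
    # Weights: one prototype vector per grid node
    n_features = data.shape[1]
    weights = rng.random((rows, cols, n_features))
    # Grid coordinates of every node, used for neighborhood distances
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # Best-matching unit (BMU): node whose weight is closest to x
        d = np.linalg.norm(weights - x, axis=2)
        bmu = np.unravel_index(np.argmin(d), d.shape)
        # Decay learning rate and neighborhood radius over time
        frac = t / iters
        lr = lr0 * (1 - frac)
        sigma = sigma0 * (1 - frac) + 0.5
        # Gaussian neighborhood pulls the BMU and nearby nodes toward x
        dist2 = np.sum((grid - np.array(bmu)) ** 2, axis=2)
        h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]
        weights += lr * h * (x - weights)
    return weights

def winner(weights, x):
    # Project a sample onto its 2D grid cell
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

# Toy stand-in for standardized financial ratios of 100 firms
data = rng.standard_normal((100, 4))
W = train_som(data)
cell = winner(W, data[0])  # 2D cell where this firm lands
```

After training, firms mapped to nearby cells have similar ratio profiles, which is the clustering-on-a-visual-plane idea the paper exploits for benchmarking.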

A New Focus Measure Method Based on Mathematical Morphology for 3D Shape Recovery (3차원 형상 복원을 위한 수학적 모폴로지 기반의 초점 측도 기법)

  • Mahmood, Muhammad Tariq;Choi, Young Kyu
    • KIPS Transactions on Software and Data Engineering / v.6 no.1 / pp.23-28 / 2017
  • Shape from focus (SFF) is a technique used to reconstruct the 3D shape of objects from a sequence of images obtained at different focus settings of the lens. In this paper, a new shape-from-focus method for 3D reconstruction of microscopic objects is described, based on the gradient operator of mathematical morphology. Conventionally, SFF methods use a single focus measure to assess focus quality. Due to the complex shape and texture of microscopic objects, operators based on a single measure are not sufficient, so we propose morphological operators with multiple structuring elements for computing the focus values. Finally, an optimal focus measure is obtained by combining the responses of all focus measures. The experimental results show that the proposed algorithm provides more accurate depth maps than existing methods in terms of three-dimensional shape recovery.
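The core idea above, a morphological gradient (dilation minus erosion) computed with several structuring elements and combined into one focus value, can be sketched in plain NumPy. The three structuring elements and the argmax-over-frames depth rule are illustrative choices, not the paper's exact operators:

```python
import numpy as np

def grey_morph(img, offsets, op):
    # Grey-scale dilation (op=np.max) or erosion (op=np.min)
    # over a structuring element given as (dy, dx) offsets.
    pad = 1
    p = np.pad(img, pad, mode="edge")
    shifted = [p[pad + dy:pad + dy + img.shape[0],
                 pad + dx:pad + dx + img.shape[1]]
               for dy, dx in offsets]
    return op(np.stack(shifted), axis=0)

def focus_measure(img, ses):
    # Morphological gradient per structuring element;
    # responses are summed into a combined focus value.
    fm = np.zeros_like(img, dtype=float)
    for se in ses:
        fm += grey_morph(img, se, np.max) - grey_morph(img, se, np.min)
    return fm

# Illustrative multi-structuring elements: horizontal, vertical, cross
SES = [
    [(0, -1), (0, 0), (0, 1)],
    [(-1, 0), (0, 0), (1, 0)],
    [(0, -1), (0, 0), (0, 1), (-1, 0), (1, 0)],
]

def depth_from_focus(stack):
    # For each pixel, pick the frame index with the highest focus value
    fms = np.stack([focus_measure(f, SES) for f in stack])
    return np.argmax(fms, axis=0)
```

A focus stack of F frames of shape (H, W) then yields an (H, W) depth map whose values index the best-focused frame per pixel, which is the standard SFF reconstruction step.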

Clothing-ergonomical Analysis of Wearing Test According to the Basic Slacks' Patterns (I) (슬랙스 원형에 따른 착의 평가의 피복인간공학적 연구 (제1보))

  • 김혜경;문영애
    • Journal of the Korean Society of Clothing and Textiles / v.21 no.2 / pp.396-405 / 1997
  • The purpose of this study was to investigate the wearing condition according to different basic slacks' patterns and to provide fundamental data for structuring slacks' patterns using a multi-dimensional measuring method. Three different basic slacks' patterns (A, B, C) were used, and cross-sectional measurements of six parts were analysed. The results were as follows: 1) The girths of the waist, hip, and thigh affected the degree of ease amounts. 2) The moiré pattern shapes fully supported that the relevant body parts affected the change in ease amounts. 3) Basic pattern A was expected to be suitable for standard-sized or unmarried women who had not experienced body-type change. 4) Basic patterns B and C were suitable for large-sized or married women whose body type had changed. Therefore, the crotch length and depth and the gradient of the center back line have to be set up accurately.


An Architecture of Vector Processor Concept using Dimensional Counting Mechanism of Structured Data (구조성 데이터의 입체식 계수기법에 의한 벡터 처리개념의 설계)

  • Jo, Yeong-Il;Park, Jang-Chun
    • The Transactions of the Korea Information Processing Society / v.3 no.1 / pp.167-180 / 1996
  • In a scalar-processing-oriented machine, scalar operations must be performed for vector processing as many times as there are vector components; this is the so-called vector processing mechanism under the von Neumann operational principle. Access to vector data must be performed either through explicit pointing by each instruction or through address calculation in the ALU, because the only memory-accessing counting device is the program counter (PC), which counts instructions sequentially. To compensate for this organizational hardware shortcoming of the conventional concept, we propose that an access unit be designed to address components dimensionally. The required vector structuring has to be implemented in the instruction set and performed during data-memory access, overlapped externally with the data processing unit at the same time.

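As a software analogy for the dimensional counting mechanism described above, the toy generator below plays the role of the proposed access unit: nested counters emit the full address stream of a structured operand without per-element address arithmetic in the ALU. The base address, lengths, and strides are illustrative values, not from the paper:

```python
def dimension_counter(base, lengths, strides):
    # Emits the linear address of every component of an n-D operand.
    # idx is a bank of nested counters, one per dimension; the
    # innermost counter increments fastest, like a hardware counter
    # chain with carry into the next dimension.
    idx = [0] * len(lengths)
    while True:
        yield base + sum(i * s for i, s in zip(idx, strides))
        for d in reversed(range(len(lengths))):
            idx[d] += 1
            if idx[d] < lengths[d]:
                break
            idx[d] = 0  # carry into the next-outer dimension
        else:
            return  # all counters wrapped: operand fully traversed

# Address stream for a 3x4 row-major matrix starting at address 100
addrs = list(dimension_counter(100, (3, 4), (4, 1)))
```

Changing the strides (e.g. to (1, 3)) walks the same matrix column-major, which illustrates why such a counter, rather than ALU address computation, suffices to access structured data dimensionally.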

Multi-Vector Document Embedding Using Semantic Decomposition of Complex Documents (복합 문서의 의미적 분해를 통한 다중 벡터 문서 임베딩 방법론)

  • Park, Jongin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.19-41 / 2019
  • According to the rapidly increasing demand for text data analysis, research and investment in text mining are being actively conducted not only in academia but also in various industries. Text mining is generally conducted in two steps. In the first step, the text of the collected documents is tokenized and structured to convert the original documents into a computer-readable form. In the second step, tasks such as document classification, clustering, and topic modeling are conducted according to the purpose of the analysis. Until recently, text mining research has focused on applications of the second step. However, with the discovery that the text structuring process substantially influences the quality of the analysis results, various embedding methods have been actively studied to preserve the meaning of words and documents when representing text data as vectors. Unlike structured data, which can be directly fed into a variety of operations and traditional analysis techniques, unstructured text must first be transformed into a form the computer can understand. "Embedding" refers to mapping arbitrary objects into a space of a specific dimension while maintaining their algebraic properties. Recently, attempts have been made to embed not only words but also sentences, paragraphs, and entire documents, and as the demand for document embedding increases rapidly, many algorithms have been developed to support it. Among them, doc2Vec, which extends word2Vec and embeds each document into a single vector, is the most widely used. However, traditional document embedding methods such as doc2Vec generate a vector for each document using the whole corpus of words included in the document. This causes the limitation that the document vector is affected not only by core words but also by miscellaneous words. Additionally, traditional document embedding schemes usually map each document to a single corresponding vector, so it is difficult to accurately represent a complex document with multiple subjects as a single vector.

    In this paper, we propose a new multi-vector document embedding method to overcome these limitations of traditional document embedding methods. This study targets documents that explicitly separate body content and keywords; for a document without keywords, the method can be applied after extracting keywords through various analysis methods, but since keyword extraction is not the core subject of the proposed method, we describe the process for documents whose keywords are predefined in the text. The proposed method consists of (1) Parsing, (2) Word Embedding, (3) Keyword Vector Extraction, (4) Keyword Clustering, and (5) Multiple-Vector Generation. Specifically, all text in a document is tokenized, and each token is represented as an N-dimensional real-valued vector through word embedding. Then, to overcome the limitation that the traditional document embedding method is affected not only by core words but also by miscellaneous words, the vectors corresponding to the keywords of each document are extracted to form a set of keyword vectors per document. Next, clustering is conducted on each document's keyword set to identify the multiple subjects included in the document. Finally, a multi-vector is generated from the vectors of the keywords constituting each cluster. Experiments on 3,147 academic papers revealed that the single-vector-based traditional approach cannot properly map complex documents because of interference among subjects within each vector. With the proposed multi-vector-based method, we ascertained that complex documents can be vectorized more accurately by eliminating this interference among subjects.
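The five-step pipeline above can be sketched end-to-end in NumPy. The toy embedding table, the two-topic vocabulary, and the plain k-means clustering are illustrative assumptions standing in for a trained word2vec model and the paper's actual clustering step:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy word-embedding table standing in for a trained word2vec model
VOCAB = ["bank", "loan", "credit", "soccer", "goal", "league"]
EMB = {w: rng.standard_normal(8) for w in VOCAB}
# Nudge the finance words apart so the toy clustering is well-posed
for w in ["bank", "loan", "credit"]:
    EMB[w] += np.array([5.0] + [0.0] * 7)

def kmeans(X, k, iters=20):
    # Plain Lloyd's algorithm; returns per-point labels and centroids
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(
            np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

def multi_vector_embedding(keywords, k=2):
    # Steps (3)-(5): extract the keyword vectors, cluster them to
    # identify subjects, and emit one document vector per subject
    # (here, the centroid of each keyword cluster).
    X = np.stack([EMB[w] for w in keywords])
    labels, centers = kmeans(X, k)
    return centers

# A "complex document" mixing a finance subject and a sports subject
doc_keywords = ["bank", "loan", "credit", "soccer", "goal", "league"]
vectors = multi_vector_embedding(doc_keywords, k=2)
```

Instead of one averaged vector in which the two subjects interfere, the document is represented by k vectors, one per identified subject, which is the multi-vector idea the abstract argues for.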