• Title/Summary/Keyword: Process Performance Graph


A Study on Comparing Algorithms for Boxing Motion Recognition (권투 모션 인식을 위한 알고리즘 비교 연구)

  • Han, Chang-Ho; Kim, Soon-Chul; Oh, Choon-Suk; Ryu, Young-Kee
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.8 no.6, pp.111-117, 2008
  • In this paper, we describe boxing motion recognition for use in games and animation. To recognize boxing motions we use two algorithms: principal component analysis (PCA) and dynamic time warping (DTW). PCA is the simplest of the true eigenvector-based multivariate analyses and is often used to reduce multidimensional data sets to lower dimensions for analysis. DTW is an algorithm for measuring the similarity between two sequences that may vary in time or speed. We introduce and compare the PCA and DTW algorithms, and we also describe the motion capture system developed in our research, on which the boxing motion recognition was implemented. A motion graph is created from the boxing motion data acquired by the motion capture system and is normalized in a preprocessing step. The recognition system was tested with five actors, and the results show its recognition performance.
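
The DTW algorithm mentioned in the abstract above aligns two motion sequences that differ in speed. The following minimal sketch (not the paper's implementation) computes a DTW distance between two 1-D joint-angle traces with the classic dynamic-programming recurrence; the sample sequences and the absolute-difference local cost are assumptions for illustration.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences a and b."""
    n, m = len(a), len(b)
    # cost[i][j] = minimal cumulative cost of aligning a[:i] with b[:j]
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])              # local distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Hypothetical joint-angle traces of the same punch performed at different speeds.
template = np.array([0.0, 0.4, 0.9, 1.0, 0.6, 0.1])
observed = np.array([0.0, 0.2, 0.5, 0.9, 1.0, 0.8, 0.4, 0.1])
print(dtw_distance(template, observed))
```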


An Efficient Falsification Algorithm for Logical Expressions in DNF (DNF 논리식에 대한 효율적인 반증 알고리즘)

  • Moon, Gyo-Sik
    • Journal of KIISE: Software and Applications, v.28 no.9, pp.662-668, 2001
  • Since the problem of disproving a tautology is as hard as the problem of proving it, no polynomial-time algorithm for falsification (or testing invalidity) is feasible. Previous algorithms are mostly based on either divide-and-conquer or graph representation. Most of them demonstrated satisfactory results on a variety of inputs under certain constraints; however, they have had difficulty dealing with large inputs. We propose a new falsification algorithm that uses a Merge Rule to produce a counterexample by constructing a minterm that is not satisfied by an input expression in DNF (Disjunctive Normal Form). We also show that the algorithm is consistent and sound. The algorithm is based on a greedy method that seeks to maximize the number of terms falsified by the assignment made at each step of the falsification process. Empirical results show practical performance on large inputs when falsifying randomized non-tautological problem instances, consuming O(nm^2) time, where n is the number of variables and m is the number of terms.
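
As a loose illustration of the greedy falsification idea described above (not the paper's Merge Rule itself), the sketch below tries to build a counterexample for a DNF by repeatedly choosing the variable assignment that falsifies the most remaining terms; the dictionary encoding of terms and the example formula are assumptions.

```python
def falsify_dnf(terms, variables):
    """
    Greedy attempt to find an assignment falsifying every term of a DNF.
    terms: list of dicts mapping variable -> bool (True for x, False for ~x).
    Returns an assignment dict, or None if the greedy attempt fails to
    falsify every term (i.e. no counterexample is produced this way).
    """
    assignment = {}
    falsified = [False] * len(terms)
    for _ in variables:
        best = None  # (number of newly falsified terms, variable, value)
        for var in variables:
            if var in assignment:
                continue
            for value in (False, True):
                # Terms that would become falsified by setting var := value.
                count = sum(1 for i, t in enumerate(terms)
                            if not falsified[i] and t.get(var) == (not value))
                if best is None or count > best[0]:
                    best = (count, var, value)
        _, var, value = best
        assignment[var] = value
        for i, t in enumerate(terms):
            if t.get(var) == (not value):
                falsified[i] = True
    return assignment if all(falsified) else None

# Example: (x1 & ~x2) | (x2 & x3) | (~x1 & ~x3), which is not a tautology.
terms = [{"x1": True, "x2": False}, {"x2": True, "x3": True}, {"x1": False, "x3": False}]
print(falsify_dnf(terms, ["x1", "x2", "x3"]))
```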


An Improved Automatic Text Summarization Based on Lexical Chaining Using Semantical Word Relatedness (단어 간 의미적 연관성을 고려한 어휘 체인 기반의 개선된 자동 문서요약 방법)

  • Cha, Jun Seok; Kim, Jeong In; Kim, Jung Min
    • Smart Media Journal, v.6 no.1, pp.22-29, 2017
  • Due to the recent rapid advancement and spread of smart devices, the amount of document data on the Internet is increasing sharply. This growth of information on the Web, including a massive number of documents, makes it increasingly difficult for users to digest the data, so various studies on programs that summarize documents automatically and efficiently are under way. This study uses the TextRank algorithm to summarize documents efficiently. TextRank represents sentences or keywords as a graph and estimates the importance of sentences from the graph's vertices and edges, which capture the semantic relations between words and sentences. It extracts high-ranking keywords and, based on these keywords, extracts important sentences. To extract important sentences, the algorithm first groups the vocabulary, using a specific weighting scheme. Sentences with higher weight scores are selected, and from the selected sentences the important ones are extracted to summarize the document. This study shows that the proposed process performs better than the summarization methods reported in previous research and that the algorithm can summarize documents more efficiently.
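
A minimal sketch of the TextRank-style pipeline described above, assuming a simple word-overlap similarity and the networkx PageRank implementation; the paper's actual weighting scheme based on semantic word relatedness is not reproduced here.

```python
import itertools
import networkx as nx

def summarize(sentences, top_k=2):
    """Rank sentences with TextRank-style PageRank over a similarity graph."""
    tokenized = [set(s.lower().split()) for s in sentences]
    graph = nx.Graph()
    graph.add_nodes_from(range(len(sentences)))
    for i, j in itertools.combinations(range(len(sentences)), 2):
        overlap = len(tokenized[i] & tokenized[j])
        if overlap:
            # Edge weight: shared words, normalized by sentence lengths.
            weight = overlap / (len(tokenized[i]) + len(tokenized[j]))
            graph.add_edge(i, j, weight=weight)
    scores = nx.pagerank(graph, weight="weight")
    ranked = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return [sentences[i] for i in sorted(ranked)]  # keep original order

docs = [
    "TextRank builds a graph of sentences.",
    "Edges connect sentences that share words.",
    "PageRank scores the sentences in the graph.",
    "Unrelated filler sentence about the weather.",
]
print(summarize(docs))
```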

Analysis of Accuracy and Loss Performance According to Hyperparameter in RNN Model (RNN모델에서 하이퍼파라미터 변화에 따른 정확도와 손실 성능 분석)

  • Kim, Joon-Yong; Park, Koo-Rack
    • Journal of Convergence for Information Technology, v.11 no.7, pp.31-38, 2021
  • In this paper, in order to optimize an RNN model used for sentiment analysis, we studied the behavior of each model configuration by observing the trends of loss and accuracy under hyperparameter tuning. As the research method, after configuring an embedding layer and an LSTM hidden layer, which are well suited to processing sequential data, the loss and accuracy of each model were measured while tuning the number of LSTM units, the batch size, and the embedding size. The measurements showed a loss of 41.9% and an accuracy of 11.4%, and the optimized model showed a consistently stable curve, confirming that hyperparameter tuning has a profound effect on the model. In addition, among the three hyperparameters, the choice of embedding size was confirmed to have the greatest influence on the model. In the future, this research will be continued with work on an algorithm that allows the model to find its optimal hyperparameters on its own.
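
A hedged sketch of the kind of hyperparameter sweep the abstract describes, using a Keras embedding + LSTM sentiment model; the vocabulary size, sequence length, candidate values, and dummy data are placeholders rather than the authors' configuration.

```python
import numpy as np
import tensorflow as tf

VOCAB_SIZE, SEQ_LEN = 10_000, 100  # assumed placeholders

def build_model(units, embedding_dim):
    """Embedding + LSTM + sigmoid output for binary sentiment classification."""
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(VOCAB_SIZE, embedding_dim),
        tf.keras.layers.LSTM(units),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Dummy data standing in for the sentiment corpus.
x = np.random.randint(0, VOCAB_SIZE, size=(512, SEQ_LEN))
y = np.random.randint(0, 2, size=(512,))

for units in (32, 64):
    for batch_size in (32, 64):
        for embedding_dim in (16, 32):
            model = build_model(units, embedding_dim)
            history = model.fit(x, y, batch_size=batch_size, epochs=1,
                                validation_split=0.2, verbose=0)
            print(units, batch_size, embedding_dim,
                  history.history["val_loss"][-1],
                  history.history["val_accuracy"][-1])
```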

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin; Han, Seungho; Cui, Yun; Lee, Hanku
    • Journal of Internet Computing and Services, v.14 no.6, pp.71-84, 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to customized user services. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, in existing computing environments it is difficult to realize the flexible storage expansion needed to process a massive amount of unstructured log data and to execute the considerable number of functions needed to categorize and analyze the stored unstructured log data. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to handle with the existing computing infrastructure's analysis tools and management systems. The proposed system uses an IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources and can flexibly expand resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions that allow the system to continue operating after it recovers from a malfunction. Finally, by establishing a distributed database using NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data. Further, databases with strict schemas, such as relational databases, cannot easily distribute stored data across additional nodes when the amount of data increases rapidly. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is adopted in the proposed system because its flexible schema structure makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data increases rapidly, and it provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis performed by the MongoDB module, the Hadoop-based analysis module, and the MySQL module, per analysis time and per type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in a parallel-distributed manner by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, covering log insertion and query performance, demonstrates the superiority of the proposed system. Moreover, an optimal chunk size is identified through a MongoDB log-insert performance evaluation over various chunk sizes.
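
A small pymongo sketch of the routing idea in the abstract: a log collector classifies incoming records and sends those needing real-time queries toward MySQL (stubbed here), while the bulk unstructured logs are inserted into MongoDB as schema-free documents. The connection string, collection names, and classification flag are assumptions for illustration.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

# Assumed connection settings; the real system runs on an IaaS cloud cluster.
mongo = MongoClient("mongodb://localhost:27017")
log_collection = mongo["banklogs"]["unstructured"]

def store_realtime(record):
    """Stub for the MySQL path used for real-time analysis in the paper."""
    print("would INSERT into MySQL:", record)

def collect(record):
    """Classify a log record and route it to MongoDB or the MySQL stub."""
    record = dict(record, received_at=datetime.now(timezone.utc))
    if record.get("realtime"):             # assumed classification flag
        store_realtime(record)
    else:
        log_collection.insert_one(record)  # schema-free document insert

collect({"branch": "A-101", "event": "transfer", "amount": 150000})
collect({"branch": "A-101", "event": "login_failure", "realtime": True})
```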

Web Site Keyword Selection Method by Considering Semantic Similarity Based on Word2Vec (Word2Vec 기반의 의미적 유사도를 고려한 웹사이트 키워드 선택 기법)

  • Lee, Donghun; Kim, Kwanho
    • The Journal of Society for e-Business Studies, v.23 no.2, pp.83-96, 2018
  • Extracting keywords that represent documents is very important because it enables automated services such as document search, classification, and recommendation, as well as quick delivery of document information. However, when keywords are extracted based on the frequency of words appearing in a web site's documents or on graph algorithms based on word co-occurrence, two problems arise: the web page structure potentially contains many words unrelated to the topic, and the limited performance of Korean tokenizers makes it difficult to extract semantically meaningful keywords. In this paper, we propose a method that selects candidate keywords based on semantic similarity, addressing both the failure to extract semantic keywords and the poor accuracy of Korean tokenizer analysis. Finally, we apply a filtering process that removes inconsistent keywords to extract the final semantic keywords. Experimental results on real web pages of small businesses show that the performance of the proposed method improves by 34.52% over a statistical-similarity-based keyword selection technique. Therefore, it is confirmed that considering the semantic similarity between words and removing inconsistent keywords improves the performance of keyword extraction from documents.
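
A rough gensim-based sketch of the semantic-similarity selection step described above: candidate keywords are kept only if their average Word2Vec similarity to a few seed words passes a threshold. The toy corpus, seed words, and threshold are assumptions, not the paper's settings.

```python
from gensim.models import Word2Vec

# Tiny stand-in corpus of tokenized sentences (real input would be web-page text).
corpus = [
    ["coffee", "roastery", "espresso", "beans", "menu"],
    ["espresso", "latte", "menu", "price", "coffee"],
    ["parking", "login", "cookie", "policy", "menu"],
]
model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, epochs=200, seed=1)

def select_keywords(candidates, seeds, threshold=0.0):
    """Keep candidates whose mean similarity to the seed words passes the threshold."""
    selected = []
    for word in candidates:
        if word not in model.wv:
            continue
        score = sum(model.wv.similarity(word, s) for s in seeds) / len(seeds)
        if score >= threshold:
            selected.append((word, round(float(score), 3)))
    return sorted(selected, key=lambda x: x[1], reverse=True)

print(select_keywords(["espresso", "beans", "cookie", "parking"], seeds=["coffee", "menu"]))
```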

Factors Influencing Frequency of Abnormal Peak in the Measurement of HbA1c by HPLC (HPLC법을 이용한 HbA1c 측정시 Abnormal Peak의 빈도와 원인)

  • Kim, Sun-Kyung; Bae, Ae-Young; Choi, Dae-Yong; Kim, Myung-Soo; Yoo, Kwang-Hyun; Ki, Chang-Seok
    • Korean Journal of Clinical Laboratory Science, v.37 no.2, pp.71-77, 2005
  • In October 2003 we encountered a specimen containing a hemoglobin variant known to cause interference, HbAS. It was the first case of an Hb variant since Samsung Medical Center began participating in the College of American Pathologists glycohemoglobin surveys in 1997. The purpose of this study is to share our experience with that specimen and to promote understanding of Hb variants and derivatives. We performed cross-checks of HbA1c on two instruments, the TOSOH G7 and the BIO-RAD VARIANT-T (turbo), using automated high-performance liquid chromatography (HPLC) as the analytic measurement method. HPLC provides fractional information on hemoglobin as a two-dimensional graph as well as numeric results. We have been carrying out a systematic checking process, and three specimens suspected of containing Hb variants or derivatives were found through this process. The College of American Pathologists has noted that it is important for users to be aware of the limitations of their glycohemoglobin method to avoid reporting incorrect results due to interference from hemoglobin variants or hemoglobin adducts. Therefore, laboratory detection of Hb variants and derivatives is very important, and the experience of qualified technologists with professional knowledge of Hb variants is the most important factor in finding them. Korea is racially homogeneous and is not in a region with a high detection rate of Hb variants. While 1,024 cases of Hb variants have been found in Japan, we do not have specific data on how many cases have been found in Korea. Considering the Hb variant cases in Japan, which is geographically close to Korea, it is presumed that various Hb variant cases must exist in Korea as well. If domestic laboratories establish a systematic protocol and build a network to share their experience with Hb variants, we expect that Korean Hb variants could also be added to the worldwide Hb variant list.


Development of Lightweight Composite Sub-frame in Automotive Chassis Parts Considering Structure & NVH Performance (구조 및 NVH 성능을 고려한 복합재료 서브프레임 개발)

  • Han, Doo-Heun; Ha, Sung
    • Composites Research, v.32 no.1, pp.21-28, 2019
  • Recently, in response to environmental regulations, the automobile industry has been conducting various research on the use of composite materials to increase fuel efficiency, but there has not been much research on lightweight chassis components. Therefore, the purpose of this study is to apply composite materials to the sub-frame among the chassis components in order to achieve stiffness, strength, and NVH performance equivalent to the steel sub-frame at 50% of its weight. First, the natural frequencies of steel and composite specimens were compared to assess the damping characteristics of the composite material. Then a lay-up sequence was derived to maximize the stiffness and strength of the composite sub-frame; the sequence is also chosen to avoid thermal shrinkage due to curing during manufacturing. The design was based on FEM structural analysis, and the natural frequencies and frequency response functions were confirmed through modal analysis. A prototype composite sub-frame was manufactured based on the design, and the FEM analysis was verified through a modal test. Furthermore, the sub-frame was fitted to an actual vehicle to verify its natural frequency and the interior noise and vibration response, including idling and road noise; the results were confirmed to be equivalent to those of the steel sub-frame. Finally, the weight of the composite sub-frame was confirmed to be about 50% of that of the steel sub-frame.

Implementation of High-radix Modular Exponentiator for RSA using CRT (CRT를 이용한 하이래딕스 RSA 모듈로 멱승 처리기의 구현)

  • 이석용; 김성두; 정용진
    • Journal of the Korea Institute of Information Security & Cryptology, v.10 no.4, pp.81-93, 2000
  • As a methodological approach to improving the processing performance of modular exponentiation, the primary arithmetic operation in the RSA crypto algorithm, we present a new RSA hardware architecture based on high-radix modular multiplication and the CRT (Chinese Remainder Theorem). By implementing the modular multiplier with radix-16 arithmetic, we reduce the number of PEs (Processing Elements) to a quarter of that required by the binary arithmetic scheme, which in turn reduces the number of clock cycles and the delay of the pipelining flip-flops to a quarter. Because the receiver knows p and q, the factors of N, the CRT can be applied to the decryption process. To use the CRT, we built two s/2-bit multipliers operating in parallel for decryption, which achieves four times the performance of decryption without the CRT. In the encryption phase, the two s/2-bit multipliers can be connected to form an s-bit linear multiplier for s-bit arithmetic. We limit the encryption exponent size to 17 bits to maintain high speed. We implemented a linear-array modular multiplier by horizontally projecting the DG of the Montgomery algorithm. The proposed hardware performs encryption at a 15 Mbps bit rate and decryption at 1.22 Mbps when estimated with reference to the Samsung 0.5 um CMOS standard cell library, which is the fastest among current publications.
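
The CRT speed-up described above can be illustrated in software: decrypt modulo p and q separately with half-size exponents and recombine the two residues. The sketch below uses toy parameters that are far too small for real use and only demonstrates the arithmetic, not the paper's radix-16 hardware design.

```python
def rsa_crt_decrypt(c, d, p, q):
    """Decrypt c with private exponent d using the CRT recombination."""
    dp, dq = d % (p - 1), d % (q - 1)      # half-size exponents
    q_inv = pow(q, -1, p)                  # q^{-1} mod p (Python 3.8+)
    m_p = pow(c, dp, p)                    # c^dp mod p
    m_q = pow(c, dq, q)                    # c^dq mod q
    h = (q_inv * (m_p - m_q)) % p          # Garner's recombination
    return m_q + h * q

# Toy parameters (insecure, illustration only); e = 17 mirrors the short exponent.
p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)
m = 1234
c = pow(m, e, n)                           # encryption with the short exponent
assert rsa_crt_decrypt(c, d, p, q) == m
print(rsa_crt_decrypt(c, d, p, q))
```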

Improved Social Network Analysis Method in SNS (SNS에서의 개선된 소셜 네트워크 분석 방법)

  • Sohn, Jong-Soo; Cho, Soo-Whan; Kwon, Kyung-Lag; Chung, In-Jeong
    • Journal of Intelligence and Information Systems, v.18 no.4, pp.117-127, 2012
  • Due to the recent expansion of Web 2.0-based services, along with the widespread adoption of smartphones, online social network services have become popular among users. Online social network services are online community services that enable users to communicate with each other, share information, and expand their human relationships. In a social network service, the relations between users are represented by a graph consisting of nodes and links. As the users of online social network services increase rapidly, SNS data are actively utilized in enterprise marketing, the analysis of social phenomena, and so on. Social Network Analysis (SNA) is the systematic way to analyze social relationships among the members of a social network using network theory. In general, a social network consists of nodes and arcs and is often depicted in a social network diagram, in which nodes represent individual actors within the network and arcs represent relationships between the nodes. With SNA, we can measure relationships among people, such as degree of intimacy and intensity of connection, and classify groups. Ever since Social Networking Services (SNS) drew increasing attention from millions of users, numerous studies have been conducted to analyze their user relationships and messages. The typical representative SNA methods are degree centrality, betweenness centrality, and closeness centrality. Degree centrality analysis does not consider the shortest paths between nodes; however, the shortest path is a crucial factor in betweenness centrality, closeness centrality, and other SNA methods. In previous SNA research, computation time was not a serious issue because the social networks studied were small. Unfortunately, most SNA methods require significant time to process the relevant data, which makes it difficult to apply them to the ever-increasing SNS data in social network studies. For instance, if the number of nodes in an online social network is n, the maximum number of links is n(n-1)/2; analyzing such a network quickly becomes expensive, since with 10,000 nodes the number of links can reach 49,995,000. Therefore, we propose a heuristic-based method for finding the shortest path among users in the SNS user graph. Using this shortest-path-finding method, we show how efficient our proposed approach can be by conducting betweenness centrality analysis and closeness centrality analysis, both of which are widely used in social network studies. Moreover, we devised an enhanced method that adds a best-first-search step and a preprocessing step to reduce computation time and rapidly search for shortest paths in a huge online social network. Best-first search finds the shortest path heuristically, generalizing human experience. As a large number of links is shared by only a few nodes in online social networks, most nodes have relatively few connections; as a result, a node with many connections functions as a hub node. When searching for a particular node, looking first at users with numerous links, instead of searching all users indiscriminately, has a better chance of finding the desired node quickly. In this paper, we employ the degree of a user node vn as the heuristic evaluation function in a graph G = (N, E), where N is the set of vertices and E is the set of links between distinct nodes.
As the heuristic evaluation function is used, the worst case can occur when the target node is situated at the bottom of a skewed tree; to deal with such target nodes, a preprocessing step is conducted. We then find the shortest path between two nodes in the social network efficiently and analyze the network. To verify the proposed method, we crawled 160,000 people online and constructed a social network, and then compared the proposed method with previous methods, best-first search and breadth-first search, in terms of search and analysis time. The suggested method takes 240 seconds to search nodes, whereas the breadth-first-search-based method takes 1,781 seconds, making the suggested method 7.4 times faster. Moreover, for social network analysis, the suggested method is 6.8 times and 1.8 times faster for betweenness centrality analysis and closeness centrality analysis, respectively. The proposed method shows that a large social network can be analyzed with better time performance. As a result, our method would improve the efficiency of social network analysis, making it particularly useful in studying social trends or phenomena.
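
The best-first idea in the abstract (expand high-degree "hub" users first when searching for a path) can be sketched as below; the toy graph and the use of plain node degree as the priority are assumptions for illustration, and the authors' preprocessing step is omitted.

```python
import heapq

def best_first_path(graph, start, goal):
    """Greedy best-first search that expands high-degree neighbors first.
    graph: dict mapping node -> set of adjacent nodes."""
    frontier = [(-len(graph[start]), start, [start])]  # (priority, node, path)
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt in graph[node]:
            if nxt not in visited:
                visited.add(nxt)
                # Prefer hub nodes: higher degree gives a smaller (more negative) key.
                heapq.heappush(frontier, (-len(graph[nxt]), nxt, path + [nxt]))
    return None

# Toy social graph in which "hub" bridges two clusters of users.
graph = {
    "a": {"b", "hub"}, "b": {"a", "hub"}, "hub": {"a", "b", "c", "d"},
    "c": {"hub", "d"}, "d": {"hub", "c", "e"}, "e": {"d"},
}
print(best_first_path(graph, "a", "e"))
```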