• Title/Summary/Keyword: computing speed


Design and Implementation of High-Resolution Image Transmission Interface for Mobile Device (모바일용 고화질 영상 전송 인터페이스의 설계 및 구현)

  • Ahn, Yong-Beom;Lee, Sang-Wook;Kim, Eung-Soo
    • Journal of the Korea Institute of Information and Communication Engineering / v.11 no.8 / pp.1511-1518 / 2007
  • As studies on ubiquitous computing have been actively conducted, the demand for various services, including image transmission, storage, search, and remote monitoring, has been expanding from PCs into the mobile environment. While CCTV (closed-circuit TV) and DVR (digital video recording) systems are used in places that require security services such as intrusion detection, they are high-end equipment, so it is not easy for ordinary users, households, or small companies to use them. They are also difficult to carry, and existing camera solutions for mobile devices do not support high-quality capture, providing only low-definition QVGA picture quality. Therefore, this study describes the design and implementation of an embedded high-definition image-transmission system for ubiquitous mobile devices that is not inferior to a PC or DVR. To this end, the use of a CPU dedicated to mobile devices and the design and implementation of an MPEG-4 hardware CODEC are also examined. The implemented system showed excellent performance in the mobile environment in terms of both speed and picture quality.

Big Data Processing and Performance Improvement for Ship Trajectory using MapReduce Technique

  • Kim, Kwang-Il;Kim, Joo-Sung
    • Journal of the Korea Society of Computer and Information / v.24 no.10 / pp.65-70 / 2019
  • Recently, ship trajectory data consisting of ship position, speed, course, and so on have become available from the Automatic Identification System (AIS) device with which all ships must be equipped. More than 2 GB of these data are gathered every day at a crowded seaport and used for the analysis of ship traffic statistics and patterns. In this study, we propose a method to process ship trajectory data efficiently on distributed computing resources using the MapReduce algorithm. In the preprocessing phase, ship dynamic and static data are integrated into the target dataset, and trajectories that are not of interest are filtered out. In the mapping phase, we convert each ship's position to a Geohash code and emit the Geohash and the ship's MMSI as the key and value. In the reducing phase, key-value pairs are grouped by key and the ship traffic in each grid cell is counted. To evaluate the proposed method, we implemented it and compared its performance with the IALA Waterway Risk Assessment Program (IWRAP). The data-processing performance improved by a factor of 1 to 4 over the existing ship trajectory analysis program.
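
A minimal Python sketch of the map and reduce phases described in this abstract. The grid-encoding function below is a simplified stand-in for a real Geohash encoder, and the AIS record layout (MMSI, latitude, longitude) is an assumption for illustration.

```python
from itertools import groupby
from operator import itemgetter

def encode_cell(lat: float, lon: float, precision: float = 0.01) -> str:
    """Simplified stand-in for Geohash: quantize lat/lon to a grid-cell id."""
    return f"{round(lat / precision)}:{round(lon / precision)}"

def map_phase(records):
    """Map each AIS record (mmsi, lat, lon) to (cell_key, mmsi)."""
    for mmsi, lat, lon in records:
        yield encode_cell(lat, lon), mmsi

def reduce_phase(pairs):
    """Group pairs by cell key and count distinct ships per grid cell."""
    pairs = sorted(pairs, key=itemgetter(0))          # shuffle/sort step
    for key, group in groupby(pairs, key=itemgetter(0)):
        ships = {mmsi for _, mmsi in group}
        yield key, len(ships)

# Hypothetical AIS records: (MMSI, latitude, longitude)
records = [(440123456, 35.101, 129.042),
           (440123456, 35.102, 129.043),
           (441987654, 35.101, 129.041)]
for cell, count in reduce_phase(map_phase(records)):
    print(cell, count)
```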

Detection Method of Vehicle Fuel-cut Driving with Deep-learning Technique (딥러닝 기법을 이용한 차량 연료차단 주행의 감지법)

  • Ko, Kwang-Ho
    • Journal of the Korea Convergence Society / v.10 no.11 / pp.327-333 / 2019
  • Fuel-cut driving begins when the accelerator pedal is released while a transmission gear is engaged, and active fuel-cut driving improves the fuel economy of a vehicle. In this study, a deep-learning technique is proposed to predict fuel-cut driving from vehicle speed, acceleration, and road-gradient data. Networks with 3 to 10 hidden layers and 10 to 20 input variables were applied to 9,600 data points obtained from test driving a vehicle on a 12 km road. The accuracy is about 84.5% with 10 variables, 7 hidden layers, and ReLU as the activation function. The remaining error is attributed to the fact that the input data change at a higher rate than the fuel-consumption data, so the accuracy could be improved by normalizing the input data. The method needs no signal from the vehicle's injector or OBD port, since the deep-learning technique is applied to data that are easy to obtain, such as GPS data, and its small computing time means it can contribute to eco-driving.
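
A minimal scikit-learn sketch of the classifier configuration reported above: 7 hidden layers with ReLU activation, plus the input normalization the authors suggest. The sketch uses only the three base signals (speed, acceleration, gradient) with placeholder labels, whereas the paper's best model used 10 variables; the layer widths and data are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical data: columns = [speed, acceleration, road gradient],
# label = 1 if the engine is in fuel cut, else 0 (placeholder rule).
rng = np.random.default_rng(0)
X = rng.normal(size=(9600, 3))
y = (X[:, 1] < -0.3).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# 7 hidden layers with ReLU, as in the reported best configuration;
# StandardScaler plays the role of the suggested input normalization.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,) * 7, activation="relu",
                  max_iter=500, random_state=0),
)
model.fit(X_tr, y_tr)
print("accuracy:", model.score(X_te, y_te))
```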

Effectiveness Analysis for Survival Probability of a Surface Warship Considering Static and Mobile Decoys (부유식 및 자항식 기만기의 혼합 운용을 고려한 수상함의 생존율에 대한 효과도 분석)

  • Shin, MyoungIn;Cho, Hyunjin;Lee, Jinho;Lim, Jun-Seok;Lee, Seokjin;Kim, Wan-Jin;Kim, Woo Shik;Hong, Wooyoung
    • Journal of the Korea Society for Simulation / v.25 no.3 / pp.53-63 / 2016
  • We present a simulation study that combines static and mobile decoys for the survivability of a surface warship against torpedo attack. The enemy torpedo is assumed to be a passive acoustic homing torpedo that detects a target within its maximum target-detection range and search beam angle by computing the signal excess via the passive sonar equation, while the warship conducts an evasive maneuver and simultaneously deploys static and mobile decoys to counter the attack. Proposing four different decoy deployment plans in order to identify the best one, we analyze their effectiveness in terms of the warship's survival probability through Monte Carlo simulation in a given experimental environment. Furthermore, by varying the speed and source level of the decoys, the warship's maximum torpedo-detection range, and the torpedo's maximum target-detection range, we observe the corresponding survival probabilities, which can inform the operational capabilities of an underwater defense system.
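
The detection test at the core of this simulation is the passive sonar equation. The sketch below computes a signal excess and estimates a detection probability over Monte Carlo trials; the spherical-spreading transmission-loss model and all parameter values are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(42)

def signal_excess(r, sl, nl=60.0, di=15.0, dt=10.0):
    """Passive sonar equation: SE = SL - TL - (NL - DI) - DT,
    with spherical-spreading transmission loss TL = 20*log10(r)."""
    tl = 20.0 * np.log10(np.maximum(r, 1.0))
    return sl - tl - (nl - di) - dt

def detected(target_pos, torpedo_pos, heading, sl,
             max_range=5000.0, half_beam_deg=30.0):
    """Detected if SE > 0, inside the search beam, and within max range."""
    d = target_pos - torpedo_pos
    r = np.hypot(*d)
    bearing = np.degrees(np.arctan2(d[1], d[0]))
    off_axis = (bearing - heading + 180.0) % 360.0 - 180.0
    return (r <= max_range and abs(off_axis) <= half_beam_deg
            and signal_excess(r, sl) > 0.0)

# Monte Carlo over random target positions; source level sl in dB (assumed).
trials = 10000
hits = sum(
    detected(rng.uniform(-4000, 4000, size=2), np.zeros(2),
             heading=0.0, sl=140.0)
    for _ in range(trials)
)
print("detection probability:", hits / trials)
```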

Parallel Structure Design Method for Mass Spring Simulation (질량스프링 시뮬레이션을 위한 병렬 구조 설계 방법)

  • Sung, Nak-Jun;Choi, Yoo-Joo;Hong, Min
    • Journal of the Korea Computer Graphics Society / v.25 no.3 / pp.55-63 / 2019
  • Recently, GPU computing has been utilized to improve the performance of physics simulation. In particular, deformable-object simulation requires a large amount of computation, so a GPU-based parallel-processing algorithm is needed to guarantee real-time performance. We studied a parallel structure design method to improve the performance of mass-spring simulation, one of the standard methods for implementing deformable-object simulation. We used GLSL, the OpenGL shading language that allows direct access to the GPU, and implemented a GPGPU environment using the compute shader, an independent pipeline stage. To verify the effectiveness of the parallel design, the mass-spring system was implemented on both the CPU and the GPU. Experimental results show that the proposed method improves computation speed by about 6,000% compared to the CPU environment. This lightweight simulation technique is expected to apply effectively in augmented- and virtual-reality applications.
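
A CPU-side numpy sketch of the per-spring force kernel that this kind of design moves into a GLSL compute shader, where each spring (or mass) maps to one shader invocation. Stiffness, damping, and the toy topology below are assumed values.

```python
import numpy as np

def spring_forces(pos, vel, springs, rest, k=100.0, c=0.5):
    """Hooke + damping force for every spring; in a GPU version each
    spring is evaluated by one compute-shader invocation."""
    i, j = springs[:, 0], springs[:, 1]
    d = pos[j] - pos[i]
    length = np.linalg.norm(d, axis=1, keepdims=True)
    dirs = d / np.maximum(length, 1e-9)
    f = k * (length - rest[:, None]) * dirs                # Hooke's law
    f += c * ((vel[j] - vel[i]) * dirs).sum(1, keepdims=True) * dirs  # damping
    forces = np.zeros_like(pos)
    np.add.at(forces, i, f)      # scatter-add, like atomicAdd in GLSL
    np.add.at(forces, j, -f)
    return forces

# A 2-mass, 1-spring toy system (assumed values).
pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
vel = np.zeros_like(pos)
springs = np.array([[0, 1]])
rest = np.array([1.0])
print(spring_forces(pos, vel, springs, rest))
```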

Development of Supervised Machine Learning based Catalog Entry Classification and Recommendation System (지도학습 머신러닝 기반 카테고리 목록 분류 및 추천 시스템 구현)

  • Lee, Hyung-Woo
    • Journal of Internet Computing and Services / v.20 no.1 / pp.57-65 / 2019
  • Domeggook, a B2B online shopping mall, has a market share of over 70%, with more than 2 million members and 800,000 items sold per day. However, because identical or similar items are registered under different catalog entries, it is difficult for buyers to search for items, and managing such a large B2B shopping mall is likewise problematic. Therefore, in this study, we developed an automatic catalog-entry classification and recommendation system using a supervised machine-learning method trained on the mall's large volume of past purchase information. Specifically, when a seller enters item-registration information in natural language, KoNLPy morphological analysis is performed and the Naïve Bayes classification method is applied to recommend the most suitable catalog entry for the item automatically. As a result, both the search speed and the total sales of the shopping mall could be improved by classifying catalog entries accurately and efficiently.
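
A minimal sketch of the pipeline named in this abstract: KoNLPy morphological analysis feeding a Naïve Bayes classifier. The toy item titles and category labels are invented for illustration; the real system is trained on the mall's purchase history.

```python
from konlpy.tag import Okt
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

okt = Okt()

# Hypothetical training data: item registration text -> catalog entry.
titles = ["무선 블루투스 이어폰", "유선 이어폰 화이트",
          "스테인리스 보온 텀블러", "보온 보냉 텀블러 500ml"]
catalog = ["이어폰", "이어폰", "텀블러", "텀블러"]

# KoNLPy morphological analysis as the tokenizer, Naive Bayes as the model.
model = make_pipeline(
    CountVectorizer(tokenizer=okt.morphs, token_pattern=None),
    MultinomialNB(),
)
model.fit(titles, catalog)

# Recommend the most suitable catalog entry for a new item description.
print(model.predict(["블루투스 무선 이어폰 블랙"])[0])   # -> 이어폰
```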

Feature-selection algorithm based on genetic algorithms using unstructured data for attack mail identification (공격 메일 식별을 위한 비정형 데이터를 사용한 유전자 알고리즘 기반의 특징선택 알고리즘)

  • Hong, Sung-Sam;Kim, Dong-Wook;Han, Myung-Mook
    • Journal of Internet Computing and Services / v.20 no.1 / pp.1-10 / 2019
  • Because big-data text mining extracts many features from large volumes of data, clustering and classification can suffer from high computational complexity and low reliability of the analysis results. In particular, the term-document matrix obtained through text mining represents term-document features but is a sparse matrix. We designed an advanced genetic algorithm (GA) to select features in text mining for a detection model. Term frequency-inverse document frequency (TF-IDF) is used to reflect document-term relationships in feature selection, and a predetermined number of features is selected through a repetitive process. In addition, we used a sparsity score to improve the performance of the detection model: when a spam-mail data set is highly sparse, the detection model performs poorly and it is hard to find an optimal model. We therefore search for low-sparsity feature sets that still have high TF-IDF scores by using s(F) as the numerator of the fitness function. We verified the approach by applying the proposed algorithm to text classification; as a result, our algorithm shows higher performance, in both speed and accuracy, in attack-mail classification.
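
A compact sketch of GA-based feature selection in the spirit described above: binary chromosomes select a fixed number of term features, and the fitness rewards TF-IDF mass (an s(F)-style numerator) while penalizing sparsity. The exact fitness function, GA operators, and corpus are assumptions.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

rng = np.random.default_rng(0)

docs = ["win money now", "meeting at noon", "free money offer",
        "project status meeting", "free offer win"]          # toy corpus
X = TfidfVectorizer().fit_transform(docs).toarray()
n_feats, n_select = X.shape[1], 4

def fitness(mask):
    """TF-IDF mass of the selected terms (the s(F)-style numerator),
    penalized by the sparsity of the reduced term-document matrix."""
    sub = X[:, mask]
    sparsity = (sub == 0).mean()
    return sub.sum() / (1.0 + sparsity)

def random_mask():
    m = np.zeros(n_feats, dtype=bool)
    m[rng.choice(n_feats, n_select, replace=False)] = True
    return m

pop = [random_mask() for _ in range(20)]
for _ in range(30):                      # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = []
    for _ in range(10):                  # crossover: union, then trim to size
        a, b = rng.choice(10, 2, replace=False)
        union = np.flatnonzero(parents[a] | parents[b])
        keep = rng.choice(union, n_select, replace=False)
        child = np.zeros(n_feats, dtype=bool)
        child[keep] = True
        if rng.random() < 0.2:           # mutation: swap one feature
            child[rng.choice(np.flatnonzero(child))] = False
            child[rng.choice(np.flatnonzero(~child))] = True
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print("selected feature indices:", np.flatnonzero(best))
```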

A Malware Detection Method using Analysis of Malicious Script Patterns (악성 스크립트 패턴 분석을 통한 악성코드 탐지 기법)

  • Lee, Yong-Joon;Lee, Chang-Beom
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.7 / pp.613-621 / 2019
  • Recently, with the development of Internet of Things (IoT) and cloud computing technologies, security threats have increased: malicious code infects IoT devices, and new malware spreads ransomware to cloud servers. In this study, we propose a threat-detection technique that checks obfuscated script patterns to compensate for the shortcomings of conventional signature-based and behavior-based detection methods. The proposed technique analyzes the types of malicious scripts distributed through websites, then registers and checks the derived distribution patterns, so it can detect zero-day attacks while maintaining the existing detection rate. To verify its performance, we developed a prototype system, collected a total of 390 malicious websites, and experimented with the 10 major malicious script-distribution patterns derived from the analysis. The technique showed an average detection rate of about 86% across all items while maintaining the existing rule-based detection speed and also detecting zero-day attacks.
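
An illustrative sketch of the pattern-checking step: a handful of regular expressions for common obfuscated-script constructs are matched against page content. These example patterns merely stand in for the 10 distribution patterns derived in the paper, which the abstract does not list.

```python
import re

# Example obfuscation/distribution patterns (illustrative, not the paper's 10).
PATTERNS = {
    "eval-unescape":  re.compile(r"eval\s*\(\s*unescape\s*\(", re.I),
    "fromCharCode":   re.compile(r"String\.fromCharCode\s*\((?:\s*\d+\s*,){8,}", re.I),
    "hidden-iframe":  re.compile(r"<iframe[^>]+(?:width|height)\s*=\s*[\"']?0", re.I),
    "long-hex-blob":  re.compile(r"(?:%[0-9a-f]{2}){40,}", re.I),
    "document-write": re.compile(r"document\.write\s*\(\s*unescape\s*\(", re.I),
}

def scan(page_source: str) -> list[str]:
    """Return the names of all registered patterns found in the page."""
    return [name for name, rx in PATTERNS.items() if rx.search(page_source)]

sample = '<script>document.write(unescape("%3C%69%66%72..."))</script>'
print(scan(sample))   # -> ['document-write']
```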

Parallelization of Genome Sequence Data Pre-Processing on Big Data and HPC Framework (빅데이터 및 고성능컴퓨팅 프레임워크를 활용한 유전체 데이터 전처리 과정의 병렬화)

  • Byun, Eun-Kyu;Kwak, Jae-Hyuck;Mun, Jihyeob
    • KIPS Transactions on Computer and Communication Systems / v.8 no.10 / pp.231-238 / 2019
  • Analyzing next-generation genome sequencing data in the conventional way on a single server may take several tens of hours, depending on the data size. To cope with emergencies in which the results must be known within a few hours, however, the performance of single-genome analysis needs to be improved. In this paper, we propose a parallelized method for pre-processing genome sequence data that reduces analysis time by utilizing big-data technology and a high-performance computing cluster connected by a high-speed network and sharing a parallel file system. For the reliability of the analysis results, we chose the strategy of porting the existing analysis tools and algorithms to the new parallel environment rather than replacing them. Parallelized processing, data distribution, and parallel merging techniques were developed, and the performance improvements were confirmed through experiments.
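
A minimal sketch of the split/process/merge pattern the paper applies: a large FASTQ file is divided into chunks, each chunk is pre-processed by a worker in parallel, and the partial outputs are merged. The chunking by 4-line records, the file names, and the trivial per-read filter are placeholders for the real pipeline tools.

```python
from multiprocessing import Pool
from itertools import islice

def read_fastq_chunks(path, reads_per_chunk=100_000):
    """Yield lists of FASTQ records (4 lines each) of bounded size."""
    with open(path) as f:
        while True:
            chunk = list(islice(f, 4 * reads_per_chunk))
            if not chunk:
                return
            yield chunk

def preprocess(chunk):
    """Placeholder per-chunk work: drop reads whose quality line is too
    short. In the real pipeline this is the existing pre-processing tool."""
    out = []
    for i in range(0, len(chunk), 4):
        rec = chunk[i:i + 4]
        if len(rec) == 4 and len(rec[3].strip()) >= 30:
            out.extend(rec)
    return out

if __name__ == "__main__":
    with Pool(processes=8) as pool:             # data distribution to workers
        parts = pool.map(preprocess, read_fastq_chunks("sample.fastq"))
    with open("filtered.fastq", "w") as out:    # merge of partial outputs
        for part in parts:
            out.writelines(part)
```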

Extremely High-Definition Computer Generated Hologram Calculation Algorithm with Concave Lens Function (오목 렌즈 함수를 이용한 초 고해상도 Computer generated hologram 생성 기법)

  • Lee, Chang-Joo;Choi, Woo-Young;Oh, Kwan-Jung;Hong, Keehoon;Choi, Kihong;Cheon, Sang-Hoon;Park, Joongki;Lee, Seung-Yeol
    • Journal of Broadcast Engineering / v.25 no.6 / pp.836-844 / 2020
  • A very large number of pixels is required to generate a computer-generated hologram (CGH) whose size and viewing angle are equivalent to those of an analog hologram, which entails a very large amount of computation. For this reason, high-performance computing hardware and long computation times have been needed to generate high-definition CGHs. To solve these problems, in this paper we propose a technique that generates a high-definition CGH by arranging a pre-calculated low-definition CGH into an array and multiplying it by an appropriately shifted concave-lens function. Using the proposed technique, a 0.1-gigapixel CGH recorded by the point-cloud method can be used to calculate a 2.5-gigapixel CGH at very high speed, and the recorded hologram image was successfully reconstructed in experiments.
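
A numerical sketch of the core idea: a pre-computed low-resolution CGH is tiled into a large array and multiplied by a standard thin-lens phase function with a negative focal length (a concave lens) evaluated on the full high-resolution grid. The wavelength, focal length, and pixel pitch are assumed values, and the per-tile shift handling is simplified relative to the paper.

```python
import numpy as np

# Assumed optical parameters.
wavelength = 532e-9          # m
focal = -0.5                 # negative focal length -> concave (diverging) lens
pitch = 8e-6                 # pixel pitch, m

# Pre-calculated low-definition CGH (random stand-in for a point-cloud CGH).
rng = np.random.default_rng(1)
low = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(256, 256)))

# Tile the low-definition CGH into a 4x4 array -> higher-definition canvas.
tiles = 4
high = np.tile(low, (tiles, tiles))

# Thin-lens phase on the full high-resolution grid:
# t(x, y) = exp(-i * pi * (x^2 + y^2) / (lambda * f)), with f < 0 here.
n = high.shape[0]
coords = (np.arange(n) - n / 2) * pitch
x, y = np.meshgrid(coords, coords)
lens = np.exp(-1j * np.pi * (x**2 + y**2) / (wavelength * focal))

cgh = high * lens            # high-definition CGH from low-definition tiles
print(cgh.shape)             # (1024, 1024)
```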