• Title/Summary/Keyword: block processing

Search results: 1,476

Design and Development of Modular Replaceable AI Server for Image Deep Learning in Social Robots on Edge Devices (엣지 디바이스인 소셜 로봇에서의 영상 딥러닝을 위한 모듈 교체형 인공지능 서버 설계 및 개발)

  • Kang, A-Reum;Oh, Hyun-Jeong;Kim, Do-Yun;Jeong, Gu-Min
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.6 / pp.470-476 / 2020
  • In this paper, we present the design of a modular replaceable AI server for image deep learning that separates the server from the edge device so that the AI blocks run on the server, together with a method for data transmission and reception. The modular replaceable AI server reduces the dependency between the social robot and the edge device on which the robot platform runs, improving operational stability. When the robot requests a function from the AI server during human-robot interaction, the corresponding module performs the inference and returns only the result (see the sketch below). Because functions are modularized, the server manager can maintain and replace each module independently. Compared with existing server systems, the modular replaceable AI server performs more efficiently in terms of server maintenance and in accommodating programs of different scale. As a result, a wider variety of image deep-learning functions can be included in human-robot interaction scenarios, and the approach can also be applied efficiently to image deep-learning AI servers beyond robot platforms.
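
A minimal sketch of the "return only results" pattern the abstract describes: the edge device (social robot) posts an image to a named module on the AI server and receives just the inference result. The Flask framework, route names, and dummy handlers are illustrative assumptions, not the authors' implementation.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

# Registry of replaceable modules: swapping a deep-learning model only
# requires re-registering its handler; the robot-side code is unchanged.
def detect_faces(image_bytes):          # placeholder for a real model
    return {"faces": 1}

def recognize_objects(image_bytes):     # placeholder for a real model
    return {"objects": ["cup"]}

MODULES = {"face": detect_faces, "object": recognize_objects}

@app.route("/infer/<module_name>", methods=["POST"])
def infer(module_name):
    handler = MODULES.get(module_name)
    if handler is None:
        return jsonify({"error": "unknown module"}), 404
    result = handler(request.get_data())   # raw image bytes from the robot
    return jsonify(result)                 # only the result goes back

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```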

HFN-Based Right Management for IoT Health Data Sharing (IoT 헬스 데이터 공유를 위한 HFN 기반 권한 관리)

  • Kim, Mi-sun;Park, Yongsuk;Seo, Jae-Hyun
    • Smart Media Journal / v.10 no.1 / pp.88-98 / 2021
  • As blockchain has emerged as a way to address IoT security issues, technologies that integrate blockchain into IoT are being studied. This paper presents research on token-based access control for IoT data-sharing services and proposes an owner-centered data-sharing technique using a permissioned blockchain. To share IoT health data, a Hyperledger Fabric Network (HFN) consisting of three organizations was designed to provide data sharing by applying, for each service, different access control policies centered on the device owner. In the proposed system, the device owner issues access control tokens with different security levels to the participants in each organization, and the token issuance information is shared through the distributed ledger of the HFN. On the IoT side, granting tokens to service requesters who ask for data access keeps the access control processing on IoT devices lightweight. Furthermore, by sharing token issuance information among network participants through the HFN, the integrity of each token is guaranteed and all network participants can trust it. Device owners can be confident that their data is used only within the rights they granted, and they retain control over the collection and use of their data.
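
An illustrative sketch (not the authors' chaincode) of the token model the abstract describes: the device owner issues access tokens with different security levels, the issuance record is what would be shared on the ledger, and a lightweight check on the device side needs only the token fields. Field names and level semantics are assumptions.

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class AccessToken:
    token_id: str
    owner: str            # device owner who issued the token
    subject: str          # service / participant receiving access
    resource: str         # e.g. "health/heart_rate"
    security_level: int   # e.g. 1 = aggregate only, 3 = raw stream (assumed)
    expires_at: float

def issue_token(owner, subject, resource, level, ttl_s=3600):
    body = f"{owner}|{subject}|{resource}|{level}|{time.time()}"
    return AccessToken(hashlib.sha256(body.encode()).hexdigest()[:16],
                       owner, subject, resource, level, time.time() + ttl_s)

def ledger_entry(token):
    # The issuance record that would be written to the HFN distributed
    # ledger so every organization can verify the same token.
    return json.dumps(asdict(token), sort_keys=True)

def authorize(token, resource, required_level):
    # Lightweight device-side check: no heavy policy engine required.
    return (token.resource == resource
            and token.security_level >= required_level
            and token.expires_at > time.time())

tok = issue_token("owner-A", "clinic-service", "health/heart_rate", 2)
print(ledger_entry(tok))
print(authorize(tok, "health/heart_rate", required_level=2))  # True
```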

Development of The Safe Driving Reward System for Truck Digital Tachograph using Hyperledger Fabric (하이퍼레저 패브릭을 이용한 화물차 디지털 운행기록 단말기의 안전운행 보상시스템 구현)

  • Kim, Yong-bae;Back, Juyong;Kim, Jongweon
    • Journal of Internet Computing and Services / v.23 no.3 / pp.47-56 / 2022
  • The safe driving reward system aims to reduce the loss of life and property by lowering the number of accidents: it motivates safe driving and encourages active participation by providing direct rewards to drivers who drive safely. Whereas the existing digital tachograph aims to limit dangerous driving by recording the vehicle's driving status, the safe driving reward system is a complementary measure that increases the accident-prevention effect by inducing safe driving with a financial reward. In areas with many speeding-related accidents, for example, direct rewards are provided when safe-driving instructions such as observing the speed limit, keeping the distance between vehicles, and staying in the designated lane are followed, motivating safe driving and preventing traffic accidents. Since these safe-driving records and reward histories must be managed transparently and safely, the reward evidence and histories were built on the permissioned blockchain Hyperledger Fabric. While a blockchain system guarantees transparency and safety, low data-processing speed is a problem: in this study the sequential block generation speed was as low as 10 TPS (transactions per second), and after applying an acceleration function a high-performance network of 1,000 TPS or more was implemented.
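
A minimal sketch of the kind of safe-driving check that could precede a reward transaction on the ledger. The record fields, thresholds, and reward formula are illustrative assumptions, not the paper's parameters.

```python
def reward_for_trip(records, speed_limit_kmh=90, min_headway_m=50):
    """records: list of tachograph samples, e.g.
       {"speed": 85, "headway_m": 62, "in_designated_lane": True}"""
    compliant = [r for r in records
                 if r["speed"] <= speed_limit_kmh
                 and r["headway_m"] >= min_headway_m
                 and r["in_designated_lane"]]
    ratio = len(compliant) / len(records)
    # Reward only near-complete compliance; amount scales with trip length
    # (hypothetical rule for illustration).
    return round(len(records) * 0.1, 2) if ratio >= 0.95 else 0.0

trip = [{"speed": 85, "headway_m": 62, "in_designated_lane": True}] * 100
print(reward_for_trip(trip))   # 10.0 (hypothetical reward units)
```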

Blocking Intelligent DoS Attacks with SDN (SDN과 허니팟 기반 동적 파라미터 조절을 통한 지능적 서비스 거부 공격 차단)

  • Yun, Junhyeok;Mun, Sungsik;Kim, Mihui
    • KIPS Transactions on Computer and Communication Systems / v.11 no.1 / pp.23-34 / 2022
  • With the development of network technology, application areas have diversified, protocols for various purposes have been developed, and the amount of traffic has exploded. It is therefore difficult for network administrators to meet network stability and security requirements with traditional switching and routing alone. Software-Defined Networking (SDN) is a new networking paradigm proposed to solve this problem: it enables efficient network management by making network behavior programmable, so administrators can respond flexibly to various types of attacks. In this paper, we design a threat-level management module, an attack detection module, a packet statistics module, and a flow rule generator that collect attack information through the SDN controller and switches and detect attacks based on that information, and we propose a method for blocking denial-of-service (DoS) attacks by advanced attackers by programmatically applying a honeypot. In the proposed system, attack packets can be quickly delivered to the honeypot according to modifiable flow rules (a flow-rule sketch follows below), and the honeypot that receives them analyzes the intelligent attack pattern. Based on the analysis results, the attack detection module and the threat-level management module are adjusted to respond to intelligent attacks. The performance and feasibility of the proposed system were demonstrated by implementing it, performing intelligent attacks with various patterns and levels, and comparing the attack detection rate with that of an existing system.
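
A sketch of the redirect-to-honeypot flow rule described above, assuming a Ryu-style OpenFlow 1.3 controller. The honeypot switch port, priority, and the caller that decides when a source is "suspicious" are assumptions, not the authors' code.

```python
HONEYPOT_PORT = 7   # switch port the honeypot is attached to (assumed)

def redirect_to_honeypot(datapath, src_ip, priority=100, idle_timeout=60):
    """datapath: the Ryu Datapath object received in an OpenFlow event handler."""
    parser = datapath.ofproto_parser
    ofproto = datapath.ofproto
    # Match IPv4 traffic from the suspected attacker.
    match = parser.OFPMatch(eth_type=0x0800, ipv4_src=src_ip)
    # Forward matching packets to the honeypot instead of the real service.
    actions = [parser.OFPActionOutput(HONEYPOT_PORT)]
    inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]
    mod = parser.OFPFlowMod(datapath=datapath, priority=priority,
                            idle_timeout=idle_timeout,
                            match=match, instructions=inst)
    datapath.send_msg(mod)
```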

Analysis of Grover Attack Cost and Post-Quantum Security Strength Evaluation for Lightweight Cipher SPARKLE SCHWAEMM (경량암호 SPARKLE SCHWAEMM에 대한 Grover 공격 비용 분석 및 양자 후 보안 강도 평가)

  • Yang, Yu Jin;Jang, Kyung Bae;Kim, Hyun Ji;Song, Gyung Ju;Lim, Se Jin;Seo, Hwa Jeong
    • KIPS Transactions on Computer and Communication Systems / v.11 no.12 / pp.453-460 / 2022
  • As high-performance quantum computers are expected to be developed, studies are being actively conducted to build post-quantum security systems that are safe from potential quantum computer attacks. When Grover's algorithm, a representative quantum algorithm, is used to search for the secret key of a symmetric-key cipher, the effective security strength of the cipher can be reduced to its square root. As a post-quantum security requirement for symmetric-key cryptography, NIST presents security strengths estimated from the cost of the Grover attack on each cryptographic algorithm, and that cost is determined by the quantum circuit complexity of the corresponding encryption algorithm. In this paper, the quantum circuit of SCHWAEMM, the AEAD family of SPARKLE, a finalist in NIST's lightweight cryptography competition, is implemented efficiently, and the quantum cost of applying Grover's algorithm is analyzed. The costs obtained with the CDKM ripple-carry adder and with the unbounded fan-out adder are compared. Finally, we evaluate the post-quantum security strength of the lightweight cipher SPARKLE SCHWAEMM based on the analyzed cost and NIST's post-quantum security requirements. The quantum programming tool ProjectQ is used to implement the quantum circuits and analyze their cost (a toy resource-counting sketch follows below).
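
A toy sketch of the cost-analysis workflow: build a small reversible component in ProjectQ and read gate counts from a ResourceCounter backend. The 2-bit add-into-register circuit below merely stands in for the full SCHWAEMM circuit the paper implements and is an assumption for illustration, not the CDKM adder.

```python
from projectq import MainEngine
from projectq.backends import ResourceCounter
from projectq.ops import CNOT, Toffoli, All, Measure

counter = ResourceCounter()
eng = MainEngine(backend=counter)

a = eng.allocate_qureg(2)   # 2-bit operand a
b = eng.allocate_qureg(2)   # 2-bit operand b (receives a + b mod 4)

# Minimal ripple-style add of a into b (illustrative only).
Toffoli | (a[0], b[0], b[1])   # carry from the low bits
CNOT | (a[0], b[0])
CNOT | (a[1], b[1])

All(Measure) | a + b
eng.flush()

print(counter)   # gate counts by type and maximum circuit width
```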

A Security SoC embedded with ECDSA Hardware Accelerator (ECDSA 하드웨어 가속기가 내장된 보안 SoC)

  • Jeong, Young-Su;Kim, Min-Ju;Shin, Kyung-Wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.7 / pp.1071-1077 / 2022
  • A security SoC that can be used to implement elliptic curve cryptography (ECC) based public-key infrastructures was designed. The security SoC has an architecture in which a hardware accelerator for the elliptic curve digital signature algorithm (ECDSA) is interfaced with a Cortex-A53 CPU over the AXI4-Lite bus. The ECDSA hardware accelerator, which consists of a high-performance ECC processor, a SHA3 hash core, a true random number generator (TRNG), a modular multiplier, BRAM, and a control FSM, was designed to perform ECDSA signature generation and verification at high speed with minimal CPU intervention. The security SoC was implemented on a Zynq UltraScale+ MPSoC device for hardware-software co-verification, and it achieved about 1,000 ECDSA signature generations or verifications per second at a clock frequency of 150 MHz. The ECDSA hardware accelerator used 74,630 LUTs, 23,356 flip-flops, 32 kb of BRAM, and 36 DSP blocks.
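
A back-of-the-envelope check of the reported throughput, using only the figures quoted in the abstract: at 150 MHz, roughly 1,000 ECDSA operations per second corresponds to about 150,000 clock cycles per signature generation or verification.

```python
clock_hz = 150_000_000       # 150 MHz clock frequency
ops_per_second = 1_000       # reported ECDSA operations per second
cycles_per_op = clock_hz / ops_per_second
print(f"~{cycles_per_op:,.0f} cycles per ECDSA operation")  # ~150,000
```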

Design and Implementation of User-Level FileSystem in the Combat Management System

  • Kang, Seok-Hyun;Kim, Keun-Hee
    • Journal of the Korea Society of Computer and Information / v.27 no.12 / pp.9-16 / 2022
  • In this paper, we propose a design for the RDBS (Record Block Data file management System) and a plan for its use so that data can be recovered when data files in the Combat Management System become mismatched. The CMS (Combat Management System) keeps the same files in multiple IPN (Information Processing Node) repositories to support multiplexing; however, mismatches between data files can occur due to equipment maintenance or operator mistakes. The existing CMS does not manage the change history of data files, and when a mismatch occurs, the data files are synchronized based on the latest date. However, the file with the latest date is not necessarily the most reliable, and once synchronization has been performed, the pre-synchronization data cannot be recovered. To solve this problem, data is stored and synchronized in units of record blocks using the proposed RDBS, and the rsync algorithm is used to reduce the file-synchronization overhead introduced by record-block units (a block-comparison sketch follows below). Software incorporating RDBS was performance-tested in a simulated environment, and its normal operation confirmed that it can be applied to the CMS.
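
A sketch of rsync-style block comparison applied to record blocks: only blocks whose checksums differ are transferred between IPN repositories. The block size, the simple weak checksum, and the use of MD5 as the strong hash are assumptions; real rsync additionally uses a rolling window so block boundaries can shift.

```python
import hashlib

BLOCK_SIZE = 4096

def block_signatures(data):
    """Return (weak, strong) checksums for each fixed-size record block."""
    sigs = []
    for off in range(0, len(data), BLOCK_SIZE):
        block = data[off:off + BLOCK_SIZE]
        weak = sum(block) & 0xFFFF               # cheap first-pass checksum
        strong = hashlib.md5(block).hexdigest()  # confirms a weak match
        sigs.append((weak, strong))
    return sigs

def blocks_to_sync(local, remote_sigs):
    """Indices of local record blocks that differ from the remote copy."""
    local_sigs = block_signatures(local)
    return [i for i, sig in enumerate(local_sigs)
            if i >= len(remote_sigs) or sig != remote_sigs[i]]

remote = b"A" * 8192
local = b"A" * 4096 + b"B" * 4096        # second record block changed
print(blocks_to_sync(local, block_signatures(remote)))   # [1]
```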

Method of Biological Information Analysis Based-on Object Contextual (대상객체 맥락 기반 생체정보 분석방법)

  • Kim, Kyung-jun;Kim, Ju-yeon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.41-43 / 2022
  • Non-contact technologies for acquiring and analyzing biometric information are attracting attention as a way to prevent and block infectious diseases such as those caused by the recent COVID-19 pandemic. Invasive and contact-based acquisition methods have the advantage of measuring biometric information accurately, but close contact carries a risk of spreading contagious diseases. To address this, non-contact methods that extract biometric information such as fingerprints, faces, irises, veins, voices, and signatures with automated devices are spreading across various industries as data-processing speed and recognition accuracy increase. However, even though the accuracy of non-contact acquisition has improved, non-contact measurements are strongly influenced by the surrounding environment of the measured subject, which distorts the measured information and degrades accuracy. In this paper, we propose a context-based bio-signal modeling technique for interpreting personalized information (images, signals, etc.) in biometric analysis. The technique presents a model that considers contextual and user information during biometric measurement in order to improve performance; the proposed model analyzes signal information based on feature probability distributions through context-based signal analysis that maximizes the probability of the predicted value (a simple illustration follows below).
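
An illustrative Bayes-style sketch of weighting a biometric measurement by its measurement context, in the spirit of the feature-probability-distribution analysis the abstract describes; this is not the authors' model, and the priors, likelihoods, and context labels are assumptions.

```python
def posterior(prior, likelihood_given_true, likelihood_given_false):
    """P(match | score) given the context-dependent score likelihoods."""
    evidence = (likelihood_given_true * prior
                + likelihood_given_false * (1.0 - prior))
    return likelihood_given_true * prior / evidence

# Same raw match score, two measurement contexts: a well-lit, static subject
# versus a backlit, moving subject (hypothetical likelihood values).
good_context = posterior(prior=0.5, likelihood_given_true=0.90,
                         likelihood_given_false=0.05)
poor_context = posterior(prior=0.5, likelihood_given_true=0.60,
                         likelihood_given_false=0.30)
print(round(good_context, 3), round(poor_context, 3))   # 0.947 0.667
```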


Application Development for Text Mining: KoALA (텍스트 마이닝 통합 애플리케이션 개발: KoALA)

  • Byeong-Jin Jeon;Yoon-Jin Choi;Hee-Woong Kim
    • Information Systems Review / v.21 no.2 / pp.117-137 / 2019
  • In the big data era, data science has become popular as enormous amounts of data are produced in various domains, and the power of data has become a source of competitiveness. Interest is growing in unstructured data, which accounts for more than 80% of the world's data. With the everyday use of social media, most unstructured data takes the form of text and plays an important role in areas such as marketing, finance, and distribution. However, text mining on social media is harder to access and use than data mining on numerical data. This study therefore develops the Korean Natural Language Application (KoALA), an integrated application for easy, hands-on social media text mining that does not depend on a programming language, high-end hardware, or a commercial solution. KoALA is specialized for social media text mining, can analyze both Korean and English, and handles the entire process from data collection to preprocessing, analysis, and visualization (a minimal pipeline sketch follows below). This paper describes the process of designing, implementing, and applying KoALA using the design-science methodology. Lastly, we discuss the practical use of KoALA through a blockchain business case. Through this paper, we hope to popularize social media text mining and support its practical and academic use in various domains.
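
A minimal sketch of the collect → preprocess → analyze flow that an integrated text-mining tool such as KoALA automates. This is not KoALA's code: the stop-word list and tokenization are simplified assumptions, and Korean input would additionally require morphological analysis.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "is", "of", "and", "to", "in", "on", "for"}

def preprocess(doc):
    # Keep alphabetic and Hangul tokens, drop stop words and single letters.
    tokens = re.findall(r"[a-zA-Z가-힣]+", doc.lower())
    return [t for t in tokens if t not in STOPWORDS and len(t) > 1]

def keyword_frequencies(docs, top_n=5):
    counts = Counter()
    for doc in docs:
        counts.update(preprocess(doc))
    return counts.most_common(top_n)

posts = ["Blockchain startups raise funding for data sharing",
         "Data sharing on the blockchain is a marketing topic"]
print(keyword_frequencies(posts))   # e.g. [('blockchain', 2), ('data', 2), ...]
```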

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.141-154 / 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representation in a variety of applications. Sentiment analysis, which can distinguish poor from high-quality content through the text data about products, is an important technology that has proliferated within text mining. It mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative, and it has been studied from simple rule-based approaches to dictionary-based approaches with predefined labels, with various attempts to improve accuracy. Sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Real online reviews are openly available and easy to collect, and they affect business: in marketing, real information from customers is gathered from websites rather than surveys, and whether posts are positive or negative is reflected in sales, so companies try to identify this information. However, many reviews on a website are not well written and are difficult to classify. Earlier studies in this area used review data from the Amazon.com shopping mall, while recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, and so on. Accuracy remains a problem because sentiment calculations change with the topic, the paragraph, the direction of the sentiment lexicon, and sentence strength. This study classifies the polarity of sentiment into positive and negative categories and aims to increase the prediction accuracy of polarity analysis using the IMDB review data set. First, for text classification algorithms related to sentiment analysis, popular machine learning algorithms such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting are adopted as comparative models. Second, deep learning can extract complex, discriminative features from data; representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). CNN can process a sentence in vector form similarly to a bag-of-words model but does not consider the sequential nature of the data; RNN handles order well because it takes the temporal information of the data into account, but it suffers from long-term dependency problems, which LSTM is used to solve. For comparison, CNN and LSTM were chosen as simple deep learning models, and classical machine learning algorithms, CNN, LSTM, and the integrated model were all analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and accuracy to find the optimal combination and to understand how the models behave for sentiment analysis. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features of text. The reasons for combining the two algorithms are as follows: CNN can extract classification features automatically through convolution layers and massively parallel processing, whereas LSTM is not capable of highly parallel processing. Like faucets, the LSTM's input, output, and forget gates can be opened and closed at the desired time, and these gates have the advantage of placing memory blocks on hidden nodes. The LSTM memory blocks may not store all the data, but they capture the long-range dependencies that CNN alone cannot. Furthermore, when the LSTM follows CNN's pooling layer, the model has an end-to-end structure in which spatial and temporal features are learned simultaneously. The combined CNN-LSTM achieved 90.33% accuracy; it is slower than CNN but faster than LSTM, and it was more accurate than the other models. In addition, the word embedding layer improves as the kernels are trained step by step. CNN-LSTM compensates for the weaknesses of each individual model and improves learning layer by layer through the end-to-end structure with LSTM. For these reasons, this study enhances the classification accuracy of movie reviews using the integrated CNN-LSTM model (a sketch of such a model follows below).
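
A sketch of the CNN-into-LSTM combination described above, using Keras and the IMDB review dataset. The hyperparameters (vocabulary size, sequence length, filter counts, epochs) are illustrative choices, not the paper's settings.

```python
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Conv1D, MaxPooling1D, LSTM, Dense

VOCAB, MAXLEN = 10_000, 200
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=VOCAB)
x_train = pad_sequences(x_train, maxlen=MAXLEN)
x_test = pad_sequences(x_test, maxlen=MAXLEN)

model = Sequential([
    Embedding(VOCAB, 128),               # word embeddings
    Conv1D(64, 5, activation="relu"),    # local n-gram features
    MaxPooling1D(4),                     # CNN pooling layer feeds the LSTM
    LSTM(64),                            # sequential / long-range context
    Dense(1, activation="sigmoid"),      # positive vs. negative
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=128, epochs=2,
          validation_data=(x_test, y_test))
```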