• Title/Summary/Keyword: software process

Search Results: 4,749

PVD Image Steganography with Locally-fixed Number of Embedding Bits (지역적 삽입 비트를 고정시킨 PVD 영상 스테가노그래피)

  • Kim, Jaeyoung;Park, Hanhoon;Park, Jong-Il
    • Journal of Broadcast Engineering / v.22 no.3 / pp.350-365 / 2017
  • Steganography is a technique for secret data communication in which a third party between the transmitter and the receiver does not perceive that communication is taking place. It has been developed over thousands of years for the transmission of military, diplomatic, and business information, and the growth of digital media and communication has driven modern steganography techniques. Image steganography techniques include LSB, which fixes the number of bits embedded in each pixel, and PVD, which exploits the difference value of neighboring pixel pairs. In PVD image steganography, a variable amount of information is embedded according to the difference value of neighboring pixel pairs and a designed range table. However, since the secret information is embedded sequentially, an error in the number of embedded bits at a certain pixel pair destroys all subsequent information. In this paper, we propose a method that mitigates this vulnerability of PVD to external attacks or various kinds of noise so that the secret information can still be extracted. The experiments compare stego-images to which various kinds of noise were added. Conventional PVD could not preserve the secret information at all under noise, whereas the proposed PVD image steganography with a locally-fixed number of embedding bits could robustly extract the secret information from stego-images with partial noise.
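
The PVD embedding rule the abstract refers to can be pictured with a small sketch: the difference of a neighboring pixel pair selects a range whose width determines how many secret bits that pair carries. The range table and the pixel-adjustment rule below are illustrative assumptions, not the exact design evaluated in the paper.

```python
# Minimal illustrative sketch of PVD (pixel-value differencing) embedding.
# The range table and the pixel adjustment rule are simplified assumptions,
# not the exact scheme used in the paper.
RANGES = [(0, 7), (8, 15), (16, 31), (32, 63), (64, 127), (128, 255)]

def embed_pair(p1, p2, bitstream):
    """Embed bits into one neighboring pixel pair and return the new pair."""
    d = abs(p2 - p1)
    lo, hi = next(r for r in RANGES if r[0] <= d <= r[1])
    n_bits = (hi - lo + 1).bit_length() - 1        # capacity of this pair
    bits = bitstream[:n_bits]
    new_d = lo + int(bits or "0", 2)               # new difference encodes the bits
    shift = new_d - d
    # Split the adjustment onto the larger pixel (simplified rule).
    if p2 >= p1:
        p2 += shift
    else:
        p1 += shift
    return p1, p2, bitstream[n_bits:]

p1, p2, rest = embed_pair(100, 120, "101101")
print(p1, p2, rest)   # the pair now carries the first few secret bits
```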

Bio-Sensing Convergence Big Data Computing Architecture (바이오센싱 융합 빅데이터 컴퓨팅 아키텍처)

  • Ko, Myung-Sook;Lee, Tae-Gyu
    • KIPS Transactions on Software and Data Engineering / v.7 no.2 / pp.43-50 / 2018
  • Biometric information computing greatly influences both computing systems and big-data systems built on bio-information systems that combine bio-signal sensors with bio-information processing. Unlike conventional single-format data such as text, images, or video, biometric information combines several formats: text-based values that give meaning to a bio-signal, images that capture important event moments, and complex formats such as video constructed for prediction and analysis through time-series analysis. Such a complex data structure may be requested separately as text, image, or video, depending on the characteristics of the data required by an individual biometric information application service, or as several formats simultaneously depending on the situation. Because previous bio-information processing computing systems depend on conventional computing components, computing structures, and data processing methods, they suffer many inefficiencies in data processing performance, transmission capability, storage efficiency, and system safety. In this study, we propose an improved bio-sensing converged big data computing architecture as a platform that effectively supports biometric information processing. The proposed architecture supports data storage and transmission efficiency, computing performance, and system stability, and lays a foundation for system implementation and service optimization for future biometric information computing.
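
Purely as an illustration of the mixed text/image/video requests the abstract describes, a record type could expose individual formats or several at once. The class and field names below are assumptions for the sketch, not the architecture proposed in the paper.

```python
# Illustrative sketch only: a container for mixed-format biometric records,
# so a service can request text, image, or video views separately or together.
# Class and field names are assumptions, not the paper's design.
from dataclasses import dataclass
from typing import Optional

@dataclass
class BioRecord:
    subject_id: str
    signal_text: Optional[str] = None         # text-based value derived from a bio-signal
    event_image: Optional[bytes] = None       # snapshot of an important event moment
    timeseries_video: Optional[bytes] = None  # clip used for time-series analysis

    def view(self, *formats: str) -> dict:
        """Return only the requested formats ('text', 'image', 'video')."""
        mapping = {"text": self.signal_text,
                   "image": self.event_image,
                   "video": self.timeseries_video}
        return {f: mapping[f] for f in formats if mapping.get(f) is not None}

rec = BioRecord("patient-001", signal_text="HR=72bpm")
print(rec.view("text", "video"))   # -> {'text': 'HR=72bpm'}
```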

Design and Implementation of a Query Processing Algorithm for Distributed Semistructured Documents Retrieval with Metadata Interface (메타데이타 인터페이스를 이용한 분산된 반구조적 문서 검색을 위한 질의처리 알고리즘 설계 및 구현)

  • Choe Cuija;Nam Young-Kwang
    • Journal of KIISE:Software and Applications / v.32 no.6 / pp.554-569 / 2005
  • In distributed semistructured documents, it is very difficult to formalize and implement a query processing system because the data lack structure and rules. To precisely retrieve and process heterogeneous semistructured documents, it is necessary to handle multiple mappings such as 1:1, 1:N, and N:1 on an element simultaneously and to generate the schema from the distributed documents. In this paper, we propose a query processing algorithm for querying and answering over heterogeneous semistructured data or documents in distributed systems and implement it with a metadata interface. The algorithm for generating local queries from a global query consists of mapping between global and local nodes, data transformation according to the mapping types, path substitution, and resolving heterogeneity among nodes of the global input query using metadata information. The mapping, transformation, and path substitution algorithms between the global schema and the local schemas have been implemented with a metadata interface called DBXMI (Distributed Documents XML Metadata Interface). Nodes with the same name but different mappings or meanings are resolved by automatically extracting node identification information from the local schema. The system uses Quilt as its XML query language. An experiment is reported over three different OEM-model semistructured restaurant documents. The prototype system was developed on Windows with Java and the JavaCC compiler.
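
The mapping and path-substitution step can be pictured with a small sketch: a global query path is rewritten into each source's local path using per-source mapping metadata. The paths and the mapping table below are hypothetical examples, not the DBXMI interface itself.

```python
# Illustrative sketch of rewriting a global query path into local paths
# using per-source mapping metadata. Paths and the mapping table are
# hypothetical examples, not the DBXMI metadata interface from the paper.
MAPPINGS = {
    "siteA": {"/restaurant/name": "/eatery/title",            # 1:1 mapping
              "/restaurant/address": "/eatery/location/full"},
    "siteB": {"/restaurant/name": "/shop/info/name",
              # 1:N mapping: one global element maps to two local elements
              "/restaurant/address": ["/shop/addr/street", "/shop/addr/city"]},
}

def to_local_queries(global_path):
    """Generate one local query path (or list of paths) per source."""
    local = {}
    for site, table in MAPPINGS.items():
        target = table.get(global_path)
        if target is not None:
            local[site] = target
    return local

print(to_local_queries("/restaurant/address"))
# {'siteA': '/eatery/location/full', 'siteB': ['/shop/addr/street', '/shop/addr/city']}
```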

3D Visualization of Satellite Remote-Sensing Images Using an Array DBMS (Array DBMS을 이용한 위성원격탐사 영상의 3차원 시각화)

  • Choi, Jong Hyeok;Lee, Jong Yun
    • Journal of Digital Convergence / v.13 no.2 / pp.193-204 / 2015
  • Array DBMSs have been widely anticipated by scientists because they are convenient for storing and analyzing array-type data. In this paper, we describe how to handle satellite remote-sensing images in an array DBMS. Previous work on their visualization has two problems. First, the images are visualized in a state distorted by the curvature of the Earth. Second, it is difficult to apply the visualization results produced by pre-written queries to other analyses. Therefore, this paper proposes a three-dimensional visualization method for satellite remote-sensing images instead of traditional 2D visualization. Our research contents are as follows. First, we describe how to store, process, and analyze satellite remote-sensing images in the array DBMS. Second, we propose a three-dimensional visualization method for the processed outputs. Our contributions can be summarized as a method of handling satellite remote-sensing images in an array DBMS together with techniques for their 3D visualization. We also expect these techniques to be widely applicable in many industrial areas.
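
The idea of undoing the flat-map distortion by draping pixel values over the Earth's curvature can be sketched generically: each pixel's latitude/longitude is converted to 3D coordinates on a sphere before rendering. The scene bounds and band values below are synthetic assumptions; this is not the paper's array-DBMS query pipeline.

```python
# Minimal sketch: map a lat/lon-gridded remote-sensing band onto 3D sphere
# coordinates so it can be rendered without flat-map distortion.
# Grid bounds and band values are synthetic placeholders.
import numpy as np

R = 6371.0  # Earth radius in km
lats = np.linspace(33.0, 38.0, 100)      # hypothetical scene bounds
lons = np.linspace(126.0, 130.0, 120)
lon_g, lat_g = np.meshgrid(np.radians(lons), np.radians(lats))
band = np.random.rand(*lat_g.shape)      # stand-in for a sensor band

# Spherical -> Cartesian conversion for each pixel centre.
x = R * np.cos(lat_g) * np.cos(lon_g)
y = R * np.cos(lat_g) * np.sin(lon_g)
z = R * np.sin(lat_g)

# (x, y, z, band) can now be fed to any 3D renderer as a coloured surface.
print(x.shape, y.shape, z.shape, band.shape)
```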

Framework for Technology Valuation of Early Stage Technologies (초기단계 기술의 가치평가 방법론 적용 프레임워크)

  • Park, Hyun-Woo;Lee, Jong-Taik
    • Journal of Korea Technology Innovation Society / v.15 no.2 / pp.242-261 / 2012
  • The valuation of early-stage technologies has often been overlooked or under-represented. Early-stage technologies are even riskier because of their inadequate commercial development and market applicability. More than 95% of patents fail to earn any revenue, so the majority of patents are valueless. Technology transfers from laboratories at universities and research institutes to industrial firms have increased in order to acquire value from invented technologies. Technology transfer, the process of transferring discoveries and innovations resulting from research to commercial sectors, typically comprises several steps: disclosing the discoveries and innovations, i.e., intellectual property (IP); evaluating the IP's economic prospects; securing a patent, copyright, or trademark for the IP; and commercializing the technology through licensing, forming a joint venture, or selling. At each of these stages in the research and development of a technology, the value of the technology plays a very important role in the decision to move to the next step; however, the financial value of a technology is not easy to determine because of the great amount of uncertainty in the course of research, development, and commercialization. This paper refers to technology embodied as devices, equipment, software, or processes primarily developed at public research institutions such as universities, and sometimes resulting from externally financed projects contracted with industry. Technology developed at public research entities nearly always results in laboratory prototypes. When the technology transfer contract terms for licensing the university's patrimonial rights to external funding companies or other interested parties must be defined, a question arises: what is the monetary value? In this paper, we present a method for technology valuation based on the identification of specific value points related to its development, where the final technology value must fall within previously defined value limits. This paper consists of a review of issues related to technology transfer and commercialization, the identification of characteristics of technologies in the early stage of development, the formulation of a framework of methods to value early-stage technologies, and conclusions and implications drawn from the review.

Machine-Learning Based Biomedical Term Recognition (기계학습에 기반한 생의학분야 전문용어의 자동인식)

  • Oh Jong-Hoon;Choi Key-Sun
    • Journal of KIISE:Software and Applications / v.33 no.8 / pp.718-729 / 2006
  • There has been increasing interest in automatic term recognition (ATR), which recognizes technical terms in domain-specific texts. ATR is composed of 'term extraction', which extracts candidate technical terms, and 'term selection', which decides whether the terms in the list produced by 'term extraction' are technical terms or not. 'Term selection' ranks the term list according to features of technical terms and finds the boundary between technical terms and general terms. Previous works use only statistical features of terms for 'term selection'; however, statistical features alone are of limited effectiveness for selecting technical terms from a term list. The objective of this paper is to find effective features for 'term selection' by considering various aspects of technical terms. To solve the ranking problem, we derive various features of technical terms and combine them using machine-learning algorithms. For the boundary-finding problem, we define it as a binary classification problem that classifies each term in the list as a technical term or a general term. Experiments show that our method records 78-86% precision and 87-90% recall in boundary finding, and 89-92% 11-point precision in ranking. Moreover, our method outperforms the previous work by up to about 26%.
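
The 'term selection' step described above, combining several features of a candidate term and classifying it as technical or general while also producing a ranking score, can be sketched with scikit-learn. The features and the toy data below are placeholders, not the feature set or learner used in the paper.

```python
# Toy sketch of 'term selection' as binary classification plus ranking.
# Features (term freq, doc freq, token length, unithood) and the training
# data are placeholders, not the paper's feature set.
import numpy as np
from sklearn.linear_model import LogisticRegression

# rows: candidate terms, columns: [term freq, doc freq, token length, unithood]
X_train = np.array([[30, 12, 2, 0.9],
                    [500, 200, 1, 0.1],
                    [12, 6, 3, 0.8],
                    [800, 350, 1, 0.05]])
y_train = np.array([1, 0, 1, 0])   # 1 = technical term, 0 = general term

clf = LogisticRegression().fit(X_train, y_train)

candidates = ["protein kinase", "result", "gene expression"]
X_new = np.array([[25, 10, 2, 0.85],
                  [600, 300, 1, 0.1],
                  [40, 15, 2, 0.9]])
scores = clf.predict_proba(X_new)[:, 1]          # ranking scores
labels = clf.predict(X_new)                      # boundary decision
for term, s, l in sorted(zip(candidates, scores, labels), key=lambda t: -t[1]):
    print(f"{term:20s} score={s:.2f} technical={bool(l)}")
```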

The Design of Manufacturing Simulation Modeling Based on Digital Twin Concept (Digital Twin 개념을 적용한 제조환경 시뮬레이션 모형 설계)

  • Hwang, Sung-Bum;Jeong, Suk-Jae;Yoon, Sung-Wook
    • Journal of the Korea Society for Simulation / v.29 no.2 / pp.11-20 / 2020
  • As the manufacturing environment becomes more complex, traditional simulation models alone have great difficulty reflecting real-time manufacturing situations. Although the Digital Twin concept is actively discussed as an alternative for overcoming these issues, many studies address only the product design phase. This research presents a Digital Twin-based manufacturing environment framework for applying the Digital Twin concept to the manufacturing process. A twin model operated in virtual space, together with the physical system and the databases describing the actual manufacturing environment, is proposed as the detailed components that make up the framework. To check the applicability of the proposed framework, a simple Digital Twin-based manufacturing system for a conveyor line was simulated using Arena software and Excel VBA. The experimental results show that the twin model receives real-time data from the physical system via the DB and operates in the same time unit. Excel VBA fits the cycle-time parameter from the historical data, while real-time and training data are accumulated together. This study demonstrates an operating method for a Digital Twin model through simple experimental examples, and the results point to the applicability of the Digital Twin model.
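
The coupling described above, where the twin model periodically pulls records from a shared database and refits its cycle-time parameter, can be sketched in Python. The table and column names and the simple mean-based fit are assumptions; the paper implements this coupling with Arena and Excel VBA.

```python
# Minimal sketch of a twin model polling a shared DB for real-time records
# and refitting its cycle-time parameter. Table/column names and the
# mean-based fit are assumptions, not the Arena + VBA implementation.
import sqlite3, statistics, time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE conveyor_log (item_id INTEGER, cycle_time REAL)")
conn.executemany("INSERT INTO conveyor_log VALUES (?, ?)",
                 [(1, 12.1), (2, 11.8), (3, 12.4)])  # stand-in for a sensor feed

cycle_time = 12.0   # twin model's current parameter

def sync_twin():
    """Pull the latest records and refit the twin's cycle-time parameter."""
    global cycle_time
    rows = conn.execute("SELECT cycle_time FROM conveyor_log").fetchall()
    if rows:
        cycle_time = statistics.mean(r[0] for r in rows)
    return cycle_time

for _ in range(2):          # in practice this would run on a timer
    print("twin cycle time:", sync_twin())
    time.sleep(0.1)
```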

An Investigation of Social Commerce Service Quality on Consumer's Satisfaction (소셜커머스의 서비스품질과 소비자 만족도의 상관관계 분석)

  • Shin, Seung-Soo;Shin, Miyea;Jeong, Yoon-Su;Lee, Jihea
    • Journal of Convergence Society for SMB / v.5 no.2 / pp.27-32 / 2015
  • Recently, service-related products have gained more attention than general products on existing social commerce sites. Against this background, this study analyzes the effect of social commerce service quality on customer satisfaction, targeting consumers who have purchased social commerce products, and measures how much service quality affects satisfaction after the purchase. In the case of social commerce, it is well known that the diversity and convenience of products have a significant effect on customer satisfaction. Social commerce has now spread to more than 900 sites, appears constantly in news stories and in the real-time search rankings of popular portal sites, and has quickly penetrated our everyday lives. Yet the perception of social commerce does not seem to be properly established, because the new concept spread suddenly without a collective process of interpretation and acceptance. Most companies simply imitate social commerce on a large scale and do not depart from conventional forms of social shopping. We believe that social commerce should allow individuals to exhibit their skills and talents and emphasize its social character, rather than being shaped unilaterally by companies.

An Encryption Technique of JPEG2000 Image Using 3-Dimensional Chaotic Cat Map (3차원 카오스 캣맵을 이용한 JPEG2000 영상의 암호화 기술)

  • Choi, Hyun-Jun;Kim, Soo-Min;Seo, Young-Ho;Kim, Dong-Wook
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.5 s.305 / pp.173-180 / 2005
  • In this paper, we propose an image encryption method that reduces the amount of computation by encrypting only part of the data, using the discrete wavelet transform (DWT) and linear scalar quantization adopted as the main frequency-transform techniques in the JPEG2000 standard. We also use a chaotic system and the cat map, which require less computation than other encryption algorithms, thereby dramatically reducing the computational load. The method performs encryption between quantization and entropy coding in order to preserve the compression ratio of the images, and it uses a subband selection method. We also apply the proposed encryption to JPEG2000 progressive transmission. The proposed methods were implemented in software and tested on about 500 images. The results show that the proposed methods are efficient, achieving a strong encryption effect while encrypting only a small amount of data. There is a trade-off between execution time and encryption effect, which means the proposed methods can be used selectively according to the application area.
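
The cat-map scrambling mentioned above can be illustrated with the classical 2D Arnold cat map applied to a square block of coefficients. This is a simplified 2D stand-in for the paper's 3D cat map and sits outside any real JPEG2000 codec.

```python
# Simplified illustration: 2D Arnold cat map permutation of a square block
# of coefficients. The paper applies a 3D cat map inside the JPEG2000
# pipeline (between quantization and entropy coding); this 2D version only
# shows the scrambling idea.
import numpy as np

def cat_map(block, iterations=1):
    """Permute an N x N block with (x, y) -> (x + y, x + 2y) mod N."""
    n = block.shape[0]
    out = block
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

block = np.arange(64).reshape(8, 8)     # stand-in for quantized coefficients
scrambled = cat_map(block, iterations=3)
# The map is invertible, so the receiver can recover the original block.
print(scrambled[0, :])
```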

Integrated Parallelization of Video Decoding on Multi-core Systems (멀티코어 시스템에서의 통합된 비디오 디코딩 병렬화)

  • Hong, Jung-Hyun;Kim, Won-Jin;Chung, Ki-Seok
    • Journal of the Institute of Electronics Engineers of Korea SD / v.49 no.7 / pp.39-49 / 2012
  • Demand for high-resolution video services has led to active studies on high-speed video processing. In particular, the widespread deployment of multi-core systems has accelerated research on high-resolution video processing based on the parallelization of multimedia software. Previously proposed parallelization approaches could improve decoding performance; however, some did not consider entropy decoding and others parallelized only part of the decoding process. Therefore, we consider parallel entropy decoding integrated with the other parallel video decoding stages on a multi-core system. We propose a novel parallel decoding method called Integrated Parallelization (IP) and describe how to optimize the parallelization of video decoding on a multi-core system with many cores. We parallelized the KTA 2.7 decoder with the proposed technique on an Intel i7 quad-core platform with Intel Hyper-Threading technology and multi-thread scheduling, and achieved up to 70% performance improvement using the IP method.
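
The integration idea, overlapping entropy decoding with the rest of the per-frame decoding work across cores, can be sketched as a simple two-stage pipeline. The stage functions below are placeholders, not KTA 2.7 decoder code.

```python
# Toy two-stage decoding pipeline: entropy decoding of later frames runs in
# a worker pool while reconstruction of earlier frames proceeds, so both
# stages keep cores busy. Stage functions are placeholders, not KTA 2.7 code.
from concurrent.futures import ThreadPoolExecutor

def entropy_decode(frame_id):
    # placeholder for parsing one frame's bitstream (e.g. CABAC/CAVLC)
    return {"frame": frame_id, "symbols": list(range(4))}

def reconstruct(parsed):
    # placeholder for inverse transform, prediction and deblocking
    return f"frame {parsed['frame']} reconstructed from {len(parsed['symbols'])} symbols"

frames = range(6)
with ThreadPoolExecutor(max_workers=4) as pool:
    parsed_futures = [pool.submit(entropy_decode, f) for f in frames]   # stage 1
    for fut in parsed_futures:
        print(reconstruct(fut.result()))                                # stage 2
```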