• Title/Summary/Keyword: neural network.


Detection of Colluded Multimedia Fingerprint by Neural Network (신경회로망에 의한 공모된 멀티미디어 핑거프린트의 검출)

  • Noh Jin-Soo;Rhee Kang-Hyeon
    • Journal of the Institute of Electronics Engineers of Korea CI / v.43 no.4 s.310 / pp.80-87 / 2006
  • Recently, the distribution and use of digital multimedia content have become easy owing to the development of Internet applications and related technologies. However, a digital signal is easily duplicated, and the duplicates have the same quality as the original. Multimedia fingerprinting has been studied to solve this problem and protect copyright. Fingerprinting is a technique that supports copyright protection by tracking redistributors of electronic information using cryptographic techniques. Unlike symmetric/asymmetric schemes, only the legitimate user can know the fingerprint inserted by a fingerprinting scheme, and the scheme guarantees anonymity until the data is redistributed. In this paper, we present a new scheme for detecting colluded multimedia fingerprints with a neural network. The proposed scheme consists of anti-collusion code generation and a neural network for error correction. The anti-collusion code, based on a BIBD (Balanced Incomplete Block Design), achieved a 100% detection rate of collusion codes under the average linear collusion attack, and a Hopfield neural network designed with an (n,k) code was confirmed to correct errors of up to 2 bits.
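
As a rough illustration of the error-correction step described above, the sketch below stores bipolar codewords in a Hopfield network and recovers a code corrupted by two bit flips. It is a minimal, self-contained example, not the authors' BIBD-based implementation, and the 8-bit codewords are hypothetical.

```python
# Minimal sketch (not the authors' implementation) of Hopfield-style
# error correction: codewords are stored as bipolar patterns and a
# corrupted fingerprint code is driven back to the nearest stored word.
import numpy as np

def train_hopfield(patterns):
    """Hebbian weight matrix for a set of bipolar (+1/-1) patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)               # no self-connections
    return W / patterns.shape[0]

def recall(W, x, steps=20):
    """Asynchronous updates until the state settles (or steps run out)."""
    x = x.copy()
    for _ in range(steps):
        prev = x.copy()
        for i in np.random.permutation(len(x)):
            x[i] = 1 if W[i] @ x >= 0 else -1
        if np.array_equal(x, prev):
            break
    return x

# toy example: store two hypothetical codewords, flip 2 bits, recover
codes = np.array([[1, -1, 1, -1, 1, 1, -1, -1],
                  [-1, 1, -1, 1, -1, -1, 1, 1]])
W = train_hopfield(codes)
noisy = codes[0].copy()
noisy[[1, 5]] *= -1                       # simulate 2 bit errors
print(recall(W, noisy))                   # expected to match codes[0]
```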

Development of Sasang Type Diagnostic Test with Neural Network (신경망을 사용한 사상체질 진단검사 개발 연구)

  • Chae, Han;Hwang, Sang-Moon;Eom, Il-Kyu;Kim, Byoung-Chul;Kim, Young-In;Kim, Byung-Joo;Kwon, Young-Kyu
    • Journal of Physiology & Pathology in Korean Medicine / v.23 no.4 / pp.765-771 / 2009
  • Medical informatics for clustering Sasang types from collected clinical data is important for personalized medicine, but it has not been thoroughly studied yet. The purpose of this study was to examine the usefulness of a neural network data-mining algorithm for traditional Korean medicine. We used a Kohonen neural network, the Self-Organizing Map (SOM), to analyze the biomedical information after data pre-processing, and calculated validity indices as the percentage correctly predicted and the type-specific sensitivity. We extracted 12 data fields from 30 through pre-processing with correlation analysis and latent functional relationship analysis. The profile of the Myers-Briggs Type Indicator and Bio-Impedance Analysis data clustered with the SOM was similar to that of the original measurements. The percentage correctly predicted was 56%, and the sensitivities for the So-Yang, Tae-Eum, and So-Eum types were 56%, 48%, and 61%, respectively. This study showed that a neural network algorithm for clustering Sasang types based on clinical data is useful for the Sasang type diagnostic test itself. We discuss the importance of data pre-processing and the clustering algorithm for the validity of medical devices in traditional Korean medicine.
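
The clustering step named in the abstract is Kohonen's Self-Organizing Map. The sketch below is a minimal SOM in NumPy, assuming a hypothetical feature matrix standing in for the 12 pre-processed data fields; it is not the study's implementation.

```python
# Minimal Self-Organizing Map (Kohonen network) sketch.
import numpy as np

def train_som(X, grid=(3, 3), epochs=100, lr0=0.5, sigma0=1.0):
    rng = np.random.default_rng(0)
    rows, cols = grid
    nodes = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    W = rng.normal(size=(rows * cols, X.shape[1]))
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)                       # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)                 # shrinking neighborhood
        for x in X[rng.permutation(len(X))]:
            bmu = np.argmin(np.linalg.norm(W - x, axis=1))   # best-matching unit
            d = np.linalg.norm(nodes - nodes[bmu], axis=1)   # distance on the map grid
            h = np.exp(-(d ** 2) / (2 * sigma ** 2))         # neighborhood function
            W += lr * h[:, None] * (x - W)                   # pull weights toward sample
    return W

def assign(X, W):
    """Map each sample to the index of its best-matching unit (its cluster)."""
    return np.argmin(np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2), axis=1)

X = np.random.default_rng(1).normal(size=(60, 12))   # stand-in for the 12 data fields
W = train_som(X)
print(assign(X, W)[:10])
```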

Objective assessment of cleft lip nose deformity by neural network (구순열 비변형의 객관적 평가를 위한 Neural Network의 적용)

  • Park, Joong-Hoon;Kim, Jin-Tae;Hong, Hyun-Ki;Kim, Soo-Chan;Kim, Deok-Won
    • Proceedings of the KIEE Conference / 2006.04a / pp.45-47 / 2006
  • Cleft palate is a congenital deformity in which the two sides of the lip are separated, resulting in nose deformity. Evaluation of surgical corrections and outcome assessment for nose deformity due to cleft lip depend mainly on the doctor's subjective judgment, so an objective method for evaluating the condition and the surgical outcome is needed. This study aimed at objective assessment of cleft nose deformity by analyzing the following parameters obtained from photographic images of cleft palate patients: (1) the angle difference between the two nostril axes, (2) the center of each nostril and the distance between the two centers, (3) the overlapped area of the two nostrils, and (4) the overlapped area ratio of the two nostrils. A regression equation for the doctors' grades was obtained using the eight parameters. Three plastic surgeons graded each photographic image in increments of 10 with a maximum grade of 100. The average reproducibility of the grades given by the three plastic surgeons and by three laymen using the developed program was 10.8 ± 4.6% and 7.4 ± 1.8%, respectively. The kappa values representing the degree of consensus among the plastic surgeons and among the three laymen were 0.43 and 0.83, respectively. The correlation coefficient between the grades evaluated by the surgeons and those obtained by the neural network was 0.798. In conclusion, the developed neural network model provided better reproducibility and much better consensus than the doctors' subjective evaluation, in addition to objectivity and ease of application.

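A minimal sketch of the regression idea described above: a small neural network maps the eight geometric nostril parameters to a grade on a 0-100 scale. The feature values and grades are randomly generated stand-ins, and scikit-learn's MLPRegressor is used here only for illustration.

```python
# Hypothetical data: 8 nostril-shape parameters per image, surgeon grades 0-100.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
features = rng.uniform(size=(40, 8))          # 8 geometric parameters per image
grades = rng.integers(0, 11, size=40) * 10.0  # grades in steps of 10, max 100

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(features, grades)
print(model.predict(features[:3]))            # estimated grades for three images
```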

Automatic Color Transformation of Characters Between 2D Animation Scenes Using Neural Network (신경회로망을 이용한 2D 애니메이션 장면 간의 캐릭터 자동 색 변환)

  • Jung, Hyun-Sun;Lee, Jae-Sik;Kim, Jae-Ho
    • Journal of Korea Multimedia Society / v.11 no.9 / pp.1286-1295 / 2008
  • The colors of 2D animation characters are generally assigned according to the art director's subjective color sense, and even the same character should be colored differently according to the mood of each scene. In this study, we introduce a model for the automatic color transformation of characters using a neural network. It can not only automatically create character colors that match 2D animation scenes well, but also reproduce the art director's subjective color sense. Specifically, the neural network is first trained on the patterns of color change between the basic colors of characters and their colors in various scenes. Then, given the basic colors of a character, the trained network can derive the character's colors under other light-source environments. Subjective ratings by color experts, adopted to verify the proposed model, showed that the automatically transformed colors tended to be evaluated as natural.

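A minimal sketch of the color-transformation idea described above: a network learns a mapping from a character's base color plus a scene descriptor to the color used in that scene. The mean-scene-color descriptor and the toy training data are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
base_rgb = rng.uniform(size=(200, 3))         # base character colors
scene_desc = rng.uniform(size=(200, 3))       # e.g. average color of the scene (assumed descriptor)
target_rgb = np.clip(base_rgb * 0.6 + scene_desc * 0.4, 0, 1)  # toy "art-directed" colors

X = np.hstack([base_rgb, scene_desc])
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
model.fit(X, target_rgb)

# once trained, new base colors can be re-colored for another scene
print(model.predict(np.hstack([base_rgb[:2], scene_desc[:2]])))
```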

Development for Estimation Model of Runway Visual Range using Deep Neural Network (심층신경망을 활용한 활주로 가시거리 예측 모델 개발)

  • Ku, SungKwan;Hong, SeokMin
    • Journal of Advanced Navigation Technology / v.21 no.5 / pp.435-442 / 2017
  • The runway visual range, which is affected by fog and other conditions, is one of the important indicators for determining whether aircraft can take off and land at an airport. For airports where transport aircraft operate, major weather forecasts including the runway visual range for the local area are released and provided to aviation personnel. This paper proposes a runway visual range estimation model based on a deep neural network, a technique recently applied to various fields such as image processing, speech recognition, and natural language processing. The model is developed and implemented to estimate the runway visual range of a local airport, and the airfield's past weather observation data are used to train the neural network. It shows comparatively accurate estimation results when compared with the existing observation data. The proposed model can be used to generate weather information for airfields where no other forecasting function is available.
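
A minimal sketch of a feed-forward regression network for runway visual range, in the spirit of the abstract. The six input features and the layer sizes are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6)).astype("float32")               # past weather observations (assumed features)
y = rng.uniform(100, 2000, size=(500, 1)).astype("float32")   # runway visual range in metres

model = tf.keras.Sequential([
    tf.keras.Input(shape=(6,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),                                 # estimated visual range
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[:3], verbose=0))
```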

Facial Expression Classification Using Deep Convolutional Neural Network (깊은 Convolutional Neural Network를 이용한 얼굴표정 분류 기법)

  • Choi, In-kyu;Song, Hyok;Lee, Sangyong;Yoo, Jisang
    • Journal of Broadcast Engineering / v.22 no.2 / pp.162-172 / 2017
  • In this paper, we propose a facial expression recognition method using a CNN (Convolutional Neural Network), one of the deep learning technologies. To overcome the disadvantages of existing facial expression databases, several databases are combined. In the proposed technique, we construct data sets for six facial expressions: 'expressionless', 'happiness', 'sadness', 'anger', 'surprise', and 'disgust'. Pre-processing and data augmentation techniques are also applied to improve learning efficiency and classification performance. Starting from an existing CNN structure, the optimal structure that best expresses the features of the six facial expressions is found by adjusting the number of feature maps in the convolutional layers and the number of nodes in the fully-connected layers. Experimental results show that the proposed scheme achieves the highest classification performance of 96.88% while taking the least time to pass through the CNN structure compared with other models.
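
A minimal sketch of a small CNN for six expression classes, along the lines described above. The 48x48 grayscale input size and the layer widths are assumptions, not the optimized structure reported in the paper.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(48, 48, 1)),                        # grayscale face crop (assumed size)
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(6, activation="softmax"),           # six expression classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```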

The Parallel ANN(Artificial Neural Network) Simulator using Mobile Agent (이동 에이전트를 이용한 병렬 인공신경망 시뮬레이터)

  • Cho, Yong-Man;Kang, Tae-Won
    • The KIPS Transactions:PartB / v.13B no.6 s.109 / pp.615-624 / 2006
  • The objective of this paper is to implement a parallel multi-layer ANN (Artificial Neural Network) simulator based on a mobile agent system that runs in parallel in a virtual parallel distributed computing environment. The parallelization levels of a multi-layer neural network are classified into training session, training data, layer, node, and weight. In this study, we developed and evaluated a simulator that parallelizes the ANN at the training-session and training-data levels, because these levels generate relatively little network traffic. The results verify a parallelization speed-up of about 3.3 times for training-session and training-data parallelism. The significance of this paper is that the performance of ANN execution on the virtual parallel computer is similar to that on an existing supercomputer. Therefore, we believe the virtual parallel computer can be considerably helpful in developing neural networks because it reduces the required training time.
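
A rough sketch of the training-data parallelization idea only: each worker (a mobile agent in the paper) computes an update on its own data shard, and the updates are averaged. For brevity the workers are simulated sequentially with a single-layer model; the paper's agent system and multi-layer network are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) > 0).astype(float)

def gradient(w, Xs, ys):
    """Logistic-unit gradient on one data shard."""
    p = 1.0 / (1.0 + np.exp(-(Xs @ w)))
    return Xs.T @ (p - ys) / len(ys)

w = np.zeros(5)
shards = np.array_split(np.arange(len(X)), 4)                    # 4 "agents", each with a shard
for epoch in range(50):
    grads = [gradient(w, X[idx], y[idx]) for idx in shards]      # computed in parallel in the paper
    w -= 0.5 * np.mean(grads, axis=0)                            # average the workers' updates
print(w)
```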

Face Region Detection using a Color Union Model and the Levenberg-Marquardt Algorithm (색상 조합 모델과 LM(Levenberg-Marquadt)알고리즘을 이용한 얼굴 영역 검출)

  • Kim, Jin-Ok
    • The KIPS Transactions:PartB / v.14B no.4 / pp.255-262 / 2007
  • This paper proposes an enhanced skin-color-based detection method to find the region of a human face in color images. The proposed method combines three color spaces, RGB, YCbCr, and YIQ, and builds color union histograms of the luminance and chrominance components respectively. The combined color union histograms are then fed into a back-propagation neural network for training, and the Levenberg-Marquardt algorithm is applied to the training iterations. Applying the Levenberg-Marquardt algorithm to the training process helps avoid the local minimum problem of the back-propagation neural network, one of the common training methods for face detection, and lowers the detection error rate. Furthermore, the proposed color-based detection method, which uses combined color union histograms that emphasize the chrominance components separated from the luminance components, supplies more reliable values to the neural network and shows higher detection accuracy than the histogram of a single color space. The experiments show that these approaches provide good face region detection capability and are robust to illumination conditions.
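
A minimal sketch of the combined chrominance-histogram feature described above, built from the standard YCbCr and YIQ conversion formulas; the bin count and the random patch are assumptions. The resulting vector is what would be fed to the back-propagation network.

```python
import numpy as np

def chrominance_histograms(rgb, bins=16):
    """rgb: float array in [0, 1] of shape (h, w, 3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # YCbCr chrominance (BT.601, offset to [0, 1])
    cb = -0.169 * r - 0.331 * g + 0.500 * b + 0.5
    cr = 0.500 * r - 0.419 * g - 0.081 * b + 0.5
    # YIQ chrominance
    ci = 0.596 * r - 0.275 * g - 0.321 * b
    cq = 0.212 * r - 0.523 * g + 0.311 * b
    feats = []
    for c, lo, hi in [(cb, 0, 1), (cr, 0, 1), (ci, -0.6, 0.6), (cq, -0.53, 0.53)]:
        h, _ = np.histogram(c, bins=bins, range=(lo, hi), density=True)
        feats.append(h)
    return np.concatenate(feats)          # feature vector for the neural network

patch = np.random.default_rng(0).uniform(size=(32, 32, 3))
print(chrominance_histograms(patch).shape)    # (64,) with 16 bins per channel
```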

Categorization of Korean News Articles Based on Convolutional Neural Network Using Doc2Vec and Word2Vec (Doc2Vec과 Word2Vec을 활용한 Convolutional Neural Network 기반 한국어 신문 기사 분류)

  • Kim, Dowoo;Koo, Myoung-Wan
    • Journal of KIISE / v.44 no.7 / pp.742-747 / 2017
  • In this paper, we propose a novel approach that improves the performance of a Convolutional Neural Network (CNN) word-embedding model built on word2vec so that it performs like doc2vec in a document classification task. The Word Piece Model (WPM) is empirically shown to outperform other tokenization methods, such as phrase units and a part-of-speech tagger, with substantial experimental evidence (classification rate: 79.5%). We then conducted an experiment to classify Korean news articles into ten categories by feeding the word and document vectors generated with WPM to the baseline and the proposed model. The proposed model showed a higher classification rate (89.88%) than its counterpart (86.89%), an improvement of 22.80% in terms of relative error reduction. Throughout this research, it is demonstrated that applying doc2vec to the document classification task yields more effective results, because doc2vec generates similar document vector representations for documents belonging to the same category.
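
A minimal sketch of a word-embedding CNN text classifier in the spirit of the abstract. Vocabulary size, sequence length, embedding dimension, and the single convolution width are illustrative assumptions; a pre-trained word2vec matrix could be supplied to the Embedding layer instead of learning it from scratch.

```python
import tensorflow as tf

VOCAB, SEQ_LEN, EMB_DIM, CLASSES = 20000, 200, 100, 10   # assumed sizes

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN,)),
    tf.keras.layers.Embedding(VOCAB, EMB_DIM),            # word2vec-style embeddings
    tf.keras.layers.Conv1D(128, 5, activation="relu"),     # convolution over word windows
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(CLASSES, activation="softmax"),  # ten news categories
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```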

A Deep Neural Network Architecture for Real-Time Semantic Segmentation on Embedded Board (임베디드 보드에서 실시간 의미론적 분할을 위한 심층 신경망 구조)

  • Lee, Junyeop;Lee, Youngwan
    • Journal of KIISE / v.45 no.1 / pp.94-98 / 2018
  • We propose the Wide Inception ResNet (WIR Net), an optimized neural network architecture for real-time semantic segmentation in autonomous driving. The architecture consists of an encoder that extracts features by applying residual connections and inception modules, and a decoder that increases the resolution using transposed convolutions and a low-level feature map. We further improved performance by applying the ELU activation function and optimized the network by reducing the number of layers and increasing the number of filters. The performance was evaluated on an NVIDIA GeForce GTX 1080 and a TX1 board in terms of class and category IoU on the Cityscapes driving dataset. The experimental results show a class IoU of 53.4 and a category IoU of 81.8, with execution speeds of 17.8 fps and 13.0 fps for 640×360 and 720×480 resolution images, respectively, on the TX1 board.
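
A minimal sketch of the two building blocks named above: an inception-style block with a residual connection and ELU activations for the encoder, and a transposed convolution for decoder upsampling. Filter counts, input resolution, and the number of output classes are illustrative, not the WIR Net configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def inception_residual_block(x, filters):
    # parallel 1x1 / 3x3 / 5x5 branches, merged and added back to a shortcut
    b1 = layers.Conv2D(filters, 1, padding="same", activation="elu")(x)
    b3 = layers.Conv2D(filters, 3, padding="same", activation="elu")(x)
    b5 = layers.Conv2D(filters, 5, padding="same", activation="elu")(x)
    merged = layers.Concatenate()([b1, b3, b5])
    merged = layers.Conv2D(filters, 1, padding="same")(merged)   # match channel count
    shortcut = layers.Conv2D(filters, 1, padding="same")(x)
    return layers.Activation("elu")(layers.Add()([merged, shortcut]))

inputs = tf.keras.Input(shape=(360, 640, 3))                     # assumed input resolution
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="elu")(inputs)   # encoder downsampling
x = inception_residual_block(x, 64)
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="elu")(x)   # decoder upsampling
outputs = layers.Conv2D(20, 1, activation="softmax")(x)          # per-pixel class scores (class count assumed)
model = tf.keras.Model(inputs, outputs)
model.summary()
```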