• Title/Summary/Keyword: Computer Training


Algorithms for Handling Incomplete Data in SVM and Deep Learning (SVM과 딥러닝에서 불완전한 데이터를 처리하기 위한 알고리즘)

  • Lee, Jong-Chan
    • Journal of the Korea Convergence Society
    • /
    • v.11 no.3
    • /
    • pp.1-7
    • /
    • 2020
  • This paper introduces two techniques for dealing with incomplete data and the algorithms that learn from it. The first method fills each missing value by assigning equal probability to every value the missing variable can take, and then trains an SVM on the completed data. Under this scheme, the more frequently a variable is missing, the higher its entropy, so it is not selected in the decision tree; the drawback is that all remaining information about the missing variable is ignored and a new value is simply assigned. The second, newly proposed method instead computes an entropy-based probability from the information that remains apart from the missing value and uses it as an estimate of the missing variable. In other words, it exploits the information that is not lost in the incomplete training data to recover part of the missing information and then learns with deep learning. The two methods are evaluated by selecting one variable at a time from the training data and repeatedly comparing results while varying the proportion of values lost in that variable, as sketched below.
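A minimal sketch of the two missing-value strategies described above, with NumPy and scikit-learn's SVC as the downstream learner; the column layout and the entropy-based estimate (approximated here by the empirical distribution of the observed values) are assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: two ways to fill a missing value before training a classifier.
import numpy as np
from sklearn.svm import SVC

def impute_equal_probability(column):
    """Strategy 1: draw replacements with equal probability over the variable's
    possible values, ignoring all other information about the missing entries."""
    rng = np.random.default_rng(0)
    values = np.unique(column[~np.isnan(column)])
    filled = column.copy()
    mask = np.isnan(filled)
    filled[mask] = rng.choice(values, size=mask.sum())
    return filled

def impute_from_remaining(column):
    """Strategy 2: estimate missing entries from the information that remains,
    here approximated by the empirical distribution of the observed values."""
    rng = np.random.default_rng(0)
    values, counts = np.unique(column[~np.isnan(column)], return_counts=True)
    filled = column.copy()
    mask = np.isnan(filled)
    filled[mask] = rng.choice(values, size=mask.sum(), p=counts / counts.sum())
    return filled

# X, y = ...                                   # training matrix with NaNs in one column
# X[:, 2] = impute_equal_probability(X[:, 2])  # or impute_from_remaining(X[:, 2])
# clf = SVC().fit(X, y)                        # the paper trains SVM / deep learning models
```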

A Study on the Virtual Vision System Image Creation and Transmission Efficiency (가상 비전 시스템 이미지 생성 및 전송 효율에 관한 연구)

  • Kim, Won
    • Journal of the Korea Convergence Society
    • /
    • v.11 no.9
    • /
    • pp.15-20
    • /
    • 2020
  • Software-related training can be considered essential when software is a key factor in national innovation, growth, and value creation. As one approach to engineering education, various courses are taught through virtual simulations that reproduce, in a similar environment, situations that are difficult to teach directly. Recently, the construction of smart factories at production and manufacturing sites has spread, and product inspection using vision systems is widely conducted. However, operating a vision system is difficult due to a lack of operational expertise, and building a real system for vision-system education is costly. This paper provides an educational virtual simulation model that integrates computer and physics-engine camera functions and can extract and transmit video. In the experiments, the proposed model generated images at 30 Hz or more, with an average of 35.4 FPS, and could send and receive images within 22.7 ms, so it can be utilized in an educational virtual simulation environment, as sketched below.
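The paper's simulation stack and transport are not detailed here, so the following is only an illustration, under assumed frame size and a local socket pair, of how frame throughput and per-frame transfer time of the kind reported above might be measured.

```python
# Hypothetical sketch: measuring frame rate and per-frame transfer time over a local socket.
import socket, threading, time
import numpy as np

FRAME_SHAPE = (480, 640, 3)                      # assumed frame size
FRAME_BYTES = int(np.prod(FRAME_SHAPE))

def receive_frames(sock, n_frames, done):
    """Drain n_frames complete frames from the socket, then record the finish time."""
    for _ in range(n_frames):
        remaining = FRAME_BYTES
        while remaining:
            remaining -= len(sock.recv(min(remaining, 65536)))
    done.append(time.perf_counter())

sender, receiver = socket.socketpair()
done, n_frames = [], 30
worker = threading.Thread(target=receive_frames, args=(receiver, n_frames, done))
frame = np.zeros(FRAME_SHAPE, dtype=np.uint8)    # stand-in for a rendered frame
start = time.perf_counter()
worker.start()
for _ in range(n_frames):
    sender.sendall(frame.tobytes())              # transmit one frame
worker.join()
elapsed = done[0] - start
print(f"{n_frames / elapsed:.1f} FPS, {1000 * elapsed / n_frames:.1f} ms per frame")
```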

Dynamic Hand Gesture Recognition Using CNN Model and FMM Neural Networks (CNN 모델과 FMM 신경망을 이용한 동적 수신호 인식 기법)

  • Kim, Ho-Joon
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.2
    • /
    • pp.95-108
    • /
    • 2010
  • In this paper, we present a hybrid neural network model for dynamic hand gesture recognition. The model consists of two modules: a feature extraction module and a pattern classification module. We first propose a modified CNN (Convolutional Neural Network), a pattern recognition model, for the feature extraction module. We then introduce a weighted fuzzy min-max (WFMM) neural network for the pattern classification module. The data representation proposed in this research is a spatiotemporal template based on the motion information of the target object. To minimize the influence of spatial and temporal variation of the feature points, we extend the receptive field of the CNN model to a three-dimensional structure. We discuss the learning capability of the WFMM neural network, in which a weight concept is added to represent the frequency factor of the training pattern set; this allows the model to overcome the performance degradation that the hyperbox contraction process of conventional FMM neural networks may cause. The validity of the proposed models is discussed using experimental results on human action recognition and on dynamic hand gesture recognition for remote control of electric home appliances; a minimal membership sketch follows below.
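The WFMM classifier above scores an input against hyperboxes; the sketch below uses the classic Simpson-style fuzzy min-max membership plus an illustrative per-hyperbox frequency weight. The exact weighted formulation, the gamma value, and the gesture labels are assumptions, not taken from the paper.

```python
# Hypothetical sketch: fuzzy min-max hyperbox membership with a frequency weight.
import numpy as np

def hyperbox_membership(x, v, w, gamma=4.0):
    """Membership of pattern x in a hyperbox with min point v and max point w;
    gamma controls how quickly membership decays outside the box."""
    above = np.maximum(0.0, 1.0 - np.maximum(0.0, gamma * np.minimum(1.0, x - w)))
    below = np.maximum(0.0, 1.0 - np.maximum(0.0, gamma * np.minimum(1.0, v - x)))
    return float(np.mean((above + below) / 2.0))

def classify(x, boxes, weights, labels):
    """Pick the class of the hyperbox with the highest weighted membership."""
    scores = [wgt * hyperbox_membership(x, v, w)
              for (v, w), wgt in zip(boxes, weights)]
    return labels[int(np.argmax(scores))]

# Example: two hyperboxes in a 2-D normalized feature space.
boxes = [(np.array([0.1, 0.1]), np.array([0.3, 0.4])),
         (np.array([0.6, 0.5]), np.array([0.9, 0.9]))]
weights = [1.0, 0.7]                 # illustrative frequency weights
labels = ["gesture_A", "gesture_B"]
print(classify(np.array([0.2, 0.3]), boxes, weights, labels))   # -> gesture_A
```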

The Application to the Programming Education Using UML and LabVIEW OOP (UML과 LVOOP를 이용한 프로그래밍 교육의 적용 방안)

  • Jung, Deok-Gil;Jung, Min-Po;Cho, Hyuk-Gyu
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2011.10a
    • /
    • pp.375-378
    • /
    • 2011
  • Learners find it very difficult to learn a general-purpose computer language as a text-based programming language. Representing programs visually is one way to ease this problem. Visual languages such as Visual C++, Visual Basic, and Delphi represent the interface with visual components, but still describe each component's behavior in text, so programmers struggle with coding component behavior and come to dislike programming. In this paper, to address these problems, we use UML to represent and support logical thinking and object-oriented programming, and we propose a programming education method that replaces text-based programming with LabVIEW OOP. In addition, we conduct a survey on the programming education and analyze its training effect.

An Experience Type Virtual Reality Training System for CT(Computerized Tomography) Operations (컴퓨터 단층 촬영기(CT)의 가상 실습을 위한 3차원 체험형 교육 시스템)

  • Shin, Yong-Min;Kim, Young-Ho;Kim, Byung-Ki
    • The KIPS Transactions: Part D
    • /
    • v.14D no.5
    • /
    • pp.501-508
    • /
    • 2007
  • Simulation systems have been introduced and widely used in fields such as aviation, shipping, and medical treatment. 3D simulation has been used rather sparingly because it requires substantial system resources and a huge amount of computation. As graphics card performance and simulation capability have improved, however, PC-based simulation has become practical and has shown its potential as educational software. Educational institutions would need to invest a large budget and workforce to purchase and maintain CT equipment; for that reason, they entrust their students to hospitals for indirect experience of operation or mere observation. This study therefore developed a CT virtual reality education system with which medical CT equipment can be operated directly in a PC-based 3D virtual environment.

Head Pose Estimation with Accumulated Histogram and Random Forest (누적 히스토그램과 랜덤 포레스트를 이용한 머리방향 추정)

  • Mun, Sung Hee;Lee, Chil woo
    • Smart Media Journal
    • /
    • v.5 no.1
    • /
    • pp.38-43
    • /
    • 2016
  • As smart environments spread through our living spaces, the need for approaches to Human-Computer Interaction (HCI) increases. One of them is head pose estimation, which is closely related to gaze direction estimation since the head and eyes are linked by the body's structure; it is a key factor in identifying a person's intention or target of interest, and hence an essential topic in HCI research. In this paper, we propose an approach that estimates head pose over several predefined directions with a random forest classifier. To extract the rotation information of the input image, we apply a Canny edge detector to the difference image between the input image and an averaged frontal face image, obtaining a binary edge image. From it we build two accumulated histograms by counting the number of non-zero pixels along each of the axes, and these two histograms are used as the feature of the facial image. We train and test the random forest classifier on the CAS-PEAL-R1 dataset and obtain 80.6% accuracy; a minimal feature-extraction sketch follows below.
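A minimal sketch of the feature pipeline described above: difference from an averaged frontal face, Canny edges, per-axis accumulated histograms, and a random forest. The Canny thresholds, crop size, and training loop are assumptions rather than the paper's settings.

```python
# Hypothetical sketch: accumulated-histogram features from an edge image,
# fed to a random forest for head pose classification.
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pose_features(face_gray, mean_frontal_gray):
    """Difference from the averaged frontal face -> Canny edges ->
    per-axis counts of non-zero pixels, concatenated as one feature vector."""
    diff = cv2.absdiff(face_gray, mean_frontal_gray)
    edges = cv2.Canny(diff, 50, 150)             # thresholds are assumptions
    hist_x = np.count_nonzero(edges, axis=0)     # accumulated along columns
    hist_y = np.count_nonzero(edges, axis=1)     # accumulated along rows
    return np.concatenate([hist_x, hist_y]).astype(np.float32)

# faces: grayscale face crops of equal size; labels: pose class per face
# mean_frontal = np.mean(frontal_faces, axis=0).astype(np.uint8)
# X = np.stack([pose_features(f, mean_frontal) for f in faces])
# clf = RandomForestClassifier(n_estimators=100).fit(X, labels)
# predicted_pose = clf.predict(X[:1])
```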

Real Time Eye and Gaze Tracking (실시간 눈과 시선 위치 추적)

  • 이영식;배철수
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.8 no.2
    • /
    • pp.477-483
    • /
    • 2004
  • This paper describes preliminary results we have obtained in developing a computer vision system based on active IR illumination for real-time gaze tracking for interactive graphic display. Unlike most existing gaze tracking techniques, which often require a static head to work well and a cumbersome calibration process for each person, our gaze tracker can perform robust and accurate gaze estimation without per-person calibration and under rather significant head movement. This is made possible by a new gaze calibration procedure that identifies the mapping from pupil parameters to screen coordinates using Generalized Regression Neural Networks (GRNN). With GRNN, the mapping does not have to be an analytical function, and head movement is explicitly accounted for by the gaze mapping function. Furthermore, the mapping function can generalize to individuals not used in training. The effectiveness of our gaze tracker is demonstrated by preliminary experiments involving a gaze-contingent interactive graphic display; a minimal GRNN sketch follows below.
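A minimal NumPy sketch of a GRNN of the kind described above, mapping pupil-parameter vectors to screen coordinates by Gaussian kernel weighting; the feature layout, bandwidth, and sample data are illustrative assumptions.

```python
# Hypothetical sketch: a Generalized Regression Neural Network (GRNN) that maps
# pupil-parameter vectors to 2-D screen coordinates via Gaussian kernel weighting.
import numpy as np

def grnn_predict(x, train_X, train_Y, sigma=0.5):
    """Kernel-weighted average of the training targets (Nadaraya-Watson form)."""
    d2 = np.sum((train_X - x) ** 2, axis=1)      # squared distances to the query
    w = np.exp(-d2 / (2.0 * sigma ** 2))         # Gaussian pattern-layer weights
    return (w @ train_Y) / np.sum(w)             # weighted screen coordinate

# Illustrative data: each row is a pupil-parameter vector (e.g. glint-pupil
# displacement, pupil ellipse ratio, ...); targets are screen (x, y) in pixels.
train_X = np.array([[0.1, 0.2, 0.9], [0.4, 0.1, 0.8], [0.8, 0.3, 0.7]])
train_Y = np.array([[100.0, 200.0], [640.0, 180.0], [1200.0, 300.0]])
print(grnn_predict(np.array([0.35, 0.15, 0.82]), train_X, train_Y))
```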

Lip-reading System based on Bayesian Classifier (베이지안 분류를 이용한 립 리딩 시스템)

  • Kim, Seong-Woo;Cha, Kyung-Ae;Park, Se-Hyun
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.25 no.4
    • /
    • pp.9-16
    • /
    • 2020
  • Pronunciation recognition systems that use only video information and ignore voice information can be applied to various customized services. In this paper, we develop a system that applies a Bayesian classifier to distinguish Korean vowels from lip shapes in images. We extract feature vectors from the lip shapes in facial images and apply them to the designed machine learning model. Our experiments show that the system's recognition rate is 94% for the pronunciation of 'A', and its average recognition rate is approximately 84%, which is higher than that of the CNN tested for comparison. These results show that our Bayesian classification method, using feature values from lip-region landmarks, is efficient on a small training set, so it can be used for application development on limited hardware such as mobile devices; a minimal classifier sketch follows below.
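A minimal sketch of a Gaussian naive Bayes vowel classifier like the one described above, with hypothetical lip-landmark features; the paper's actual feature vector and vowel set are not reproduced here.

```python
# Hypothetical sketch: Bayesian classification of Korean vowels from
# lip-region landmark features (e.g. mouth width/height ratios).
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Each row is a feature vector from the lip landmarks of one frame; here two
# illustrative features: mouth aspect ratio and inner-lip opening.
X_train = np.array([[0.35, 0.10], [0.33, 0.12], [0.60, 0.45], [0.62, 0.40],
                    [0.50, 0.05], [0.52, 0.06]])
y_train = np.array(["A", "A", "O", "O", "EU", "EU"])   # illustrative vowel labels

clf = GaussianNB().fit(X_train, y_train)
print(clf.predict([[0.58, 0.42]]))          # a wide, open mouth shape -> likely "O"
print(clf.predict_proba([[0.34, 0.11]]))    # posterior over the vowel classes
```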

A Study on Multi-Object Tracking Method using Color Clustering in ISpace (컬러 클러스터링 기법을 이용한 공간지능화의 다중이동물체 추적 기법)

  • Jin, Tae-Seok;Kim, Hyun-Deok
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.11
    • /
    • pp.2179-2184
    • /
    • 2007
  • The Intelligent Space (ISpace) provides challenging research fields for surveillance, human-computer interfacing, networked camera conferencing, industrial monitoring, and service and training applications. ISpace is a space in which many intelligent devices, such as computers and sensors, are distributed. For these devices to cooperate within the environment, it is very important that the system knows location information in order to offer useful services. To achieve these goals, we present a method for representing, tracking, and following humans by fusing distributed multiple vision systems in ISpace, with application to pedestrian tracking in a crowd. This paper describes appearance-based tracking of unknown objects with the distributed vision system in the intelligent space. First, we discuss how object color information is obtained and how the color appearance model is constructed from this data. Then we discuss the global color model based on the local color information. The learning process within the global model and the experimental results are also presented; a minimal color-clustering sketch follows below.
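A minimal sketch of a color-clustering appearance model in the spirit of the local/global color models above, using k-means over the HSV pixels of an object region; the cluster count, color space, and matching score are assumptions, not the paper's exact design.

```python
# Hypothetical sketch: build a per-object color appearance model by clustering
# the HSV pixels of its bounding box, then score candidate regions against it.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def color_model(bgr_patch, n_clusters=4):
    """Cluster the patch's HSV pixels; the cluster centers form the appearance model."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV).reshape(-1, 3).astype(np.float32)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(hsv)
    return km.cluster_centers_

def match_score(bgr_patch, centers):
    """Lower is better: mean distance of the patch's pixels to their nearest center."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV).reshape(-1, 3).astype(np.float32)
    dists = np.linalg.norm(hsv[:, None, :] - centers[None, :, :], axis=2)
    return float(dists.min(axis=1).mean())

# model = color_model(first_frame[y0:y1, x0:x1])        # learn from the first view
# score = match_score(next_frame[y0:y1, x0:x1], model)  # compare a candidate region
```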

Deep Learning Model for Mental Fatigue Discrimination System based on EEG (뇌파기반 정신적 피로 판별을 위한 딥러닝 모델)

  • Seo, Ssang-Hee
    • Journal of Digital Convergence
    • /
    • v.19 no.10
    • /
    • pp.295-301
    • /
    • 2021
  • Individual mental fatigue not only reduces cognitive ability and work performance, but is also a major factor in accidents, large and small, in daily life. In this paper, a CNN model for EEG-based mental fatigue discrimination is proposed. To this end, EEG was collected in the resting state and in the task state and applied to the proposed CNN model, and the model's performance was analyzed. All subjects who participated in the experiment were right-handed male university students with an average age of 25.5 years. Spectral analysis was performed on the measured EEG in each state, and the performance of the CNN model was compared using raw EEG, absolute power, and relative power as its input data. As a result, the relative power at the occipital lobe positions in the alpha band showed the best performance. The model accuracy was 85.6% on the training data, 78.5% on validation, and 95.7% on the test data. The proposed model can be applied to the development of an automated system for mental fatigue detection; a relative-power sketch follows below.
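A minimal sketch of computing relative alpha-band power from one EEG channel with SciPy's Welch PSD, the kind of input feature reported to perform best above; the sampling rate, band edges, and channel index are assumptions.

```python
# Hypothetical sketch: relative alpha power of one EEG channel as a model input feature.
import numpy as np
from scipy.signal import welch

FS = 256                      # assumed sampling rate in Hz
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def relative_power(signal, band="alpha"):
    """Power in `band` divided by total power across all defined bands."""
    freqs, psd = welch(signal, fs=FS, nperseg=FS * 2)
    def band_power(lo, hi):
        mask = (freqs >= lo) & (freqs < hi)
        return np.trapz(psd[mask], freqs[mask])
    total = sum(band_power(lo, hi) for lo, hi in BANDS.values())
    lo, hi = BANDS[band]
    return band_power(lo, hi) / total

# occipital_channel = eeg[:, O1_INDEX]          # one occipital electrode, shape (n_samples,)
# feature = relative_power(occipital_channel)   # fed, with other channels, to the CNN
rng = np.random.default_rng(0)
print(relative_power(rng.standard_normal(FS * 10)))   # alpha band's share for white noise
```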