• Title/Summary/Keyword: Facial Model

Search Results: 519

Classification Model of Facial Acne Using Deep Learning (딥 러닝을 이용한 안면 여드름 분류 모델)

  • Jung, Cheeoh; Yeo, Ilyeon; Jung, Hoekyung
    • Journal of the Korea Institute of Information and Communication Engineering / v.23 no.4 / pp.381-387 / 2019
  • Applying artificial intelligence in medicine faces two main limitations. First, interpreting images of a patient's illness involves the interpreter's subjective views, heavy interpretation workload, and physical fatigue. Second, it is unclear how long it takes to collect an annotated data set for each illness, and whether sufficient training data can be obtained without compromising the performance of the developed deep learning algorithm. In this paper, we describe the selection criteria and collection procedure used to gather base images for an acne data set, and propose a model with a sequential structure that classifies the data with a low loss rate (5.46%) and high accuracy (96.26%). The performance of the proposed model is verified through a comparative experiment against models provided by Keras. We expect the acne classification model proposed in this paper to be applied to the medical and skin care fields in the future.
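The "sequential structure" mentioned in the abstract can be illustrated as a minimal convolution → pooling → dense pipeline. The layer sizes, kernel, and four-class output below are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (single channel) followed by ReLU."""
    h, w = kernel.shape
    out = np.zeros((image.shape[0] - h + 1, image.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * kernel)
    return np.maximum(out, 0)  # ReLU activation

def max_pool(x, size=2):
    """2x2 max pooling; trims odd edges for simplicity."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(image, kernel, weights, bias):
    """Sequential pipeline: conv -> ReLU -> pool -> flatten -> dense -> softmax."""
    features = max_pool(conv2d(image, kernel)).ravel()
    return softmax(features @ weights + bias)

rng = np.random.default_rng(0)
image = rng.random((16, 16))                   # stand-in for a grayscale acne image patch
kernel = rng.standard_normal((3, 3))
n_features = max_pool(conv2d(image, kernel)).size
weights = rng.standard_normal((n_features, 4))  # 4 hypothetical acne classes
probs = forward(image, kernel, weights, np.zeros(4))
print(probs.shape)  # one probability per class
```

In a real Keras model this whole pipeline would be a `Sequential` stack of `Conv2D`, `MaxPooling2D`, `Flatten`, and `Dense` layers trained on the collected acne images.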

Automatic Generation of Rule-based Caricature Image (규칙 기반 캐리커쳐 자동 생성 기법)

  • Lee, Eun-Jung; Kwon, Ji-Yong; Lee, In-Kwon
    • Journal of the Korea Computer Graphics Society / v.12 no.4 / pp.17-22 / 2006
  • We present a technique that automatically generates caricatures from input face images. We compute the mean shape of the training images and extract the input image's feature points using an AAM (Active Appearance Model). From the literature of caricature artists, we define exaggeration rules; applying these rules to the input feature points yields exaggerated feature points. To turn the result into a cartoon-like image, we apply a cartoon-stylizing method to the input image and combine it with a facial sketch. Finally, the input image is warped to the exaggerated feature points to produce the result. Our method generates a caricature image automatically while minimizing user interaction.
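The core of such rule-based exaggeration is amplifying each feature point's deviation from the mean shape. A minimal sketch, where the single scaling factor `k` is a hypothetical stand-in for the paper's artist-derived rules:

```python
import numpy as np

def exaggerate(points, mean_shape, k=1.8):
    """Caricature exaggeration rule: push feature points away from the mean
    shape by factor k, amplifying each point's deviation from the average
    face. k > 1 exaggerates; k = 1 leaves the shape unchanged."""
    return mean_shape + k * (points - mean_shape)

mean_shape = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])  # toy mean face landmarks
points = np.array([[0.1, 0.0], [1.0, 0.1], [0.5, 1.2]])      # landmarks of an input face
exaggerated = exaggerate(points, mean_shape, k=2.0)
print(exaggerated)
```

The warped image would then be produced by mapping pixels from the original landmark positions to these exaggerated ones.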


Depth Image Restoration Using Generative Adversarial Network (Generative Adversarial Network를 이용한 손실된 깊이 영상 복원)

  • Nah, John Junyeop; Sim, Chang Hun; Park, In Kyu
    • Journal of Broadcast Engineering / v.23 no.5 / pp.614-621 / 2018
  • This paper proposes a method for restoring corrupted depth images captured by a depth camera through unsupervised learning with a generative adversarial network (GAN). The proposed method generates restored face depth images using a 3D morphable model convolutional neural network (3DMM CNN), with the large-scale CelebFaces Attributes (CelebA) and FaceWarehouse datasets used to train a deep convolutional generative adversarial network (DCGAN). The generator and discriminator are trained in a minimax game with the Wasserstein distance as the loss function. The DCGAN then restores the corrupted regions of captured facial depth images by performing a further learning procedure with the trained generator and a new loss function.
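The Wasserstein formulation replaces the usual GAN cross-entropy objectives with critic-score differences. A minimal sketch of the two losses (the score vectors below are illustrative, not from the paper):

```python
import numpy as np

def critic_loss(d_real, d_fake):
    """WGAN critic loss: the critic maximizes E[D(real)] - E[D(fake)],
    so it minimizes the negated difference."""
    return np.mean(d_fake) - np.mean(d_real)

def generator_loss(d_fake):
    """WGAN generator loss: the generator maximizes E[D(fake)],
    i.e. minimizes -E[D(fake)]."""
    return -np.mean(d_fake)

d_real = np.array([0.9, 1.1, 1.0])   # critic scores on real depth patches
d_fake = np.array([-0.5, 0.2, 0.0])  # critic scores on generated patches
print(critic_loss(d_real, d_fake), generator_loss(d_fake))
```

In the full method these scalars would be backpropagated through the DCGAN's critic and generator networks in alternating minimax steps.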

A Study on Visual Perception based Emotion Recognition using Body-Activity Posture (사용자 행동 자세를 이용한 시각계 기반의 감정 인식 연구)

  • Kim, Jin-Ok
    • The KIPS Transactions: Part B / v.18B no.5 / pp.305-314 / 2011
  • Research into the visual perception of human emotion for intention recognition has traditionally focused on facial expressions. Recently, researchers have turned to the more challenging problem of recognizing emotion expressed through body posture or activity. The proposed work recognizes basic emotional categories from body postures using a neural model that applies the visual perception principles of neurophysiology. In keeping with information processing models of the visual cortex, this work constructs a biologically plausible hierarchy of neural detectors that can discriminate six basic emotional states from static views of the associated body postures. The proposed model, which is tolerant to parameter variations, demonstrates its feasibility in an evaluation against human test subjects on a set of body postures.

Management of the Intractable Huge Intracranial Osteoma Based on the 3D Printing Model

  • Choi, Jong-Woo
    • Journal of International Society for Simulation Surgery / v.3 no.2 / pp.77-79 / 2016
  • Osteoma is a benign tumor that can occur on bones throughout the body. In most cases, simple excision is known to be sufficient. Sometimes, however, the osteoma is located in a very challenging area, which can lead to recurrence. A 26-year-old female presented with an intractable intracranial osteoma. Given the disease entity, simple excision or conservative management would usually suffice, but this osteoma turned out to be huge and recurrent in spite of endoscopic resections, causing facial deformity accompanied by orbital vertical dystopia; moreover, the patient's main concern was pain. We performed intracranial resection of the whole lesion and reconstructed the skull base, the frontal bone, and part of the orbital wall. To restore the original bony anatomy, a 3D-printed model was used as the basis for the titanium mesh. I report this unusual case of an intractable, huge intracranial osteoma. This report may help other surgeons make decisions in similar cases in the future.

Face region detection algorithm of natural-image (자연 영상에서 얼굴영역 검출 알고리즘)

  • Lee, Joo-shin
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.7 no.1 / pp.55-60 / 2014
  • In this paper, we propose a method for face region extraction in natural images using skin-color hue and saturation together with facial feature extraction. The proposed algorithm consists of a lighting correction step and a face detection step. In the lighting correction step, a correction function compensates for lighting changes. The face detection step extracts skin-color regions by computing the Euclidean distance between the input image and feature vectors of color and chroma obtained from 20 skin color sample images. For the extracted candidate regions, eyes are detected using the C component of the CMY color model and the mouth using the Q component of the YIQ color model. The face region is then determined from the candidate regions using knowledge of the human face. In an experiment with 10 natural face images as input, the method showed a face detection rate of 100%.
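The skin-color matching step described above amounts to a Euclidean-distance test in hue/saturation space. A minimal per-pixel sketch; the sample colors and threshold are illustrative assumptions, not the paper's 20 sample images or tuned values:

```python
import colorsys
import numpy as np

def hue_sat(rgb):
    """RGB (0-255) -> (hue, saturation) feature vector used for skin matching."""
    h, s, _v = colorsys.rgb_to_hsv(*(c / 255.0 for c in rgb))
    return np.array([h, s])

def is_skin(rgb, skin_samples, threshold=0.15):
    """Classify a pixel as skin if its Euclidean distance in (hue, saturation)
    space to the mean of the skin samples is below a threshold."""
    mean_feat = np.mean([hue_sat(s) for s in skin_samples], axis=0)
    return bool(np.linalg.norm(hue_sat(rgb) - mean_feat) < threshold)

# Hypothetical skin-tone samples (stand-ins for the paper's sample images)
samples = [(224, 172, 138), (200, 150, 120), (240, 190, 160)]
print(is_skin((210, 160, 130), samples))  # skin-like pixel
print(is_skin((30, 60, 200), samples))    # blue pixel
```

Applying this test to every pixel yields the skin-color candidate regions that the eye and mouth detectors then refine.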

Recent advances in the reconstruction of cranio-maxillofacial defects using computer-aided design/computer-aided manufacturing

  • Oh, Ji-hyeon
    • Maxillofacial Plastic and Reconstructive Surgery / v.40 / pp.2.1-2.7 / 2018
  • With the development of computer-aided design/computer-aided manufacturing (CAD/CAM) technology, it has become possible to reconstruct cranio-maxillofacial defects with more accurate preoperative planning, precise patient-specific implants (PSIs), and shorter operation times. The manufacturing processes include subtractive manufacturing and additive manufacturing, and should be selected in consideration of material type, available technology, post-processing, accuracy, lead time, properties, and surface quality. Materials such as titanium, polyethylene, polyetheretherketone (PEEK), hydroxyapatite (HA), poly-DL-lactic acid (PDLLA), polylactide-co-glycolide acid (PLGA), and calcium phosphate are used. Design methods for the reconstruction of cranio-maxillofacial defects include the use of a pre-operative model printed from pre-operative data, printing a cutting guide or template after virtual surgery, printing a model from data reconstructed with a mirror image after virtual surgery, and manufacturing PSIs directly from PSI data obtained after mirror-image reconstruction. By selecting the design method, manufacturing process, and implant material appropriate to the case, it is possible to obtain a more accurate surgical procedure, a reduced operation time, the prevention of various complications that can occur with the traditional method, and more predictable results.

Research and Development of Image Synthesis Model Based on Emotion for the Mobile Environment (모바일 환경에서 감성을 기반으로 한 영상 합성 기법 연구 및 개발)

  • Sim, SeungMin; Lee, JiYeon; Yoon, YongIk
    • Journal of the Korea Society of Computer and Information / v.18 no.11 / pp.51-58 / 2013
  • Smartphone camera performance has recently advanced to rival that of digital cameras. As a result, many people take pictures, and the number of people interested in photo applications has steadily increased. However, existing synthesis programs only array several photos or overlap multiple images. The model proposed in this paper synthesizes an image by combining a background and applying filter effects based on the emotion extracted from facial expressions. It can also be utilized in more varied fields than existing synthesis programs.

Korean Lip-Reading: Data Construction and Sentence-Level Lip-Reading (한국어 립리딩: 데이터 구축 및 문장수준 립리딩)

  • Sunyoung Cho; Soosung Yoon
    • Journal of the Korea Institute of Military Science and Technology / v.27 no.2 / pp.167-176 / 2024
  • Lip-reading is the task of inferring a speaker's utterance from silent video by learning lip movements. It is very challenging due to the inherent ambiguities in lip movement, such as different characters producing the same lip appearance. Recent advances in deep learning models such as the Transformer and the Temporal Convolutional Network have improved lip-reading performance. However, most previous work addresses English lip-reading, which cannot be directly applied to Korean, and no large-scale Korean lip-reading dataset has been available. In this paper, we introduce the first large-scale Korean lip-reading dataset, with more than 120k utterances collected from TV broadcasts including news, documentaries, and dramas. We also present a preprocessing method that uniformly extracts a facial region of interest, and propose a Transformer-based model operating on grapheme units for sentence-level Korean lip-reading. Statistics of the dataset and experimental results demonstrate that our dataset and model are well suited to Korean lip-reading.
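A grapheme-unit vocabulary for Korean can be built by decomposing precomposed Hangul syllables into their jamo, which standard Unicode arithmetic makes straightforward. A minimal sketch (the paper's exact tokenization scheme may differ):

```python
# Decompose Hangul syllables into grapheme units (jamo) via Unicode arithmetic.
CHOSEONG = [chr(0x1100 + i) for i in range(19)]          # initial consonants
JUNGSEONG = [chr(0x1161 + i) for i in range(21)]         # vowels
JONGSEONG = [''] + [chr(0x11A8 + i) for i in range(27)]  # optional final consonants

def to_graphemes(text):
    """Split each precomposed Hangul syllable (U+AC00..U+D7A3) into jamo;
    other characters (spaces, digits, Latin letters) pass through as-is."""
    units = []
    for ch in text:
        code = ord(ch)
        if 0xAC00 <= code <= 0xD7A3:
            offset = code - 0xAC00
            units.append(CHOSEONG[offset // (21 * 28)])
            units.append(JUNGSEONG[(offset % (21 * 28)) // 28])
            final = JONGSEONG[offset % 28]
            if final:
                units.append(final)
        else:
            units.append(ch)
    return units

print(to_graphemes('안녕'))  # 6 jamo units for two syllables
```

Operating on jamo rather than whole syllables keeps the model's output vocabulary small, since the roughly 11,000 possible Hangul syllables collapse to a few dozen grapheme symbols.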

3D Printed customized sports mouthguard (3D 프린터로 제작하는 마우스가드)

  • Ryu, Jae Jun; Lee, Soo Young
    • The Journal of the Korean dental association / v.58 no.11 / pp.700-712 / 2020
  • The conventional mouthguard fabrication process, consisting of elastomeric impression taking followed by gypsum model making, is now shifting to intraoral scanning and direct 3D printing of the mouthguard with an additive manufacturing process. Dental professionals can also collect various diagnostic data, such as facial scans, cone-beam CT, jaw motion tracking, and intraoral scan data, and superimpose them to build a virtual patient dataset. Dental CAD software allows dental professionals to design mouthguards for printing with ease. This article shows how to make a 3D printed mouthguard step by step.
