• Title/Summary/Keyword: Facial ms


A Study on the Effect of SUKI® Intervention on the Forehead Musculocutaneous Tissue of Females in Their 30s (30대 여성의 이마근피에 미치는 영향 연구(SUKI®중재에 의한))

  • Jeon, Jong-Mo;Hong, Seong-Gyun
    • Journal of Convergence for Information Technology
    • /
    • v.12 no.5
    • /
    • pp.194-201
    • /
    • 2022
  • The purpose of this study was to examine the effects of a 4-week SUKI® intervention on the forehead musculocutaneous tissue of females in their 30s. Eighteen subjects were tested, and the intervention consisted of SUKI processes C1, C2, C3, and C4, applied three times a week for 4 weeks. The experimental group showed a significant difference in the forehead musculature, indicating that the SUKI intervention had an effect on maintaining its elasticity (p<.05). Although limited, this study was motivated by the judgment that wrinkles caused by decreased elasticity of the forehead muscles, located on the upper part of the face, can profoundly affect the external appearance of women in their 30s, and the SUKI intervention was applied accordingly. The study also suggests an alternative management method, suited to the life cycle of women in their 30s, that can effectively control external appearance by maintaining the elasticity of the forehead muscles. In conclusion, we hope that future experiments will serve as new research data on preventing facial wrinkles and improving the elasticity of the facial muscles.

Gaze Detection System by Wide and Narrow View Camera (광각 및 협각 카메라를 이용한 시선 위치 추적 시스템)

  • Park, Kang-Ryoung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.12C
    • /
    • pp.1239-1249
    • /
    • 2003
  • Gaze detection is the task of locating, by computer vision, the position on a monitor screen where a user is looking. Previous gaze detection systems use a wide view camera, which can capture the user's whole face; however, the image resolution of such a camera is too low, and the fine movements of the user's eye cannot be detected exactly. We therefore implement a gaze detection system with both a wide view camera and a narrow view camera. In order to track the position of the user's eye as it changes with facial movements, the narrow view camera provides auto focusing and auto pan/tilt based on the detected 3D facial feature positions. Experimental results show that the facial and eye gaze position on a monitor can be obtained, and the accuracy between the computed positions and the real ones is about 3.1 cm of RMS error when permitting facial movements, and 3.57 cm when permitting both facial and eye movements. The processing time is short enough for a real-time system (below 30 msec on a Pentium-IV 1.8 GHz).
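
The accuracy figures above are RMS errors between computed and true on-screen gaze points. A minimal sketch of how such an RMS figure is computed, using hypothetical gaze coordinates (not data from the paper):

```python
import math

def rms_error(predicted, actual):
    """Root-mean-square Euclidean error between predicted and true 2D gaze points (cm)."""
    n = len(predicted)
    total = sum((px - ax) ** 2 + (py - ay) ** 2
                for (px, py), (ax, ay) in zip(predicted, actual))
    return math.sqrt(total / n)

# Hypothetical predicted vs. true on-screen gaze positions (cm)
pred = [(10.0, 5.0), (20.0, 15.0), (30.0, 25.0)]
true = [(10.0, 8.0), (24.0, 15.0), (30.0, 25.0)]
error_cm = rms_error(pred, true)  # about 2.89 cm
```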

Can a spontaneous smile invalidate facial identification by photo-anthropometry?

  • Pinto, Paulo Henrique Viana;Rodrigues, Caio Henrique Pinke;Rozatto, Juliana Rodrigues;da Silva, Ana Maria Bettoni Rodrigues;Bruni, Aline Thais;da Silva, Marco Antonio Moreira Rodrigues;da Silva, Ricardo Henrique Alves
    • Imaging Science in Dentistry
    • /
    • v.51 no.3
    • /
    • pp.279-290
    • /
    • 2021
  • Purpose: Using images in the facial image comparison process poses a challenge for forensic experts due to limitations such as the presence of facial expressions. The aims of this study were to analyze how morphometric changes in the face during a spontaneous smile influence the facial image comparison process and to evaluate the reproducibility of measurements obtained by digital stereophotogrammetry in these situations. Materials and Methods: Three examiners used digital stereophotogrammetry to obtain 3-dimensional images of the faces of 10 female participants (aged between 23 and 45 years). Photographs of the participants' faces were captured with their faces at rest (group 1) and with a spontaneous smile (group 2), resulting in a total of 60 3-dimensional images. The digital stereophotogrammetry device obtained the images with a 3.5-ms capture time, which prevented undesirable movements of the participants. Linear measurements between facial landmarks were made, in units of millimeters, and the data were subjected to multivariate and univariate statistical analyses using Pirouette® version 4.5 (InfoMetrix Inc., Woodinville, WA, USA) and Microsoft Excel® (Microsoft Corp., Redmond, WA, USA), respectively. Results: The measurements that most strongly influenced the separation of the groups were related to the labial/buccal region. In general, the data showed low standard deviations, which differed by less than 10% from the measured mean values, demonstrating that the digital stereophotogrammetry technique was reproducible. Conclusion: The impact of spontaneous smiles on the facial image comparison process should be considered, and digital stereophotogrammetry provided good reproducibility.
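
The reproducibility criterion above (standard deviation under 10% of the mean) is a coefficient-of-variation check. A sketch of that check with hypothetical repeated measurements, not the study's data:

```python
import statistics

def cv_percent(measurements):
    """Coefficient of variation: sample standard deviation as a percentage of the mean."""
    mean = statistics.mean(measurements)
    sd = statistics.stdev(measurements)
    return 100.0 * sd / mean

# Hypothetical repeated inter-landmark distances (mm) from three examiners
trials = [31.2, 30.8, 31.5]
reproducible = cv_percent(trials) < 10.0  # the study's 10% criterion
```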

Action Unit Based Facial Features for Subject-independent Facial Expression Recognition (인물에 독립적인 표정인식을 위한 Action Unit 기반 얼굴특징에 관한 연구)

  • Lee, Seung Ho;Kim, Hyung-Il;Park, Sung Yeong;Ro, Yong Man
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2015.04a
    • /
    • pp.881-883
    • /
    • 2015
  • In practical facial expression recognition applications, performance often degrades because subjects appearing at test time are frequently absent from the training data. In this paper, we propose facial features for subject-independent facial expression recognition. The proposed method uses, as expression features, geometric information based on facial muscle movements (Action Units (AUs)) that are common across subjects. The influence of a subject's unique identity is thereby reduced, while expression-related information is emphasized. In subject-independent expression recognition experiments, the method achieved a high recognition rate of 86% and a very fast classification speed of 3.5 ms per test video sequence (measured in Matlab).
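
The identity-suppressing idea can be sketched as geometric features measured relative to a subject's own neutral frame, so subject-specific face geometry largely cancels out. All landmark coordinates below are hypothetical, and this is an illustration of the general principle, not the paper's exact feature set:

```python
import math

def pairwise_distances(landmarks, pairs):
    """Euclidean distances between selected landmark pairs (e.g., mouth corners, brow-eyelid)."""
    return [math.dist(landmarks[i], landmarks[j]) for i, j in pairs]

def au_features(neutral, expressive, pairs):
    """Identity-reduced features: per-pair distance change from the subject's own
    neutral face, echoing AU-style muscle-movement measurements."""
    base = pairwise_distances(neutral, pairs)
    cur = pairwise_distances(expressive, pairs)
    return [c - b for b, c in zip(base, cur)]

# Hypothetical 2D landmarks: 0-1 = mouth corners, 2-3 = brow / upper eyelid
neutral = [(30.0, 60.0), (70.0, 60.0), (40.0, 30.0), (40.0, 40.0)]
smiling = [(25.0, 58.0), (75.0, 58.0), (40.0, 30.0), (40.0, 40.0)]
feats = au_features(neutral, smiling, [(0, 1), (2, 3)])  # mouth widens, brow unchanged
```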

Driver Drowsiness Detection Algorithm based on Facial Features (얼굴 특징점 기반의 졸음운전 감지 알고리즘)

  • Oh, Meeyeon;Jeong, Yoosoo;Park, Kil-Houm
    • Journal of Korea Multimedia Society
    • /
    • v.19 no.11
    • /
    • pp.1852-1861
    • /
    • 2016
  • Drowsy driving is a significant factor in traffic accidents, so driver drowsiness detection systems based on computer vision have been actively studied for convenience and safety. However, it is difficult to accurately detect driver drowsiness under complex backgrounds and environmental changes. In this paper, we propose a driver drowsiness detection algorithm that determines whether the driver is drowsy using measurement criteria for yawns, drowsy eye status, and nodding, based on facial features. The proposed algorithm detects driver drowsiness against complex backgrounds, is robust to changes in the environment, and can be applied in real time owing to its fast processing speed. Throughout the experiments, we confirmed that the algorithm reliably detects driver drowsiness. The processing time of the proposed algorithm is about 0.084 ms, and it achieves average detection rates of 98.48% in the daytime and 97.37% at nighttime for yawns, drowsy eyes, and nods.
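
The abstract does not specify the eye-state measure used; a common landmark-based proxy in this literature is the eye aspect ratio (EAR), sketched here with hypothetical landmark coordinates and a commonly used (assumed) threshold, not the paper's own criterion:

```python
import math

def eye_aspect_ratio(eye):
    """EAR from six eye landmarks (p1..p4 = corners and lid points in order):
    vertical lid openings divided by horizontal eye width. A persistently
    low EAR across frames suggests closed eyes, i.e., drowsiness."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = math.dist(p2, p6) + math.dist(p3, p5)
    horizontal = math.dist(p1, p4)
    return vertical / (2.0 * horizontal)

# Hypothetical landmarks for an open and a nearly closed eye
open_eye = [(0, 0), (1, 0.45), (2, 0.45), (3, 0), (2, -0.45), (1, -0.45)]
closed_eye = [(0, 0), (1, 0.1), (2, 0.1), (3, 0), (2, -0.1), (1, -0.1)]
drowsy = eye_aspect_ratio(closed_eye) < 0.2  # 0.2 is a common threshold (assumption)
```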

Benefits of lateral cephalogram during landmark identification on posteroanterior cephalograms

  • Hwang, Sel-Ae;Lee, Jae-Seo;Hwang, Hyeon-Shik;Lee, Kyung-Min
    • The korean journal of orthodontics
    • /
    • v.49 no.1
    • /
    • pp.32-40
    • /
    • 2019
  • Objective: Precise identification of landmarks on posteroanterior (PA) cephalograms is necessary when evaluating lateral problems such as facial asymmetry. The aim of the present study was to investigate whether the use of lateral (LA) cephalograms can reduce errors in landmark identification on PA cephalograms. Methods: Five examiners identified 16 landmarks (Cg, N, ANS, GT, Me, RO, Lo, FM, Z, Or, Zyg, Cd, NC, Ms, M, and Ag) on 32 PA cephalograms, with and without simultaneous reference to LA cephalograms. The positions of the landmarks were recorded in the horizontal and vertical directions. The mean errors and standard deviations of landmark location with and without LA cephalograms were compared for each landmark. Results: Relatively small errors were found for ANS, Me, Ms, and Ag, while relatively large errors were found for N, GT, Z, Or, and Cd. No significant difference was found between the horizontal and vertical errors for Z and Or, while large vertical errors were found for N, GT, and Cd. Identification errors were lower when the landmarks were identified using LA cephalograms. Statistically significant error reductions were found at N and Cd with LA cephalograms, especially in the vertical direction. Conclusions: The use of LA cephalograms during identification of landmarks on PA cephalograms could help reduce identification errors.
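
The error reduction is assessed landmark-by-landmark across examiners. A sketch of one such paired comparison, with hypothetical per-examiner identification errors (mm) rather than the study's data:

```python
import math
import statistics

def paired_t(errors_without, errors_with):
    """Paired t statistic for per-examiner identification errors at one landmark,
    comparing identification without vs. with a lateral cephalogram."""
    diffs = [a - b for a, b in zip(errors_without, errors_with)]
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)
    return mean_d / (sd_d / math.sqrt(len(diffs)))

# Hypothetical vertical errors (mm) at landmark N for five examiners
without_la = [3.1, 2.8, 3.5, 3.0, 3.3]
with_la = [2.0, 1.9, 2.4, 2.1, 2.2]
t_stat = paired_t(without_la, with_la)  # large positive t => consistent error reduction
```

The t statistic would then be compared against the t distribution with n-1 degrees of freedom to judge significance.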

Determination of Appropriate Exposure Angles for the Reverse Water's View using a Head Phantom (두부 팬텀을 이용한 Reverse Water's View에 관한 적절한 촬영 각도 분석)

  • Lee, Min-Su;Lee, Keun-Ohk;Choi, Jae-Ho;Jung, Jae-Hong
    • Journal of radiological science and technology
    • /
    • v.40 no.2
    • /
    • pp.187-195
    • /
    • 2017
  • Early diagnosis of upper facial trauma using the standard Water's view (S-Water's) in general radiography is difficult due to overlapping anatomical structures, uncertainty in patient positioning, and specific patient groups (obese, pediatric, elderly, or high-risk). The purpose of this study was to determine appropriate exposure angles through a comparison of two protocols (S-Water's vs. the reverse Water's view (R-Water's)) using a head phantom. Images were acquired with a head phantom and a general radiography unit at 75 kVp, 400 mA, 45 ms (18 mAs), and an SID of 100 cm. R-Water's images were obtained at angles from 0° to 50°, adjusted at 1-degree intervals in the supine position. Survey elements were developed, and three observers evaluated four elements: the maxillary sinus, zygomatic arch, petrous ridge, and image distortion. Statistical analysis used Krippendorff's alpha and Fleiss' kappa. The intra-class correlation (ICC) coefficients for the three observers were high: maxillary sinus, 0.957 (0.903, 0.995); zygomatic arch, 0.939 (0.866, 0.987); petrous ridge, 0.972 (0.897, 1.000); and image distortion, 0.949 (0.830, 1.000). The exposure-angle ranges yielding high-quality images (HI) and perfect agreement (PA) were: maxillary sinus (36°-44°), zygomatic arch (33°-40°), petrous ridge (32°-50°), and image distortion (44°-50°). Consequently, in this phantom study, the appropriate exposure angles for the R-Water's view in the supine position for patients with facial trauma range from 36° to 40°. These results will be helpful for the rapid diagnosis of facial fractures by simple radiography.
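
Fleiss' kappa, used above for inter-observer agreement, generalizes Cohen's kappa to more than two raters. A self-contained sketch with hypothetical rating counts (not the study's data):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for n raters assigning N subjects to k categories.
    `ratings` is an N x k matrix of per-subject category counts (rows sum to n)."""
    n_subjects = len(ratings)
    n_raters = sum(ratings[0])
    total = n_subjects * n_raters
    # Proportion of all assignments falling in each category
    p_cat = [sum(row[j] for row in ratings) / total for j in range(len(ratings[0]))]
    # Observed per-subject agreement
    p_subj = [(sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
              for row in ratings]
    p_bar = sum(p_subj) / n_subjects          # mean observed agreement
    p_e = sum(p * p for p in p_cat)           # chance agreement
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical: 3 observers rate 4 images as acceptable / not acceptable
counts = [[3, 0], [3, 0], [0, 3], [2, 1]]
kappa = fleiss_kappa(counts)
```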

Display of Irradiation Location of Ultrasonic Beauty Device Using AR Scheme (증강현실 기법을 이용한 초음파 미용기의 조사 위치 표시)

  • Kang, Moon-Ho
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.9
    • /
    • pp.25-31
    • /
    • 2020
  • In this study, for the safe use of a portable ultrasonic skin-beauty device, an Android app was developed that shows the user the irradiation locations of focused ultrasound through augmented reality (AR), enabling stable self-treatment; the utility of the app was assessed through testing. While the user treats their face with the beauty device, the user's face and the ultrasonic irradiation location on it are detected in real time with a smartphone camera. The irradiation location is then indicated on the face image and shown to the user so that excessive ultrasound is not applied to the same area during treatment. To this end, ML-Kit is used to detect the user's facial landmarks in real time, and these are compared with a reference face model to estimate the pose of the face, such as rotation and movement. After mounting an LED on the ultrasonic irradiation part of the device and operating the LED during irradiation, the LED light is located to find the position of the ultrasonic irradiation on the smartphone screen, and the irradiation position is registered and displayed on the face image based on the estimated face pose. Each task in the app was implemented through threads and timers, and all tasks executed within 75 ms. The test results showed that the time taken to register and display 120 ultrasound irradiation positions was less than 25 ms, and the display accuracy was within 20 mm when the face did not rotate significantly.
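
Comparing detected landmarks with a reference face model to estimate rotation and movement can be illustrated, in its simplest in-plane form, from just two matched landmarks. The coordinates and two-landmark simplification below are hypothetical; the app presumably uses a richer landmark set:

```python
import math

def estimate_pose_2d(reference, detected):
    """Rough in-plane pose estimate: rotation (degrees) and scale mapping two
    reference landmarks (e.g., eye corners) onto their detected positions."""
    (rx1, ry1), (rx2, ry2) = reference
    (dx1, dy1), (dx2, dy2) = detected
    ref_vec = (rx2 - rx1, ry2 - ry1)
    det_vec = (dx2 - dx1, dy2 - dy1)
    angle = math.degrees(math.atan2(det_vec[1], det_vec[0]) -
                         math.atan2(ref_vec[1], ref_vec[0]))
    scale = math.hypot(*det_vec) / math.hypot(*ref_vec)
    return angle, scale

# Hypothetical eye-corner landmarks: face rotated 10 degrees and 20% larger than the model
ref = [(0.0, 0.0), (60.0, 0.0)]
c, s = math.cos(math.radians(10)), math.sin(math.radians(10))
det = [(0.0, 0.0), (72.0 * c, 72.0 * s)]
angle, scale = estimate_pose_2d(ref, det)
```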

Affective Priming Effect on Cognitive Processes Reflected by Event-related Potentials (ERP로 확인되는 인지정보 처리에 대한 정서 점화효과)

  • Kim, Choong-Myung
    • The Journal of the Korea Contents Association
    • /
    • v.16 no.5
    • /
    • pp.242-250
    • /
    • 2016
  • This study was conducted to investigate whether a Stroop-related cognitive task is affected, in response time (RT), by the valence of a preceding affective prime and its match with the target, and whether facial recognition in normal individuals, as in patients with affective disorders, is indexed by specific event-related potential (ERP) signatures. ERPs primed by subliminal (30 ms) facial stimuli were recorded for four pairings of affect (positive or negative) and cognitive task (matched or mismatched) to obtain ERP effects (N2 and P300) in terms of amplitude and peak-latency variations. Behavioral analysis of RTs confirmed that subliminal affective stimuli primed target processing in all affective conditions except for the neutral stimulus. In addition, in the negative-affect, mismatched condition, the emotional-face-specific N2 reached significance, showing larger amplitude and delayed peak latency compared with the positive counterpart. That condition also showed a more positive amplitude and an earlier peak latency of the P300 effect, which denotes cognitive closure, than the corresponding positive-affect condition. These results are suggested to reflect that a negative affective stimulus at the subliminal level is automatically inhibited, such that this effect accelerates detection of the affect and facilitates the response by allowing adequate reallocation of attentional resources. The functional and cognitive significance of these findings is discussed in terms of subliminal effects and affect-related recognition modulating cognitive tasks.
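
N2 and P300 amplitude and peak latency are typically read off the averaged waveform within a component-specific latency window (negative-going for N2, positive-going for P300). A sketch of that extraction with a hypothetical waveform and assumed windows, not the study's recordings:

```python
def component_peak(erp, t0_ms, dt_ms, window):
    """Peak amplitude (microvolts) and latency (ms) of an ERP component within a
    latency window. `window` = (lo_ms, hi_ms, polarity); polarity -1 finds the
    most negative deflection (N2), +1 the most positive (P300)."""
    lo, hi, polarity = window
    best = None
    for i, v in enumerate(erp):
        t = t0_ms + i * dt_ms
        if lo <= t <= hi:
            score = v * polarity
            if best is None or score > best[0]:
                best = (score, v, t)
    _, amplitude, latency = best
    return amplitude, latency

# Hypothetical averaged waveform sampled every 50 ms from stimulus onset
erp = [0.0, 1.0, -2.5, -4.0, -1.0, 3.5, 6.0, 2.0]
n2_amp, n2_lat = component_peak(erp, 0, 50, (150, 350, -1))  # negative-going N2
p3_amp, p3_lat = component_peak(erp, 0, 50, (250, 400, +1))  # positive-going P300
```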

Validity of midsagittal reference planes constructed in 3D CT images (전산화단층사진을 이용한 3차원 영상에서 정중시상기준평면 설정에 관한 연구)

  • Jeon, Ye-Na;Lee, Ki-Heon;Hwang, Hyeon-Shik
    • The korean journal of orthodontics
    • /
    • v.37 no.3 s.122
    • /
    • pp.182-191
    • /
    • 2007
  • Objective: The purpose of this study was to evaluate the validity of midsagittal reference (MSR) planes constructed in maxillofacial 3D images. Methods: Maxillofacial computed tomography (CT) images were obtained from 36 individuals with normal occlusion who did not have apparent facial asymmetry, and 3D images were reconstructed using computer software. Six MSR planes (Cg-ANS-Ba, Cg-ANS-Op, Cg-PNS-Ba, Cg-PNS-Op, FH⊥(Cg, Ba), FH⊥(Cg, Op)) were constructed using landmarks located in the midsagittal area of the maxillofacial structure, such as Cg, ANS, PNS, Ba, and Op, together with the FH plane constructed with Po and Or. Six pairs of landmarks (Z, Fr, Fs, Zy, Mx, Ms), which represent right and left symmetry in the maxillofacial structure, were selected. Statistically significant differences between the right and left measurements were examined through t-tests, and the right-left differences were compared among the six MSR planes. Results: The distances from the right and left landmarks of each pair to each MSR plane did not show statistically significant differences. The reproducibility of the landmark identification was excellent. Conclusion: All six planes constructed in this study can be used as MSR planes in maxillofacial 3D analysis, particularly the planes including Cg and ANS.
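
The symmetry test above amounts to constructing a plane through three midline landmarks and comparing the distances of each right/left landmark pair to it. A geometric sketch with hypothetical coordinates (not the study's CT data):

```python
import math

def plane_from_points(p1, p2, p3):
    """Plane through three midline landmarks, returned as (unit normal, point on plane)."""
    u = [b - a for a, b in zip(p1, p2)]
    v = [b - a for a, b in zip(p1, p3)]
    n = [u[1] * v[2] - u[2] * v[1],          # cross product u x v
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = math.sqrt(sum(c * c for c in n))
    return [c / norm for c in n], p1

def distance_to_plane(point, plane):
    """Signed perpendicular distance from a landmark to the MSR plane."""
    normal, origin = plane
    return sum(nc * (pc - oc) for nc, pc, oc in zip(normal, point, origin))

# Hypothetical midline landmarks (e.g., Cg, ANS, Ba), all lying on the x = 0 plane
msr = plane_from_points((0, 80, 40), (0, 60, 0), (0, 10, 10))
# A right/left landmark pair: symmetry means |d_right| is close to |d_left|
d_right = distance_to_plane((35.0, 50.0, 20.0), msr)
d_left = distance_to_plane((-34.0, 50.0, 20.0), msr)
asymmetry = abs(abs(d_right) - abs(d_left))  # mm of right-left difference
```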