• Title/Summary/Keyword: Head and face


Effects of the Method of Changing Compression Ratio on Engine Performance in an SI Engine (가솔린 엔진에서 압축비 변경 방법이 성능에 미치는 영향)

  • 이원근;엄인용
    • Transactions of the Korean Society of Automotive Engineers
    • /
    • v.9 no.4
    • /
    • pp.27-33
    • /
    • 2001
  • In this study, it is observed that the distribution of combustion chamber volume affects the volumetric efficiency. The distribution ratio was adjusted by controlling the combustion chamber volume on the head side and in the piston bowl. Four cases were investigated, combining different distribution ratios with different compression ratios (9.8-10.0). A commercial SOHC 3-valve engine was modified by cutting the bottom face of the head and/or replacing the piston with one of a different volume. The results show that the smaller the head-side volume, the higher the volumetric efficiency achieved at the same compression ratio. It is also observed that increasing the volumetric efficiency leads to earlier knock occurrence due to the increased "real" compression ratio. To assess the reliability of the volumetric efficiency estimate, we examined the sensitivity of the AFR equation to possible errors in the emission measurements. It is shown that the volumetric efficiency, calculated from the measured AFR and fuel consumption, can be kept within a 1% error.

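The volumetric-efficiency estimate described in this abstract (air flow inferred from the measured AFR and fuel consumption, with a check on how measurement error propagates) can be sketched roughly as follows. The engine displacement, speed, fuel flow, and air density below are illustrative assumptions, not figures from the paper.

```python
# Rough sketch: estimating volumetric efficiency from measured AFR and fuel flow.
# All numeric values below are illustrative assumptions, not data from the paper.

def volumetric_efficiency(afr, fuel_flow_kg_s, rpm, displacement_m3, rho_air=1.184):
    """Volumetric efficiency = induced air mass flow / ideal air mass flow.

    afr              : measured air-fuel ratio (mass based)
    fuel_flow_kg_s   : measured fuel consumption [kg/s]
    rpm              : engine speed [rev/min]
    displacement_m3  : engine displacement [m^3]
    rho_air          : ambient air density [kg/m^3]
    """
    air_flow = afr * fuel_flow_kg_s                      # induced air mass flow [kg/s]
    # A four-stroke engine completes one intake stroke every two revolutions.
    ideal_air_flow = rho_air * displacement_m3 * rpm / (2 * 60.0)
    return air_flow / ideal_air_flow

# Sensitivity check: perturb the measured AFR by +/-1% and see how the estimate moves.
base = volumetric_efficiency(afr=14.7, fuel_flow_kg_s=1.0e-3, rpm=2000, displacement_m3=1.5e-3)
for err in (-0.01, 0.01):
    perturbed = volumetric_efficiency(afr=14.7 * (1 + err), fuel_flow_kg_s=1.0e-3,
                                      rpm=2000, displacement_m3=1.5e-3)
    print(f"AFR error {err:+.0%}: volumetric efficiency changes by "
          f"{(perturbed - base) / base:+.2%}")
```

Because the estimated air flow is linear in AFR, a 1% error in the measured AFR translates directly into roughly a 1% error in the volumetric efficiency, which is consistent with the error bound quoted in the abstract.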

Estimation of a Gaze Point in 3D Coordinates using Human Head Pose (휴먼 헤드포즈 정보를 이용한 3차원 공간 내 응시점 추정)

  • Shin, Chae-Rim;Yun, Sang-Seok
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.177-179
    • /
    • 2021
  • This paper proposes a method of estimating the location of the target point at which an interactive robot gazes in an indoor space. RGB images are captured from low-cost web-cams, the user's head pose is obtained from a face detection (OpenFace) module, and geometric relations are applied to estimate the user's gaze direction in 3D space. The coordinates of the target point at which the user stares are finally determined from the geometric relation between the estimated gaze direction and the table plane.

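The geometric step summarized above, intersecting the estimated gaze ray with the table plane to obtain the 3D target point, might look roughly like the sketch below. The eye position, gaze direction, and plane definition are made-up placeholders, and OpenFace itself is not invoked here.

```python
import numpy as np

def gaze_point_on_plane(eye_pos, gaze_dir, plane_point, plane_normal):
    """Intersect a gaze ray (eye_pos + t * gaze_dir) with a plane.

    Returns the 3D intersection point, or None if the ray is parallel
    to the plane or points away from it.
    """
    gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
    denom = np.dot(plane_normal, gaze_dir)
    if abs(denom) < 1e-9:
        return None                      # ray parallel to the plane
    t = np.dot(plane_normal, plane_point - eye_pos) / denom
    if t < 0:
        return None                      # plane is behind the eye
    return eye_pos + t * gaze_dir

# Illustrative values only: an eye 40 cm above a table, looking down and forward.
eye = np.array([0.0, 0.0, 0.4])          # [m] in a table-centered frame
direction = np.array([0.3, 0.1, -1.0])   # gaze direction derived from head pose
table_point = np.array([0.0, 0.0, 0.0])  # any point on the table plane (z = 0)
table_normal = np.array([0.0, 0.0, 1.0])

print(gaze_point_on_plane(eye, direction, table_point, table_normal))
```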

Development of Headforms for the Labor Population in Selection, Use and Maintenance of Respirators in Korea (호흡보호구의 선정, 사용 및 관리를 위한 한국형 노동인구의 인두 개발)

  • Jung-Keun Park;Se-Dong Kim;Eun-Ji Lee
    • Journal of Korean Society of Occupational and Environmental Hygiene
    • /
    • v.34 no.3
    • /
    • pp.279-291
    • /
    • 2024
  • Objective: This study was to develop headforms for the labor population, based on a three-dimensional (3D) face dimensions database (DB) and a principal component analysis (PCA) fit test panel, for the selection, use, and maintenance of respirators in Korea. Methods: This study was part of a two-year project initiated in 2021. The study was designed and conducted in line with ISO 16976-2, and the subjects were those employed in the development of the PCA fit test panel. The approaches included a literature review, an examination of the conformity of the 3D face dimensions DB, and the development of headforms representing the labor population. The mean data were used to construct each headform model by means of 3D modeling and 3D printing technology. Results: A total of 2,752 subjects were included. Five headform models (small, medium, large, long-narrow, short-wide) were constructed for the labor population. For example, the means of the 10 face dimensions for the medium headform model were: minimum frontal breadth 106 mm, face width 136 mm, jaw width 127 mm, face length 111 mm, interpupillary distance 69 mm, head breadth 164 mm, nose protrusion 12 mm, nose breadth 34 mm, nasal root breadth 35 mm, and nose length 50 mm. Conclusions: Five headform models were newly constructed using the study data. The constructed headforms, together with the 3D face dimensions DB and the PCA fit test panel, are expected to be utilized effectively in the selection, use, and maintenance of respirators for users including the labor population.
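
As a loose illustration of the PCA step behind the fit test panel mentioned above, the sketch below projects a set of face dimensions onto their first two principal components; panel cells (small, medium, large, long-narrow, short-wide) are then defined by where subjects fall in the PC1/PC2 plane. The random data are placeholders for the 3D face dimensions DB, centered for convenience on the medium-headform means quoted in the abstract.

```python
import numpy as np

# Stand-in data: 2,752 subjects x 10 face dimensions [mm]. The random values are
# placeholders for the (non-public) 3D face-dimension DB; the means are the
# medium-headform means listed in the abstract.
rng = np.random.default_rng(0)
faces = rng.normal(loc=[106, 136, 127, 111, 69, 164, 12, 34, 35, 50],
                   scale=5.0, size=(2752, 10))

# PCA via eigendecomposition of the covariance matrix.
centered = faces - faces.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]          # largest variance first
pc_scores = centered @ eigvecs[:, order[:2]]

# A PCA fit-test panel assigns each subject to a cell by their PC1/PC2 scores;
# the mean dimensions of each cell can then seed one headform per cell.
print("explained variance of PC1, PC2:", eigvals[order[:2]] / eigvals.sum())
```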

The Body Cathexis Difference between Naked Body and After Appearence management Body of 20-30 yrs College Students (나체상태와 외모관리 후의 신체만족도 차이 -20대 남녀 대학생을 중심으로-)

  • Kim, Jung-Won;Yoon, Jong-Hee
    • Fashion & Textile Research Journal
    • /
    • v.1 no.2
    • /
    • pp.127-136
    • /
    • 1999
  • The purpose of this research was to investigate the difference between perceptions of the nude body and of the clothed body as measured by a body cathexis scale. The subjects were 274 male and female college students aged 20-30. Data were analyzed using frequencies, t-tests, cluster analysis, and Duncan tests with the SPSS for Windows 8.0 PC program. Significant differences were found between the mean scores of males and females on the nude body cathexis (NBC) and clothed body cathexis (CBC) scales for hair texture, hair color, face, face color, head shape, eyes, lips, forehead, back, trunk, waist, bust, leg shape, chest, and hip. Significant differences were found between the NBC and CBC scales for all body parts except hair texture, face color, ears, eyes, and teeth. Males showed higher satisfaction than females on both scales. The taller the men, the higher their satisfaction with face shape, body shape, and height on both scales; before appearance management, the bigger the men, the higher their satisfaction with muscle, waist, height, chest, and body shape. The taller the women, the higher their satisfaction with neck, body shape, and height before appearance management; the bigger the women, the higher their satisfaction with weight distribution, waist, and height on both scales.

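The comparisons underlying the abstract above are essentially t-tests between nude body cathexis (NBC) and clothed body cathexis (CBC) ratings; a minimal paired version of that test, using invented placeholder ratings rather than the study's SPSS data, could look like this.

```python
import numpy as np
from scipy import stats

# Placeholder ratings (1-5 scale) for one body part; these numbers are
# invented for illustration and are not the study's data.
rng = np.random.default_rng(1)
nbc = rng.integers(1, 6, size=274)                        # nude body cathexis scores
cbc = np.clip(nbc + rng.integers(0, 2, size=274), 1, 5)   # clothed body cathexis scores

# Paired t-test: do the same subjects rate the clothed body differently?
t_stat, p_value = stats.ttest_rel(nbc, cbc)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```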

Development of Face Robot Actuated by Artificial Muscle

  • Choi, H.R.;Kwak, J.W.;Chi, H.J.;Jung, K.M.;Hwang, S.H.
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.1229-1234
    • /
    • 2004
  • Face robots capable of expressing their emotional status can be adopted as an efficient tool for friendly communication between humans and machines. In this paper, we present a face robot actuated with artificial muscle based on a dielectric elastomer. By exploiting the properties of polymers, it is possible to actuate the covering skin and provide human-like expressivity without employing complicated mechanisms. The robot is driven by seven types of actuator modules, namely eye, eyebrow, eyelid, brow, cheek, jaw, and neck modules, corresponding to movements of the facial muscles. Although they cover only part of the whole set of facial motions, our approach is sufficient to generate the six fundamental facial expressions: surprise, fear, anger, disgust, sadness, and happiness. Each module communicates with the others via the CAN communication protocol, and, according to the desired emotional expression, the facial motions are generated by combining the motions of each actuator module. A prototype of the robot has been developed, and several experiments have been conducted to validate its feasibility.

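The control structure described above, seven actuator modules coordinated over CAN with expressions composed from per-module motions, might be organized along the lines of the sketch below. The module IDs, setpoints, and expression table are invented for illustration and are not taken from the paper.

```python
# Hypothetical module IDs and normalized setpoints (0.0-1.0); none of these
# values come from the paper -- they only illustrate composing an expression
# from per-module commands sent as CAN-style frames.
MODULES = ["eye", "eyebrow", "eyelid", "brow", "cheek", "jaw", "neck"]

EXPRESSIONS = {
    "surprise":  {"eyebrow": 1.0, "eyelid": 1.0, "jaw": 0.8},
    "happiness": {"cheek": 0.9, "jaw": 0.3, "eyelid": 0.2},
    "sadness":   {"eyebrow": 0.2, "brow": 0.7, "neck": 0.4},
}

def expression_to_frames(expression):
    """Turn an expression into one (module_id, payload) message per module."""
    targets = EXPRESSIONS[expression]
    frames = []
    for can_id, module in enumerate(MODULES, start=0x100):   # made-up CAN IDs
        setpoint = targets.get(module, 0.0)                   # unused modules relax
        payload = int(setpoint * 255).to_bytes(1, "big")      # 1-byte command
        frames.append((can_id, payload))
    return frames

for can_id, data in expression_to_frames("surprise"):
    print(f"CAN id=0x{can_id:03X} data={data.hex()}")
```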

A Face Robot Actuated With Artificial Muscle Based on Dielectric Elastomer

  • Kwak Jong Won;Chi Ho June;Jung Kwang Mok;Koo Ja Choon;Jeon Jae Wook;Lee Youngkwan;Nam Jae-do;Ryew Youngsun;Choi Hyouk Ryeol
    • Journal of Mechanical Science and Technology
    • /
    • v.19 no.2
    • /
    • pp.578-588
    • /
    • 2005
  • Face robots capable of expressing their emotional status can be adopted as an efficient tool for friendly communication between humans and machines. In this paper, we present a face robot actuated with artificial muscle based on a dielectric elastomer. By exploiting the properties of the dielectric elastomer, it is possible to actuate the covering skin and eyes, as well as provide human-like expressivity, without employing complicated mechanisms. The robot is driven by seven actuator modules, namely eye, eyebrow, eyelid, brow, cheek, jaw, and neck modules, corresponding to movements of the facial muscles. Although they cover only part of the whole set of facial motions, our approach is sufficient to generate the six fundamental facial expressions: surprise, fear, anger, disgust, sadness, and happiness. In the robot, each module communicates with the others via the CAN communication protocol, and, according to the desired emotional expression, the facial motions are generated by combining the motions of each actuator module. A prototype of the robot has been developed, and several experiments have been conducted to validate its feasibility.

Robust pupil detection and gaze tracking under occlusion of eyes

  • Lee, Gyung-Ju;Kim, Jin-Suh;Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information
    • /
    • v.21 no.10
    • /
    • pp.11-19
    • /
    • 2016
  • Displays are becoming larger and more varied in form, so previous gaze-tracking methods do not apply directly; mounting the gaze-tracking camera above the display can resolve the problems caused by display size and height. However, this setup cannot use the corneal-reflection information from infrared illumination that previous methods rely on. This paper proposes a pupil detection method that is robust to eye occlusion, together with a simple method for calculating the gaze position from the inner eye corner, the pupil center, and face pose information. In the proposed method, frames for gaze tracking are captured by switching the camera between wide-angle and narrow-angle modes according to the person's position: if a face is detected within the field of view (FOV) in wide-angle mode, the camera switches to narrow-angle mode after the face position is calculated. Frames captured in narrow-angle mode contain the gaze-direction information of a person at a long distance. Gaze calculation consists of a face pose estimation step and a gaze direction calculation step. The face pose is estimated by mapping feature points of the detected face to a 3D model. To calculate the gaze direction, an ellipse is first fitted to the iris edge of the pupil; if the pupil is occluded, its position is estimated with a deformable template. The gaze position on the display is then calculated from the pupil center, the inner eye corner, and the face pose information. Experiments at various distances show that the proposed algorithm overcomes the constraints imposed by display form and effectively calculates the gaze direction of a person at a long distance using a single camera.
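
The pupil-localization step summarized above, fitting an ellipse to iris edge points and combining the pupil center with the inner eye corner, can be sketched as follows. The synthetic edge points, eye-corner position, and screen-mapping gain are placeholders, and the deformable-template fallback for occluded pupils is not reproduced here.

```python
import numpy as np
import cv2

# Synthetic iris edge points (placeholders for edge pixels extracted from the
# eye region); a real frame would supply these from edge detection.
theta = np.linspace(0, 2 * np.pi, 40)
edge_points = np.stack([120 + 14 * np.cos(theta),
                        80 + 10 * np.sin(theta)], axis=1).astype(np.float32)

# Fit an ellipse to the iris edge; its center approximates the pupil center.
(cx, cy), (major, minor), angle = cv2.fitEllipse(edge_points)
pupil_center = np.array([cx, cy])

# Gaze offset relative to the inner eye corner (illustrative corner position
# and screen-mapping gain; the paper additionally folds in the face pose).
inner_corner = np.array([100.0, 82.0])
gain = 25.0                                   # display pixels per pixel of offset
gaze_on_display = (pupil_center - inner_corner) * gain
print("pupil center:", pupil_center, "gaze point (display px):", gaze_on_display)
```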

Bayesian Network Model for Human Fatigue Recognition (피로 인식을 위한 베이지안 네트워크 모델)

  • Lee Young-sik;Park Ho-sik;Bae Cheol-soo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.9C
    • /
    • pp.887-898
    • /
    • 2005
  • In this paper, we introduce a probabilistic model based on Bayesian networks (BNs) for recognizing human fatigue. First, facial feature information such as eyelid movement, gaze, head movement, and facial expression was measured under IR illumination. However, any individual facial feature alone does not provide enough information to determine human fatigue. Therefore, a Bayesian network model was constructed to fuse as many fatigue-related parameters and facial features as possible for probabilistic inference of human fatigue. The MSBNX simulation yielded a BN fatigue index threshold of 0.95. As a result of the experiment, a mutual correlation was found when the inferred BN fatigue index was compared with the TOVA response time, from which we conclude that this method is very effective at recognizing human fatigue.
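
The probabilistic fusion described above can be approximated, very loosely, by the naive-Bayes style sketch below. The prior and per-cue likelihoods are invented placeholders; the model in the paper is a richer Bayesian network built and simulated in MSBNX, and only the 0.95 threshold is taken from the abstract.

```python
# Naive-Bayes style fusion of binary fatigue cues. The probabilities are
# invented placeholders, not parameters from the paper's Bayesian network.
PRIOR_FATIGUED = 0.3

# (P(cue observed | fatigued), P(cue observed | alert))
LIKELIHOODS = {
    "slow_eyelid_movement": (0.80, 0.15),
    "gaze_fixation":        (0.60, 0.25),
    "head_nodding":         (0.70, 0.10),
    "yawning_expression":   (0.65, 0.20),
}

def fatigue_index(observed_cues):
    """Posterior P(fatigued | cues), assuming conditionally independent cues."""
    p_f, p_a = PRIOR_FATIGUED, 1.0 - PRIOR_FATIGUED
    for cue, present in observed_cues.items():
        lf, la = LIKELIHOODS[cue]
        p_f *= lf if present else (1.0 - lf)
        p_a *= la if present else (1.0 - la)
    return p_f / (p_f + p_a)

cues = {"slow_eyelid_movement": True, "gaze_fixation": True,
        "head_nodding": False, "yawning_expression": True}
index = fatigue_index(cues)
print(f"fatigue index = {index:.2f}, fatigued = {index > 0.95}")
```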

Effective real-time identification using Bayesian statistical methods gaze Network (베이지안 통계적 방안 네트워크를 이용한 효과적인 실시간 시선 식별)

  • Kim, Sung-Hong;Seok, Gyeong-Hyu
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.11 no.3
    • /
    • pp.331-338
    • /
    • 2016
  • In this paper, we propose a GRNN (Generalized Regression Neural Network) algorithm for a new eye and face recognition identification system, addressing the problem in existing approaches that the gaze becomes difficult to identify when the user's face moves. Using a Kalman filter on the structural information of the facial feature elements, the authenticity of the face is determined and the future head location is estimated from the current head location information, while the horizontal and vertical elements of the face are detected relatively quickly using histogram analysis. An infrared illuminator is configured so that the resulting pupil effects allow the pupil to be detected in real time, and the pupil is tracked to extract the gaze vector.
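
For readers unfamiliar with GRNNs, the core of a Generalized Regression Neural Network is the Gaussian-kernel weighted average sketched below. The training pairs (facial feature vectors mapped to gaze labels) and the smoothing parameter are placeholders, and the Kalman-filter head tracking from the abstract is not included.

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=0.5):
    """Generalized Regression Neural Network: a Gaussian-kernel weighted
    average of the training targets (Nadaraya-Watson regression)."""
    d2 = np.sum((x_train - x_query) ** 2, axis=1)        # squared distances
    w = np.exp(-d2 / (2.0 * sigma ** 2))                  # pattern-layer activations
    return (w @ y_train) / (w.sum() + 1e-12)              # summation / division layers

# Placeholder training data: facial feature vectors -> gaze coordinates.
rng = np.random.default_rng(2)
features = rng.normal(size=(50, 4))          # e.g. pupil/eye-corner offsets, pose
gaze_xy = rng.normal(size=(50, 2))           # gaze position labels on the display

query = features[0] + 0.05 * rng.normal(size=4)
print("predicted gaze:", grnn_predict(features, gaze_xy, query))
```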

Sequential reconstruction for recurrent head and neck cancer: A 10-year experience

  • Chung, Soon Won;Byun, Il Hwan;Lee, Won Jai
    • Archives of Plastic Surgery
    • /
    • v.46 no.5
    • /
    • pp.449-454
    • /
    • 2019
  • Background Most patients with head and neck cancer successfully undergo oncologic resection followed by free or local flap reconstruction, depending on the tumor's size and location. Despite effective curative resection and reconstruction, head and neck cancer patients still face a high risk of recurrence and the possibility of a second primary cancer. Moreover, surgeons hesitate to perform sequential reconstruction following curative resection for several reasons. Few large-scale studies on this subject are available. Therefore, we retrospectively evaluated the outcomes of sequential head and neck reconstruction to determine the possible risks. Methods In total, 467 patients underwent head and neck reconstruction following cancer resection at our center from 2008 to 2017. Of these cases, we retrospectively reviewed the demographic and clinical features of 58 who had sequential head and neck reconstruction following resection of recurrent cancer. Results Our study included 43 males (74.1%) and 15 females (25.9%). The mean age at the initial operation was 55.4 ± 15.3 years, while the mean age at the most recent operation was 59.0 ± 14.3 years. The interval between the first and second operations was 49.2 ± 62.4 months. Twelve patients (20.7%) underwent surgery on the tongue, and 12 (20.7%) had procedures on the oropharynx. Thirty-four patients (58.6%) received a sequential free flap reconstruction, and 24 patients (41.4%) were treated using locoregional flaps. No cases of flap failure occurred. Conclusions Our findings suggest that patients with recurrent head and neck cancer who need additional operations could optimally benefit from sequential curative resections and reconstructions.