• Title/Summary/Keyword: Korean human dataset


Analysis of Cross Sectional Ease Values for Fit Analysis from 3D Body Scan Data Taken in Working Positions

  • Nam, Jin-Hee;Branson, Donna H.;Ashdown, Susan P.;Cao, Huantian;Carnrite, Erica
    • International Journal of Human Ecology
    • /
    • v.12 no.1
    • /
    • pp.87-99
    • /
    • 2011
  • Purpose- The purpose of this study was to compare the fit of two prototype liquid cooled vests using a 3D body scanner and accompanying software. The objectives of this study were to obtain quantitative measurements of ease values, to use these data to evaluate the fit of two cooling vests in active positions, and to develop a methodological protocol to resolve alignment issues between the scans using software designed for the alignment of 3D objects. Design/methodology/approach- Garment treatments and body positions were two independent variables with three levels each. Quantitative datasets were the dependent variables and were manipulated in 3x3 factorial designs with repeated measures. Scan images from eight subjects were used, and ease values were obtained to compare the fit. Two different types of analyses were conducted to compare the fit using t-tests: radial mean distance value analysis and radial distance distribution rate analysis. Findings- Overall, prototype II achieved a closer fit than prototype I under both analyses. These results were consistent with findings from a previous study that used a different evaluation approach. Research limitations/implications- The main findings can be used as practical feedback for prototype modification/selection in the design process, making use of a 3D body scanner as an evaluation tool. Originality/value- The methodological protocols devised to eliminate potential sources of error can contribute to the application of data from 3D body scanners.

Head Pose Estimation with Accumulated Histogram and Random Forest (누적 히스토그램과 랜덤 포레스트를 이용한 머리방향 추정)

  • Mun, Sung Hee;Lee, Chil woo
    • Smart Media Journal
    • /
    • v.5 no.1
    • /
    • pp.38-43
    • /
    • 2016
  • As smart environments spread through our living spaces, the need for approaches to Human Computer Interaction (HCI) increases. One of them is head pose estimation, which is closely related to gaze direction estimation, since the head and eyes are linked by the body's structure. Head pose is a key factor in identifying a person's intention or target of interest, and is therefore an essential research topic in HCI. In this paper, we propose an approach that estimates head pose over several pre-defined directions with a random forest classifier. To extract rotation information from the input image, we apply a Canny edge detector to the difference image between the input image and an averaged frontal facial image. From the resulting binary edge image, we build two accumulated histograms by counting the number of non-zero pixels along each axis. These two accumulated histograms serve as the feature of the facial image. We trained and tested the random forest classifier on the CAS-PEAL-R1 dataset and obtained 80.6% accuracy.
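The feature-extraction step described above can be sketched as follows. This is a minimal illustration in which synthetic binary images stand in for the Canny edge images of CAS-PEAL-R1 faces; the image size, sample count, and forest settings are assumptions, not the paper's values.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def accumulated_histograms(edge_img):
    """Count non-zero pixels along each axis of a binary edge image,
    yielding two 1-D profiles that together form the pose feature."""
    rows = (edge_img != 0).sum(axis=1)   # per-row counts
    cols = (edge_img != 0).sum(axis=0)   # per-column counts
    return np.concatenate([rows, cols])

# synthetic stand-ins for edge images of faces at different poses
rng = np.random.default_rng(0)
X = np.stack([accumulated_histograms(rng.integers(0, 2, (32, 32)))
              for _ in range(40)])
y = rng.integers(0, 5, 40)               # 5 pre-defined pose classes

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
pred = clf.predict(X[:1])
```

With real edge images the row/column profiles shift systematically as the head rotates, which is what the forest learns to separate.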

Post space preparation timing of root canals sealed with AH Plus sealer

  • Kim, Hae-Ri;Kim, Young Kyung;Kwon, Tae-Yub
    • Restorative Dentistry and Endodontics
    • /
    • v.42 no.1
    • /
    • pp.27-33
    • /
    • 2017
  • Objectives: To determine the optimal timing for post space preparation of root canals sealed with epoxy resin-based AH Plus sealer in terms of its polymerization and influence on apical leakage. Materials and Methods: The epoxy polymerization of AH Plus (Dentsply DeTrey) as a function of time after mixing (8, 24, and 72 hours, and 1 week) was evaluated using Fourier transform infrared (FTIR) spectroscopy and microhardness measurements. The change in the glass transition temperature ($T_g$) of the material with time was also investigated using differential scanning calorimetry (DSC). Fifty extracted human single-rooted premolars were filled with gutta-percha and AH Plus, and randomly separated into five groups (n = 10) based on post space preparation timing (immediately after root canal obturation and 8, 24, and 72 hours, and 1 week after root canal obturation). The extent of apical leakage (mm) of the five groups was compared using a dye leakage test. Each dataset was statistically analyzed by one-way analysis of variance and Tukey's post hoc test (${\alpha}=0.05$). Results: Continuous epoxy polymerization of the material with time was observed. Although the $T_g$ values of the material gradually increased with time, the specimens presented no clear $T_g$ value at 1 week after mixing. When the post space was prepared 1 week after root canal obturation, the leakage was significantly higher than in the other groups (p < 0.05), among which there was no significant difference in leakage. Conclusions: Poor apical seal was detected when post space preparation was delayed until 1 week after root canal obturation.
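The statistical comparison described above (one-way ANOVA followed by Tukey's post hoc test) can be sketched with hypothetical leakage values; the numbers below are invented for illustration and are not the study's data.

```python
from scipy import stats

# hypothetical apical leakage values (mm) for three of the five
# post space preparation timing groups
immediate = [0.8, 1.0, 0.9, 1.1]
h24 = [0.9, 1.2, 1.0, 0.8]
week1 = [2.5, 3.0, 2.8, 3.2]

# one-way ANOVA: does at least one group mean differ?
f, p = stats.f_oneway(immediate, h24, week1)
# p < 0.05 flags a difference; Tukey's post hoc test then locates
# which pairs of groups differ, as in the study's analysis.
```

Here the 1-week group is clearly higher, mirroring the study's finding that delaying post space preparation to 1 week increased leakage.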

Cerebrospinal fluid flow in normal beagle dogs analyzed using magnetic resonance imaging

  • Cho, Hyunju;Kim, Yejin;Hong, Saebyel;Choi, Hojung
    • Journal of Veterinary Science
    • /
    • v.22 no.1
    • /
    • pp.2.1-2.10
    • /
    • 2021
  • Background: Diseases related to cerebrospinal fluid flow, such as hydrocephalus, syringomyelia, and Chiari malformation, are often found in small dogs. Although studies in human medicine have revealed a correlation with cerebrospinal fluid flow in these diseases by magnetic resonance imaging, there is little information and no standard data for normal dogs. Objectives: The purpose of this study was to obtain cerebrospinal fluid flow velocity data from the cerebral aqueduct and subarachnoid space at the foramen magnum in healthy beagle dogs. Methods: Six healthy beagle dogs were used in this experimental study. The dogs underwent phase-contrast and time-spatial labeling inversion pulse magnetic resonance imaging. Flow rate variations in the cerebrospinal fluid were observed using sagittal time-spatial labeling inversion pulse images. The pattern and velocity of cerebrospinal fluid flow were assessed using phase-contrast magnetic resonance imaging within the subarachnoid space at the foramen magnum level and the cerebral aqueduct. Results: In the ventral aspect of the subarachnoid space and cerebral aqueduct, the cerebrospinal fluid was characterized by a bidirectional flow throughout the cardiac cycle. The mean ± SD peak velocities through the ventral and dorsal aspects of the subarachnoid space and the cerebral aqueduct were 1.39 ± 0.13, 0.32 ± 0.12, and 0.76 ± 0.43 cm/s, respectively. Conclusions: Noninvasive visualization of cerebrospinal fluid flow movement with magnetic resonance imaging was feasible, and a reference dataset of cerebrospinal fluid flow peak velocities was obtained through the cervical subarachnoid space and cerebral aqueduct in healthy dogs.

Mechanistic insight into the progressive retinal atrophy disease in dogs via pathway-based genome-wide association analysis

  • Sheet, Sunirmal;Krishnamoorthy, Srikanth;Park, Woncheoul;Lim, Dajeong;Park, Jong-Eun;Ko, Minjeong;Choi, Bong-Hwan
    • Journal of Animal Science and Technology
    • /
    • v.62 no.6
    • /
    • pp.765-776
    • /
    • 2020
  • The retinal degenerative disease progressive retinal atrophy (PRA) is a major cause of vision impairment in the canine population. Canine PRA comprises a genetically heterogeneous group of retinal dystrophies with strong resemblance to human retinitis pigmentosa. Although much is known about the biology of PRA, the intricate connections among the genetic loci, genes, and pathways associated with this disease in dogs remain unknown. We therefore performed a genome-wide association study (GWAS) to identify susceptibility single nucleotide polymorphisms (SNPs) for PRA. The GWAS used a case-control association analysis on a PRA dataset of 129 dogs and 135,553 markers. Gene-set and pathway analyses were then conducted. A total of 1,114 markers associated with the PRA trait at p < 0.01 were extracted and mapped to 640 unique genes, from which 35 significantly (p < 0.05) enriched gene ontology (GO) terms and 5 Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways containing these genes were selected. In particular, the apoptotic process, homophilic cell adhesion, calcium ion binding, and endoplasmic reticulum GO terms, as well as pathways related to focal adhesion, cyclic guanosine monophosphate (cGMP)-protein kinase G signaling, and axon guidance, were most likely associated with PRA in dogs. These data could provide new insight for further research on the identification of potential genes and causative pathways for PRA in dogs.
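The per-marker case-control association step can be sketched as a contingency-table test on one hypothetical SNP; the allele counts below are invented for illustration, and the actual GWAS applies a test of this kind across all 135,553 markers before gene mapping and enrichment.

```python
import numpy as np
from scipy.stats import chi2_contingency

# hypothetical allele counts at a single SNP:
# rows = PRA cases / controls, columns = allele A / allele a
table = np.array([[90, 168],    # cases
                  [140, 130]])  # controls

chi2, p, dof, expected = chi2_contingency(table)
significant = p < 0.01          # the study's marker-selection threshold
```

Markers passing the threshold would then be mapped to genes and carried into GO and KEGG enrichment analysis.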

Text summarization of dialogue based on BERT

  • Nam, Wongyung;Lee, Jisoo;Jang, Beakcheol
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.8
    • /
    • pp.41-47
    • /
    • 2022
  • In this paper, we propose how to implement text summarization for colloquial data that is not clearly organized. For this study, the SAMSum dataset, which consists of colloquial data, was used, and the BERTSumExtAbs model proposed in previous work on automatic summarization was applied. More than 70% of the SAMSum dataset consists of conversations between two people, and the remaining 30% of conversations among three or more people. By applying the automatic text summarization model to colloquial data, a ROUGE-1 (R-1) score of 42.43 or higher was achieved. In addition, a higher score of 45.81 was obtained by fine-tuning the BERTSum model previously proposed for text summarization. This study demonstrates the performance of abstractive summarization on colloquial data, and we hope it serves as groundwork for enabling computers to understand natural human dialogue as it is and to solve a variety of downstream tasks.
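A minimal sketch of the ROUGE-1 metric used for evaluation. Published scores come from the full ROUGE toolkit (with its stemming and tokenization options), so this simplified unigram-overlap version is for intuition only; the example sentences are invented.

```python
from collections import Counter

def rouge1_f(candidate, reference):
    """Unigram-overlap ROUGE-1 F1 between a generated summary and a
    reference summary (simplified: whitespace tokens, no stemming)."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())   # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f("amanda baked cookies for jerry",
                 "amanda baked cookies and will bring jerry some")
```

Four of the five candidate unigrams match the reference, giving a precision of 0.8 and a recall of 0.5.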

Utilizing Mean Teacher Semi-Supervised Learning for Robust Pothole Image Classification

  • Inki Kim;Beomjun Kim;Jeonghwan Gwak
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.5
    • /
    • pp.17-28
    • /
    • 2023
  • Potholes on paved roads can have severe consequences for vehicles traveling at high speed and may even lead to fatalities. While manual detection of potholes by human labor is commonly used to prevent pothole-related accidents, it is economically and temporally inefficient because it exposes workers on the road and because potholes of certain categories are difficult to predict. Completely preventing potholes is therefore nearly impossible, and even preventing their formation is limited by the ground conditions closely tied to the road environment. Additionally, expert-guided labeling work is required for dataset construction. Thus, in this paper, we utilized the Mean Teacher technique, one of the semi-supervised learning-based knowledge distillation methods, to achieve robust pothole image classification performance even with limited labeled data. We demonstrated this using performance metrics and GradCAM: with semi-supervised learning, 15 pre-trained CNN models achieved an average accuracy of 90.41%, with performance differences ranging from a minimum of 2% to a maximum of 9% compared to supervised learning.
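The core of the Mean Teacher technique can be sketched in a few lines: the teacher's weights are an exponential moving average (EMA) of the student's, and a consistency loss pulls the student's predictions toward the teacher's on unlabeled images. The decay rate and toy arrays below are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def ema_update(teacher_w, student_w, alpha=0.99):
    """Mean Teacher update: after each training step the teacher's
    weights move toward the student's via an exponential moving average."""
    return alpha * teacher_w + (1 - alpha) * student_w

def consistency_loss(student_logits, teacher_logits):
    """MSE between the two models' softmax outputs on differently
    perturbed copies of the same unlabeled image."""
    s = np.exp(student_logits) / np.exp(student_logits).sum()
    t = np.exp(teacher_logits) / np.exp(teacher_logits).sum()
    return float(((s - t) ** 2).mean())

teacher = np.zeros(4)
student = np.ones(4)
teacher = ema_update(teacher, student)   # each weight → 0.01
loss = consistency_loss(np.array([2.0, 0.0]), np.array([2.0, 0.0]))
```

Because the teacher averages many student states, its predictions are smoother, and the consistency term lets unlabeled pothole images shape the student.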

Object detection and tracking using a high-performance artificial intelligence-based 3D depth camera: towards early detection of African swine fever

  • Ryu, Harry Wooseuk;Tai, Joo Ho
    • Journal of Veterinary Science
    • /
    • v.23 no.1
    • /
    • pp.17.1-17.10
    • /
    • 2022
  • Background: Inspection of livestock farms using surveillance cameras is emerging as a means of early detection of transboundary animal diseases such as African swine fever (ASF). Object tracking, a developing technology derived from object detection, aims at the consistent identification of individual objects on farms. Objectives: This study was conducted as a preliminary investigation toward practical application on livestock farms. Using a high-performance artificial intelligence (AI)-based 3D depth camera, the aim was to establish a pathway for utilizing AI models to perform advanced object tracking. Methods: Multiple crossovers by two humans were simulated to investigate the potential of object tracking; consistent identification after crossing over was taken as evidence of successful tracking. Two AI models, a fast model and an accurate model, were tested and compared with regard to their 3D object tracking performance. Finally, a recording of a pig pen was also processed with the aforementioned AI models to test the possibility of 3D object detection. Results: Both AI models successfully provided a 3D bounding box, an identification number, and the distance from the camera for each individual human. The accurate detection model showed stronger evidence of 3D object tracking than the fast detection model and showed potential for application to pigs as livestock. Conclusions: Preparing a custom dataset to train AI models on an appropriate farm is required for proper 3D object detection and for object tracking of pigs at an ideal level. This would allow farms to smoothly transition from traditional methods to ASF-preventing precision livestock farming.
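Consistent identification after a crossover can be illustrated with a greedy nearest-centroid association over 3D detections. This toy sketch stands in for the camera's AI tracker; the coordinates and distance threshold are invented.

```python
import numpy as np

def associate(tracks, detections, max_dist=0.5):
    """Greedily match current-frame 3D detections to existing track IDs
    by nearest centroid, so an ID survives from frame to frame.

    tracks: {id: last-known (x, y, z) centroid}
    detections: list of (x, y, z) centroids from the current frame
    Returns {detection_index: track_id}.
    """
    assignments = {}
    free = dict(tracks)                       # tracks not yet matched
    for i, det in enumerate(map(np.asarray, detections)):
        if not free:
            break
        tid = min(free, key=lambda t: np.linalg.norm(free[t] - det))
        if np.linalg.norm(free[tid] - det) <= max_dist:
            assignments[i] = tid
            del free[tid]
    return assignments

# two people just after crossing paths: each detection stays closest
# to its own track's last centroid, so the IDs are preserved
tracks = {1: np.array([0.0, 0.0, 2.0]), 2: np.array([1.0, 0.0, 3.0])}
dets = [(0.95, 0.0, 3.05), (0.05, 0.0, 2.0)]
ids = associate(tracks, dets)                 # → {0: 2, 1: 1}
```

Real trackers add motion prediction and appearance cues, which is where the accurate detection model's richer output helps.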

Effects of the Selection of Deformation-related Variables on Accuracy in Relative Position Estimation via Time-varying Segment-to-Joint Vectors (시변 분절-관절 벡터를 통한 상대위치 추정시 변형관련 변수의 선정이 추정 정확도에 미치는 영향)

  • Lee, Chang June;Lee, Jung Keun
    • Journal of Sensor Science and Technology
    • /
    • v.31 no.3
    • /
    • pp.156-162
    • /
    • 2022
  • This study estimates the relative position between body segments using segment orientation and segment-to-joint center (S2J) vectors. In many wearable motion tracking technologies, the S2J vector is treated as a constant based on the assumption that rigid body segments are connected by a mechanical ball joint. However, human body segments are deformable non-rigid bodies, and they are connected via ligaments and tendons; therefore, the S2J vector should be determined as a time-varying vector, instead of a constant. In this regard, our previous study (2021) proposed a method for determining the time-varying S2J vector from the learning dataset using a regression method. Because that method uses a deformation-related variable to consider the deformation of S2J vectors, the optimal variable must be determined in terms of estimation accuracy by motion and segment. In this study, we investigated the effects of deformation-related variables on the estimation accuracy of the relative position. The experimental results showed that the estimation accuracy was the highest when the flexion and adduction angles of the shoulder and the flexion angles of the shoulder and elbow were selected as deformation-related variables for the sternum-to-upper arm and upper arm-to-forearm, respectively. Furthermore, the case with multiple deformation-related variables was superior by an average of 2.19 mm compared to the case with a single variable.
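The rigid-body baseline that the paper generalizes can be sketched directly: each segment's orientation matrix maps its local S2J vector into the world frame, and their difference through the shared joint center gives the relative position. The orientation matrices and vector values below are illustrative, not measured data.

```python
import numpy as np

def relative_position(R1, R2, v1, v2):
    """Relative position between two segment origins linked by a joint.

    The joint center seen from segment 1 is p1 + R1 @ v1 and from
    segment 2 is p2 + R2 @ v2; equating them gives
        p2 - p1 = R1 @ v1 - R2 @ v2.
    v1, v2 are S2J vectors in each segment's local frame (constants in
    the rigid-body model; the paper instead regresses them from
    deformation-related variables such as shoulder flexion/adduction).
    """
    return R1 @ v1 - R2 @ v2

R1 = np.eye(3)                      # both segments in a neutral pose
R2 = np.eye(3)
v1 = np.array([0.0, 0.0, -0.15])    # e.g. upper-arm origin to elbow (m)
v2 = np.array([0.0, 0.0, 0.12])     # e.g. forearm origin to elbow (m)
r = relative_position(R1, R2, v1, v2)   # → [0., 0., -0.27]
```

Replacing the constant v1, v2 with angle-dependent regression outputs is exactly the time-varying S2J approach whose variable selection the study evaluates.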

Deep learning-based clothing attribute classification using fashion image data (패션 이미지 데이터를 활용한 딥러닝 기반의 의류속성 분류)

  • Hye Seon Jeong;So Young Lee;Choong Kwon Lee
    • Smart Media Journal
    • /
    • v.13 no.4
    • /
    • pp.57-64
    • /
    • 2024
  • Attributes such as material, color, and fit in fashion images are important factors for consumers purchasing clothing. However, the process of classifying clothing attributes requires a large amount of manpower and is inconsistent because it relies on the subjective judgment of human operators. To alleviate this problem, research is needed that utilizes artificial intelligence to classify clothing attributes in fashion images. Previous studies have mainly classified clothing attributes for either tops or bottoms, so the attributes of both cannot be identified simultaneously in full-body fashion images. In this study, we propose a deep learning model that distinguishes between tops and bottoms in fashion images and classifies the category of each item and the attributes of the clothing material. The deep learning models ResNet and EfficientNet were used, and the training dataset comprised 1,002,718 fashion images and 125 labels covering clothing categories and material properties. Based on the weighted F1-score, ResNet achieved 0.800 and EfficientNet 0.781, with ResNet showing the better performance.
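The weighted F1-score used to compare the two models can be sketched as a support-weighted average of per-label F1 values; the toy labels below are invented for illustration.

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Support-weighted F1: per-label F1 scores averaged with weights
    proportional to each label's frequency in y_true."""
    support = Counter(y_true)
    total = 0.0
    for c in support:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        total += support[c] * f1
    return total / len(y_true)

score = weighted_f1(["top", "top", "bottom", "bottom"],
                    ["top", "bottom", "bottom", "bottom"])
```

Weighting by support keeps frequent clothing categories from being drowned out by rare ones, which matters with 125 uneven labels.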