• Title/Summary/Keyword: parameter research


Relationship between Stratum Corneum Carbonylated Protein (SCCP) and Skin Biophysical Parameters (Stratum Corneum Carbonylated Protein (SCCP)의 피부 생물학적 파라미터와의 관계)

  • Lee, Yongjik;Nam, Gaewon
    • Journal of the Society of Cosmetic Scientists of Korea
    • /
    • v.45 no.2
    • /
    • pp.131-138
    • /
    • 2019
  • Carbonylated proteins (CPs) are synthesized by the chemical reaction of basic amino acid residues in proteins with aldehyde compounds yielded by lipid peroxidation. CPs are excited by a range of light from UVA to blue light, resulting in the generation of superoxide anion radicals ($^{\cdot}O_2{^-}$) through a photosensitizing reaction. CPs then induce new protein carbonylation in the stratum corneum through ROS generation. Furthermore, the superoxide anion radicals produce CPs in the stratum corneum (SC) through lipid peroxidation and finally affect skin conditions, including color and moisture functions. The purpose of this study was to investigate the relationship between the production of stratum corneum carbonylated protein (SCCP) and skin elasticity. 46 healthy Korean females aged 30 ~ 50 years participated in this study for 8 weeks. The skin test was conducted in two groups: the placebo group (N = 23) used a cream that did not contain active ingredients, and the other group (N = 23) used a cream containing elasticity-improving ingredients. Test areas were the crow's feet and the cheek. Various non-invasive methods were carried out to measure biophysical parameters on human skin: dermis density and skin wrinkles were measured using a DUB scanner and Primos premium, respectively, and skin elasticity was measured using a dermal torque meter (DTM310) and a ballistometer (BLS780). SCCP was assessed by a simple, non-invasive skin surface biopsy on the cheek of each subject, and the amount of SCCP was determined using image analysis. All measurements were taken at weeks 0, 4, and 8. Results revealed that the amount of CP in the SC was reduced when the skin wrinkle and skin elasticity related parameters improved. This indicates that the correlation between elasticity improvement and the amount of CP can be used as an anti-aging indicator and is applicable to skin clinical tests for the measurement of skin aging in the future.
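The study's conclusion rests on a correlation between SCCP amounts (from image analysis) and elasticity parameters. A minimal sketch of how such a correlation could be computed is shown below; the numbers are invented for illustration and are not the study's data.

```python
import numpy as np

# Hypothetical example: SCCP image-analysis scores and skin elasticity
# readings for a small group of subjects (invented values, not study data).
sccp_amount = np.array([0.82, 0.75, 0.69, 0.88, 0.71, 0.64])
elasticity = np.array([0.41, 0.48, 0.55, 0.37, 0.52, 0.60])

# Pearson correlation quantifies how SCCP reduction tracks elasticity gain;
# a strong negative r would support using SCCP as an anti-aging indicator.
r = np.corrcoef(sccp_amount, elasticity)[0, 1]
print(round(r, 3))
```

In practice, the sign and strength of `r` across the pre/post measurements at weeks 0, 4, and 8 would be what supports SCCP as a clinical indicator.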

Evaluation of Robustness of Deep Learning-Based Object Detection Models for Invertebrate Grazers Detection and Monitoring (조식동물 탐지 및 모니터링을 위한 딥러닝 기반 객체 탐지 모델의 강인성 평가)

  • Suho Bak;Heung-Min Kim;Tak-Young Kim;Jae-Young Lim;Seon Woong Jang
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.3
    • /
    • pp.297-309
    • /
    • 2023
  • The degradation of coastal ecosystems and fishery environments is accelerating due to the recent proliferation of invertebrate grazers. To effectively monitor this phenomenon and implement preventive measures, the adoption of remote sensing-based monitoring technology for extensive maritime areas is imperative. In this study, we compared and analyzed the robustness of deep learning-based object detection models for detecting and monitoring invertebrate grazers from underwater videos. We constructed an image dataset targeting seven representative species of invertebrate grazers in the coastal waters of South Korea and trained deep learning-based object detection models, You Only Look Once (YOLO)v7 and YOLOv8, on this dataset. We evaluated the detection performance and speed of a total of six YOLO models (YOLOv7, YOLOv7x, YOLOv8s, YOLOv8m, YOLOv8l, YOLOv8x) and conducted robustness evaluations considering various image distortions that may occur during underwater filming. The evaluation results showed that the YOLOv8 models achieved higher detection speeds (approximately 71 to 141 frames per second [FPS]) relative to their number of parameters. In terms of detection performance, the YOLOv8 models (mean average precision [mAP] 0.848 to 0.882) exhibited better performance than the YOLOv7 models (mAP 0.847 to 0.850). Regarding model robustness, the YOLOv7 models were more robust to shape distortions, while the YOLOv8 models were relatively more robust to color distortions. Therefore, considering that shape distortions occur less frequently in underwater video recordings while color distortions are more frequent in coastal areas, utilizing YOLOv8 models is a valid choice for invertebrate grazer detection and monitoring in coastal waters.
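The robustness evaluation contrasts two distortion families: color distortion (to which YOLOv8 was relatively more robust) and shape/geometric distortion (favored by YOLOv7). A minimal numpy sketch of what such distortions could look like is below; this is an illustration, not the authors' evaluation code, and the specific functions are invented.

```python
import numpy as np

def color_distort(img, shift=30):
    """Shift each RGB channel by a fixed offset, clipping to [0, 255]."""
    offsets = np.array([shift, -shift, shift // 2])
    return np.clip(img.astype(int) + offsets, 0, 255).astype(np.uint8)

def shape_distort(img, factor=2):
    """Crudely simulate geometric distortion by down- then up-sampling."""
    down = img[::factor, ::factor]  # lose spatial detail
    up = np.repeat(np.repeat(down, factor, 0), factor, 1)
    return up[:img.shape[0], :img.shape[1]]

img = np.full((8, 8, 3), 128, dtype=np.uint8)  # toy uniform gray image
shifted = color_distort(img)
warped = shape_distort(img)
print(shifted[0, 0].tolist(), warped.shape)
```

A robustness study would apply such perturbations at increasing severity to the test set and track how each model's mAP degrades.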

Animal Infectious Diseases Prevention through Big Data and Deep Learning (빅데이터와 딥러닝을 활용한 동물 감염병 확산 차단)

  • Kim, Sung Hyun;Choi, Joon Ki;Kim, Jae Seok;Jang, Ah Reum;Lee, Jae Ho;Cha, Kyung Jin;Lee, Sang Won
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.137-154
    • /
    • 2018
  • Animal infectious diseases, such as avian influenza and foot-and-mouth disease, occur almost every year and cause huge economic and social damage to the country. In order to prevent this, the quarantine authorities have made various human and material efforts, but the infectious diseases have continued to occur. Avian influenza was first identified in 1878 and rose to a national issue due to its high lethality. Foot-and-mouth disease is considered the most critical animal infectious disease internationally. In nations where the disease has not spread, foot-and-mouth disease is recognized as an economic or political disease because it restricts international trade by complicating the import of processed and non-processed livestock, and because quarantine is costly. In a society where the whole nation is connected as a single zone of life, there is no way to fully prevent the spread of infectious disease. Hence, there is a need to be aware of the occurrence of the disease and to take action before it spreads. Upon the confirmation of either a human or animal infectious disease, an epidemiological investigation of the diagnosed cases is carried out, and measures are taken to prevent the spread of the disease according to the investigation results. The foundation of an epidemiological investigation is figuring out where one has been and whom he or she has met. From a data perspective, this can be defined as an action taken to predict the cause of a disease outbreak, the outbreak location, and future infections by collecting and analyzing geographic data and relation data. Recently, attempts have been made to develop infectious disease prediction models using Big Data and deep learning technology, but there is little active research on model building or case reports.
KT and the Ministry of Science and ICT have been carrying out big data projects since 2014, as part of national R&D projects, to analyze and predict the routes of livestock-related vehicles. To prevent animal infectious diseases, the researchers first developed a prediction model based on regression analysis using vehicle movement data. After that, a more accurate prediction model was constructed using machine learning algorithms such as Logistic Regression, Lasso, Support Vector Machine, and Random Forest. In particular, the prediction model for 2017 added the risk of diffusion to facilities, and the performance of the model was improved by tuning the hyper-parameters of the modeling in various ways. The Confusion Matrix and ROC Curve show that the model constructed in 2017 is superior to the earlier machine learning model. The difference between the 2016 model and the 2017 model is that the later model additionally used visiting information on facilities such as feed factories and slaughterhouses, as well as information on poultry, which had been limited to chicken and duck but was expanded to goose and quail. In addition, in 2017 an explanation of the results was added to help the authorities make decisions and to establish a basis for persuading stakeholders. This study reports an animal infectious disease prevention system constructed on the basis of hazardous vehicle movement, farm, and environment Big Data. The significance of this study is that it describes the evolution of a prediction model using Big Data in the field; the model is expected to be more complete if the form of the viruses is taken into consideration. This will contribute to data utilization and analysis model development in related fields. In addition, we expect that the system constructed in this study will provide more proactive and effective prevention.
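The modeling progression described above (regression, then classifiers evaluated via confusion matrix and ROC) can be illustrated with a small self-contained sketch. This is a toy logistic-regression classifier on invented vehicle-visit features, not KT's model; the feature names and labels are hypothetical.

```python
import numpy as np

# Hypothetical sketch: outbreak-risk classification from toy vehicle-visit
# features, evaluated with a confusion matrix as the abstract describes.
rng = np.random.default_rng(0)
n = 200
# Invented features: [visits to feed factories, visits to slaughterhouses]
X = rng.normal(size=(n, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)  # synthetic infection label

# Plain gradient descent on the logistic loss
w = np.zeros(2)
b = 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.1 * X.T @ (p - y) / n
    b -= 0.1 * np.mean(p - y)

pred = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(float)
tp = int(np.sum((pred == 1) & (y == 1)))
tn = int(np.sum((pred == 0) & (y == 0)))
fp = int(np.sum((pred == 1) & (y == 0)))
fn = int(np.sum((pred == 0) & (y == 1)))
accuracy = (tp + tn) / n
print(tp, tn, fp, fn, accuracy)
```

In the reported system, such a linear baseline was superseded by Lasso, SVM, and Random Forest models with tuned hyper-parameters, compared on the same confusion-matrix and ROC criteria.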

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.163-177
    • /
    • 2019
  • As smartphones become widely used, human activity recognition (HAR) tasks for recognizing the personal activities of smartphone users with multimodal data have been actively studied. The research area is expanding from the recognition of an individual user's simple body movements to the recognition of low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data, whereas physical sensors, including accelerometer, magnetic field, and gyroscope sensors, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a deep learning-based method for detecting accompanying status using only multimodal physical sensor data, such as accelerometer, magnetic field, and gyroscope data, is proposed. The accompanying status was defined as a redefined part of user interaction behavior, covering whether the user is accompanying an acquaintance at a close distance and whether the user is actively communicating with the acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompaniment and conversation is proposed. First, a data preprocessing method is introduced, consisting of time synchronization of multimodal data from different physical sensors, data normalization, and sequence data generation. We applied nearest interpolation to synchronize the timestamps of data collected from different sensors.
Normalization was performed for each x, y, z axis value of the sensor data, and the sequence data was generated with the sliding window method. The sequence data then became the input to the CNN, where feature maps representing local dependencies of the original sequence are extracted. The CNN consisted of 3 convolutional layers and had no pooling layer, in order to maintain the temporal information of the sequence data. Next, the LSTM recurrent networks received the feature maps, learned long-term dependencies from them, and extracted features. The LSTM recurrent networks consisted of two layers, each with 128 cells. Finally, the extracted features were used for classification by a softmax classifier. The loss function of the model was the cross-entropy function, and the weights of the model were randomly initialized from a normal distribution with a mean of 0 and a standard deviation of 0.1. The model was trained using the adaptive moment estimation (ADAM) optimization algorithm, and the mini-batch size was set to 128. We applied dropout to the input values of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001, and it decreased exponentially by 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data. We collected smartphone data from a total of 18 subjects. Using the data, the model classified accompaniment and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and accuracy of the model were higher than those of the majority vote classifier, support vector machine, and deep recurrent neural network. In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that enable the transfer of models tailored to the training data to evaluation data that follows a different distribution. It is expected that a model capable of exhibiting robust recognition performance against changes in data not considered at the training stage will be obtained.
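The preprocessing pipeline described above (nearest-interpolation time synchronization, per-axis normalization, sliding-window sequence generation) can be sketched in a few lines of numpy. The shapes and timestamps below are assumed for illustration; this is not the authors' code.

```python
import numpy as np

def nearest_align(timestamps, values, target_times):
    """Resample a sensor stream onto target timestamps by nearest interpolation."""
    idx = np.abs(timestamps[None, :] - target_times[:, None]).argmin(axis=1)
    return values[idx]

def normalize_axes(data):
    """Zero-mean, unit-variance normalization applied per x/y/z axis (column)."""
    return (data - data.mean(axis=0)) / data.std(axis=0)

def sliding_windows(data, window, stride):
    """Cut a (T, channels) stream into overlapping (window, channels) sequences."""
    starts = range(0, len(data) - window + 1, stride)
    return np.stack([data[s:s + window] for s in starts])

t_acc = np.array([0.0, 0.11, 0.19, 0.31, 0.42, 0.49])  # accelerometer clock
acc = np.arange(18, dtype=float).reshape(6, 3)         # toy x/y/z samples
target = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])      # common time grid

aligned = nearest_align(t_acc, acc, target)
windows = sliding_windows(normalize_axes(aligned), window=4, stride=1)
print(windows.shape)  # (num_sequences, window_length, axes)
```

Each window then serves as one input sequence to the CNN front-end, whose feature maps feed the two-layer LSTM.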

Effects of Motion Correction for Dynamic $[^{11}C]Raclopride$ Brain PET Data on the Evaluation of Endogenous Dopamine Release in Striatum (동적 $[^{11}C]Raclopride$ 뇌 PET의 움직임 보정이 선조체 내인성 도파민 유리 정량화에 미치는 영향)

  • Lee, Jae-Sung;Kim, Yu-Kyeong;Cho, Sang-Soo;Choe, Yearn-Seong;Kang, Eun-Joo;Lee, Dong-Soo;Chung, June-Key;Lee, Myung-Chul;Kim, Sang-Eun
    • The Korean Journal of Nuclear Medicine
    • /
    • v.39 no.6
    • /
    • pp.413-420
    • /
    • 2005
  • Purpose: Neuroreceptor PET studies require 60-120 minutes to complete, and head motion of the subject during the PET scan increases the uncertainty in the measured activity. In this study, we investigated the effects of data-driven head motion correction on the evaluation of endogenous dopamine release (DAR) in the striatum during a motor task, which might have caused significant head motion artifacts. Materials and Methods: $[^{11}C]raclopride$ PET scans of 4 normal volunteers acquired with a bolus plus constant infusion protocol were retrospectively analyzed. Following a 50 min resting period, the participants played a video game with a monetary reward for 40 min. Dynamic frames acquired during the equilibrium condition (pre-task: 30-50 min, task: 70-90 min, post-task: 110-120 min) were realigned to the first frame of the pre-task condition. Intra-condition registrations between the frames were performed, and an average image for each condition was created and registered to the pre-task image (inter-condition registration). The pre-task PET image was then co-registered to each participant's own MRI, and the transformation parameters were reapplied to the other images. Volumes of interest (VOI) for the dorsal putamen (PU) and caudate (CA), ventral striatum (VS), and cerebellum were defined on the MRI. Binding potential (BP) was measured, and DAR was calculated as the percent change of BP during and after the task. SPM analyses on the BP parametric images were also performed to explore regional differences in the effects of head motion on BP and DAR estimation. Results: Changes in the position and orientation of the striatum during the PET scans were observed before head motion correction. BP values in the pre-task condition were not changed significantly after the intra-condition registration. However, the BP values during and after the task, and the DAR, were significantly changed after the correction. SPM analysis also showed that the extent and significance of the BP differences were significantly changed by the head motion correction, and that such changes were prominent in the periphery of the striatum. Conclusion: The results suggest that misalignment of the MRI-based VOI and the striatum in the PET images, and incorrect DAR estimation due to head motion during the PET activation study, were significant but could be remedied by data-driven head motion correction.
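The DAR quantity described above is the percent change of binding potential (BP) relative to the pre-task baseline. A toy illustration of the arithmetic, with invented BP values (not the study's measurements), is:

```python
# Hypothetical striatal BP values at the three equilibrium conditions
bp_pre, bp_task, bp_post = 2.40, 2.04, 2.28

def dar(bp_baseline, bp_condition):
    """Percent reduction of BP relative to baseline; larger = more dopamine release."""
    return 100.0 * (bp_baseline - bp_condition) / bp_baseline

dar_task = dar(bp_pre, bp_task)  # release during the video-game task
dar_post = dar(bp_pre, bp_post)  # residual release after the task
print(round(dar_task, 1), round(dar_post, 1))
```

Because DAR is a ratio of BP values, any motion-induced bias in the task or post-task BP propagates directly into the DAR estimate, which is why the head motion correction changed the DAR significantly.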