• Title/Summary/Keyword: Modified image

Search results: 1,248

Digitization of Adjectives that Describe Facial Complexion to Evaluate Various Expressions of Skin Tone in Korean (피부색을 표현하는 형용사들의 수치화를 통한 안색 평가법 연구)

  • Lee, Sun Hwa;Lee, Jung Ah;Park, Sun Mi;Kim, Younghee;Jang, Yoon Jung;Kim, Bora;Kim, Nam Soo;Moon, Tae Kee
    • Journal of the Society of Cosmetic Scientists of Korea / v.43 no.4 / pp.349-355 / 2017
  • Skin tone is one of the key determinants of facial attractiveness. Most female customers are interested in choosing skin colors and improving their skin tone, and these needs have contributed to the expansion of cosmetic products in the market. Recently, cosmetic customers who want bright skin have also become interested in healthy- and lively-looking skin. However, there has been no method to evaluate skin tone using complexion-describing adjectives (CDAs). This study was therefore conducted to find ways to objectify and digitize CDAs. For the standard images, selected from our database, the quasi $L^*$ was 65 for the dark-skin image and 74 for the bright-skin image. Thirty panelists adjusted the colors of both images to match the following seven CDAs: pale, clear, radiant, lively, healthy, rosy, and dull. The quasi $L^*$, $a^*$, and $b^*$ values were converted from the RGB values of the manipulated images. The differences between the quasi $L^*$, $a^*$, and $b^*$ values of the standard images and the manipulated images reflecting each CDA were statistically significant (p < 0.05). However, there were no statistically significant differences between the quasi $L^*$ values of the dark- and bright-skin images modified according to each CDA, nor between the quasi $a^*$ values of the dark and bright skin for the pale and clear CDAs. From the statistical analysis, the CDAs formed three groups: (i) pale-clear-radiant, (ii) lively-healthy-rosy, and (iii) dull. This indicates that people share a similar perception of the CDAs. Based on these results, we established a new standard method for a sensibility evaluation that is otherwise difficult to carry out scientifically or objectively.
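
The study converts RGB values of the manipulated images into quasi $L^*$, $a^*$, and $b^*$. The exact "quasi" conversion is not specified in the abstract; as a rough illustration only, a standard sRGB-to-CIELAB conversion (D65 white point) can be sketched as follows. All constants are the usual sRGB/CIE values, not taken from the paper:

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert an sRGB triple (0-255) to CIE L*a*b* under a D65 white point."""
    # Linearize the sRGB components
    c = np.asarray(rgb, dtype=float) / 255.0
    c = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    # Linear sRGB -> CIE XYZ (D65)
    M = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = M @ c
    # Normalize by the D65 reference white
    xyz /= np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return L, a, b
```

With this standard mapping, pure white maps to $L^*\approx100$ and pure black to $L^*=0$, so a quasi $L^*$ of 65 vs. 74 corresponds to the dark vs. bright standard images.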

Development of Program for Renal Function Study with Quantification Analysis of Nuclear Medicine Image (핵의학 영상의 정량적 분석을 통한 신장기능 평가 프로그램 개발)

  • Song, Ju-Young;Lee, Hyoung-Koo;Suh, Tae-Suk;Choe, Bo-Young;Shinn, Kyung-Sub;Chung, Yong-An;Kim, Sung-Hoon;Chung, Soo-Kyo
    • The Korean Journal of Nuclear Medicine / v.35 no.2 / pp.89-99 / 2001
  • Purpose: In this study, we developed a new software tool for the analysis of renal scintigraphy that can be easily modified by users who need to study new clinical applications, and we evaluated the appropriateness of its results. Materials and Methods: The analysis tool was programmed in IDL 5.2 and designed to run on a personal computer under Windows. To test the developed tool and assess the appropriateness of the calculated glomerular filtration rate (GFR), $^{99m}Tc$-DTPA was administered to 10 healthy adults. To assess the appropriateness of the calculated mean transit time (MTT), $^{99m}Tc$-DTPA and $^{99m}Tc$-MAG3 were administered to 11 healthy adults, and 22 kidneys were analyzed. All images were acquired with an ORBITOR, a Siemens gamma camera. Results: With the developed tool, we could display dynamic renal images and the time-activity curve (TAC) for each ROI and calculate clinical parameters of renal function. The results calculated by the developed tool did not differ statistically from those obtained with the Siemens application program (Tmax: p=0.68, relative renal function: p=1.0, GFR: p=0.25), and the developed program proved reasonable. The MTT calculation tool was validated by evaluating the influence of hydration status on MTT. Conclusion: We obtained reasonable clinical parameters for the evaluation of renal function with the software tool developed in this study. The developed tool may prove more practical than conventional commercial programs.
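
Parameters such as Tmax and relative renal function are derived from the ROI time-activity curves. The abstract does not give the program's algorithms, so the following is only a minimal sketch on entirely synthetic TACs, with an early uptake window (1-2.5 min, a common convention) assumed for the relative-function split:

```python
import numpy as np

# Hypothetical time-activity curves (counts per frame) for left/right kidney ROIs
t = np.arange(0, 20, 0.5)           # acquisition times in minutes
left = 1000 * t * np.exp(-t / 4)    # synthetic uptake-then-washout curve
right = 800 * t * np.exp(-t / 5)

# Tmax: time at which the TAC peaks
tmax_left = t[np.argmax(left)]
tmax_right = t[np.argmax(right)]

# Relative renal function: each kidney's integrated uptake over the total,
# evaluated in an early uptake window (here 1-2.5 min)
win = (t >= 1.0) & (t <= 2.5)
lu, ru = left[win].sum(), right[win].sum()
rrf_left = 100 * lu / (lu + ru)     # percent contribution of the left kidney
```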


Increase of Tc-99m RBC SPECT Sensitivity for Small Liver Hemangioma using Ordered Subset Expectation Maximization Technique (Tc-99m RBC SPECT에서 Ordered Subset Expectation Maximization 기법을 이용한 작은 간 혈관종 진단 예민도의 향상)

  • Jeon, Tae-Joo;Bong, Jung-Kyun;Kim, Hee-Joung;Kim, Myung-Jin;Lee, Jong-Doo
    • The Korean Journal of Nuclear Medicine / v.36 no.6 / pp.344-356 / 2002
  • Purpose: RBC blood pool SPECT has been used to diagnose focal liver lesions such as hemangioma owing to its high specificity. However, low spatial resolution is a major limitation of this modality. Recently, ordered subset expectation maximization (OSEM) has been introduced to obtain tomographic images for clinical application. We compared this modified iterative reconstruction method, OSEM, with conventional filtered back projection (FBP) in the imaging of liver hemangioma. Materials and Methods: Sixty-four projections were acquired with a dual-head gamma camera for 28 lesions in 24 patients with cavernous hemangioma of the liver, and the raw data were transferred to a Linux-based personal computer. After converting the header files to Interfile format, OSEM was performed under various combinations of subsets (1, 2, 4, 8, 16, and 32) and iteration numbers (1, 2, 4, 8, and 16) to find the best setting for liver imaging, which in our investigation was 4 iterations with 16 subsets. All images were then reconstructed by both FBP and OSEM, and three experts reviewed them without any clinical information. Results: On blind review of the 28 lesions, OSEM images showed image quality at least equal or superior to FBP in nearly all cases. Although there was no significant difference in the detection of large lesions over 3 cm, five lesions 1.5 to 3 cm in diameter were detected by OSEM only. However, both techniques failed to depict four small lesions under 1.5 cm. Conclusion: OSEM provided better contrast and definition in the depiction of liver hemangioma as well as higher sensitivity in the detection of small lesions. Furthermore, this reconstruction method does not require a high-performance computer system or long reconstruction times; therefore, OSEM appears to be a good method for RBC blood pool SPECT in the diagnosis of liver hemangioma.
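
OSEM is the standard MLEM update applied to one subset of projections at a time, which is why reconstruction stays fast even without a high-performance computer. The toy, matrix-based sketch below illustrates the update rule only; the system matrix and data are synthetic, not the clinical implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system matrix A (detector bins x image pixels) and a known true image
A = rng.uniform(0.1, 1.0, size=(32, 16))
x_true = rng.uniform(1.0, 5.0, size=16)
y = A @ x_true                       # noiseless projection data

def osem(y, A, n_subsets=4, n_iter=4):
    """OSEM: the MLEM update restricted to one subset of projections at a time."""
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    x = np.ones(A.shape[1])          # uniform initial estimate
    for _ in range(n_iter):
        for s in subsets:
            As = A[s]
            ratio = y[s] / (As @ x)                # measured / estimated projections
            x *= (As.T @ ratio) / As.sum(axis=0)   # multiplicative MLEM update
    return x

x_hat = osem(y, A)
```

Each pass over all subsets costs about as much as one full MLEM iteration but applies `n_subsets` updates, which is the source of OSEM's speed-up.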

Time-Lapse Crosswell Seismic Study to Evaluate the Underground Cavity Filling (지하공동 충전효과 평가를 위한 시차 공대공 탄성파 토모그래피 연구)

  • Lee, Doo-Sung
    • Geophysics and Geophysical Exploration / v.1 no.1 / pp.25-30 / 1998
  • Time-lapse crosswell seismic data, recorded before and after cavity filling, showed that the filling increased the velocity in a known cavity zone at an old mine site in the Inchon area. The seismic response depicted on the tomogram, in conjunction with geologic data from drillings, implies that the cavity may be either small or filled with debris. In this study, I attempted to evaluate the filling effect by analyzing velocities measured from the time-lapse tomograms. The data, acquired with a downhole airgun and a 24-channel hydrophone system, revealed a measurable amount of source statics, and I present a methodology to estimate them: 1) examine the firing time of each source and remove the effect of irregular firing times, and 2) estimate the residual statics caused by inaccurate source positioning. The proposed multi-step inversion can reduce high-frequency numerical noise and enhance the resolution in the zone of interest, and with different starting models it successfully shows the subtle velocity changes in the small cavity zone. The inversion procedure is: 1) conduct an inversion using regular-sized cells and generate an image of the gross velocity structure by applying a 2-D median filter to the resulting tomogram, and 2) construct the starting velocity model by modifying the final velocity model from the first phase so that the zone of interest consists of small-sized grids. The final velocity model from the baseline survey was used as the starting velocity model for the monitor inversion. Since a velocity change was expected only in the cavity zone, the number of model parameters in the monitor inversion could be reduced significantly by fixing the model outside the cavity zone equal to the baseline model.
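
Step 1 of the inversion procedure applies a 2-D median filter to the phase-one tomogram to extract the gross velocity structure for the phase-two starting model. A naive illustration with a hypothetical tomogram (the grid size and velocity values below are illustrative, not from the paper):

```python
import numpy as np

def median_filter2d(img, size=3):
    """Naive 2-D median filter; edges are handled with reflection padding."""
    pad = size // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

# Hypothetical tomogram (velocities in m/s) with one high-frequency spike
tomo = np.full((8, 8), 2500.0)
tomo[4, 4] = 4000.0                    # isolated numerical-noise artifact
start_model = median_filter2d(tomo)    # smooth gross structure for the phase-2 start
```

The median filter suppresses isolated spikes (numerical noise) while preserving edges of coherent velocity structure, which is why it is preferred over a mean filter for building the starting model.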


Derivation of Green Coverage Ratio Based on Deep Learning Using MAV and UAV Aerial Images (유·무인 항공영상을 이용한 심층학습 기반 녹피율 산정)

  • Han, Seungyeon;Lee, Impyeong
    • Korean Journal of Remote Sensing / v.37 no.6_1 / pp.1757-1766 / 2021
  • The green coverage ratio is the ratio of green coverage area to land area, and it is used as a practical urban greening index. It is conventionally calculated from the land cover map, but the map's low spatial resolution and inconsistent production cycle make it difficult to calculate the correct green coverage area and analyze green coverage precisely. This study therefore proposes a new method to calculate the green coverage area using aerial images and deep neural networks. Aerial images enable precise analysis through high resolution and relatively constant acquisition cycles, and deep learning can automatically detect the green coverage area in them. Local governments acquire manned aerial images for various purposes every year, and these can be used to calculate the green coverage ratio quickly; however, accurate analysis may be difficult because details such as acquisition date, resolution, and sensors cannot be selected or modified. This limitation can be supplemented by unmanned aerial vehicles, which can mount various sensors and acquire high-resolution images thanks to low-altitude flight. Accordingly, the green coverage ratio was calculated from both types of aerial image and the proposed method was experimentally verified: the ratio could be calculated with high accuracy for all green types. However, the ratio calculated from manned aerial images had limitations in complex environments. The unmanned aerial images used to compensate for this achieved a high-accuracy green coverage ratio even in complex environments, and additional band images enabled more precise detection of green areas. In the future, the green coverage ratio is expected to be calculated effectively by using newly acquired unmanned aerial imagery to supplement existing manned aerial imagery.
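
Once a segmentation network has labeled the green pixels, the green coverage ratio itself reduces to a pixel count. A minimal sketch, assuming an orthorectified image in which every pixel covers an equal ground area (the mask below is hypothetical, standing in for a deep-learning segmentation output):

```python
import numpy as np

def green_coverage_ratio(mask):
    """Green coverage ratio (%) from a binary segmentation mask.

    mask: 2-D array where 1 marks pixels classified as green cover and
    0 marks everything else. Assumes an orthorectified image so that
    every pixel corresponds to the same ground area.
    """
    return 100.0 * mask.sum() / mask.size

# Hypothetical 100x100 mask with the top 30 rows detected as green
mask = np.zeros((100, 100), dtype=int)
mask[:30, :] = 1
ratio = green_coverage_ratio(mask)   # 30% of the pixels are green
```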

Anomaly Detection for User Action with Generative Adversarial Networks (적대적 생성 모델을 활용한 사용자 행위 이상 탐지 방법)

  • Choi, Nam woong;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.43-62 / 2019
  • At one time, the anomaly detection field was dominated by methods that determined whether an abnormality existed based on statistics derived from specific data. This was possible because data used to be low-dimensional, so classical statistical methods worked effectively. However, as data characteristics have grown complex in the era of big data, it has become more difficult to accurately analyze and predict the data generated throughout industry in the conventional way. Supervised learning algorithms such as SVM and decision trees were therefore adopted. However, supervised models predict test data accurately only when the class distribution is balanced, whereas most data generated in industry have imbalanced classes, so the predictions of supervised models are not always valid. To overcome these drawbacks, many studies now use unsupervised models that are not influenced by class distribution, such as autoencoders or generative adversarial networks. In this paper, we propose a method to detect anomalies using generative adversarial networks. AnoGAN, introduced by Thomas et al. (2017), is a model built on convolutional neural networks that performs anomaly detection on medical images. In contrast, anomaly detection for sequence data using generative adversarial networks has been studied far less than for image data. Li et al. (2018) proposed a model based on LSTM, a type of recurrent neural network, to classify abnormalities in numerical sequence data, but it has not been applied to categorical sequence data, nor has the feature matching method of Salimans et al. (2016). This suggests there is ample room for studies on anomaly classification of sequence data with generative adversarial networks. To learn the sequence data, the generative adversarial network is built from LSTMs: the generator is a 2-stacked LSTM with 32-dim and 64-dim hidden unit layers, and the discriminator uses a 64-dim hidden unit layer. Existing work on anomaly detection for sequence data derives the abnormality score from the entropy of the probability of the actual data; in this paper, as mentioned above, the score is instead derived with the feature matching technique. In addition, the process of optimizing the latent variable was designed with an LSTM to improve model performance. The modified generative adversarial model was more precise than the autoencoder in all experiments and approximately 7% higher in accuracy. In terms of robustness, the generative adversarial network also outperformed the autoencoder: because it learns the data distribution from real categorical sequence data, it is unaffected by a single type of normal data, whereas the autoencoder is not. In the robustness test, the accuracy of the autoencoder was 92% and that of the adversarial network 96%; in terms of sensitivity, the autoencoder reached 40% and the adversarial network 51%. Experiments were also conducted to show how much performance changes with different optimization structures for the latent variable; as a result, sensitivity improved by about 1%. These results offer a new perspective on optimizing the latent variable, which had previously received relatively little attention.
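
The feature matching score can be illustrated in miniature: the anomaly score is the distance between the mean discriminator features of real and generated batches. The sketch below substitutes a fixed random projection plus tanh for the discriminator's LSTM feature layer, and all data are synthetic; it is not the paper's model, only the scoring idea:

```python
import numpy as np

rng = np.random.default_rng(42)

def features(x, W):
    """Stand-in for the discriminator's intermediate feature layer."""
    return np.tanh(x @ W)

def feature_matching_score(real_batch, fake_batch, W):
    """Anomaly score: distance between mean features of two batches
    (feature matching, after Salimans et al. 2016)."""
    f_real = features(real_batch, W).mean(axis=0)
    f_fake = features(fake_batch, W).mean(axis=0)
    return float(np.linalg.norm(f_real - f_fake))

W = rng.normal(size=(8, 16))                  # hypothetical feature weights
normal = rng.normal(0.0, 1.0, size=(64, 8))   # sequences embedded as 8-dim vectors
similar = rng.normal(0.0, 1.0, size=(64, 8))  # same distribution as `normal`
shifted = rng.normal(3.0, 1.0, size=(64, 8))  # anomalous: shifted distribution

score_normal = feature_matching_score(normal, similar, W)
score_anom = feature_matching_score(normal, shifted, W)
```

Batches drawn from the learned (normal) distribution yield small scores, while distribution shifts inflate the score, which is what makes the statistic usable for anomaly detection.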

Emulsifying Properties of Gelatinized Octenyl Succinic Anhydride Modified starch from Barley (호화 옥테닐 호박산 전분의 유화 특성)

  • Kim, San-Seong;Kim, Sun-Hyung;Lee, Eui-Seok;Lee, Ki-Teak;Hong, Soon-Taek
    • Journal of the Korean Applied Science and Technology / v.36 no.1 / pp.174-188 / 2019
  • The present study investigated the emulsifying properties of heat-treated octenyl succinic anhydride (OSA) starch and the interfacial structure at the oil droplet surface in emulsions stabilized by it. First, aqueous suspensions of OSA starch were heated at $80^{\circ}C$ for 30 min. Oil-in-water emulsions were then prepared with the heat-treated OSA starch suspension as the sole emulsifier, and their physicochemical properties, such as fat globule size, surface load, zeta potential, dispersion stability, and confocal laser scanning microscopy (CLSM) images, were determined. Fat globule size decreased as the concentration of OSA starch in the emulsions increased, reaching a lower limit ($d_{32}:0.31{\mu}m$) at ${\geq}0.2wt%$. Surface load increased steadily with increasing OSA starch concentration, possibly through the formation of multiple layers. Fat globule sizes were also influenced by pH: they increased under acidic conditions, a result interpreted in view of the change in zeta potential. Dispersion stability measured by Turbiscan showed that the emulsions were more unstable under acidic conditions. Heat-treated OSA starch was found to adsorb at the oil droplet surface as a membrane-like layer (not as starch granules), which suggests that the stabilizing mechanism of OSA starch emulsions is steric repulsion.
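
The $d_{32}$ reported above is the Sauter mean diameter, the volume-to-surface-area mean commonly used to characterize emulsion droplet sizes. A short sketch with a hypothetical droplet distribution (the diameters and counts are illustrative, not the paper's data):

```python
import numpy as np

def sauter_mean_diameter(diameters, counts):
    """Sauter mean diameter: d32 = sum(n_i * d_i^3) / sum(n_i * d_i^2),
    the volume-to-surface mean used for fat globule size in emulsions."""
    d = np.asarray(diameters, dtype=float)
    n = np.asarray(counts, dtype=float)
    return (n * d**3).sum() / (n * d**2).sum()

# Hypothetical droplet size distribution (diameters in micrometres)
d32 = sauter_mean_diameter([0.2, 0.3, 0.5], [500, 300, 50])
```

Because $d_{32}$ weights larger droplets by volume over surface, a few large droplets shift it upward more than a number-average diameter, which is why it is sensitive to the coalescence seen under acidic conditions.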

Automatic Fracture Detection in CT Scan Images of Rocks Using Modified Faster R-CNN Deep-Learning Algorithm with Rotated Bounding Box (회전 경계박스 기능의 변형 FASTER R-CNN 딥러닝 알고리즘을 이용한 암석 CT 영상 내 자동 균열 탐지)

  • Pham, Chuyen;Zhuang, Li;Yeom, Sun;Shin, Hyu-Soung
    • Tunnel and Underground Space / v.31 no.5 / pp.374-384 / 2021
  • In this study, we propose a new approach for automatic fracture detection in CT scan images of rock specimens. The approach is built on a two-stage object detection deep learning algorithm, Faster R-CNN, with a major modification: the use of rotated bounding boxes. The rotated bounding box plays a key role in future work to overcome several inherent difficulties of fracture segmentation related to the heterogeneity of the uninteresting background (i.e., minerals) and the variation in fracture size and shape. Compared to the commonly used axis-aligned bounding box, the rotated bounding box adapts better to the elongated shape of a fracture, minimizing the proportion of background within the box. An additional benefit is that it provides information on the orientation and length of a fracture without a further segmentation and measurement step. To validate the applicability of the proposed approach, we trained and tested it on a number of CT image sets of fractured granite specimens with highly heterogeneous backgrounds, as well as other rocks such as sandstone and shale. The results demonstrate that our approach achieves encouraging fracture detection, with a mean average precision (mAP) of up to 0.89, and outperforms the conventional approach in terms of the background-to-object ratio within the bounding box.
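
The background-reduction argument for rotated boxes can be quantified geometrically: for an elongated fracture, the axis-aligned bounding box of a rotated rectangle is much larger than the rotated box itself. A small sketch (the fracture dimensions below are illustrative, not from the paper):

```python
import numpy as np

def box_areas(length, width, angle_deg):
    """Area of a rotated bounding box vs. its axis-aligned bounding box,
    for an elongated feature of given length/width at a given orientation."""
    theta = np.radians(angle_deg)
    rot_area = length * width
    # Axis-aligned extent of the rotated rectangle
    w_aabb = length * abs(np.cos(theta)) + width * abs(np.sin(theta))
    h_aabb = length * abs(np.sin(theta)) + width * abs(np.cos(theta))
    return rot_area, w_aabb * h_aabb

# Hypothetical elongated fracture: 100 px long, 5 px wide, oriented at 45 degrees
rot, aabb = box_areas(100, 5, 45)
```

At 45 degrees the axis-aligned box is roughly an order of magnitude larger than the rotated box, so almost all of its area is background, which is exactly the ratio the rotated-box formulation minimizes.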