• Title/Summary/Keyword: Experiment study


Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.69-94 / 2017
  • Recently, increasing demand for big data analysis has been driving the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration rate of smart devices are producing a large amount of data. As a result, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis have been continuously increasing. This means that big data analysis will become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each analysis requester. However, growing interest in big data analysis has stimulated computer programming education and the development of many programs for data analysis. Accordingly, the entry barriers to big data analysis are gradually lowering and data analysis technology is spreading, so big data analysis is expected to be performed by the requesters themselves. Along with this, interest in various types of unstructured data is continually increasing; in particular, much attention is focused on text data. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques used for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large set of documents, identifies the documents that correspond to each issue, and provides the identified documents as clusters. It is regarded as very useful in that it reflects the semantic elements of documents. 
Traditional topic modeling is based on the distribution of key terms across the entire document collection. Thus, it is essential to analyze the entire collection at once to identify the topic of each document. This causes long processing times when topic modeling is applied to a large number of documents. In addition, it has a scalability problem: an exponential increase in processing time as the number of analysis objects grows. This problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method enables topic modeling on a large number of documents with limited system resources and can improve processing speed. It can also significantly reduce analysis time and cost, since documents can be analyzed in each location without first being combined. However, despite these advantages, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: for each document, local topics can be identified, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology needs to be established; that is, assuming the global topics are the ideal answer, the deviation of each local topic from its global topic needs to be measured. Because of these difficulties, this approach has not been studied as thoroughly as other topic modeling methods. In this paper, we propose a topic modeling approach that solves the above two problems. 
First, we divide the entire document cluster (the global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by detecting whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirm that the proposed methodology can provide results similar to topic modeling on the entire collection, and we also propose a reasonable method for comparing the results of both approaches.
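
The divide-and-conquer idea above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: each local set gets its own LDA model, and each local topic is mapped to the most similar global topic by cosine similarity of the topic-term distributions. The corpus and topic counts are made up.

```python
# Sketch of divide-and-conquer topic modeling with local-to-global mapping.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "stock market price investors trading shares",
    "market economy growth trading stocks finance",
    "soccer match goal players league season",
    "league players team goal coach season",
    "election vote government policy parliament",
    "government policy election campaign voters",
]

# Shared vocabulary so local and global topic-term vectors are comparable.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

def topic_term_dists(X_sub, n_topics, seed=0):
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed)
    lda.fit(X_sub)
    comp = lda.components_
    return comp / comp.sum(axis=1, keepdims=True)  # rows sum to 1

global_topics = topic_term_dists(X, n_topics=3)     # whole corpus
local_topics = topic_term_dists(X[:3], n_topics=2)  # one local sub-cluster

# Map each local topic to the most similar global topic.
sim = cosine_similarity(local_topics, global_topics)
mapping = sim.argmax(axis=1)
print(mapping)  # global-topic index for each local topic
```

In the paper's setting the "global" model would instead be fit on the reduced global set of delegate documents, but the mapping step is the same.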

Effect of Hydrogen Peroxide Enema on Recovery of Carbon Monoxide Poisoning (과산화수소 관장이 급성 일산화탄소중독의 회복에 미치는 영향)

  • Park, Won-Kyun;Chae, E-Up
    • The Korean Journal of Physiology / v.20 no.1 / pp.53-63 / 1986
  • Carbon monoxide (CO) poisoning has been one of the major environmental problems because of tissue hypoxia, especially brain tissue hypoxia, due to the great affinity of CO for hemoglobin. Inhalation of pure oxygen $(O_2)$ under high atmospheric pressure has been considered the best treatment of CO poisoning, since it supplies $O_2$ to hypoxic tissues in dissolved form in plasma and also rapidly eliminates CO from carboxyhemoglobin (HbCO). Hydrogen peroxide $(H_2O_2)$ is rapidly decomposed to water and $O_2$ in the presence of catalase in the blood, but intravenous administration of $H_2O_2$ is hazardous because of the formation of methemoglobin and air embolism. However, it has been reported that an enema of $H_2O_2$ solution below 0.75% can continuously supply $O_2$ to hypoxic tissues without the hazards mentioned above. This study was performed to evaluate the effect of an $H_2O_2$ enema on the elimination of CO from HbCO during recovery from acute CO poisoning. Rabbits weighing about 2.0 kg were exposed to a CO-air gas mixture for 30 minutes. After acute CO poisoning, 30 rabbits were divided into three groups according to the recovery treatment. The first group was exposed to room air and the second group inhaled 100% $O_2$ at 1 atmosphere. The third group was administered 10 ml of 0.5% $H_2O_2$ solution per kg body weight by enema immediately after CO poisoning and was exposed to room air during the recovery period. Arterial blood was sampled before and after CO poisoning and at 15, 30, 60 and 90 minutes of the recovery period. Blood pH, $Pco_2$ and $Po_2$ were measured anaerobically with a blood gas analyzer, and the saturation percentage of HbCO was measured spectrophotometrically. The effect of the $H_2O_2$ enema on recovery from acute CO poisoning was observed and compared with the room air group and the 100% $O_2$ inhalation group. 
The results obtained from the experiment are as follows. The pH of arterial blood was significantly decreased after CO poisoning and during the first 15 minutes of the recovery period in all groups. Thereafter, it slowly returned toward the pre-poisoning level, but recovery of pH in the $H_2O_2$ enema group was more delayed than in the other groups. $Paco_2$ was significantly decreased after CO poisoning in all groups. During the recovery period, $Paco_2$ of the room air group recovered completely to the pre-poisoning level, but that of the 100% $O_2$ inhalation group and the $H_2O_2$ enema group had not recovered by 90 minutes. $Pao_2$ was slightly decreased after CO poisoning. During the recovery period, it increased markedly in the first 15 minutes and remained above the pre-poisoning level in all groups. Furthermore, $Pao_2$ of the $H_2O_2$ enema group was 102 to 107 mmHg, about 10 mmHg higher than that of the room air group, during the recovery period. The saturation percentage of HbCO increased to the range of 54 to 72 percent after CO poisoning and generally diminished during the recovery period. However, in the $H_2O_2$ enema group the decline of HbCO saturation was generally faster than in the 100% $O_2$ inhalation group and the room air group, and the decline in the 100% $O_2$ inhalation group was also slightly faster than in the room air group in the later part of the recovery period. In conclusion, an enema of 0.5% $H_2O_2$ solution seems to facilitate the elimination of CO from HbCO in the blood and to increase $Pao_2$ during recovery from acute CO poisoning.
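
Since the abstract walks through the chemistry (catalase decomposes $H_2O_2$ to water and $O_2$ via $2H_2O_2 \rightarrow 2H_2O + O_2$), a back-of-envelope estimate of the oxygen the enema could deliver may help. This is a sketch under stated assumptions (complete decomposition, ideal gas at STP), not a figure from the paper.

```python
# Estimate of O2 released by the enema dose used in this study:
# 10 ml of 0.5% (w/v) H2O2 per kg body weight, assuming complete
# catalase decomposition (2 H2O2 -> 2 H2O + O2) and ideal gas at STP.
dose_ml_per_kg = 10.0
h2o2_g_per_ml = 0.005          # 0.5% w/v solution
molar_mass_h2o2 = 34.01        # g/mol
molar_volume_stp = 22.4        # L/mol

mol_h2o2_per_kg = dose_ml_per_kg * h2o2_g_per_ml / molar_mass_h2o2
mol_o2_per_kg = mol_h2o2_per_kg / 2        # one O2 per two H2O2
ml_o2_per_kg = mol_o2_per_kg * molar_volume_stp * 1000

print(round(ml_o2_per_kg, 1))  # ml of O2 per kg body weight
```

On these assumptions the dose yields on the order of 16-17 ml of $O_2$ per kg, which is consistent with the idea of a slow, continuous supplemental oxygen source rather than a replacement for oxygen inhalation.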


Deep Learning-based Professional Image Interpretation Using Expertise Transplant (전문성 이식을 통한 딥러닝 기반 전문 이미지 해석 방법론)

  • Kim, Taejin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.79-104 / 2020
  • Recently, as deep learning has attracted attention, its use is being considered for solving problems in various fields. In particular, deep learning is known to perform excellently when applied to unstructured data such as text, sound and images, and many studies have proven its effectiveness. Owing to the remarkable development of text and image deep learning technology, interest in image captioning technology and its applications is rapidly increasing. Image captioning is a technique that automatically generates relevant captions for a given image by handling image comprehension and text generation simultaneously. Despite the high entry barrier of image captioning, which requires analysts to process both image and text data, it has established itself as one of the key fields in AI research owing to its wide applicability, and many studies have been conducted to improve its performance in various respects. Recent studies attempt to create advanced captions that not only describe an image accurately but also convey the information contained in the image more sophisticatedly. Despite these efforts, it is difficult to find research that interprets images from the perspective of domain experts rather than that of the general public. Even for the same image, the parts of interest may differ according to the professional field of the person viewing it. Moreover, the way of interpreting and expressing the image also differs with the level of expertise. The public tends to recognize an image from a holistic and general perspective, that is, by identifying the image's constituent objects and their relationships. 
On the contrary, domain experts tend to recognize an image by focusing on the specific elements necessary to interpret it based on their expertise. This implies that the meaningful parts of an image differ with the viewer's perspective, even for the same image, and image captioning needs to reflect this phenomenon. Therefore, in this study, we propose a method to generate domain-specialized captions for an image by utilizing the expertise of experts in the corresponding domain. Specifically, after pre-training on a large amount of general data, the domain expertise is transplanted through transfer learning with a small amount of expert data. However, a simple application of transfer learning with expert data may invoke another problem: simultaneous learning with captions of various characteristics can cause a so-called 'inter-observation interference' problem, which makes it difficult to learn each characteristic point of view purely. When learning from a vast amount of data, most of this interference is self-purified and has little impact on the results. In contrast, in fine-tuning, where learning is performed on a small amount of data, the impact of such interference can be relatively large. To solve this problem, we propose a novel 'Character-Independent Transfer-learning' that performs transfer learning independently for each character. To confirm the feasibility of the proposed methodology, we performed experiments utilizing the results of pre-training on the MSCOCO dataset, which comprises 120,000 images and about 600,000 general captions. Additionally, with the advice of an art therapist, about 300 pairs of images and expert captions were created, and this data was used for the expertise-transplantation experiments. 
As a result of the experiment, it was confirmed that captions generated by the proposed methodology reflect the implanted expertise, whereas captions generated by learning on general data alone contain much content irrelevant to expert interpretation. In this paper, we propose a novel approach to specialized image interpretation: a method that uses transfer learning to generate captions specialized for a specific domain. In the future, by applying the proposed methodology to expertise transplantation in various fields, we expect active research on mitigating the lack of expert data and improving the performance of image captioning.
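
The 'character-independent transfer-learning' idea can be sketched in miniature. The example below is a hypothetical stand-in, not the authors' captioning model: a linear model plays the role of the network, it is pre-trained on plentiful "general" data, and then a separate copy is fine-tuned per caption style so one style's gradients never interfere with another's.

```python
# Minimal numpy sketch: shared pre-training, then independent
# fine-tuning per "character" (caption style). All data is synthetic.
import numpy as np

rng = np.random.default_rng(0)

def gd(w, X, y, lr=0.1, steps=300):
    # Plain batch gradient descent on squared error.
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Pre-training on plentiful "general" data.
X_gen = rng.normal(size=(200, 3))
w_general = np.array([1.0, -1.0, 0.5])
w_pre = gd(np.zeros(3), X_gen, X_gen @ w_general)

# Two small "expert" caption styles with conflicting targets: trained
# jointly, their gradients would pull the first weight in opposite
# directions (the inter-observation interference described above).
styles = {"A": np.array([2.0, -1.0, 0.5]),
          "B": np.array([0.0, -1.0, 0.5])}
fine_tuned = {}
for name, w_style in styles.items():
    X_s = rng.normal(size=(20, 3))
    # Each style starts from the shared pre-trained weights but is
    # fine-tuned independently on its own small dataset.
    fine_tuned[name] = gd(w_pre, X_s, X_s @ w_style, steps=500)

for name, w in fine_tuned.items():
    print(name, np.round(w, 2))
```

Each fine-tuned copy converges toward its own style while both share the pre-trained starting point, which is the essence of transplanting expertise from a small expert dataset.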

Study on the Content of ${NO_3}^-$ in Green Vegetable Juice by Different Sorts, Harvesting Time, Mixing Rate of Vegetable, Storage Condition and Manufacturers (채소 모재료의 종류, 수확시기별, 부위별 혼합비율, 저장조건 및 생산회사에 따른 녹즙의 ${NO_3}^-$ 함량차이)

  • Sohn, Sang-Mok;Yoon, Ji-Young
    • Korean Journal of Organic Agriculture / v.7 no.1 / pp.91-103 / 1998
  • As the consumption of green vegetable juice by Koreans has increased rapidly, the ${NO_3}^-$ intake through green vegetable juice can no longer be ignored in calculating daily ${NO_3}^-$ intake. It is necessary to collect basic data on the ${NO_3}^-$ content in green vegetable juice by vegetable sort, harvesting time, mixing rate, manufacturer and storage condition for future calculation of the daily ${NO_3}^-$ intake of Koreans. The following are the results from monitoring and laboratory experiments on ${NO_3}^-$ and vitamin C in green vegetable juice. The ${NO_3}^-$ contents of angelica plant (tomorrow's leaf) and kale were higher in spring than in summer and autumn. The highest ${NO_3}^-$ values in tomorrow's leaf and kale were 4.85 and 2.94 times the lowest values, respectively. The average ${NO_3}^-$ content in the midribs of tomorrow's leaf and kale was 7.5 and 2.1 times higher, respectively, than in the leaf blades. This indicates that green vegetable juice made from the leaf blades of tomorrow's leaf and kale might be preferable to juice made from the midribs in terms of ${NO_3}^-$ content. The contents of ${NO_3}^-$ and vitamin C decreased rapidly over time after juice making, more so than with storage temperature, in carrot, kale and cucumber juice. The contents of ${NO_3}^-$ and vitamin C in carrot, kale and cucumber juice were positively correlated regardless of storage at room or cold temperature. The contents of ${NO_3}^-$ and vitamin C in the green vegetable juice of company P were the highest among the manufacturers. The lower ${NO_3}^-$ and vitamin C contents of the juices of companies TW and GB compared to company P are due to dilution with water during production. 
The ${NO_3}^-$ content of green vegetable juice available on the market was 143 ppm in carrot juice, 506 ppm in tomorrow's leaf juice, 669 ppm in wild water celery juice and 985 ppm in kale juice, whereas the vitamin C content was 43 ppm in carrot juice, 289 ppm in wild water celery juice, 353 ppm in kale juice and 768 ppm in tomorrow's leaf juice. It was calculated that people take in 253 mg of ${NO_3}^-$ from tomorrow's leaf juice, 335 mg from wild water celery juice and 483 mg from kale juice if they drink 500 ml of green vegetable juice per day, which exceeds the acceptable daily intake by 1.16, 1.53 and 2.21 times, respectively, from green vegetable juice consumption alone.
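
A quick check of the intake arithmetic quoted above. The ~219 mg acceptable daily intake (ADI) used here is an assumption (WHO's 3.7 mg ${NO_3}^-$ per kg body weight for a roughly 59 kg adult) inferred from the paper's stated multiples, not a value given in the abstract; the kale figures come out slightly above the quoted 483 mg and 2.21x, presumably due to rounding in the original data.

```python
# NO3- intake from 500 ml of juice at the reported ppm (~mg/L) levels,
# and the multiple of an assumed acceptable daily intake (ADI).
juice_ppm = {"tomorrow's leaf": 506, "wild water celery": 669, "kale": 985}
volume_l = 0.5                 # 500 ml of juice per day
adi_mg = 3.7 * 59.2            # assumed ADI basis, ~219 mg (see note above)

for name, ppm in juice_ppm.items():
    intake_mg = ppm * volume_l          # ppm in juice ~= mg per litre
    print(f"{name}: {intake_mg:.1f} mg NO3-, {intake_mg / adi_mg:.2f}x ADI")
```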


Effects of Fermented Diets Including Liquid By-products on Nutrient Digestibility and Nitrogen Balance in Growing Pigs (착즙부산물을 이용한 발효사료가 육성돈의 영양소 소화율 및 질소균형에 미치는 영향)

  • Lee, Je-Hyun;Jung, Hyun-Jung;Kim, Dong-Woon;Lee, Sung-Dae;Kim, Sang-Ho;Kim, In-Cheul;Kim, In-Ho;Ohh, Sang-Jip;Cho, Sung-Back
    • Journal of Animal Environmental Science / v.16 no.1 / pp.81-92 / 2010
  • This study was conducted to evaluate the effects of fermented diets including liquid by-products on nutrient digestibility and nitrogen balance in growing pigs. Treatments were 1) CON (basal diet), 2) F (fermented basal diet), 3) KF (fermented basal diet including 30% kale pomace), 4) AF (fermented basal diet including 30% Angelica keiskei pomace), 5) CF (fermented basal diet including 30% carrot pomace) and 6) GF (fermented basal diet including 30% grape pomace). A total of 24 pigs (41.74 kg average initial body weight, Landrace $\times$ Yorkshire $\times$ Duroc) were assigned to the 6 treatments with 4 replicates and 1 pig per metabolic cage in a randomized complete block (RCB) design. Pigs were housed in $0.5\times1.3m$ metabolic cages in a 17-d digestibility trial. Over the entire experimental period, dry matter digestibility of treatments CON, F and CF was higher than that of the other treatments (p<0.05). In crude protein digestibility, treatment F was higher than treatments AF and GF (p<0.05). Treatment GF showed the lowest crude fiber digestibility among all treatments (p<0.05). In ether extract digestibility, treatments AF and CF were higher than the other treatments (p<0.05) except KF. Treatment CF showed the best ash digestibility among all treatments (p<0.05), while for Ca and P digestibility, treatments CF and GF were improved over the other treatments (p<0.05). Energy digestibility of treatments CON, F and CF was higher than that of KF, AF and GF (p<0.05). In total essential amino acid digestibility, treatment F was improved over AF, CF and GF (p<0.05). In total non-essential amino acid digestibility, treatment F was higher than CON, AF and GF (p<0.05). In total amino acid digestibility, treatment F was higher than AF and CF (p<0.05), and treatment GF showed the lowest digestibility (p<0.05). 
In fecal nitrogen excretion ratio, treatment GF was the greatest among all treatments (p<0.05) and treatment F was lower than the other treatments (p<0.05). In urinary nitrogen excretion ratio, treatments CON and GF were the lowest among all treatments (p<0.05). In nitrogen retention ratio, treatment CON was the highest and treatment KF the lowest among all treatments (p<0.05). Therefore, this experiment suggests that fermented diets can improve the nutrient and amino acid digestibility of growing pigs.

Changes of Proteolytic Activity and Amino Acid Composition of the Tissue Extract from Sea Cucumber Entrails during Fermentation with Salt (해삼내장(內臟)젓갈 숙성중(熟成中) 단백질분해효소(蛋白質分解酵素)의 활성(活性)과 아미노산(酸) 조성(組成)의 변화(變化))

  • Lee, Gi Chan;Cho, Deuk Moon;Byun, Dae Seok;Joo, Hyen Kyu;Pyeun, Jae Hyeung
    • Journal of the Korean Society of Food Science and Nutrition / v.12 no.4 / pp.342-349 / 1983
  • This study was undertaken to obtain food and nutritional data on the processing of fermented sea cucumber (Stichopus japonicus) entrails. In the experiment, crude proteolytic enzyme was extracted from the entrails tissue of raw sea cucumber and of fermented sea cucumber at several stages of ripening. The optimal activity conditions of the crude enzyme and the changes in amino acid composition of the protein and free amino acids in the raw and fermented samples were also investigated. 1. Three proteolytic enzymes, with optimal activity at pH 3.1 and $50^{\circ}C$ (A-enzyme), pH 5.7 and $50^{\circ}C$ (B-enzyme), and pH 7.7 and $45^{\circ}C$ (C-enzyme), respectively, were believed to exist in the entrails tissue of sea cucumber. 2. The A-enzyme and C-enzyme were strongly inhibited with increasing salt concentration, while the B-enzyme was activated at 1% salt and inhibited above 5% salt. 3. Tests of the effect of several salt ions on proteolytic activity showed that the A-enzyme was slightly inhibited by all salt ions added; the B-enzyme was activated by all salt ions except $Cu^{2+}$; and the C-enzyme was activated by $Ca^{2+}$ and $Mn^{2+}$ and inhibited by $Cu^{2+}$, $Co^{2+}$ and $Mg^{2+}$. 4. When the effect of ripening time on the proteolytic activity of the crude enzymes was analyzed, the activity of the A-enzyme weakened slightly as fermentation progressed, whereas the B-enzyme was not influenced by fermentation time. 5. In the analysis of the amino acid composition of the protein, the sea cucumber entrails fermented for 8 days showed a diminution of all amino acids. The most diminished amino acids were arginine, alanine, glutamic acid, glycine, serine, valine, threonine and lysine, while methionine, histidine and isoleucine were slightly decreased. 6. 
In the analysis of the free amino acid composition of the 8-day fermented sample, glutamic acid, aspartic acid, leucine and lysine were abundant, while histidine, methionine, proline and tyrosine were scarce. Most free amino acids increased during fermentation, especially lysine, histidine, threonine, glutamic acid, methionine, valine and leucine.


Growth and Survival by the Breeding Method of Early Young Spats of the Hard Clam, Meretrix petechialis (LAMARCK) (말백합, Meretrix petechialis (LAMARCK) 초기치패의 사육방법별 성장 및 생존)

  • Kim, Byeong-Hak;Cho, Kee-Chae;Jee, Young-Ju;Byun, Soon-Gyu;Kim, Min-Chul
    • The Korean Journal of Malacology / v.27 no.2 / pp.115-119 / 2011
  • To support technical development of artificial seed production, the growth and survival of early young spats of the hard clam, Meretrix petechialis, were investigated by breeding method. Adult clams were collected at Hasa-ri, Baeksu-eup, Yeonggwang-gun, Jeollanam-do on July 13, 2010, and then transported to the indoor aquarium at the laboratory. Eggs taken from mother clams were inseminated, and after fertilization in the aquarium, 60 million bottom-clinging spats ($198{\pm}12{\mu}m$ in shell length) were produced and bred. The breeding experiments were carried out over 80 days, from July 16 to October 4, 2010. The breeding methods tested were sand box, sand-bottom circulation filter, enclosing net, and bare floor, and the experimental water temperatures for the larvae were 25, 28, 31 and $34^{\circ}C$. Four cultured marine food organisms were used in this study: Isochrysis galbana, Chaetoceros gracilis, Phaeodactylum tricornutum and Tetraselmis tetrathele. The growth rate and survival of the early-stage spats were investigated under each experimental condition. As a result, growth was fastest with the enclosing net (shell length reaching $2.64{\pm}0.59mm$), followed by the sand box ($2.59{\pm}0.64mm$), the bottom circulating filter ($2.56{\pm}0.52mm$) and the floor ($2.52{\pm}0.56mm$). Survival was highest with the sand box (35.9%), followed by the floor (34.6%), the bottom circulating filter (29.5%) and the enclosing net (9.3%). The water temperature of $34^{\circ}C$ showed the fastest growth (shell length reaching $2.70{\pm}0.76mm$) and $25^{\circ}C$ the slowest ($2.45{\pm}0.41mm$). Survival was highest at $31^{\circ}C$ and lowest (14.2%) at $34^{\circ}C$. 
The growth rate of the group fed the mixed live food was the fastest (shell length $2.52{\pm}0.66mm$), and that of the group fed P. tricornutum was the slowest ($2.29{\pm}0.43mm$). Survival was highest (36.9%) in the group fed the mixed live food, and the group fed T. tetrathele showed the lowest rate (16.2%).

Evaluation of indirect N2O Emission from Nitrogen Leaching in the Ground-water in Korea (우리나라 농경지에서 질소의 수계유출에 의한 아산화질소 간접배출량 평가)

  • Kim, Gun-Yeob;Jeong, Hyun-Cheol;Kim, Min-Kyeong;Roh, Kee-An;Lee, Deog-Bae;Kang, Kee-Kyung
    • Korean Journal of Soil Science and Fertilizer / v.44 no.6 / pp.1232-1238 / 2011
  • This experiment was conducted to measure the concentration of dissolved $N_2O$ in the ground-water of 59 wells and to derive an emission factor for assessing indirect $N_2O$ emissions from the agricultural sector, in agricultural areas of Gyeongnam province from 2007 to 2010. Concentrations of dissolved $N_2O$ in the ground-water of the 59 wells ranged from trace levels to $196.6{\mu}g-N\;L^{-1}$. $N_2O$ concentrations were positively related with $NO_3$-N, suggesting that denitrification was the principal source of $N_2O$ production and that $NO_3$-N concentration was the best predictor of indirect $N_2O$ emission. The ratio of dissolved $N_2O$-N to $NO_3$-N in ground-water is the key quantity for building an emission factor for indirect $N_2O$ emission in the agricultural sector. The mean ratio of $N_2O$-N to $NO_3$-N was 0.0035, much lower than 0.015, the default value currently used in the Intergovernmental Panel on Climate Change (IPCC) methodology for assessing indirect $N_2O$ emission in agro-ecosystems (IPCC, 1996). This means that the IPCC's present indirect emission factor ($EF_{5-g}$, 0.015), and the indirect $N_2O$ emission estimated with it, are too high to adopt in Korea. We therefore recommend 0.0034 as the country-specific emission factor ($EF_{5-g}$) for assessing indirect $N_2O$ emission from the agricultural sector. Using 0.0034 as the emission factor ($EF_{5-g}$), the estimated indirect $N_2O$ emission from the agricultural sector in Korea in 2008 decreases from 1,801,576 ton ($CO_2$-eq) to 964,645 ton ($CO_2$-eq). The results of this study support adopting 0.0034 as the national emission factor ($EF_{5-g}$) for assessing indirect $N_2O$ emission in the agricultural sector.
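
The emission-factor derivation above can be sketched numerically. The well concentrations below are invented for illustration; only the method ($EF_{5-g}$ as the mean ratio of dissolved $N_2O$-N to $NO_3$-N across wells) and the IPCC comparison value of 0.015 come from the abstract.

```python
# Sketch of deriving a country-specific EF5-g: the mean ratio of
# dissolved N2O-N to NO3-N over sampled ground-water wells.
# Concentrations are illustrative, not the study's measurements.
n2o_n = [0.8, 1.2, 0.5, 2.0]    # dissolved N2O-N, mg/L (illustrative)
no3_n = [230, 340, 150, 570]    # NO3-N, mg/L (illustrative)

ratios = [a / b for a, b in zip(n2o_n, no3_n)]
ef_5g = sum(ratios) / len(ratios)

print(round(ef_5g, 4))                  # derived emission factor
print(ef_5g < 0.015)                    # below the IPCC default
```

In the study, this per-well ratio averaged to 0.0035 over 59 wells, grounding the recommendation of 0.0034 in place of the IPCC default of 0.015.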

Evaluation of Metal Volume and Proton Dose Distribution Using MVCT for Head and Neck Proton Treatment Plan (두경부 양성자 치료계획 시 MVCT를 이용한 Metal Volume 평가 및 양성자 선량분포 평가)

  • Seo, Sung Gook;Kwon, Dong Yeol;Park, Se Joon;Park, Yong Chul;Choi, Byung Ki
    • The Journal of Korean Society for Radiation Therapy / v.31 no.1 / pp.25-32 / 2019
  • Purpose: The size, shape, and volume of prosthetic appliances are distorted by the metal artifacts caused by dental implants during head and neck radiation treatment. This reduces the accuracy of contouring targets and surrounding normal tissue in the radiation treatment plan. Therefore, the purpose of this study is to obtain images of tooth-sized metal pieces through MVCT, SMART-MAR CT and KVCT, evaluate their volumes, apply them to the proton therapy plan, and analyze the differences in dose distribution. Materials and Methods: Metal A ($0.5{\times}0.5{\times}0.5cm$), Metal B ($1{\times}1{\times}1cm$), and Metal C ($1{\times}2{\times}1cm$), similar in size to the inlays, crowns, and bridges used in dental treatment, were made with Cerrobend ($9.64g/cm^3$). Each metal piece was placed into an in-house head and neck phantom, and KVCT and SMART-MAR images were obtained using a CT simulator (Discovery CT 590RT, GE, USA) with a slice thickness of 1.25 mm. The MVCT images were obtained in the same way with the $RADIXACT^{(R)}$ Series (Accuray $Precision^{(R)}$, USA). The metal images obtained through MVCT, SMART-MAR CT, and KVCT were compared in size along the X, Y, and Z axes and in volume, based on the auto-contour threshold raw values of the treatment planning system Pinnacle (Ver 9.10, Philips, Palo Alto, USA). To compare dose distributions, the proton treatment plan (RayStation 5.1, RaySearch, USA) was set up by fusing the contour of Metal B ($1{\times}1{\times}1cm$) obtained by each CT into KVCT. Result: Relative to the actual sizes, the measured volumes were: Metal A (MVCT: 1.0 times, SMART-MAR CT: 1.84 times, KVCT: 1.92 times), Metal B (MVCT: 1.02 times, SMART-MAR CT: 1.47 times, KVCT: 1.82 times), and Metal C (MVCT: 1.0 times, SMART-MAR CT: 1.46 times, KVCT: 1.66 times). MVCT measured the actual metal volume most closely. 
When the volume of Metal B was applied to the proton treatment plan, the dose to the $D_{99%}$ volume was 3094 cGE with MVCT, 2902 cGE with SMART-MAR CT, and 2880 cGE with KVCT, against the reference of 3082 cGE. Conclusion: The overall volume and the X and Z axes were closest to the actual sizes in MVCT, and the Y axis, in the superior-inferior direction, was consistent in length across the CT modalities. The best dose distribution in head and neck proton treatment was obtained with MVCT, which reproduced the size, shape, and volume of the metal most closely. Thus it would be very useful to apply the contour of the prosthetic appliance obtained with MVCT into KVCT for proton treatment planning.

A Two-Stage Learning Method of CNN and K-means RGB Cluster for Sentiment Classification of Images (이미지 감성분류를 위한 CNN과 K-means RGB Cluster 이-단계 학습 방안)

  • Kim, Jeongtae;Park, Eunbi;Han, Kiwoong;Lee, Junghyun;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.139-156 / 2021
  • The biggest reason for using a deep learning model in image classification is that it can consider the relationships between regions by extracting each region's features from the overall information of the image. However, a CNN model may not be suitable for emotional image data that lacks distinctive regional features. To address the difficulty of classifying emotion images, researchers regularly propose CNN-based architectures tailored to emotion images. Studies on the relationship between color and human emotion have also been conducted, finding that different emotions are induced by different colors. Within deep learning, there have been studies that apply color information to image sentiment classification: using the image's color information in addition to the image itself improves the accuracy of classifying image emotions compared with training the model on the image alone. This study proposes two ways to increase accuracy by adjusting the result value after the model classifies an image's emotion. Both methods modify the result value based on statistics over the colors of the pictures: the two-color combinations most prevalent across the training data are found, the two-color combination most prevalent in each test image is found, and the result values are corrected according to the color-combination distribution. The result value obtained after the model classifies an image's emotion is weighted using expressions based on the log function and the exponential function. Emotion6, classified into six emotions, and ArtPhoto, classified into eight categories, were used as image data. The Densenet169, Mnasnet, Resnet101, Resnet152, and Vgg19 architectures were used for the CNN model, and performance was compared before and after applying the two-stage learning to each. 
Inspired by color psychology, which deals with the relationship between colors and emotions, we studied how to improve the accuracy of an image sentiment classifier by modifying its result values based on color. Sixteen colors were used: red, orange, yellow, green, blue, indigo, purple, turquoise, pink, magenta, brown, gray, silver, gold, white, and black. Using scikit-learn's clustering, the seven colors primarily distributed in each image are identified. The RGB coordinates of these colors are then compared with the RGB coordinates of the 16 colors above; that is, each is converted to the closest of the 16 colors. If combinations of three or more colors were used, too many color combinations would occur and the distribution would become scattered, so each combination would have little influence on the result value. To avoid this problem, two-color combinations were found and used to weight the model. Before training, the most prevalent color combinations were found for all training data images, and the distribution of color combinations for each class was stored in a Python dictionary for use during testing. During the test, the two-color combination most prevalent in each test image is found; we then check how that combination is distributed over the training data and correct the result accordingly. We devised several equations to weight the result value from the model based on the extracted colors as described above. The data set was randomly split 80:20, with 20% held out as a test set. The remaining 80% was split into five folds for 5-fold cross-validation, and the model was trained five times with different validation sets. Finally, performance was checked on the held-out test set. 
Adam was used as the optimizer, and the learning rate was set to 0.01. Training ran for up to 20 epochs, and if the validation loss did not decrease for five epochs, the experiment was stopped. Early stopping was set to load the model with the best validation loss. Classification accuracy was better when the extracted color information was used together with the CNN than when the CNN architecture was used alone.
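
The color-extraction step described above can be sketched as follows, assuming scikit-learn's KMeans. The palette lists only a few of the paper's 16 colors, with conventional RGB values chosen here for illustration, and a random pixel array stands in for a real image.

```python
# Sketch of the two-stage color step: K-means finds the 7 dominant RGB
# clusters in an image, then each cluster center is snapped to the
# nearest color in a fixed palette.
import numpy as np
from sklearn.cluster import KMeans

palette = {
    "red": (255, 0, 0), "orange": (255, 165, 0), "yellow": (255, 255, 0),
    "green": (0, 128, 0), "blue": (0, 0, 255), "white": (255, 255, 255),
    "black": (0, 0, 0), "gray": (128, 128, 128),
}

rng = np.random.default_rng(42)
pixels = rng.integers(0, 256, size=(1000, 3))   # stand-in for image pixels

kmeans = KMeans(n_clusters=7, n_init=10, random_state=0).fit(pixels)

def nearest_color(rgb):
    # Snap an RGB point to the closest palette entry by Euclidean distance.
    names = list(palette)
    dists = [np.linalg.norm(np.asarray(rgb) - np.asarray(palette[n]))
             for n in names]
    return names[int(np.argmin(dists))]

dominant = [nearest_color(c) for c in kmeans.cluster_centers_]
print(dominant)   # 7 palette names, one per cluster
```

From here, the paper's method would count the most frequent two-color combination per image and use the per-class distribution of those combinations to weight the CNN's output.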