• Title/Summary/Keyword: order of accuracy

Pseudo Image Composition and Sensor Models Analysis of SPOT Satellite Imagery for Inaccessible Area (비접근 지역에 대한 SPOT 위성영상의 Pseudo영상 구성 및 센서모델 분석)

  • 방기인;조우석
    • Korean Journal of Remote Sensing
    • /
    • v.17 no.1
    • /
    • pp.33-44
    • /
    • 2001
  • The paper presents several satellite sensor models and satellite image decomposition methods for inaccessible areas where ground control points can hardly be acquired in conventional ways. First, 10 different satellite sensor models, which were extended from collinearity condition equations, were developed, and the behavior of each sensor model was investigated. Secondly, satellite images were decomposed and pseudo images were generated. The satellite sensor model extended from collinearity equations was represented by the six exterior orientation parameters in $1^{st}$, $2^{nd}$ and $3^{rd}$ order functions of satellite image row. Among them, the rotational angle parameters $\omega$(omega) and $\phi$(phi), which correlate highly with the positional parameters, could be assigned constant values. For inaccessible areas, satellite images were decomposed, meaning that two consecutive images were combined into one image. The combined image consists of one satellite image with ground control points and another without. In addition, a pseudo image, which is an imaginary image, was prepared from one satellite image with ground control points and another without; in other words, the pseudo image is an arbitrary image bridging two consecutive images. For the experiments, SPOT satellite images covering a similar area in different passes were used. Conclusively, it was found that the 10 different satellite sensor models and 5 different decomposition methods delivered different levels of accuracy. Among them, the satellite camera model with $1^{st}$-order functions of image row for the positional orientation parameters and the rotational angle parameter kappa, and constant rotational angle parameters omega and phi, provided the best result: a 60 m maximum error at check points with the pseudo image arrangement.
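
As a concrete illustration of the extended collinearity sensor model described in this abstract, the sketch below models the positional parameters and kappa as 1st-order functions of image row while holding omega and phi constant, the configuration the authors found best. This is a minimal reconstruction, not the authors' code; the focal length and parameter layout are assumptions.

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Sequential omega-phi-kappa rotation (angles in radians)."""
    rx = np.array([[1, 0, 0],
                   [0, np.cos(omega), -np.sin(omega)],
                   [0, np.sin(omega),  np.cos(omega)]])
    ry = np.array([[ np.cos(phi), 0, np.sin(phi)],
                   [0, 1, 0],
                   [-np.sin(phi), 0, np.cos(phi)]])
    rz = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                   [np.sin(kappa),  np.cos(kappa), 0],
                   [0, 0, 1]])
    return rz @ ry @ rx

def project(ground_pt, row, params, f=1.082):
    """Collinearity projection with row-dependent exterior orientation.

    params = (X0, X1, Y0, Y1, Z0, Z1, k0, k1, omega, phi): position and
    kappa vary linearly with image row; omega and phi are constant.
    f is a hypothetical focal length in metres (SPOT-like).
    """
    X0, X1, Y0, Y1, Z0, Z1, k0, k1, omega, phi = params
    sensor_pos = np.array([X0 + X1 * row, Y0 + Y1 * row, Z0 + Z1 * row])
    kappa = k0 + k1 * row
    m = rotation_matrix(omega, phi, kappa).T
    d = m @ (np.asarray(ground_pt, dtype=float) - sensor_pos)
    # For a pushbroom sensor the along-track coordinate x should be ~0
    # when `row` is the correct exposure row; y is the cross-track pixel.
    x = -f * d[0] / d[2]
    y = -f * d[1] / d[2]
    return x, y
```

In a bundle adjustment, the linear coefficients would be estimated from ground control points by iterating linearized collinearity equations; that step is omitted here.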

A Reflectance Normalization Via BRDF Model for the Korean Vegetation using MODIS 250m Data (한반도 식생에 대한 MODIS 250m 자료의 BRDF 효과에 대한 반사도 정규화)

  • Yeom, Jong-Min;Han, Kyung-Soo;Kim, Young-Seup
    • Korean Journal of Remote Sensing
    • /
    • v.21 no.6
    • /
    • pp.445-456
    • /
    • 2005
  • The land surface parameters should be determined with sufficient accuracy, because they play an important role in climate change near the ground. As surface reflectance presents strong anisotropy, off-nadir viewing results in a strong dependency of observations on the Sun-target-sensor geometry; these angular effects contribute random noise to the data. The principal objective of the study is to provide a database of accurate surface reflectance, with the angular effects removed, from MODIS 250m reflective channel data over Korea. The MODIS (Moderate Resolution Imaging Spectroradiometer) sensor provides visible and near-infrared channel reflectance at 250m resolution on a daily basis. Successive analytic processing steps were performed, first on a per-pixel basis to remove cloudy pixels. Geometric distortion was then corrected by nearest-neighbor resampling using a 2nd-order polynomial obtained from the geolocation information of the MODIS data set. To correct the surface anisotropy effects, this paper applied a semiempirical kernel-driven Bidirectional Reflectance Distribution Function (BRDF) model. The algorithm inverts the kernel-driven model against the angular components, namely the viewing zenith angle, solar zenith angle, viewing azimuth angle, and solar azimuth angle, from the reflectance observed by the satellite. First, sets of observations collected over a 31-day period were used to fit the BRDF model. In the next step, nadir-view reflectance normalization was carried out by modifying the angular components separated by the BRDF model for each spectral band and each pixel. Modeled reflectance values show good agreement with measured reflectance values; their RMSE (Root Mean Square Error) was about 0.01 overall (maximum = 0.03). Finally, we provide a normalized surface reflectance database consisting of 36 images for 2001 over Korea.
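
A minimal sketch of the kernel-driven BRDF inversion and nadir normalization workflow this abstract describes. The RossThick volumetric kernel is a standard choice in semiempirical kernel-driven models; since the abstract does not specify which geometric kernel variant was used, it is passed in as a callable placeholder.

```python
import numpy as np

def ross_thick(sza, vza, raa):
    """RossThick volumetric scattering kernel (angles in radians)."""
    cos_xi = (np.cos(sza) * np.cos(vza)
              + np.sin(sza) * np.sin(vza) * np.cos(raa))
    xi = np.arccos(np.clip(cos_xi, -1.0, 1.0))   # phase angle
    return (((np.pi / 2 - xi) * np.cos(xi) + np.sin(xi))
            / (np.cos(sza) + np.cos(vza)) - np.pi / 4)

def invert_brdf(refl, sza, vza, raa, k_geo):
    """Least-squares fit of R = f_iso + f_vol*K_vol + f_geo*K_geo over
    one pixel's 31-day set of cloud-free observations (1-D arrays)."""
    design = np.column_stack([np.ones_like(refl),
                              ross_thick(sza, vza, raa),
                              k_geo(sza, vza, raa)])
    coeffs, *_ = np.linalg.lstsq(design, refl, rcond=None)
    return coeffs                                 # (f_iso, f_vol, f_geo)

def normalize_to_nadir(refl_obs, coeffs, sza, vza, raa, sza_ref, k_geo):
    """Scale an observation by modelled(nadir) / modelled(actual geometry)."""
    def model(s, v, r):
        return (coeffs[0] + coeffs[1] * ross_thick(s, v, r)
                + coeffs[2] * k_geo(s, v, r))
    return refl_obs * model(sza_ref, 0.0, 0.0) / model(sza, vza, raa)
```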

Comparison of Natural Flow Estimates for the Han River Basin Using TANK and SWAT Models (TANK 모형과 SWAT 모형을 이용한 한강유역의 자연유출량 산정 비교)

  • Kim, Chul-Gyum;Kim, Nam-Won
    • Journal of Korea Water Resources Association
    • /
    • v.45 no.3
    • /
    • pp.301-316
    • /
    • 2012
  • Two models, TANK and SWAT (Soil and Water Assessment Tool), were compared for simulating natural flows in the Paldang Dam upstream areas of the Han River basin in order to understand the limitations of TANK and to review the applicability and capability of SWAT. For comparison, simulation results from previous research work were used. In the results for the calibrated watersheds (Chungju Dam and Soyanggang Dam), both models provided promising results for forecasting daily flows, with Nash-Sutcliffe model efficiencies of around 0.8. TANK simulated observations during some peak flood seasons better than SWAT, while it showed poor results during dry seasons; in particular, its simulated flows never fell below a certain value. This can be explained by TANK having been calibrated for relatively large flows rather than small ones. SWAT results showed relatively good agreement with observed flows except for some flood flows, and simulated inflows at the Paldang Dam, considering discharges from upper dams, coincided with observations with a model efficiency of around 0.9. This demonstrates SWAT's applicability, with higher accuracy, in predicting natural flows without dam operation or artificial water uses, and in assessing flow variations before and after dam development. The two models were also compared for other watersheds (Pyeongchang-A, Dalcheon-B, Seomgang-B, Inbuk-A, Hangang-D, and Hongcheon-A) to which the calibrated TANK parameters were applied. The results were similar to those for the calibrated watersheds: TANK simulated smaller flows poorly except for some flood flows, and showed the same problem of staying above a certain value in dry seasons. This indicates that TANK application may carry serious uncertainties in estimating low flows, an important index in water resources planning and management. Therefore, in order to reflect the complex physical characteristics of Korean watersheds, and to manage water resources efficiently as land use and water use change with urbanization or climate change in the future, it is necessary to utilize a physically based watershed model like SWAT rather than an existing conceptual lumped model like TANK.
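
For reference, the Nash-Sutcliffe model efficiency used above to score both models can be computed as follows (a small helper, not from the paper):

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    1.0 is a perfect fit; the study reports values around 0.8-0.9."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```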

Pseudo Image Composition and Sensor Models Analysis of SPOT Satellite Imagery of Non-Accessible Area (비접근 지역에 대한 SPOT 위성영상의 Pseudo영상 구성 및 센서모델 분석)

  • 방기인;조우석
    • Proceedings of the KSRS Conference
    • /
    • 2001.03a
    • /
    • pp.140-148
    • /
    • 2001
  • The satellite sensor model is typically established using ground control points acquired by ground survey or from existing topographic maps. In cases where the targeted area cannot be accessed and topographic maps are not available, it is difficult to obtain ground control points, so geospatial information cannot be derived from the satellite imagery. The paper presents several satellite sensor models and satellite image decomposition methods for non-accessible areas where ground control points can hardly be acquired in conventional ways. First, 10 different satellite sensor models, which were extended from collinearity condition equations, were developed, and the behavior of each sensor model was investigated. Secondly, satellite images were decomposed and pseudo images were generated. The satellite sensor model extended from collinearity equations was represented by the six exterior orientation parameters in $1^{st}$, $2^{nd}$ and $3^{rd}$ order functions of satellite image row. Among them, the rotational angle parameters $\omega$(omega) and $\phi$(phi), which correlate highly with the positional parameters, could be assigned constant values. For non-accessible areas, satellite images were decomposed, meaning that two consecutive images were combined into one image. The combined image consists of one satellite image with ground control points and another without. In addition, a pseudo image, which is an imaginary image, was prepared from one satellite image with ground control points and another without; in other words, the pseudo image is an arbitrary image bridging two consecutive images. For the experiments, SPOT satellite images covering a similar area in different passes were used. Conclusively, it was found that the 10 different satellite sensor models and 5 different decomposition methods delivered different levels of accuracy. Among them, the satellite camera model with $1^{st}$-order functions of image row for the positional orientation parameters and the rotational angle parameter kappa, and constant rotational angle parameters omega and phi, provided the best result: a 60 m maximum error at check points with the pseudo image arrangement.

Suggestion of Urban Regeneration Type Recommendation System Based on Local Characteristics Using Text Mining (텍스트 마이닝을 활용한 지역 특성 기반 도시재생 유형 추천 시스템 제안)

  • Kim, Ikjun;Lee, Junho;Kim, Hyomin;Kang, Juyoung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.149-169
    • /
    • 2020
  • "The Urban Renewal New Deal project", one of the government's major national projects, is about developing underdeveloped areas by investing 50 trillion won in 100 locations on the first year and 500 over the next four years. This project is drawing keen attention from the media and local governments. However, the project model which fails to reflect the original characteristics of the area as it divides project area into five categories: "Our Neighborhood Restoration, Housing Maintenance Support Type, General Neighborhood Type, Central Urban Type, and Economic Base Type," According to keywords for successful urban regeneration in Korea, "resident participation," "regional specialization," "ministerial cooperation" and "public-private cooperation", when local governments propose urban regeneration projects to the government, they can see that it is most important to accurately understand the characteristics of the city and push ahead with the projects in a way that suits the characteristics of the city with the help of local residents and private companies. In addition, considering the gentrification problem, which is one of the side effects of urban regeneration projects, it is important to select and implement urban regeneration types suitable for the characteristics of the area. In order to supplement the limitations of the 'Urban Regeneration New Deal Project' methodology, this study aims to propose a system that recommends urban regeneration types suitable for urban regeneration sites by utilizing various machine learning algorithms, referring to the urban regeneration types of the '2025 Seoul Metropolitan Government Urban Regeneration Strategy Plan' promoted based on regional characteristics. There are four types of urban regeneration in Seoul: "Low-use Low-Level Development, Abandonment, Deteriorated Housing, and Specialization of Historical and Cultural Resources" (Shon and Park, 2017). In order to identify regional characteristics, approximately 100,000 text data were collected for 22 regions where the project was carried out for a total of four types of urban regeneration. Using the collected data, we drew key keywords for each region according to the type of urban regeneration and conducted topic modeling to explore whether there were differences between types. As a result, it was confirmed that a number of topics related to real estate and economy appeared in old residential areas, and in the case of declining and underdeveloped areas, topics reflecting the characteristics of areas where industrial activities were active in the past appeared. In the case of the historical and cultural resource area, since it is an area that contains traces of the past, many keywords related to the government appeared. Therefore, it was possible to confirm political topics and cultural topics resulting from various events. Finally, in the case of low-use and under-developed areas, many topics on real estate and accessibility are emerging, so accessibility is good. It mainly had the characteristics of a region where development is planned or is likely to be developed. Furthermore, a model was implemented that proposes urban regeneration types tailored to regional characteristics for regions other than Seoul. Machine learning technology was used to implement the model, and training data and test data were randomly extracted at an 8:2 ratio and used. 
In order to compare the performance between various models, the input variables are set in two ways: Count Vector and TF-IDF Vector, and as Classifier, there are 5 types of SVM (Support Vector Machine), Decision Tree, Random Forest, Logistic Regression, and Gradient Boosting. By applying it, performance comparison for a total of 10 models was conducted. The model with the highest performance was the Gradient Boosting method using TF-IDF Vector input data, and the accuracy was 97%. Therefore, the recommendation system proposed in this study is expected to recommend urban regeneration types based on the regional characteristics of new business sites in the process of carrying out urban regeneration projects."
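
A minimal sketch of the best-performing configuration reported above (TF-IDF vector input with a Gradient Boosting classifier and an 8:2 random split). The documents and labels below are hypothetical placeholders standing in for the roughly 100,000 collected regional texts and their four regeneration-type labels.

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical stand-in corpus: one short document per region.
texts = ["old housing repair alley", "deserted factory industry decline",
         "historic palace heritage tourism", "vacant lot transit development",
         "aging apartments parking", "closed mill brownfield jobs",
         "hanok culture festival archive", "underused depot station growth"]
labels = ["deteriorated housing", "abandonment", "historic-cultural",
          "low-use development"] * 2

# 8:2 random split, as in the study.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=42)

model = make_pipeline(TfidfVectorizer(), GradientBoostingClassifier())
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```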

Studies on the analysis of phytin by the Chelatometric method (Chelate 법(法)에 의(依)한 Phytin 분석(分析)에 관(關)한 연구(硏究))

  • Shin, Jai-Doo
    • Applied Biological Chemistry
    • /
    • v.10
    • /
    • pp.1-13
    • /
    • 1968
  • Phytin is a salt (mainly of calcium and magnesium) of phytic acid, and its purity and molecular formula can be determined by assaying the contents of phosphorus, calcium and magnesium in phytin. In order to devise a new method for the quantitative analysis of the three elements in phytin, the following chelatometric method was developed: 1) As the pretreatment for phytin analysis, the sample was ashed at $550{\sim}600^{\circ}C$ in the presence of concentrated nitric acid. This dry process is more accurate than the wet process. 2) Phosphorus, calcium and magnesium were analyzed by the conventional method and by the new method described here, for the phytin sample decomposed by the dry process. The ashed phytin solution in hydrochloric acid was partitioned into cation and anion fractions by means of a cation exchange resin. A portion of the cation fraction was adjusted to pH 7.0, then readjusted to pH 10 and titrated with standard EDTA solution using the BT [Eriochrome Black T] indicator to obtain the combined value of calcium and magnesium. Another portion of the cation fraction was brought to pH 7.0 and a small volume of standard EDTA solution was added; the pH was then adjusted to $12{\sim}13$ with 8 N KOH and the solution was titrated with standard EDTA solution in the presence of the N-N [2-hydroxy-1-(2-hydroxy-4-sulfo-1-naphthylazo)-3-naphthoic acid] diluted powder indicator to obtain the calcium content. The magnesium content was calculated from the difference between the two values. From the anion fraction, a magnesium ammonium phosphate precipitate was obtained. The precipitate was dissolved in hydrochloric acid and a standard EDTA solution was added; the solution was adjusted to pH 7.0, readjusted to pH 10.0 with a buffer solution, and titrated with a standard magnesium sulfate solution in the presence of the BT indicator to obtain the phosphorus content. The analytical values for phosphorus, calcium and magnesium were 98.9%, 97.1% and 99.1%, respectively, in reference to the theoretical values for the formula $C_6H_6O_{24}P_6Mg_4CaNa_2{\cdot}5H_2O$. Statistical analysis indicated good agreement between the theoretical and experimental values. On the other hand, the values observed for the three elements by the conventional method were 92.4%, 86.8% and 93.8%, respectively, revealing a remarkable difference from the theoretical values. 3) When sodium phytate was admixed with starch and subjected to the analysis of phosphorus, calcium and magnesium by the chelatometric method, their recovery was almost 100%. 4) In order to confirm the accuracy of this method, phytic acid was reacted with calcium chloride and magnesium chloride in the molar ratio of phytic acid : calcium chloride : magnesium chloride = 1:5:20 to obtain sodium phytate containing one calcium atom and four magnesium atoms per molecule. The analytical values for phosphorus, calcium and magnesium coincided with those determined by the aforementioned method. The new method, employing the dry process, an ion exchange resin, and chelatometric assay of phosphorus, calcium and magnesium, is considered accurate and rapid for the determination of phytin.
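
The chelatometric bookkeeping described above reduces to simple titration arithmetic. The sketch below uses hypothetical volumes and molarities purely to illustrate the three determinations (Ca+Mg at pH 10, Ca alone at pH 12-13 with Mg by difference, and P by EDTA back-titration with standard MgSO4); none of the numbers come from the paper.

```python
# Hypothetical standard solutions and burette readings, for illustration only.
EDTA_M = 0.01      # mol/L, standard EDTA
MGSO4_M = 0.01     # mol/L, standard magnesium sulfate

v_edta_ca_mg = 12.40   # mL at pH 10 (BT indicator)    -> Ca + Mg together
v_edta_ca = 2.50       # mL at pH 12-13 (N-N indicator) -> Ca only
n_ca = EDTA_M * v_edta_ca             # mmol Ca (EDTA forms 1:1 complexes)
n_mg = EDTA_M * v_edta_ca_mg - n_ca   # mmol Mg, by difference

# Phosphorus: MgNH4PO4 precipitate dissolved, excess EDTA added,
# unreacted EDTA back-titrated with standard MgSO4; mmol P = mmol Mg bound.
v_edta_added = 20.00   # mL EDTA added in excess
v_mgso4_back = 9.85    # mL MgSO4 consumed in the back-titration
n_p = EDTA_M * v_edta_added - MGSO4_M * v_mgso4_back

print(f"Ca {n_ca:.3f} mmol, Mg {n_mg:.3f} mmol, P {n_p:.3f} mmol")
```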

Development of an Offline Based Internal Organ Motion Verification System during Treatment Using Sequential Cine EPID Images (연속촬영 전자조사 문 영상을 이용한 오프라인 기반 치료 중 내부 장기 움직임 확인 시스템의 개발)

  • Ju, Sang-Gyu;Hong, Chae-Seon;Huh, Woong;Kim, Min-Kyu;Han, Young-Yih;Shin, Eun-Hyuk;Shin, Jung-Suk;Kim, Jing-Sung;Park, Hee-Chul;Ahn, Sung-Hwan;Lim, Do-Hoon;Choi, Doo-Ho
    • Progress in Medical Physics
    • /
    • v.23 no.2
    • /
    • pp.91-98
    • /
    • 2012
  • Verification of internal organ motion during treatment, and its feedback, is essential for accurate dose delivery to a moving target. We developed an offline internal organ motion verification system (IMVS) using cine EPID images and evaluated its accuracy and availability in a phantom study. For verification of organ motion using live cine EPID images, a pattern matching algorithm was employed in the self-developed analysis software, using an internal surrogate that is clearly distinguishable and represents organ motion in the treatment field, such as the diaphragm. For the system performance test, we developed a linear motion phantom consisting of a human-body-shaped phantom with a fake tumor in the lung, a linear motion cart, and control software. The phantom was operated with a motion of 2 cm at 4 sec per cycle, and cine EPID images were obtained at rates of 3.3 and 6.6 frames per sec (2 MU/frame) with $1,024{\times}768$ pixels on a linear accelerator (10 MV X-rays). Organ motion of the target was tracked using the self-developed analysis software. To evaluate accuracy, the results were compared with the planned data of the motion phantom and with data from a video-image-based tracking system (RPM, Varian, USA) using an external surrogate. For quantitative analysis, we analyzed the correlation between the two data sets in terms of the average cycle (peak to peak), amplitude, and pattern (RMS, root mean square) of motion. The average cycles of motion from the IMVS and the RPM system were $3.98{\pm}0.11$ (IMVS 3.3 fps), $4.005{\pm}0.001$ (IMVS 6.6 fps), and $3.95{\pm}0.02$ (RPM) sec, respectively, in good agreement with the actual value (4 sec/cycle). The average amplitude of motion tracked by our system was $1.85{\pm}0.02$ cm (3.3 fps) and $1.94{\pm}0.02$ cm (6.6 fps), differing from the actual value (2 cm) by 0.15 cm (7.5% error) and 0.06 cm (3% error), respectively, due to the time resolution of image acquisition. In the analysis of the motion pattern, the RMS value from the cine EPID images at 3.3 fps (0.1044) was slightly larger than that at 6.6 fps (0.0480). The organ motion verification system using sequential cine EPID images with an internal surrogate represented the motion well, within 3% error, in this preliminary phantom study. The system can be implemented for clinical purposes, including verification of organ motion during treatment against 4D treatment planning data and its feedback for accurate dose delivery to a moving target.
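
A minimal sketch (not the IMVS software) of the quantitative comparison described above: the average peak-to-peak cycle, the amplitude, and the RMS agreement between a tracked motion trace and a reference trace.

```python
import numpy as np

def motion_stats(t, y, y_ref):
    """t: frame times (s); y: tracked positions (cm); y_ref: reference."""
    peaks = [i for i in range(1, len(y) - 1)
             if y[i - 1] <= y[i] > y[i + 1]]      # local maxima
    cycle = np.diff(t[peaks]).mean()              # average peak-to-peak period
    amplitude = y.max() - y.min()                 # full motion range
    rms = np.sqrt(np.mean((y - y_ref) ** 2))      # pattern agreement
    return cycle, amplitude, rms

# e.g. an ideal 4 s/cycle, 2 cm trace sampled at 3.3 frames/s:
t = np.arange(0.0, 60.0, 1 / 3.3)
ideal = 1.0 + np.sin(2 * np.pi * t / 4.0)         # 2 cm peak-to-peak
print(motion_stats(t, ideal, ideal))              # (~4.0 s, ~2 cm, 0.0)
```

Note how the finite sampling rate slightly underestimates the amplitude, consistent with the error the study attributes to the time resolution of image acquisition.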

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.167-181
    • /
    • 2018
  • Over the past decade, deep learning has been in the spotlight among machine learning algorithms. In particular, CNN (Convolutional Neural Network), known as an effective solution for recognizing and classifying images or voices, has been widely applied to classification and prediction problems. In this study, we investigate how to apply CNN to business problem solving. Specifically, this study proposes to apply CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNN has strength in interpreting images. Thus, the model proposed in this study adopts CNN as a binary classifier that predicts the stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements and predict future price movements. Our proposed model, named 'CNN-FG (Convolutional Neural Network using Fluctuation Graph)', consists of five steps. In the first step, it divides the dataset into intervals of 5 days. In step 2, it creates time series graphs for the divided dataset. The size of the image in which the graph is drawn is $40(pixels){\times}40(pixels)$, and the graph of each independent variable is drawn in a different color. In step 3, the model converts the images into matrices: each image becomes a combination of three matrices expressing the color values on the R (red), G (green), and B (blue) scales. In the next step, it splits the graph images into training and validation datasets; we used 80% of the total dataset for training and the remaining 20% for validation. Finally, CNN classifiers are trained on the training images. Regarding the parameters of CNN-FG, we adopted two convolution filter banks ($5{\times}5{\times}6$ and $5{\times}5{\times}9$) in the convolution layer. In the pooling layer, a $2{\times}2$ max pooling filter was used. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend, the other for a downward trend). The activation functions for the convolution layer and the hidden layers were set to ReLU (Rectified Linear Unit), and that for the output layer to the Softmax function. To validate CNN-FG, we applied it to the prediction of the KOSPI200 over 2,026 days in eight years (2009 to 2016). To match the proportions of the two groups in the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset from 80% of the total dataset (1,560 samples) and the validation dataset from the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators popularly used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), CCI (commodity channel index), and so on. To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models.
Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on these graphs can be effective in terms of prediction accuracy. Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
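
The CNN-FG architecture as described can be sketched as follows, reading the two $5{\times}5$ filter banks as successive convolution layers with 6 and 9 filters (one plausible reading of the text; this is an illustrative reconstruction, not the authors' code):

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(40, 40, 3)),             # 40x40 RGB graph images
    layers.Conv2D(6, (5, 5), activation='relu'),
    layers.Conv2D(9, (5, 5), activation='relu'),
    layers.MaxPooling2D((2, 2)),                 # 2x2 max pooling
    layers.Flatten(),
    layers.Dense(900, activation='relu'),        # hidden layer 1
    layers.Dense(32, activation='relu'),         # hidden layer 2
    layers.Dense(2, activation='softmax'),       # upward vs downward
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
```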

Analysis of the Time-dependent Relation between TV Ratings and the Content of Microblogs (TV 시청률과 마이크로블로그 내용어와의 시간대별 관계 분석)

  • Choeh, Joon Yeon;Baek, Haedeuk;Choi, Jinho
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.1
    • /
    • pp.163-176
    • /
    • 2014
  • Social media has become the platform for users to communicate their activities, status, emotions, and experiences to other people. In recent years, microblogs such as Twitter have gained popularity because of their ease of use, speed, and reach. Compared to a conventional web blog, a microblog lowers users' effort and investment in content generation by encouraging shorter posts. There has been a lot of research into capturing social phenomena and analyzing the chatter of microblogs; however, measuring television ratings has been given little attention so far. Currently, the most common method of measuring TV ratings uses an electronic metering device installed in a small number of sampled households. Microblogs allow users to post short messages, share daily updates, and conveniently keep in touch, and microblog users interact with each other while watching television or movies or visiting a new place. For measuring TV ratings, some features are significant during certain hours of the day or days of the week, whereas the same features are meaningless during other time periods. Thus, the importance of features can change over the course of the day, and a model capturing this time-sensitive relevance is required to estimate TV ratings. Therefore, modeling the time-related characteristics of features is key when measuring TV ratings through microblogs. We show that capturing the time-dependency of features is vitally necessary for improving the accuracy of TV ratings measurement. To explore the relationship between the content of microblogs and TV ratings, we collected Twitter data using the Get Search component of the Twitter REST API from January 2013 to October 2013. There are about 300 thousand posts in our data set. After excluding data such as advertising or promoted tweets, we selected 149 thousand tweets for analysis. The number of tweets reaches its maximum level on the broadcasting day and increases rapidly around the broadcasting time. This stems from the characteristics of the public channel, which broadcasts the program at a predetermined time. From our analysis, we find that count-based features such as the number of tweets or retweets have a low correlation with TV ratings, implying that a simple tweet rate does not reflect the satisfaction with or response to the TV programs. Content-based features extracted from the content of tweets have a relatively high correlation with TV ratings. Further, some emoticons or newly coined words that are not tagged in the morpheme extraction process have a strong relationship with TV ratings. We also find a time-dependency in the correlation of features between the periods before and after broadcasting time. Since the TV program is broadcast regularly at a predetermined time, users post tweets expressing their expectation for the program or their disappointment at not being able to watch it. The features highly correlated before the broadcast differ from those after broadcasting, which shows that the relevance of words to TV programs can change according to the time of the tweets. Among the 336 words that fulfill the minimum requirements for candidate features, 145 words have their highest correlation before the broadcasting time, whereas 68 words reach their highest correlation after broadcasting. Interestingly, some words that express the impossibility of watching the program show high relevance despite carrying a negative meaning.
Understanding the time-dependency of features can help improve the accuracy of TV ratings measurement. This research provides a basis for estimating the response to, or satisfaction with, broadcast programs using the time dependency of words in Twitter chatter. More research is needed to refine the methodology for predicting or measuring TV ratings.
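
A minimal sketch of the time-dependent feature analysis described above, with a hypothetical data layout: per-broadcast word counts are correlated with ratings separately for the before- and after-broadcast windows, so a word's relevance can differ between the two periods.

```python
import numpy as np

def windowed_correlations(episodes, ratings, word):
    """episodes: one dict per broadcast with 'before'/'after' word-count
    dicts (hypothetical layout); ratings: the matching list of TV ratings."""
    ratings = np.asarray(ratings, dtype=float)
    corr = {}
    for window in ('before', 'after'):
        counts = np.array([ep[window].get(word, 0) for ep in episodes],
                          dtype=float)
        corr[window] = np.corrcoef(counts, ratings)[0, 1]
    return corr   # a word may correlate strongly in only one window
```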

Evaluation of Metal Volume and Proton Dose Distribution Using MVCT for Head and Neck Proton Treatment Plan (두경부 양성자 치료계획 시 MVCT를 이용한 Metal Volume 평가 및 양성자 선량분포 평가)

  • Seo, Sung Gook;Kwon, Dong Yeol;Park, Se Joon;Park, Yong Chul;Choi, Byung Ki
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.31 no.1
    • /
    • pp.25-32
    • /
    • 2019
  • Purpose: During head and neck radiation therapy, metal artifacts resulting from dental implants obscure the size, shape, and volume of prosthetic appliances, reducing the accuracy of contouring targets and surrounding normal tissues in the radiation treatment plan. Therefore, the purpose of this study is to obtain images of tooth-sized metal pieces with MVCT, SMART-MAR CT and KVCT, evaluate the measured volumes, apply them to the proton therapy plan, and analyze the differences in dose distribution. Materials and Methods: Metal A ($0.5{\times}0.5{\times}0.5cm$), Metal B ($1{\times}1{\times}1cm$), and Metal C ($1{\times}2{\times}1cm$), similar in size to an inlay, a crown, and a bridge, respectively, taking common dental treatments into account, were made of Cerrobend ($9.64g/cm^3$). Each metal was placed in an in-house head & neck phantom, and the KVCT and SMART-MAR images were obtained with a CT simulator (Discovery CT 590RT, GE, USA) at a slice thickness of 1.25 mm. The MVCT images were obtained in the same way with the $RADIXACT^{(R)}$ Series (Accuray $Precision^{(R)}$, USA). The metal images obtained through MVCT, SMART-MAR CT, and KVCT were compared in X, Y, and Z axis sizes and in volume, based on the autocontour threshold raw values in the treatment planning system Pinnacle (Ver 9.10, Philips, Palo Alto, USA). To compare the differences in dose distribution, a proton treatment plan (RayStation 5.1, RaySearch, USA) was set up by fusing the contour of metal B ($1{\times}1{\times}1cm$) obtained by each CT into KVCT. Results: Relative to the actual sizes, the measured volumes were: Metal A (MVCT: 1.0 times, SMART-MAR CT: 1.84 times, KVCT: 1.92 times), Metal B (MVCT: 1.02 times, SMART-MAR CT: 1.47 times, KVCT: 1.82 times), and Metal C (MVCT: 1.0 times, SMART-MAR CT: 1.46 times, KVCT: 1.66 times). MVCT measured the actual metal volume most closely. Applying the volume of metal B to the proton treatment plan, the dose to the $D_{99\%}$ volume was measured as MVCT: 3094 CcGE, SMART-MAR CT: 2902 CcGE, and KVCT: 2880 CcGE, against the reference of 3082 CcGE. Conclusion: Overall volume and the X and Z axes matched the actual sizes most closely in MVCT, while the Y axis (the superior-inferior direction) was consistent in length across the CTs. MVCT, giving the most faithful size, shape, and volume of the metal, showed the best dose distribution for head and neck proton therapy. Thus, applying the MVCT-derived contour of the prosthetic appliance to KVCT for the proton treatment plan is expected to be very useful.
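
A minimal sketch of the volume measurement underlying the comparison above: thresholding a CT voxel array at a raw-value cut-off (as the Pinnacle autocontour does) and reporting the segmented metal volume relative to the known physical volume. The array layout, threshold, and voxel size are assumptions for illustration.

```python
import numpy as np

def metal_volume_ratio(ct, threshold, voxel_mm, true_volume_cm3):
    """ct: 3-D array of raw CT values; voxel_mm: (dx, dy, dz) in mm."""
    n_voxels = int(np.count_nonzero(ct >= threshold))
    segmented_cm3 = n_voxels * float(np.prod(voxel_mm)) / 1000.0  # mm^3 -> cm^3
    return segmented_cm3 / true_volume_cm3   # 1.0 means a perfect match

# e.g. a synthetic 1x1x1 cm metal cube in 0.5 mm voxels:
ct = np.zeros((100, 100, 100))
ct[40:60, 40:60, 40:60] = 3000.0
print(metal_volume_ratio(ct, 2000.0, (0.5, 0.5, 0.5), 1.0))   # -> 1.0
```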