• Title/Summary/Keyword: Multiple-input


The Effect of Variations in the Vertical Position of the Bracket on the Crown Inclination (브라켓의 수직적 위치변동에 따른 치관경사도변화에 관한 연구)

  • Chang, Yeon-Joo;Kim, Tae-Woo;Yoo, Kwan-Hee
• The Korean Journal of Orthodontics
    • /
    • v.32 no.6 s.95
    • /
    • pp.401-411
    • /
    • 2002
  • Precise bracket positioning is essential in modern orthodontics; however, the vertical position of a bracket can vary for several reasons. The purpose of this study was to evaluate the effect of variations in vertical bracket position on crown inclination in Korean patients with normal occlusion. From a larger group of normal occlusions obtained from the Department of Orthodontics, College of Dentistry, Seoul National University, 10 final subjects (6 males and 4 females, average age 22.3 years) were selected. The dental models of each subject were scanned three-dimensionally with a laser scanner, and measurements made on the scanned casts were input into a computer program, from which the occlusal plane and the bracket plane were determined. A tooth plane was then constructed to measure the crown inclination on the bracket plane of each tooth. From a practical standpoint, this yielded the extent to which the torque of a tooth would change as the bracket was moved vertically (by ±0.5 mm, ±1.0 mm, or ±1.5 mm) from its ideal position. One-way analysis of variance (ANOVA) was used to compare the groups at different vertical distances from the bracket plane for each tooth, followed by Duncan's multiple comparison test. There were statistically significant differences in crown inclination among the vertical-distance groups for the upper central incisor, upper lateral incisor, upper canine, upper first and second molars, lower first and second premolars, and lower first and second molars (p < 0.05). For the upper anterior teeth, upper molars, lower premolars, and lower molars, the torque change produced by a vertical displacement of the bracket differed depending on the direction of displacement, occlusal or gingival. This implies that the torque of these teeth should be handled carefully during orthodontic treatment. In circumstances where the bracket must be positioned more gingivally or occlusally, the chart of torque alteration for each tooth provided in this study, together with its specified bracket prescription, would be useful.
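As a hedged illustration of the statistical procedure this abstract describes (one-way ANOVA across vertical-displacement groups, followed by a post-hoc multiple comparison), the Python sketch below uses hypothetical inclination measurements; the data are placeholders, and Tukey's HSD stands in for Duncan's test, which SciPy does not provide.

```python
# Illustrative sketch of the study's statistical procedure: one-way ANOVA
# across vertical bracket-displacement groups, then a post-hoc comparison.
# The displacement levels match the abstract (+/-0.5, 1.0, 1.5 mm); the
# inclination values below are hypothetical placeholders, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
displacements_mm = [-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5]
# One array of crown-inclination measurements (degrees) per group.
groups = [rng.normal(loc=7 + 2 * d, scale=1.0, size=10) for d in displacements_mm]

f_stat, p_value = stats.f_oneway(*groups)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# The paper uses Duncan's test; SciPy does not ship it, so Tukey's HSD is
# shown here as a stand-in post-hoc multiple-comparison step.
if p_value < 0.05:
    print(stats.tukey_hsd(*groups))
```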

Predicting Regional Soybean Yield using Crop Growth Simulation Model (작물 생육 모델을 이용한 지역단위 콩 수량 예측)

  • Ban, Ho-Young;Choi, Doug-Hwan;Ahn, Joong-Bae;Lee, Byun-Woo
    • Korean Journal of Remote Sensing
    • /
    • v.33 no.5_2
    • /
    • pp.699-708
    • /
    • 2017
  • The present study aimed to develop an approach for predicting soybean yield with a crop growth simulation model at the regional level, where detailed, site-specific information on cultivation management practices is not easily accessible for model input. The CROPGRO-Soybean model included in the Decision Support System for Agrotechnology Transfer (DSSAT) was employed, and Illinois, a major soybean production region of the USA, was selected as the study region. As a first step toward predicting Illinois soybean yield with the CROPGRO-Soybean model, genetic coefficients representative of each soybean maturity group (MG I~VI) were estimated through sowing-date experiments using domestic and foreign cultivars of diverse maturity at Seoul National University Farm (37.27°N, 126.99°E) over two years. The model using these representative genetic coefficients simulated the developmental stages of cultivars within each maturity group fairly well. Soybean yields for 10 km × 10 km grids across Illinois were simulated for 2000 to 2011 with weather data under 18 simulation conditions combining three maturity groups, three seeding dates, and two irrigation regimes. Planting dates and maturity groups were assigned differently to three sub-regions divided longitudinally. The yearly state yields estimated by averaging all grid yields simulated under non-irrigated and fully irrigated conditions differed substantially from the statistical yields and did not capture the annual trend of yield increase due to improved cultivation technologies. Using the grain yield data of 9 agricultural districts in Illinois, observed and estimated from the simulated grid yields under the 18 simulation conditions, a multiple regression model was constructed to estimate soybean yield at the agricultural-district level; a year variable was added to this model to reflect the yearly yield trend. The model explained the yearly and district yield variation fairly well, with a coefficient of determination of R² = 0.61 (n = 108). Yearly state yields, calculated by weighting the model-estimated yearly average district yield by the cultivation area of each district, corresponded very closely (R² = 0.80) to the yearly statistical state yields. Furthermore, the model predicted the state yield fairly well for 2012, a year whose data were not used in model construction and in which a severe yield reduction was recorded due to drought.
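A minimal sketch of the two-step estimation described above, under stated assumptions: a multiple regression mapping simulated grid yields under the 18 conditions plus a year-trend term to observed district yields, then an area-weighted aggregation of district predictions to a state yield. All column names and data below are synthetic placeholders, not the study's data.

```python
# Hedged sketch of the paper's two-step estimation: (1) regress observed
# district yields on simulated yields under 18 conditions plus a year
# trend, (2) aggregate district predictions to a state yield weighted by
# cultivation area. Data and column names are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 108  # 9 districts x 12 years, matching the abstract's n
sim_cols = [f"sim_{i}" for i in range(1, 19)]
df = pd.DataFrame(rng.normal(3.0, 0.4, size=(n, 18)), columns=sim_cols)
df["year"] = np.tile(np.arange(2000, 2012), 9)
df["area"] = rng.uniform(50, 150, size=n)
# Hypothetical observed yields: trend + signal from one condition + noise.
df["obs_yield"] = 0.02 * (df["year"] - 2000) + 0.8 * df["sim_1"] + rng.normal(0, 0.1, n)

X = sm.add_constant(df[sim_cols + ["year"]])       # year term captures the technology trend
model = sm.OLS(df["obs_yield"], X).fit()
print(model.rsquared)  # the paper reports R^2 = 0.61 on the real data

# Area-weighted state yield per year from the district-level predictions.
df["pred"] = model.predict(X)
state = df.groupby("year").apply(lambda g: (g["pred"] * g["area"]).sum() / g["area"].sum())
print(state)
```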

Simulation Approach for the Tracing the Marine Pollution Using Multi-Remote Sensing Data (다중 원격탐사 자료를 활용한 해양 오염 추적 모의 실험 방안에 대한 연구)

  • Kim, Keunyong;Kim, Euihyun;Choi, Jun Myoung;Shin, Jisun;Kim, Wonkook;Lee, Kwang-Jae;Son, Young Baek;Ryu, Joo-Hyung
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.2_2
    • /
    • pp.249-261
    • /
    • 2020
  • Coastal monitoring using multiple platforms and sensors is a very important tool for accurately understanding changes in the offshore marine environment and disasters at high temporal and spatial resolution. However, integrated observation studies using multiple platforms and sensors are scarce, and none have evaluated the efficiency and limitations of such convergence. In this study, we aimed to propose an integrated observation method using multiple remote sensing platforms and sensors, and to diagnose its utility and limitations. Integrated in situ surveys were conducted using Rhodamine WT (RWT) fluorescent dye to simulate various marine disasters. In September 2019, after the fluorescent dye was injected into the waters of the South Sea-Yeosu Sea, the distribution and movement of the RWT dye patches were detected using satellite (Kompsat-2/3/3A, Landsat-8 OLI, Sentinel-3 OLCI, and GOCI), unmanned aircraft (Mavic 2 Pro and Inspire 2), and manned aircraft platforms. The initial patch size of the RWT dye was 2,600 ㎡, and it spread to 62,000 ㎡ about 138 minutes later. The RWT patches gradually moved southwestward from the release point, similar to the pattern of the tidal current flowing southwest as the tide gradually ebbed. Unmanned aerial vehicle (UAV) imagery showed the highest spatial and temporal resolution, but covered the narrowest area. Satellite images covered a wide area, but their long revisit cycles limited operability compared with the other platforms. Sentinel-3 OLCI and GOCI had the highest spectral resolution and signal-to-noise ratio (SNR), but detection of small fluorescent dye patches was limited by their spatial resolution. The hyperspectral sensor mounted on the manned aircraft had the highest spectral resolution, but was also somewhat limited in operability. This simulation experiment confirmed that multi-platform integrated observation can significantly improve temporal, spatial, and spectral resolution. In the future, if these results are linked to coastal numerical models, the transport and diffusion of contaminants could be predicted, and the observations are expected to contribute to improving model accuracy when used as input and validation data for the numerical models.
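As a small worked example grounded only in the two patch areas and the elapsed time reported above, the following lines compute the average areal spreading rate of the dye patch; this back-of-the-envelope rate is illustrative arithmetic, not a quantity the paper reports.

```python
# Back-of-the-envelope spreading rate from the two reported patch areas.
# The inputs come from the abstract; the rate itself is illustrative
# arithmetic, not a quantity the paper reports.
initial_area_m2 = 2_600
final_area_m2 = 62_000
elapsed_min = 138

rate_m2_per_min = (final_area_m2 - initial_area_m2) / elapsed_min
print(f"mean areal spreading rate ~ {rate_m2_per_min:.0f} m^2/min")  # ~430 m^2/min
```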

Analysis of Authority Control System in Collecting Repository -from the case of Archival Management System in Korea Democracy Foundation- (수집형 기록관의 전거제어시스템 분석 - 민주화운동기념사업회 사료관리시스템의 사례를 중심으로 -)

  • Lee, Hyun-Jeong
    • The Korean Journal of Archival Studies
    • /
    • no.13
    • /
    • pp.91-134
    • /
    • 2006
  • In general, personally collected archives (manuscripts) are in poor physical condition, and contextual information about them, including the history of their production, is acquired mostly in fragments. A collecting repository therefore needs to control effectively the names of the producers of archives collected through various channels, and to accumulate provenance information, the key element for understanding the background of production. Authority control and provenance-information management must be organized from the very beginning of acquisition, which means collecting the necessary information as part of the acquisition process itself. This thesis verifies the necessity of authority control and of accumulating provenance information in a collecting repository, and suggests what should be considered in building an archival authority system. To this end, it examines the necessity of authority control in archival management and reviews the standards, work processes, and accumulation processes of archival authority control. In the archival authority system examined here, provenance-information management and authority control run through every step of archival management, from the lead file, to the producer names assigned at registration, to archival description at acquisition. A great deal of information is registered and described at the appropriate point in time, and all of it, including the authority control that governs headings, is organized for use in the intellectual management of archives and in finding aids. The features of the archival authority system are as follows. First, the authority file types required for the authority control of democracy-movement archives consist of the names of groups, persons, events, and terminology (subject names). Second, the basic record structures and description elements of the authority collection of the Korea Democracy Foundation Archives apply section 1 of ISAAR(CPF), with some necessary elements added, while detailed description rules such as word spacing and the use of periods apply section 4 of KCR, adapted to the features of the archival management system; the input format of authority records is based on EAC (Encoded Archival Context). Third, the system lets users reach the sources they want more easily by linking authority terms systematically: related terms can be connected variously and concretely as broader/narrower and earlier/later terms, expanding term relations beyond the traditional authority system, which typically expresses only related terms ("see also"). The authority control of the archival management system can thus collect and manage information on the functions and main activities of various groups, beyond its basic role of controlling headings; it can express the multiple, mediated relationships between archives and producers, and among producers; and it provides an expanded records information service that satisfies users' varied requests through indexing services.
Finally, by applying the international standard ISAAR(CPF) to authority management in this way, reorganizing the description elements into an appropriate form, and establishing the authority file types to be managed for each service, this case can serve as a reference for building archival authority systems in collecting repositories.
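To make the record structure concrete, here is a minimal, hypothetical sketch of an authority record shaped after the ISAAR(CPF) description areas mentioned above, with EAC-style links to related entities and to the archives produced; the field names are illustrative simplifications, not the Foundation's actual schema.

```python
# Minimal, hypothetical authority record shaped after the ISAAR(CPF)
# description areas referenced above, with EAC-style links to related
# entities and produced archives. Field names are illustrative
# simplifications, not the Korea Democracy Foundation's actual schema.
from dataclasses import dataclass, field

@dataclass
class AuthorityRecord:
    record_id: str
    entity_type: str            # "group" | "person" | "event" | "term"
    authorized_heading: str     # the controlled heading
    variant_headings: list[str] = field(default_factory=list)
    dates_of_existence: str = ""
    history: str = ""           # provenance / production background
    related: list[tuple[str, str]] = field(default_factory=list)
    # (relation, record_id): e.g. ("broader", ...), ("earlier", ...), ("see_also", ...)
    produced_archives: list[str] = field(default_factory=list)

rec = AuthorityRecord(
    record_id="auth-0001",
    entity_type="group",
    authorized_heading="Example Democracy Movement Council",
    variant_headings=["EDMC"],
    related=[("earlier", "auth-0002"), ("see_also", "auth-0003")],
    produced_archives=["coll-1987-001"],
)
print(rec.authorized_heading)
```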

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.175-197
    • /
    • 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been actively conducted, showing remarkable results in various fields such as classification, summarization, and generation. Among the various text analysis fields, text classification is the most widely used technology in academia and industry. Text classification includes binary classification with one label from two classes, multi-class classification with one label from several classes, and multi-label classification with multiple labels from several classes. Multi-label classification in particular requires a different training method from binary and multi-class classification because instances carry multiple labels. In addition, since the size of the label space to be predicted grows as the number of labels and classes increases, prediction becomes more difficult and performance improvement becomes harder to achieve. To overcome these limitations, research on label embedding is being actively conducted, which (i) compresses the initially given high-dimensional label space into a low-dimensional latent label space, (ii) trains a model to predict the compressed label, and (iii) restores the predicted label to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, since these techniques consider only linear relationships between labels, or compress the labels by random transformation, they cannot capture non-linear relationships between labels and therefore cannot create a latent label space that sufficiently contains the information of the original label space. Recently, there have been increasing attempts to improve performance by applying deep learning to label embedding. Label embedding using an autoencoder, a deep learning model effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding suffers a large amount of information loss when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space. This is related to the vanishing-gradient problem that occurs during backpropagation. Skip connections were devised to solve this problem: by adding a layer's input to its output, the gradient is preserved during backpropagation, so efficient learning is possible even when the network is deep. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies using skip connections in autoencoders or in the label embedding process are still scarce. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder to form a low-dimensional latent label space that reflects the information of the high-dimensional label space well. The proposed methodology was applied to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space.
Using these, we conducted an experiment that predicts the compressed keyword vector in the latent label space from the paper abstract and evaluates multi-label classification by restoring the predicted keyword vector to the original label space. As a result, the accuracy, precision, recall, and F1 score used as performance indicators were far superior for multi-label classification based on the proposed methodology compared with traditional multi-label classification methods. This indicates that the low-dimensional latent label space derived through the proposed methodology reflected the information of the high-dimensional label space well, which ultimately improved the performance of the multi-label classification itself. In addition, the utility of the proposed methodology was confirmed by comparing its performance across domain characteristics and across numbers of dimensions of the latent label space.
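The following PyTorch sketch illustrates the kind of architecture the abstract describes: a label-embedding autoencoder whose encoder and decoder each carry a skip connection. The layer sizes and the linear projections that make the residual additions dimension-compatible are assumptions for illustration, not the paper's exact configuration.

```python
# Sketch of a label-embedding autoencoder with skip connections on both
# the encoder and the decoder, in the spirit of the abstract. Layer sizes
# and the projections used for the residual additions are illustrative
# assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class SkipAutoencoder(nn.Module):
    def __init__(self, label_dim: int, hidden: int = 256, latent: int = 32):
        super().__init__()
        self.enc1 = nn.Linear(label_dim, hidden)
        self.enc2 = nn.Linear(hidden, latent)
        self.enc_skip = nn.Linear(label_dim, latent)   # projects input for the residual add
        self.dec1 = nn.Linear(latent, hidden)
        self.dec2 = nn.Linear(hidden, label_dim)
        self.dec_skip = nn.Linear(latent, label_dim)
        self.act = nn.ReLU()

    def encode(self, y):
        h = self.act(self.enc1(y))
        return self.enc2(h) + self.enc_skip(y)         # skip connection in the encoder

    def decode(self, z):
        h = self.act(self.dec1(z))
        return torch.sigmoid(self.dec2(h) + self.dec_skip(z))  # skip in the decoder

    def forward(self, y):
        return self.decode(self.encode(y))

# Usage: compress multi-hot label vectors; train with binary cross-entropy.
model = SkipAutoencoder(label_dim=1000)
y = torch.randint(0, 2, (8, 1000)).float()             # hypothetical multi-hot labels
loss = nn.functional.binary_cross_entropy(model(y), y)
```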

Study on water quality prediction in water treatment plants using AI techniques (AI 기법을 활용한 정수장 수질예측에 관한 연구)

  • Lee, Seungmin;Kang, Yujin;Song, Jinwoo;Kim, Juhwan;Kim, Hung Soo;Kim, Soojun
    • Journal of Korea Water Resources Association
    • /
    • v.57 no.3
    • /
    • pp.151-164
    • /
    • 2024
  • In water treatment plants supplying potable water, managing the chlorine concentration in processes involving pre-chlorination or intermediate chlorination requires process control. To address this, research has been conducted on water quality prediction techniques utilizing AI technology. This study developed an AI-based predictive model for automating the process control of chlorine disinfection, targeting the prediction of residual chlorine concentration downstream of the sedimentation basins in the water treatment process. An AI-based model, which learns from past water quality observations to predict future water quality, offers a simpler and more efficient approach than complex physicochemical and biological water quality models. The model was tested by predicting the residual chlorine concentration downstream of the sedimentation basins at the plant using multiple regression models and AI-based models such as Random Forest and LSTM, and the results were compared. For optimal prediction of residual chlorine concentration, the input-output structure of the AI model used the residual chlorine concentration upstream of the sedimentation basin, turbidity, pH, water temperature, electrical conductivity, inflow of raw water, alkalinity, NH3, and other variables as independent variables, and the desired residual chlorine concentration of the effluent from the sedimentation basin as the dependent variable. The independent variables were selected from data observable at the water treatment plant that influence the residual chlorine concentration downstream of the sedimentation basin. The analysis showed that the Random Forest-based model had the lowest error compared with multiple regression models, neural network models, model trees, and other Random Forest configurations. The optimal prediction of residual chlorine concentration downstream of the sedimentation basin presented in this study is expected to enable real-time control of chlorine dosing in the preceding treatment stages, thereby enhancing water treatment efficiency and reducing chemical costs.
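A hedged sketch of the modeling setup described above, using scikit-learn's RandomForestRegressor with the abstract's independent variables as feature columns; the column names follow the abstract, but the synthetic data are placeholders, not plant records.

```python
# Sketch of the abstract's input-output structure: upstream water-quality
# variables as features, downstream residual chlorine as the target, fit
# with a Random Forest. Feature names follow the abstract; the synthetic
# data below are placeholders, not actual plant records.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

features = ["upstream_cl", "turbidity", "pH", "water_temp",
            "conductivity", "raw_inflow", "alkalinity", "NH3"]
rng = np.random.default_rng(42)
X = pd.DataFrame(rng.normal(size=(500, len(features))), columns=features)
y = 0.7 * X["upstream_cl"] - 0.1 * X["water_temp"] + rng.normal(0, 0.05, 500)

# Time-ordered split (no shuffle), as is usual for process data.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, rf.predict(X_te)))
```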

Intelligent Brand Positioning Visualization System Based on Web Search Traffic Information : Focusing on Tablet PC (웹검색 트래픽 정보를 활용한 지능형 브랜드 포지셔닝 시스템 : 태블릿 PC 사례를 중심으로)

  • Jun, Seung-Pyo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.93-111
    • /
    • 2013
  • As Internet and information technology (IT) continue to develop and evolve, the issue of big data has emerged at the foreground of scholarly and industrial attention. Big data is generally defined as data that exceed the range that can be collected, stored, managed, and analyzed by existing conventional information systems; the term also refers to the new technologies designed to extract value from such data effectively. With the widespread dissemination of IT systems, continual efforts have been made in various fields of industry, such as R&D, manufacturing, and finance, to collect and analyze immense quantities of data in order to extract meaningful information and to use this information to solve various problems. Since IT has converged with various industries in many aspects, digital data are now being generated at a remarkably accelerating rate, while developments in state-of-the-art technology have led to continual enhancements in system performance. The types of big data currently receiving the most attention include information available within companies, such as information on consumer characteristics, purchase records, logistics, and logs indicating consumers' usage of products and services, as well as information accumulated outside companies, such as the web search traffic of online users, social network information, and patent information. Among these various types of big data, web searches performed by online users constitute one of the most effective and important sources of information for marketing purposes, because consumers search for information on the Internet in order to make efficient and rational choices. Recently, Google has provided public access to its information on the web search traffic of online users through a service named Google Trends. Research that uses this web search traffic information to analyze the information search behavior of online users is now receiving much attention in academia and industry. Studies using web search traffic information can be broadly classified into two fields. The first consists of empirical demonstrations that web search information can be used to forecast social phenomena, the purchasing power of consumers, the outcomes of political elections, and so on. The other field focuses on using web search traffic information to observe consumer behavior, for example by identifying the attributes of a product that consumers regard as important or by tracking changes in consumers' expectations, but relatively little research has been completed in this field. In particular, to the extent of our knowledge, hardly any studies related to brands have attempted to use web search traffic information to analyze the factors that influence consumers' purchasing activities. This study aims to demonstrate that consumers' web search traffic information can be used to derive the relations among brands and the relations between an individual brand and product attributes. When consumers input their search words on the web, they may use a single keyword, but they also often input multiple keywords to seek related information (referred to as simultaneous searching). A consumer performs a simultaneous search either to compare two product brands directly, obtaining information on their similarities and differences, or to acquire more in-depth information about a specific attribute of a specific brand.
Web search traffic data show that the quantity of simultaneous searches using certain keywords increases when the relation between them is closer in the consumer's mind, so the relations between keywords can be derived by collecting this relational data and subjecting it to network analysis. Accordingly, this study proposes a method of analyzing how brands are positioned by consumers and what relationships exist between product attributes and an individual brand, using simultaneous search traffic information, and it presents case studies demonstrating the actual application of this method, with a focus on tablet PCs, which belong to an innovative product group.
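As a hedged illustration of the proposed analysis, the sketch below builds a brand/attribute network in which edges are weighted by simultaneous-search volume; the brand names, attributes, and volumes are invented placeholders, while the study itself draws such relational data from Google Trends.

```python
# Illustration of the proposed method: treat simultaneous-search volume
# as tie strength between keywords and analyze the result as a network.
# Brand/attribute names and volumes are invented placeholders; the study
# itself draws such relational data from web search traffic (Google Trends).
import networkx as nx

co_search = {                      # (keyword A, keyword B): co-search volume
    ("BrandA", "BrandB"): 80,
    ("BrandA", "battery"): 55,
    ("BrandB", "display"): 40,
    ("BrandB", "battery"): 15,
}

G = nx.Graph()
for (a, b), volume in co_search.items():
    G.add_edge(a, b, weight=volume)

# Higher weighted degree ~ more central position in consumers' minds.
centrality = nx.degree_centrality(G)
for node, c in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(node, round(c, 2))
```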

An Intelligent Decision Support System for Selecting Promising Technologies for R&D based on Time-series Patent Analysis (R&D 기술 선정을 위한 시계열 특허 분석 기반 지능형 의사결정지원시스템)

  • Lee, Choongseok;Lee, Suk Joo;Choi, Byounggu
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.79-96
    • /
    • 2012
  • As the pace of competition dramatically accelerates and the complexity of change grows, a variety of research has been conducted to improve firms' short-term performance and to enhance their long-term survival. In particular, researchers and practitioners have paid attention to identifying promising technologies that give a firm a competitive advantage. Discovery of promising technology depends on how a firm evaluates the value of technologies, and thus many evaluation methods have been proposed. Approaches based on experts' opinions have been widely accepted for predicting the value of technologies. While this approach provides in-depth analysis and ensures the validity of analysis results, it is usually cost- and time-inefficient and is limited to qualitative evaluation. Many studies attempt to forecast the value of technology using patent information to overcome the limitations of the expert-opinion approach. Patent-based technology evaluation has served as a valuable assessment approach for technological forecasting, because a patent contains a full and practical description of a technology in a uniform structure and provides information that is not divulged in any other source. Although the patent-information approach has contributed to our understanding of how to predict promising technologies, it has some limitations: predictions are made from past patent information, and the interpretations of patent analyses are not consistent. To fill this gap, this study proposes a technology forecasting methodology that integrates the patent-information approach with an artificial intelligence method. The methodology consists of three modules: evaluation of technological promise, implementation of a technology-value prediction model, and recommendation of promising technologies. In the first module, technological promise is evaluated from three different and complementary dimensions: impact, fusion, and diffusion. The impact of a technology refers to its influence on the development and improvement of future technologies and is also clearly associated with its monetary value. The fusion of a technology denotes the extent to which it fuses different technologies, and represents the breadth of search underlying it; since fusion can be calculated per technology or per patent, this study measures two fusion indexes, one per technology and one per patent. Finally, the diffusion of a technology denotes its degree of applicability across scientific and technological fields; in the same vein, a diffusion index per technology and a diffusion index per patent are considered. In the second module, the technology-value prediction model is implemented using an artificial intelligence method. This study uses the values of the five indexes (i.e., impact index, fusion index per technology, fusion index per patent, diffusion index per technology, and diffusion index per patent) at earlier times (e.g., t-n, t-n-1, t-n-2, …) as input variables. The output variables are the values of the five indexes at time t, which are used for learning. The learning method adopted in this study is the backpropagation algorithm. In the third module, final promising technologies are recommended based on the analytic hierarchy process (AHP), which provides the relative importance of each index and yields a final promise index for each technology. The applicability of the proposed methodology is tested using U.S. patents in international patent class G06F (i.e., electronic digital data processing) from 2000 to 2008. The results show that the mean absolute error of predictions produced by the proposed methodology is lower than that of multiple regression analysis for the fusion indexes; for the remaining indexes, however, the mean absolute error of the proposed methodology is slightly higher than that of multiple regression analysis. These unexpected results may be explained, in part, by the small number of patents: since this study uses only patent data in class G06F, the sample is relatively small, leading to learning that is too incomplete to satisfy a complex artificial intelligence structure. In addition, the fusion index per technology and the impact index are found to be important criteria for predicting promising technology. This study extends existing knowledge by proposing a new methodology for predicting technology value that integrates patent information analysis with an artificial intelligence network. It helps managers who plan technology development, and policy makers who implement technology policy, by providing a quantitative prediction methodology, and it offers other researchers a deeper understanding of the complex field of technological forecasting.
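A minimal sketch of the second module under stated assumptions: a backpropagation-trained network that maps the five indexes at lagged times to their values at time t. The lag depth, layer sizes, and synthetic index series are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of the second module: a backpropagation-trained network
# predicting the five patent indexes at time t from their lagged values.
# Lag depth, layer sizes, and the synthetic index series are illustrative
# assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn

n_indexes, n_lags = 5, 3                 # five indexes, three lagged time steps
series = torch.rand(50, n_indexes)       # hypothetical index values over time

# Each input row stacks the five indexes over the preceding n_lags steps.
X = torch.stack([series[i:i + n_lags].flatten() for i in range(len(series) - n_lags)])
y = series[n_lags:]                      # targets: the five indexes at time t

net = nn.Sequential(nn.Linear(n_indexes * n_lags, 32), nn.ReLU(), nn.Linear(32, n_indexes))
opt = torch.optim.SGD(net.parameters(), lr=0.05)
for _ in range(200):                     # gradient descent via backpropagation
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(X), y)
    loss.backward()
    opt.step()
print(float(loss))
```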

Behavioural Analysis of Password Authentication and Countermeasure to Phishing Attacks - from User Experience and HCI Perspectives (사용자의 패스워드 인증 행위 분석 및 피싱 공격시 대응방안 - 사용자 경험 및 HCI의 관점에서)

  • Ryu, Hong Ryeol;Hong, Moses;Kwon, Taekyoung
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.79-90
    • /
    • 2014
  • User authentication based on ID and password (PW) has been widely used. As the Internet has become a growing part of people's lives, the number of times an ID/PW is entered has increased across a variety of services. People have long since learned the authentication procedure well enough to enter their ID/PW unconsciously. This is the adaptive unconscious at work: a set of mental processes that takes in information and produces judgements and behaviors without our conscious awareness, within a second. Most people sign up for various websites with only a small number of IDs/PWs because they rely on memory to manage them. Human memory decays with the passing of time, and pieces of knowledge in memory tend to interfere with one another, so people may enter an invalid ID/PW. These characteristics of ID/PW authentication therefore lead to human vulnerabilities: people reuse a few PWs across websites, manage IDs/PWs from memory, and enter them unconsciously. Exploiting these human vulnerabilities, information leakage attacks such as phishing and pharming have been increasing exponentially. In the past, information leakage attacks exploited vulnerabilities in hardware, operating systems, software, and so on; most current attacks, however, exploit the vulnerabilities of human factors. Attacks based on human vulnerabilities are called social-engineering attacks, and malicious social-engineering techniques such as phishing and pharming are currently among the biggest security problems. Phishing attempts to obtain valuable information such as IDs/PWs, while pharming steals personal data by redirecting a website's traffic to a fraudulent copy of a legitimate website. The screens of the fraudulent copies used in both attacks are almost identical to those of legitimate websites, and pharming can even present a deceptive URL address. Without the support of prevention and detection techniques such as antivirus software and reputation systems, it is therefore difficult for users to determine intuitively whether a site is a phishing or pharming site or a legitimate one. Previous research on phishing and pharming attacks has mainly studied technical solutions. In this paper, we focus on human behaviour when users are confronted with phishing and pharming attacks without knowing it. We conducted an attack experiment to find out how many IDs/PWs are leaked by pharming and phishing attacks. We first configured the experimental settings under the same conditions as phishing and pharming attacks and built a phishing site for the experiment. We then recruited 64 voluntary participants, asked them to log in to our experimental site, and conducted a questionnaire survey with each participant about the experiment. Through the attack experiment and survey, we observed whether participants' passwords were leaked when logging in to the experimental phishing site, and how many different passwords were leaked out of each participant's total. We found that most participants logged in to the site unconsciously, and that ID/PW management dependent on human memory caused the leakage of multiple passwords.
Users should actively utilize reputation systems, and online service providers should support prevention techniques that allow users to determine intuitively whether a site is a phishing site.

Performance Evaluation of an Electrometer for Quality Control and Dosimetry in Radiation Therapy (방사선 치료의 정도관리 및 선량측정에 이용되는 전리계의 성능평가)

  • Kim, Chang-Seon;Kim, Chul-Yong;Park, Myung-Sun
    • Progress in Medical Physics
    • /
    • v.11 no.2
    • /
    • pp.123-130
    • /
    • 2000
  • The performance of an electrometer directly affects the accuracy and precision of radiation dosimetry. This study lists quality-control items for maintaining performance and performs evaluation tests of an electrometer. The performance tests selected include proper polarizing voltages, warm-up and equalization time, leakages, long-term stability, linearity, and the effect of ambient conditions. An electrometer connected to a rigid-stem ionization chamber was evaluated with a Strontium-90 check device. The bias voltage was measured directly at the input socket. Equalization time is the time required to reach the threshold of the charged state after the power is turned on or the bias voltage is changed. Pre- and post-signal leakages are defined as the accumulation of signal with no exposure and after exposure, respectively. Over a three-month period, the electrometer's long-term stability was measured by comparing temperature-pressure corrected readings. Linearity was expressed as the deviation of the readings from multiple short exposures relative to one continuous exposure. The effect of ambient conditions was expressed as the zero drift of the electrometer over the 17-34°C temperature range. For the two nominal values, 300 and 500 volts, the measured voltages were lower by 2.5% and 5.8%, respectively. The warm-up time, 20 minutes, was 9 minutes longer than the lamp time, and the equalization time was less than 1 minute. Without exposure, the zero drift was 0.002 scale units in 15 minutes, and the leakage after a 10-minute exposure was minimal. The IQ-4 was stable to better than 99.4% over the three-month period. Deviation from linearity was 0.9% over the measurement scale of 0.000-9.991. Over the 17-34°C temperature range, the zero drift was minimal, less than 0.2%. For a clinically used electrometer, a list of basic performance evaluations is proposed. By running this program, measurement error from the electrometer can be reduced, and in turn the accuracy and precision of radiation dosimetry can be improved.
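Since the long-term stability check above relies on temperature-pressure corrected readings, the small sketch below shows the standard air-density correction factor commonly applied to vented ionization-chamber readings; the reference conditions (22°C, 101.33 kPa) are conventional calibration-laboratory values assumed here, not values taken from the paper.

```python
# Standard temperature-pressure correction factor k_TP for a vented
# ionization chamber, as commonly used to correct electrometer readings
# to reference air density. Reference conditions (22 C, 101.33 kPa) are
# conventional calibration values assumed here, not stated in the paper.
T_REF_C = 22.0        # reference temperature, deg C
P_REF_KPA = 101.33    # reference pressure, kPa

def k_tp(temp_c: float, pressure_kpa: float) -> float:
    """Air-density correction: multiply the raw reading by this factor."""
    return ((273.2 + temp_c) / (273.2 + T_REF_C)) * (P_REF_KPA / pressure_kpa)

raw_reading = 9.850                     # hypothetical electrometer reading
corrected = raw_reading * k_tp(temp_c=27.0, pressure_kpa=99.5)
print(f"corrected reading: {corrected:.3f}")
```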
