• Title/Abstract/Keyword: Intelligent Control System

2,858 search results (processing time: 0.032 seconds)

Application for Measurement of Curing Temperature of Concrete in a Construction Site using a Wireless Sensor Network (무선센서네트워크에 의한 콘크리트 양생온도 계측에 관한 현장 적용성 연구)

  • Lee, Sung-Bok;Bae, Kee-Sun;Lee, Do-Heon
    • Journal of the Korea Institute of Building Construction / Vol. 11, No. 3 / pp.283-291 / 2011
  • As the construction industry has recently been transformed by the emergence of ubiquitous and intelligent technology, there have been major changes in the management methods employed. Specifically, next-generation construction management systems have been developed that collect and analyze many pieces of information in real time by using various wireless sensors and networks. The purpose of this study is to understand the current status of Ubiquitous Sensor Networks (USN) in the construction sector, and to gain fundamental data for a system of measuring concrete curing temperature on a construction site that employs a USN. By investigating the application status of USN, it was confirmed that USN has mainly been applied to the maintenance of facilities, safety management, and quality control. In addition, a field experiment in which the curing temperature of concrete was measured using a USN was carried out to evaluate two systems with wireless sensor networks, and the applicability of these systems on site was confirmed. However, the embedded wireless sensor type appears to be affected by metal equipment on site, the sensor's internal battery, and the depth of the concrete, so further studies are required to provide a more stable USN-based measurement system.

Collision Avoidance and Deadlock Resolution for AGVs in an Automated Container Terminal (자동화 컨테이너 터미널에서의 AGV 충돌 방지 및 교착 해결 방안)

  • Kang, Jae-Ho;Choi, Lee;Kang, Byoung-Ho;Ryu, Kwang-Ryel;Kim, Kap-Hwan
    • Journal of Intelligence and Information Systems / Vol. 11, No. 3 / pp.25-43 / 2005
  • In modern automated container terminals, automated guided vehicle (AGV) systems are considered a viable option for the horizontal transportation of containers between the stacking yard and the quayside cranes. AGVs in a container terminal move rather freely and do not follow fixed guide paths. For an efficient operation of such AGVs, however, a sophisticated traffic management system is required. Although the flexible routing scheme allows us to find the shortest possible routes for each of the AGVs, it may incur many coincidental encounters and path intersections of the AGVs, leading to collisions or deadlocks. Unfortunately, the computational cost of perfect prediction and avoidance of deadlocks is prohibitively high for a real-time application. In this paper, we propose a traffic control method that predicts and avoids some simple, but at the same time the most frequently occurring, cases of deadlocks between two AGVs. More complicated deadlock situations are not predicted ahead of time but detected and resolved after they occur. Our method is computationally cheap and readily applicable to real-time applications. The efficiency and effectiveness of our proposed methods have been validated by simulation.
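As an illustration of the simplest two-AGV deadlock case the abstract refers to, the sketch below (not the authors' algorithm; vehicle and node names are hypothetical) checks whether two AGVs are each about to move onto the node the other currently occupies, and makes one of them yield for a step.

```python
# A minimal sketch of head-on deadlock prediction between two AGVs on a
# node-based route. This is illustrative only; names are hypothetical.
from dataclasses import dataclass

@dataclass
class AGV:
    name: str
    current: str        # node the vehicle occupies now
    route: list[str]    # remaining nodes it intends to visit, in order

def head_on_deadlock(a: AGV, b: AGV) -> bool:
    """True if a's next node is held by b and b's next node is held by a."""
    if not a.route or not b.route:
        return False
    return a.route[0] == b.current and b.route[0] == a.current

def avoid(a: AGV, b: AGV) -> None:
    """Naive avoidance: one vehicle waits one step on its current node."""
    if head_on_deadlock(a, b):
        b.route.insert(0, b.current)   # AGV b yields; a router could reroute a instead

# Example: two AGVs about to swap nodes n3 <-> n4 (a classic head-on deadlock).
agv1 = AGV("AGV-1", current="n3", route=["n4", "n5"])
agv2 = AGV("AGV-2", current="n4", route=["n3", "n2"])
print(head_on_deadlock(agv1, agv2))    # True
avoid(agv1, agv2)
print(agv2.route)                      # ['n4', 'n3', 'n2'] -> AGV-2 waits one step
```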


Building an Analytical Platform of Big Data for Quality Inspection in the Dairy Industry: A Machine Learning Approach (유제품 산업의 품질검사를 위한 빅데이터 플랫폼 개발: 머신러닝 접근법)

  • Hwang, Hyunseok;Lee, Sangil;Kim, Sunghyun;Lee, Sangwon
    • Journal of Intelligence and Information Systems / Vol. 24, No. 1 / pp.125-140 / 2018
  • As one of the processes in the manufacturing industry, quality inspection examines intermediate or final products to separate the good-quality goods that meet the quality management standard from the defective goods that do not. The manual inspection of quality in a mass production system may result in low consistency and efficiency. Therefore, in many processes, the quality inspection of mass-produced products relies on automatic checking and classification by machines. Although there are many preceding studies on improving or optimizing the process using the data generated in the production process, there have been many constraints on actual implementation due to the technical limitations of processing a large volume of data in real time. Recent research on big data has improved data processing technology and enabled collecting, processing, and analyzing process data in real time. This paper aims to propose the process and details of applying big data for quality inspection and examine the applicability of the proposed method to the dairy industry. We review the previous studies and propose a big data analysis procedure that is applicable to the manufacturing sector. To assess the feasibility of the proposed method, we applied two methods to one of the quality inspection processes in the dairy industry: a convolutional neural network and a random forest. We collected, processed, and analyzed the images of caps and straws in real time, and then determined whether the products were defective or not. The result confirmed that there was a drastic increase in classification accuracy compared to the quality inspection performed in the past.
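The abstract names two classifiers applied to the cap and straw images. The following is a minimal sketch, not the paper's architecture or data pipeline; the image size, layer sizes, and file names are assumptions.

```python
# A minimal sketch assuming 64x64 RGB images labeled defective (1) or normal (0).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # defective vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# With train_images of shape (N, 64, 64, 3) and train_labels of shape (N,):
# model.fit(train_images, train_labels, epochs=10, validation_split=0.2)

# For a random-forest baseline on flattened pixels (also hypothetical):
# from sklearn.ensemble import RandomForestClassifier
# rf = RandomForestClassifier(n_estimators=200)
# rf.fit(train_images.reshape(len(train_images), -1), train_labels)
```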

A Study of Influencing Factors Upon Using C4I Systems: The Perspective of Mediating Variables in a Structured Model (C4I 시스템 사용의 영향 요인에 관한 연구: 구조모형의 매개변수의 관점에서)

  • Kim, Chong-Man;Kim, In-Jai
    • Asia Pacific Journal of Information Systems / Vol. 19, No. 2 / pp.73-94 / 2009
  • The general outlook for future warfare shows that the concept of firepower- and maneuver-centric warfare is being replaced by that of information- and knowledge-centric warfare. Thus, some developed countries are now trying to establish information systems to perform intelligent warfare and innovate defense operations. C4I (Command, Control, Communication, Computers and Intelligence for the Warrior) systems make modern and systematic war operations possible. The basic idea of this study is to investigate how TAM (Technology Acceptance Model) can explain acceptance behavior in military organizations. Because TAM is inadequate for explaining the acceptance processes for complex technologies and strict organizations, a revised research model based upon TAM was developed in order to assess the usage of the C4I system. The purpose of this study is to investigate factors affecting the usage of C4I in the Korean Army. The research model, based upon TAM, was extended with a belief construct, self-efficacy, as one of the mediating variables. Self-efficacy has been used as a mediating variable for technology acceptance, and the variable was included in the research model. The external variables were selected on the basis of previous research and can be classified into 1) technological, 2) organizational, and 3) environmental factors according to the TOE (Technology-Organization-Environment) framework. The technological factor includes information quality and task-technology fit. The organizational factor includes the influence of senior colleagues. The environmental factor includes education/training. The external variables are considered very important for explaining the usage patterns of information technology or systems. A structured questionnaire was developed and administered to those who were using the C4I system. A total of 329 responses were used for the statistical analyses. A confirmatory factor analysis and a structural equation model were used as the main statistical methods. Model fit indexes for the measurement and structural models were verified before all 18 hypotheses were tested. This study shows that perceived usefulness and self-efficacy played larger roles than perceived ease of use did in TAM. In military organizations, perceived usefulness showed mediating effects between the external variables and the dependent variable, but perceived ease of use did not. These results imply that perceived usefulness can explain the acceptance processes better than perceived ease of use in the army. Self-efficacy was also used as one of the three mediating variables and showed mediating effects in explaining the acceptance processes. Such results show that self-efficacy can be selected as one possible belief construct in TAM. Perceived usefulness was influenced by such factors as senior colleagues, information quality, and task-technology fit. Self-efficacy was affected by education/training and task-technology fit. The actual usage of C4I was influenced not by perceived ease of use but by perceived usefulness and self-efficacy. This study suggests the following: (1) an extended TAM can be applied to such strict organizations as the army; (2) three mediating variables are included in the research model and tested in a real situation; and (3) several other implications are discussed.
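For readers unfamiliar with how such an extended TAM is estimated, the sketch below specifies a hypothetical path model with the constructs mentioned in the abstract and fits it with the semopy package; the variable names and the survey file are illustrative assumptions, not taken from the paper.

```python
# A hypothetical structural-model sketch mirroring the extended TAM above.
import pandas as pd
import semopy

model_desc = """
perceived_usefulness ~ info_quality + task_tech_fit + senior_influence
self_efficacy        ~ education_training + task_tech_fit
c4i_usage            ~ perceived_usefulness + perceived_ease_of_use + self_efficacy
"""

data = pd.read_csv("c4i_survey.csv")   # hypothetical 329-response survey file
model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())                 # path coefficients and p-values
```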

Security Credential Management & Pilot Policy of U.S. Government in Intelligent Transport Environment (지능형 교통 환경에서 미국정부의 보안인증관리 & Pilot 정책)

  • Hong, Jin-Keun
    • Journal of Convergence for Information Technology / Vol. 9, No. 9 / pp.13-19 / 2019
  • This paper analyzed the SCMS and pilot policy that the U.S. government is pursuing for connected vehicles. SCMS ensures authentication, integrity, privacy, and interoperability. The SCMS Support Committee of the U.S. government has established the national SCMS and is responsible for system-wide control. It also introduces the making of security policies, procedures, and training programs. In this paper, the need for SCMS to be applied to C-ITS was discussed. The structure of the SCMS was analyzed, and the U.S. government's pilot policy for connected vehicles was discussed. The discussion of the need for SCMS highlighted the importance of its role and responsibilities in vehicle-to-vehicle communication. For the security credential management system, the structure was examined and the types of certificates used in the vehicle or roadside unit (RSU) were analyzed. The functions and characteristics of the certificates were reviewed. In addition, the functions of basic safety messages were analyzed with consideration of the detection and warning functions for abnormal behavior in SCMS. Finally, the status of the pilot project for connected vehicles currently being pursued by the U.S. government was analyzed. In addition to the environment used for the test, the relevant messages were also discussed. We also looked at some of the issues that arise in the course of the pilot project.

The Relationship between Internet Search Volumes and Stock Price Changes: An Empirical Study on KOSDAQ Market (개별 기업에 대한 인터넷 검색량과 주가변동성의 관계: 국내 코스닥시장에서의 산업별 실증분석)

  • Jeon, Saemi;Chung, Yeojin;Lee, Dongyoup
    • Journal of Intelligence and Information Systems / Vol. 22, No. 2 / pp.81-96 / 2016
  • As the internet has become widespread and easy to access everywhere, it is common for people to search for information via online search engines such as Google and Naver in everyday life. Recent studies have used the online search volume of specific keywords as a measure of internet users' attention in order to predict disease outbreaks such as flu and cancer, the unemployment rate, indexes of a nation's economic condition, and so on. For stock traders, web search is also one of the major information resources for obtaining data about individual stock items. Therefore, the search volume of a stock item can reflect the amount of investors' attention on it. Investor attention has been regarded as a crucial factor influencing stock prices, but it has been measured by indirect proxies such as market capitalization, trading volume, and advertising expense. It has been theoretically and empirically shown that an increase of investors' attention on a stock item brings a temporary increase of the stock price, and that the price recovers in the long run. The recent development of the internet environment makes it possible to measure investor attention directly by the internet search volume of an individual stock item, which has been used to show attention-induced price pressure. Previous studies focus mainly on the Dow Jones and NASDAQ markets in the United States. In this paper, we investigate the relationship between individual investors' attention, measured by internet search volumes, and the stock price changes of individual stock items in the KOSDAQ market in Korea, where the proportion of trades by individual investors is about 90% of the total. In addition, we examine the differences between industries in the influence of investors' attention on stock returns. The internet search volumes of stocks were gathered weekly from the "Naver Trend" service between January 2007 and June 2015. A regression model with an AR(1) error covariance structure is used to analyze the data, since the weekly prices of a stock item are serially correlated. The market capitalization, trading volume, the increment of trading volume, and the month in which each trade occurs are included in the model as control variables. The fitted model shows that an abnormal increase in the search volume of a stock item has a positive influence on the stock return, and that the amount of the influence varies among industries. The stock items in the IT software, construction, and distribution industries have been shown to be more influenced by an abnormally large internet search volume than the average across industries. On the other hand, the stock items in the IT hardware, manufacturing, entertainment, finance, and communication industries are less influenced by the abnormal search volume than the average. In order to verify the price pressure caused by investors' attention in KOSDAQ, the stock return of the current week is modeled using the abnormal search volume observed one to four weeks earlier. On average, an abnormally large increment of the search volume increased the stock return of the current week and one week later, and decreased the stock return two and three weeks later. There is no significant relationship with the stock return after four weeks. This relationship differs among industries. An abnormal search volume brings a particularly severe price reversal for the stocks in the IT software industry, which are often targets of irrational investment by individual investors.
An abnormal search volume caused a less severe price reversal for the stocks in the manufacturing and IT hardware industries than on average across industries. The price reversal was not observed in the communication, finance, entertainment, and transportation industries, which are known to be influenced largely by macro-economic factors such as oil prices and currency exchange rates. The results of this study can be utilized to construct an intelligent trading system based on big data gathered from web search engines, social network services, and internet communities. In particular, the difference in the price reversal effect between industries may provide useful information for constructing a portfolio and building an investment strategy.
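A regression with an AR(1) error term and lagged abnormal search volume, as described above, can be sketched as follows; this is an illustrative specification on synthetic data, not the paper's model or dataset.

```python
# A minimal sketch: weekly returns regressed on lagged abnormal search volume
# with an AR(1) error structure, via statsmodels GLSAR. Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "return": rng.normal(0, 0.03, 200),      # weekly stock return (synthetic)
    "abn_search": rng.normal(0, 1, 200),     # abnormal search volume (synthetic)
    "log_mktcap": rng.normal(12, 1, 200),    # control: market capitalization
    "log_volume": rng.normal(10, 1, 200),    # control: trading volume
})
for lag in range(1, 5):                      # search volume 1-4 weeks earlier
    df[f"abn_search_lag{lag}"] = df["abn_search"].shift(lag)
df = df.dropna()

X = sm.add_constant(
    df[[f"abn_search_lag{l}" for l in range(1, 5)] + ["log_mktcap", "log_volume"]]
)
model = sm.GLSAR(df["return"], X, rho=1)     # regression with AR(1) error term
results = model.iterative_fit(maxiter=10)
print(results.summary())
```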

Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference / Korea Fuzzy Logic and Intelligent Systems Society, Fifth International Fuzzy Systems Association World Congress 1993 / pp.1041-1043 / 1993
  • Fuzzy-logic-based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify such an implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to another pre-defined shape. These kinds of functions are able to cover a large spectrum of applications with limited usage of memory, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This, however, results in a loss of computational power due to the computation on intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions on such points [3,10,14,15]. Such a solution provides satisfying computational speed and very high precision of definition, and gives users the opportunity to choose membership functions of any shape. However, significant memory waste can also occur. It is indeed possible that for each of the given fuzzy sets many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on such hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of membership functions, strongly reduces the computational time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common for fuzzy controllers and to which we will refer in the following. This term set has a universe of discourse with 128 elements (so as to have good resolution), 8 fuzzy sets that describe the term set, and 32 discretization levels for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions, and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case), and it is represented by the memory rows. The length of a memory word is defined by Length = nfm * (dm(m) + dm(fm)), where nfm is the maximum number of non-null values in any element of the universe of discourse, dm(m) is the dimension of the values of the membership function m, and dm(fm) is the number of bits needed to represent the index of the corresponding membership function. In our case, then, Length = 3 * (5 + 3) = 24. The memory dimension is therefore 128*24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value for each fuzzy set, giving a word dimension of 8*5 bits.
Therefore, the dimension of the memory would have been 128*40 bits. Consistently with our hypothesis, in fig. 1 each element of the universe of discourse has a non-null membership value for at most three fuzzy sets. Elements 32, 64, and 96 of the universe of discourse, for example, are memorized in this compact form. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the Program Memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derived from the rule is produced as output; otherwise the output is zero (fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to the performance of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value in each element of the universe of discourse. From our study in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, such a value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have a very small influence on memory space; weight computations are done by a combinatorial network, and therefore the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values on any element of the universe of discourse is limited. Such a constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method here briefly described has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
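The memory-size comparison in the abstract can be reproduced with a few lines of arithmetic; the sketch below uses the figures given (128 elements, 8 fuzzy sets, 32 levels, at most 3 non-null memberships per element).

```python
# Reproducing the word-length and memory-size arithmetic from the abstract:
# compact storage of at most nfm non-null membership values per element
# versus full vectorial storage of all membership values.
import math

U = 128          # elements in the universe of discourse
n_sets = 8       # fuzzy sets in the term set
levels = 32      # discretization levels for membership values
nfm = 3          # max number of non-null memberships per element

dm_m  = math.ceil(math.log2(levels))   # 5 bits per membership value
dm_fm = math.ceil(math.log2(n_sets))   # 3 bits for the fuzzy-set index

compact_word   = nfm * (dm_m + dm_fm)  # 3 * (5 + 3) = 24 bits
vectorial_word = n_sets * dm_m         # 8 * 5       = 40 bits

print(f"compact:   {U} x {compact_word} = {U * compact_word} bits")
print(f"vectorial: {U} x {vectorial_word} = {U * vectorial_word} bits")
```

With these figures the compact scheme needs 128 x 24 = 3,072 bits of antecedent memory versus 128 x 40 = 5,120 bits for full vectorial storage, matching the 24-bit and 40-bit word lengths stated above.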


Digital Archives of Cultural Archetype Contents: Its Problems and Direction (디지털 아카이브즈의 문제점과 방향 - 문화원형 콘텐츠를 중심으로 -)

  • Hahm, Han-Hee;Park, Soon-Cheol
    • Journal of the Korean BIBLIA Society for Library and Information Science / Vol. 17, No. 2 / pp.23-42 / 2006
  • This is a study of the digital archives of Culturecontent.com, where 'Cultural Archetype Contents' are currently in service. One of the major purposes of our study is to point out problems in the current system and eventually propose improvements to the digital archives. The government launched a four-year project for developing cultural archetype content sources and establishing the related business, with the hope of enhancing the nation's competitiveness. More specifically, the project focuses on the production of source materials of cultural archetype contents on the subjects of Korea's history, tradition, everyday life, arts, and general geographical books. In addition, through this project, the government also intends to establish a proper distribution system for digitalized culture contents and to manage copyright issues. This paper analyzes the digital archives system that stores the culture content data produced from 2002 to 2005 and evaluates the current system's weaknesses and strengths. The summary of our findings is as follows. First, the digital archives system does not contain a semantic search engine, and therefore its functionality lags. Second, similar data is not classified into the same categories but into different ones, thereby confusing and inconveniencing users. Users who want to find source materials may be disappointed by the current distribution system. Our paper suggests a better digital archives system with text mining technology, which consists of five intelligent processes: keyword search, summarization, clustering, classification, and topic tracking. Our paper endeavors to develop the best technical environment for preserving and using culture content data. With the new, upgraded digital settings, users of culture content data will discover a world of new knowledge. The technology we introduce in this paper will lead to the highest achievable digital intelligence through a new framework.
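As a small illustration of one of the five processes listed above, the sketch below clusters a few hypothetical archive records with TF-IDF features and k-means; it is not the authors' system, and the sample records are invented for the example.

```python
# A minimal clustering sketch: TF-IDF features + k-means over archive records.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "royal court rituals of the Joseon dynasty",       # hypothetical record texts
    "traditional everyday clothing and textiles",
    "court music and ceremonial instruments",
    "regional geography described in old gazetteers",
]
X = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # cluster id per record; similar records share an id
```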

Accelerometer-based Gesture Recognition for Robot Interface (로봇 인터페이스 활용을 위한 가속도 센서 기반 제스처 인식)

  • Jang, Min-Su;Cho, Yong-Suk;Kim, Jae-Hong;Sohn, Joo-Chan
    • Journal of Intelligence and Information Systems / Vol. 17, No. 1 / pp.53-69 / 2011
  • Vision- and voice-based technologies are commonly utilized for human-robot interaction, but it is widely recognized that the performance of vision- and voice-based interaction systems deteriorates by a large margin in real-world situations due to environmental and user variances. Human users need to be very cooperative to get reasonable performance, which significantly limits the usability of vision- and voice-based human-robot interaction technologies. As a result, touch screens are still the major medium of human-robot interaction in real-world applications. To improve the usability of robots for various services, alternative interaction technologies should be developed to complement the problems of vision- and voice-based technologies. In this paper, we propose the use of an accelerometer-based gesture interface as one of the alternative technologies, because accelerometers are effective in detecting the movements of the human body, while their performance is not limited by environmental contexts such as lighting conditions or a camera's field of view. Moreover, accelerometers are widely available nowadays in many mobile devices. We tackle the problem of classifying the acceleration signal patterns of the 26 English alphabet letters, which is one of the essential repertoires for the realization of robot-based education services. Recognizing 26 English handwriting patterns based on accelerometers is a very difficult task to tackle because of the large number of pattern classes and the complexity of each pattern. The most difficult comparable problem previously undertaken was recognizing the acceleration signal patterns of 10 handwritten digits. Most previous studies dealt with pattern sets of 8~10 simple and easily distinguishable gestures that are useful for controlling home appliances, computer applications, robots, etc. Good features are essential for the success of pattern recognition. To promote discriminative power over complex English alphabet patterns, we extracted 'motion trajectories' from the input acceleration signal and used them as the main feature. Investigative experiments showed that classifiers based on trajectories performed 3%~5% better than those with raw features, e.g., the acceleration signal itself or statistical figures. To minimize the distortion of trajectories, we applied a simple but effective set of smoothing filters and band-pass filters. It is well known that the acceleration patterns for the same gesture differ considerably among performers. To tackle this problem, online incremental learning is applied to our system to make it adaptive to users' distinctive motion properties. Our system is based on instance-based learning (IBL), where each training sample is memorized as a reference pattern. Brute-force incremental learning in IBL continuously accumulates reference patterns, which is a problem because it not only slows down the classification but also downgrades the recall performance. Regarding the latter phenomenon, we observed a tendency that, as the number of reference patterns grows, some reference patterns contribute more to false-positive classifications. Thus, we devised an algorithm for optimizing the reference pattern set based on the positive and negative contribution of each reference pattern. The algorithm is performed periodically to remove reference patterns that have a very low positive contribution or a high negative contribution.
Experiments were performed on 6,500 gesture patterns collected from 50 adults aged 30~50. Each alphabet letter was performed 5 times per participant using a Nintendo Wii remote. The acceleration signal was sampled at 100 Hz on 3 axes. The mean recall rate over all the alphabet letters was 95.48%. Some letters recorded very low recall rates and exhibited very high pairwise confusion rates. Major confusion pairs are D (88%) and P (74%), I (81%) and U (75%), and N (88%) and W (100%). Though W was recalled perfectly, it contributed much to the false-positive classification of N. By comparison with major previous results from VTT (96% for 8 control gestures), CMU (97% for 10 control gestures), and Samsung Electronics (97% for 10 digits and a control gesture), we find that the performance of our system is superior considering the number of pattern classes and the complexity of the patterns. Using our gesture interaction system, we conducted two case studies of robot-based edutainment services. The services were implemented on various robot platforms and mobile devices, including the iPhone. The participating children exhibited improved concentration and active reactions to the service with our gesture interface. To prove the effectiveness of our gesture interface, a test was taken by the children after experiencing an English teaching service. The test results showed that those who played with the gesture-interface-based robot content scored 10% better than those with conventional teaching. We conclude that the accelerometer-based gesture interface is a promising technology for flourishing real-world robot-based services and content, complementing the limits of today's conventional interfaces, e.g., touch screens, vision, and voice.
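The reference-pattern pruning idea described above can be sketched as follows; the contribution counters and thresholds are illustrative assumptions, not the paper's parameters or feature representation.

```python
# A hedged sketch of reference-set pruning for instance-based learning: each
# stored pattern accumulates positive (supported a correct classification) and
# negative (supported a false positive) contribution counts, and patterns with
# a low positive or high negative contribution are periodically dropped.
from dataclasses import dataclass

@dataclass
class ReferencePattern:
    label: str           # e.g. the alphabet letter this trajectory encodes
    trajectory: list     # smoothed motion-trajectory feature (placeholder here)
    positive: int = 0    # times it supported a correct classification
    negative: int = 0    # times it supported a false positive

def prune(references: list[ReferencePattern],
          min_positive: int = 2,
          max_negative: int = 5) -> list[ReferencePattern]:
    """Keep patterns that help often enough and mislead rarely enough."""
    return [r for r in references
            if r.positive >= min_positive and r.negative <= max_negative]

# Example bookkeeping after a batch of classifications:
refs = [ReferencePattern("W", [], positive=9, negative=7),   # misleads N -> dropped
        ReferencePattern("D", [], positive=6, negative=1)]   # kept
refs = prune(refs)
print([r.label for r in refs])   # ['D']
```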

APPLICATION OF FUZZY SET THEORY IN SAFEGUARDS

  • Fattah, A.;Nishiwaki, Y.
    • Proceedings of the Korean Institute of Intelligent Systems Conference / Korea Fuzzy Logic and Intelligent Systems Society, Fifth International Fuzzy Systems Association World Congress 1993 / pp.1051-1054 / 1993
  • The International Atomic Energy Agency's Statute, in Article III.A.5, allows it "to establish and administer safeguards designed to ensure that special fissionable and other materials, services, equipment, facilities and information made available by the Agency or at its request or under its supervision or control are not used in such a way as to further any military purpose; and to apply safeguards, at the request of the parties, to any bilateral or multilateral arrangement, or at the request of a State, to any of that State's activities in the field of atomic energy". Safeguards are essentially a technical means of verifying the fulfilment of political obligations undertaken by States and given a legal force in international agreements relating to the peaceful uses of nuclear energy. The main political objectives are: to assure the international community that States are complying with their non-proliferation and other peaceful undertakings; and to deter (a) the diversion of safeguarded nuclear materials to the production of nuclear explosives or for military purposes and (b) the misuse of safeguarded facilities with the aim of producing unsafeguarded nuclear material. It is clear that no international safeguards system can physically prevent diversion. The IAEA safeguards system is basically a verification measure designed to provide assurance in those cases in which diversion has not occurred. Verification is accomplished by two basic means: material accountancy, and containment and surveillance measures. Nuclear material accountancy is the fundamental IAEA safeguards mechanism, while containment and surveillance serve as important complementary measures. Material accountancy refers to a collection of measurements and other determinations which enable the State and the Agency to maintain a current picture of the location and movement of nuclear material into and out of material balance areas, i.e. areas where all material entering or leaving is measurable. A containment measure is one designed to take advantage of structural characteristics, such as containers, tanks, pipes, etc., to establish the physical integrity of an area or item by preventing the undetected movement of nuclear material or equipment. Such measures involve the application of tamper-indicating or surveillance devices. Surveillance refers to both human and instrumental observation aimed at indicating the movement of nuclear material. The verification process consists of three overlapping elements: (a) provision by the State of information such as design information describing nuclear installations; accounting reports listing nuclear material inventories, receipts and shipments; documents amplifying and clarifying reports, as applicable; and notification of international transfers of nuclear material; (b) collection by the IAEA of information through inspection activities such as verification of design information, examination of records and reports, measurement of nuclear material, examination of containment and surveillance measures, and follow-up activities in case of unusual findings; and (c) evaluation of the information provided by the State and of that collected by inspectors to determine the completeness, accuracy and validity of the information provided by the State and to resolve any anomalies and discrepancies.
To design an effective verification system, one must identify possible ways and means by which nuclear material could be diverted from peaceful uses, including means to conceal such diversions. These theoretical ways and means, which have become known as diversion strategies, are used as one of the basic inputs for the development of safeguards procedures, equipment and instrumentation. For analysis of implementation strategy purposes, it is assumed that non-compliance cannot be excluded a priori and that consequently there is a low but non-zero probability that a diversion could be attempted in all safeguards situations. An important element of diversion strategies is the identification of various possible diversion paths: the amount, type and location of nuclear material involved; the physical route and conversion of the material that may take place; the rate of removal; and concealment methods, as appropriate. With regard to the physical route and conversion of nuclear material, the following main categories may be considered: unreported removal of nuclear material from an installation or during transit; unreported introduction of nuclear material into an installation; unreported transfer of nuclear material from one material balance area to another; unreported production of nuclear material, e.g. enrichment of uranium or production of plutonium; and undeclared uses of the material within the installation. With respect to the amount of nuclear material that might be diverted in a given time (the diversion rate), the continuum between the following two limiting cases is considered: one significant quantity or more in a short time, often known as abrupt diversion; and one significant quantity or more per year, for example by accumulation of smaller amounts each time to add up to a significant quantity over a period of one year, often called protracted diversion. Concealment methods may include: restriction of access of inspectors; falsification of records, reports and other material balance data; replacement of nuclear material, e.g. use of dummy objects; falsification of measurements or of their evaluation; and interference with IAEA-installed equipment. As a result of diversion and its concealment or other actions, anomalies will occur. All reasonable diversion routes, scenarios/strategies and concealment methods have to be taken into account in designing safeguards implementation strategies so as to provide sufficient opportunities for the IAEA to observe such anomalies. The safeguards approach for each facility will make a different use of these procedures, equipment and instrumentation according to the various diversion strategies which could be applicable to that facility and according to the detection and inspection goals which are applied. Postulated pathway sets of scenarios comprise those elements of diversion strategies which might be carried out at a facility or across a State's fuel cycle with declared or undeclared activities. All such factors, however, contain a degree of fuzziness that needs human judgment to reach the ultimate conclusion that all material is being used for peaceful purposes. Safeguards have traditionally been based on verification of declared material and facilities, using material accountancy as a fundamental measure. The strength of material accountancy lies in the fact that it allows the detection of any diversion independent of the diversion route taken.
Material accountancy detects a diversion after it has actually happened; it is thus powerless to physically prevent it and can only deter, by the risk of early detection, any contemplation by State authorities of carrying out a diversion. Recently the IAEA has been faced with new challenges. To deal with these, various measures are being considered to strengthen the safeguards system, such as enhanced assessment of the completeness of the State's initial declaration of nuclear material and installations under its jurisdiction, and enhanced monitoring and analysis of open information that may indicate inconsistencies with the State's safeguards obligations. Precise information vital for such enhanced assessments and analyses is normally not available or, if available, would require difficult and expensive collection. Above all, realistic appraisal of truth needs sound human judgment.
