• Title/Summary/Keyword: intelligent control system


A Study on Smart Grid and Cyber Security Strategy (지능형 전력망 도입과 사이버보안 전략)

  • Lee, Sang-Keun
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.21 no.5
    • /
    • pp.95-108
    • /
    • 2011
  • Smart Grids are intelligent, next-generation Electric Power Systems (EPS) that provide environment-friendliness, high efficiency, and high trustworthiness by integrating information and communication technology with electric power technology. Smart grids help to supply power more efficiently and safely than past systems by bilaterally exchanging information between the user and the power producer. In addition, they alleviate environmental problems by using renewable energy resources. However, smart grids carry many cyber security risks because of the bilateral service, the increase in small and medium-sized energy resources, and the installation of multiple sensors and control devices. Through even small errors, these cyber risks can cause critical problems across a national grid. Therefore, in order to reduce these risks, it is necessary to establish a cyber security strategy and apply it from the development stage to the implementation stage. This thesis analyzes and recommends a security strategy to resolve these risks. Applying a cyber security strategy to a smart grid will provide a stepping-stone to creating a safe and dependable smart grid.

A Study on Government Service Innovation with Intelligent(AI): Based on e-Government Website Assessment Data (전자정부 웹사이트 평가 결과 데이터 기반 지능형(AI) 정부 웹서비스 관리 방안 연구)

  • Lee, Eun Suk;Cha, Kyung Jin
    • Journal of Information Technology Services
    • /
    • v.20 no.2
    • /
    • pp.1-11
    • /
    • 2021
  • As a key point of access for public participation and information, e-government takes an active role in public service through the relevant laws and policy measures for the universal use of e-government websites. To improve the accessibility of web contents, results are derived for each detailed evaluation item according to the Korean web content accessibility guideline; these detailed items differ by website attribute and require data-based management. In this paper, detailed indicators are analyzed based on the quality-control-level diagnosis results of existing domestic e-government websites, and the results are classified as high or low in order to propose new improvement directions and induce detailed improvements. Beyond reporting results, level diagnosis to strengthen web service quality suggests directions for future improvement through accurate, detailed analysis and research for policy feedback. This study ultimately makes it possible to manage government systems based on predicted data through the management of deduction histories built on evaluation-score data for public websites, and it provides several theoretical and practical implications. The characteristics of each score for the quality management of public-sector websites were identified, and the accuracy of evaluation and the possibility of sophisticated analysis, such as analysis of the characteristics of each institution, were expanded. This creates an environment for improving the quality of public websites, and it is expected that evaluation accuracy and elaborate analysis can be extended to e-government performance and to the post-introduction stage of government website services.

A multi-layer approach to DN 50 electric valve fault diagnosis using shallow-deep intelligent models

  • Liu, Yong-kuo;Zhou, Wen;Ayodeji, Abiodun;Zhou, Xin-qiu;Peng, Min-jun;Chao, Nan
    • Nuclear Engineering and Technology
    • /
    • v.53 no.1
    • /
    • pp.148-163
    • /
    • 2021
  • Timely fault identification is important for safe and reliable operation of the electric valve system. Many research works have utilized different data-driven approaches for fault diagnosis in complex systems. However, they do not consider the specific characteristics of critical control components such as electric valves. This work presents an integrated shallow-deep fault diagnostic model developed from signals extracted from a DN 50 electric valve. First, the local-optimum issue of the particle swarm optimization algorithm is addressed by improving the weight search capability and the particle speed and position update strategy. Then, to develop a shallow diagnostic model, the modified particle swarm algorithm is combined with a support vector machine to form a hybrid improved particle swarm-support vector machine (IPs-SVM). To decouple the influence of background noise, the wavelet packet transform is used to reconstruct the vibration signal. Thereafter, the IPs-SVM is used to classify phase-imbalance and damaged-valve faults, and its performance is evaluated against models developed using the conventional SVM and the particle swarm optimized SVM. Second, three different deep belief network (DBN) models are developed using different acoustic signal structures: the raw signal, the wavelet-transformed signal, and the time-series (sequential) signal. These models are developed to estimate internal leakage sizes in the electric valve. The predictive performance of the DBNs and the evaluation results of the proposed IPs-SVM are also presented in this paper.
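
The particle-swarm step in the abstract above can be sketched generically. The following is a minimal sketch of a standard PSO minimizer, not the authors' improved IPs variant; the `sphere` objective is a hypothetical stand-in for the SVM cross-validation error surface that such a hybrid would actually optimize over kernel hyperparameters.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, n_iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (minimization)."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest = x.copy()                                   # per-particle best positions
    pbest_val = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()             # global best position
    g_val = pbest_val.min()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive pull toward pbest + social pull toward global best
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        if vals.min() < g_val:
            g_val = vals.min()
            g = x[np.argmin(vals)].copy()
    return g, g_val

# hypothetical stand-in for a cross-validation error surface, minimum at (1, 1)
sphere = lambda p: float(np.sum((p - 1.0) ** 2))
best, val = pso(sphere, [(-5, 5), (-5, 5)])
```

In the paper's setting, `objective` would evaluate an SVM trained with the candidate parameters, and the authors additionally modify the inertia-weight search and the velocity/position update, which this sketch omits.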

Study on the EMC Engineering for Fixed Installations (복합설비를 위한 EMC 엔지니어링 연구)

  • Young-Heung Kang
    • Journal of Advanced Navigation Technology
    • /
    • v.27 no.6
    • /
    • pp.798-803
    • /
    • 2023
  • In the industrial internet of things (IIoT) industry, including smart factories, electronic devices are often combined and installed in complex configurations owing to the recent development of intelligent information technology. Electromagnetic waves generated from such complex facilities affect other devices and services, which can lead to safety issues. Problems of electromagnetic interference (EMI) and electromagnetic compatibility (EMC) that arise when controlling complex facilities must be solved, and an engineering basis for EMI and EMC must be established to foster the complex-facility industry. Therefore, this study analyzes EMC and EMI engineering demonstration cases for fixed solar power facilities using the national standard guideline. The results show that the electromagnetic risk indices in the solar power facilities were reduced to a controllable level, and a national EMC engineering system is proposed for complex facilities.

Designing Tracking Method using Compensating Acceleration with FCM for Maneuvering Target (FCM 기반 추정 가속도 보상을 이용한 기동표적 추적기법 설계)

  • Son, Hyun-Seung;Park, Jin-Bae;Joo, Young-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.49 no.3
    • /
    • pp.82-89
    • /
    • 2012
  • This paper presents an intelligent tracking algorithm for a maneuvering target using positional-error compensation. The difference between the measured point and the predicted point is separated into acceleration and noise. Fuzzy c-means clustering and the predicted impact point are used to obtain the optimal acceleration value. The membership function is determined for the acceleration and noise separated by fuzzy c-means clustering, and the characteristics of the maneuvering target are identified. The separated acceleration and noise are used in the tracking algorithm to compensate for computational error. Because the acceleration is extracted from the positional error, the filter sees only the remaining noise, so the filtering stage of the algorithm can treat the nonlinear maneuvering target as a linear one. After filtering, the target estimate is obtained by compensating with the extracted acceleration. The proposed system improves adaptiveness and robustness by adjusting the parameters of the fuzzy membership function. To maximize its effectiveness, a multiple-model structure is constructed. The procedures of the proposed algorithm can be implemented as an on-line system. Finally, some examples are provided to show the effectiveness of the proposed algorithm.
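
The fuzzy c-means step that separates acceleration from noise can be illustrated with the standard FCM algorithm. This is a minimal textbook sketch, not the paper's tracker; in the paper the clustered quantities would be positional-error samples rather than the synthetic 1-D points used here.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iters=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and membership matrix U.
    X: (n_samples, n_features); m is the fuzzifier (m > 1)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per sample
    p = 2.0 / (m - 1.0)
    for _ in range(n_iters):
        Um = U ** m
        # weighted means of the data give the cluster centers
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # standard FCM membership update: u_ik = d_ik^{-p} / sum_j d_ij^{-p}
        inv = d ** (-p)
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.max(np.abs(U_new - U)) < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```

In the tracking context, the two resulting clusters play the roles of the acceleration component and the residual noise, and their memberships shape the fuzzy membership functions the algorithm then tunes.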

The Effect of Information Protection Control Activities on Organizational Effectiveness : Mediating Effects of Information Application (정보보호 통제활동이 조직유효성에 미치는 영향 : 정보활용의 조절효과를 중심으로)

  • Jeong, Gu-Heon;Jeong, Seung-Ryul
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.1
    • /
    • pp.71-90
    • /
    • 2011
  • This study was designed to empirically analyze the effect of information protection control activities (physical, managerial, and technical security) on organizational effectiveness, and the moderating effect of information application. The results are summarized as follows. First, the physical, technical, and managerial security factors each had a significant positive effect on organizational effectiveness (p < .01). Second, the technical and managerial security factors had a significant positive effect on information application (p < .01). Third, models that additionally included the interaction variables of the information protection control activities and information application, in order to verify whether the effect of the control activities on organizational effectiveness is moderated by information application, showed a 4.1% additional increase in explanatory power, to 50.6%. Among the added interaction variables, the interaction of physical security and information application (${\beta}$ = .148, p < .01) and the interaction of managerial security and information application (${\beta}$ = .196, p < .01) were statistically significant, indicating that information application moderates the relationship between the physical and managerial security factors of control activities and organizational effectiveness.
As stated above, it was proven that the physical, technical, and managerial factors, as internal control activities for information protection, are the main mechanisms affecting organizational effectiveness, acting very significantly through information application. The more efficiently all three security factors were performed, the higher the information application; and the more efficiently information application was controlled, the stronger its moderating role, proving that all three factors are variables for useful information application. This suggests that, through the moderation of information application, they act as promotion mechanisms with a very significant effect on organizational effectiveness for information protection: the internal customer satisfaction of employees, the efficiency of information management, and the reduction of risk.
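
The hierarchical analysis described above (a base model, then interaction terms added) can be sketched on synthetic data. The variable names `security`, `info_use`, and `effectiveness` are hypothetical stand-ins for the study's survey constructs, and the coefficients are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
security = rng.normal(size=n)        # hypothetical security-control score
info_use = rng.normal(size=n)        # hypothetical information-application score
# synthetic outcome with a true interaction (moderation) effect of 0.2
effectiveness = (0.4 * security + 0.3 * info_use
                 + 0.2 * security * info_use
                 + rng.normal(scale=0.5, size=n))

def ols_beta(X, y):
    """Ordinary least squares coefficients (intercept first) via least squares."""
    X1 = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(X1, y, rcond=None)[0]

# step 1: main effects only; step 2: add the interaction term
base = ols_beta(np.column_stack([security, info_use]), effectiveness)
full = ols_beta(np.column_stack([security, info_use, security * info_use]), effectiveness)
```

A significant coefficient on the product term in the second model is what the study reports as the (moderating) effect of information application; the increase in explained variance between the two models corresponds to the reported gain in explanatory power.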

The Efficiency Analysis of CRM System in the Hotel Industry Using DEA (DEA를 이용한 호텔 관광 서비스 업계의 CRM 도입 효율성 분석)

  • Kim, Tai-Young;Seol, Kyung-Jin;Kwak, Young-Dai
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.1
    • /
    • pp.91-110
    • /
    • 2011
  • This paper analyzes cases where hotels have improved their services and enhanced their work processes through IT solutions to cope with computerization and globalization. Cases have also been studied where domestic hotels use CRM solutions internally to respond effectively to customer requests, improve customer analysis, and build marketing strategies. In particular, this study discusses the introduction of CRM solutions for sales, business, and marketing services, and evaluates the efficiency of CRM use by applying DEA (Data Envelopment Analysis). First, the relative efficiency of L Company is compared with the CCR model; then the effectiveness of L Company's restaurants and facilities is compared through the BCC model. L Company concluded that it is important to precisely create and manage sales data, the preliminary data for CRM, and for that reason it made it possible to save the sales data generated by the POS system in a database for each sales performance. To do so, it newly established the Oracle POS system and the LORIS POS system for rooms as well as for food-and-beverage restaurants, enabling sales data to be stably generated and managed. Moreover, it set up a composite database to comprehensively control the results of work processes during a specific period by collecting customer registration information, making it possible to systematically control information on sales performance. By establishing and comprehensively managing a unified database, the integrity of the data was greatly enhanced, and the problem of asymmetric data was thoroughly solved. Using the data accumulated in the comprehensive database, sales data can be analyzed, categorized, and classified through the data mining engine embedded in Polaris CRM, and the results can be organized in a data mart to be provided in the form of CRM application data.
By transforming the original sales data into forms that are easy to handle and saving them separately in the data mart, well-organized data could easily be acquired for various marketing operations, morning meetings, and decision-making. Using the summarized data in the data mart, marketing operations such as telemarketing, direct mailing, internet marketing services, and service product development for identified customers could be processed; moreover, information on customer perceptions, one of CRM's end products, could be fed back into the comprehensive database. This research was undertaken to find out how effectively CRM has been employed by comparing and analyzing the management performance of each enterprise site and store after introducing CRM to hotel enterprises, using the DEA technique. According to the results, efficiency for each site was calculated from input and output factors to compare the CRM system usage efficiency of L Company's four sites. With regard to the stores, the sizes of the workforce and budget applications differ widely, and so does each store's efficiency. Furthermore, using the CCR model for each site, the DEA technique could assess which sites have comparatively high efficiency in IT project outcomes such as CRM introduction and which do not. Using the BCC model, the outcome of CRM usage at each store of site A, which is representative of L Company, could be comparatively evaluated, revealing which stores maintain high efficiency in using CRM and which do not. The study thus analyzed the cases of CRM introduction at L Company, a hotel enterprise, and evaluated them precisely through DEA.
By introducing CRM, L Company analyzed its customer analysis system and achieved one-to-one tailored services for customers identified through client analysis data. Moreover, it could develop a plan to differentiate services for returning customers by assessing the customer discernment rate. As tasks for the future, research is required on process analysis that can lead to specific outcomes, such as increased sales volumes, by carrying out test marketing and target marketing using CRM. Research is also needed on efficiency evaluation with respect to linkages between the CRM system and other IT solutions such as ERP.
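
The CCR efficiency score used above has a standard linear-programming form. The sketch below is the textbook input-oriented CCR multiplier model, not the study's actual data or code; the two-DMU example data are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o (multiplier form).
    X: (n_dmu, n_inputs) input matrix; Y: (n_dmu, n_outputs) output matrix.
    Maximize u'y_o subject to v'x_o = 1 and u'y_j - v'x_j <= 0 for all j."""
    n, m = X.shape
    s = Y.shape[1]
    # decision variables: m input weights v, then s output weights u
    c = np.concatenate([np.zeros(m), -Y[o]])            # linprog minimizes, so negate
    A_eq = np.concatenate([X[o], np.zeros(s)])[None]    # normalization v'x_o = 1
    b_eq = [1.0]
    A_ub = np.hstack([-X, Y])                           # u'y_j - v'x_j <= 0, each DMU j
    b_ub = np.zeros(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (m + s))
    return -res.fun                                     # the achieved u'y_o

# invented example: DMU 0 produces 1 output from 1 input, DMU 1 from 2 inputs
X = np.array([[1.0], [2.0]])
Y = np.array([[1.0], [1.0]])
e0 = ccr_efficiency(X, Y, 0)
e1 = ccr_efficiency(X, Y, 1)
```

A score of 1 marks a DMU on the efficient frontier; the BCC variant adds a free intercept variable to allow variable returns to scale, which is how the study distinguishes site-level from store-level comparisons.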

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware, which involved two distinct approaches. The first approach is to use application-specific integrated circuit (ASIC) technology: the fuzzy inference method is directly implemented in silicon. The second approach, which is in its preliminary stage, is to use a more conventional microprocessor architecture. Here, we use a quantitative technique employed by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly in silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested; both used a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation, and they executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock cycle and achieved approximately 80,000 Fuzzy Logical Inferences Per Second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM memory. It ran with a 10 MHz clock cycle. The chip has a 3-stage pipeline and initiates the computation of a new inference every 64 cycles. This chip achieved approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); on-chip fuzzification by a table lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D Then Do E, and Then Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the following simpler format using the same datapath: IF A and B Then Do E. With this format the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The fuzzy logic system board places the fuzzy chip in a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality: many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach developed by RISC designers. In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union.
We performed measurements using a MIPS R3000 as a base microprocessor. The initial result is encouraging: we can achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions, and such instructions are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single program, so tailoring an embedded processor for fuzzy control is very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, a fictitious MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip; the software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference. The last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches, even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes: the ASIC approach is extremely expensive. It is, therefore, an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

TABLE I: INFERENCE TIME BY 51 RULES

                    MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
  6000 inferences   125 s                  49 s                        0.0038 s
  1 inference       20.8 ms                8.2 ms                      6.4 ㎲
  FLIPS             48                     122                         156,250
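
The max-min inference mechanism the chips implement, with Mamdani clipping and centroid defuzzification, can be sketched in a few lines over discretized fuzzy sets. This is a software illustration of the method named in the talk, assuming the 64-element set representation mentioned above; it is not the chip's datapath.

```python
import numpy as np

def mamdani_infer(rules, inputs, universe_len=64):
    """Max-min Mamdani inference over fuzzy sets discretized on a common grid.
    rules: list of (antecedent_sets, consequent_set); each set is a 1-D array.
    inputs: fuzzified input vectors, one per antecedent position."""
    out = np.zeros(universe_len)
    for antecedents, consequent in rules:
        # firing strength: min (fuzzy AND) over max-min matches of each antecedent
        strength = min(np.max(np.minimum(a, x)) for a, x in zip(antecedents, inputs))
        # Mamdani implication clips the consequent; rules aggregate by max
        out = np.maximum(out, np.minimum(strength, consequent))
    return out

def centroid_defuzzify(mu, universe):
    """Centroid defuzzification, the on-chip method described above."""
    return float(np.sum(mu * universe) / (np.sum(mu) + 1e-12))
```

The min/max calls in the inner loop are exactly the operations whose promotion to dedicated R3000 instructions yields the measured speed-up.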


End to End Model and Delay Performance for V2X in 5G (5G에서 V2X를 위한 End to End 모델 및 지연 성능 평가)

  • Bae, Kyoung Yul;Lee, Hong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.107-118
    • /
    • 2016
  • The advent of 5G mobile communications, expected in 2020, will enable many services such as the Internet of Things (IoT) and vehicle-to-infra/vehicle/nomadic (V2X) communication. Realizing these services imposes many requirements: reduced latency, high data rate and reliability, and real-time service. In particular, a high level of reliability and delay sensitivity together with an increased data rate are very important for M2M, IoT, and Factory 4.0. Around the world, 5G standardization organizations have considered these services and grouped them to derive technical requirements and service scenarios. The first scenario is broadcast services that use a high data rate, for cases such as sporting events or emergencies. The second scenario supports e-Health, car reliability, and so on; the third scenario concerns VR games with delay sensitivity and real-time techniques. Recently, these groups have been reaching agreement on the requirements for such scenarios and their target levels. Various techniques are being studied to satisfy these requirements and are being discussed in the context of software-defined networking (SDN) as the next-generation network architecture. SDN, which is being standardized by the ONF, basically refers to a structure that separates the signals for the control plane from the packets for the data plane. One of the best examples of low latency and high reliability is an intelligent traffic system (ITS) using V2X. Because a car passes through a small cell of the 5G network very rapidly, the messages to be delivered in an emergency must be transported in a very short time. This is a typical example requiring high delay sensitivity; 5G has to support the high reliability and delay-sensitivity requirements of V2X in the field of traffic control. For these reasons, V2X is a major delay-critical application.
V2X (vehicle-to-infra/vehicle/nomadic) covers all types of communication applicable to roads and vehicles; it refers to a connected or networked vehicle. V2X can be divided into three kinds of communication: first, communication between a vehicle and infrastructure (vehicle-to-infrastructure; V2I); second, communication between a vehicle and another vehicle (vehicle-to-vehicle; V2V); and third, communication between a vehicle and mobile equipment (vehicle-to-nomadic devices; V2N). Further kinds will be added in various fields in the future. Because the SDN structure is under consideration as the next-generation network architecture, the SDN architecture is significant. However, the centralized architecture of SDN can be unfavorable for delay-sensitive services, because a centralized controller must communicate with many nodes and provide the processing power for them. Therefore, in the case of emergency V2X communications, delay-related control functions require a tree-supporting structure. For such a scenario, the architecture of the network that processes the vehicle information is a major variable affecting delay. Because it is difficult to meet the desired level of delay sensitivity with a typical fully centralized SDN structure, research on the optimal size of an SDN for processing the information is needed. This study examined an SDN architecture considering the V2X emergency delay requirements of a 5G network in the worst-case scenario and performed a system-level simulation on the speed of the car, the cell radius, and the cell tier to derive the range of cells for information transfer in the SDN network. In the simulation, because 5G provides a sufficiently high data rate, the information supporting neighboring vehicles was assumed to reach the car without errors. Furthermore, the 5G small cell was assumed to have a cell radius of 50-100 m, and the maximum speed of the vehicle was taken as 30-200 km/h in order to examine the network architecture that minimizes the delay.
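
The worst-case timing constraint implied by those parameters is easy to compute: the delivery and handover budget must fit inside the time a vehicle spends in one small cell. The sketch below assumes the traversed chord equals the cell diameter, a simplification of the paper's system-level simulation.

```python
def cell_dwell_time_s(cell_radius_m: float, speed_kmh: float) -> float:
    """Time for a vehicle to traverse one small cell at constant speed,
    assuming it crosses the full diameter (worst-case straight-line path)."""
    speed_ms = speed_kmh / 3.6          # km/h -> m/s
    return 2 * cell_radius_m / speed_ms

# parameter ranges from the abstract: radius 50-100 m, speed 30-200 km/h
worst = cell_dwell_time_s(50, 200)      # smallest cell, fastest vehicle: ~1.8 s
best = cell_dwell_time_s(100, 30)       # largest cell, slowest vehicle: 24 s
```

So at 200 km/h in a 50 m cell, all emergency signaling for that cell must complete within roughly 1.8 s, which is the kind of bound that drives the choice of SDN controller placement and cell range in the study.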

Application and Evaluation of ITS Map Datum and Location Referencing System for ITS User Services (ITS서비스를 위한 Map Datum 및 위치참조체계 모델의 적용 및 평가)

  • 최기주;이광섭
    • Journal of Korean Society of Transportation
    • /
    • v.17 no.2
    • /
    • pp.55-68
    • /
    • 1999
  • Many ITS services require map databases in digital form to meet their needs. Owing to the dynamic nature of ITS and the sheer diversity of applications, designing and developing spatial databases to meet those needs poses a major challenge to both the public and private sectors. The challenge is further complicated by the need to transfer locationally referenced information between different kinds of databases and spatial data handling systems so that ITS products will work seamlessly across the region and the nation. The purpose of this paper is to develop the framework models commonly used to reference locations in the various applications and systems: the ITS Map Datum and the LRS (Location Referencing System). The ITS Map Datum consists of the ground control points, which are the prime intersections (nodes) of the nationwide road network. In this study, the major points have been determined along with a link-node modeling procedure. The LRS, defined as a system for determining the position (location) of an entity relative to other entities or to some external frame of reference, has also been set up using a CSOM-type method. The method has been implemented using ArcView GIS software over the Kangnam and Seocho districts of Seoul, showing that the implemented LRS scheme can be used successfully elsewhere. With the advent of the K.ITS architecture and services, the procedure can be used to improve data sharing and interoperation among systems, enhancing efficiency in terms of both money and time.
