• Title/Abstract/Keywords: IS Development and Operations

원자력발전소 중대사고 대응 조직에 대한 레질리언스 정량적 모델 개발: AHP 방법 적용 (Development of a Quantitative Resilience Model for Severe Accident Response Organizations of Nuclear Power Plants: Application of AHP Method)

  • 박주영;김지태;이승헌;김종현
    • 한국안전학회지 / Vol. 35, No. 1 / pp.116-129 / 2020
  • Resilience is defined as the intrinsic ability of a system to adjust its functioning prior to, during, or following changes and disturbances, so that it can sustain required operations or functions together with the related systems under both expected and unexpected conditions. Resilience engineering is a relatively new paradigm for safety management that focuses on how to cope with complexity under pressure or disturbance to achieve successful functioning. This study aims to develop a quantitative resilience model for severe accident response organizations of nuclear power plants using the Analytic Hierarchy Process (AHP) method. First, we investigated severe accident response organizations based on a radiation emergency plan in the Korean case and developed a qualitative resilience model for the organizations with resilience-influencing factors identified in the authors' previous studies. Then, a quantitative model for the entire severe accident response organization was developed using the AHP method together with a System Dynamics tool. For the AHP step, several experts working on implementing, regulating, or researching severe accident response contributed their judgments on the relative importance of all possible relations in the model. Finally, a sensitivity analysis was carried out to discuss which factors have the most influence on resilience.
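
For readers unfamiliar with the AHP step described above, here is a minimal sketch of how priority weights are derived from a pairwise-comparison matrix via the principal eigenvector, with Saaty's consistency check. The 3x3 matrix and the number of factors are invented for illustration; the study's actual hierarchy and expert judgments are not reproduced here.

```python
import numpy as np

# Hypothetical 3x3 pairwise-comparison matrix over resilience factors
# (values are illustrative, not data from the study).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Principal eigenvector -> priority weights (standard AHP procedure).
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency: CI = (lambda_max - n) / (n - 1); Saaty's RI = 0.58 for n = 3.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.58
print(w, cr)  # weights sum to 1; CR < 0.1 is conventionally acceptable
```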

소프트웨어/하드웨어 최적화된 타원곡선 유한체 연산 알고리즘의 개발과 이를 이용한 고성능 정보보호 SoC 설계 (Design of a High-Performance Information Security System-On-a-Chip using Software/Hardware Optimized Elliptic Curve Finite Field Computational Algorithms)

  • 문상국
    • 한국정보통신학회논문지 / Vol. 13, No. 2 / pp.293-298 / 2009
  • In this study, a 193-bit elliptic curve cryptographic processor was implemented on an FPGA as a coprocessor. Algorithms and formulas optimized at the program level were proposed and proven; for verification they were analyzed once more in a hardware description language (Verilog) and modified and optimized to suit hardware implementation, because the sequential compile-and-execute nature of a programming language is fundamentally different from direct hardware implementation. The coprocessor, doubly verified both algorithmically and in hardware, was mapped onto an Altera Cyclone II FPGA board with an embedded ARM9 using the Altera embedded system and realized as a prototype chip IP. The implemented finite-field arithmetic algorithms and hardware IPs can be used as library modules for building elliptic curve cryptography IP of 193 bits or more for application to practical cryptographic systems.
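
To give a feel for the finite-field arithmetic such a coprocessor accelerates, here is a minimal software sketch of multiplication in a 193-bit binary field GF(2^193). The reduction trinomial x^193 + x^15 + 1 is the one used by sect193r1-style curves; the paper does not state its curve parameters, so that choice is an assumption.

```python
# Minimal sketch: multiplication in GF(2^193), integers as bit-vectors.
# Reduction polynomial x^193 + x^15 + 1 (an assumption; the paper does
# not specify its field polynomial).
M = 193
R = (1 << 193) | (1 << 15) | 1

def gf2m_mul(a: int, b: int) -> int:
    """Carry-less shift-and-XOR multiply, then reduce mod the field polynomial."""
    acc = 0
    while b:
        if b & 1:
            acc ^= a          # addition in GF(2^m) is XOR
        a <<= 1
        b >>= 1
    # Reduce: fold every set bit at position >= 193 back down.
    for i in range(acc.bit_length() - 1, M - 1, -1):
        if (acc >> i) & 1:
            acc ^= R << (i - M)
    return acc
```

In hardware the same shift-and-XOR structure unrolls into combinational XOR trees, which is why an FPGA coprocessor outperforms the sequential software loop the abstract contrasts it with.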

저수지 가뭄감시를 위한 물공급능력지수의 개발 (Development of Water Supply Capacity Index to Monitor Droughts in a Reservoir)

  • 이동률;문장원;이대희;안재현
    • 한국수자원학회논문집 / Vol. 39, No. 3 / pp.199-214 / 2006
  • Efficient reservoir operation during droughts is an important element of drought planning. This study develops the Water Supply Capacity Index (WSCI), a new drought index for monitoring a reservoir's water supply capacity during drought periods. The WSCI is a quantitative indicator of how long each reservoir can continue to supply water against demand under extreme meteorological conditions. It provides useful information for decision making in reservoir operation during droughts and can be used to set drought stages for the regions supplied by a reservoir. The usefulness of the WSCI was confirmed by comparing the standardized WSCI with the widely used PDSI, SPI, and SWSI.
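
The abstract does not reproduce the WSCI formula, so the sketch below only illustrates the underlying idea of a supply-duration indicator: how long current usable storage, under a pessimistic (dry) inflow, can meet demand. The function name, variables, and formula are hypothetical, not the paper's definition.

```python
def supply_capacity_days(storage_m3: float,
                         dry_inflow_m3_per_day: float,
                         demand_m3_per_day: float,
                         dead_storage_m3: float = 0.0) -> float:
    """Days the reservoir can meet demand under a pessimistic inflow.

    Hypothetical illustration in the spirit of the WSCI; the paper's
    actual formulation may differ.
    """
    net_draft = demand_m3_per_day - dry_inflow_m3_per_day
    if net_draft <= 0:
        return float("inf")  # inflow alone covers demand
    return max(storage_m3 - dead_storage_m3, 0.0) / net_draft

# Example: 12 Mm^3 usable storage, 20,000 m^3/day dry inflow,
# 100,000 m^3/day demand -> 150 days of supply capacity.
print(supply_capacity_days(12e6, 2e4, 1e5))
```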

AR/VR 마이크로 디스플레이 환경을 고려한 JPEG-LS 플랫폼 개발 (A Development of JPEG-LS Platform for Micro Display Environment in AR/VR Device)

  • 박현문;장영종;김병수;황태호
    • 한국전자통신학회논문지 / Vol. 14, No. 2 / pp.417-424 / 2019
  • We propose a design that reduces memory and latency through SBT-based frame compression in a JPEG-LS (LosSless) codec for lossless image compression in AR/VR devices. The proposed JPEG lossless codec consists mainly of context modeling and update, pixel and error prediction, and memory blocks. All blocks are pipelined for real-time image processing, and the LOCO-I compression algorithm is extended with an improved two-dimensional approach based on SBT coding. The proposed STB-FLC technique reduces the Block-RAM size to one third of that in comparable prior work, and the parallel design of the prediction block improves processing speed.
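
The prediction step named above is LOCO-I's median edge detector, which the JPEG-LS standard specifies; a minimal software rendering follows so the hardware-pipelined predictor block is easier to picture. The function name is mine.

```python
def loco_i_predict(a: int, b: int, c: int) -> int:
    """LOCO-I / JPEG-LS median edge detector.

    a = left neighbor, b = above neighbor, c = upper-left neighbor.
    Picks min/max of (a, b) near an edge, planar interpolation otherwise.
    """
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c
```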

조직의 정보 니즈와 ERP 기능과의 불일치 및 그 대응책에 대한 이해: 조직 메모리 이론을 바탕으로 (Understanding the Mismatch between ERP and Organizational Information Needs and Its Responses: A Study based on Organizational Memory Theory)

  • 정승렬;배억호
    • Asia Pacific Journal of Information Systems / Vol. 22, No. 2 / pp.21-38 / 2012
  • Until recently, successful implementation of ERP systems has been a popular topic among ERP researchers, who have attempted to identify its various contributing factors. None of these efforts, however, explicitly recognizes the need to identify disparities that can exist between organizational information requirements and ERP systems. Since ERP systems are in fact "packages," that is, software programs developed by independent software vendors for sale to the organizations that use them, they are designed to meet the general needs of numerous organizations rather than the unique needs of a particular organization, as is the case with custom-developed software. By adopting standard packages, organizations can substantially reduce many of the potential implementation risks commonly associated with custom-developed software. However, it is also true that the nature of the package itself can be a risk factor, as the features and functions of an ERP system may not completely match a particular organization's informational requirements. In this study, based on the organizational memory mismatch perspective derived from organizational memory theory and cognitive dissonance theory, we define the nature of these disparities, which we call "mismatches," and propose that the mismatch between organizational information requirements and ERP systems is one of the primary determinants of successful ERP implementation. Furthermore, we suggest that customization efforts, as a coping strategy for mismatches, can play a significant role in increasing the possibility of success. To examine this contention, we employed a survey-based field study of ERP project team members, resulting in a total of 77 responses. The results show that, as anticipated from the organizational memory mismatch perspective, the mismatch between organizational information requirements and ERP systems has a significantly negative impact on ERP implementation success. This finding confirms our hypothesis that the greater the mismatch, the more difficult successful ERP implementation becomes, and it draws attention to mismatch as a major source of failure in ERP implementation. The study also found that, as a coping strategy for mismatch, the effects of customization are significant: utilizing an appropriate customization method can lead to implementation success. This is somewhat interesting because it runs counter to the argument of some of the literature, and of ERP vendors, that minimal customization (or even none at all) is required for successful ERP implementation. In many ERP projects there is a tendency among ERP developers to adopt default ERP functions without any customization, adhering to the slogan of "the introduction of best practices." This study, however, asserts that successful implementation cannot be expected if ERP systems are not customized where mismatches exist. For a more detailed analysis, we identified three types of mismatches: Non-ERP, Non-Procedure, and Hybrid. Among these, only Non-ERP mismatches (situations in which ERP systems cannot support existing information needs that are currently fulfilled) were found to have a direct influence on ERP implementation. Neither Non-Procedure nor Hybrid mismatches were found to have a significant impact in the ERP context.
These findings provide meaningful insights, since they can serve as the basis for discussing how the ERP implementation process should be defined and which activities it should include. They suggest that ERP developers may not want to include organizational (or business process) changes in the implementation process, since doing so could lead to failed implementation; indeed, this suggestion was borne out when we found that applying process customization led to a higher likelihood of failure. From these discussions, we are convinced that Non-ERP is the only type of mismatch that needs attention during the implementation process, implying that organizational changes must be made before, rather than during, implementation. Finally, this study found that among the various customization approaches, bolt-on development methods in particular seemed to have significantly positive effects. Interestingly again, this finding is not in line with the thinking of vendors in the ERP industry. The vendors' recommendation is to apply as many best practices as possible, thereby minimizing customization, and to utilize bolt-on development methods: they particularly advise against changing the source code and instead recommend, when necessary, programming additional software code in the vendor's computer language. As previously stated, however, our study found active customization, especially bolt-on development methods, to have positive effects on ERP, and found source code changes in particular to have the most significant effects. Moreover, our study found programming additional software to be ineffective, suggesting there is much difference between ERP developers and vendors in their viewpoints and strategies toward ERP customization. In summary, mismatches are inherent in the ERP implementation context and play an important role in determining its success. Considering their significance, this study proposes a new model for successful ERP implementation, developed from the organizational memory mismatch perspective, and provides many insights by empirically confirming the model's usefulness.

세계 주요 공항 운영 효율성 분석: DEA와 Malmquist 생산성 지수 분석을 중심으로 (An analysis of the operational efficiency of the major airports worldwide using DEA and Malmquist productivity indices)

  • 김홍섭;박정림
    • 유통과학연구 / Vol. 11, No. 8 / pp.5-14 / 2013
  • Purpose - We live in a world of constant change and competition. Many airports have specific competitiveness goals and strategies for achieving and maintaining them. The global economic recession, financial crises, and rising oil prices have made facility investment and renewal, and the implementation of appropriate policies, increasingly important in ensuring a competitive advantage for airports. It is thus important to analyze the factors that enhance efficiency and productivity for an airport. This study aims to determine the efficiency levels of 20 major airports in East Asia, Europe, and North America, and to suggest suitable policies and strategies for their development. Research design, data, and methodology - This paper employs the DEA-CCR, DEA-BCC, and DEA-Malmquist productivity index models to determine airport efficiency, using data on the efficiency and productivity of the world's leading airports between 2006 and 2010. The input variables are airport size, the number of runways, passenger terminal size, and cargo terminal size; the output variables are the annual number of passengers and the annual cargo volume. The study uses basic data from the 2010 World Airport Traffic Report (ACI), covering the world's top 20 airports as rated by that report. The expanded DEA Model and the Super Efficiency Model are used to identify the most effective airports among the top 20, and Malmquist productivity index analysis is used to measure airport effectiveness. Results - This study analyzes longitudinal and cross-sectional data on the world's top 20 airports covering 2006 to 2010. A CCR analysis shows that the most efficient airports in 2010 were Gatwick Airport (LGW), Zurich Airport (ZRH), Vienna Airport (VIE), Leonardo da Vinci Fiumicino Airport (FCO), Los Angeles International Airport (LAX), Seattle-Tacoma Airport (SEA), San Francisco Airport (SFO), Hong Kong Airport (HKG), Beijing Capital International Airport (PEK), and Shanghai Pudong Airport (PVG). We find that changes in airport productivity are affected more by technical factors than by airport efficiency. Conclusions - Based on the results, we offer four airport development proposals. First, a benchmark airport needs to be identified. Second, inefficiency must be reduced and high-cost factors need to be managed. Third, airport operations should be enhanced through technical innovation. Finally, scientific demand forecasting and facility preparation must become the focus of attention. This paper has some limitations. Because the Malmquist productivity index rests on a restrictive hypothesis, the identified production change could be over- or under-estimated. Further, because DEA estimates relative efficiency from a limited set of variables, the results cannot be generalized to all airport conditions. To measure airport productivity more accurately, other input variables and environmental variables, such as financial and policy factors, should be included.
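
As a minimal illustration of the DEA-CCR model named above, the sketch below solves the input-oriented CCR envelopment linear program for one decision-making unit, assuming scipy is available. The four airports, two inputs, and one output are invented toy data, not the study's 20-airport dataset.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 4 DMUs (airports), 2 inputs (runways, terminal area),
# 1 output (annual passengers). All numbers invented.
X = np.array([[2, 30], [3, 50], [4, 45], [2, 25]], dtype=float)  # inputs
Y = np.array([[20], [35], [30], [15]], dtype=float)              # outputs

def ccr_efficiency(j0: int) -> float:
    """Input-oriented CCR efficiency of DMU j0 (envelopment form)."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1..lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Inputs:  sum_j lambda_j * x_ij - theta * x_i{j0} <= 0
    A_in = np.hstack([-X[j0].reshape(m, 1), X.T])
    # Outputs: -sum_j lambda_j * y_rj <= -y_r{j0}
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[j0]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun  # theta near 1.0 -> efficient relative to the toy peers

for j in range(4):
    print(f"DMU {j}: theta = {ccr_efficiency(j):.3f}")
```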

중대급 작전지역에서 소형 감시정찰 드론의 수량 변화에 따른 전투 효율 연구 (Study on Combat Efficiency According to Change in Quantity of Small Reconnaissance Drones in the Infantry Company Responsibility Area)

  • 김경수;배용찬
    • 한국시뮬레이션학회논문지 / Vol. 31, No. 4 / pp.23-31 / 2022
  • Innovative technologies arising from the Fourth Industrial Revolution are being actively adopted in the defense sector. In particular, surveillance and reconnaissance with drones can prepare for future troop reductions and dramatically reinforce security capabilities, contributing greatly to the development of military combat power. This study simplifies the combat efficiency of small surveillance/reconnaissance drones and analyzes through simulation how much drones can help company-level military operations. Drones and the enemy move along the most efficient shortest path within a two-dimensional space in which the operational area is quantified by detection probabilities. Taking as the baseline the detection probability of an enemy infiltrating along the least-detectable path, we present the change in detection probability as each additional drone is committed and analyze the resulting combat efficiency. The simulation demonstrates that in a small operational area, such as a company-level area of responsibility, the marginal gain in combat efficiency decreases as more drones are added. This study is expected to help company-level field units operate a limited number of drones efficiently and to support decisions on desirable drone force-integration requirements for improving combat power.
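
A minimal sketch of the path-search step described above, assuming (as the abstract suggests) a 2D grid of per-cell detection probabilities: Dijkstra's algorithm finds the infiltration route that minimizes cumulative detection, using edge cost -log(1 - p) so that path cost corresponds to the probability of never being detected. The grid values are invented; committing an additional drone would amount to raising p in the cells it covers and re-running the search.

```python
import heapq
import math

# Invented 2D grid of per-cell detection probabilities.
P = [
    [0.1, 0.6, 0.2, 0.1],
    [0.1, 0.7, 0.3, 0.1],
    [0.1, 0.2, 0.1, 0.1],
]

def least_detectable_path(start, goal):
    """Dijkstra over the grid; returns the overall detection probability
    along the route that minimizes it (cost per cell = -log(1 - p))."""
    rows, cols = len(P), len(P[0])
    cost = lambda r, c: -math.log(1.0 - P[r][c])
    dist = {start: cost(*start)}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            return 1.0 - math.exp(-d)  # 1 - prod(1 - p) over the path
        if d > dist[(r, c)]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost(nr, nc)
                if nd < dist.get((nr, nc), math.inf):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return None

print(least_detectable_path((0, 0), (2, 3)))
```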

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • 한국지능시스템학회 학술대회논문집 / 한국퍼지및지능시스템학회 Fifth International Fuzzy Systems Association World Congress 93 (1993) / pp.975-976 / 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware, pursued through two distinct approaches. The first approach uses application-specific integrated circuit (ASIC) technology: the fuzzy inference method is implemented directly in silicon. The second approach, which is in its preliminary stage, uses a more conventional microprocessor architecture; here, we use a quantitative technique employed by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly in silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested, both in a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both digital fuzzy inference chips had multiple datapaths for rule evaluation and executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock and achieved approximately 80,000 Fuzzy Logical Inferences Per Second (FLIPS), storing and executing 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM. It ran with a 10 MHz clock, has a three-stage pipeline, and initiates the computation of a new inference every 64 cycles, achieving approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule-set memory (RAM); on-chip fuzzification by table lookup; on-chip defuzzification by the centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the format "IF A and B and C and D THEN Do E and Do F"; with this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the simpler format "IF A and B THEN Do E" using the same datapath; with this format the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The Fuzzy Logic system board places the fuzzy chip in a VMEbus environment; high-level C language functions hide the operational details of the board from the application programmer, who treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast but limited in generality, since many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach developed by RISC designers.
In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As a first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules from the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed measurements using a MIPS R3000 as the base microprocessor. The initial result is encouraging: we could achieve as much as a 2.5-fold increase in inference speed if the R3000 had min and max instructions, which would also be useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single or a few dedicated programs, so specializing an embedded processor for fuzzy control is very effective. Table I shows the measured inference speed of a MIPS R3000 microprocessor, a fictitious MIPS R3000 with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip; the software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds for 6,000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements; the second row is the time required to perform a single inference; the last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes: an ASIC approach is extremely expensive. It is therefore an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes in both run time and design time/cost.

TABLE I: INFERENCE TIME BY 51 RULES

                     MIPS R3000 (regular)   MIPS R3000 (min/max)   UNC/MCNC ASIC
  6,000 inferences   125 s                  49 s                   0.0038 s
  1 inference        20.8 ms                8.2 ms                 6.4 µs
  FLIPS              48                     122                    156,250
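
To make the max-min inference concrete, here is a tiny software model of what the chips implement in silicon: rule strengths via min (fuzzy AND), aggregation of clipped consequents via max, and centroid defuzzification over discretized membership arrays (64 elements, echoing the chip's fuzzy-set representation). The rule base and membership functions are invented.

```python
import numpy as np

N = 64  # each fuzzy set discretized to 64 elements, as on the UNC/MCNC chip
x = np.linspace(0.0, 1.0, N)

def tri(center, width):
    """Triangular membership function sampled on the 64-point universe."""
    return np.clip(1.0 - np.abs(x - center) / width, 0.0, 1.0)

# Invented rule base: ((antecedent A, antecedent B), consequent).
rules = [
    ((tri(0.8, 0.3), tri(0.7, 0.3)), tri(0.9, 0.2)),  # IF A hi AND B hi THEN large
    ((tri(0.3, 0.3), tri(0.4, 0.3)), tri(0.2, 0.2)),  # IF A lo AND B lo THEN small
]

def infer(a_in: float, b_in: float) -> float:
    i = int(a_in * (N - 1)); j = int(b_in * (N - 1))
    agg = np.zeros(N)
    for (mu_a, mu_b), mu_out in rules:
        w = min(mu_a[i], mu_b[j])                     # rule strength: fuzzy AND
        agg = np.maximum(agg, np.minimum(w, mu_out))  # max-min composition
    return float((agg * x).sum() / agg.sum())         # centroid defuzzification

print(infer(0.75, 0.7))
```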

초거대 인공지능의 국방 분야 적용방안: 새로운 영역 발굴 및 전투시나리오 모델링을 중심으로 (Application Strategies of Superintelligent AI in the Defense Sector: Emphasizing the Exploration of New Domains and Centralizing Combat Scenario Modeling)

  • 박건우
    • 문화기술의 융합 / Vol. 10, No. 3 / pp.19-24 / 2024
  • In the future military combat environment, the role and importance of artificial intelligence (AI) in national defense are expanding rapidly in line with the current decline in military manpower and the changing patterns of warfare. In particular, since the appearance of OpenAI's ChatGPT, civilian AI development has been emerging in new domains based on super-giant (hyperscale) AI, i.e., foundation models. The U.S. Department of Defense has organized Task Force Lima under the Chief Digital and AI Office (CDAO) to study the utilization of large language models (LLMs) and generative AI, and militarily advanced countries such as China and Israel are also conducting research on applying hyperscale AI to their armed forces. Our military therefore also needs research on the applicability and application areas of hyperscale AI models in weapon systems. This paper compares the characteristics, strengths, and weaknesses of existing specialized AI and hyperscale AI (foundation models) and identifies new areas in which hyperscale AI can be applied to weapon systems. This study is expected to provide insights for effectively integrating hyperscale AI into defense operations, along with predictions of future application areas and potential challenges, and to contribute to shaping defense policy development and international security strategy in an era of advanced AI.

한국형 워리어플랫폼 아키텍처 개발 연구 (Development of Korean Warrior Platform Architecture)

  • 김욱기;신규용;조성식;백승호;김용철
    • 융합정보논문지 / Vol. 11, No. 5 / pp.111-117 / 2021
  • With the future battlefield environment changing rapidly due to the fast development of advanced science and technology, including the Fourth Industrial Revolution, the Ministry of National Defense has recently been striving to respond actively to social problems such as the decline in military service resources and the shortening of service periods, and to establish a human-centered value culture. As part of this effort, and in connection with defense reform, the Ministry is redefining the role of the Army and pursuing the introduction of the Warrior Platform, a next-generation individual combat system, to maximize the Army's combat power. This paper examines future ground-operation patterns and concepts and, through case analyses of foreign individual combat systems, presents an optimal Warrior Platform architecture suited to the Korean military. To this end, we analyze the essential capabilities required of individual combatants and the capabilities required by unit type, present a concrete phased plan for integrating and interconnecting the Warrior Platform, and propose an efficient direction for the program by presenting data-flow and power-connection diagrams for the equipment that requires integration and interworking.