• Title/Summary/Keyword: Next Generation Information Technology (IT)

Search Results: 447

Software Architecture Design for Integrated Maritime DGPS Reference Station & Integrity Monitor (해양 DGPS 기준국과 감시국 소프트웨어의 통합을 위한 아키텍처 설계)

  • Jang, Won-Seok;Seo, Ki-Yeol;Kim, YoungKi
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2013.05a
    • /
    • pp.427-429
    • /
    • 2013
  • The DGPS reference station is a national infrastructure that generates correction information for Differential GPS. South Korea currently operates software-based DGPS reference stations built on the next-generation DGPS architecture, introduced to replace the older hardware-based reference stations. However, the software-based reference station proposed by the USCG changed only the outward form of the system rather than its internal architecture, and therefore cannot fully exploit the advantages of a software-based design. In this paper, we propose a new software architecture that integrates the DGPS reference station with the integrity monitor. The proposed architecture is simpler than the existing one and is therefore well suited to maritime DGPS reference stations, which require a simpler structure.

Intrusion Detection Method Using Unsupervised Learning-Based Embedding and Autoencoder (비지도 학습 기반의 임베딩과 오토인코더를 사용한 침입 탐지 방법)

  • Junwoo Lee;Kangseok Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.8
    • /
    • pp.355-364
    • /
    • 2023
  • As advanced cyber threats continue to increase, it is difficult to detect new types of attacks with existing pattern- or signature-based intrusion detection methods, so research on anomaly detection using data-driven artificial intelligence techniques is growing. Supervised anomaly detection methods are hard to apply in real environments because they require sufficient labeled data for training; unsupervised methods, which learn only from normal data and flag anomalies based on patterns in the data itself, have therefore been studied actively. This study aims to extract latent vectors that preserve useful sequence information from sequence log data and to build an anomaly detection model on the extracted latent vectors. Word2Vec was used to create a dense vector representation reflecting the characteristics of each sequence, and unsupervised autoencoders were developed to extract latent vectors from the sequences of dense vectors. Three autoencoder variants were examined: a denoising autoencoder based on the recurrent GRU (Gated Recurrent Unit), which suits sequence data; a one-dimensional convolutional autoencoder, which addresses the limited short-term memory of the GRU; and an autoencoder combining the GRU with one-dimensional convolution. The experiments used the time-series-based NGIDS (Next Generation IDS Dataset) data. The combined GRU and one-dimensional convolution autoencoder outperformed the GRU-only and convolution-only models: it was more efficient in training time for extracting useful latent patterns from the training data, and it showed more stable anomaly detection performance with smaller fluctuations.
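
The combined GRU and one-dimensional convolution autoencoder described above can be sketched roughly as follows. This is a minimal illustration under assumed settings (64-dimensional Word2Vec embeddings, 32 hidden units, PyTorch), not the authors' implementation; training on normal sequences only and thresholding the reconstruction error then yields the anomaly score.

```python
# Minimal sketch of a denoising autoencoder combining Conv1d and GRU for
# sequences of Word2Vec embeddings (all sizes are assumptions, not the paper's).
import torch
import torch.nn as nn

class ConvGRUAutoencoder(nn.Module):
    def __init__(self, emb_dim=64, hidden=32):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, hidden, kernel_size=3, padding=1)  # local patterns
        self.encoder_rnn = nn.GRU(hidden, hidden, batch_first=True)       # sequence summary
        self.decoder_rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, emb_dim)

    def forward(self, x):                                  # x: (batch, seq_len, emb_dim)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)   # (batch, seq_len, hidden)
        _, z = self.encoder_rnn(h)                         # latent vector z: (1, batch, hidden)
        z_seq = z.transpose(0, 1).repeat(1, x.size(1), 1)  # repeat latent at every time step
        d, _ = self.decoder_rnn(z_seq)
        return self.out(d)                                 # reconstructed embeddings

model = ConvGRUAutoencoder()
seq = torch.randn(8, 100, 64)                     # a batch of embedded log sequences
noisy = seq + 0.1 * torch.randn_like(seq)         # denoising: corrupt input, reconstruct clean target
loss = nn.functional.mse_loss(model(noisy), seq)  # reconstruction error = anomaly score at test time
```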

Deployment Strategies of Cloud Computing System for Defense Infrastructure Enhanced with High Availability (고가용성 보장형 국방 클라우드 시스템 도입 전략)

  • Kang, Ki-Wan;Park, Jun-Gyu;Lee, Sang-Hoon;Park, Ki-Woong
    • The Journal of Korean Institute of Next Generation Computing
    • /
    • v.15 no.3
    • /
    • pp.7-15
    • /
    • 2019
  • Cloud computing markets are growing rapidly as organizations worldwide pursue cost savings and business innovation through ICT. In line with this paradigm, Korea is working to introduce cloud computing in various areas, including the public and defense sectors. In the defense sector, the Defense Integrated Data Center (DIDC) was established in 2015 by consolidating the computing centers of the individual armed services, and it now provides IaaS-type cloud services to some of the systems it hosts. Ensuring availability is a critical issue for DIDC and for future defense cloud systems, because failures such as network delays or resource outages are directly linked to outcomes on the battlefield. However, guaranteeing the highest level of availability for every system in the defense cloud can be inefficient and can erode the benefits of adopting a cloud system in the first place. In this paper, we classify and define availability levels for defense cloud systems step by step, and propose strategies for introducing erasure coding, failure-acceptance mechanisms, and disaster recovery technology appropriate to each availability level.
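
As background on the erasure-coding strategy mentioned above, the sketch below shows the simplest case: a single XOR parity block that lets any one lost data block be rebuilt from the survivors. Production systems typically use Reed-Solomon-style codes that survive multiple simultaneous failures; this is an illustrative example only, not a design taken from the paper.

```python
# Toy illustration of erasure coding: one XOR parity block over equal-sized
# data blocks allows reconstruction of any single missing block.
def make_parity(blocks):
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def recover(surviving_blocks, parity):
    missing = bytearray(parity)                  # XOR of parity with survivors = missing block
    for block in surviving_blocks:
        for i, b in enumerate(block):
            missing[i] ^= b
    return bytes(missing)

data = [b"alpha", b"bravo", b"charl"]            # three equal-sized data blocks
parity = make_parity(data)
assert recover([data[0], data[2]], parity) == data[1]   # block 1 lost, then rebuilt
```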

Model-based Efficiency Analysis for Photovoltaic Generation O&M: A Case Study (태양광발전 운전 및 유지보수를 위한 모델기반 효율분석: 사례연구)

  • Yu, Jung-Un;Park, Sung-Won;Son, Sung-Yong
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.15 no.5
    • /
    • pp.405-412
    • /
    • 2022
  • This paper studies a method for estimating power loss and classifying its causes in order to improve photovoltaic generation efficiency through operation and maintenance (O&M). Because photovoltaic systems are installed under widely varying climatic conditions worldwide, O&M technologies suited to the characteristics of each installation site are required. Existing studies on photovoltaic efficiency have actively quantified short-term losses caused by environmental factors such as high temperature, dust accumulation, precipitation, humidity, and wind speed, but analyses of the overall impact from a long-term operational perspective remain limited. In this study, the loss classification scheme was re-established along the power flow of a photovoltaic plant to derive a comprehensive efficiency model for long-term operation, and power losses were estimated through case studies of regions with distinct climate conditions. The analysis shows that the average annual improvement potential for soiling loss was 26.9% for Doha, 7.2% for Death Valley, and 3.8% for Seoul. Cumulative aging loss reached 6.6% by the twentieth year of operation. The average annual improvement potential for temperature loss was 2.9% for Doha, 1.9% for Death Valley, and 0.2% for Seoul.
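
As a simple illustration of how a loss classification along the power flow can be turned into an efficiency estimate, the sketch below compounds per-factor losses multiplicatively; every number in it is a made-up example value, not a figure from the paper.

```python
# Toy efficiency model: per-factor losses along the PV power flow compound
# multiplicatively into the delivered energy (all values are illustrative).
def pv_yield_kwh(irradiation_kwh_m2, module_eff, losses):
    """losses: fractional losses per factor, e.g. {'soiling': 0.07, ...}"""
    eff = module_eff
    for frac in losses.values():
        eff *= (1.0 - frac)
    return irradiation_kwh_m2 * eff

annual = pv_yield_kwh(
    irradiation_kwh_m2=1800.0,                   # plane-of-array irradiation (example)
    module_eff=0.20,
    losses={"soiling": 0.07, "temperature": 0.02, "aging": 0.03},
)
print(f"estimated yield: {annual:.1f} kWh per m^2 of module area")
```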

High-Performance Low-Complexity Iterative BCH Decoder Architecture for 100 Gb/s Optical Communications (100 Gb/s급 광통신시스템을 위한 고성능 저면적 반복 BCH 복호기 구조)

  • Yang, Seung-Jun;Yeon, Jaewoong;Lee, Hanho
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.7
    • /
    • pp.140-148
    • /
    • 2013
  • This paper presents an iterative Bose-Chaudhuri-Hocquenghem (i-BCH) code and a high-speed decoder architecture for 100 Gb/s optical communications. The proposed architecture features a very high data processing rate as well as excellent error correction capability. The proposed 6-iteration i-BCH code structure with an interleaving method allows the decoder to achieve a net coding gain of 9.34 dB at a decoder output bit error rate of $10^{-15}$, compensating for severe transmission quality degradation. The proposed high-speed i-BCH decoder architecture was synthesized using 90-nm CMOS technology; it operates at a clock frequency of 430 MHz and achieves a data processing rate of 100 Gb/s. It is therefore a candidate for next-generation forward error correction (FEC) schemes for 100 Gb/s optical communications.
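
For context, the net coding gain quoted above is conventionally defined, for a target output bit error rate $BER_{out}$ (here $10^{-15}$), the input bit error rate $BER_{in}$ that the code can correct down to that target, and code rate $R$, as

$$NCG = 20\log_{10}\!\left(\mathrm{erfc}^{-1}(2\,BER_{out})\right) - 20\log_{10}\!\left(\mathrm{erfc}^{-1}(2\,BER_{in})\right) + 10\log_{10}R \quad \text{[dB]}.$$

This is the standard definition used when comparing optical FEC schemes and is given here as background, not as a formula taken from the paper.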

Effective address assignment method in hierarchical structure of Zigbee network (Zigbee 네트워크 계층 구조에서의 효율적인 주소 할당 방법)

  • Kim, Jae-Hyun;Hur, Soo-Jung;Kang, Won-Sek;Lee, Dong-Ha;Park, Yong-Wan
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.44 no.10
    • /
    • pp.20-28
    • /
    • 2007
  • A Zigbee sensor network based on IEEE 802.15.4 identifies each sensor node with a 2-byte short address carried in every transmitted packet. Sensor networks require low power, low cost, and large numbers of nodes spread over a wide physical area, and Zigbee is a strong candidate technology for the coming ubiquitous-computing generation; it suffers, however, from an address allocation problem. In the standard established by the Zigbee Alliance, addresses are allocated with the Cskip algorithm. The drawback of Cskip is that address allocation is constrained by the maximum hop depth and the maximum number of child nodes, so although the 16-bit address space can in theory accommodate 65,536 sensor nodes, in practice only about 500 nodes can be deployed, limiting the expansion of the Zigbee network. We propose an address allocation method for Zigbee sensor networks that uses coordinate values instead.
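
For reference, the Cskip allocation that the abstract criticizes reserves, for each router at depth d, an address block whose size depends on the maximum number of children Cm, maximum number of routers Rm, and maximum depth Lm. The sketch below follows the distributed address assignment of the Zigbee specification; the parameter values in the example are illustrative only.

```python
# Zigbee distributed (Cskip) address assignment, sketched for illustration.
def cskip(depth, Cm, Rm, Lm):
    """Size of the address block reserved for a router child at this depth."""
    if Rm == 1:
        return 1 + Cm * (Lm - depth - 1)
    return (1 + Cm - Rm - Cm * Rm ** (Lm - depth - 1)) // (1 - Rm)

def router_child_addr(parent_addr, depth, n, Cm, Rm, Lm):
    """Address of the n-th (1-based) router child of a parent at `depth`."""
    return parent_addr + 1 + (n - 1) * cskip(depth, Cm, Rm, Lm)

def end_device_addr(parent_addr, depth, n, Cm, Rm, Lm):
    """Address of the n-th (1-based) end-device child of a parent at `depth`."""
    return parent_addr + Rm * cskip(depth, Cm, Rm, Lm) + n

# Example: Cm = 4 children, Rm = 4 routers, Lm = 3 levels -> Cskip(0) = 21,
# so the coordinator's router children receive addresses 1, 22, 43 and 64.
print([router_child_addr(0, 0, n, 4, 4, 3) for n in (1, 2, 3, 4)])
```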

The Technological Competitiveness Analysis of Evolving Artificial Intelligence by Using the Patent Information (특허 분석을 통한 인공지능 기술경쟁력 변화 과정에 관한 연구 - 주요 5개국을 중심으로 -)

  • Huang, Minghao;Nam, Eun Young;Park, Se Hoon
    • Journal of the Korean Society of Systems Engineering
    • /
    • v.18 no.1
    • /
    • pp.66-83
    • /
    • 2022
  • Artificial intelligence (AI) is assumed to be one of the next-generation technologies that will determine the technological competitiveness and strategic advantage of a country. Using patent data, this study provides a comparative analysis of the technological competitiveness of artificial intelligence at different stages of its development among the five largest intellectual property offices in the world (IP5). All AI patent data from 1956 to 2019 were analyzed according to the classification system presented in the "WIPO Technology Trends 2019: Artificial Intelligence" report published by the World Intellectual Property Organization (WIPO) in 2019. The results show that China has already surpassed the United States in the number of patent applications in the field of artificial intelligence. However, in the US, European, Japanese, and Korean jurisdictions, the technological competitiveness of the United States remains far ahead of China. Interestingly, Korea's technological competitiveness is also rising rapidly, and its technology strength is ahead of China in jurisdictions outside China. The significance of this study is that the temporal and spatial evolution of the technological competitiveness of major countries in artificial intelligence was examined within a macroscopic framework using the technology strength (TS) index, and the differences among countries were compared.

Bus Architecture Analysis for Hardware Implementation of Computer Generated Hologram (컴퓨터 생성 홀로그램의 하드웨어 구현을 위한 버스 구조 분석)

  • Seo, Yong-Ho;Kim, Dong-Wook
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.4
    • /
    • pp.713-720
    • /
    • 2012
  • Holography has recently received much attention as a next-generation visual technology. A hologram can be obtained by optical capture, but in recent years it has mainly been produced computationally, a method known as computer-generated holography (CGH). Because CGH requires an enormous amount of computation, a pure software implementation cannot run in real time, so an FPGA or GPU is needed for real-time operation. A hardware implementation, however, cannot reach the same quality as software because of the limited bit widths inside the system. In this paper, we analyze the bit widths that minimize degradation of the hologram while reducing hardware resources, and we propose guidelines for the hardware implementation of CGH. To do this, we perform fixed-point simulations for the main internal variables and arithmetic operations, analyze the numerical and visual results, and present optimal bit widths for different application fields.
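
A fixed-point simulation of the kind described can be sketched as follows: an internal variable (here a cosine term, used as an assumed stand-in for a CGH intermediate value) is quantized to a candidate bit width and the error against the floating-point reference is measured. This is an illustrative sketch, not the authors' analysis tool.

```python
# Sketch of a fixed-point bit-width sweep: quantize a variable to a signed
# fixed-point grid and report the quantization SNR (all settings illustrative).
import numpy as np

def quantize(x, int_bits, frac_bits):
    scale = 2.0 ** frac_bits
    max_val = 2.0 ** (int_bits - 1) - 1.0 / scale
    min_val = -2.0 ** (int_bits - 1)
    return np.clip(np.round(x * scale) / scale, min_val, max_val)

reference = np.cos(np.random.uniform(-np.pi, np.pi, 10_000))   # stand-in internal variable
for frac_bits in (6, 8, 10, 12):
    err = reference - quantize(reference, int_bits=2, frac_bits=frac_bits)
    snr_db = 10 * np.log10(np.mean(reference ** 2) / np.mean(err ** 2))
    print(f"{frac_bits} fractional bits -> quantization SNR {snr_db:.1f} dB")
```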

Big Wave in R&D in Quantum Information Technology - Quantum Technology Flagship (양자정보기술 연구개발의 거대한 물결)

  • Hwang, Y.;Baek, C.H.;Kim, T.;Huh, J.D.
    • Electronics and Telecommunications Trends
    • /
    • v.34 no.1
    • /
    • pp.75-85
    • /
    • 2019
  • Quantum technology is undergoing a revolution. Theoretically, strange phenomena of quantum mechanics, such as superposition and entanglement, can enable high-performance computing, unconditionally secure communication, and high-precision sensing. Such theoretical possibilities have been examined over the last few decades; the goal now is to bring these quantum advantages into daily life. Europe, where quantum mechanics was born about 100 years ago, is striving to position itself at the forefront of this quantum revolution. The European Commission has therefore decided to invest 1 billion EUR over 10 years and has initiated the ramp-up phase with 20 projects in the fields of communication, simulation, sensing and metrology, computing, and fundamental science. This program, approved by the European Commission, is called the "Quantum Technology Flagship" program. Its first objective is to consolidate and expand European scientific leadership and excellence in quantum research. Its second objective is to kick-start a competitive European industry in quantum technology and develop future global industrial leaders. Its final objective is to make Europe a dynamic and attractive region for innovative and collaborative research and business in quantum technology. The program also trains next-generation quantum engineers to achieve a world-leading position in quantum technology. Its most important principle, however, is to realize quantum technology and bring it to market; to this end, the program emphasizes that academic institutes and industry in Europe must collaborate on research and development, in the belief that without commercialization no technology can be developed to its full potential. In this study, we review the strategy of the Quantum Technology Flagship program and the 20 projects of its ramp-up phase.

IP Paging for Data-receiving Service in HPi Network (HPi망에서의 착신서비스를 위한 IP페이징 기법)

  • Jeong Tae Eui;Na Jee Hyeon;Kim Yeong Jin;Song Byung Kwon
    • The KIPS Transactions:PartC
    • /
    • v.12C no.2 s.98
    • /
    • pp.235-242
    • /
    • 2005
  • As demands on wireless networks continue to increase, it is necessary to improve the power efficiency of wireless terminals and to reduce network overhead. To address these problems, we propose a paging technique and a paging-area structure based on a joint access point for the HPi (High-speed Portable Internet) network, which is being studied as the domestic next-generation IP packet data network. Applying this paging technique to the HPi network reduces the registration cost incurred while a dormant terminal moves around, as well as the cost of reporting the terminal's location through the joint access point. From the user's point of view, the proposed technique improves power efficiency; from the network operator's point of view, it reduces network overhead and makes it easy to rearrange joint APs as users' movement patterns change.