• Title/Summary/Keyword: Science and Technology Data


A Study on Promotion Strategies for Examining Platforms of Convergence Contents (방송.통신 융합 환경에 적합한 다중 플랫폼 융합 콘텐츠 육성 전략)

  • Park, Soo-Ile;Shin, Dong-Pil;Chun, Sang-Kwon
    • Proceedings of the Korean Society of Computer Information Conference / 2009.01a / pp.197-202 / 2009
  • Advances in science and technology are changing social and cultural trends and opening up new opportunities. Information and communication technology is breaking down the boundaries between broadcasting and telecommunications and between telecommunications and content, enabling convergence and stimulating our senses and imagination toward new cultural possibilities. Under the name of broadcasting-telecommunications convergence, diverse efforts are under way in every area: broadcasting and telecommunications, TV and PC, online and offline. Because the convergence of broadcasting and telecommunications can create new products and new markets, much as the historical opening of new continents did, businesses at home and abroad are racing toward this land of opportunity. The accompanying convergence of content is equally remarkable: beyond the fusion of genres such as games and films or documentaries and dramas, films are now produced on mobile devices, games are combined with social networks, and music is even distributed inside games, a convergence of distribution itself. This spread of convergence is creating a media landscape in which platforms can cross, connect, and integrate, and the combination of the Internet and TV links delivery mechanisms capable of running diverse applications, giving rise to numerous forms of multi-platform. As a result, an integrated platform environment is broadly taking shape in which broadcasting and Internet services are used regardless of network, delivery platform, or choice of device. Multi-platform convergence content suited to this environment must therefore satisfy user needs and the demands of new business models, and interfaces must be standardized with consistent technology to maintain interoperability across communications and services. Because such content connects with multimedia services over high-speed data networks and services based on IP multicast, and its ripple effects on related materials industries and adjacent fields are substantial, this paper examines appropriate promotion strategies for it.


Analysis of Traffic and Attack Frequency in the NURION Supercomputing Service Network (누리온 슈퍼컴퓨팅서비스 네트워크에서 트래픽 및 공격 빈도 분석)

  • Lee, Jae-Kook;Kim, Sung-Jun;Hong, Taeyoung
    • KIPS Transactions on Computer and Communication Systems / v.9 no.5 / pp.113-120 / 2020
  • KISTI (Korea Institute of Science and Technology Information) provides HPC (High Performance Computing) services to users in universities, institutes, government, affiliated organizations, companies, and so on. NURION, the supercomputer that launched its official service on Jan. 1, 2019, is the fifth supercomputer established by KISTI and has a computational performance of 25.7 petaflops. Understanding how the supercomputing service is used and how researchers use it is critical for system operators and managers, and monitoring and analyzing network traffic is central to this. In this paper, we briefly introduce the NURION system and the supercomputing service network with its security configuration, and we describe the monitoring system that checks the status of supercomputing services in real time. We analyze inbound/outbound traffic and abnormal (attack) IP address data collected in the NURION supercomputing service network over 11 months (from January to November 2019) using time series and correlation analysis methods.
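The correlation step of such a traffic analysis can be sketched in outline. The series below are illustrative assumptions (toy monthly inbound volumes and abnormal-IP counts), not the paper's measurements:

```python
# Hedged sketch: correlate a traffic series with an attack-frequency series.
# The values and variable names are invented for illustration only.
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy 11-month series: inbound traffic (TB) and abnormal (attack) IP counts.
inbound = [52, 48, 61, 70, 66, 73, 80, 77, 69, 75, 82]
attacks = [110, 95, 130, 150, 140, 160, 175, 168, 145, 158, 180]
r = pearson(inbound, attacks)
```

A coefficient near 1 would suggest attack frequency scales with traffic volume; in practice the study works from real collected data rather than toy series.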

Development of Fine Dust Analysis Technology using IoT Sensor (IoT 센서를 활용한 미세먼지 분석 기술 개발)

  • Shin, Dong-Jin;Lee, Jin;Heo, Min-Hui;Hwang, Seung-Yeon;Lee, Yong-Soo;Kim, Jeong-Joon
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.1 / pp.121-129 / 2021
  • In addition to the yellow dust blowing in from China, fine dust has become a hot topic in Korea through news and other media. Because fine dust generated outdoors flows indoors, the purchase rate of air purifiers is increasing. An air purifier uses a filter internally, and a sensor notifies the user through an LED alarm when the filter should be replaced. However, no current product measures how much the filter's capacity has degraded and determines the blower pressure at which to operate. Therefore, in this paper, data are generated directly using an Arduino, a fine dust sensor, and a differential pressure sensor. In addition, a program was developed in Python to calculate how worn the filter is and to analyze the blower's wind power according to the filter condition, based on the measured dust and pressure values.
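The underlying idea can be sketched as follows. This is an assumption-laden illustration, not the paper's program: the reference pressures, thresholds, and function names are all invented, and the real system reads live sensor values from the Arduino.

```python
# Illustrative sketch: a clogging filter raises the pressure drop across it,
# so comparing the measured differential pressure with assumed clean and
# end-of-life reference values yields a rough "filter remaining" fraction
# and a compensating blower duty cycle.

P_CLEAN = 50.0   # Pa across a new filter (assumed reference value)
P_FULL = 250.0   # Pa at end of filter life (assumed reference value)

def filter_remaining(pressure_drop_pa):
    """Estimate remaining filter capacity (0.0-1.0) from differential pressure."""
    used = (pressure_drop_pa - P_CLEAN) / (P_FULL - P_CLEAN)
    return max(0.0, min(1.0, 1.0 - used))

def blower_duty(pressure_drop_pa, base_duty=0.4):
    """Raise the fan duty cycle to compensate for a clogged filter (capped at 100%)."""
    remaining = filter_remaining(pressure_drop_pa)
    return min(1.0, base_duty / max(remaining, 0.25))

remaining = filter_remaining(150.0)  # a reading halfway between clean and full
duty = blower_duty(150.0)
```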

A method of calculating the number of fishing operation days for fishery compensation using fishing vessel trajectory data (어선 항적데이터를 활용한 어업손실보상을 위한 조업일수 산출 방법)

  • KIM, Kwang-Il;KIM, Keun-Huyng;YOO, Sang-Lok;KIM, Seok-Jong
    • Journal of the Korean Society of Fisheries and Ocean Technology / v.57 no.4 / pp.334-341 / 2021
  • Fishery compensation for marine spatial planning measures, such as ships' routeing systems and offshore wind farms, requires objective data on whether fishing vessels operate in the target area, yet no previous research has calculated the number of fishing operation days scientifically. This study proposes a novel method for calculating the number of fishing operation days from fishing vessel trajectory data when investigating fishery compensation in marine spatial planning areas. The number of days is calculated by multiplying the average reporting interval of the trajectory data, the number of collected data points, a status weighting factor, and a compensation weighting factor based on each fishing vessel's location. In particular, for the compensation of driftnet fishery, the calculation also considered the daily average number of large vessels departing the port and the fishing hours lost to avoiding collisions with them. The proposed method was applied to the ships' routeing area of Jeju outer port, and yearly average fishing operation days were calculated from three years of data (2017 to 2019). As a result, the yearly average fishing operation days for the compensation of each fishing village fraternity varied from 0.0 to 39.0 days. The proposed method can serve as an objective indicator for fishery compensation in various marine spatial planning areas.
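The multiplicative estimate described above can be sketched as a one-line computation. The symbol names, the toy inputs, and the seconds-to-days conversion are assumptions for illustration; in the study, the weighting factors come from each vessel's status and location.

```python
# Minimal sketch of the multiplicative fishing-day estimate:
# days = (avg reporting interval) x (report count) x (status weight)
#        x (compensation weight), converted from seconds to days.

SECONDS_PER_DAY = 86_400

def fishing_operation_days(avg_report_interval_s, n_reports,
                           status_weight, compensation_weight):
    """Estimated fishing time (in days) attributable to a compensation area."""
    active_seconds = avg_report_interval_s * n_reports
    return active_seconds * status_weight * compensation_weight / SECONDS_PER_DAY

# Toy example: 60 s reporting interval, 43,200 reports inside the area,
# 0.8 of reports in "fishing" status, full compensation weight.
days = fishing_operation_days(60, 43_200, 0.8, 1.0)
```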

A Study on Determining the Optimal Replacement Interval of the Rolling Stock Signal System Component based on the Field Data (필드데이터에 의한 철도차량 신호장치 구성품의 최적 교체주기 결정에 관한 연구)

  • Byoung Noh Park;Kyeong Hwa Kim;Jaehoon Kim
    • Journal of the Korean Society of Safety / v.38 no.2 / pp.104-111 / 2023
  • Rolling stock maintenance, which focuses on preventive maintenance, is typically implemented considering the potential harm to passengers in the event of failure. The cost of preventive maintenance over the life cycle of a rolling stock is 60%-75% of the initial purchase cost, so ensuring stability while reducing maintenance costs is essential for economical operation. In particular, private railroad operators must reduce the government support budget by utilizing railroad resources effectively and cutting maintenance costs. Accordingly, this study analyzes the reliability characteristics of components using field data and addresses the problem of determining an economical replacement interval that considers the timing of scrapping the vehicles. The procedure for determining the optimal replacement interval involves five steps. Using the decision model, the optimal replacement interval for the onboard signal device components of the "A" line trains is calculated from field data such as failure records, preventive maintenance costs, and corrective maintenance costs. The analysis indicates an optimal replacement interval of 9 years for the mileage meter, less than its design durability of 15 years, whereas the phase signal, which fails rarely over its life cycle, has an optimal interval equal to its design durability of 15 years.
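The trade-off being optimized can be illustrated with the standard age-replacement cost model; this is a generic textbook sketch, not the paper's five-step decision model, and the Weibull parameters and costs below are assumed toy values.

```python
# Generic age-replacement sketch: with a Weibull life distribution, the
# long-run cost rate of preventively replacing at age T is
#   C(T) = (Cp*R(T) + Cf*F(T)) / integral_0^T R(t) dt,
# where R is reliability, F = 1-R, Cp/Cf are preventive/failure costs.
# The optimal interval minimizes C(T).
import math

def weibull_R(t, shape, scale):
    """Weibull reliability (survival) function."""
    return math.exp(-((t / scale) ** shape))

def cost_rate(T, shape, scale, cp, cf, steps=1000):
    """Expected cost per unit time when replacing preventively at age T."""
    dt = T / steps
    # Trapezoidal integral of R(t) on [0, T] = expected cycle length.
    mean_cycle = sum(
        0.5 * (weibull_R(i * dt, shape, scale)
               + weibull_R((i + 1) * dt, shape, scale)) * dt
        for i in range(steps)
    )
    R = weibull_R(T, shape, scale)
    return (cp * R + cf * (1.0 - R)) / mean_cycle

# Assumed toy numbers: wear-out behavior (shape 3), characteristic life
# 15 years, failure repair ten times as costly as preventive replacement.
best_T = min(range(1, 16), key=lambda T: cost_rate(T, 3.0, 15.0, 1.0, 10.0))
```

With these assumed parameters the optimum falls well short of the 15-year design life, mirroring the paper's finding that a component's economical replacement interval can be shorter than its designed durability.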

Navigational Anomaly Detection using a Traffic Network Model (교통 네트워크 모델 기반 이상 운항 선박 식별에 관한 연구)

  • Jaeyong Oh;Hye-Jin Kim
    • Journal of the Korean Society of Marine Environment & Safety / v.29 no.7 / pp.828-835 / 2023
  • Vessel traffic service operators (VTSOs) need to analyze the maritime traffic situation in the vessel traffic service (VTS) area quickly and accurately and provide information to vessels. However, if traffic increases rapidly, the workload of VTSOs grows, and they may not be able to provide adequate information. It is therefore essential to develop VTSO support technologies that reduce their workload and deliver consistent information. In this paper, we propose a model for automatically detecting abnormal vessels in the VTS area. The proposed model consists of a positional model and a contextual model and is specifically optimized for the traffic characteristics of the target area. The implemented model was tested using real-world data collected at a test center (Daesan Port VTS). Our experiments confirmed that the model can automatically detect various abnormal situations, and the results were validated through expert evaluation.
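One simple way to realize a positional model is a grid-density check over historical traffic; this sketch is an assumption for illustration (the paper's actual positional and contextual models are not specified here), and the cell size, threshold, and coordinates are invented.

```python
# Simplified positional-model sketch: bin historical position reports into a
# lat/lon grid; a new report falling in a cell that historical traffic almost
# never visited is flagged as a positional anomaly.
from collections import Counter

CELL = 0.01  # grid cell size in degrees (assumed)

def to_cell(lat, lon):
    return (round(lat / CELL), round(lon / CELL))

def build_density(history):
    """Count historical position reports per grid cell."""
    return Counter(to_cell(lat, lon) for lat, lon in history)

def is_positional_anomaly(density, lat, lon, min_visits=3):
    """Flag positions in cells visited fewer than min_visits times."""
    return density[to_cell(lat, lon)] < min_visits

# Toy history: traffic concentrated along a lane near (37.00, 126.50).
history = [(37.00 + 0.001 * i, 126.50) for i in range(100)]
density = build_density(history)
on_lane = is_positional_anomaly(density, 37.05, 126.50)    # inside the lane
off_lane = is_positional_anomaly(density, 37.30, 126.90)   # far off the lane
```

A contextual model would add checks against expected speed, course, or vessel type for the area, complementing the purely positional test above.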

Massive Electronic Record Management System using iRODS (iRODS를 이용한 대용량 전자기록물 관리 시스템)

  • Han, Yong-Koo;Kim, Jin-Seung;Lee, Seung-Hyun;Lee, Young-Koo
    • Journal of KIISE: Computing Practices and Letters / v.16 no.8 / pp.825-836 / 2010
  • The advance of electronic records has brought great changes to records management systems. One of the biggest changes is the transition from passive to automated management, which handles massive volumes of records more efficiently. The integrated Rule-Oriented Data System (iRODS) is rule-oriented data grid software that provides an infrastructure for building massive archives through virtualization and allows users to define rules for data distribution and backup. iRODS is therefore an ideal tool for building an electronic records management system that manages records automatically. In this paper we describe the issues related to the design and implementation of an electronic records management system using iRODS, and we propose a system that automatically distributes and backs up records according to their types by defining iRODS rules. The system also stores and retrieves metadata using the iRODS Catalog (iCAT) database.
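The rule-driven idea can be sketched independently of iRODS itself. The paper expresses these policies in the iRODS rule language; the Python below is only an assumed illustration of the mapping from record type to placement and backup policy, with invented type names, resource names, and replica counts.

```python
# Sketch of rule-driven distribution/backup: each record type maps to a
# storage policy, and ingest applies the matching policy automatically
# (in iRODS this would be a server-side rule firing on data-object put).

# Policy table: record type -> (target resource, number of backup replicas).
RULES = {
    "video":    ("archive_resc", 2),
    "document": ("doc_resc", 1),
}
DEFAULT = ("default_resc", 1)

def ingest(record_name, record_type, catalog):
    """Place a record and register its replicas according to the matching rule."""
    resource, backups = RULES.get(record_type, DEFAULT)
    catalog[record_name] = {
        "resource": resource,
        # Original copy plus the policy's number of backup replicas.
        "replicas": [f"{resource}/replica{i}" for i in range(backups + 1)],
    }
    return catalog[record_name]

catalog = {}
entry = ingest("minutes_2010.pdf", "document", catalog)
```

In iRODS proper, the catalog role here is played by iCAT, and the rule engine applies such policies without any client-side logic.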

Data Block based User Authentication for Outsourced Data (아웃소싱 데이터 보호를 위한 데이터 블록 기반의 상호 인증 프로토콜)

  • Hahn, Changhee;Kown, Hyunsoo;Kim, Daeyeong;Hur, Junbeom
    • Journal of KIISE / v.42 no.9 / pp.1175-1184 / 2015
  • Recently, the development of multimedia technologies has caused an explosive increase in the volume of available multimedia data. More and more data is becoming available on a variety of web sites, and it has become increasingly cost-prohibitive for a single data server to store and process multimedia files locally. Therefore, many service providers outsource data to cloud storage to reduce costs. This behavior raises one serious concern: how can data users be authenticated in a secure and efficient way? The most widely used password-based authentication methods suffer from numerous disadvantages in terms of security. Multi-factor authentication protocols based on additional communication channels, such as SMS, biometrics, or hardware tokens, may improve security but inevitably reduce usability. To this end, we present a data block-based authentication scheme that is secure and preserves usability: users do nothing more than enter a password. In addition, the proposed scheme can be used effectively to revoke user rights. To the best of our knowledge, our scheme is the first data block-based authentication scheme for outsourced data that is proven to be secure without degradation in usability. An experiment conducted on the Amazon EC2 cloud service shows that the proposed scheme guarantees nearly constant-time user authentication.
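The flavor of a data block-based challenge-response can be sketched as follows. This is not the paper's protocol: the key derivation, block sizes, and challenge format are assumptions, and the real scheme carries a security proof this toy omits.

```python
# Hedged sketch: the server challenges the client with random indices of the
# outsourced data blocks; the client answers with an HMAC over those blocks
# keyed by a password-derived key, so the user only ever enters a password.
import hashlib
import hmac

def derive_key(password, salt):
    """Password-derived MAC key (PBKDF2-HMAC-SHA256)."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def respond(key, blocks, challenge_indices):
    """Client side: MAC the challenged data blocks in order."""
    mac = hmac.new(key, digestmod=hashlib.sha256)
    for i in challenge_indices:
        mac.update(blocks[i])
    return mac.digest()

# Toy setup: 8 outsourced 4 KiB blocks; salt shared at registration.
salt = b"fixed-demo-salt"
blocks = [bytes([i]) * 4096 for i in range(8)]
key = derive_key("correct horse", salt)

challenge = [5, 1, 6]  # indices chosen randomly by the server per login
proof = respond(key, blocks, challenge)
ok = hmac.compare_digest(proof, respond(key, blocks, challenge))
bad = hmac.compare_digest(proof, respond(derive_key("wrong", salt), blocks, challenge))
```

Because the response depends on the challenged blocks, revoking a user's access to the data also invalidates their ability to answer future challenges.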

An Analysis of Subject Authorities Related to Korea in the National Diet Library of Japan (일본국립국회도서관의 한국 관련 주제명 전거데이터 분석)

  • Kim, Jeong-Hyen
    • Journal of Korean Library and Information Science Society / v.52 no.3 / pp.49-72 / 2021
  • Based on an analysis of the NDL authority system, this study examined the characteristics of subject authorities related to Korea in the NDL. The results are as follows. First, NDL subject authorities related to Korea total 3,143, comprising 2,205 headings and 938 subdivisions. Among them, the social sciences account for more than half (52.4%), and economics accounts for the most by individual discipline with 552 cases (17.6%). Second, most subject headings for historical events caused by or directly involving Japan are described from the Japanese perspective, and the terms familiar in Korea appear mainly as references, not as headings. Third, subject headings representing Korean characteristics or historical events are considerably lacking or absent. Fourth, when referring to the country, the term 'Joseon (朝鮮)' continues to be used for both South and North Korea; however, the history after 1948 should be subdivided into the eras of the 'Republic of Korea' and the 'Democratic People's Republic of Korea'. Using the term Joseon for North and South Korea may lead readers to perceive Korea as still being in the Joseon Dynasty. Furthermore, while 'Balhae (渤海)' is treated as Chinese history, it is part of Korean history and should be added to the Korean historical periods.

On correlation and causality in the analysis of big data (빅 데이터 분석에서 상관성과 인과성)

  • Kim, Joonsung
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.8 no.8 / pp.845-852 / 2018
  • Mayer-Schönberger and Cukier (2013) explain why big data is important for our lives, presenting many cases in which the analysis of big data has great significance and raising intriguing issues about such analysis. The two authors claim that correlation is in many ways practically far more efficient and versatile in the analysis of big data than causality. Moreover, they claim that causality could be abandoned, since analysis and prediction founded on correlation must prevail. I critically examine the two authors' accounts of causality and correlation. First, I criticize the claim that correlation is sufficient for the analysis of data and for prediction founded on that analysis; I point out their misunderstanding of the distinction between correlation and causality, and by analyzing Simpson's paradox I show that spurious correlation misleads our decisions. Second, I criticize both the claim that causality is less efficient in the analysis of big data than correlation and the claim that there is no mathematical theory of causality. I introduce mathematical theories of causality founded on structural equation theory and show that causality has great significance for the analysis of big data.
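The Simpson's-paradox point can be made numerically. The counts below are the classic kidney-stone-style toy numbers, not data from the paper: the treatment has a higher success rate within every subgroup, yet looks worse in the aggregate, so the aggregate correlation alone misleads.

```python
# Numeric illustration of Simpson's paradox: per-group comparison and the
# pooled comparison point in opposite directions.

groups = {
    # group: (treated_success, treated_total, control_success, control_total)
    "mild":   (81, 87, 234, 270),
    "severe": (192, 263, 55, 80),
}

def rate(successes, total):
    return successes / total

# Within each subgroup, the treatment has the higher success rate...
better_in_every_group = all(
    rate(ts, tn) > rate(cs, cn) for ts, tn, cs, cn in groups.values()
)

# ...but pooling the subgroups reverses the comparison.
TS = sum(v[0] for v in groups.values()); TN = sum(v[1] for v in groups.values())
CS = sum(v[2] for v in groups.values()); CN = sum(v[3] for v in groups.values())
reversed_in_aggregate = rate(TS, TN) < rate(CS, CN)
```

Here the severity of the case confounds the pooled comparison: the treatment was given mostly to severe cases, which drags its aggregate rate down. Only a causal reading that conditions on severity recovers the right decision.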