• Title/Summary/Keyword: 인터넷 정보 신뢰도 (Internet Information Reliability)


A contention-aware ordered sequential collaborative spectrum sensing scheme for CRAHN (무선인지 애드 혹 네트워크를 위한 순차적 협력 스펙트럼 센싱 기법)

  • Nguyen-Thanh, Nhan; Koo, In-Soo
    • Journal of Internet Computing and Services / v.12 no.4 / pp.35-43 / 2011
  • Cognitive Radio (CR) ad hoc networks are widely considered one of the promising future ad hoc networks, as they enable opportunistic access to under-utilized licensed spectrum. As in other CR networks, spectrum sensing is a prerequisite in a CR ad hoc network, and collaborative spectrum sensing can help increase sensing performance. For such an infrastructure-less network, however, coordinating the sensing collaboration is complicated because there is no central controller. In this paper, we propose a novel collaborative spectrum sensing scheme in which the final decision is made by the node with the highest data reliability, based on sequential Dempster-Shafer theory. The collection of sensing data is carried out by the proposed contention-aware reporting mechanism, which uses the order of sensing-data reliability to schedule the broadcasting of spectrum sensing results. The proposed method reduces the collection time and the control-channel overhead thanks to the efficiency of the ordered sequential combination, while keeping the same sensing performance as the conventional centralized cooperative spectrum sensing scheme.
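
As a rough illustration of the sequential Dempster-Shafer combination the abstract describes (not the authors' exact formulation), the sketch below combines basic probability assignments over the hypotheses H0 (channel idle) and H1 (channel occupied) in decreasing order of reliability and stops once the combined belief is decisive; the mass values and the decision threshold are hypothetical.

```python
# Minimal sketch of ordered sequential Dempster-Shafer combination for
# collaborative spectrum sensing. Focal elements: "H0" (idle), "H1"
# (occupied), "H0H1" (uncertainty). Masses and thresholds are illustrative.

def ds_combine(m1, m2):
    """Combine two basic probability assignments with Dempster's rule."""
    keys = ("H0", "H1", "H0H1")
    combined = dict.fromkeys(keys, 0.0)
    conflict = 0.0
    for a in keys:
        for b in keys:
            prod = m1[a] * m2[b]
            if a == "H0H1":
                inter = b                 # Theta intersected with b is b
            elif b == "H0H1" or a == b:
                inter = a
            else:
                inter = None              # H0 vs H1: conflicting evidence
            if inter is None:
                conflict += prod
            else:
                combined[inter] += prod
    norm = 1.0 - conflict
    return {k: v / norm for k, v in combined.items()}

def sequential_decision(reports, threshold=0.9):
    """Combine reports (sorted by reliability, most reliable first)
    until belief in either hypothesis exceeds the threshold."""
    acc = {"H0": 0.0, "H1": 0.0, "H0H1": 1.0}   # start from total ignorance
    for m in reports:
        acc = ds_combine(acc, m)
        if acc["H1"] >= threshold:
            return "occupied", acc
        if acc["H0"] >= threshold:
            return "idle", acc
    return ("occupied" if acc["H1"] > acc["H0"] else "idle"), acc

# Example: three nodes' sensing reports, ordered by reliability.
reports = [
    {"H0": 0.1, "H1": 0.7, "H0H1": 0.2},
    {"H0": 0.2, "H1": 0.6, "H0H1": 0.2},
    {"H0": 0.3, "H1": 0.5, "H0H1": 0.2},
]
print(sequential_decision(reports))
```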

Building Transparency on the Total System Performance Assessment of Radioactive Repository through the Development of the Cyber R&D Platform; Application for Development of Scenario and Input of TSPA Data through QA Procedures (Cyber R&D Platform개발을 통한 방사성폐기물 처분종합성능평가(TSPA) 투명성 증진에 관한 연구; 시나리오 도출 과정과 TSPA 데이터 입력에서의 품질보증 적용 사례)

  • Seo, Eun-Jin; Hwang, Yong-Soo; Kang, Chul-Hyung
    • Journal of Nuclear Fuel Cycle and Waste Technology (JNFCWT) / v.4 no.1 / pp.65-75 / 2006
  • Transparency of the Total System Performance Assessment (TSPA) is the key issue in enhancing public acceptance of a radioactive repository. To achieve it, all TSPA activities must pass through Quality Assurance. The integrated Cyber R&D Platform was developed by KAERI using the T2R3 principles, applicable to five major R&D steps: planning, research work, documentation, and internal and external audits. The proposed system is implemented as a web-based system so that all participants in TSPA can access it. It is composed of three sub-systems: FEAS (FEp to Assessment through Scenario development), which presents the systematic flow from FEPs to assessment methods; PAID (Performance Assessment Input Databases), designed for easily searching and reviewing field data for TSPA; and the QA system, which contains the administrative system for QA on the five key R&D steps, in addition to approval and disapproval processes, corrective actions, and permanent record keeping. All information recorded in the QA system through the T2R3 principles is integrated into the Cyber R&D Platform so that every piece of data in the system can be checked whenever necessary. In the next R&D phase, the Cyber R&D Platform will be connected with the TSPA assessment tool so that all information can be searched in one unified system.

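
The QA workflow sketched in the abstract (approval and disapproval, corrective actions, permanent record keeping across the R&D steps) could be modeled roughly as follows; the step names, states, and class layout are assumptions for illustration and are not taken from the actual KAERI system.

```python
# Illustrative model of a QA record moving through R&D steps with
# approval/rejection, corrective actions, and a permanent history log.
from dataclasses import dataclass, field
from datetime import datetime

STEPS = ["planning", "research work", "documentation",
         "internal audit", "external audit"]

@dataclass
class QARecord:
    title: str
    step: str
    status: str = "submitted"                    # submitted -> approved / rejected
    history: list = field(default_factory=list)  # permanent record keeping

    def _log(self, action, note=""):
        self.history.append((datetime.utcnow().isoformat(), self.step, action, note))

    def approve(self, auditor):
        self.status = "approved"
        self._log("approved by " + auditor)

    def reject(self, auditor, corrective_action):
        self.status = "rejected"
        self._log("rejected by " + auditor, "corrective action: " + corrective_action)

record = QARecord(title="Scenario development input", step=STEPS[1])
record.reject("internal auditor", "re-verify FEP screening data")
record.approve("internal auditor")
print(record.status, len(record.history))
```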

A Study on the Artificial Intelligence Ethics Measurement indicators for the Protection of Personal Rights and Property Based on the Principles of Artificial Intelligence Ethics (인공지능 윤리원칙 기반의 인격권 및 재산보호를 위한 인공지능 윤리 측정지표에 관한 연구)

  • So, Soonju; Ahn, Seongjin
    • Journal of Internet Computing and Services / v.23 no.3 / pp.111-123 / 2022
  • Artificial intelligence, which is developing as the core of the intelligent information society, brings convenience and positive changes to human life. However, as artificial intelligence develops, human rights and property are threatened and ethical problems increase, so countermeasures are needed. This study, focusing on the most controversial ethical problems among the dysfunctions of artificial intelligence, aimed to research and develop artificial intelligence ethics measurement indicators for protecting personal rights and property, grounded in artificial intelligence ethical principles and their components. To develop the indicators, a review of the related literature, focus group interviews (FGI), and Delphi surveys were conducted, from which 43 ethics measurement indicator items were derived. Through a survey and statistical analysis consisting of descriptive statistics, reliability analysis, and correlation analysis of the ethics measurement indicators, 40 artificial intelligence ethics measurement indicator items were confirmed and proposed. The proposed indicators can be used for artificial intelligence design, development, education, authentication, operation, and standardization, and can contribute to the development of safe and reliable artificial intelligence.
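
The reliability and correlation analyses mentioned above are standard instrument-validation steps; as a generic illustration (not the authors' dataset or code), the following sketch computes Cronbach's alpha and an item correlation matrix for made-up Likert-scale responses.

```python
# Generic reliability (Cronbach's alpha) and correlation analysis for
# survey indicator items; the response matrix below is fabricated.
import numpy as np

def cronbach_alpha(scores):
    """scores: respondents x items matrix of Likert ratings."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical responses: 6 respondents rating 4 indicator items (1-5 scale).
responses = np.array([
    [4, 5, 4, 4],
    [3, 4, 3, 3],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
    [3, 3, 3, 2],
])

print("Cronbach's alpha:", round(cronbach_alpha(responses), 3))
print("Item correlation matrix:\n",
      np.round(np.corrcoef(responses, rowvar=False), 2))
```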

Efficient Privacy-Preserving Duplicate Elimination in Edge Computing Environment Based on Trusted Execution Environment (신뢰실행환경기반 엣지컴퓨팅 환경에서의 암호문에 대한 효율적 프라이버시 보존 데이터 중복제거)

  • Koo, Dongyoung
    • KIPS Transactions on Computer and Communication Systems / v.11 no.9 / pp.305-316 / 2022
  • With the flood of digital data caused by the Internet of Things and big data, cloud service providers that process and store vast amounts of data from multiple users can apply duplicate data elimination for efficient data management. The user experience improves as the edge computing paradigm is introduced as an extension of cloud computing, mitigating problems such as network congestion toward a central cloud server and reduced computational efficiency. However, adding a new edge device that is not fully trusted may increase computational complexity, since additional cryptographic operations are required to preserve data privacy during duplicate identification and elimination. In this paper, we propose a duplicate data elimination protocol with improved efficiency that preserves data privacy, using an optimized user-edge-cloud communication framework built on a trusted execution environment. Sharing secret information directly between the user and the central cloud server minimizes the computational complexity on edge devices and enables the use of efficient encryption algorithms on the cloud service provider's side. Users also benefit from offloading data to edge devices, which enables duplicate elimination while leaving them free to act independently. Experiments show the efficiency of the proposed scheme, with up to 78x improvement in computation during the data outsourcing process compared to a previous study that does not exploit a trusted execution environment in the edge computing architecture.
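
To make the deduplication flow concrete, here is a heavily simplified sketch of client-side tag generation and cloud-side duplicate checking in the spirit of convergent encryption; the trusted-execution and edge-offloading details of the actual protocol are omitted, and all names and the toy cipher are assumptions for illustration only.

```python
# Simplified privacy-preserving deduplication sketch: the tag is derived
# deterministically from the plaintext, so identical files map to the same
# tag without sending their contents; the real protocol additionally relies
# on a trusted execution environment on the edge, which is not modeled here.
import hashlib

def dedup_tag(plaintext: bytes) -> str:
    """Deterministic tag so equal plaintexts collide (convergent-style)."""
    return hashlib.sha256(b"tag|" + plaintext).hexdigest()

def encrypt_stub(plaintext: bytes, key: bytes) -> bytes:
    """Placeholder for a real cipher (e.g., AES-GCM); XOR keystream, demo only."""
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(plaintext))

class CloudStore:
    def __init__(self):
        self.blobs = {}                      # tag -> ciphertext
    def upload(self, tag, make_ciphertext):
        if tag in self.blobs:                # duplicate: skip transfer/storage
            return "deduplicated"
        self.blobs[tag] = make_ciphertext()
        return "stored"

cloud = CloudStore()
data = b"sensor reading batch #1"
key = hashlib.sha256(b"key|" + data).digest()      # convergent key from content
print(cloud.upload(dedup_tag(data), lambda: encrypt_stub(data, key)))
print(cloud.upload(dedup_tag(data), lambda: encrypt_stub(data, key)))  # deduplicated
```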

An Apache-based WebDAV Server Supporting Reliable Resource Management (아파치 기반의 신뢰성 있는 자원관리를 지원하는 웹데브 서버)

  • Jung, Hye-Young; Ahn, Geon-Tae; Park, Yang-Soo; Lee, Myung-Joon
    • The KIPS Transactions: Part C / v.11C no.4 / pp.545-554 / 2004
  • WebDAV is a protocol that supports collaboration among workers in geographically distant locations over the Internet. It extends the web communication protocol HTTP/1.1 to provide a standard infrastructure for asynchronous collaboration on various contents across the Internet. To provide WebDAV functionality in legacy applications such as web-based collaborative systems or document management systems, those systems must be extended to handle the WebDAV methods and header information. In this paper, we developed an Apache-based WebDAV server named DAVinci (WebDAV Is New Collaborative web-authoring Innovation), which supports the WebDAV specification. DAVinci is implemented as a service provider on top of mod_dav, an open-source Apache module that provides WebDAV capabilities in the Apache web server. We used a file system for storing resources and the PostgreSQL database for their properties. In addition, the system provides a consistency manager to guarantee that resources and their properties are maintained without inconsistency.
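
The consistency manager's job, keeping the file-system resources and the database-held properties in step, can be pictured with the following sketch; the property layout and directory structure are assumptions, not DAVinci's actual schema, and a plain dict stands in for rows a PostgreSQL query would return.

```python
# Sketch of a consistency check between resources stored on the file system
# and their properties stored in a database; property rows are represented
# by a plain dict here instead of actual PostgreSQL queries.
import os

def check_consistency(resource_root, property_rows):
    """property_rows: dict mapping relative resource path -> property dict."""
    on_disk = set()
    for dirpath, _dirs, files in os.walk(resource_root):
        for name in files:
            full = os.path.join(dirpath, name)
            on_disk.add(os.path.relpath(full, resource_root))

    orphaned_properties = set(property_rows) - on_disk   # row with no file
    untracked_resources = on_disk - set(property_rows)   # file with no row
    return orphaned_properties, untracked_resources

# Hypothetical usage with properties a query might have returned.
properties = {"reports/plan.doc": {"author": "hyeyoung"},
              "missing.txt": {"author": "unknown"}}
orphans, untracked = check_consistency("/var/dav/resources", properties)
print("properties without a resource:", orphans)
print("resources without properties:", untracked)
```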

A Study on Authentication of Mobile Agency AP Connection Using Trusted Third Party in Smart Phone Environment (스마트폰 환경에서 신뢰기관을 이용한 이동 통신사 AP 접속 인증에 관한 연구)

  • Lee, Gi-Sung; Min, Dae-Gi; Jun, Moon-Seog
    • Journal of the Korea Academia-Industrial cooperation Society / v.13 no.11 / pp.5496-5505 / 2012
  • As the IT industry develops, smart-phone technologies and functions, which are being actively studied, greatly influence the entire living environment. Accordingly, interest in the wireless LAN, which can be used to access the Internet anytime and anywhere, is gradually increasing. However, because of the characteristics of wireless radio waves, a malicious attacker can easily hack the connection or reach its contents, so personal information that demands a high level of data security is easily exposed to spoofing, denial-of-service, and man-in-the-middle attacks, and the demand for security is growing. In this paper, a safe wireless network service environment is provided by remedying the vulnerabilities to spoofing, session hijacking, and man-in-the-middle attacks: when the client uses the wireless Internet through the Mobile Agency AP in a smart-phone environment, the client authentication process, the AP authentication process, and the Mobile Agency authentication process are executed using the client information in the USIM, the AP information, and the Mobile Agency information.
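
As a loose illustration of the kind of USIM-based authentication via a trusted authority that the paper describes (not the authors' exact protocol), the sketch below has a trusted third party share a key with the client's USIM and verify an HMAC over a fresh challenge before the AP grants access; identifiers and message layout are invented.

```python
# Simplified challenge-response sketch: the trusted third party (TTP) holds
# the key registered for the client's USIM and verifies the client's response
# before telling the AP to grant access. A random nonce resists replay.
import hashlib, hmac, os, secrets

class TrustedThirdParty:
    def __init__(self):
        self.usim_keys = {}                       # USIM id -> shared secret
    def register(self, usim_id, key):
        self.usim_keys[usim_id] = key
    def issue_challenge(self):
        return secrets.token_bytes(16)
    def verify(self, usim_id, challenge, response):
        key = self.usim_keys.get(usim_id)
        if key is None:
            return False
        expected = hmac.new(key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

def usim_respond(key, challenge):
    """Computed inside the USIM in the real setting."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

ttp = TrustedThirdParty()
k = os.urandom(32)
ttp.register("usim-001", k)

challenge = ttp.issue_challenge()            # relayed to the client via the AP
response = usim_respond(k, challenge)
print("AP grants access:", ttp.verify("usim-001", challenge, response))
```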

Analyzing Service Quality Factors for Affecting Government 2.0 Users' Satisfaction (이용자 만족에 영향을 미치는 Government 2.0 서비스 품질 요인에 관한 연구)

  • Song, Ju-Ho; Park, Soo-Kyung; Lee, Bong-Gyou
    • Journal of Internet Computing and Services / v.12 no.2 / pp.149-161 / 2011
  • The purpose of this study is to present directions for improving the Korean Government 2.0 services by evaluating service quality and analyzing its effects on user satisfaction. Recently, as an extension of e-government, Government 2.0, a government service combined with Web 2.0, has emerged as a new paradigm. However, there are very few studies on the impact of Government 2.0 on society and industries in general; in particular, there is little practical analysis and evaluation of the quality of Government 2.0 services. Because service quality is typically used as a leading indicator of user satisfaction, this study applies it to Government 2.0 to validate the existing theory in this particular domain. Service quality was measured by the tangibles, reliability, and responsiveness dimensions of the revised SERVQUAL, and by the efficiency and security dimensions of E-S-QUAL. In conclusion, this study has empirically significant implications, providing a theoretical foundation for measuring the quality of Government 2.0 services.
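
The relationship implied above, from the SERVQUAL/E-S-QUAL quality dimensions to user satisfaction, is usually examined with multiple regression; the sketch below fits an ordinary least-squares model on fabricated ratings purely to show the mechanics, and is not the authors' analysis.

```python
# Illustrative OLS regression of user satisfaction on five service quality
# dimensions (tangibles, reliability, responsiveness, efficiency, security).
# The ratings below are made up for demonstration only.
import numpy as np

# Rows: respondents; columns: the five quality dimensions (1-7 scale).
X = np.array([
    [5, 6, 5, 6, 5],
    [4, 4, 5, 4, 4],
    [6, 6, 6, 7, 6],
    [3, 4, 3, 3, 4],
    [5, 5, 4, 5, 5],
    [6, 7, 6, 6, 7],
])
y = np.array([6, 4, 7, 3, 5, 7])                    # reported satisfaction

X1 = np.column_stack([np.ones(len(X)), X])          # add intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)

names = ["intercept", "tangibles", "reliability",
         "responsiveness", "efficiency", "security"]
for name, b in zip(names, coef):
    print(f"{name:>14}: {b:+.3f}")
```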

Development of the Fundamental Technology for Ubiquitous Road Disaster Management System (유비쿼터스 도로재해관리시스템을 위한 기반기술 개발)

  • Choi, Young-Taek; Cho, Gi-Sung
    • Journal of Korean Society for Geospatial Information Science / v.14 no.3 s.37 / pp.39-46 / 2006
  • This study aims at developing a ubiquitous road disaster management system. The fundamental technologies used for developing this system are classified into three modules: a wireless Internet communication module, a mobile module, and a server module. These fundamental technologies can be used not only for the road disaster management system but also for various other mobile or ubiquitous systems. With this system, field workers can download databases (digital maps, attribute information, etc.) from the server in real time. The accuracy and objectivity of the databases can be improved with the information collected in the field, because these data serve as basic data for road disaster information collection. In the web-based server module, the Web-based Road Disaster Management System (URDMS), field disaster information is linked to the existing road database by absolute coordinates, so that decisions based on all field information can be made and sent to field staff in real time. The problems of the current road disaster management rules can be solved by this URDMS.

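
One concrete piece of the workflow above, linking a field disaster report to the nearest record in the existing road database by absolute coordinates, could look roughly like the following; the record layout, example coordinates, and the simple flat-earth distance are assumptions for illustration.

```python
# Sketch: attach a field disaster report to the nearest road-DB record using
# absolute coordinates. Uses an equirectangular approximation, adequate for
# short distances; the road records below are invented examples.
import math

ROAD_DB = [
    {"road_id": "R-101", "lat": 35.8242, "lon": 127.1480},
    {"road_id": "R-102", "lat": 35.8311, "lon": 127.1552},
    {"road_id": "R-103", "lat": 35.8199, "lon": 127.1398},
]

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance in metres for nearby points."""
    mean_lat = math.radians((lat1 + lat2) / 2)
    dx = math.radians(lon2 - lon1) * math.cos(mean_lat) * 6_371_000
    dy = math.radians(lat2 - lat1) * 6_371_000
    return math.hypot(dx, dy)

def link_report(report):
    """Return the nearest road record for a field report carrying lat/lon."""
    return min(ROAD_DB, key=lambda r: distance_m(report["lat"], report["lon"],
                                                 r["lat"], r["lon"]))

field_report = {"type": "slope collapse", "lat": 35.8247, "lon": 127.1471}
print("linked to:", link_report(field_report)["road_id"])
```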

A Study on Improvement of Pedestrian Care System for Cooperative Automated Driving (자율협력주행을 위한 보행자Care 시스템 개선에 관한 연구)

  • Lee, Sangsoo; Kim, Jonghwan; Lee, Sunghwa; Kim, Jintae
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.2 / pp.111-116 / 2021
  • This study addresses the improvement of the pedestrian Care system, which delivers jaywalking events in real time to the autonomous driving control center and to autonomous vehicles in operation, and which issues warnings and announcements to pedestrians based on pedestrian signals. To secure the reliability of the system's object detection, an inspection method that combines the camera sensor with the Lidar sensor and an improved system algorithm are presented. In addition, for the events generated by Lidar sensors and intelligent CCTV and received during the operation of autonomous vehicles, a system algorithm is presented that eliminates overlapping events and improves the accuracy of identifying the same time, place, and object.
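
The overlapping-event problem described above, where the Lidar sensor and the intelligent CCTV both report the same pedestrian, suggests a merge rule keyed on time, place, and object class; the sketch below uses hypothetical thresholds and event fields, not the deployed values.

```python
# Sketch: drop a newly received pedestrian event if an event of the same
# object class was already accepted within given time and distance windows.
import math

TIME_WINDOW_S = 2.0      # events closer in time than this are candidates
DIST_WINDOW_M = 3.0      # ... and closer in space than this are duplicates

def is_duplicate(new_event, accepted_events):
    for ev in accepted_events:
        same_class = ev["obj_class"] == new_event["obj_class"]
        close_time = abs(ev["t"] - new_event["t"]) <= TIME_WINDOW_S
        close_dist = math.hypot(ev["x"] - new_event["x"],
                                ev["y"] - new_event["y"]) <= DIST_WINDOW_M
        if same_class and close_time and close_dist:
            return True
    return False

accepted = []
incoming = [
    {"src": "lidar", "obj_class": "pedestrian", "t": 10.0, "x": 1.0, "y": 2.0},
    {"src": "cctv",  "obj_class": "pedestrian", "t": 10.8, "x": 1.5, "y": 2.4},
    {"src": "cctv",  "obj_class": "pedestrian", "t": 25.0, "x": 9.0, "y": 1.0},
]
for ev in incoming:
    if not is_duplicate(ev, accepted):
        accepted.append(ev)
print(len(accepted), "unique events forwarded to the control center")
```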

A New Web Cluster Scheme for Load Balancing among Internet Servers (인터넷 환경에서 서버간 부하 분산을 위한 새로운 웹 클러스터 기법)

  • Kim, Seung-Young; Lee, Seung-Ho
    • The KIPS Transactions: Part C / v.9C no.1 / pp.115-122 / 2002
  • This paper presents a new web cluster scheme based on a dispatcher that does not depend on the servers' operating systems and can examine server status interactively. Two principal functions are proposed for the new web cluster technique: self-controlled load distribution and transaction fail-safe. The self-controlled load distribution function periodically checks the response time and status of the servers and then decides where to send traffic so as to guarantee a rapid response for every query. The transaction fail-safe function can immediately recover queries lost to server errors, including broken transactions. The proposed web cluster scheme is implemented in C on the Unix operating system and compared with legacy web cluster products. Compared with a broadcast-based web cluster, the proposed scheme delivers higher performance as traffic increases. Compared with a round-robin DNS-based web cluster, it shows similar performance in ordinary traffic processing, but when one server crashes, the proposed web cluster processes traffic more reliably, without lost queries. The proposed web cluster scheme can therefore serve as an alternative for building more reliable and better-utilized services in the face of rapidly increasing traffic and the resulting server load.
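
A bare-bones version of the two dispatcher functions described above, picking the server with the best recent response time and re-dispatching a query when a server fails, might look like this; the server list, health probe, and retry policy are all assumptions rather than the paper's implementation.

```python
# Minimal dispatcher sketch: self-controlled load distribution picks the
# healthy server with the lowest measured response time, and the transaction
# fail-safe retries a failed query on the next-best server. The probe and
# handler functions are placeholders, not a real cluster implementation.
import random

SERVERS = ["srv-a", "srv-b", "srv-c"]

def probe(server):
    """Placeholder health probe: response time in ms, or None if down."""
    if random.random() < 0.1:                 # pretend ~10% of probes fail
        return None
    return random.uniform(5, 50)

def rank_servers():
    """Periodic check: order servers by response time, dropping dead ones."""
    timings = {s: probe(s) for s in SERVERS}
    alive = {s: t for s, t in timings.items() if t is not None}
    return sorted(alive, key=alive.get)

def dispatch(query, handler):
    """Send the query to the best server; fail over to the next on error."""
    for server in rank_servers():
        try:
            return handler(server, query)
        except RuntimeError:                  # server error: try the next one
            continue
    raise RuntimeError("no server could process the query")

def fake_handler(server, query):
    if random.random() < 0.2:
        raise RuntimeError(server + " crashed")
    return f"{query!r} served by {server}"

print(dispatch("GET /index.html", fake_handler))
```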