• Title/Summary/Keyword: intra-node communication (노드 내 통신)

The DSRR Organizing Algorithm for Efficient Mobility Management in the SIP (SIP에서의 효율적인 이동성 관리를 위한 방향성 사전등록영역 구성 알고리즘)

  • 서혜숙;한상범;이근호;황종선
    • Journal of KIISE:Information Networking / v.31 no.5 / pp.490-500 / 2004
  • In mobile/wireless environments, mobility management is one of the most actively studied topics. However, disruption occurs when messages are exchanged between nodes because registration takes place only after handoff, and unnecessary traffic arises from the Random-walk model, in which the mobile node (MN) is assumed equally likely to move to any neighboring cell. To solve these problems, this study proposes a technique and algorithm for composing a Directional Shadow Registration Region (DSRR) that provides seamless mobility. The core of DSRR is to prevent disruption and unnecessary traffic by minimizing the number of neighboring cells with a high probability of handoff (AAAF). The study senses the optimal time for handoff through regional cell division by introducing a division scheme, and then determines DSRR, the region for shadow registration, by applying a direction vector (DV) obtained through directional cell sectoring. According to the experimental results, the proposed DSRR processes message exchange between nodes within the intra-domain, so the frequency of disruption decreased significantly compared to previous work that processes it in an inter-domain environment. In addition, the traffic that occurs at every handoff amounted to two messages in DSRR, compared to n (the number of neighboring cells) messages in previous work. As an additional effect, the divided regions obtained while composing DSRR filter out MNs that move without triggering handoff.
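As a rough illustration of the idea, the sketch below selects a shadow-registration region from a direction vector. The 8-neighbour grid geometry, the region size k=2, and all names are assumptions for illustration; the paper's actual sectoring and division scheme are not given in the abstract.

```python
import math

# Hypothetical 8-neighbour grid of cells; the paper uses directional cell
# sectoring, so the exact geometry here is an assumption for illustration.
NEIGHBOR_OFFSETS = [(-1, -1), (-1, 0), (-1, 1),
                    (0, -1),           (0, 1),
                    (1, -1),  (1, 0),  (1, 1)]

def direction_vector(prev_cell, curr_cell):
    """Unit direction vector (DV) from the MN's previous to current cell."""
    dx, dy = curr_cell[0] - prev_cell[0], curr_cell[1] - prev_cell[1]
    norm = math.hypot(dx, dy) or 1.0
    return dx / norm, dy / norm

def select_dsrr(curr_cell, dv, k=2):
    """Pick the k neighbouring cells best aligned with the DV.

    Shadow-registering in only these cells, instead of all 8 neighbours
    as a Random-walk model would require, is what cuts the per-handoff
    registration traffic from n messages down to roughly k."""
    scored = []
    for ox, oy in NEIGHBOR_OFFSETS:
        cosine = (ox * dv[0] + oy * dv[1]) / math.hypot(ox, oy)
        scored.append((cosine, (curr_cell[0] + ox, curr_cell[1] + oy)))
    scored.sort(reverse=True)
    return [cell for _, cell in scored[:k]]

dv = direction_vector((0, 0), (1, 0))   # MN moving east
print(select_dsrr((1, 0), dv))          # [(2, 0), (2, 1)]: eastward cell plus flank
```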

An Enhanced WLAN MAC Protocol for Directional Broadcast (지향성 브로드캐스트를 위한 무선 LAN MAC 프로토콜)

  • Cha, Woo-Suk;Cho, Gi-Hwan
    • Journal of KIISE:Information Networking / v.33 no.1 / pp.16-27 / 2006
  • The wireless transmission medium inherently broadcasts a signal to all neighboring nodes within transmission range. Existing asynchronous MAC protocols do not provide a concrete solution for reliable broadcast at the link layer, mainly because omni-directional broadcasting degrades network performance through explosive collisions and contention. This paper proposes a reliable link-layer broadcast protocol based on directional antennas, named MDB (MAC protocol for Directional Broadcasting). The protocol uses DAST (Directional Antennas Statement Table) information and the D-MACA (Directional Multiple Access and Collision Avoidance) scheme, a 4-way handshake, to resolve the heavy collision problem of omni-directional antennas. To analyze its performance, the MDB protocol is compared with the IEEE 802.11 DCF protocol [9] and protocol 2 of reference [3] in terms of broadcast success rate and collision rate. Simulation results confirmed that the collision rate of MDB is lower than those of IEEE 802.11 and protocol 2 of reference [3], and that its broadcast completion rate is higher than both.
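A minimal sketch of the kind of per-sector state a DAST could hold, and of how a directional broadcast might defer only on busy sectors rather than the whole medium. The table fields, the 4-sector antenna, and the handshake placeholder are assumptions; the abstract does not specify MDB's actual message formats.

```python
from dataclasses import dataclass, field

@dataclass
class DAST:
    """Toy Directional Antennas Statement Table: tracks, per antenna
    sector, how long the sector must stay quiet (e.g. after overhearing
    an RTS/CTS). The real table's fields are not given in the abstract."""
    sectors: int = 4
    busy: dict = field(default_factory=dict)   # sector -> busy-until time

    def idle_sectors(self, now):
        """Sectors currently safe for a directional handshake."""
        return [s for s in range(self.sectors)
                if self.busy.get(s, 0.0) <= now]

    def mark_busy(self, sector, until):
        """Defer on this sector only, not the whole medium."""
        self.busy[sector] = max(self.busy.get(sector, 0.0), until)

def directional_broadcast(dast, now, duration):
    """D-MACA-style broadcast: handshake sector by sector, so a
    collision on one beam does not block the others."""
    sent = []
    for sector in dast.idle_sectors(now):
        # The 4-way handshake (RTS -> CTS -> DATA -> ACK) would run
        # here, restricted to this sector's beam.
        dast.mark_busy(sector, now + duration)
        sent.append(sector)
    return sent

table = DAST()
table.mark_busy(2, until=5.0)    # sector 2 overheard a CTS
print(directional_broadcast(table, now=1.0, duration=3.0))   # [0, 1, 3]
```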

A Block-based Uniformly Distributed Random Node Arrangement Method Enabling to Wirelessly Link Neighbor Nodes within the Communication Range in Free 3-Dimensional Network Spaces (장애물이 없는 3차원 네트워크 공간에서 통신 범위 내에 무선 링크가 가능한 블록 기반의 균등 분포 무작위 노드 배치 방법)

  • Lim, DongHyun;Kim, Changhwa
    • Journal of Korea Multimedia Society / v.25 no.10 / pp.1404-1415 / 2022
  • The 2-dimensional node arrangement method has been used in most RF (Radio Frequency) based communication network simulations. However, this method is not useful for obstacle-free 3-dimensional network spaces in which signal propagation is very slow and performance factors such as communication speed and error rate change with the depth of a node; an underwater communication network is a typical example. The 2-dimensional method is also unsuitable for RF-based networks such as some WSNs (Wireless Sensor Networks), IBSs (Intelligent Building Systems), or smart homes, in which the distance between nodes is short or some nodes are stacked at different heights over similar planar locations. In such cases, 2-dimensional simulation results are highly inaccurate and unreliable, leading users to erroneous predictions and judgments. For these reasons, this paper proposes a method to place communication nodes uniformly and randomly in a 3-dimensional network space while guaranteeing that each node can establish a wireless link with a neighboring node. In this method, blocks are generated based on the communication range of a node to partition the 3-dimensional network space, and one node is generated and placed within each block. The paper also introduces an algorithm based on this method and reports the average time to generate and place a node, and the placement and scatter-plot visualization time of all nodes as a function of their number, together with a comparison to previous studies. The evaluation showed that the processing time of the algorithm is proportional to the number of nodes to be created, with an average generation time per node between 0.238 and 0.28 us. Consequently, even a simulation network with a very large number of nodes can be created without difficulty, so the method can readily be adopted for simulations.
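A minimal sketch of the block-based placement under stated assumptions: cubic blocks whose side is derived from the communication range (the sqrt(6) factor below guarantees a link between face-adjacent blocks in this toy geometry; the paper's exact constant is not given), with one node drawn uniformly per block.

```python
import random

def place_nodes(dim_x, dim_y, dim_z, comm_range):
    """Block-based uniform random placement in an obstacle-free 3-D space.

    The block side is chosen so that the worst-case distance between a
    point in one block and a point in a face-adjacent block, which is
    side * sqrt(6) for cubes, does not exceed the communication range.
    One node is generated per block, uniformly within the block volume,
    so every node has at least one wirelessly reachable neighbor."""
    side = comm_range / 6 ** 0.5
    nx = max(1, int(dim_x // side))
    ny = max(1, int(dim_y // side))
    nz = max(1, int(dim_z // side))
    nodes = []
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                nodes.append((random.uniform(i * side, (i + 1) * side),
                              random.uniform(j * side, (j + 1) * side),
                              random.uniform(k * side, (k + 1) * side)))
    return nodes

# 100 m x 100 m x 50 m volume with a 30 m range (assumed example values)
print(len(place_nodes(100.0, 100.0, 50.0, 30.0)))   # 8 * 8 * 4 = 256 nodes
```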

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.191-207 / 2021
  • Mobile communications have evolved rapidly over the decades, from 2G to 5G, mainly focusing on higher speeds to meet growing data demands. With the start of the 5G era, efforts are being made to provide services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our lives and industries as a whole. To deliver those services, reduced latency and high reliability are critical for real-time applications, on top of high data rates. 5G therefore targets a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10^6 devices/km². In particular, in intelligent traffic control systems and Vehicle-to-X (V2X) based services, the reduction of delay and the reliability of real-time services matter as much as raw data speed. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting indoor use; under existing networks it is difficult to overcome these constraints. The underlying centralized SDN also has limited capability for delay-sensitive services, because communication with many nodes overloads its processing. SDN, an architecture that separates the control plane from the data plane, must control the delay-related tree structure available in an emergency during autonomous driving. In such scenarios, the network architecture that handles in-vehicle information is a major determinant of delay. Since centralized SDN structures struggle to meet the desired delay level, the optimal size of an SDN for information processing should be studied. SDNs therefore need to be split at a certain scale into a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of such networks is closely tied to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a split SDN rather than the existing centralized structure, even under worst-case conditions. In these networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round-trip delay (RTD), and the data processing time of the SDN are all correlated with the overall delay. Of these, RTD is not a significant factor, since it stays below 1 ms, but the information change cycle and the SDN's data processing time greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; delay plays a very sensitive role there. In this paper, we study the SDN architecture for emergencies during autonomous driving and analyze, through simulation, the correlation with the cell layer from which the vehicle should request relevant information according to the information flow.
For the simulation, since the data rate of 5G is high enough, we assume that information supporting neighboring vehicles reaches the car without errors. We further assumed 5G small cells with radii of 50 to 250 m and vehicle speeds of 30 to 200 km/h in order to examine the network architecture that minimizes delay.
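To make the delay reasoning concrete, the sketch below computes how long a vehicle stays inside a small cell over the stated parameter ranges, plus a simple additive delay budget. The diameter-crossing assumption and the example update-cycle and SDN-processing values are ours, not the paper's.

```python
def dwell_time_s(cell_radius_m, speed_kmh):
    """Time a vehicle spends in a small cell, assuming it crosses along
    a diameter (a simplifying assumption, not the paper's model)."""
    return 2 * cell_radius_m / (speed_kmh / 3.6)

def total_delay_ms(update_cycle_ms, sdn_proc_ms, rtd_ms=1.0):
    """Additive per-update latency; the abstract treats RTD (under 1 ms
    on 5G) as minor next to the update cycle and SDN processing time."""
    return update_cycle_ms + sdn_proc_ms + rtd_ms

for radius_m in (50, 250):          # stated small-cell radius range
    for speed_kmh in (30, 200):     # stated vehicle speed range
        t = dwell_time_s(radius_m, speed_kmh)
        print(f"radius {radius_m} m at {speed_kmh} km/h -> dwell {t:.2f} s")

# Example budget (values are ours): 10 ms update cycle + 5 ms SDN
# processing + 1 ms RTD = 16 ms, well inside even the shortest dwell
# time (~1.8 s for a 50 m cell at 200 km/h).
print(total_delay_ms(update_cycle_ms=10.0, sdn_proc_ms=5.0))
```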

Performance Optimization of Numerical Ocean Modeling on Cloud Systems (클라우드 시스템에서 해양수치모델 성능 최적화)

  • JUNG, KWANGWOOG;CHO, YANG-KI;TAK, YONG-JIN
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY / v.27 no.3 / pp.127-143 / 2022
  • Recently, many attempts have been made to run numerical ocean models in cloud computing environments. A cloud computing environment can be an effective means of implementing numerical ocean models that require large-scale resources, or of quickly preparing a modeling environment for global or large-scale grids. Many commercial and private cloud computing systems provide technologies such as virtualization, high-performance CPUs and instances, Ethernet-based high-performance networking, and remote direct memory access for High Performance Computing (HPC). These features facilitate ocean modeling experiments on commercial cloud systems, and many scientists and engineers expect cloud computing to become mainstream in the near future. Analyzing the performance and features of commercial cloud services for numerical modeling is essential for selecting appropriate systems, as it helps to minimize execution time and the amount of resources used. The effect of cache memory is large in the processing structure of an ocean numerical model, which reads and writes data in multidimensional array structures, and network speed is important because of the communication pattern in which large amounts of data move. In this study, the performance of the Regional Ocean Modeling System (ROMS), the High Performance Linpack (HPL) benchmarking package, and the STREAM memory benchmark were evaluated and compared on commercial cloud systems to provide information for migrating other ocean models to the cloud. Through analysis of actual performance data and configuration settings obtained from virtualization-based commercial clouds, we evaluated the efficiency of the computing resources for various model grid sizes. We found that cache hierarchy and capacity are crucial to the performance of ROMS with its large memory footprint, and that memory latency is also important. Increasing the number of cores to reduce running time is more effective with large grid sizes than with small ones. Our results will serve as a reference for constructing the best cloud computing system to minimize the time and cost of numerical ocean modeling.
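For readers who want to probe a cloud instance the same way, a minimal STREAM-style triad in Python/NumPy is sketched below. The array size and the bandwidth accounting are simplifying assumptions; the actual STREAM benchmark is a compiled C/Fortran code.

```python
import time
import numpy as np

def stream_triad(n=20_000_000, scalar=3.0):
    """A STREAM-style triad, a = b + scalar * c: the memory-bandwidth
    pattern that makes cache hierarchy and memory latency dominate
    array-heavy models such as ROMS. The arrays must be far larger than
    the last-level cache so the timing reflects DRAM, not cache, speed."""
    b = np.random.rand(n)
    c = np.random.rand(n)
    start = time.perf_counter()
    a = b + scalar * c
    elapsed = time.perf_counter() - start
    # Nominal traffic: read b, read c, write a (8-byte doubles). NumPy's
    # temporary for scalar * c adds extra traffic, so this figure
    # understates the true bandwidth somewhat.
    gbytes = 3 * n * 8 / 1e9
    return a, gbytes / elapsed

_, bw = stream_triad()
print(f"approximate triad bandwidth: {bw:.1f} GB/s")
```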