• Title/Summary/Keyword: Node Speed


Improving TCP Performance by Limiting Congestion Window in Fixed Bandwidth Networks (고정대역 네트워크에서 혼잡윈도우 제한에 의한 TCP 성능개선)

  • Park, Tae-Joon;Lee, Jae-Yong;Kim, Byung-Chul
    • Journal of the Institute of Electronics Engineers of Korea TC / v.42 no.12 / pp.149-158 / 2005
  • This paper proposes a congestion avoidance algorithm that provides stable throughput and transmission rate regardless of buffer size by limiting the TCP congestion window in fixed-bandwidth networks. Additive Increase, Multiplicative Decrease (AIMD) is the most commonly used congestion control algorithm. In fixed-bandwidth networks, however, AIMD-based TCP congestion control causes unnecessary packet losses and retransmissions, because the congestion window keeps growing to probe for available bandwidth. In addition, the sawtooth variation of TCP throughput is unsuitable for applications that require low bandwidth variation. We present an algorithm in which the congestion window is limited under appropriate circumstances to avoid congestion losses while still addressing fairness. The maximum congestion window is determined from delay information so as to avoid queueing at the bottleneck node, which stabilizes the throughput and transmission rate of the connection without a separate buffer- and window-control process. Simulations were performed to verify compatibility, steady-state throughput, steady-state packet loss count, and the variance of the congestion window. The proposed algorithm requires changes only at the sender, so it is easy to deploy without modifying network routers or user programs. It can be applied to enhance the performance of high-speed access networks, a typical class of fixed-bandwidth networks.
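The delay-derived window cap can be sketched as follows. This is a minimal illustration assuming the cap is simply the path's bandwidth-delay product computed from the minimum RTT; the function and parameter names are not the paper's notation.

```python
def cwnd_limit(bandwidth_bps, min_rtt_s, mss_bytes=1460):
    """Cap the congestion window at the path's bandwidth-delay product
    so segments never queue at the bottleneck (illustrative sketch;
    names and the exact cap rule are assumptions, not the paper's)."""
    bdp_bytes = bandwidth_bps / 8 * min_rtt_s   # bytes in flight at full rate
    return max(1, int(bdp_bytes / mss_bytes))   # window in segments

# Example: 100 Mbps fixed-bandwidth access link, 20 ms minimum RTT
print(cwnd_limit(100e6, 0.020))  # 171 segments
```

Once the window is clamped at this value, throughput stays pinned to the link rate without the sawtooth of AIMD probing.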

Localization Using Extended Kalman Filter based on Chirp Spread Spectrum Ranging (확장 Kalman 필터를 적용한 첩 신호 대역확산 거리 측정 기반의 위치추정시스템)

  • Bae, Byoung-Chul;Nam, Yoon-Seok
    • Journal of the Institute of Electronics Engineers of Korea SC / v.49 no.4 / pp.45-54 / 2012
  • Location-based services rely on GPS positioning as a key technology, but since satellite signals cannot provide the current location indoors, research on indoor location awareness has focused primarily on low-power short-range communication. In particular, since the Chirp Spread Spectrum (CSS) based location-aware approach was selected as the IEEE 802.15.4a standard for low-power physical layers, ranging-based distance estimation techniques and data-rate enhancements have developed considerably. Distances measured by CSS ranging are known to contain considerable noise as well as bias. The noise problem can, however, be mitigated by modeling the non-zero-mean noise with a scaling factor corresponding to the change in magnitude of the measured distance vector. In this paper, we propose a localization system that uses CSS ranging to estimate the exact coordinates of a mobile node. By applying an extended Kalman filter together with the least-mean-squares method, the localization system becomes faster and more stable. Finally, we evaluate the reliability and accuracy of the proposed algorithm experimentally on a realized localization system.
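An extended-Kalman-filter step for range-based position estimation can be sketched roughly as below. This is a generic EKF for 2D ranging, not the paper's implementation: the CSS noise-scaling model is omitted, a simple random-walk process noise is assumed, and the anchor layout is invented for the demo.

```python
import numpy as np

def ekf_range_update(x, P, anchor, z, Q=1e-2, R=0.05):
    """One EKF predict/update cycle for a 2D position state x = [px, py]
    driven by a single range measurement z to a known anchor.
    (Illustrative sketch; the paper's bias/noise model is omitted.)"""
    P = P + Q * np.eye(2)                    # random-walk prediction step
    d = np.linalg.norm(x - anchor)           # predicted range h(x)
    H = ((x - anchor) / d).reshape(1, 2)     # Jacobian of h at current x
    S = (H @ P @ H.T).item() + R             # innovation covariance
    K = (P @ H.T) / S                        # Kalman gain, shape (2, 1)
    x = x + (K * (z - d)).ravel()            # state correction
    P = (np.eye(2) - K @ H) @ P              # covariance update
    return x, P

# Refine an initial guess using ranges from three fixed anchors
truth = np.array([4.0, 6.0])                 # unknown node position (demo)
anchors = [np.array(a, float) for a in [(0, 10), (10, 0), (10, 10)]]
x, P = np.array([0.0, 0.0]), np.eye(2)
for _ in range(20):
    for a in anchors:
        z = np.linalg.norm(truth - a)        # noiseless range for the demo
        x, P = ekf_range_update(x, P, a, z)
```

With noiseless ranges and three non-collinear anchors, the estimate converges to the true position; with real CSS measurements the gain would weight the noisy ranges against the motion model.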

A Real-Time Certificate Status Verification Method based on Reduction Signature (축약 서명 기반의 실시간 인증서 상태 검증 기법)

  • Kim Hyun Chul;Ahn Jae Myoung;Lee Yong Jun;Oh Hae Seok
    • The KIPS Transactions:PartC / v.12C no.2 s.98 / pp.301-308 / 2005
  • As online banking transactions grow rapidly, guaranteeing the validity of business transactions becomes increasingly important. To guarantee this validity efficiently, a certificate status verification system is required that offers real-time identity certification, data integrity, confidentiality, and non-repudiation. Existing real-time certificate status verification systems suffer from a structural concentration problem, since a single node handles all transactions. Moreover, every status verification request transmits much useless information, causing network overload and a communication bottleneck. These problems make such systems ill-suited to banking transactions, where real response time matters greatly. To relieve the unnecessary-information and structural-concentration problems that arise when the existing real-time certificate status protocol is invoked, this paper distributes status verification across inspection servers partitioned by domain. We propose a real-time certificate status verification method that avoids network overload and communication bottlenecks by including only the strictly necessary reduction information in each status verification request. Tests confirm that the proposed method verifies certificate status 15% faster than the existing Online Certificate Status Protocol (OCSP).
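The abstract does not specify the reduction format, so the toy below only illustrates the bandwidth argument: a short authenticated digest of the status stands in for the "reduction signature", against a padded stand-in for a full signed response. All names, sizes, and the use of HMAC here are assumptions, not the paper's protocol.

```python
import hashlib
import hmac
import secrets

KEY = secrets.token_bytes(32)   # hypothetical per-domain server key

def full_status_response(serial, status):
    """Stand-in for a conventional full signed status response that
    carries certificate metadata the client rarely needs."""
    body = f"{serial}|{status}|issuer-chain-and-metadata".encode() * 8
    return body + hmac.new(KEY, body, "sha256").digest()

def reduced_status_response(serial, status):
    """Hypothetical 'reduction' response: authenticate only a short
    digest of the status, so far less data crosses the network."""
    digest = hashlib.sha256(f"{serial}|{status}".encode()).digest()[:8]
    return digest + hmac.new(KEY, digest, "sha256").digest()

full = full_status_response("0x1A2B", "good")
small = reduced_status_response("0x1A2B", "good")
```

The per-domain partitioning in the paper would additionally spread these requests over multiple inspection servers instead of one OCSP responder.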

A New Incentive Based Bandwidth Allocation Scheme For Cooperative Non-Orthogonal Multiple Access (협력 비직교 다중 접속 네트워크에서 새로운 인센티브 기반 주파수 할당 기법)

  • Kim, Jong Won;Kim, Sung Wook
    • KIPS Transactions on Computer and Communication Systems / v.10 no.6 / pp.173-180 / 2021
  • Non-Orthogonal Multiple Access (NOMA) is a technology for guaranteeing the explosively increasing Quality of Service (QoS) demands of users in 5G networks. NOMA removes the frequency orthogonality of Orthogonal Multiple Access (OMA) while allocating power differentially to distinguish user signals, and can therefore guarantee higher communication speed than OMA. However, NOMA has one disadvantage: it consumes more energy as the transmission distance increases. To solve this problem, relay nodes are employed to implement the cooperative NOMA control idea. In a cooperative NOMA network, the participation of relay nodes in cooperative communication is essential. In this paper, a new bandwidth allocation scheme is proposed for the cooperative NOMA platform. By employing the idea of the Vickrey-Clarke-Groves (VCG) mechanism, the proposed scheme can effectively prevent selfish actions by relay nodes in the cooperative NOMA network. In particular, base stations pay incentives to relay nodes in proportion to the relay nodes' contributions. Therefore, the proposed scheme can control the selfish behavior of relay nodes and improve overall system performance.
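A VCG-flavored incentive rule of this kind can be sketched over a toy relay-selection problem: pay each relay its marginal contribution to the total welfare the base station can achieve within the shared band. The model below (per-relay bandwidth demands, contribution values, exhaustive subset search) is purely illustrative, not the paper's formulation.

```python
from itertools import combinations

def best_welfare(relays, bandwidth):
    """Max total contribution over subsets of relays whose bandwidth
    demands fit the shared band (toy model; relays = [(demand, value)])."""
    best = 0.0
    for r in range(len(relays) + 1):
        for subset in combinations(relays, r):
            if sum(d for d, _ in subset) <= bandwidth:
                best = max(best, sum(v for _, v in subset))
    return best

def incentive(relays, i, bandwidth):
    """Pay relay i its marginal contribution to total welfare, a
    VCG-style rule that makes misreporting unprofitable."""
    others = relays[:i] + relays[i + 1:]
    return best_welfare(relays, bandwidth) - best_welfare(others, bandwidth)

relays = [(2, 5.0), (3, 4.0), (4, 6.0)]   # (bandwidth demand, contribution)
print(incentive(relays, 0, bandwidth=6))  # relay 0's marginal contribution: 5.0
```

A relay whose presence does not change the achievable welfare earns nothing, so inflating one's demand can only reduce the payment.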

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.1-17 / 2017
  • A deep learning framework is software designed to help develop deep learning models. Two of its most important functions are automatic differentiation and GPU utilization. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, Microsoft Cognitive Toolkit, was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities. Since the introduction of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to keep investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in some sense a predecessor of the other two. The most common and important function of deep learning frameworks is automatic differentiation. Essentially all the mathematical expressions of deep learning models can be represented as computational graphs consisting of nodes and edges. Partial derivatives on each edge of a computational graph can then be obtained, and with them the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First, the convenience of coding is in the order CNTK, Tensorflow, Theano. This criterion is based simply on code length; the learning curve and ease of coding are not the main concern.
According to this criterion, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, we need to define weight variables and biases explicitly. CNTK and Tensorflow are easier to implement with because they provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives us coding flexibility. With low-level coding such as in Theano, we can implement and test any new deep learning model or search method we can think of. As for execution speed, we found no meaningful difference among the frameworks. According to the experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. For CNTK, the experimental environment could not be kept identical: the CNTK code had to run on a PC without a GPU, where code executes as much as 50 times slower than with a GPU. We nevertheless concluded that the difference in execution speed was within the range of variation caused by the different hardware setup. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, differentiated by 15 attributes. Important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNN, RNN, and DBN. For users implementing large-scale deep learning models, support for multiple GPUs or multiple servers is also important, and for those learning deep learning, the availability of sufficient examples and references matters as well.
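The automatic-differentiation idea described above, partial derivatives attached to graph edges and combined by the chain rule, can be shown with a minimal reverse-mode sketch. This is a toy, not how any of the three frameworks is implemented.

```python
class Node:
    """A computational-graph node: a value plus local-derivative edges."""
    def __init__(self, value, parents=()):
        self.value, self.grad, self.parents = value, 0.0, parents

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Node(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Node(self.value * other.value,
                    ((self, other.value), (other, self.value)))

    def backward(self, seed=1.0):
        """Chain rule: push the upstream derivative down every edge."""
        self.grad += seed
        for parent, local_derivative in self.parents:
            parent.backward(seed * local_derivative)

x, w, b = Node(2.0), Node(3.0), Node(1.0)
y = x * w + b      # forward pass builds the graph: y = 2*3 + 1 = 7
y.backward()       # reverse pass: dy/dx = w = 3, dy/dw = x = 2, dy/db = 1
```

Theano, Tensorflow, and CNTK all generalize this pattern to tensors, with graph optimization and GPU kernels layered on top.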

A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference / 2003.07a / pp.60-61 / 2003
  • A new approach to reducing the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms with a diffraction efficiency of 75.8% and a uniformity of 5.8% are proven in computer simulation and demonstrated experimentally. Recently, computer-generated holograms (CGHs) with high diffraction efficiency and design flexibility have been widely developed for applications such as optical information processing, optical computing, and optical interconnection. Among the proposed optimization methods, the GA has become popular due to its capability of reaching a nearly global optimum. However, there exists a drawback to consider when using the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One major reason the GA's operation can be time-intensive is the expense of computing the cost function, which must Fourier-transform the parameters encoded on the hologram into the fitness value. To remedy this drawback, the Artificial Neural Network (ANN) has been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications requiring high precision. We therefore attempt a new approach that combines the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is a process of iteration comprising selection, crossover, and mutation operators [2]. It is worth noting that the evaluation of the cost function, with the aim of selecting better holograms, plays an important role in the implementation of the GA. However, this evaluation process consumes much time Fourier-transforming the encoded parameters on the hologram into the value to be evaluated; depending on the speed of the computer, it can last up to ten minutes.
It will be more effective if, instead of merely generating random holograms in the initial step, a set of approximately desired holograms is employed. By doing so, the initial population contains fewer trial holograms, which reduces the GA's computation time accordingly. We therefore propose a hybrid algorithm that uses a trained neural network to initiate the GA's procedure, so that the initial population contains fewer random holograms, compensated by approximately desired ones. Figure 1 is a flowchart of the hybrid algorithm in comparison with the classical GA. The procedure of synthesizing a hologram on a computer is divided into two steps. First, holograms are simulated with the ANN method [1] to acquire approximately desired holograms. With a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained artificial neural network yields approximately desired holograms in fairly good agreement with what the theory suggests. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the operation of the hybrid algorithm is the same as that of the GA except for the modified initial step. Hence, the verified results in Ref. [2] for parameters such as the probabilities of crossover and mutation, the tournament size, and the crossover block size remain unchanged, apart from the reduced population size. A reconstructed image with 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the iteration number is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is also evaluated, as shown in Fig. 2.
With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared to the GA method, the hybrid algorithm demonstrates its efficient performance. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows pictures of diffracted patterns of the letter "0" from the holograms generated using the hybrid algorithm. A diffraction efficiency of 75.8% and a uniformity of 5.8% are measured. The simulation and experimental results are in fairly good agreement with each other. In this paper, the Genetic Algorithm and Neural Network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared to the GA method while still achieving holograms of high diffraction efficiency and uniformity. This work was supported by No.mOl-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.
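The GA half of the procedure can be sketched as below: a population of binary phase masks, each evaluated by Fourier-transforming the hologram and measuring how much far-field energy lands on a target region. This toy keeps only elitist selection and mutation (no crossover, no ANN seeding) and is not the authors' code; sizes and rates are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
N, POP = 16, 20
target = np.zeros((N, N))
target[4:12, 4:12] = 1.0                               # desired far-field region

def fitness(mask):
    """Diffraction-efficiency proxy: fraction of the far-field energy
    (Fourier transform of the binary phase hologram) on the target."""
    field = np.exp(1j * np.pi * mask)                  # phase levels 0 / pi
    intensity = np.abs(np.fft.fft2(field)) ** 2
    return (intensity * target).sum() / intensity.sum()

pop = rng.integers(0, 2, (POP, N, N))                  # random binary masks
initial_best = max(fitness(m) for m in pop)
for generation in range(30):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]      # keep the fitter half
    children = parents ^ (rng.random(parents.shape) < 0.02)   # mutate copies
    pop = np.concatenate([parents, children])
final_best = max(fitness(m) for m in pop)
```

The FFT inside `fitness` is exactly the expensive step the abstract describes; seeding `pop` with ANN-produced holograms instead of random masks is what lets the hybrid algorithm run with a smaller population and fewer generations.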


A Study of Intangible Cultural Heritage Communities through a Social Network Analysis - Focused on the Item of Jeongseon Arirang - (소셜 네트워크 분석을 통한 무형문화유산 공동체 지식연결망 연구 - 정선아리랑을 중심으로 -)

  • Oh, Jung-shim
    • Korean Journal of Heritage: History & Science / v.52 no.3 / pp.172-187 / 2019
  • Knowledge of intangible cultural heritage is usually disseminated through word of mouth and actions rather than written records. Thus, people assemble to teach others about it and form communities. Accordingly, to understand and spread information about intangible cultural heritage properly, it is necessary to understand not only its attributes but also the relational characteristics of its community. Community members include specialized transmitters, who work under the auspices of institutions, and general transmitters, who enjoy intangible cultural heritage in their daily lives. They converse about intangible cultural heritage in close relationships. To date, however, research has focused only on professionals. Therefore, this study examined the roles of general transmitters of intangible cultural heritage by investigating the communities centering around Jeongseon Arirang; a social network analysis was performed. Regarding the research objectives presented in the introduction, the main findings of the study are summarized as follows. First, there were 197 links between 74 members of the Jeongseon Arirang Transmission Community. One individual had connections with 2.7 persons on average, and all members were connected within two steps in the community. However, the density and the clustering coefficient were low, 0.036 and 0.32, respectively; therefore, the cohesiveness of this community was low, and the relationships between the members were not strong. Second, 'Young-ran Yu', 'Nam-gi Kim' and 'Gil-ja Kim' were found to be the prominent figures of the Jeongseon Arirang Transmission Community, and the central structure of the network was concentrated around these three individuals. Being located in the central structure of the network indicates that a person is popular and ranked high.
It also means that a person has an advantage in the speed and quantity of acquiring information and resources, and holds a relatively superior position in terms of bargaining power. Third, to assess whether the roles of Young-ran Yu, Nam-gi Kim, and Gil-ja Kim, the major figures identified in the analysis of the central structure, could be replaced, structural equivalence was profiled. The results showed that the positions and roles of Young-ran Yu, Nam-gi Kim, and Gil-ja Kim were unrivaled and irreplaceable in the Jeongseon Arirang Transmission Community. However, considering that these three members were in their 60s and 70s, measures seem necessary for the smooth maintenance and operation of the community. Fourth, to examine the subgroups hidden in the network of the Jeongseon Arirang Transmission Community, an analysis of communities was conducted. A community here refers to a subgroup clearly differentiated based on modularity. The analysis identified four communities, and an analysis of their central structures showed that they formed around Young-ran Yu, Hyung-jo Kim, Nam-gi Kim, and Gil-ja Kim. Most of the transmission TAs recommended by those members, the students who completed a course, the transmission scholarship holders, and the general members taught in the transmission classes of the Jeongseon Arirang Preservation Society were included as members of the communities. These findings show that the present method of transmitting Jeongseon Arirang, the joint transmission method, makes it possible to maintain the transmission genealogy while exchanging with the general members.
The joint transmission method is worth attention because it overcomes the demerits of the existing closed one-on-one apprenticeship method and gives members an opportunity to learn their masters' various singing styles. This study is significant for the following reasons. First, by collecting and examining data with a social network analysis method, it analyzed phenomena that had been difficult to investigate using existing statistical analyses. Second, unlike the previous approach, in which the genealogy was understood by examining oral data, it analyzed the structure of the transmitters' relationships using objective, quantitative data. Third, it visualized the abstract structures of the relationships among the transmitters of intangible cultural heritage and presented them on a 2D spring map. The results of this study can be utilized as a baseline for developing community-centered policies for the protection of intangible cultural heritage, as specified in the UNESCO Convention for the Safeguarding of Intangible Cultural Heritage. To achieve this, it will be necessary to supplement this study with case studies and follow-up studies on more aspects in the future.
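The reported network-level figures follow directly from the raw counts if the 197 links are treated as directed ties, as this small check shows (the clustering coefficient of 0.32 would require the full edge list, which is not reproduced in the abstract):

```python
# Counts reported for the Jeongseon Arirang Transmission Community
n_members, n_links = 74, 197

# Directed-network density: observed ties / possible ordered pairs
density = n_links / (n_members * (n_members - 1))

# Average number of ties per member
mean_ties = n_links / n_members

print(round(density, 3))    # 0.036
print(round(mean_ties, 1))  # 2.7
```

Matching both published statistics under this assumption suggests the analysis treated the knowledge network as directed.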