• Title/Summary/Keyword: Optimization of Computer Network


Symbolizing Numbers to Improve Neural Machine Translation (숫자 기호화를 통한 신경기계번역 성능 향상)

  • Kang, Cheongwoong; Ro, Youngheon; Kim, Jisu; Choi, Heeyoul
    • Journal of Digital Contents Society / v.19 no.6 / pp.1161-1167 / 2018
  • The development of machine learning has enabled machines to perform delicate tasks that previously only humans could do, and many companies have accordingly introduced machine learning based translators. Existing translators perform well overall but have trouble with number translation: they often mistranslate numbers when the input sentence includes a large number, and the structure of the output sentence can change completely even if only one number in the input changes. In this paper, first, we optimized a neural machine translation model architecture that uses a bidirectional RNN, LSTM, and the attention mechanism, through data cleansing and tuning of the dictionary size. Then, we implemented a number-processing algorithm specialized for number translation and applied it to the neural machine translation model to solve the problems above. The paper presents the data cleansing method, an optimal dictionary size, and the number-processing algorithm, together with experimental results on translation performance measured by the BLEU score.
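
The abstract does not spell out the number-processing algorithm, but the general idea of symbolizing numbers can be sketched as follows: replace each numeral with an indexed placeholder token before translation, then restore the original numbers afterwards. The regex, token format, and function names below are invented for illustration, not taken from the paper.

```python
import re

# Matches integers and decimals, including comma/period group separators.
NUM_PATTERN = re.compile(r"\d+(?:[.,]\d+)*")

def symbolize(sentence):
    """Replace each number with an indexed placeholder token and return
    the symbolized sentence plus a mapping for later restoration."""
    mapping = {}
    def repl(match):
        token = f"<num{len(mapping)}>"
        mapping[token] = match.group(0)
        return token
    return NUM_PATTERN.sub(repl, sentence), mapping

def desymbolize(translated, mapping):
    """Put the original numbers back into the translated output."""
    for token, number in mapping.items():
        translated = translated.replace(token, number)
    return translated

src, mapping = symbolize("The company sold 1,250,000 units in 2018.")
print(src)  # The company sold <num0> units in <num1>.
# ... run src through the NMT model, then desymbolize(output, mapping).
```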

Location Trigger System for the Application of Context-Awareness based Location services

  • Lee, Yon-Sik; Jang, Min-Seok
    • Journal of the Korea Society of Computer and Information / v.24 no.10 / pp.149-157 / 2019
  • Research has recently been active on systems that aim to optimize resource utilization by analyzing the intentions and behavior patterns of objects (users, consumers). A service system that applies information about an object's location or behavior must include a location trigger processing system for tracking the object's real-time location. In this paper, we analyze the design issues in implementing a context-awareness based location trigger system and present system models based on that analysis. To this end, the paper introduces the concept of a location trigger for intelligent tracking of the movement of objects, proposes a mobile agent system with active rules that can perform monitoring and appropriate actions based on sensing information and location context information, and uses them to design and implement a location trigger system for context-awareness based location services. The proposed system is verified by implementing location trigger processing scenarios and the trigger service and action service protocols. In addition, experiments on mobile agents with active rules suggest that the proposed system can optimize the role and function of an application system by using rules appropriate to the service characteristics, and that it is scalable and effective for location-based service systems. This work is a preliminary study toward an optimization system for resource utilization (equipment, power, manpower, etc.) that exploits active system behavior such as real-time remote autonomous control and exception handling over the consumption patterns and behavior changes of power users. The proposed system can be used in system configurations that induce optimized resource utilization through intelligent warnings and actions based on object location, and can be applied effectively to the development of various location service systems.
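
The active rules carried by the mobile agents are not detailed in the abstract; a minimal event-condition-action (ECA) sketch of a location trigger, with invented class, field, and zone names, could look like this:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ActiveRule:
    """Event-Condition-Action rule evaluated against sensed location context."""
    event: str                          # e.g. "location_update"
    condition: Callable[[dict], bool]   # predicate over the context
    action: Callable[[dict], None]      # side effect when the rule fires

def on_event(event_name, context, rules):
    """Fire every rule whose event matches and whose condition holds."""
    for rule in rules:
        if rule.event == event_name and rule.condition(context):
            rule.action(context)

# Hypothetical trigger: warn when a tracked object enters a restricted zone.
rules = [
    ActiveRule(
        event="location_update",
        condition=lambda ctx: ctx["zone"] == "restricted",
        action=lambda ctx: print(f"ALERT: object {ctx['id']} entered {ctx['zone']}"),
    )
]
on_event("location_update", {"id": 7, "zone": "restricted"}, rules)
```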

Performance Analysis on Link Quality of Handover Mechanism based on the Terminal Mobility in Wired and Wireless Integrated Networks (유무선 복합망에서 이동 단말 기반 핸드오버의 링크 품질에 관한 성능 분석)

  • Park, Nam-Hun; Gwon, O-Jun; Kim, Yeong-Seon; Gam, Sang-Ha
    • The Transactions of the Korea Information Processing Society / v.7 no.8S / pp.2608-2619 / 2000
  • This paper proposes a handover scheme for mobile terminals and presents the results of its performance analysis. In the conventional scheme, a handover request is made at a fixed signal level without considering network load or terminal mobility, so the terminal may be dropped. The proposed scheme minimizes dropping and the handover blocking probability by basing the terminal's handover request on network load and terminal mobility. The conventional scheme is also sequential: the network performs the resource check and path rerouting only after the handover is requested by the MT (Mobile Terminal). The proposed scheme instead predicts the handover request time and pre-processes the resource check before the request, so that handover latency is reduced; path optimization is deferred until after the handover completes, reducing latency further. The reduced latency prevents service dropping by minimizing backward handover blocking. In summary, we propose terminal-based prediction of the handover request time and a terminal-based decision method, and we validate the performance of the proposed scheme through simulation of various cases.

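As a rough illustration of requesting handover based on network load and terminal mobility rather than a fixed signal level, the sketch below raises the trigger threshold for loaded cells and fast terminals, so handover is requested earlier in those cases. The weighting constants and threshold values are invented, not taken from the paper.

```python
def handover_threshold(base_dbm, network_load, speed_mps,
                       load_gain=6.0, speed_gain=0.2):
    """Adaptive trigger level: request handover earlier (at a higher
    signal level) when the cell is loaded or the terminal moves fast.
    All constants are illustrative assumptions."""
    return base_dbm + load_gain * network_load + speed_gain * speed_mps

def should_request_handover(rssi_dbm, network_load, speed_mps, base_dbm=-95.0):
    return rssi_dbm < handover_threshold(base_dbm, network_load, speed_mps)

# A fast terminal in a loaded cell triggers earlier than a static one.
print(should_request_handover(-90.0, network_load=0.8, speed_mps=15.0))  # True
print(should_request_handover(-90.0, network_load=0.1, speed_mps=0.0))   # False
```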

A Simulator for Integrated Voice/Data Packet Communication Networks (음성과 데이터가 집적된 패킷통신망을 위한 시뮬레이터 개발)

  • Park, Soon; Un, Chong-Kwan
    • The Journal of Korean Institute of Communications and Information Sciences / v.11 no.2 / pp.108-121 / 1986
  • In this paper, the development of a simulator for performance estimation and parameter optimization of an integrated voice/data packet communication network is described. The simulator can model an integrated voice/data network that handles packet voice terminals as well as data terminals and hosts operating under standard CCITT protocols. Of the three discrete-event simulation approaches presently known, the process interaction method was chosen; with this approach one can implement a simulator that corresponds most closely to the real system. The simulator was implemented in the PL/I and GPSS simulation languages, resulting in a software package of about 4,000 lines. To reduce its run time, we used a method of reducing conditional events based on the GPSS LINK block. We describe various aspects of the simulation model, then investigate the performance of a 7-node network using the simulator and present the results. To validate the simulator, we construct a simulation model of a simple voice/data multiplexer and compare the simulation results with those of an analytical model.

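The original simulator was written in PL/I and GPSS. As a loose modern analogue of the process-interaction approach it used, here is a toy voice/data multiplexer in Python's simpy library; the traffic parameters are invented, not the paper's.

```python
import random

import simpy  # process-interaction DES library, akin in spirit to GPSS

def packet_source(env, link, mean_iat, service_time, delays):
    """Generate packets with exponential interarrival times."""
    while True:
        yield env.timeout(random.expovariate(1.0 / mean_iat))
        env.process(transmit(env, link, service_time, delays))

def transmit(env, link, service_time, delays):
    """Queue for the shared link, transmit, and record the total delay."""
    arrival = env.now
    with link.request() as req:
        yield req                        # wait for the multiplexer
        yield env.timeout(service_time)  # transmission time
    delays.append(env.now - arrival)

random.seed(1)
env = simpy.Environment()
link = simpy.Resource(env, capacity=1)   # the shared voice/data link
delays = []
env.process(packet_source(env, link, mean_iat=2.0, service_time=0.5, delays=delays))  # voice
env.process(packet_source(env, link, mean_iat=5.0, service_time=1.5, delays=delays))  # data
env.run(until=1000)
print(f"mean packet delay: {sum(delays) / len(delays):.3f}")
```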

STAR-24K: A Public Dataset for Space Common Target Detection

  • Zhang, Chaoyan; Guo, Baolong; Liao, Nannan; Zhong, Qiuyun; Liu, Hengyan; Li, Cheng; Gong, Jianglei
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.2 / pp.365-380 / 2022
  • Target detection algorithms based on supervised learning are the current mainstream in target detection, and a high-quality dataset is the prerequisite for such an algorithm to achieve good detection performance. The larger and higher-quality the dataset, the stronger the generalization ability of the model; in other words, the dataset determines the upper limit of what the model can learn. A convolutional neural network optimizes its parameters under strong supervision: the error is calculated by comparing each predicted frame with the manually labeled ground-truth frame and is then propagated into the network for continuous optimization. Strongly supervised learning relies mainly on large numbers of images, so the number and quality of images directly affect the learning results. This paper proposes STAR-24K (a dataset for Space TArget Recognition with more than 24,000 images), a dataset for detecting common targets in space. Since no publicly available dataset for space target detection currently exists, we extracted images from pictures and videos released on the official websites of NASA (National Aeronautics and Space Administration) and ESA (The European Space Agency) and expanded them to 24,451 images. We evaluate popular object detection algorithms on it to build a benchmark. Our STAR-24K dataset is publicly available at https://github.com/Zzz-zcy/STAR-24K.
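
The comparison between a predicted frame and a manually labeled frame mentioned above is conventionally measured with intersection-over-union (IoU); a minimal implementation for axis-aligned boxes (box format assumed, not specified by the paper):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    Standard overlap measure for matching predicted and labeled boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ~0.143: 25 overlap / 175 union
```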

Cable damage identification of cable-stayed bridge using multi-layer perceptron and graph neural network

  • Pham, Van-Thanh; Jang, Yun; Park, Jong-Woong; Kim, Dong-Joo; Kim, Seung-Eock
    • Steel and Composite Structures / v.44 no.2 / pp.241-254 / 2022
  • The cables in a cable-stayed bridge are critical load-carrying parts. The potential damage to cables should be identified early to prevent disasters. In this study, an efficient deep learning model is proposed for the damage identification of cables using both a multi-layer perceptron (MLP) and a graph neural network (GNN). Datasets are first generated using the practical advanced analysis program (PAAP), which is a robust program for modeling and analyzing bridge structures with low computational costs. The model based on the MLP and GNN can capture complex nonlinear correlations between the vibration characteristics in the input data and the cable system damage in the output data. Multiple hidden layers with an activation function are used in the MLP to expand the original input vector of the limited measurement data to obtain a complete output data vector that preserves sufficient information for constructing the graph in the GNN. Using the gated recurrent unit and set2set model, the GNN maps the formed graph feature to the output cable damage through several updating times and provides the damage results to both the classification and regression outputs. The model is fine-tuned with the original input data using Adam optimization for the final objective function. A case study of an actual cable-stayed bridge was considered to evaluate the model performance. The results demonstrate that the proposed model provides high accuracy (over 90%) in classification and satisfactory correlation coefficients (over 0.98) in regression and is a robust approach to obtain effective identification results with a limited quantity of input data.
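
The full MLP-plus-GNN pipeline is involved; below is a hedged PyTorch sketch of just the MLP stage described above, expanding limited measurements and emitting both classification and regression outputs, trained with Adam as the abstract states. The GNN/set2set stage is omitted, and all dimensions and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CableDamageNet(nn.Module):
    """Sketch: an MLP expands a limited measurement vector into a richer
    feature vector; two heads emit a damaged-cable classification and a
    damage-severity regression. The paper's GNN stage is omitted."""
    def __init__(self, n_measurements=8, hidden=64, n_cables=24):
        super().__init__()
        self.expand = nn.Sequential(
            nn.Linear(n_measurements, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.cls_head = nn.Linear(hidden, n_cables)  # which cable is damaged
        self.reg_head = nn.Linear(hidden, 1)         # damage severity

    def forward(self, x):
        h = self.expand(x)
        return self.cls_head(h), self.reg_head(h)

model = CableDamageNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # Adam, as in the paper
logits, severity = model(torch.randn(4, 8))
print(logits.shape, severity.shape)  # torch.Size([4, 24]) torch.Size([4, 1])
```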

Incentive Optimization Scheme for Small Cell Base Station Cooperation in Heterogeneous Networks (이기종 네트워크에서 스몰셀 기지국 협력을 위한 인센티브 최적화 기법)

  • Jung, Sukwon; Kim, Taejoon
    • KIPS Transactions on Computer and Communication Systems / v.7 no.8 / pp.203-210 / 2018
  • Mobile traffic is increasing steadily, and mobile carriers find it increasingly hard to meet this ever-growing demand simply by installing additional base stations. To overcome this problem, heterogeneous networks were introduced, which reuse space and frequency by installing small cells such as femto cells inside existing macro cells. However, macro cell users cannot gain spectral efficiency without the cooperation of femto owners, and femto owners are reluctant to accommodate other mobile stations in their femto cells without a proper incentive. In this paper, a method of obtaining the optimal incentive is proposed: it adopts a utility function based on the logarithm of the throughput of the mobile stations, and the incentive is calculated to maximize the utility of the entire network.
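
A toy version of this log-throughput utility maximization might look like the following; the admission model, rate gains, and cost term are assumptions made for illustration, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def network_utility(incentive, macro_rates, femto_rate_gain, cost_per_unit=1.0):
    """Toy model: a larger incentive persuades more femto owners to admit
    macro users, raising their throughput, but the incentive itself is a
    cost to the operator. Utility is the sum of log throughputs."""
    admit_fraction = incentive / (incentive + 1.0)      # diminishing returns
    rates = macro_rates * (1.0 + femto_rate_gain * admit_fraction)
    return np.sum(np.log(rates)) - cost_per_unit * incentive

macro_rates = np.array([1.0, 2.0, 0.5, 1.5])  # Mbps, illustrative
res = minimize_scalar(lambda a: -network_utility(a, macro_rates, femto_rate_gain=4.0),
                      bounds=(0.0, 10.0), method="bounded")
print(f"optimal incentive: {res.x:.3f}")
```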

ICARP: Interference-based Charging Aware Routing Protocol for Opportunistic Energy Harvesting Wireless Networks (ICARP: 기회적 에너지 하베스팅 무선 네트워크를 위한 간섭 기반 충전 인지 라우팅 프로토콜)

  • Kim, Hyun-Tae; Ra, In-Ho
    • Journal of the Korean Institute of Intelligent Systems / v.27 no.1 / pp.1-6 / 2017
  • Recent research on radio frequency energy harvesting networks (RF-EHNs) with limited energy resources, such as batteries, has focused on schemes that can effectively extend the lifetime of a network almost indefinitely. To considerably increase both the amount of energy obtained from radio frequency harvesting and the charging effectiveness, it is very important to design a network that supports energy harvesting and data transfer simultaneously, with full consideration of the various characteristics affecting the performance of an RF-EHN. In this paper, we propose an interference-based charging-aware routing protocol (ICARP) that utilizes interference information and charging time to maximize the amount of energy harvested and to minimize the end-to-end delay from a source to the given destination node. To accomplish these objectives, the paper presents a design of ICARP that adopts new network metrics, namely interference information and charging time, to minimize end-to-end delay in energy harvesting wireless networks. The proposed method enables an RF-EHN to significantly reduce packet losses and retransmissions, improving energy consumption. Finally, simulation results show that network performance, in terms of packet transmission rate and end-to-end delay, is enhanced in comparison with existing routing protocols.
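
One way to realize interference- and charging-aware route selection is to fold both quantities into a composite link weight and run a shortest-path search. The sketch below uses networkx; the weighting constants and topology are invented, and the cost function is only in the spirit of ICARP, not its published metric.

```python
import networkx as nx

def icarp_weight(delay, interference, charging_time, alpha=2.0, beta=0.5):
    """Composite link cost: penalize links with high interference and
    long charging time. alpha and beta are illustrative constants."""
    return delay + alpha * interference + beta * charging_time

G = nx.Graph()
# (u, v, delay, interference, charging_time): made-up topology
links = [("S", "A", 1, 0.2, 2), ("A", "D", 1, 0.9, 5),
         ("S", "B", 2, 0.1, 1), ("B", "D", 2, 0.1, 1)]
for u, v, d, i, c in links:
    G.add_edge(u, v, weight=icarp_weight(d, i, c))

# The low-interference, quick-charging path wins despite more hops' delay.
print(nx.shortest_path(G, "S", "D", weight="weight"))  # ['S', 'B', 'D']
```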

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.127-142 / 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. These have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models, and in recent years they have gained more popularity than unsupervised learning models such as deep belief networks, because of their successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation, an abbreviation for "backward propagation of errors," is a common method of training artificial neural networks used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network; the gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. This architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. Today, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks rest on three basic ideas: local receptive fields, shared weights, and pooling. Local receptive fields mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each local receptive field, so all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers; a pooling layer simplifies the information in the output of the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training such networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network where connections between units form a directed cycle, creating an internal state that allows the network to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks, because of unstable gradient problems such as vanishing and exploding gradients: the gradient can get smaller and smaller as it is propagated back through the layers, which makes learning in early layers extremely slow. The problem is actually worse in RNNs, since gradients are propagated backward not just through layers but also through time; if the network runs for a long time, the gradient can become extremely unstable and hard to learn from. Incorporating long short-term memory units (LSTMs) into RNNs makes it much easier to get good results when training them, and many recent papers make use of LSTMs or related ideas.
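
The three convolutional ideas named above (local receptive fields, shared weights, and pooling) can be made concrete in a few lines of numpy; the kernel and image below are illustrative.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid convolution with a single shared kernel: every output unit
    looks at a local receptive field and reuses the same weights."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling: summarizes each size x size block."""
    H, W = fmap.shape
    H, W = H - H % size, W - W % size
    return fmap[:H, :W].reshape(H // size, size, W // size, size).max(axis=(1, 3))

image = np.random.rand(8, 8)
edge_kernel = np.array([[1.0, -1.0], [1.0, -1.0]])  # crude vertical-edge detector
features = max_pool(conv2d(image, edge_kernel))
print(features.shape)  # (3, 3): 7x7 feature map pooled 2x2, last row/col dropped
```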

Optimization of attention map based model for improving the usability of style transfer techniques

  • Junghye Min
    • Journal of the Korea Society of Computer and Information / v.28 no.8 / pp.31-38 / 2023
  • Style transfer is one of the deep learning-based image processing techniques that have been actively researched recently, and these research efforts have led to significant improvements in the quality of the resulting images. Style transfer takes a content image and a style image as inputs and generates a transformed result image by applying the characteristics of the style image to the content image; it is becoming increasingly important for exploiting the diversity of digital content. To improve the usability of style transfer technology, ensuring stable performance is crucial. Recently, the Transformer concept has been actively utilized in natural language processing, and attention maps, which form the basis of Transformers, are also being actively applied and researched in the development of style transfer techniques. In this paper, we analyze the representative techniques SANet and AdaAttN and propose a novel attention map-based structure that can generate improved style transfer results. The results demonstrate that the proposed technique effectively preserves the structure of the content image while applying the characteristics of the style image.
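
The attention-map mechanism underlying SANet and AdaAttN can be sketched roughly as follows: each content position attends over all style positions and receives a weighted mix of style features. The learned 1x1 convolutions and normalization steps of the real methods are omitted here, and the shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def attention_transfer(content_feat, style_feat):
    """Minimal attention-map sketch in the spirit of SANet/AdaAttN.

    content_feat, style_feat: (C, H, W) feature maps from an encoder.
    Returns a (C, H, W) map where each content position holds a
    softmax-weighted mix of style features.
    """
    C, H, W = content_feat.shape
    q = content_feat.reshape(C, H * W)            # queries: content positions
    k = style_feat.reshape(C, -1)                 # keys/values: style positions
    attn = F.softmax(q.T @ k / C ** 0.5, dim=-1)  # (HW_c, HW_s) attention map
    out = (attn @ k.T).T.reshape(C, H, W)         # style mixed per content pos
    return out

content = torch.randn(64, 16, 16)
style = torch.randn(64, 20, 20)
print(attention_transfer(content, style).shape)  # torch.Size([64, 16, 16])
```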