• Title/Summary/Keyword: Backward propagation

Search Result 65, Processing Time 0.022 seconds

Development of a Network Loading Model for Dynamic Traffic Assignment (동적 통행배정모형을 위한 교통류 부하모형의 개발)

  • 임강원
    • Journal of Korean Society of Transportation
    • /
    • v.20 no.3
    • /
    • pp.149-158
    • /
    • 2002
  • For the purpose of precisely describing real-time traffic patterns in an urban road network, dynamic network loading (DNL) models able to simulate traffic behavior are required. A number of different methods are available, including macroscopic and microscopic dynamic network models as well as analytical models. The equivalent minimization problem and the variational inequality problem are analytical models, which include an explicit mathematical travel cost function for describing traffic behavior on the network, while microscopic simulation models move vehicles according to behavioral car-following and cell-transmission rules. However, DNL models embedding such travel time functions have some limitations: analytical models cannot describe traffic characteristics such as the relations between flow and speed or between speed and density, and microscopic simulation models, although the most detailed and realistic, are difficult to calibrate and may not be the most practical tools for large-scale networks. To cope with these problems, this paper develops a new DNL model appropriate for dynamic traffic assignment (DTA). The model is combined with a vertical queue model representing vehicles as vertical queues at the end of links. In order to compare and assess the model, we use a contrived example network. From the numerical results, we found that the DNL model presented in the paper was able to describe traffic characteristics with a reasonable amount of computing time. The model also showed a good relationship between travel time and traffic flow and expressed the backward-bending feature near capacity.
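The vertical-queue idea in this abstract can be sketched as a point-queue link model: vehicles traverse the link at free-flow speed, then wait in a vertical queue at the link exit, which discharges at the link's capacity. This is a minimal illustration with assumed parameter names (`free_flow_steps`, `capacity`), not the authors' formulation.

```python
def simulate_link(inflow, free_flow_steps, capacity):
    """Point-queue (vertical queue) link model sketch.

    inflow[t]       : vehicles entering the link at step t
    free_flow_steps : steps needed to traverse the link at free-flow speed
    capacity        : max vehicles that can leave the queue per step
    Returns the outflow profile and the queue length over time.
    """
    T = len(inflow)
    queue = 0.0
    outflow, queue_len = [], []
    for t in range(T + free_flow_steps):
        # vehicles that entered free_flow_steps ago now reach the exit queue
        i = t - free_flow_steps
        arriving = inflow[i] if 0 <= i < T else 0.0
        queue += arriving
        served = min(queue, capacity)   # discharge limited by capacity
        queue -= served
        outflow.append(served)
        queue_len.append(queue)
    return outflow, queue_len

# A demand burst above capacity builds a queue that drains afterwards,
# which is what produces the flow/travel-time relationship the paper studies.
out, q = simulate_link([5, 5, 5, 0, 0, 0], free_flow_steps=2, capacity=3)
```

Because outflow is capped at capacity while the queue persists, travel time on the link rises with congestion even though the model never tracks vehicle positions.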

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.2
    • /
    • pp.127-142
    • /
    • 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures such as convolutional neural networks, deep belief networks and recurrent neural networks. They have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among those architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised learning models have gained more popularity than unsupervised learning models such as deep belief networks, because supervised learning models have shown successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture which is particularly well adapted to classifying images. Using this architecture makes convolutional networks fast to train; this, in turn, helps us train deep, multi-layer networks, which are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks use three basic ideas: local receptive fields, shared weights, and pooling.
By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that we use the same weights and bias for each of the local receptive fields. This means that all the neurons in the hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers. Pooling layers are usually used immediately after convolutional layers; they simplify the information in the output from the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks several years ago, but thanks to progress in GPUs and algorithm enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks or RNNs. A recurrent neural network is a class of artificial neural network where connections between units form a directed cycle. This creates an internal state of the network which allows it to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks. The reason is the unstable gradient problem, i.e. vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through the layers, which makes learning in the early layers extremely slow. The problem actually gets worse in RNNs, since gradients are propagated backward not just through layers but also through time. If the network runs for a long time, the gradient can become extremely unstable and hard to learn from.
It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
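The backpropagation-plus-gradient-descent loop described in this abstract can be illustrated with a tiny 2-2-1 sigmoid network trained on logical AND. This is a toy sketch (network size, learning rate and task are all chosen for illustration), not any particular paper's model.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_and_gate(epochs=5000, lr=0.5, seed=1):
    """Train a 2-2-1 sigmoid network on logical AND with plain
    backpropagation ("backward propagation of errors") + gradient descent."""
    rng = random.Random(seed)
    # hidden layer: two neurons, each with two input weights and a bias
    w1 = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
    # output neuron: two hidden weights and a bias
    w2 = [rng.uniform(-1, 1) for _ in range(3)]
    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    for _ in range(epochs):
        for x, t in data:
            # forward pass
            h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w1]
            y = sigmoid(w2[0] * h[0] + w2[1] * h[1] + w2[2])
            # backward pass: the error gradient flows from output to input
            dy = (y - t) * y * (1 - y)                      # output delta
            dh = [dy * w2[j] * h[j] * (1 - h[j]) for j in range(2)]
            # gradient descent: move each weight against its gradient
            w2 = [w2[0] - lr * dy * h[0],
                  w2[1] - lr * dy * h[1],
                  w2[2] - lr * dy]
            for j in range(2):
                w1[j] = [w1[j][0] - lr * dh[j] * x[0],
                         w1[j][1] - lr * dh[j] * x[1],
                         w1[j][2] - lr * dh[j]]
    def predict(x):
        h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w1]
        return sigmoid(w2[0] * h[0] + w2[1] * h[1] + w2[2])
    return predict

predict = train_and_gate()
```

The hidden-layer deltas `dh` are exactly where the vanishing-gradient problem noted above originates: each extra layer multiplies the delta by another weight and another `h*(1-h)` factor, which is at most 0.25 for a sigmoid.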

A Switch Behavior Supporting Effective ABR Traffic Control for Remote Destinations in a Multiple Connection (다중점 연결의 원거리 수신원에 대한 효율적이 ABR 트래픽 제어를 제공하는 스위치 동작 방식)

  • Lee, Sook-Young;Lee, Mee-Jeong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.5 no.6
    • /
    • pp.1610-1619
    • /
    • 1998
  • The ABR service class provides feedback-based traffic control to transport bursty data traffic efficiently. Feedback-based congestion control was first studied for unicast connections. Recently, several congestion control algorithms for multicast connections have also been proposed as the number of ABR applications requiring multicast increases. With feedback-based congestion control, the effectiveness of a traffic control scheme diminishes as propagation delay increases. Especially for a multicast connection, a remote destination may suffer unfair service compared to a local destination due to the delayed feedback. Amelioration of the disadvantages caused by feedback delay is therefore more important for remote destinations in multicast connections. This paper proposes a new switch behavior to provide effective feedback-based traffic control for remote destinations. The proposed switches adjust the service rate dynamically in accordance with the state of the downstream; that is, congestion at the destination is immediately controlled by the nearest upstream switch, before the source ramps down the transmission rate of the connection. The proposed switch has an implementation overhead in that it keeps a separate buffer for each VC in order to adjust the service rate in accordance with the backward RM cells of each VC. The buffer requirement is also increased at intermediate switches. Simulation results show that the proposed switch reduces the cell loss rate at both the local and the remote destinations and also ameliorates the unfairness between the two destinations.

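The per-VC rate adjustment described in this abstract can be sketched as a small policy function run at the nearest upstream switch when a backward RM cell arrives. The AIMD-style ramp rule and all parameter names here are assumptions for illustration; the abstract does not specify the exact rule.

```python
def adjust_service_rate(current_rate, congested, min_rate=1.0, max_rate=100.0,
                        decrease_factor=0.5, increase_step=5.0):
    """Hypothetical per-VC service-rate update at the nearest upstream switch.

    congested : congestion indication carried by a backward RM cell for this VC.
    On congestion the switch ramps the rate down multiplicatively, so the
    downstream is relieved immediately, without waiting for the source
    (far away, behind the feedback delay) to slow down; otherwise the rate
    is ramped back up additively.  Rates are clamped to [min_rate, max_rate].
    """
    if congested:
        rate = current_rate * decrease_factor
    else:
        rate = current_rate + increase_step
    return max(min_rate, min(rate, max_rate))
```

Keeping this state per VC is precisely the implementation overhead the paper points out: each VC needs its own buffer and rate so that one congested destination does not throttle the others.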

Abe Mituie's Movements in Korean and Japanese Buddhism (아베 미츠이에(阿部充家)의 한(韓)·일(日) 불교(佛敎) 관련(關聯) 활동(活動) -신자료(新資料) 「중앙학림학생제군(中央學林學生諸君)」 (1915), 「조선불교(朝鮮佛敎)の금석(今昔)」(1918)의 공개(公開)와 더불어-)

  • Shim, Won-Sup
    • The Journal of Korean-Japanese National Studies
    • /
    • no.21
    • /
    • pp.1-43
    • /
    • 2011
  • This article introduces Abe Mituie's activities related to Korean and Japanese Buddhism and two newly discovered materials. He worked as a mastermind of Japanese cultural rule over Joseon Korea while holding various positions such as president of the KyeongSung Il Bo, vice president of the Kokmin Newspaper and director of the Central Joseon Association. Abe was responsible for Enkak Temple, the head temple of the Japanese Rinzai sect, and was one of the lay followers of Syak Soen, who worked for the spread of modern Japanese Buddhism to Europe and America. He was a respected Buddhist layman, to the extent that he taught Zen Buddhism to young Buddhist monks in Japan. After he took charge of the Kyeong Sung Il Bo, he was also very active in movements related to Joseon Buddhism, to the extent that he was found to be deeply involved in Joseon Buddhist circles. On the other hand, he concluded Joseon culture to be 'devastated.' He asserted that it was necessary to develop spiritual culture and revive Buddhism in order to resolve the devastation in Joseon. In addition, he thought that Joseon Buddhism had been ruined by the misgovernment of the Joseon Dynasty, but had a tradition as great as that of Japanese Buddhism. Therefore, in his opinion, there was a need to do research on Joseon Buddhism and find some way out of the contemporary difficulties. In order to save the situation, he made efforts to protect and revive Joseon Buddhism while paying continuous visits to Joseon Buddhist temples, supporting the publication of Buddhist canons and proposing a regular meeting of 'The Invitation of 30 Head Temples.' From his visit to Youngju Temple and his consistent relationship with Kang Daeryeon, it can be assumed that he was involved in reorganizing the power structure of Joseon Buddhism and establishing various institutions. He emphasized the strict adherence of individuals and communities to rules in his lecture for students at Jung Ang Hak Rim.
It was a way to revive Joseon Buddhism by creating a new social image of it. He continued to work for the restoration of Joseon Buddhism even after he retired from the Kyeong Sung Il Bo and returned to Japan. He introduced the originality of Joseon Buddhist history to Japan and sent Japanese monks to Korea to do research and contribute to exchange between Korean and Japanese Buddhism. All things taken together, it is evident that Abe Mituie regarded Joseon as backward or stagnant from an evolutionist or orientalist perspective, and was a Japanese elite who believed that it was just for Japan to control Korea. However, he differed from other Japanese elites in that he did not consider Joseon Buddhism merely as an object of propagation. He thought that Joseon Buddhism possessed its own great tradition and culture, but had been ruined by the misadministration of the Joseon Royal House. Therefore, in his opinion, Joseon Buddhism should be recovered by means of support, and its revival would lead to the restoration of Joseon culture as a whole, which would be realized by Japanese rule over Korea and Japanese elites' generous assistance.

A 2×2 MIMO Spatial Multiplexing 5G Signal Reception in a 500 km/h High-Speed Vehicle using an Augmented Channel Matrix Generated by a Delay and Doppler Profiler

  • Suguru Kuniyoshi;Rie Saotome;Shiho Oshiro;Tomohisa Wada
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.10
    • /
    • pp.1-10
    • /
    • 2023
  • This paper proposes a method to extend Inter-Carrier Interference (ICI) canceling Orthogonal Frequency Division Multiplexing (OFDM) receivers for 5G mobile systems to spatial multiplexing 2×2 MIMO (Multiple Input Multiple Output) systems, to support high-speed ground transportation services by linear motor cars traveling at 500 km/h. In Japan, linear-motor high-speed ground transportation service is scheduled to begin in 2027. To expand the coverage area of base stations, 5G mobile systems serving high-speed trains will have multiple base station antennas transmitting the same downlink (DL) signal, forming an expanded cell along the train rails. For 5G terminals in a fast-moving train, the forward and backward antenna signals are Doppler-shifted in opposite directions, so the receiver in the train may have trouble estimating the exact channel transfer function (CTF) for demodulation. A receiver in such a high-speed train sees a transmission channel composed of multiple Doppler-shifted propagation paths, and the loss of sub-carrier orthogonality due to the Doppler-spread channel causes ICI. The ICI canceller is realized in the following three steps. First, using the Demodulation Reference Symbol (DMRS) pilot signals, it estimates three parameters of each multi-path component: attenuation, relative delay, and Doppler shift. Secondly, based on these sets of three parameters, the channel transfer function (CTF) from sender sub-carrier number n to receiver sub-carrier number l is generated; when n≠l, the CTF corresponds to an ICI factor. Thirdly, since the ICI factors are obtained, ICI canceling can be realized by applying the ICI reverse operation with a multi-tap equalizer. ICI canceling performance has been simulated assuming severe channel conditions such as 500 km/h and 8-path reverse Doppler shift, for QPSK, 16QAM, 64QAM and 256QAM modulations.
In particular, for the 2×2 MIMO QPSK and 16QAM modulation schemes, BER (Bit Error Rate) improvement was observed when the number of taps in the multi-tap equalizer was set to 31 or more, at a moving speed of 500 km/h in an 8-path reverse Doppler shift environment.
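The second and third steps above, building a CTF from sub-carrier n to sub-carrier l whose off-diagonal entries are ICI factors, then reversing the ICI, can be illustrated with a toy matrix version. This uses an assumed Doppler-spread OFDM channel model and direct matrix inversion in place of the paper's DMRS-based estimation and multi-tap equalizer.

```python
import cmath

def ctf_matrix(paths, N):
    """Build the N x N channel transfer function H[l][n] over multiple
    Doppler-shifted paths (illustrative ICI model, not the paper's exact
    formulation).  Each path is (gain, delay, eps), where delay is in
    samples and eps is the Doppler shift normalized to the sub-carrier
    spacing.  Entries with l != n are the ICI factors.
    """
    H = [[0j] * N for _ in range(N)]
    for gain, delay, eps in paths:
        for l in range(N):
            for n in range(N):
                # Doppler leakage from sub-carrier n onto sub-carrier l
                leak = sum(cmath.exp(2j * cmath.pi * k * (n + eps - l) / N)
                           for k in range(N)) / N
                H[l][n] += gain * cmath.exp(-2j * cmath.pi * n * delay / N) * leak
    return H

def equalize(H, rx):
    """Recover the sent symbols by solving H x = rx (Gaussian elimination
    with partial pivoting), standing in for the multi-tap ICI reversal."""
    N = len(H)
    A = [row[:] + [rx[i]] for i, row in enumerate(H)]
    for col in range(N):
        piv = max(range(col, N), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, N):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    x = [0j] * N
    for i in reversed(range(N)):
        x[i] = (A[i][N] - sum(A[i][j] * x[j] for j in range(i + 1, N))) / A[i][i]
    return x

N = 8
# two paths Doppler-shifted in opposite directions (forward/backward antennas)
paths = [(1.0, 0, 0.2), (0.5, 2, -0.2)]
H = ctf_matrix(paths, N)
qpsk = [(1 + 1j), (1 - 1j), (-1 + 1j), (-1 - 1j)]
tx = [qpsk[n % 4] / abs(qpsk[n % 4]) for n in range(N)]
rx = [sum(H[l][n] * tx[n] for n in range(N)) for l in range(N)]
eq = equalize(H, rx)
```

With the two oppositely Doppler-shifted paths, the off-diagonal entries of H are non-zero, which is exactly the ICI the paper's multi-tap equalizer must reverse; here the full matrix solve plays that role for a noiseless toy channel.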