• Title/Summary/Keyword: Bi-directional Information

Motion Linearity-based Frame Rate Up Conversion Method (선형 움직임 기반 프레임률 향상 기법)

  • Kim, Donghyung
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.7
    • /
    • pp.734-740
    • /
    • 2017
  • A frame rate up-conversion scheme is needed when moving pictures with a low frame rate are played on devices with a high frame rate. Frame rate up-conversion methods interpolate a new frame from two consecutive frames of the original source, and they can be divided into frame repetition and motion estimation-based frame interpolation. Frame repetition has very low complexity but can yield jerky artifacts. Interpolation based on motion estimation and compensation can in turn be divided into pixel-based and block-based methods. In pixel-based interpolation, the interpolated frame is classified into four areas, each interpolated with a different method. Block-based interpolation has relatively low complexity but can yield blocking artifacts. The proposed method is a frame rate up-conversion method based on block motion estimation and compensation that exploits the linearity of motion; it uses two previous frames and one next frame for motion estimation and compensation. Simulation results show that the proposed algorithm effectively enhances objective quality, particularly for high-resolution images, and achieves similar or higher subjective quality than other conventional approaches.
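A minimal sketch of block-based motion-compensated frame interpolation, to illustrate the general idea behind such up-conversion methods. It is not the paper's algorithm: it uses only one previous and one next frame (the paper uses two previous frames and one next frame), exhaustive SAD block matching, and a halfway placement that assumes linear motion. Block size, search range, and the hole handling are assumptions.

```python
# Illustrative sketch (not the paper's method): block motion estimation between
# two frames and placement of the matched block halfway along the motion vector.
import numpy as np

def block_motion_interpolate(prev, nxt, block=16, search=8):
    """Interpolate a middle frame from two grayscale frames of shape (H, W)."""
    h, w = prev.shape
    mid = np.zeros_like(prev, dtype=np.float64)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = prev[by:by+block, bx:bx+block].astype(np.float64)
            best_sad, best_dy, best_dx = np.inf, 0, 0
            # Exhaustive search in the next frame within +/- `search` pixels.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    cand = nxt[y:y+block, x:x+block].astype(np.float64)
                    sad = np.abs(ref - cand).sum()
                    if sad < best_sad:
                        best_sad, best_dy, best_dx = sad, dy, dx
            # Assume linear motion: the block lies halfway along the vector in
            # the interpolated frame; blend both motion-compensated blocks.
            hy = min(max(by + best_dy // 2, 0), h - block)
            hx = min(max(bx + best_dx // 2, 0), w - block)
            tgt = nxt[by+best_dy:by+best_dy+block,
                      bx+best_dx:bx+best_dx+block].astype(np.float64)
            mid[hy:hy+block, hx:hx+block] = 0.5 * ref + 0.5 * tgt
            # Note: uncovered areas remain zero in this sketch; real methods fill holes.
    return mid
```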

T-Commerce Sale Prediction Using Deep Learning and Statistical Model (딥러닝과 통계 모델을 이용한 T-커머스 매출 예측)

  • Kim, Injung;Na, Kihyun;Yang, Sohee;Jang, Jaemin;Kim, Yunjong;Shin, Wonyoung;Kim, Deokjung
    • Journal of KIISE
    • /
    • v.44 no.8
    • /
    • pp.803-812
    • /
    • 2017
  • T-commerce is a technology-fusion service in which users make purchases through data broadcasting technology on bi-directional digital TVs. To achieve the best revenue in an environment limited in the number of channels and the variety of sales goods, broadcast programs must be organized to maximize the expected sales, considering the selling power of each product at each time slot. To this end, this paper proposes a method to predict the sales of a product when it is assigned to a given time slot. The proposed method predicts the sales of a product at a time slot given the week-of-year and the weather of the target day. Additionally, it is combined with a statistical prediction model applying SVD (Singular Value Decomposition) to mitigate the sparsity problem caused by bias in the sales records. In experiments on the sales data of W-shopping, a T-commerce company, the proposed method showed an NMAE (Normalized Mean Absolute Error) of 0.12 between the prediction and the actual sales, which confirms its effectiveness. The proposed method has been applied in practice to the T-commerce system of W-shopping and is used for broadcasting organization.
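A minimal sketch of the two ingredients named in the abstract, not the paper's model: smoothing a sparse product-by-time-slot sales matrix with a low-rank SVD, and computing NMAE (here defined as mean absolute error divided by mean actual sales, one common convention). The toy data, the rank, and the matrix layout are assumptions.

```python
# Illustrative sketch: low-rank SVD smoothing of a sparse sales matrix plus the
# NMAE metric quoted in the abstract. Data and rank are hypothetical.
import numpy as np

def svd_smooth(sales, rank=2):
    """Approximate a (products x time slots) sales matrix with a low-rank SVD,
    which fills in poorly observed cells from the dominant global patterns."""
    u, s, vt = np.linalg.svd(sales, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank, :]

def nmae(pred, actual):
    """Normalized Mean Absolute Error: mean |error| divided by mean actual."""
    pred, actual = np.asarray(pred, float), np.asarray(actual, float)
    return np.abs(pred - actual).mean() / actual.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sales = rng.poisson(100, size=(20, 24)).astype(float)  # toy sales records
    smoothed = svd_smooth(sales, rank=3)
    print("NMAE of low-rank reconstruction:", round(nmae(smoothed, sales), 3))
```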

Development of Satellite and Terrestrial Convergence Technology for Internet Services on High-Speed Trains (Service Scenarios) (고속열차대상의 위성인터넷 서비스 제공을 위한 위성무선연동 기술(서비스 시나리오 관점))

  • Shin, Min-Su;Chang, Dae-Ig;Lee, Ho-Jin
    • Journal of Satellite, Information and Communications
    • /
    • v.2 no.2
    • /
    • pp.69-74
    • /
    • 2007
  • Recently, demand for satellite broadband mobile communication services has increased. To provide these services, mobile satellite communication systems for passengers and crews on high-speed vehicles have been developed over the last several years, especially in Europe and North America. However, most of these systems provide only several hundred kbps of transmission rate, which is not enough to offer satellite internet service to group users such as the passengers on a high-speed train. Moreover, their service availability is rather low because they have no countermeasure for the non-line-of-sight (N-LOS) environment that occurs frequently along the railway. This paper describes a mobile broadband satellite communication system, currently under development, to provide high data-rate internet services to high-speed trains. The system applies inter-networking scenarios for both satellite/terrestrial networks and satellite/gap-filler networks so that it can provide seamless service even in the train operating environment, and these inter-networking schemes result in high service availability. The system also includes countermeasures, such as upper-layer FEC and antenna diversity, against the short fading that occurs periodically along the railway due to the power supply structures, so that it can provide high-speed internet service. Mobile DVB-S2 technology, now being standardized in the DVB, is used for the forward-link transmission and DVB-RCS for the return link.
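A toy illustration of the antenna diversity idea mentioned in the abstract, not the paper's scheme: selection combining between two antennas hit by the same periodic short fade at different times. All SNR values, fade depths, and the outage threshold are hypothetical.

```python
# Illustrative sketch: selection-combining antenna diversity against short
# periodic fades such as those caused by trackside power-supply structures.
import numpy as np

def selection_diversity(snr_a_db, snr_b_db):
    """Per-sample selection combining: keep whichever antenna sees the better SNR."""
    return np.maximum(snr_a_db, snr_b_db)

if __name__ == "__main__":
    t = np.arange(0, 10, 0.01)                      # seconds
    base = 12.0                                     # clear-sky SNR in dB (assumed)
    # Two antennas see the same periodic fade but offset in time, e.g. because
    # they are mounted at different positions on the train roof.
    fade_a = np.where((t % 2.0) < 0.2, 15.0, 0.0)   # 0.2 s deep fade every 2 s
    fade_b = np.where(((t + 1.0) % 2.0) < 0.2, 15.0, 0.0)
    snr_a, snr_b = base - fade_a, base - fade_b
    combined = selection_diversity(snr_a, snr_b)
    outage = lambda snr: float((snr < 5.0).mean())  # fraction below a 5 dB threshold
    print("outage single antenna:", outage(snr_a), " with diversity:", outage(combined))
```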

Development of Multi-agent Based Deadlock-Free AGV Simulator for Material Handling System (자재 취급 시스템을 위한 다중 에이전트 기반의 교착상태에 자유로운 AGV 시뮬레이터 개발)

  • Lee, Jae-Yong;Seo, Yoon-Ho
    • Journal of the Korea Society for Simulation
    • /
    • v.17 no.2
    • /
    • pp.91-103
    • /
    • 2008
  • In order to simulate the behavior of automated manufacturing systems, the performance of material handling systems should be measured dynamically. Multi-agent technology is well suited to developing simulators for distributed and intelligent manufacturing systems. A multi-agent system is composed of one coordination agent and multiple application agents. Issues in an AGVS simulator can be classified into set-up and operating problems: decisions on the number of vehicles, bi- or uni-directional guide paths, etc. fall into the set-up category, while the deadlock tree algorithm and conflict resolution belong to the operating category. In this paper, a multi-agent based deadlock-free simulator for an automated guided vehicle system (AGVS) is proposed through the use of multi-agent technologies and the development of a deadlock-free algorithm. In the proposed simulator, the well-known Floyd algorithm is used to create the AGVS guide paths along which the AGVs move, and the AGVs avoid vehicle conflicts and deadlock using a check-path algorithm. Moving vehicle agents are controlled in real time by the coordination agent, and AGV positions are calculated dynamically based on the concept of a rolling time horizon. The simulator receives and presents the operating information of each vehicle in an AGVS Gantt chart. The performance of the proposed algorithm and the developed multi-agent based simulator is validated through a set of experiments.
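A short sketch of the Floyd (Floyd-Warshall) all-pairs shortest-path algorithm named in the abstract, as it might be used to pre-compute AGV travel routes on a guide-path graph. The example graph, weights, and route-reconstruction helper are assumptions, not taken from the paper.

```python
# Illustrative sketch: Floyd-Warshall shortest paths on a toy AGV guide-path graph.
import math

def floyd_warshall(n, edges):
    """edges: iterable of (u, v, weight) directed arcs on nodes 0..n-1.
    Returns distance and next-hop tables for path reconstruction."""
    dist = [[math.inf] * n for _ in range(n)]
    nxt = [[None] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0.0
    for u, v, w in edges:
        dist[u][v] = w
        nxt[u][v] = v
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = nxt[i][k]
    return dist, nxt

def route(nxt, src, dst):
    """Reconstruct the node sequence an AGV would follow from src to dst."""
    if nxt[src][dst] is None:
        return []
    path = [src]
    while src != dst:
        src = nxt[src][dst]
        path.append(src)
    return path

if __name__ == "__main__":
    # Toy uni-directional guide path: 0 -> 1 -> 2 -> 3 -> 0 plus a shortcut 1 -> 3.
    edges = [(0, 1, 4), (1, 2, 3), (2, 3, 2), (3, 0, 5), (1, 3, 7)]
    dist, nxt = floyd_warshall(4, edges)
    print("shortest 0->3:", dist[0][3], "via", route(nxt, 0, 3))
```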

Enhanced PMIPv6 Route Optimization Handover using PFMIPv6 in Mobile Cloud Environment (모바일 클라우드 환경에서 PFMIPv6를 이용한 향상된 PMIPv6 경로 최적화 핸드오버 기법)

  • Na, Je-Gyun;Seo, Dae-Hee;Nah, Jae-Hoon;Mun, Young-Song
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.47 no.12
    • /
    • pp.17-23
    • /
    • 2010
  • In mobile cloud computing, the mobile node should be able to request and receive services while remaining connected. In PMIPv6, all packets sent by mobile nodes or correspondent nodes are transferred through the local mobility anchor (LMA). This unnecessary detour results in high delivery latency and significant processing cost. Several PMIPv6 route optimization schemes have been proposed to solve this issue; however, they suffer from high signaling costs and handover latency when determining the optimized path. We propose a route optimization handover scheme that adopts the prediction algorithm of PFMIPv6. In the proposed scheme, the new mobile access gateway (MAG) establishes a bi-directional tunnel with the correspondent node's MAG using a context message when the mobile node's handover is imminent. This tunnel can eliminate the need for a separate route optimization procedure. Hence, the proposed scheme can reduce the signaling cost compared with other conventional schemes. An analytical performance evaluation is performed to show the effectiveness of the proposed scheme, and the result shows that our scheme is more effective than the other schemes.
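A toy cost comparison illustrating why the direct MAG-to-MAG tunnel avoids the LMA detour described above. This is not the paper's analytical model; the per-segment costs are hypothetical placeholders (e.g. hop counts or normalized delays).

```python
# Illustrative toy cost model: packet delivery through the LMA versus over a
# direct bi-directional tunnel between the two MAGs after route optimization.
def delivery_cost_via_lma(c_mn_mag, c_mag_lma, c_lma_cmag, c_cmag_cn):
    """MN -> MAG -> LMA -> correspondent's MAG -> CN."""
    return c_mn_mag + c_mag_lma + c_lma_cmag + c_cmag_cn

def delivery_cost_route_optimized(c_mn_mag, c_mag_cmag, c_cmag_cn):
    """MN -> MAG -> (direct tunnel) -> correspondent's MAG -> CN."""
    return c_mn_mag + c_mag_cmag + c_cmag_cn

if __name__ == "__main__":
    # Assumed unit costs per segment; only the relative comparison matters here.
    base = delivery_cost_via_lma(1, 4, 4, 1)
    optimized = delivery_cost_route_optimized(1, 3, 1)
    print("via LMA:", base, " route-optimized tunnel:", optimized)
```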

The Study on Control Algorithm of Elevator EDLC Emergency Power Converter (승강기 EDLC 비상전원 전력변환장치 제어 알고리즘 연구)

  • Lee, Sang-min;Kim, IL-Song;Kim, Nam
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.7 no.6
    • /
    • pp.709-718
    • /
    • 2017
  • Installation of an elevator ARD (Automatic Rescue Device) system has recently been mandated by law in order to safely rescue passengers during a power failure. An ARD system consists of an energy storage device, a power converter, and control systems. An EDLC (Electric Double Layer Capacitor) is used as the energy storage device because it supports rapid charge and discharge. The power conditioning system (PCS) consists of a bi-directional converter, a 3-phase converter, and a control system. Dead-beat control is adopted in most systems; however, it requires complex mathematical calculations, so high-performance microprocessors are mandatory, which can cause high manufacturing cost. In this paper, a new average current mode control method with a simple structure is presented. The control algorithm is applied to a single-phase system and then extended to a three-phase system to meet the system requirements. Mathematical modeling using the average modeling method is presented and analyzed by PSIM computer simulation to verify the validity of the proposed control method.
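A minimal sketch of average current mode control, the technique named in the abstract, applied to an averaged model of a bi-directional half-bridge converter between a DC bus and an EDLC bank. It is not the paper's controller or converter: the voltages, inductance, PI gains, and duty feed-forward are assumptions chosen only to make the toy loop converge.

```python
# Illustrative sketch: discrete PI average current mode control on an averaged
# bi-directional converter model. All component values and gains are hypothetical.
V_BUS, V_EDLC = 400.0, 200.0   # DC-link and EDLC voltages [V] (assumed)
L, DT = 1e-3, 1e-5             # inductance [H], simulation step [s]
KP, KI = 0.1, 200.0            # PI gains on the current error (assumed)

def simulate(i_ref=10.0, steps=2000):
    i_l, integ = 0.0, 0.0
    for _ in range(steps):
        err = i_ref - i_l
        integ += err * DT
        # Duty cycle = feed-forward operating point + PI correction, clamped to [0, 1].
        d = V_EDLC / V_BUS + KP * err + KI * integ
        d = min(max(d, 0.0), 1.0)
        # Averaged inductor dynamics: L di/dt = d*V_BUS - V_EDLC.
        i_l += (d * V_BUS - V_EDLC) * DT / L
    return i_l

if __name__ == "__main__":
    print("inductor current after 20 ms: %.2f A (reference 10 A)" % simulate())
```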

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.1-25
    • /
    • 2020
  • In this paper, we suggest an application system architecture that provides an accurate, fast, and efficient automatic gasometer reading function. The system captures a gasometer image with a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount by selective optical character recognition based on deep learning. In general, an image contains many types of characters, and optical character recognition extracts all of them; however, some applications need to ignore characters that are not of interest and focus only on specific types. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount from gasometer images in order to bill users; character strings that are not of interest, such as the device type, manufacturer, manufacturing date, and specification, are not valuable to the application. Thus, the application has to analyze only the region of interest and specific types of characters to extract the valuable information. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the region of interest for selective character information extraction. We built three neural networks for the application system: the first is a convolutional neural network that detects the regions of interest containing the gas usage amount and device ID character strings, the second is another convolutional neural network that transforms the spatial information of a region of interest into a sequence of feature vectors, and the third is a bi-directional long short-term memory network that converts the sequential features into character strings by time-series mapping from feature vectors to characters. In this research, the character strings of interest are the device ID, which consists of 12 Arabic numeral characters, and the gas usage amount, which consists of 4 to 5 Arabic numeral characters. All system components are implemented in the Amazon Web Services (AWS) cloud with Intel Xeon E5-2686 v4 CPUs and an NVIDIA TESLA V100 GPU. The system architecture adopts a master-slave processing structure for efficient and fast parallel processing, coping with about 700,000 requests per day. The mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes each reading request from a mobile device into an input queue with a FIFO (First In First Out) structure. The slave process consists of the three deep neural networks that perform character recognition and runs on the NVIDIA GPU module. The slave process continuously polls the input queue for recognition requests; if there are requests from the master process in the input queue, the slave process converts the queued image into the device ID character string, the gas usage amount character string, and the position information of the strings, returns this information to an output queue, and switches back to idle mode to poll the input queue. The master process gets the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three deep neural networks: 22,985 images were used for training and validation, and 4,135 images were used for testing. We randomly split the 22,985 images at an 8:2 ratio into training and validation sets for each training epoch. The 4,135 test images were categorized into 5 types (normal, noise, reflex, scale, and slant): normal means clean images, noise means images with noise, reflex means images with light reflection in the gasometer region, scale means images with small object size due to long-distance capture, and slant means images that are not horizontally flat. The final character string recognition accuracies for the device ID and gas usage amount on normal data are 0.960 and 0.864, respectively.
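A minimal sketch of the master-slave FIFO queue structure described above: a master side enqueues reading requests and a worker (standing in for the GPU slave process) polls the input queue and returns results on an output queue. The recognizer is a stub; field names and the threading layout are assumptions, not the production implementation.

```python
# Illustrative sketch: master-slave processing with FIFO input/output queues.
import queue
import threading

input_q: "queue.Queue[dict]" = queue.Queue()   # FIFO of recognition requests
output_q: "queue.Queue[dict]" = queue.Queue()  # results returned to the master

def recognize(image_bytes: bytes) -> dict:
    """Stand-in for the CNN detector + CRNN/BiLSTM recognizer on the GPU."""
    return {"device_id": "000000000000", "usage": "1234"}  # dummy result

def slave_worker():
    while True:
        req = input_q.get()          # blocks until a request arrives
        if req is None:              # sentinel to shut the worker down
            break
        result = recognize(req["image"])
        output_q.put({"request_id": req["request_id"], **result})

if __name__ == "__main__":
    worker = threading.Thread(target=slave_worker, daemon=True)
    worker.start()
    # Master side: enqueue a request coming from a mobile device.
    input_q.put({"request_id": 1, "image": b"...jpeg bytes..."})
    print(output_q.get())            # deliver the result back to the device
    input_q.put(None)
```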

The Study of Land Surface Change Detection Using Long-Term SPOT/VEGETATION (장기간 SPOT/VEGETATION 정규화 식생지수를 이용한 지면 변화 탐지 개선에 관한 연구)

  • Yeom, Jong-Min;Han, Kyung-Soo;Kim, In-Hwan
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.13 no.4
    • /
    • pp.111-124
    • /
    • 2010
  • Monitoring land surface change is considered an important research field, since the related parameters are tied to land use, climate change, meteorological study, agricultural planning, surface energy balance, and the surface environmental system. For change detection, many different methods have been presented to provide more detailed information, using tools ranging from ground-based measurement to satellite multi-spectral sensors. Recently, high-resolution satellite data has been considered the most efficient way to monitor extensive land environmental systems, especially where higher spatial and temporal resolution is required. In this study, we use satellites with two different spatial resolutions: SPOT/VEGETATION, with 1 km spatial resolution, to detect coarse-resolution area change and to determine an objective threshold, and Landsat, with high resolution, to identify detailed land environmental change. Owing to their different spatial resolutions, the two satellites show different observation characteristics such as repeat cycle and global coverage; by combining the two, we can detect land surface change from medium to high resolution. The K-means clustering algorithm is applied to detect changed areas between two images of different dates. When solar spectral bands are used, complicated surface reflectance scattering characteristics make surface change detection difficult and can lead to serious problems when interpreting surface characteristics: even though the intrinsic surface reflectance is constant, the observed value can change according to the relative sun and sensor observation geometry. To reduce these effects, this study uses a long-term Normalized Difference Vegetation Index (NDVI), computed from SPOT/VEGETATION solar spectral channels after atmospheric and bi-directional correction, to provide an objective threshold value for detecting land surface change, since NDVI is less sensitive to solar geometry than the individual solar channels. The surface change detection based on long-term NDVI shows improved results compared with using Landsat alone.
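A minimal sketch of the two building blocks referenced above: computing NDVI from red and near-infrared reflectance, and flagging change where the NDVI difference between two dates exceeds a threshold. The threshold value and the toy arrays are assumptions, not the study's calibrated values.

```python
# Illustrative sketch: NDVI computation and a simple NDVI-difference change mask.
import numpy as np

def ndvi(red, nir):
    """NDVI = (NIR - RED) / (NIR + RED), with a small epsilon to avoid 0/0."""
    red, nir = np.asarray(red, float), np.asarray(nir, float)
    return (nir - red) / (nir + red + 1e-9)

def change_mask(red_t1, nir_t1, red_t2, nir_t2, threshold=0.15):
    """Mark pixels whose NDVI changed by more than `threshold` between dates."""
    return np.abs(ndvi(red_t2, nir_t2) - ndvi(red_t1, nir_t1)) > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    red1, nir1 = rng.uniform(0.05, 0.2, (4, 4)), rng.uniform(0.3, 0.6, (4, 4))
    red2, nir2 = red1.copy(), nir1.copy()
    nir2[0, 0] = 0.1   # simulate vegetation loss at one pixel
    print(change_mask(red1, nir1, red2, nir2).astype(int))
```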

A Reflectance Normalization Via BRDF Model for the Korean Vegetation using MODIS 250m Data (한반도 식생에 대한 MODIS 250m 자료의 BRDF 효과에 대한 반사도 정규화)

  • Yeom, Jong-Min;Han, Kyung-Soo;Kim, Young-Seup
    • Korean Journal of Remote Sensing
    • /
    • v.21 no.6
    • /
    • pp.445-456
    • /
    • 2005
  • Land surface parameters should be determined with sufficient accuracy, because they play an important role in climate change near the ground. Since surface reflectance exhibits strong anisotropy, off-nadir viewing results in a strong dependency of the observations on the Sun-target-sensor geometry, and these angular effects contribute random noise to the data. The principal objective of this study is to provide a database of accurate surface reflectance, with the angular effects removed, from MODIS 250m reflective channel data over Korea. The MODIS (Moderate Resolution Imaging Spectroradiometer) sensor provides visible and near-infrared channel reflectance at 250m resolution on a daily basis. The successive analytic processing steps were first performed on a per-pixel basis to remove cloudy pixels. To correct geometric distortion, nearest-neighbor resampling was performed using a 2nd-order polynomial obtained from the geolocation information of the MODIS data set. To correct the surface anisotropy effects, this paper applies a semi-empirical kernel-driven Bi-directional Reflectance Distribution Function (BRDF) model. The algorithm inverts the kernel-driven model against the angular components, namely the viewing zenith angle, solar zenith angle, viewing azimuth angle, and solar azimuth angle, using the reflectance observed by the satellite. First, sets of observations over a 31-day period are assembled to fit the BRDF model. In the next step, nadir-view reflectance normalization is carried out by modifying the angular components separated by the BRDF model for each spectral band and each pixel. The modeled reflectance values show good agreement with the measured reflectance values, with an overall RMSE (Root Mean Square Error) of about 0.01 (maximum 0.03). Finally, we provide a normalized surface reflectance database consisting of 36 images over Korea for 2001.
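A minimal sketch of the kernel-driven inversion idea: reflectance is modeled as R = f_iso + f_vol*K_vol + f_geo*K_geo, the three coefficients are fitted by least squares over a multi-day observation window, and the fitted model is then evaluated at nadir-view kernel values to normalize the reflectance. Computing the Ross-Thick/Li-Sparse kernels from the angular components is omitted; the kernel values below are hypothetical inputs, and the toy coefficients are assumptions.

```python
# Illustrative sketch: least-squares inversion of a linear kernel-driven BRDF
# model and nadir-view normalization for a single pixel and band.
import numpy as np

def invert_brdf(refl, k_vol, k_geo):
    """Least-squares fit of [f_iso, f_vol, f_geo] from N observations."""
    A = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
    coeffs, *_ = np.linalg.lstsq(A, refl, rcond=None)
    return coeffs

def normalize_to_nadir(coeffs, k_vol_nadir, k_geo_nadir):
    """Evaluate the fitted model at the nadir-view kernel values (assumed given)."""
    f_iso, f_vol, f_geo = coeffs
    return f_iso + f_vol * k_vol_nadir + f_geo * k_geo_nadir

if __name__ == "__main__":
    # Toy 31-day observation set with hypothetical kernel values and noise.
    rng = np.random.default_rng(2)
    k_vol = rng.uniform(-0.2, 0.6, 31)
    k_geo = rng.uniform(-1.5, 0.0, 31)
    true = 0.25 + 0.10 * k_vol + 0.05 * k_geo
    refl = true + rng.normal(0, 0.005, 31)          # observed reflectance with noise
    coeffs = invert_brdf(refl, k_vol, k_geo)
    print("fitted [f_iso, f_vol, f_geo]:", np.round(coeffs, 3))
    print("nadir-normalized reflectance:", round(normalize_to_nadir(coeffs, 0.0, -1.0), 3))
```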

Query-based Answer Extraction using Korean Dependency Parsing (의존 구문 분석을 이용한 질의 기반 정답 추출)

  • Lee, Dokyoung;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.161-177
    • /
    • 2019
  • In this paper, we study improving the performance of answer extraction in a question-answering system by using sentence dependency parsing results. A Question-Answering (QA) system consists of query analysis, which analyzes the user's query, and answer extraction, which extracts appropriate answers from documents, and various studies have been conducted on both. To improve the performance of answer extraction, the grammatical information of sentences must be reflected accurately. Because Korean has free word order and frequently omits sentence components, dependency parsing is a good way to analyze Korean syntax. Therefore, in this study, we improved the performance of answer extraction by adding features generated from dependency parsing to the inputs of the answer extraction model (Bidirectional LSTM-CRF). We compared the performance of the answer extraction model when only basic word features generated without dependency parsing are input, and when the Eojeol tag feature and the dependency graph embedding feature are added. Since dependency parsing is performed on the Eojeol, the basic sentence unit delimited by spaces, the tag information of each Eojeol can be obtained from the parsing result; the Eojeol tag feature is this tag information. Generating the dependency graph embedding consists of building a dependency graph from the parsing result and learning an embedding of that graph. From the parsing result, a graph is built with each Eojeol as a node, each dependency between Eojeols as an edge, and the Eojeol tag as the node label; depending on whether the direction of the dependency relation is considered, either an undirected or a directed graph is generated. To obtain the embedding of the graph, we used Graph2Vec, which finds the embedding of a graph from the subgraphs that constitute it. The maximum path length between nodes can be specified when extracting subgraphs: if it is 1, the graph embedding is generated only from direct dependencies between Eojeols, and as the maximum path length grows, indirect dependencies are included as well. In the experiments, the maximum path length between nodes is varied from 1 to 3, with and without considering the direction of the dependency, and the answer extraction performance is measured. The experimental results show that both the Eojeol tag feature and the dependency graph embedding feature improve answer extraction performance. In particular, the highest performance was obtained when the direction of the dependency relation was considered and the dependency graph embedding generated with a maximum path length of 1 in the Graph2Vec subgraph extraction was used as model input. From these experiments, we conclude that it is better to take the direction of dependency into account and to consider only direct connections rather than indirect dependencies between words. The significance of this study is as follows. First, we improved the performance of answer extraction by adding features derived from dependency parsing results, taking into account the characteristics of Korean, which has free word order and frequent omission of sentence components. Second, we generated the dependency-parsing feature through a learning-based graph embedding method, without manually defining patterns of dependency between Eojeols. Future research directions are as follows. In this study, the features generated from dependency parsing are applied only to the answer extraction model. In the future, if the performance gain is also confirmed when applying these features to other natural language processing models, such as sentiment analysis or named entity recognition, their validity can be verified more thoroughly.
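A minimal sketch of the graph construction step described above: building a directed dependency graph with each Eojeol as a node, each dependency as an edge, and the Eojeol tag as the node label, using networkx. The toy parse, its tag names, and the tuple layout are assumptions; the resulting graph would then be embedded with a method such as Graph2Vec, which is not shown here.

```python
# Illustrative sketch: directed dependency graph from a hypothetical Korean parse.
import networkx as nx

# (eojeol_index, eojeol_text, tag, head_index); head_index -1 marks the root.
toy_parse = [
    (0, "서울의", "NP_MOD", 1),
    (1, "인구는", "NP_SBJ", 3),
    (2, "약", "AP", 3),
    (3, "950만이다", "VP", -1),
]

def build_dependency_graph(parse, directed=True):
    g = nx.DiGraph() if directed else nx.Graph()
    for idx, text, tag, _ in parse:
        g.add_node(idx, text=text, label=tag)       # Eojeol tag as node label
    for idx, _, _, head in parse:
        if head >= 0:
            g.add_edge(idx, head)                   # dependent -> head
    return g

if __name__ == "__main__":
    g = build_dependency_graph(toy_parse, directed=True)
    # With a maximum path length of 1, only these direct dependencies are used.
    print("edges:", list(g.edges()))
    print("labels:", nx.get_node_attributes(g, "label"))
```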