• Title/Summary/Keyword: Distribution Information


Analysis of GPS Galileo Time Offset Effects on Positioning (GPS Galileo Time Offset (GGTO)의 항법해 영향 분석)

  • Joo, Jung-Min;Cho, Jeong-Ho;Heo, Moon-Beom
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37C no.12
    • /
    • pp.1310-1317
    • /
    • 2012
  • Global Navigation Satellite Systems (GNSS), such as the US Global Positioning System (GPS) and the EU's Galileo, are based on providing precisely time- and frequency-synchronized ranging signals. Because they exploit very precise timing signals, these GNSS provide both navigation and time distribution services. Moreover, because positioning accuracy improves as more satellites become available, a combination of Galileo and GPS can be expected to outperform either system on its own. However, Galileo will not use the same time reference as GPS, and thus a time difference arises: the GPS-Galileo Time Offset (GGTO). The navigation solution calculated by receivers using signals from both systems will consequently contain an additional error if the GGTO is not accounted for. In this paper, we compare GPS Time (GPST) with Galileo System Time (GST) and analyze the effects of the GGTO on positioning accuracy through simulation tests. We then analyze the characteristics of two representative GGTO correction methods, the navigation-message-based method at the system level and the estimation method at the user level, and propose a conceptual design for a novel correction method capable of avoiding the problems of the previous methods.
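
The user-level GGTO correction described above amounts to adding one extra unknown to the combined navigation solution. As a rough sketch (not the paper's implementation; the satellite geometry, clock bias, and 30 ns offset below are invented for illustration), a joint GPS+Galileo least-squares solver can estimate position, receiver clock, and GGTO together:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sats, prs, is_gal, iters=10):
    """Gauss-Newton solve for the state [x, y, z, b_rx, b_ggto] (biases in metres)."""
    x = np.zeros(5)
    for _ in range(iters):
        rho = np.linalg.norm(sats - x[:3], axis=1)           # geometric ranges
        pred = rho + x[3] + is_gal * x[4]                    # Galileo ranges also absorb GGTO
        H = np.column_stack([(x[:3] - sats) / rho[:, None],  # line-of-sight partials
                             np.ones(len(sats)),             # receiver clock column
                             is_gal.astype(float)])          # GGTO column
        dx, *_ = np.linalg.lstsq(H, prs - pred, rcond=None)
        x = x + dx
    return x

# Synthetic scene: four GPS and four Galileo satellites (positions invented).
sats = np.array([[ 1.5e7,  1.0e7, 2.0e7], [-1.2e7,  0.8e7, 2.1e7],
                 [ 0.5e7, -1.4e7, 1.9e7], [-0.9e7, -1.1e7, 2.2e7],
                 [ 1.1e7,  1.3e7, 1.8e7], [-1.4e7,  0.2e7, 2.0e7],
                 [ 0.2e7, -0.9e7, 2.3e7], [-0.3e7,  1.5e7, 1.9e7]])
is_gal = np.array([0, 0, 0, 0, 1, 1, 1, 1])
truth = np.array([0.0, 0.0, 6.37e6])        # receiver position
b_rx, ggto_m = 150.0, C * 30e-9             # clock bias; a 30 ns GGTO is ~9 m of range
prs = np.linalg.norm(sats - truth, axis=1) + b_rx + is_gal * ggto_m

est = solve_position(sats, prs, is_gal)
print("position error (m):", np.round(est[:3] - truth, 4))
print("recovered GGTO (ns):", round(est[4] / C * 1e9, 3))
```

With several satellites per constellation the extra column is well observable; with few Galileo satellites the user-level GGTO estimate degrades, which is the trade-off against the broadcast (navigation-message) correction.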

Performance Analysis of a Bit Mapper of the Dual-Polarized MIMO DVB-T2 System (이중 편파 MIMO를 쓰는 DVB-T2 시스템의 비트 매퍼 성능 분석)

  • Kang, In-Woong;Kim, Youngmin;Seo, Jae Hyun;Kim, Heung Mook;Kim, Hyoung-Nam
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.38A no.9
    • /
    • pp.817-825
    • /
    • 2013
  • The UHDTV system, which provides a realistic service with ultra-high-definition video and multi-channel audio, has been studied as a next-generation broadcasting service. Since the conventional digital terrestrial transmission system cannot accommodate the increased data rate of the UHDTV service, research on increasing the transmission data rate is greatly needed. Accordingly, studies have been conducted to increase the transmission data rate of the DVB-T2 system using dual-polarized MIMO techniques and higher-order modulation. To optimize the MIMO DVB-T2 system, in which irregular LDPC codes are used, it is necessary to study the design of the bit mapper that matches the LDPC code to QAM symbols over the MIMO channel. However, research on bit mapper design has so far been limited to the SISO system. Therefore, this paper defines a new parameter that characterizes the VND distribution of the MIMO DVB-T2 system and analyzes performance according to this parameter, which will be helpful for designing a MIMO bit mapper.
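
For context, the variable node degree (VND) profile of an irregular LDPC code is simply the column-weight distribution of its parity-check matrix, and the bit mapper decides which degree class lands on which QAM bit position. A toy sketch (the tiny H below is invented, not a DVB-T2 code):

```python
import numpy as np
from collections import Counter

# Tiny illustrative parity-check matrix (not an actual DVB-T2 code).
H = np.array([[1, 1, 1, 0, 0, 0],
              [1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [1, 1, 1, 0, 1, 1]])

col_weights = H.sum(axis=0)              # degree of each variable node (code bit)
vnd = Counter(col_weights.tolist())      # degree -> number of variable nodes
print(dict(vnd))                         # {3: 2, 2: 4}

# A bit mapper would then match degree classes to QAM bit positions of
# differing reliability; here we just rank bits by degree.
order = np.argsort(-col_weights, kind="stable")
print(order.tolist())                    # highest-degree variable nodes first
```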

Evaluation of Deformation Characteristics and Vulnerable Parts according to Loading on Compound Behavior Connector (복합거동연결체의 하중재하에 따른 변형 특성 및 취약부위 산정)

  • Kim, Ki-Sung;Kim, Dong-wook;Ahn, Jun-hyuk
    • Journal of the Society of Disaster Information
    • /
    • v.15 no.4
    • /
    • pp.524-530
    • /
    • 2019
  • Purpose: In this paper, we construct a detailed three-dimensional interface element using a three-dimensional analysis program and evaluate the composite-behavior stability of the connector by applying physical properties such as those of general members and of reinforced members. Method: The analytical model uses solid elements, including non-linear material behavior, to model the beam structures, circular flanges, bolting systems, etc. to the same dimensions as the design drawing, with each member assembled into one composite-behavior linkage. To control the uniformity and mesh generation of the contact surfaces of the different element types more effectively, the model was partitioned, and it was modeled with 50 carbon steel materials. Results: The model shows the displacement, deformation, and stress state at each load stage for the contact adjoining part, the load application part, the fixed end part, and the anticipated vulnerable part of each member; the effects of the displacement, deformation, and stress distribution were examined and the validity of the design was verified. Conclusion: Therefore, if the design support of the micropile is determined based on these results, it is possible to identify the vulnerable parts of the composite-behavior connector and the required degree of reinforcement.

An Approximate Estimation of Snow Weight Using KMA Weather Station Data and Snow Density Formulae (기상청 관측 자료와 눈 밀도 공식을 이용한 적설하중의 근사 추정)

  • Jo, Ji-yeong;Lee, Seung-Jae;Choi, Won
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.22 no.2
    • /
    • pp.92-101
    • /
    • 2020
  • To prevent and mitigate damage to farms due to heavy snowfall, snow weight information should be provided in addition to snow depth. This study reviews four formulae for snow density and weight used in extant studies and applies them in the Suwon area to estimate snow weight in Korea. We investigated the observed snow depth of 94 meteorological stations and automatic weather stations (AWS) over the past 30 years (1988-2017). Based on the spatial distribution of snow depth by area in Korea, much of the fresh snow cover due to heavy snowfall occurred in Jeollabuk-do and Gangwon-do, and record snowfalls occurred in Gyeongsangbuk-do and Gangwon-do. However, the most recent heavy winter snowfalls occurred in Gyeonggi-do, Gyeongsangbuk-do, and Jeollanam-do. This implies that even if the snow depth is high, there is no significant damage unless the snow weight is high. The estimation of snow weight in the Suwon area yielded different results depending on the method used to calculate snow density. In general, high snow depth is associated with heavy snow weight; however, the maximum snow weight and the maximum snow depth do not necessarily occur on the same day. The results of this study can be utilized to estimate snow weight at other locations in Korea and to carry out snow weight prediction based on a numerical model. Snow weight information is expected to aid in establishing standards for greenhouse design and to reduce the economic losses incurred by farms.
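
The underlying conversion is simple: snow load per unit area is depth times density, and the four formulae differ mainly in how they supply the density. A minimal sketch with an assumed constant density (the 100 kg/m³ figure is illustrative, not from the paper):

```python
# Depth-to-weight conversion underlying snow load estimation:
# load (kg/m^2) = depth (m) x density (kg/m^3). The density is where
# the reviewed formulae diverge; a constant is assumed here.

def snow_load_kg_m2(depth_cm: float, density_kg_m3: float) -> float:
    """Snow weight per square metre for a given depth and density."""
    return depth_cm * density_kg_m3 / 100.0   # cm -> m, then multiply by density

# 30 cm of fresh snow at an assumed density of 100 kg/m^3:
print(snow_load_kg_m2(30, 100))   # 30.0 kg per square metre
```

With a depth-dependent or age-dependent density formula substituted for the constant, the same one-liner reproduces the divergence between methods that the study reports.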

Reordering Scheme of Location Identifiers for Indexing RFID Tags (RFID 태그의 색인을 위한 위치 식별자 재순서 기법)

  • Ahn, Sung-Woo;Hong, Bong-Hee
    • Journal of KIISE:Databases
    • /
    • v.36 no.3
    • /
    • pp.198-214
    • /
    • 2009
  • Trajectories of RFID tags can be modeled as lines, denoted by tag intervals, captured by an RFID reader and indexed in a three-dimensional domain whose axes are the tag identifier (TID), the location identifier (LID), and time (TIME). The distribution of tag intervals in the domain space is an important factor for efficient processing of queries that trace tags, and it changes according to the arrangement of coordinates on each axis. In particular, the arrangement of LIDs in the domain affects the performance of queries retrieving the traces of tags over time because it provides the location information of tags. Therefore, it is necessary to determine the optimal ordering of LIDs so that queries retrieving tag intervals from the index can be performed efficiently. To do this, we propose LID proximity for reordering previously assigned LIDs to new LIDs and define an LID proximity function so that tag intervals accessed together are stored close to each other in index nodes when a query is processed. To determine the sequence of LIDs in the domain, we also propose a reordering scheme for LIDs based on LID proximity. Our experiments show that the proposed reordering scheme considerably improves the performance of queries tracing tag locations compared with the previous method of assigning LIDs.
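
A much-simplified stand-in for the LID proximity idea (the query log and the greedy chaining below are invented for illustration): count how often two locations are touched by the same trace query, then re-assign LIDs so that high-proximity pairs receive adjacent identifiers:

```python
# Simplified sketch: co-access counts as a proximity score, then a greedy
# chain assigns adjacent LIDs to frequently co-accessed locations.
from collections import Counter
from itertools import combinations

query_log = [                       # each entry: locations one trace query touched
    ["dock", "belt", "gate"],
    ["dock", "belt"],
    ["gate", "dock"],
    ["dock", "belt", "truck"],
]

prox = Counter()
for lids in query_log:
    for a, b in combinations(sorted(set(lids)), 2):
        prox[(a, b)] += 1           # co-access count as LID proximity

# Greedy chain: start from the strongest pair, then keep appending the
# unplaced location with the highest proximity to the current tail.
(first, second), _ = prox.most_common(1)[0]
order = [first, second]
remaining = {l for ls in query_log for l in ls} - set(order)
while remaining:
    tail = order[-1]
    best = max(remaining, key=lambda l: prox[tuple(sorted((tail, l)))])
    order.append(best)
    remaining.remove(best)

new_lid = {name: i for i, name in enumerate(order)}
print(new_lid)                      # co-accessed locations get adjacent LIDs
```

The paper's actual proximity function and reordering scheme are more elaborate; this only shows why adjacency of co-accessed LIDs clusters the tag intervals a query reads into neighboring index nodes.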

The Study for Performance Analysis of Software Reliability Model using Fault Detection Rate based on Logarithmic and Exponential Type (로그 및 지수형 결함 발생률에 따른 소프트웨어 신뢰성 모형에 관한 신뢰도 성능분석 연구)

  • Kim, Hee-Cheul;Shin, Hyun-Cheul
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.9 no.3
    • /
    • pp.306-311
    • /
    • 2016
  • Software reliability in the software development process is an important issue. Infinite-failure NHPP software reliability models presented in the literature exhibit either constant, monotonically increasing, or monotonically decreasing failure occurrence rates per fault. In this paper, a software reliability cost model considering logarithmic and exponential fault detection rates, based on observations from the software product testing process, was studied. A new fault probability was introduced using the Goel-Okumoto model, which is widely used in the field of reliability problems. When the software is corrected or modified, a finite-failure non-homogeneous Poisson process model applies. To analyze the software reliability model with a time-dependent fault detection rate, the parameters were estimated by maximum likelihood estimation from inter-failure time data. It was confirmed that the logarithmic and exponential fault detection models are also efficient in terms of reliability (the coefficient of determination is 80% or more) and can be used as alternatives to conventional models in this field. Based on this paper, software developers can consider the life distribution, using prior knowledge of the software, to identify failure modes, which can be helpful.
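
For reference, the Goel-Okumoto model mentioned above has mean value function m(t) = a(1 − e^(−bt)), whose derivative is the failure intensity λ(t) = ab·e^(−bt). A small sketch with illustrative parameters (not estimates from the paper's data):

```python
# Goel-Okumoto NHPP basics: expected cumulative faults m(t) and the
# failure intensity lambda(t). Parameter values are invented.
import math

def go_mean(t: float, a: float, b: float) -> float:
    """Expected cumulative faults by time t (a = total faults, b = detection rate)."""
    return a * (1.0 - math.exp(-b * t))

def go_intensity(t: float, a: float, b: float) -> float:
    """Instantaneous failure intensity lambda(t) = a * b * exp(-b * t)."""
    return a * b * math.exp(-b * t)

a, b = 100.0, 0.05            # assumed: 100 latent faults, detection rate 0.05/hour
print(round(go_mean(24, a, b), 1))       # faults expected in the first 24 hours
print(round(go_intensity(0, a, b), 1))   # initial intensity a*b
```

The logarithmic and exponential fault-detection-rate variants studied in the paper replace the constant b with a time-dependent rate; the fitting step is then maximum likelihood over the inter-failure times, as the abstract states.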

Balanced Scorecard using System Dynamics for Evaluating IT Investment (IT 투자 평가를 위한 시스템 다이나믹스를 활용한 밸런스스코어카드)

  • Baek, Sung-Won;Ju, Jung-Eun;Koo, Sang-Hoe
    • Journal of Intelligence and Information Systems
    • /
    • v.14 no.1
    • /
    • pp.19-34
    • /
    • 2008
  • IT investment is usually very costly, and it takes a long time to realize the results of the investment. However, most currently available evaluation methods for IT investment are based on short-term effects, so their results are not fully trustworthy. In addition, those methods commonly consider only financial aspects such as ROI. For a more reliable evaluation, it is necessary to consider non-financial factors, such as system utilization, customer satisfaction, and public relations, as well as financial factors. In this research, we propose an evaluation method that can evaluate both financial and non-financial aspects on a long-term basis. For this purpose, we employed the research results developed in system dynamics and the balanced scorecard: system dynamics is useful for analyzing the long-term behavior of a given system, and the balanced scorecard is useful for evaluating both financial and non-financial aspects. We demonstrate the usefulness of our method by applying it to the evaluation of an RFID (Radio Frequency Identification) investment in the distribution and retail industry. From this application, we found that the RFID investment may not be rewarding in the short term but returns income commensurate with the investment in the long run.
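
A toy stock-and-flow sketch in the spirit of the system dynamics approach (all rates and constants are invented, and this is far simpler than the paper's model): an up-front investment whose benefit ramps up with adoption yields a cumulative net value that is negative early and positive in the long run:

```python
# Minimal system-dynamics-style simulation: two stocks (adoption level,
# cumulative net value) updated by simple flows once per year.

def simulate(years: int, invest: float, max_benefit: float, ramp: float):
    adoption, net = 0.0, -invest              # up-front cost hits immediately
    history = []
    for _ in range(years):
        adoption += ramp * (1.0 - adoption)   # adoption approaches 100%
        net += max_benefit * adoption         # yearly benefit flow grows with adoption
        history.append(round(net, 1))
    return history

trace = simulate(years=8, invest=100.0, max_benefit=30.0, ramp=0.3)
print(trace)   # negative at first, positive after the payback year
```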


Study on Context-Aware SOA based on Open Service Gateway initiative platform (OSGi플렛폼 기반의 상황인식 서비스지향아키텍쳐에 관한 연구)

  • Choi, Sung-Wook;Oh, Am-Suk;Kwon, Oh-Hyun;Kang, Si-Hyeob;Hong, Soon-Goo;Choi, Hyun-Rim
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.11
    • /
    • pp.2083-2090
    • /
    • 2006
  • In the proposed Context-Aware SOA (Service-Oriented Architecture) based on the OSGi (Open Service Gateway initiative) platform, the service provider manages the related kinds of services from various sensors in an integrated way, puts each service in a SOAP (Simple Object Access Protocol) message, and registers them to the UDDI (Universal Description, Discovery and Integration) server of the service registry; the service requester then retrieves the specified kinds of services and calls them from the service provider. Recently, most context-aware technologies for the ubiquitous home network have mainly emphasized RFID/USN and location-based technology, and as a result service-oriented architecture has not been researched enough. In an OSGi service platform environment, various context-aware services are dynamically mapped from various sensors, new services are offered at the request of users, and existing services change. Accordingly, data sharing between the provided services, management of the service life cycle, and facilitation of service distribution are needed. Taking all these factors into consideration, this study suggests a Context-Aware SOA based on the Eclipse SOA Tools Platform using the OSGi platform that can achieve a transaction throughput of more than 546 TPS, obtained by the distributional Little's Law from an ATAM (Architecture Tradeoff Analysis Method) evaluation, while other conditions remain stable.
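
The throughput figure quoted above rests on Little's Law, L = λW, which relates in-flight requests (L), throughput (λ), and mean response time (W). A back-of-the-envelope sketch (the concurrency and latency numbers below are assumptions, not the paper's measurements):

```python
# Little's Law: L = lambda * W, so throughput lambda = L / W.

def throughput_tps(concurrency: float, mean_response_s: float) -> float:
    """Throughput (requests/s) from in-flight requests and mean response time."""
    return concurrency / mean_response_s

# e.g. an assumed 60 in-flight service calls at ~0.11 s mean latency
print(round(throughput_tps(60, 0.11)), "TPS")
```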

Design and Analysis of a High Speed Switching System with Two Priorities (두개의 우선 순위를 가지는 고속 스윗칭 시스템의 설계 및 성능 분석)

  • Hong, Yo-Hun;Choe, Jin-Sik;Jeon, Mun-Seok
    • The KIPS Transactions:PartC
    • /
    • v.8C no.6
    • /
    • pp.793-805
    • /
    • 2001
  • In a typical priority system, a high-priority packet is served first, and a low-priority packet is served only when there is no high-priority packet in the system. However, even a high-priority packet can be blocked by head-of-line (HOL) contention in an input queueing system. Therefore, the overall switching performance can be improved by serving a low-priority packet while a high-priority packet is blocked. In this paper, we study the performance of preemptive priority in an input queueing switch for a high-speed switching system. The analysis of this switching system takes into account the influence of priority scheduling and of the window scheme for head-of-line contention. We derive the queue length distribution, delay, and maximum throughput for the switching system based on these control schemes. Because of the service dependencies between inputs, an exact analysis of this switching system is intractable. Consequently, we provide an approximate analysis based on an independence assumption and the flow conservation rule, using an equivalent queueing system to estimate the service capability seen by each input. For the preemptive priority policy without a window scheme, we extend the approximation technique used by Chen and Guerin [1] to obtain more accurate results. Moreover, we newly propose a window scheme that is appropriate for the preemptive priority switching system from the viewpoint of implementation and operation; it can improve the total system throughput and the delay performance of low-priority packets. We also analyze this window scheme using an equivalent queueing system and compare the performance results with those obtained without the window scheme. Numerical results are compared with simulations.
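
The HOL blocking effect that motivates the design can be reproduced with a short Monte Carlo sketch of a saturated input-queued switch (single traffic class, uniform destinations, no priorities or window; all parameters invented). The classical large-N throughput limit for this setup is 2 − √2 ≈ 0.586:

```python
# Monte Carlo sketch of head-of-line (HOL) blocking: every input always
# has a HOL packet with a uniformly random output, each output serves at
# most one packet per slot, and blocked heads keep their destination.
import random

def hol_throughput(n_ports: int, slots: int, seed: int = 1) -> float:
    rng = random.Random(seed)
    heads = [rng.randrange(n_ports) for _ in range(n_ports)]  # HOL destinations
    served = 0
    for _ in range(slots):
        contenders_by_dest = {}
        for i, dest in enumerate(heads):
            contenders_by_dest.setdefault(dest, []).append(i)
        for dest, contenders in contenders_by_dest.items():
            winner = rng.choice(contenders)       # one packet per output per slot
            heads[winner] = rng.randrange(n_ports)  # winner's next packet arrives
            served += 1
    return served / (n_ports * slots)

print(round(hol_throughput(16, 20000), 3))   # typically near 0.6 for N = 16
```

Serving low-priority packets while a high-priority head is blocked, as the paper proposes, reclaims some of the capacity this simulation shows being wasted.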


Image Compression Using DCT Map FSVQ and Single - side Distribution Huffman Tree (DCT 맵 FSVQ와 단방향 분포 허프만 트리를 이용한 영상 압축)

  • Cho, Seong-Hwan
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.10
    • /
    • pp.2615-2628
    • /
    • 1997
  • In this paper, a new codebook design algorithm is proposed. It uses a DCT map based on the two-dimensional discrete cosine transform (2D DCT) and a finite state vector quantizer (FSVQ) when the vector quantizer is designed for image transmission. We make the map by dividing the input image according to edge quantity; then, using the map, the significant features of the training image are extracted with the 2D DCT. A master codebook of the FSVQ is generated by partitioning the training set using a binary tree. The state codebook is constructed from the master codebook, and the index of the input image is then searched not in the master codebook but in the state codebook. Since index coding is an important part of high-speed digital transmission, fixed-length codes are converted to variable-length codes according to the entropy coding rule: Huffman coding assigns transmission codes to the codes of the codebook. This paper proposes a single-side growing Huffman tree to speed up the Huffman code generation process. Compared with the pairwise nearest neighbor (PNN) and classified VQ (CVQ) algorithms on the Einstein and Bridge images, the new algorithm shows better picture quality, with gains of 2.04 dB and 2.48 dB over PNN and 1.75 dB and 0.99 dB over CVQ, respectively.
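
As a baseline for the tree-construction step the paper optimizes, ordinary Huffman code construction looks as follows (symbol frequencies invented; this is the standard algorithm, not the single-side growing variant):

```python
# Standard Huffman code construction: repeatedly merge the two
# lowest-frequency subtrees; each merge prefixes '0'/'1' to the codes.
import heapq
from itertools import count

def huffman_codes(freqs: dict) -> dict:
    tick = count()                    # unique tie-breaker so the heap never compares dicts
    heap = [(f, next(tick), {s: ""}) for s, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tick), merged))
    return heap[0][2]

codes = huffman_codes({"a": 45, "b": 13, "c": 12, "d": 30})
# More frequent symbols never receive longer codes:
print({s: len(c) for s, c in codes.items()})
```

The single-side growing variant constrains where new nodes attach so that the tree can be built faster; the resulting codes remain prefix-free, which is the property the assertion on code lengths reflects.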
