• Title/Summary/Keyword: Fast Computation


Connection between Fourier of Signal Processing and Shannon of 5G SmartPhone (5G 스마트폰의 샤논과 신호처리의 푸리에의 표본화에서 만남)

  • Kim, Jeong-Su;Lee, Moon-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.17 no.6
    • /
    • pp.69-78
    • /
    • 2017
  • Shannon of the 5G smartphone and Fourier of signal processing meet in the sampling theorem (sampling at a rate of at least twice the highest frequency). In this paper, the initial Shannon theorem gives the channel capacity for point-to-point links, whereas 5G operates on relay channels, where the technology has evolved into multi-point MIMO. The Fourier transform is a signal-processing tool with fixed parameters. We analyze the performance by proposing a 2N-1 multivariate Fourier-Jacket transform for the multimedia age. In this study, the authors tackle this signal-processing complexity issue by proposing a Jacket-based fast method for reducing the precoding/decoding complexity in terms of computation time. Jacket transforms have found applications in signal processing and coding theory. Jacket transforms are defined as $n{\times}n$ matrices $A=(a_{jk})$ over a field F with the property $AA^{\dot{+}}=nI_n$, where $A^{\dot{+}}$ is the transpose of the element-wise inverse of A, that is, $A^{\dot{+}}=(a^{-1}_{kj})$; they generalise Hadamard transforms and centre-weighted Hadamard transforms. In particular, exploiting the Jacket transform properties, the authors propose a new eigenvalue decomposition (EVD) method with application to precoding and decoding of distributive multi-input multi-output channels in relay-based decode-and-forward (DF) cooperative wireless networks in which transmission is based on single-symbol-decodable space-time block codes. The authors show that the proposed Jacket-based EVD method achieves a significant reduction in computation time compared with the conventional EVD method. Performance in terms of computation-time reduction is evaluated quantitatively through mathematical analysis and numerical results.
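The defining property $AA^{\dot{+}}=nI_n$ can be checked directly in a few lines; a minimal sketch (assuming NumPy), using a $4{\times}4$ Hadamard matrix, which is the simplest Jacket matrix:

```python
import numpy as np

# A 4x4 Hadamard matrix is the simplest Jacket matrix: every entry is
# nonzero, and A A^+ = n I_n, where A^+ is the transpose of the
# element-wise inverse of A.
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
A = np.kron(H2, H2)              # 4x4 Hadamard via Kronecker product
n = A.shape[0]

A_plus = (1.0 / A).T             # element-wise inverse, then transpose
product = A @ A_plus

print(np.allclose(product, n * np.eye(n)))  # True
```

For a ±1 Hadamard matrix the element-wise inverse equals the matrix itself, so the property reduces to the familiar $AA^T = nI_n$; general Jacket matrices allow other nonzero entries while keeping the same identity.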

Histogram Equalization Based Color Space Quantization for the Enhancement of Mean-Shift Tracking Algorithm (실시간 평균 이동 추적 알고리즘의 성능 개선을 위한 히스토그램 평활화 기반 색-공간 양자화 기법)

  • Choi, Jangwon;Choe, Yoonsik;Kim, Yong-Goo
    • Journal of Broadcast Engineering
    • /
    • v.19 no.3
    • /
    • pp.329-341
    • /
    • 2014
  • Kernel-based mean-shift object tracking has attracted growing interest owing to the feasibility of reliable real-time implementation. The algorithm computes the best mean-shift vector from the color-histogram similarity between the target model and target candidate models, where the color histograms are usually produced by uniform color-space quantization so that the tracker can run in real time. However, when the target-model image has reduced contrast, such uniform quantization yields a histogram with large values in only a few bins, reducing the accuracy of the similarity comparison. A non-uniform quantization algorithm has been proposed to solve this problem, but its high complexity makes it hard to apply to real-time tracking applications. This paper therefore proposes a fast non-uniform color-space quantization method based on histogram equalization, which adjusts the histogram distribution so that as many bins of the target-model histogram as possible carry meaningful values. With the proposed method, more bins participate in the similarity comparison, improving the accuracy of the proposed mean-shift tracker. Simulations with various test videos demonstrate that the proposed algorithm provides tracking results similar to or better than the previous non-uniform quantization scheme, with significantly reduced computational complexity.
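The core idea, equalize first so that a subsequent uniform quantization behaves like a non-uniform quantizer matched to the data, can be sketched as follows (a simplified single-channel illustration with assumed bin counts, not the paper's exact procedure; assumes NumPy):

```python
import numpy as np

def equalize_then_quantize(channel, n_bins=16, levels=256):
    """Histogram-equalize one color channel, then quantize uniformly.

    Equalizing first spreads a low-contrast channel over the full range,
    so the following uniform quantization acts like a non-uniform
    quantizer adapted to the channel's distribution.
    """
    hist = np.bincount(channel.ravel(), minlength=levels)
    cdf = hist.cumsum() / channel.size
    equalized = np.floor(cdf[channel] * (levels - 1)).astype(int)
    return equalized * n_bins // levels        # bin index per pixel

# Low-contrast channel: all values crowded into [100, 120].
rng = np.random.default_rng(0)
ch = rng.integers(100, 121, size=(64, 64))

plain = ch * 16 // 256                         # uniform quantization only
eq = equalize_then_quantize(ch, n_bins=16)

# Far more of the 16 bins are occupied after equalization.
print(len(np.unique(plain)), len(np.unique(eq)))
```

With only uniform quantization the low-contrast data fall into two bins; after equalization most of the 16 bins carry values, which is what makes the histogram similarity comparison more discriminative.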

A Study on Dynamic Behaviour of Cable-Stayed Bridge by Vehicle Load (차량하중에 의한 사장교의 동적거동에 관한 연구)

  • Park, Cheun Hyek;Han, Jai Ik
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.14 no.6
    • /
    • pp.1299-1308
    • /
    • 1994
  • This paper considers the dynamic behavior and the dynamic impact coefficient of a cable-stayed bridge under vehicle load. The transfer matrix method, a static analysis method, is used to obtain influence values for displacements, girder section forces, and cable forces; the obtained influence values serve as basic data for the dynamic analysis. The transfer matrix method is used because it is considerably simpler than the finite element method, computes quickly, and gives high precision. In the dynamic analysis, the uncoupled equations of motion are derived from the simultaneous equations of motion of the cable-stayed bridge and the traveling vehicle using the mode shapes of the undamped free-vibration system. The solution of the uncoupled equations of motion, i.e., the time histories of deflection, velocity, and acceleration in the reference coordinate system, is found by the Newmark-${\beta}$ method, a direct integration method. The resulting time history of dynamic response is then transformed into that of the cable-stayed bridge by a linear coordinate transformation. The numerical analysis shows that the dynamic behavior of the cable-stayed bridge under vehicle load varies with the design parameters investigated: the span ratio, the ratio of main span length, the tower height, the flexural rigidity of the longitudinal girder, the flexural rigidity of the tower, and the cable stiffness.
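The Newmark-${\beta}$ integration step referred to above can be sketched for a single-degree-of-freedom system (a generic textbook formulation, not the paper's bridge model; assumes only the Python standard library):

```python
import math

def newmark_beta(m, c, k, p, x0, v0, dt, beta=0.25, gamma=0.5):
    """Newmark-beta direct integration of m*x'' + c*x' + k*x = p(t).

    beta = 1/4, gamma = 1/2 (average acceleration) is unconditionally
    stable, the usual choice for structural response time histories.
    """
    x, v = [x0], [v0]
    a = [(p[0] - c * v0 - k * x0) / m]
    denom = m + c * gamma * dt + k * beta * dt * dt
    for i in range(len(p) - 1):
        # predictors from the current state
        x_pred = x[i] + dt * v[i] + dt * dt * (0.5 - beta) * a[i]
        v_pred = v[i] + dt * (1.0 - gamma) * a[i]
        # solve the equilibrium equation at step i+1 for the acceleration
        a_next = (p[i + 1] - c * v_pred - k * x_pred) / denom
        x.append(x_pred + beta * dt * dt * a_next)
        v.append(v_pred + gamma * dt * a_next)
        a.append(a_next)
    return x, v, a

# Undamped free vibration with natural period T = 1 s: after one full
# period the displacement should return close to its initial value.
w = 2.0 * math.pi
dt, n = 0.01, 101                    # 100 steps = one period
x, v, a = newmark_beta(1.0, 0.0, w * w, [0.0] * n, 1.0, 0.0, dt)
print(round(x[-1], 3))
```

The average-acceleration variant introduces no numerical damping, so the free-vibration amplitude is preserved and only a small period elongation of order $(\omega\,\Delta t)^2$ remains.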


Content-based Image Retrieval Using Color Adjacency and Gradient (칼라 인접성과 기울기를 이용한 내용 기반 영상 검색)

  • Jin, Hong-Yan;Lee, Ho-Young;Kim, Hee-Soo;Kim, Gi-Seok;Ha, Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.38 no.1
    • /
    • pp.104-115
    • /
    • 2001
  • A new content-based color image retrieval method integrating color adjacency and gradient features is proposed in this paper. The color histogram, the most widely used feature of color images, has the advantages of being invariant to changes in viewpoint and to rotation of the image, and of being simple and fast to compute. However, histogram-based retrieval has difficulty distinguishing different images with similar color distributions, because the histogram is generated over uniformly quantized colors and contains no spatial information. Another shortcoming of histogram-based retrieval is that storing the features usually requires a large amount of space. To avoid these drawbacks, the proposed method computes the gradient, defined as the largest color difference between neighboring pixels, instead of applying the uniform quantization common to most histogram-based methods. In addition, color adjacency information, which captures the major color composition of an image, is extracted and represented in binary form to reduce the amount of feature storage. The two features are integrated to make retrieval more robust to changes in various external conditions.
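The two features can be illustrated together in a small sketch (the coarse color indexing used here is an assumption for illustration, not the paper's color representation; assumes NumPy):

```python
import numpy as np

def gradient_and_adjacency(img, n_colors=8):
    """Per-pixel gradient (largest color difference to a 4-neighbour)
    and a binary color-adjacency matrix over coarse color indices.

    img: H x W x 3 uint8 array. Colors are indexed by a crude
    intensity bucket purely for illustration -- an assumption, not
    the paper's actual color indexing scheme.
    """
    f = img.astype(int)
    h, w, _ = f.shape
    grad = np.zeros((h, w), dtype=int)
    idx = (f.sum(axis=2) * n_colors) // (3 * 256)   # crude color index
    adj = np.zeros((n_colors, n_colors), dtype=bool)
    for dy, dx in ((0, 1), (1, 0)):                 # right / down neighbours
        a = f[: h - dy, : w - dx]
        b = f[dy:, dx:]
        d = np.abs(a - b).sum(axis=2)               # color difference
        grad[: h - dy, : w - dx] = np.maximum(grad[: h - dy, : w - dx], d)
        grad[dy:, dx:] = np.maximum(grad[dy:, dx:], d)
        ia, ib = idx[: h - dy, : w - dx], idx[dy:, dx:]
        adj[ia, ib] = True                          # mark adjacent color pairs
        adj[ib, ia] = True
    return grad, adj

# Two-region image: black left half, white right half.
img = np.zeros((4, 6, 3), dtype=np.uint8)
img[:, 3:] = 255
grad, adj = gradient_and_adjacency(img)
print(grad.max(), adj[0, 7])   # largest gradient sits on the boundary
```

The gradient peaks only at the region boundary, and the adjacency matrix records that the two dominant colors touch, which is exactly the spatial information a plain histogram discards.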


Determination of the Gravity Anomaly in the Ocean Area of Korean Peninsula using Satellite Altimeter Data (위성 고도자료를 이용한 한반도 해상지역에서의 중력이상의 결정)

  • 김광배;최재화;윤홍식;이석배
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.13 no.2
    • /
    • pp.177-185
    • /
    • 1995
  • Gravity anomalies were recovered on a $5'\times{5'}$ grid using sea surface height (SSH) data obtained from the combination of Geosat, ERS-1, and Topex/Poseidon altimeter data around the Korean Peninsula, bounded by latitudes $30^\circ{N}$ to $50^\circ{N}$ and longitudes $120^\circ{E}$ to $140^\circ{E}$. An inverse FFT technique was applied to recover the gravity anomalies from the SSH data. The estimated gravity anomalies were compared with shipboard gravity measurements around the Korean Peninsula. The differences between the measured and altimeter-derived anomalies had a mean of -0.51 mGal and a standard deviation of 13.48 mGal. Comparing the measured data with the OSU91A geopotential model gave a mean of 11.93 mGal and a standard deviation of 19.19 mGal, and comparing the OSU91A model with the altimeter data gave a mean of 5.30 mGal and a standard deviation of 19.62 mGal. From these results, it can be concluded that gravity anomalies computed from altimeter data can be used for geoid computation in place of measured data.
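What recovering gravity anomalies from a gridded height field via FFT looks like in the simplest setting can be sketched with the flat-Earth spectral relation $\Delta\tilde{g}(\mathbf{k})=\gamma\,|\mathbf{k}|\,\tilde{N}(\mathbf{k})$, applied here to a synthetic field (this planar relation and all numbers are illustrative assumptions, not the paper's exact formulation; assumes NumPy):

```python
import numpy as np

gamma = 9.81                     # normal gravity, m/s^2 (rounded)
L, n = 500e3, 128                # 500 km square patch, 128 x 128 grid
dx = L / n

# Synthetic sea-surface-height field standing in for geoid undulations N.
x = np.arange(n) * dx
X, Y = np.meshgrid(x, x)
N = 2.0 * np.sin(2 * np.pi * X / L) * np.cos(2 * np.pi * Y / L)   # metres

k = 2 * np.pi * np.fft.fftfreq(n, d=dx)      # angular wavenumbers
KX, KY = np.meshgrid(k, k)
K = np.hypot(KX, KY)

# Planar (flat-Earth) spectral relation, an illustrative assumption:
# dg~(k) = gamma * |k| * N~(k)
dg = np.fft.ifft2(gamma * K * np.fft.fft2(N)).real    # m/s^2
dg_mgal = dg * 1e5                                    # 1 mGal = 1e-5 m/s^2
print(round(float(np.abs(dg_mgal).max()), 1))         # a few tens of mGal
```

A metre-level undulation at a few-hundred-kilometre wavelength maps to anomalies of a few tens of mGal, the same order as the statistics quoted in the abstract.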


A Comparison of the Gravimetric Geoid and the Geometric Geoid Using GPS/Leveling Data (GPS/Leveling 데이터를 이용한 기하지오이드와 중력지오이드의 비교 분석)

  • Kim, Young-Gil;Choi, Yun-Soo;Kwon, Jay-Hyoun;Hong, Chang-Ki
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.28 no.2
    • /
    • pp.217-222
    • /
    • 2010
  • The geoid is the level surface that closely approximates mean sea level and is usually used as the origin of the vertical datum. Various sources of gravity measurements are used for geoid computation in South Korea and, as a consequence, the geoid models may differ. However, only limited analysis has been performed due to a lack of control data, namely GPS/Leveling data. In this study, therefore, gravimetric geoids are compared with the geometric geoid obtained through GPS/Leveling procedures. The gravimetric geoids are the geoid from airborne gravimetry, the geoid from terrestrial gravimetry, the NGII geoid (published by the National Geographic Information Institute), and the NORI geoid (published by the National Oceanographic Research Institute). For the analysis, the geometric geoid height is obtained at each unified national control point and the difference between the geometric geoid and each gravimetric geoid is computed. The geoid height data are also gridded on a regular $10{\times}10-km$ grid so that the FFT method can be applied to analyze the geoid height differences in the frequency domain. The results show no significant differences in standard deviation when the geoids from airborne and terrestrial gravimetry are compared with the geometric geoid, while relatively large differences appear when the NGII and NORI geoids are compared with it. Frequency-domain analysis of the NGII and NORI geoids shows that the deviations occur in the long-wavelength domain.
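The frequency-domain comparison of gridded geoid differences can be sketched as a radially averaged power spectrum (a generic analysis assumed for illustration; the paper's exact FFT procedure is not reproduced; assumes NumPy):

```python
import numpy as np

def radial_power_spectrum(grid):
    """Radially averaged power spectrum of a gridded geoid-height
    difference field, for separating long- from short-wavelength
    disagreement between two geoid models."""
    n = grid.shape[0]
    F = np.fft.fftshift(np.fft.fft2(grid - grid.mean()))
    power = np.abs(F) ** 2
    ky, kx = np.indices(power.shape) - n // 2
    r = np.hypot(kx, ky).astype(int)             # radial wavenumber bin
    spectrum = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return spectrum / np.maximum(counts, 1)      # mean power per radial bin

# Toy difference field dominated by one long-wavelength cycle.
n = 64
x = np.arange(n)
diff = np.outer(np.sin(2 * np.pi * x / n), np.ones(n))
spec = radial_power_spectrum(diff)
peak = int(np.argmax(spec[1:])) + 1
print(peak)   # power concentrates in the lowest nonzero wavenumber
```

A difference field whose power concentrates at the lowest wavenumbers is exactly what the abstract describes for the NGII and NORI geoids: the models disagree mainly at long wavelengths.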

Primary Solution Evaluations for Interpreting Electromagnetic Data (전자탐사 자료 해석을 위한 1차장 계산)

  • Kim, Hee-Joon;Choi, Ji-Hyang;Han, Nu-Ree;Song, Yoon-Ho;Lee, Ki-Ha
    • Geophysics and Geophysical Exploration
    • /
    • v.12 no.4
    • /
    • pp.361-366
    • /
    • 2009
  • Layered-earth Green's functions play a key role in modeling the response of exploration targets in electromagnetic (EM) surveys. They are computed through Hankel transforms of analytic kernels, and computational precision depends on the choice among algebraically equivalent forms in which these kernels are expressed. Since three-dimensional (3D) modeling can require a huge number of Green's function evaluations, the total computational time can be dominated by the Hankel transform evaluations. Linear digital filters have proven to be a fast and accurate method of computing these Hankel transforms. In EM modeling for 3D inversion, electric fields are generally evaluated with the secondary-field formulation to avoid the singularity problem. In this study, three components of the electric field for five different sources on the surface of a homogeneous half-space were derived as primary-field solutions. Moreover, reflection coefficients in TE and TM modes were derived to calculate EM responses accurately for a two-layered model with a sea layer. Accurate primary fields should substantially improve accuracy and decrease computation times for Green's function-based problems such as magnetotelluric (MT) problems and marine EM surveys.
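What the linear digital filters evaluate can be checked against a closed-form Hankel-transform pair; the brute-force quadrature below is a slow stand-in for the filters themselves (illustrative only; assumes NumPy):

```python
import numpy as np

def j0(x):
    """Bessel J0 from its integral representation, by the midpoint rule
    (accurate enough for this demonstration)."""
    m = 1000
    theta = (np.arange(m) + 0.5) * (np.pi / m)
    return np.cos(np.outer(np.atleast_1d(x), np.sin(theta))).mean(axis=-1)

def hankel0(kernel, r, lam_max=50.0, n=5000):
    """Zeroth-order Hankel transform  int_0^inf K(lam) J0(lam*r) lam dlam
    by brute-force midpoint quadrature -- a slow stand-in that shows what
    the fast linear digital filters compute."""
    dlam = lam_max / n
    lam = (np.arange(n) + 0.5) * dlam
    return float(np.sum(kernel(lam) * j0(lam * r) * lam) * dlam)

# Known transform pair:  K(lam) = exp(-a*lam)  ->  a / (a^2 + r^2)^(3/2)
a, r = 1.0, 0.5
numeric = hankel0(lambda lam: np.exp(-a * lam), r)
exact = a / (a ** 2 + r ** 2) ** 1.5
print(abs(numeric - exact) < 1e-3)
```

Real EM kernels decay far more slowly than this exponential test kernel, which is precisely why production codes replace brute-force quadrature with logarithmically sampled digital filters.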

Improvement of Address Pointer Assignment in DSP Code Generation (DSP용 코드 생성에서 주소 포인터 할당 성능 향상 기법)

  • Lee, Hee-Jin;Lee, Jong-Yeol
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.45 no.1
    • /
    • pp.37-47
    • /
    • 2008
  • Exploiting the address generation units (AGUs) typically provided in DSPs plays an important role in DSP code generation, since they perform fast address computation in parallel with the central data path. Offset assignment optimizes the memory layout of program variables to take advantage of the AGUs' capabilities, and consists of memory layout generation and address pointer assignment steps. In this paper, we propose an effective address pointer assignment method that minimizes the number of address-calculation instructions in DSP code generation. The proposed approach reduces the time complexity of a conventional address pointer assignment algorithm with fixed memory layouts by breaking minimum-cost nodes. To reduce memory size and processing time, we employ a powerful pruning technique. Moreover, because the memory layout affects the result of the address pointer assignment algorithm, our approach iteratively improves the initial solution by changing the memory layout at each iteration. We applied the proposed approach to about 3,000 access sequences from the OffsetStone benchmarks to demonstrate its effectiveness. Experimental results with the benchmarks show an average improvement of 25.9% in address code over previous work.
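The cost model behind offset assignment can be sketched on a tiny example (the simple ±1 auto-modify cost model and the exhaustive layout search below are textbook simplifications for illustration, not the paper's algorithm):

```python
from itertools import permutations

def address_cost(layout, accesses):
    """Extra address-computation instructions for one address register
    with free post-increment/decrement by 1 -- the classic simple
    offset assignment cost model."""
    pos = {v: i for i, v in enumerate(layout)}
    cost = 0
    for prev, cur in zip(accesses, accesses[1:]):
        if abs(pos[cur] - pos[prev]) > 1:   # outside the auto-modify range
            cost += 1
    return cost

def best_layout(variables, accesses):
    """Exhaustive search over layouts -- fine for tiny examples; the
    paper's point is exactly that realistic instances need heuristics."""
    return min(permutations(variables), key=lambda p: address_cost(p, accesses))

accesses = ['a', 'b', 'c', 'a', 'd', 'b']
naive = address_cost(('a', 'b', 'c', 'd'), accesses)
best = best_layout(('a', 'b', 'c', 'd'), accesses)
print(naive, address_cost(best, accesses))   # layout choice saves instructions
```

Even on four variables, reordering the layout removes a third of the explicit address instructions; this interplay between layout and pointer assignment is what the paper's iterative improvement exploits.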

Clustering of Web Objects with Similar Popularity Trends (유사한 인기도 추세를 갖는 웹 객체들의 클러스터링)

  • Loh, Woong-Kee
    • The KIPS Transactions:PartD
    • /
    • v.15D no.4
    • /
    • pp.485-494
    • /
    • 2008
  • Huge numbers of web items of various kinds, such as keywords, images, and web pages, are widely available on the Web. The popularities of such web items change continuously over time, and mining temporal patterns in the popularities of web items is an important problem useful for several web applications. For example, temporal patterns in the popularities of search keywords help web search enterprises predict future popular keywords, enabling them to make price decisions when marketing search keywords to advertisers. However, the presence of millions of web items makes it difficult to scale up previous techniques for this problem. This paper proposes an efficient method for mining temporal patterns in the popularities of web items. We treat the popularities of web items as time-series and propose a gap measure to quantify the similarity between the popularities of two web items. To reduce the computation overhead of this measure, an efficient method using the Fast Fourier Transform (FFT) is presented. We do not assume that the popularities of web items follow any probability distribution or are periodic. To find clusters of web items with similar popularity trends, we propose using a density-based clustering algorithm based on the gap measure. Our experiments using the popularity trends of search keywords obtained from the Google Trends web site illustrate the scalability and usefulness of the proposed approach in real-world applications.
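The FFT speed-up for comparing two popularity time-series can be sketched with plain normalized cross-correlation standing in for the paper's gap measure (whose exact definition is not reproduced here; assumes NumPy):

```python
import numpy as np

def fft_xcorr(x, y):
    """Circular cross-correlation of two equal-length series via the
    FFT: O(n log n) instead of the O(n^2) direct computation.

    This plain normalized cross-correlation is a stand-in for the
    paper's gap measure, which uses the same FFT trick.
    """
    x = (x - x.mean()) / (x.std() + 1e-12)
    y = (y - y.mean()) / (y.std() + 1e-12)
    return np.fft.ifft(np.conj(np.fft.fft(x)) * np.fft.fft(y)).real / len(x)

# Two "popularity trends": the same burst, one lagging 10 samples behind.
t = np.arange(256)
a = np.exp(-((t - 100.0) ** 2) / 200.0)
b = np.roll(a, 10)
lag = int(np.argmax(fft_xcorr(a, b)))
print(lag)   # the FFT correlation recovers the 10-sample lag
```

A density-based clusterer then only needs the pairwise similarity values; the FFT keeps each pairwise comparison cheap enough to scale to many items.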

A study on the discriminant analysis of node deployment based on cable type Wi-Fi in indoor (케이블형 Wi-Fi 기반 실내 공간의 노드 배치 판별 분석에 관한 연구)

  • Zin, Hyeon-Cheol;Kim, Won-Yeol;Kim, Jong-Chan;Kim, Yoon-Sik;Seo, Dong-Hoan
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.40 no.9
    • /
    • pp.836-841
    • /
    • 2016
  • An indoor positioning system using Wi-Fi requires a radio map that combines a two- or higher-dimensional indoor space with node position information. When constructing the radio map, measuring the received signal strength indicator (RSSI) and confirming node placement consume substantial time. In particular, when the installed wireless environment changes or a new space is created, easy node installation and fast indoor radio mapping are needed to provide indoor location-based services. In this paper, to reduce this time, we propose an algorithm that distinguishes straight and curved corridor sections by RSSI visualization and Sobel filter-based edge detection, enabling accurate node deployment and space analysis using cable-type Wi-Fi nodes installed at 3 m intervals. Because the cable-type Wi-Fi nodes are connected to the same power line, the installation order of the equally spaced nodes can be confirmed accurately. Building on this advantage, the signal distribution was confirmed and analyzed section by section through Sobel filter-based edge detection and total RSSI distribution (TRD) computation, applied to a visualization of the measured RSSI. Compared with the raw data, the proposed algorithm improves the signal intensity by 13.73% in curved sections, and the characteristics of the straight and curved sections are enhanced as the signal intensity in straight sections decreases by an average of 34.16%.
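The Sobel-based edge detection on a visualized RSSI map can be sketched as follows (the toy RSSI values are assumptions for illustration; assumes NumPy):

```python
import numpy as np

def sobel_magnitude(grid):
    """Sobel gradient magnitude of a 2-D RSSI map (interior pixels only)."""
    gx_k = np.array([[-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]])
    gy_k = gx_k.T
    h, w = grid.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):              # accumulate the 3x3 convolution
        for j in range(3):
            patch = grid[i:i + h - 2, j:j + w - 2]
            gx += gx_k[i, j] * patch
            gy += gy_k[i, j] * patch
    return np.hypot(gx, gy)

# Toy RSSI map (dBm): signal strength drops abruptly after column 4,
# mimicking the transition into a curved corridor section.
rssi = np.full((8, 8), -40.0)
rssi[:, 4:] = -70.0
mag = sobel_magnitude(rssi)
edge_col = int(np.argmax(mag.sum(axis=0)))
print(edge_col)   # the gradient magnitude peaks at the step
```

The edge detector localizes where the RSSI distribution changes sharply, which is the cue the proposed algorithm uses to separate straight from curved corridor sections.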