• Title/Summary/Keyword: Data Distortions


The Effect of Earnings Management on the Bond Grading (이익조정이 신용등급에 미치는 영향)

  • Kim, Yang-Gu; Kwon, Hyeok-Gi; Park, Sang-Bong
    • Management & Information Systems Review / v.34 no.2 / pp.113-130 / 2015
  • This study examines the relation between firms' earnings management and credit ratings. Unlike preceding papers that focus only on accrual-based earnings management (hereafter, AM), it examines the effects of both AM and real earnings management (hereafter, RM) on credit ratings. RM damages a firm's future cash flow generation ability and long-term operating performance more than AM does, so RM is a more negative signal for credit analysts. Credit analysts, however, have much greater difficulty seeing through RM: to detect it, they must understand a firm's internal operating activities, cost structure, and receivables collection practices, and judge whether profit distortions stem from abnormal changes in them. The sample consists of 2,150 firm-year observations of listed companies from 2002 to 2010. The empirical evidence shows that both AM and RM are negatively related to credit ratings. This result implies that credit analysts see through AM and RM when interpreting financial information; that is, they discount credit ratings according to the level of earnings management, whether accrual-based or through real activities. The paper also finds that RM is more negatively related to credit ratings than AM, which suggests that credit analysts do not take RM into account in the credit rating process as much as AM.
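
A minimal sketch of the kind of pooled regression such a study typically estimates, regressing a numeric credit-rating score on accrual (AM) and real (RM) earnings-management proxies; the variable names, control set, and data file are hypothetical, not taken from the paper.

```python
# Illustrative only: not the paper's model. Regress a numeric rating score on
# hypothetical AM/RM proxies with firm-clustered standard errors.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("firm_years.csv")   # hypothetical 2,150 firm-year panel

model = smf.ols(
    "rating_score ~ am + rm + size + leverage + roa",  # am/rm: earnings-management proxies
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["firm_id"]})

print(model.summary())  # negative coefficients on am and rm would match the reported finding
```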


Study on the Calibration of a Full-Polarimetric Scatterometer System at X-band (X-밴드 완전 편파 Scatterometer 시스템 보정에 관한 연구)

  • Hwang, Ji-Hwan; Park, Seong-Min; Kwon, Soon-Gu; Oh, Yi-Sok
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.21 no.4 / pp.408-416 / 2010
  • This paper presents a study on the calibration of an X-band HPS (Hongik Polarimetric Scatterometer) system for ground-based operation. To calibrate the scatterometer system, the degree of its distortion is analyzed by comparing theoretical and measured values for theoretically well-characterized calibration targets such as a metal sphere, a trihedral corner reflector (CR), and a metal cylinder. Because calibration under field conditions depends on precise and stable measurements of these targets, we present a measurement technique, the so-called automatic 2-D target-scanning technique, which uses the incidence-angle control (ξ- and φ-directions) of the HPS system. We then used the STCT (Single-Target Calibration Technique) and the GCT (General Calibration Technique) to correct the distortion of the scatterometer system, and measured the polarimetric RCS (Radar Cross Section) and phase difference of a trihedral CR as a test target to verify the accuracy of the calibration; trihedral CRs of three sizes (10, 20, and 30 cm) were used. With the GCT and STCT, respectively, we obtained error ranges of about ±1.0 dB and ±0.5 dB in polarimetric RCS, and about −20°~0° and ±5° in the co-polarized phase difference.
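
The calibration described above rests on comparing measured target responses with theoretical RCS values. The sketch below only computes the textbook peak RCS of a triangular trihedral corner reflector and a single-channel amplitude offset; it is not the STCT/GCT polarimetric-distortion solution, and the frequency and measured value are assumptions.

```python
# Sketch under stated assumptions: compare a hypothetical measured RCS with the
# theoretical peak RCS of a triangular trihedral corner reflector to get a
# one-channel calibration offset in dB. Not the paper's STCT/GCT procedure.
import math

def trihedral_peak_rcs(leg_m: float, freq_hz: float) -> float:
    """Peak RCS (m^2) of a triangular trihedral CR: 4*pi*a^4 / (3*lambda^2)."""
    lam = 3e8 / freq_hz
    return 4.0 * math.pi * leg_m ** 4 / (3.0 * lam ** 2)

def to_db(x: float) -> float:
    return 10.0 * math.log10(x)

freq_hz = 9.6e9                                   # assumed X-band frequency
sigma_theory = trihedral_peak_rcs(0.30, freq_hz)  # 30 cm trihedral CR from the abstract
sigma_measured = 14.0                             # hypothetical uncalibrated measurement, m^2

print(f"theory: {to_db(sigma_theory):.2f} dBsm, "
      f"offset: {to_db(sigma_measured) - to_db(sigma_theory):.2f} dB")
```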

High Bit-Rates Quantization of the First-Order Markov Process Based on a Codebook-Constrained Sample-Adaptive Product Quantizers (부호책 제한을 가지는 표본 적응 프로덕트 양자기를 이용한 1차 마르코프 과정의 고 전송률 양자화)

  • Kim, Dong-Sik
    • Journal of the Institute of Electronics Engineers of Korea SP / v.49 no.1 / pp.19-30 / 2012
  • For digital data compression, quantization is the main part of lossy source coding. To improve quantization performance, the vector quantizer (VQ) can be employed, but its encoding complexity increases exponentially as the vector dimension or bit rate grows, and much research has been conducted to alleviate this problem. Especially for high bit rates, a constrained VQ called the sample-adaptive product quantizer (SAPQ) has been proposed to reduce the huge encoding complexity of regular VQs. SAPQ has a structure very similar to that of the product VQ (PQ), yet its performance can be better than the PQ case, while its encoding complexity and codebook memory requirement are lower than those of a regular full-search VQ. Among SAPQs, 1-SAPQ has a simple quantizer structure in which each product codebook is symmetric with respect to the diagonal line in the underlying vector space, and it is known to perform well for i.i.d. sources. This paper studies the design of 1-SAPQ for the first-order Markov process. For an efficient design, an algorithm for the initial codebook is proposed, and numerical analysis shows that 1-SAPQ yields lower quantizer distortion than a VQ of comparable encoding complexity and distortions close to those of the DPCM (differential pulse code modulation) scheme with the Lloyd-Max quantizer.
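
The DPCM/Lloyd-Max baseline mentioned at the end is simple enough to sketch. Below is a plain Lloyd iteration designing an 8-level scalar quantizer on samples drawn from a first-order Gauss-Markov (AR(1)) source; the correlation coefficient and level count are assumptions, and this is not the 1-SAPQ design algorithm proposed in the paper.

```python
# Lloyd-Max scalar quantizer trained on an AR(1) source -- a baseline sketch,
# not the paper's 1-SAPQ design. rho and the number of levels are assumptions.
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.9, 100_000
x = np.empty(n)
x[0] = rng.standard_normal()
for t in range(1, n):                                   # first-order Markov source
    x[t] = rho * x[t - 1] + np.sqrt(1 - rho ** 2) * rng.standard_normal()

levels = 8
codebook = np.quantile(x, (np.arange(levels) + 0.5) / levels)   # initial codebook

for _ in range(50):                                     # Lloyd iterations
    edges = (codebook[:-1] + codebook[1:]) / 2          # nearest-neighbor cell boundaries
    idx = np.searchsorted(edges, x)                     # assign samples to cells
    codebook = np.array([x[idx == k].mean() for k in range(levels)])  # centroid update

idx = np.searchsorted((codebook[:-1] + codebook[1:]) / 2, x)
print("MSE distortion:", np.mean((x - codebook[idx]) ** 2))
```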

Interpretation of Finite HMD Source EM Data using Cagniard Impedance (Cagniard 임피던스를 이용한 수평 자기쌍극자 송신원 전자탐사 자료의 해석)

  • Kwon Hyoung-Seok; Song Yoonho; Seol Soon-Jee; Son Jeong-Sul; Suh Jung-Hee
    • Geophysics and Geophysical Exploration / v.5 no.2 / pp.108-117 / 2002
  • We introduce a new approach to obtaining subsurface conductivity information from the Cagniard impedance over a two-dimensional (2-D) model in the presence of a horizontal magnetic dipole source in the frequency range of 1 kHz~1 MHz. First, we designed a method to calculate the apparent resistivity from the Cagniard impedance, the ratio between the horizontal electric and magnetic fields, accounting for source effects where the plane-wave assumption fails in the finite-source EM problem, and applied it to several numerical models such as a homogeneous half-space and a layered earth. The method successfully provided subsurface information, even if still rough, whereas the apparent resistivity based on the plane-wave assumption hardly gives useful information. Next, by analyzing the Cagniard impedance and the source-corrected apparent resistivity over 2-D models containing a conductive or resistive block, we showed the possibility of obtaining the conductivities of both the background medium and the anomaly with this approach. In addition, pseudosections of the source-corrected apparent resistivity and phase constructed from the Cagniard impedance over the isolated conductive- and resistive-block models clearly delineated the outlines of the anomalies and the conductivity distribution, although there were some distortions from side lobes caused by the 2-D body.
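
For reference, the Cagniard impedance and the conventional plane-wave apparent resistivity that the paper starts from can be written in a few lines; the source-effect correction that is the paper's contribution is not reproduced, and the field values below are hypothetical.

```python
# Standard Cagniard apparent resistivity and phase from the horizontal field
# ratio Ex/Hy -- the plane-wave starting point only, not the paper's method.
import numpy as np

MU0 = 4e-7 * np.pi                       # magnetic permeability of free space

def cagniard(Ex: complex, Hy: complex, freq_hz: float):
    """Apparent resistivity (ohm-m) and impedance phase (deg) from Ex/Hy."""
    Z = Ex / Hy                          # Cagniard impedance
    omega = 2.0 * np.pi * freq_hz
    rho_a = np.abs(Z) ** 2 / (omega * MU0)
    phase = np.degrees(np.angle(Z))
    return rho_a, phase

# hypothetical horizontal field values at 100 kHz
rho_a, phase = cagniard(Ex=9.0e-5 + 3.0e-5j, Hy=1.0e-5 + 0.2e-5j, freq_hz=1.0e5)
print(f"apparent resistivity: {rho_a:.1f} ohm-m, phase: {phase:.1f} deg")
```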

The comparative analysis of image fusion results by using KOMPSAT-2/3 images (아리랑 2호/3호 영상을 이용한 영상융합 비교 분석)

  • Oh, Kwan Young; Jung, Hyung Sup; Jeong, Nam Ki; Lee, Kwang Jae
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.32 no.2 / pp.117-132 / 2014
  • This paper analyzes pan-sharpening results for KOMPSAT-2 and KOMPSAT-3 images, focusing on a comparison of their relative spectral response functions, which are considered a cause of color distortion in fused images. Two images of the same time and location acquired by KOMPSAT-2 and -3 were used in the experiment. State-of-the-art algorithms (GIHS, GS1, GSA, and GSA-CA) were employed, and their results were analyzed quantitatively and qualitatively. Consistent with previous studies, the GSA and GSA-CA methods produced excellent quality for both KOMPSAT-2 and -3, since they minimize the spectral discordance between the intensity and PAN images through linear regression. It is notable that, under the same conditions, KOMPSAT-2 and -3 do not perform equally because of their different spectral characteristics. KOMPSAT-2 is known to over-inject the low-spatial-resolution components of the blue and green bands, whose spectral responses are greater than that of the PAN band; KOMPSAT-3, however, improves on most of these shortcomings and weaknesses of KOMPSAT-2.
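
Of the algorithms compared, GIHS is the simplest injection scheme and can be sketched directly; the regression-based GSA/GSA-CA weighting the abstract favors is not reproduced here, and the arrays below are random placeholders standing in for co-registered KOMPSAT imagery.

```python
# Minimal GIHS (generalized IHS) pan-sharpening sketch: inject the difference
# between the PAN image and an equal-weight intensity into every MS band.
import numpy as np

def gihs_pansharpen(ms: np.ndarray, pan: np.ndarray) -> np.ndarray:
    """ms: (bands, H, W) multispectral upsampled to the PAN grid; pan: (H, W)."""
    intensity = ms.mean(axis=0)               # equal-weight intensity component
    detail = pan - intensity                   # spatial detail to inject
    return ms + detail[None, :, :]             # same detail added to every band

# toy example with random data in place of co-registered KOMPSAT imagery
ms = np.random.rand(4, 256, 256).astype(np.float32)
pan = np.random.rand(256, 256).astype(np.float32)
print(gihs_pansharpen(ms, pan).shape)          # (4, 256, 256)
```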

Spacing of Intermediate Diaphragms in Horizontally Curved Steel Box Girder Bridges considering Bending-distortional Warping Normal Stress Ratio (곡선 강박스 거더의 휨-뒤틀림 응력비에 따른 중간 다이아프램 간격)

  • Lee, Jeong-Hwa; Lee, Kee-Sei; Lim, Jeong-Hyun; Choi, Jun-Ho; Kang, Young-Jong
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.9 / pp.6325-6332 / 2015
  • Although horizontally curved box girders are more susceptible to distortion than straight girders because of the curvature effect, current domestic design standards do not specify intermediate diaphragm spacing for curved box girders. In this study, parametric studies of straight and curved box girders were carried out, considering distortional warping normal stresses based on linear finite element analysis. Single-span curved girders were chosen for the analysis based on current domestic bridge data, with 1-6 solid intermediate diaphragms, subtended angles of 0-30 degrees, span lengths of 30 m and 60 m, and flange widths and web heights of 2-3 m. Adequate diaphragm spacings for the box girders are suggested considering the subtended angle and bending-to-distortional warping normal stress ratios of 5%, 10%, 15%, and 20%. The analysis results were also compared with a current design standard, and the suggested diaphragm spacings were evaluated.
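
A minimal sketch of the acceptance check implied by the stress-ratio criterion: a candidate diaphragm spacing is adequate when the distortional-warping normal stress stays below a chosen fraction (5-20%) of the bending normal stress. The stress values below are hypothetical, not the paper's FE results.

```python
# Illustrative ratio check only; real practice takes the stresses from a linear
# FE run of the girder with a given diaphragm layout.
def spacing_ok(sigma_bending: float, sigma_dw: float, limit_ratio: float = 0.10) -> bool:
    """True if distortional warping stress / bending stress <= limit (e.g. 5-20%)."""
    return sigma_dw / sigma_bending <= limit_ratio

# hypothetical stresses (MPa) for one candidate diaphragm spacing
print(spacing_ok(sigma_bending=120.0, sigma_dw=9.5, limit_ratio=0.10))  # True -> spacing adequate
```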

Problem Structuring in IT Policy: Boundary Analysis of IT Policy Problems (경계분석을 통한 정책문제 정의에 관한 연구 - 언론보도에 나타난 IT 정책문제 탐색을 중심으로 -)

  • Park, Chisung; Nam, Ki Bum
    • Korean Policy Studies Review (한국정책학회보) / v.21 no.4 / pp.199-228 / 2012
  • Policy problems are complex because of the diverse participants and relations involved in policy processes. Defining the right problem in the first place is important, because a Type III error is likely to occur if rival hypotheses are not removed when the problem is defined. This study applies the boundary analysis suggested by Dunn to structure IT policy problems in Korea. The time frame covers the five years of the Lee administration, and data were collected from four newspapers. Using content analysis, the study first identifies a total of 2,614 policy problems from 1,908 stakeholders. After removing duplicate problems, 369 problems from 323 stakeholders are identified as the boundary of the IT policy problem. Among others, failures in government policies are weighted as the most serious problems in the IT policy field. However, many significant problems raised by stakeholders date back more than a decade, and these are intrinsic problems initially caused by market distortions in the IT industry. Therefore, we should be careful not to overemphasize the most conspicuous problem as the only problem in the policy field when interpreting the results of problem structuring.
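
A small sketch of the deduplication step in Dunn-style boundary analysis: as each additional stakeholder's coded problems are added, only new formulations enlarge the set, and the boundary is taken where the cumulative count flattens. The coded data below are invented placeholders, not the study's 1,908 stakeholders.

```python
# Cumulative count of distinct problem formulations as stakeholders are added;
# a flattening curve signals that the problem boundary has been reached.
from typing import Dict, List

def boundary_curve(problems_by_stakeholder: Dict[str, List[str]]) -> List[int]:
    """Cumulative number of distinct problems after each stakeholder is added."""
    seen, curve = set(), []
    for stakeholder, problems in problems_by_stakeholder.items():
        seen.update(problems)                 # duplicate formulations collapse
        curve.append(len(seen))
    return curve

coded = {                                      # toy coded content-analysis data
    "ministry": ["broadband policy failure", "SW procurement"],
    "telco":    ["broadband policy failure", "spectrum allocation"],
    "press":    ["SW procurement", "IT workforce drain"],
}
print(boundary_curve(coded))                   # e.g. [2, 3, 4]; flattening marks the boundary
```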

Comparison of High-Resolution Images by Ortho-Rectification Accuracy and Correlation of Each Band (고해상도 영상의 정사보정 정확도 검증 및 밴드별 상관성 비교연구)

  • Jin, Cheong-Gil; Park, So-Young; Kim, Hyung-Seok; Chun, Yong-Sik; Choi, Chul-Uong
    • Journal of Korean Society for Geospatial Information Science / v.18 no.2 / pp.35-45 / 2010
  • The objective of this study is to verify the positional accuracy of ortho-rectified high-resolution satellite images and to analyze the band-by-band correlation between them, together with the correlation of NDVI. For the ortho-rectification of the KOMPSAT-2 and IKONOS images, systematic errors were removed using RPC data and non-planar distortions were corrected with GPS surveying data. By extracting image points at the same positions within the ortho images, the positional accuracy between the image points of each image and the GPS surveying points was compared, as was the positional accuracy of image points between the two images. For the band and NDVI correlations, descriptive statistics of DN values were obtained for each band, adding QuickBird images and aerial photographs that had been ortho-rectified at the time of purchase. Comparing the positional accuracies of the KOMPSAT-2 and IKONOS ortho images, the distance between the image points and the GPS surveying points was 3.41 m for KOMPSAT-2 and 1.45 m for IKONOS, a difference of 1.96 m, whereas the RMSE between image points was 1.88 m. The band and NDVI correlations among QuickBird, KOMPSAT-2, IKONOS, and the aerial photographs showed high correlation between QuickBird and IKONOS in all bands as well as in NDVI, and a high correlation between the aerial photographs and KOMPSAT-2 in Band 2. Low correlations were found between QuickBird and the aerial photographs in Band 1, between KOMPSAT-2 and IKONOS in Bands 2 and 4, and between KOMPSAT-2 and the aerial photographs in Band 3. For NDVI, KOMPSAT-2 showed low correlations with both QuickBird and IKONOS.
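
Two of the reported measures, planimetric RMSE against GPS checkpoints and inter-sensor NDVI correlation, reduce to short formulas; the sketch below uses placeholder coordinates and random bands, not the study's data.

```python
# Planimetric RMSE over (x, y) checkpoints and NDVI correlation between two
# sensors -- illustrative placeholders only.
import numpy as np

def planimetric_rmse(xy_image: np.ndarray, xy_gps: np.ndarray) -> float:
    """RMSE (same units as input, e.g. metres) over N (x, y) checkpoint pairs."""
    d2 = np.sum((xy_image - xy_gps) ** 2, axis=1)
    return float(np.sqrt(d2.mean()))

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    return (nir - red) / (nir + red + 1e-9)    # small epsilon avoids divide-by-zero

xy_img = np.array([[100.2, 200.1], [150.9, 251.3]])   # hypothetical image checkpoints
xy_gps = np.array([[101.0, 199.0], [150.0, 250.0]])   # hypothetical GPS points
print(planimetric_rmse(xy_img, xy_gps))                # positional accuracy

ndvi_a = ndvi(np.random.rand(64, 64), np.random.rand(64, 64))   # sensor A (placeholder)
ndvi_b = ndvi(np.random.rand(64, 64), np.random.rand(64, 64))   # sensor B (placeholder)
print(np.corrcoef(ndvi_a.ravel(), ndvi_b.ravel())[0, 1])        # inter-sensor NDVI correlation
```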

The Effect of Mean Brightness and Contrast of Digital Image on Detection of Watermark Noise (워터 마크 잡음 탐지에 미치는 디지털 영상의 밝기와 대비의 효과)

  • Kham Keetaek; Moon Ho-Seok; Yoo Hun-Woo; Chung Chan-Sup
    • Korean Journal of Cognitive Science / v.16 no.4 / pp.305-322 / 2005
  • Watermarking is a widely employed method of protecting the copyright of a digital image: the owner's unique image is embedded into the original image. A stronger watermark insertion level helps its resilience during extraction, even under various distortions such as changes in image size or resolution; at the same time, the level should be moderate enough not to reach human visibility. Finding a balance between these two is crucial in watermarking. In typical watermarking algorithms, a predefined watermark strength, computed from the physical difference between the original and embedded images, is applied to all images uniformly. Yet the mean brightness or contrast of the surrounding image, not only the absolute brightness of an object, can affect human sensitivity for object detection. In the present study, we examined whether the detectability of watermark noise is altered by image statistics: the mean brightness and contrast of the image. As a first step, we made nine fundamental images with varied brightness and contrast from the original image and measured the detectability of watermark noise for each. The results showed that the watermark noise strength required for detection increased as the brightness and contrast of the fundamental image increased. We fitted the data with a regression line that can be used to estimate the watermark strength for an image of given brightness and contrast. Although other factors must be considered before applying this formula directly to an actual watermarking algorithm, an adaptive watermarking algorithm could be built on it using image statistics such as brightness and contrast.
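
The fitted relationship can be sketched as an ordinary least-squares plane relating the detection threshold for watermark noise to mean brightness and contrast; the nine (brightness, contrast, threshold) triples below are invented placeholders, not the experimental data.

```python
# Least-squares fit of threshold ~ b0 + b1*brightness + b2*contrast, then use
# the fitted plane to estimate a threshold for a new image. Data are made up.
import numpy as np

data = np.array([
    # brightness, contrast, measured threshold (arbitrary units)
    [ 60, 0.2, 1.1], [ 60, 0.5, 1.4], [ 60, 0.8, 1.8],
    [120, 0.2, 1.5], [120, 0.5, 1.9], [120, 0.8, 2.4],
    [180, 0.2, 2.0], [180, 0.5, 2.5], [180, 0.8, 3.1],
])
X = np.column_stack([np.ones(len(data)), data[:, 0], data[:, 1]])  # [1, brightness, contrast]
y = data[:, 2]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)          # fit the regression plane

b0, b_brightness, b_contrast = coef
estimate = b0 + b_brightness * 140 + b_contrast * 0.6  # threshold for a new image
print(coef, estimate)
```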
