• Title/Summary/Keyword: Code Quality

A Rapid Region-of-Interest Processing Technique using Mask Patterns for JPEG2000 (JPEG2000에서 마스크 패턴을 이용한 빠른 관심영역 처리 기법)

  • Lee, Jum-Sook;Ha, Seok-Woon;Park, Jae-Heung;Seo, Yeong-Geon;Kang, Ki-Jun;Hong, Seok-Won;Kim, Sang-Bok
    • Journal of the Korea Society of Computer and Information / v.15 no.6 / pp.19-27 / 2010
  • A region-of-interest (ROI) processing technique preferentially handles the parts of a JPEG2000 image that the user dynamically designates as interesting. For a small image this matters little, but for a large image the region the user indicates must be handled first, because displaying the whole image takes a long time. When the user indicates a region on the outline image, the browser masks the region and sends the mask information to the source that transmitted the image. The server, on receiving the mask information, sends the code blocks matching the masks first. Generating the mask information quickly is therefore essential. In this paper, using 48 predefined mask patterns and selecting one according to the distribution of ROI and background, we markedly reduced the time needed to compute the mask region. The patterns are applied only to blocks in which ROI and background are mixed; blocks that are entirely ROI or entirely background do not use them. As a result, compared with a method that handles ROI and background exactly, the quality is somewhat lower but the processing time is greatly reduced.
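
The pattern-selection idea can be sketched as follows. This is a hypothetical illustration, not the authors' 48-pattern set: a code block whose ROI indicator is pure passes through, while a mixed block is replaced by the nearest of a few predefined half-block masks by Hamming distance, skipping the exact per-pixel mask computation. The block size and the four patterns are assumptions.

```python
import numpy as np

BLOCK = 8  # assumed code-block size

def half_patterns(n=BLOCK):
    """Four coarse half-block ROI patterns (1 = ROI): top, bottom, left, right."""
    h = n // 2
    top = np.zeros((n, n), np.uint8); top[:h, :] = 1
    left = np.zeros((n, n), np.uint8); left[:, :h] = 1
    return [top, 1 - top, left, 1 - left]

PATTERNS = half_patterns()

def block_mask(roi_block):
    """Pure blocks pass through unchanged; mixed blocks get the nearest
    predefined pattern (Hamming distance) instead of an exact mask."""
    frac = roi_block.mean()
    if frac in (0.0, 1.0):  # entirely ROI or entirely background
        return roi_block
    dists = [np.count_nonzero(p != roi_block) for p in PATTERNS]
    return PATTERNS[int(np.argmin(dists))]  # coarse but fast approximation
```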

Design and Implementation of an HNS Accident Tracking System for Rapid Decision Making (신속한 의사결정을 위한 HNS 사고이력관리시스템 설계 및 구현)

  • Jang, Ha-Lyong;Ha, Min-Jae;Jang, Ha-Seek;Yun, Jong-Hwui;Lee, Eun-Bang;Lee, Moon-jin
    • Journal of the Korean Society of Marine Environment & Safety / v.23 no.2 / pp.168-176 / 2017
  • HNS accidents involve large-scale fires and explosions, causing numerous human casualties and severe environmental pollution in the surrounding area. Rapid decision making is needed to prevent the widespread diffusion of their effects. In this study, a high-quality, standardized, and digitized HNS accident database was built based on a proposed HNS standard code. Furthermore, the HNS Accident Tracking System (HATS) was designed and implemented to allow systematic, integrated management and sharing. In addition, statistical analysis was performed with HATS on 76 domestic HNS accident records collected over 23 years. In Korea, an average of 3.3 HNS accidents occurred each year, and the major HNS accident factors were Springs (41%), Aprons (51%), Chemical Carriers (49%), Crew's Fault (45%), and Xylenes (12%), where each number in parentheses is the percentage of that factor within its HNS accident classification.

Research in the Direction of Improvement of the Web Site Utilizing Google Analytics (구글 애널리틱스를 활용한 웹 사이트의 개선방안 연구 : 앱팩토리를 대상으로)

  • Kim, Donglim;Lim, Younghwan
    • Cartoon and Animation Studies / s.36 / pp.553-572 / 2014
  • In this paper, to evaluate the usability of a particular Web site (www.appbelt.net), the Google Analytics log-tracking code was inserted into the site's pages to collect visitor behavior data, and, after the overall quality of the site was assessed with Coolcheck, improvement measures for the site's problems were studied. The findings are expected to help the company set its priority target values correctly, guide the direction of its business decisions, and improve the service so that it fits users' needs and behavior.

Lightweight video coding using spatial correlation and symbol-level error-correction channel code (공간적 유사성과 심볼단위 오류정정 채널 코드를 이용한 경량화 비디오 부호화 방법)

  • Ko, Bong-Hyuck;Shim, Hiuk-Jae;Jeon, Byeung-Woo
    • Journal of Broadcast Engineering / v.13 no.2 / pp.188-199 / 2008
  • In conventional video coding, encoder complexity is much higher than decoder complexity. Recently, however, lightweight encoders that eliminate motion prediction/compensation, which claims most of the encoder's complexity, have become an important research issue. Wyner-Ziv coding is one of the representative schemes for this problem: the encoder generates only parity bits of the current frame without performing any process that extracts correlation between frames, so it has an extremely simple structure compared with conventional coding techniques. In Wyner-Ziv coding, however, channel-decoding errors occur when noisy side information is used in the channel-decoding process. These errors appear more frequently when there is not enough correlation between frames to generate accurate side information, and they look like salt-and-pepper noise in the reconstructed frame. Because such noise severely degrades subjective video quality even though it occurs rarely, we previously proposed a computationally very light encoding method based on a selective median filter that corrects the noise using the spatial correlation within a frame. In that method, however, for sequences with complex texture, the loss of texture caused by filtering can exceed the gain from the filter's error correction. Therefore, in this paper we propose an improved lightweight encoding method that minimizes the loss of texture detail by letting the selective median filter use both the texture information and the noise information in the side information. Our experiments verified an average PSNR gain of up to 0.84 dB over the previous method.
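
A generic selective median filter of the kind the abstract describes, as a minimal sketch: only pixels that deviate strongly from their neighborhood median (salt-and-pepper-like outliers) are replaced, so texture elsewhere is left untouched. The 3x3 window and the threshold value are assumptions, not the authors' decision rule.

```python
import numpy as np
from scipy.ndimage import median_filter

def selective_median(frame, threshold=40):
    """Replace only salt-and-pepper-like outliers with the local median;
    `threshold` (assumed) controls how far a pixel may deviate from its
    3x3 neighborhood median before it is treated as a decoding error."""
    med = median_filter(frame.astype(np.int16), size=3)
    noisy = np.abs(frame.astype(np.int16) - med) > threshold
    out = frame.copy()
    out[noisy] = med[noisy].astype(frame.dtype)
    return out
```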

Image Compression Using DCT Map FSVQ and Single-side Distribution Huffman Tree (DCT 맵 FSVQ와 단방향 분포 허프만 트리를 이용한 영상 압축)

  • Cho, Seong-Hwan
    • The Transactions of the Korea Information Processing Society / v.4 no.10 / pp.2615-2628 / 1997
  • In this paper, a new codebook design algorithm is proposed. It uses a DCT map based on the two-dimensional discrete cosine transform (2D DCT) and a finite-state vector quantizer (FSVQ) when the vector quantizer is designed for image transmission. The map is made by dividing the input image according to edge quantity; guided by the map, the significant features of the training image are then extracted using the 2D DCT. A master codebook of the FSVQ is generated by partitioning the training set with a binary tree. The state codebook is constructed from the master codebook, and the index of the input image is searched in the state codebook rather than the master codebook. Because index coding is an important part of high-speed digital transmission, fixed-length codes are converted to variable-length codes according to the entropy-coding rule, and Huffman coding assigns transmission codes to the codebook entries. This paper proposes a single-side growing Huffman tree to speed up the code-generation process of the Huffman tree. Compared with the pairwise nearest neighbor (PNN) and classified VQ (CVQ) algorithms on the Einstein and Bridge images, the new algorithm shows better picture quality, with gains of 2.04 dB and 2.48 dB over PNN and 1.75 dB and 0.99 dB over CVQ, respectively.
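
For reference, the ordinary Huffman construction over codebook-index frequencies is sketched below; the paper's single-side growing tree is a shape-constrained variant intended to speed up exactly this step, and is not reproduced here. The example frequencies are invented.

```python
import heapq

def huffman_codes(freqs):
    """Standard Huffman coding: repeatedly merge the two least frequent
    groups, prefixing '0'/'1' to the codes of the merged symbols."""
    heap = [(f, i, (sym,)) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    codes = {sym: "" for sym in freqs}
    while len(heap) > 1:
        f1, _, s1 = heapq.heappop(heap)
        f2, _, s2 = heapq.heappop(heap)
        for sym in s1: codes[sym] = "0" + codes[sym]
        for sym in s2: codes[sym] = "1" + codes[sym]
        heapq.heappush(heap, (f1 + f2, tiebreak, s1 + s2))
        tiebreak += 1
    return codes

# More frequent codebook indices receive shorter transmission codes.
print(huffman_codes({0: 45, 1: 13, 2: 12, 3: 16, 4: 9, 5: 5}))
```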

Application of MPI Technique for Distributed Rainfall-Runoff Model (분포형 강우유출모형 병렬화 처리기법 적용)

  • Chung, Sung-Young;Park, Jin-Hyeog;Hur, Young-Teck;Jung, Kwan-Sue
    • Journal of Korea Water Resources Association / v.43 no.8 / pp.747-755 / 2010
  • Distributed models have a relative weakness compared with the conceptual models used so far: the computer memory and calculation time required to compute water flow with a numerical analysis based on kinematic wave theory. Typically, distributed models have been applied mainly to small basins; to make them applicable to large watersheds, the grid resolution had to be decreased, because a higher resolution would take too long to compute. This has been one of the factors hindering the model's use in practice. In this paper, the MPI (Message Passing Interface) technique was applied to solve the calculation-time problem, one of the demerits of distributed models performing physically based, complicated numerical calculations for large watersheds. Comparison runs were performed on a single domain and on divided subdomains of the Yongdam Dam watershed for typhoon 'Ewiniar' in 2006, to analyze the effect of the parallelization technique. As a result, the parallelized code ran up to 10 times faster than a single processor while keeping the same quality of computed discharge.
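
A minimal mpi4py sketch of the domain-decomposition idea: each rank computes on its own strip of the watershed grid and the partial outlet values are reduced on rank 0. The grid size, the stand-in rainfall, and the trivial per-strip "routing" are placeholders for the model's kinematic-wave computation, not its actual numerics.

```python
# Run with, e.g.: mpiexec -n 4 python runoff_mpi.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nrows = 1000  # assumed number of watershed grid rows
rows = np.array_split(np.arange(nrows), size)[rank]           # this rank's strip
rain = np.random.default_rng(rank).random((len(rows), 500))   # stand-in rainfall

local_q = rain.sum()                        # placeholder for routed discharge
total_q = comm.reduce(local_q, op=MPI.SUM, root=0)

if rank == 0:
    print(f"combined discharge proxy from {size} ranks: {total_q:.1f}")
```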

On characteristics of environmental correction factors in the South Indian Ocean by Topex/Poseidon satellite altimetric data (Topex/Poseidon 위성의 Altimeter자료를 이용한 남인도양의 환경보정인자 특성에 관한 연구)

  • 윤홍주;김영섭;이재철
    • Korean Journal of Remote Sensing / v.14 no.2 / pp.117-128 / 1998
  • The Topex/Poseidon satellite, launched in August 1992, has provided more than 5 years of very high-quality data. Improvements in both instrumental accuracy and sea-level data correction have made Topex/Poseidon a powerful tool for many researchers. The first-mission data of 73 cycles (September 1992 - August 1994) were used in this study to characterize the environmental correction factors in the Amsterdam-Crozet-Kerguelen region of the South Indian Ocean. Following the standard procedures defined in the user handbook for sea-surface-height data processing, cycle 43 was chosen as the reference cycle because it provided complete data at the measurement points and the most exact ground-track position compared with the other cycles. The variations of the various correction factors were computed along ascending ground track 103 (Amsterdam-Kerguelen continental plateau) and descending ground track 170 (Crozet basin). The variations of the ionosphere, dry troposphere, humid troposphere, electromagnetic bias, elastic tide, and loading tide corrections were generally very small, a few centimeters, but the variations of the ocean tide (30-35 cm) and inverted barometer (15-30 cm) corrections were larger than the other factors. For the ocean-tide correction, our model CEFMO (Code d'Elements Finis pour la Maree Oceanique), a hydrodynamic model that performs well in all oceanic situations, was used because it gives especially good solutions in coastal and island areas as well as in the open sea. In conclusion, the variation of the ocean free surface in the Amsterdam-Crozet-Kerguelen region of the South Indian Ocean is mainly governed by tides (>80-90%).
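
Purely as an illustration of how such corrections combine (all values invented, though with ocean tide and inverted barometer dominating as the abstract reports), corrected sea-surface height is the raw altimeter height minus the sum of the environmental corrections:

```python
# Hypothetical correction magnitudes in centimeters (invented for illustration).
corrections_cm = {
    "ionosphere": 0.8, "dry_troposphere": 1.2, "humid_troposphere": 2.5,
    "em_bias": 1.0, "elastic_tide": 1.5, "loading_tide": 0.9,
    "ocean_tide": 32.0, "inverted_barometer": 22.0,
}

def corrected_ssh(raw_ssh_cm, corr=corrections_cm):
    """Corrected sea-surface height = raw height minus all corrections."""
    return raw_ssh_cm - sum(corr.values())

# Share of the total correction contributed by the tidal terms.
tidal = corrections_cm["ocean_tide"] + corrections_cm["loading_tide"]
print(f"tidal share of total correction: {tidal / sum(corrections_cm.values()):.0%}")
```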

Service Life Variation for RC Structure under Carbonation Considering Korean Design Standard and Design Cover Depth (국내설계기준과 피복두께를 고려한 RC 구조물의 탄산화 내구수명의 변동성)

  • Kim, Yun-Shik;Kwon, Seung-Jun
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.25 no.5 / pp.15-23 / 2021
  • In this paper, the service life of an RC (Reinforced Concrete) substructure subjected to carbonation was evaluated through deterministic and probabilistic methods, considering field investigation data and the design code (KDS 14 20 40). Furthermore, changes in service life with increasing COV (Coefficient of Variation) and the equivalent safety index yielding the same service life were studied. From the investigation, the mean cover depth was evaluated at 70.0~90.0 mm with a COV of 0.2. With an intended failure probability of 10.0% and a cover depth of 70 mm, the service life decreased to 137, 123, and 91 years as the COV increased to 0.05, 0.1, and 0.2, respectively; for a cover depth of 80 mm, it changed to 179, 161, and 120 years. The equivalent safety index matching the service life from the deterministic method was 1.66~3.43 for the 70 mm cover depth and 1.61~3.24 for the 80 mm cover depth. Because the various design parameters covering local environment and quality conditions in the deterministic method yield considerable differences in service life, design parameters should be determined for the exposure conditions and parameter variation.
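
A minimal Monte Carlo sketch of the probabilistic evaluation: carbonation depth grows as k*sqrt(t), cover depth is sampled with a given COV, and the service life is the last year at which the failure probability stays below the 10% target. The rate k and the normal cover distribution are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 6.0          # assumed carbonation rate, mm per sqrt(year)
P_TARGET = 0.10  # intended failure probability from the abstract

def service_life(mean_cover_mm, cov, n=100_000, t_max=300):
    """Years until P(carbonation depth >= cover depth) first exceeds P_TARGET."""
    cover = rng.normal(mean_cover_mm, cov * mean_cover_mm, n)
    for t in range(1, t_max):
        if np.mean(K * np.sqrt(t) >= cover) > P_TARGET:
            return t - 1
    return t_max

# Service life shrinks as cover-depth scatter (COV) grows, as in the abstract.
for cov in (0.05, 0.1, 0.2):
    print(f"cover 70 mm, COV {cov}: ~{service_life(70.0, cov)} years")
```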

Power-efficiency Analysis of the MIMO-VLC System considering Dimming Control (조광제어를 고려한 MIMO-VLC 시스템의 전력 효율 분석)

  • Kim, Yong-Won;Lee, Byung-Jin;Lee, Byung-Hoon;Lee, Min-Jung;Kim, Kyung-Seok
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.18 no.6 / pp.169-180 / 2018
  • White light-emitting diodes (LEDs) are more economical than fluorescent lights and provide high brightness, long life expectancy, and greater durability. As LEDs are closely connected with people's daily lives, LED dimming control is an important component for saving energy and improving quality of life. In visible light communication (VLC) systems using these LEDs, multiple-input multiple-output (MIMO) technology has attracted much attention because it can increase channel capacity in proportion to the number of antennas. This paper analyzes the power performance of three modulation schemes in VLC systems applying space-time block code (STBC) techniques: return-to-zero on-off keying (RZ-OOK), variable pulse position modulation (VPPM), and overlapping pulse position modulation (OPPM), all with dimming control applied. Power requirement and power consumption were used as metrics to compare power efficiency in 2×2 STBC-VLC environments under the three modulation schemes. We confirm that dimming control affects the communication performance of each scheme: VPPM showed the greatest consumption among the three modulations, while OPPM showed energy savings compared with VPPM.
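
As a rough sketch of how a dimming target constrains two of the schemes (simplified mappings, not the paper's power model): under VPPM the average power follows the dimming ratio directly through pulse width, while under RZ-OOK the pulse amplitude must be raised to hit the same target, which can exceed the LED peak at high dimming levels.

```python
P_PEAK = 1.0  # normalized LED peak optical power (assumption)

def vppm_avg_power(d):
    """VPPM: the pulse occupies a fraction d of the slot, so average = d * peak."""
    return d * P_PEAK

def rz_ook_amplitude(d, duty=0.5, p_one=0.5):
    """RZ-OOK: with equiprobable bits and an RZ duty cycle, the amplitude
    needed for average power d * P_PEAK is d / (p_one * duty); values above
    P_PEAK are infeasible without clipping."""
    return d * P_PEAK / (p_one * duty)

for d in (0.3, 0.5, 0.7):
    print(f"dimming {d:.0%}: VPPM avg {vppm_avg_power(d):.2f}, "
          f"RZ-OOK amplitude {rz_ook_amplitude(d):.2f}")
```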

A study on the development of severity-adjusted mortality prediction model for discharged patient with acute stroke using machine learning (머신러닝을 이용한 급성 뇌졸중 퇴원 환자의 중증도 보정 사망 예측 모형 개발에 관한 연구)

  • Baek, Seol-Kyung;Park, Jong-Ho;Kang, Sung-Hong;Park, Hye-Jin
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.11 / pp.126-136 / 2018
  • The purpose of this study was to develop a severity-adjustment model for predicting mortality in acute stroke patients using machine learning. Using the Korean National Hospital Discharge In-depth Injury Survey from 2006 to 2015, patients with disease codes I60-I63 (KCD 7) were extracted for analysis. Three tools were used for severity adjustment of comorbidity: the Charlson Comorbidity Index (CCI), the Elixhauser Comorbidity Index (ECI), and the Clinical Classifications Software (CCS). The severity-adjusted mortality prediction models for patients with acute stroke were developed using logistic regression, decision tree, neural network, and support vector machine methods. The most common comorbidity in stroke patients was uncomplicated hypertension (43.8%) under the ECI and essential hypertension (43.9%) under the CCS. Among the CCI, ECI, and CCS, the CCS had the highest AUC and was confirmed as the best severity-correction tool. Using the CCS variables together with main diagnosis, gender, age, hospitalization route, and existence of surgery, the AUC was 0.808 for logistic regression, 0.785 for the decision tree, 0.809 for the neural network, and 0.830 for the support vector machine, so the best predictive power was achieved by the support vector machine. These results can be used in establishing future health policy.
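
A minimal scikit-learn sketch of the model comparison described: the same features feed four classifiers and test-set AUC is compared. The synthetic, imbalanced data merely stands in for the discharge-survey records, and the hyperparameters are arbitrary.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Imbalanced synthetic data, loosely mimicking a low mortality rate.
X, y = make_classification(n_samples=2000, n_features=10, weights=[0.9],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                                    random_state=0),
    "support vector machine": SVC(probability=True, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```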