• Title/Summary/Keyword: Size normalization


Comic Image Normalization using the gradient Radon Transform based on OpenCL implementation (OpenCL 기반의 그래디언트 라돈변환을 이용한 만화영상의 정규화)

  • Kim, Dong-Keun;Jeon, Hyeok-June;Hwang, Chi-Jung
    • The KIPS Transactions:PartB
    • /
    • v.18B no.4
    • /
    • pp.221-230
    • /
    • 2011
  • Digital comic images are among the most popular content on the Internet. They are usually scanned from comic books with digital scanners, and without post-processing they may differ in size, carry skew, and have margins beyond the content at the boundary. Normalizing the size of the content, free of skew and margins, is an important step in comic image analysis and in applications such as content-based comic image retrieval systems. In this paper, we propose a method that detects the box frame in comic images by extracting line segments with the gradient Radon transform. The box frame of a comic image is the maximal rectangle that contains the content without the margins. We use the detected box frame to normalize the size of comic images and to remove their skew. In addition, the proposed method is implemented in OpenCL to speed up the detection of the line segments. Experimental results show that the proposed method effectively detects the box frame in comic images.
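As a rough illustration of the box-frame idea: at 0° and 90° the Radon transform of the gradient magnitude reduces to row and column sums, so the unskewed case can be sketched in a few lines of NumPy. The function names and the peak threshold below are illustrative, not the paper's; the full method scans many projection angles (accelerated with OpenCL) to recover skew as well.

```python
import numpy as np

def detect_box_frame(img):
    """Locate the box frame (the maximal content rectangle) of a
    grayscale page by projecting gradient magnitude onto the axes.
    At 0 and 90 degrees the Radon transform is exactly these row and
    column sums, so this sketch handles the unskewed case only."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    row_proj = mag.sum(axis=1)   # horizontal border lines peak here
    col_proj = mag.sum(axis=0)   # vertical border lines peak here
    rows = np.flatnonzero(row_proj > row_proj.mean() + row_proj.std())
    cols = np.flatnonzero(col_proj > col_proj.mean() + col_proj.std())
    # The outermost strong peaks are taken as the frame borders.
    return int(rows[0]), int(rows[-1]), int(cols[0]), int(cols[-1])

def normalize_to_frame(img):
    """Crop to the detected box frame; rescaling the crop to a canonical
    size (and deskewing, in the full method) would follow."""
    top, bottom, left, right = detect_box_frame(img)
    return img[top:bottom + 1, left:right + 1]
```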

Airway Reactivity to Bronchoconstrictor and Bronchodilator: Assessment Using Thin-Section and Volumetric Three-Dimensional CT

  • Boo-Kyung Han;Jung-Gi Im;Hak Soo Kim;Jin Mo Koo;Hong Dae Kim;Kyung Mo Yeon
    • Korean Journal of Radiology
    • /
    • v.1 no.3
    • /
    • pp.127-134
    • /
    • 2000
  • Objective: To determine the extent to which thin-section and volumetric three-dimensional CT can depict airway reactivity to bronchostimulators, and to assess the effect of different airway sizes on the degree of reactivity. Materials and Methods: In eight dogs, thin-section CT scans were obtained before and after the administration of methacholine and ventolin. Cross-sectional areas of bronchi at multiple levels, as shown by axial CT, proximal airway volume as revealed by three-dimensional imaging, and peak airway pressure were measured. The significance of the airway changes induced by methacholine and ventolin, expressed as percentage changes in cross-sectional area, proximal airway volume, and peak airway pressure, was statistically evaluated, as was the correlation between the degree of airway reactivity and airway area. Results: Cross-sectional areas of the bronchi decreased significantly after the administration of methacholine, and scans obtained after a delay of 5 minutes showed that normalization was insufficient. Ventolin induced a significant increase in cross-sectional areas and in proximal airway volume, while the effect of methacholine on the latter was the opposite. Peak airway pressure increased after the administration of methacholine and returned to near the control level after a 5-minute delay; ventolin, however, induced no significant decrease. The degree of airway reactivity did not correlate with airway size. Conclusion: Thin-section and volumetric spiral CT with three-dimensional reconstruction can demonstrate airway reactivity to bronchostimulators. The degree of reactivity did not correlate with airway size.


n-Gram/2L: A Space and Time Efficient Two-Level n-Gram Inverted Index Structure (n-gram/2L: 공간 및 시간 효율적인 2단계 n-gram 역색인 구조)

  • Kim Min-Soo;Whang Kyu-Young;Lee Jae-Gil;Lee Min-Jae
    • Journal of KIISE:Databases
    • /
    • v.33 no.1
    • /
    • pp.12-31
    • /
    • 2006
  • The n-gram inverted index has two major advantages: it is language-neutral and error-tolerant. Due to these advantages, it has been widely used in information retrieval and in similar-sequence matching for DNA and protein databases. Nevertheless, the n-gram inverted index also has drawbacks: it tends to be very large, and query performance tends to be poor. In this paper, we propose the two-level n-gram inverted index (the n-gram/2L index, for short), which significantly reduces the size and improves the query performance while preserving the advantages of the n-gram inverted index. The proposed index eliminates the redundancy of the position information that exists in the n-gram inverted index. It is constructed in two steps: 1) extracting subsequences of length m from documents and 2) extracting n-grams from those subsequences. We formally prove that this two-step construction is identical to the relational normalization process that removes the redundancy caused by a non-trivial multivalued dependency. The n-gram/2L index has excellent properties: 1) it significantly reduces the size and improves the performance compared with the n-gram inverted index, and these improvements become more marked as the database grows; 2) the query processing time increases only very slightly as the query length grows. Experimental results using 1-GByte databases show that the size of the n-gram/2L index is reduced by a factor of 1.9 to 2.7 and, at the same time, query performance is improved by up to 13.1 times compared with the n-gram inverted index.
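The two-step construction can be sketched as follows: length-m subsequences are taken to overlap by n-1 characters so that no n-gram straddling a boundary is lost, and position lists for a repeated subsequence are stored once, which is the redundancy the paper removes via relational normalization. Names and structures are illustrative, not the paper's implementation.

```python
from collections import defaultdict

def build_ngram_2l(docs, m=4, n=2):
    """Sketch of the two-level n-gram index construction.

    Step 1: split each document into length-m subsequences overlapping
    by n-1 characters (back index: subsequence -> document positions).
    Step 2: index the n-grams of each *distinct* subsequence
    (front index: n-gram -> positions within subsequences)."""
    back = defaultdict(list)       # subsequence -> [(doc_id, offset), ...]
    step = m - (n - 1)
    for doc_id, text in enumerate(docs):
        for off in range(0, max(len(text) - n + 1, 1), step):
            back[text[off:off + m]].append((doc_id, off))
    front = defaultdict(list)      # n-gram -> [(subseq_id, offset), ...]
    subseqs = list(back)           # distinct subsequences, insertion order
    for sid, s in enumerate(subseqs):
        for off in range(len(s) - n + 1):
            front[s[off:off + n]].append((sid, off))
    return front, back, subseqs
```

A query would look up its n-grams in the front index, then map the matching subsequences to document positions through the back index.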

An analysis of emotional English utterances using the prosodic distance between emotional and neutral utterances (영어 감정발화와 중립발화 간의 운율거리를 이용한 감정발화 분석)

  • Yi, So-Pae
    • Phonetics and Speech Sciences
    • /
    • v.12 no.3
    • /
    • pp.25-32
    • /
    • 2020
  • An analysis of emotional English utterances covering 7 emotions (calm, happy, sad, angry, fearful, disgust, surprised) was conducted by measuring the prosodic distance between 672 emotional and 48 neutral utterances. Applying a technique from an automatic English-pronunciation evaluation model to emotional utterances, Euclidean distances over 3 prosodic elements extracted from emotional and neutral utterances, namely F0, intensity, and duration, were utilized. This paper furthermore extended the analytical methods to include Euclidean distance normalization, z-scores, and z-score normalization, resulting in 4 groups of measurement schemes (sqrF0, sqrINT, sqrDUR; norsqrF0, norsqrINT, norsqrDUR; sqrzF0, sqrzINT, sqrzDUR; norsqrzF0, norsqrzINT, norsqrzDUR). The results of both the perceptual and the acoustical analysis of emotional utterances consistently indicated that norsqrF0, norsqrINT, and norsqrDUR, the group that normalizes the Euclidean measurement, were the most effective of the 4 groups. The greatest acoustical change of prosodic information under emotion appeared in F0, followed by duration and intensity in descending order of effect size, based on the estimated distance between emotional utterances and their neutral counterparts. A Tukey post-hoc test revealed 4 homogeneous subsets (calm
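The distance measurement underlying these schemes can be sketched as follows, assuming frame-wise contours (e.g. F0 per frame) that are linearly time-normalized to a common length; the scheme names mirror the abstract's sqr/norsqr/sqrz/norsqrz grouping, but the exact normalization constants are assumptions.

```python
import numpy as np

def prosodic_distances(emo, neu):
    """Euclidean distances between an emotional and a neutral contour
    in four flavours: plain (sqr), length-normalized (norsqr),
    z-scored (sqrz), and normalized z-scored (norsqrz).
    Contours are assumed non-constant (z-score needs a nonzero std)."""
    k = min(len(emo), len(neu))
    t = np.linspace(0, 1, k)
    e = np.interp(t, np.linspace(0, 1, len(emo)), emo)   # time-normalize
    u = np.interp(t, np.linspace(0, 1, len(neu)), neu)
    z = lambda x: (x - x.mean()) / x.std()               # per-utterance z-score
    sqr = np.linalg.norm(e - u)
    sqrz = np.linalg.norm(z(e) - z(u))
    return {"sqr": sqr, "norsqr": sqr / k,
            "sqrz": sqrz, "norsqrz": sqrz / k}
```

Note how z-scoring removes a constant level shift between speakers, while the length normalization makes utterances of different durations comparable.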

Quality Visualization of Quality Metric Indicators based on Table Normalization of Static Code Building Information (정적 코드 내부 정보의 테이블 정규화를 통한 품질 메트릭 지표들의 가시화를 위한 추출 메커니즘)

  • Chansol Park;So Young Moon;R. Young Chul Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.5
    • /
    • pp.199-206
    • /
    • 2023
  • Modern software consists of very large bodies of source code, which increases the importance and necessity of static analysis for high-quality products. Static analysis of the code must identify its defects and complexity, and visualizing these problems guides developers and stakeholders toward understanding them in the source code. Our previous visualization research focused only on storing the results of static analysis in database tables, querying the calculations for quality indicators (CK metrics, coupling, number of function calls, bad smells), and finally visualizing the extracted information. This approach takes considerable time and space to analyze a code base from the extracted information, because the tables are not normalized: joining the tables (classes, functions, attributes, etc.) to extract information about the code wastes both space and time. To solve these problems, we propose a normalized design of the database tables, an extraction mechanism for the quality metric indicators inside the code, and a visualization of the extracted quality indicators on the code. With this mechanism, we expect the code visualization process to be optimized and developers to be guided toward the modules that need refactoring. In future work, we will apply learning to parts of this process.
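The normalization idea can be sketched with an in-memory SQLite schema: static-analysis facts are split into small keyed tables, so an indicator such as coupling becomes a join over normalized tables rather than a scan of one wide denormalized one. Table and column names here are illustrative, not the paper's schema.

```python
import sqlite3

# Normalized schema sketch: classes, methods, and calls in separate
# tables linked by integer keys (illustrative names, not the paper's).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE classes (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE methods (id INTEGER PRIMARY KEY,
                      class_id INTEGER REFERENCES classes(id), name TEXT);
CREATE TABLE calls   (caller_id INTEGER REFERENCES methods(id),
                      callee_id INTEGER REFERENCES methods(id));
""")
conn.executemany("INSERT INTO classes VALUES (?, ?)", [(1, "A"), (2, "B")])
conn.executemany("INSERT INTO methods VALUES (?, ?, ?)",
                 [(1, 1, "A.f"), (2, 1, "A.g"), (3, 2, "B.h")])
conn.executemany("INSERT INTO calls VALUES (?, ?)", [(1, 3), (2, 3), (3, 1)])

# One indicator query: coupling as the number of cross-class calls.
row = conn.execute("""
SELECT COUNT(*) FROM calls
JOIN methods mc ON mc.id = calls.caller_id
JOIN methods me ON me.id = calls.callee_id
WHERE mc.class_id <> me.class_id
""").fetchone()
print(row[0])   # cross-class call count feeding the visualization
```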

Research of Phase Correlation Method for Identifying Quantitative Similarity in Adjacent Real-time Streaming Frame

  • Cho, Yongjin;Yun, Yeji;Lee, Kyou-seung;Oh, Jong-woo;Lee, DongHoon
    • Proceedings of the Korean Society for Agricultural Machinery Conference
    • /
    • 2017.04a
    • /
    • pp.157-157
    • /
    • 2017
  • To minimize damage by wild birds and to obtain benefits such as protection against weeds and maintenance of the water content in soil, mulching with black vinyl should be carried out after seeding. Non-contact, non-destructive methods that can continuously determine crop locations are therefore necessary. In this study, a crop position detection method was studied that uses an infrared thermal image sensor to determine the cotyledon position under the vinyl mulch. A moving system for acquiring image arrays was developed to continuously detect crop locations under plastic mulching in the field. A sliding mechanical device moved the sensors, arranged as a linear array, perpendicular to the array using a micro-controller integrated with a stepping motor. The experiments were conducted while moving the IR sensor at 4.00 cm/s, set by the rotational speed of the stepping motor under a digital pulse-width-modulation signal from the micro-controller. The acquired images were calibrated with the spatial image correlation, and the collected data were processed with a moving average on interpolation to determine the frame with the smallest variance, at a resolution of 1.02 cm. Non-linear integral interpolation was one method of analyzing the frequency using the normalized image and then arbitrarily increasing the limited data of 16 × 4 pixels per frame; it relatively reduces the size of the overlapping pixels. Splitting frames into units of 0.1 pixel instead of 1 pixel can yield a method more than 10 times as accurate as the existing correction method. The non-integral calibration method applied this subdivision to the pixels to find the optimal correction resolution based on the first inverted frequency. To find the correct resolution, the expected location of the first crop was indicated near pixel 4 in the inversion frequency; for the most optimized resolution, each pixel was divided into 0.4-pixel steps instead of one pixel to find where the lowest frequency exists.
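The phase correlation of the title can be sketched with NumPy FFTs: whitening the cross-power spectrum turns the inverse transform into a sharp peak at the inter-frame displacement. This is a generic integer-shift sketch, not the paper's calibrated pipeline; the sub-pixel (0.1-pixel) refinement described above would interpolate around the peak.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (row, col) displacement of frame `a`
    relative to frame `b` by phase correlation: the normalized
    cross-power spectrum of the two FFTs inverts to a delta at the
    shift.  Sub-pixel refinement would interpolate around the peak."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12        # keep phase, drop magnitude
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks beyond the midpoint wrap around to negative shifts.
    return tuple(int(p) - s if p > s // 2 else int(p)
                 for p, s in zip(peak, corr.shape))
```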


Neural Predictive Coding for Text Compression Using GPGPU (GPGPU를 활용한 인공신경망 예측기반 텍스트 압축기법)

  • Kim, Jaeju;Han, Hwansoo
    • KIISE Transactions on Computing Practices
    • /
    • v.22 no.3
    • /
    • pp.127-132
    • /
    • 2016
  • Several methods have been proposed in the past to apply artificial neural networks to text compression, but both the networks and the compression targets were limited to small sizes by the hardware of the time. Modern GPUs now offer calculation capability an order of magnitude beyond CPUs, even though CPUs have also become faster, so it is now possible to train larger and more complex neural networks in a shorter time. This paper proposes a method that transforms the distribution of the original data with a probabilistic neural predictor. Experiments were performed on a feedforward neural network and on a recurrent neural network with gated recurrent units. The recurrent model outperformed the feedforward network in both compression rate and prediction accuracy.
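The predictive-coding idea can be sketched by swapping the neural predictor for a simple context model: whichever model supplies the next-symbol distribution, an entropy coder spends about -log2 p bits per symbol, so better prediction directly means smaller output. The model below is an illustrative stand-in for the paper's feedforward/GRU predictors, not their method.

```python
import math
from collections import Counter, defaultdict

def predictive_code_length(text, order=2):
    """Estimate the compressed size in bits when a predictor feeds an
    entropy coder: each symbol costs -log2 p bits under the model's
    predicted probability p.  A simple order-`order` context model
    stands in here for a neural predictor."""
    counts = defaultdict(Counter)
    alphabet = len(set(text))
    bits = 0.0
    for i, ch in enumerate(text):
        ctx = text[max(0, i - order):i]
        c = counts[ctx]
        total = sum(c.values())
        # Laplace smoothing keeps unseen symbols at nonzero probability.
        p = (c[ch] + 1) / (total + alphabet)
        bits -= math.log2(p)
        c[ch] += 1                  # adaptive update, as in online coding
    return bits
```

On highly predictable text the cost per character falls far below the 8 bits of the raw encoding, which is the gap a stronger (neural) predictor widens further.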

An Automated Water Nitrate Monitoring System based on Ion-Selective Electrodes

  • Cho, Woo Jae;Kim, Dong-Wook;Jung, Dae Hyun;Cho, Sang Sun;Kim, Hak-Jin
    • Journal of Biosystems Engineering
    • /
    • v.41 no.2
    • /
    • pp.75-84
    • /
    • 2016
  • Purpose: In-situ water quality monitoring based on ion-selective electrodes (ISEs) is a promising technique because ISEs can be used directly in the medium to be tested, are compact, and are inexpensive. However, signal drift can be a major concern in on-line management systems because continuous immersion of the ISEs in water degrades the electrodes, affecting stability, repeatability, and selectivity over time. In this study, a computer-based nitrate monitoring system including automatic electrode rinsing and calibration was developed to measure the nitrate concentration in water samples in real time. Methods: Two different types of poly(vinyl chloride) membrane-based ISEs, an electrode with a liquid filling and a carbon-paste-based solid-state electrode, were used in the monitoring system and evaluated for sensitivity, selectivity, and durability. A feasibility test for the continuous detection of nitrate ions in water using the developed system was conducted with water samples obtained from various water sources. Results: Both ISEs were capable of detecting low concentrations of nitrate in solution, i.e., 0.7 mg/L NO₃-N. Furthermore, the electrodes showed the same order of selectivity for nitrate, NO₃⁻ ≫ HCO₃⁻ > Cl⁻ > H₂PO₄⁻ > SO₄²⁻, and maintained a sensitivity above 40 mV/decade over a period of 90 days. Conclusions: The use of an automated ISE-based nitrate measurement system that includes automatic electrode rinsing and two-point normalization proved feasible for measuring NO₃-N in water samples obtained from different water sources. A one-to-one relationship was obtained between the NO₃-N levels measured with the ISEs and with standard analytical instruments.
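The two-point normalization step can be sketched from the Nernstian electrode response E = E0 + S·log10(C): readings on two standards fix the slope S (mV/decade) and intercept E0, after which any electrode voltage maps back to a concentration. The standard concentrations and voltages below are hypothetical illustrations, not values from the study.

```python
import math

def two_point_calibration(e1, c1, e2, c2):
    """Fit the Nernstian model E = E0 + S * log10(C) through two
    calibration standards (c1, e1) and (c2, e2), returning the slope S
    in mV/decade and a function mapping electrode mV to concentration."""
    slope = (e2 - e1) / (math.log10(c2) - math.log10(c1))
    e0 = e1 - slope * math.log10(c1)
    def mv_to_conc(e_mv):
        return 10 ** ((e_mv - e0) / slope)
    return slope, mv_to_conc

# Hypothetical standards: 1 and 10 mg/L NO3-N reading 200 and 150 mV,
# i.e. a -50 mV/decade slope (negative, as expected for a nitrate ISE).
slope, to_conc = two_point_calibration(200.0, 1.0, 150.0, 10.0)
```

Repeating this fit on each rinse/calibrate cycle is what compensates for the drift described in the abstract.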

Clinical Observation and Therapeutic Evaluation of Rh-endostatin Combined with DP Regimen in Treating Patients with Advanced Esophageal Cancer

  • Deng, Wen-Ying;Song, Tao;Li, Ning;Luo, Su-Xia;Li, Xiang
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.15 no.16
    • /
    • pp.6565-6570
    • /
    • 2014
  • Objective: To observe the curative effects of rh-endostatin combined with the DP regimen in patients with advanced esophageal cancer, and to analyze the correlation between CT perfusion (CTP) parameters and the expression of vascular endothelial growth factor (VEGF). Methods: Twenty patients with pathologically confirmed esophageal cancer were randomly divided into a combined-treatment (rh-endostatin + DP regimen) group and a single-chemotherapy group, 10 patients in each. All patients underwent conventional CT examination and CTP imaging of the primary tumor. The VEGF level, tumor size, and CTP parameters (BF, BV, PS, and MTT) were determined before treatment and after 2 cycles of treatment, and the correlation between CTP parameters and VEGF expression was analyzed. Results: The therapeutic effect in the rh-endostatin + DP regimen group was superior to that in the single-chemotherapy group. The VEGF level after treatment in the rh-endostatin + DP regimen group was markedly lower than in the single-chemotherapy group (P<0.01). VEGF expression correlated positively with BF and BV and negatively with MTT. Compared with pre-treatment values in the rh-endostatin + DP regimen group, BF, BV, and PS decreased while MTT increased after treatment (P<0.05); in the single-chemotherapy group there were no significant differences before and after treatment (P>0.05). Conclusions: Rh-endostatin can down-regulate the expression of VEGF in esophageal cancer and reverse the hyperperfusion and high permeability of tumor vessels, and it achieved a better curative effect with milder adverse reactions when combined with chemotherapy.

Code Optimization Using Pattern Table (패턴 테이블을 이용한 코드 최적화)

  • Yun Sung-Lim;Oh Se-Man
    • Journal of Korea Multimedia Society
    • /
    • v.8 no.11
    • /
    • pp.1556-1564
    • /
    • 2005
  • Various optimization techniques are deployed during compilation of a source program to improve its execution speed and reduce the size of the generated code. Among pattern-matching optimization techniques, string pattern matching finds an optimal pattern corresponding to the intermediate code, but it is inefficient because of the excessive time required to search for that pattern. Tree pattern matching can entail many redundant comparisons when determining a pattern, and it also carries the high cost of constructing a code tree. The objective of this paper is to propose a table-driven code optimizer using a DFA (Deterministic Finite Automaton) optimization table that overcomes the shortcomings of the existing techniques. Unlike the other techniques, this is an efficient way to implement an optimizer: it is built on a deterministic automaton that determines the final pattern, reducing the pattern-selection cost and expediting the pattern search.
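The table-driven idea can be sketched as a DFA whose transition table maps (state, opcode) pairs either to a next state or to a rewrite action, so one table drives all pattern matching with no per-pattern comparison passes. The opcodes and the two patterns below (double negation, add-zero) are illustrative stand-ins for the paper's intermediate-code patterns.

```python
# DFA optimization table sketch: values are either a next state (int)
# or an accept action naming the rewrite (here: drop the matched ops).
START = 0
DFA = {
    (START, "NEG"): 1,           # possible start of double negation
    (1, "NEG"): "drop2",         # NEG NEG  -> (nothing)
    (START, "ADDK0"): "drop1",   # x + 0    -> x
}

def optimize(code):
    """Single left-to-right pass over opcode strings, driven by DFA."""
    out, state, pending = [], START, []
    i = 0
    while i < len(code):
        op = code[i]
        nxt = DFA.get((state, op))
        if nxt == "drop2":               # accept: discard held op + this one
            state, pending = START, []
            i += 1
        elif nxt == "drop1":             # accept: discard this opcode
            i += 1
        elif isinstance(nxt, int):       # partial match: hold the opcode
            pending.append(op)
            state = nxt
            i += 1
        else:                            # dead end: flush, retry from START
            out.extend(pending)
            state, pending = START, []
            if DFA.get((START, op)) is None:
                out.append(op)
                i += 1                   # else: reprocess op next iteration
    out.extend(pending)                  # flush an unfinished match
    return out
```

Because the automaton is deterministic, each opcode is inspected against the table a bounded number of times, which is the cost saving the abstract claims over repeated per-pattern comparisons.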
