• Title/Summary/Keyword: Color coding


Hyperspectral Image Classification via Joint Sparse Representation of Multi-layer Superpixels

  • Sima, Haifeng;Mi, Aizhong;Han, Xue;Du, Shouheng;Wang, Zhiheng;Wang, Jianfang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.10
    • /
    • pp.5015-5038
    • /
    • 2018
  • In this paper, a novel spectral-spatial joint sparse representation algorithm for hyperspectral image classification is proposed, based on multi-layer superpixels at various scales. Superpixels at various scales provide complete yet redundant correlated information about the class attribute of test pixels. We therefore design a joint sparse model for a test pixel by sampling similar pixels from its corresponding superpixel combinations. First, multi-layer superpixels are extracted from a false-color image of the HSI data produced by a principal component analysis model. Second, a group of discriminative sampled pixels is used as the reconstruction matrix of the test pixel, which can be jointly represented by the structured dictionary and the recovered sparse coefficients. Third, the orthogonal matching pursuit strategy is employed to estimate the sparse vector of the test pixel; in each iteration, the approximation is computed from the dictionary and the corresponding sparse vector. Finally, the class label of the test pixel is determined directly by the minimum reconstruction error between the reconstruction matrix and its approximation. The advantages of this algorithm lie in exploiting the complete neighborhood and homogeneous pixels to share a common sparsity pattern, which enables more flexible joint sparse coding of spectral-spatial information. Experimental results on three real hyperspectral datasets show that the proposed joint sparse model achieves better performance than a series of excellent sparse classification methods and superpixel-based classification methods.
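The orthogonal-matching-pursuit and minimum-reconstruction-error steps described in this abstract can be sketched as follows. This is a minimal, generic illustration, not the paper's multi-layer superpixel pipeline: the dictionary `D`, the per-atom `class_ids`, and the sparsity level are all assumed inputs.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit: recover a sparse coefficient
    vector x such that D @ x approximates y."""
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # pick the dictionary atom most correlated with the residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit on the atoms selected so far
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def classify(D, class_ids, y, n_nonzero=3):
    """Assign y to the class whose atoms give the smallest
    reconstruction error (minimum-residual rule)."""
    x = omp(D, y, n_nonzero)
    errors = {}
    for c in set(class_ids):
        mask = np.array(class_ids) == c
        xc = np.where(mask, x, 0.0)          # keep only this class's coefficients
        errors[c] = np.linalg.norm(y - D @ xc)
    return min(errors, key=errors.get)
```

In the paper, `y` would be replaced by the joint reconstruction matrix of similar pixels sampled from the test pixel's superpixels, so that neighbors share one sparsity pattern; the pixel-wise version above only shows the recovery-and-label step.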

Motion Estimation Method by Using Depth Camera (깊이 카메라를 이용한 움직임 추정 방법)

  • Kwon, Soon-Kak;Kim, Seong-Woo
    • Journal of Broadcast Engineering
    • /
    • v.17 no.4
    • /
    • pp.676-683
    • /
    • 2012
  • Motion estimation greatly affects the implementation complexity of video coding. In this paper, a method for reducing the complexity of motion estimation is proposed that uses both depth and color cameras. We obtain object information for the video sequence from distance information calculated by the depth camera, then perform labeling to group pixels at similar distances into the same object. Three search regions (background, inside-object, and boundary) are determined adaptively for each motion estimation block in the current and reference pictures. If a current block lies in the inside-object region, motion is searched only within the inside-object region of the reference picture; likewise, if a current block lies in the background region, motion is searched only within the background region of the reference picture. Simulation results show that, compared to the full search method, the proposed method yields an almost identical motion-estimated difference signal while significantly reducing search complexity.
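The region-restricted search idea can be sketched as below. This is a simplified stand-in, assuming a single depth threshold separates object from background; the block size, threshold, and search radius are illustrative, not values from the paper.

```python
import numpy as np

def label_blocks(depth, block=8, thresh=50):
    """Classify each block as background (0), inside-object (1), or
    boundary (2) from a depth map (threshold is illustrative)."""
    h, w = depth.shape
    labels = np.zeros((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            d = depth[by*block:(by+1)*block, bx*block:(bx+1)*block]
            near = d < thresh              # pixels closer than the threshold
            if near.all():
                labels[by, bx] = 1         # entirely inside the object
            elif near.any():
                labels[by, bx] = 2         # mixed: object boundary
    return labels

def restricted_search(cur, ref, cur_labels, ref_labels, by, bx, block=8, radius=4):
    """SAD block search, skipping reference candidates whose region label
    differs from the current block's (boundary blocks search everywhere)."""
    target = cur[by*block:(by+1)*block, bx*block:(bx+1)*block]
    best, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ry, rx = by*block + dy, bx*block + dx
            if ry < 0 or rx < 0 or ry + block > ref.shape[0] or rx + block > ref.shape[1]:
                continue
            rlab = ref_labels[min(ry // block, ref_labels.shape[0] - 1),
                              min(rx // block, ref_labels.shape[1] - 1)]
            if cur_labels[by, bx] != 2 and rlab != cur_labels[by, bx]:
                continue                   # prune cross-region candidates
            cand = ref[ry:ry+block, rx:rx+block]
            sad = np.abs(target.astype(int) - cand.astype(int)).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best
```

The complexity saving comes from the pruning step: inside-object and background blocks only evaluate same-region candidates, which is the behavior the abstract describes.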

Compression-time Shortening Algorithm on JPEG2000 using Pre-Truncation Method (선자름 방법을 이용한 JPEG2000에서의 부호화 시간 단축 알고리즘)

  • 양낙민;정재호
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.1C
    • /
    • pp.64-71
    • /
    • 2003
  • In this paper, we propose an algorithm that shortens coding time while maintaining image quality in JPEG2000, the standard for still-image compression. The method encodes only the bit planes selected as the appropriate truncation point for the output bitstream, obtained by estimating the frequency distribution of the whole image. The wavelet transform, characterized by multi-resolution analysis, has vertical, horizontal, and diagonal frequency components at each resolution. This frequency interrelation is maintained throughout all resolution levels and represents the unique frequency characteristics of the input image. Thus, using the frequency relation at the highest level, we can pick the truncation point that reduces compression time by estimating the coded bits of each code block during encoding. We also reduce encoding time by simply downsampling, instead of low-pass filtering, the low levels that are not encoded in the color components, which carry less energy than the luminance component. With the proposed algorithm, encoding time is reduced by about 15~36% while maintaining a PSNR of 30±0.5 dB.
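The core pre-truncation idea, discarding the least-significant bit planes that would not survive truncation before spending entropy-coding time on them, can be sketched as follows. The paper's actual contribution is choosing the truncation point from the highest-level frequency distribution; here `keep_planes` is simply an assumed input.

```python
import numpy as np

def pre_truncate(coeffs, keep_planes):
    """Keep only the `keep_planes` most-significant bit planes of integer
    wavelet coefficients, zeroing the rest before entropy coding
    (a sketch of pre-truncation; the plane count is an assumed input)."""
    mags = np.abs(coeffs)
    max_plane = int(np.ceil(np.log2(mags.max() + 1)))   # total bit planes present
    drop = max(max_plane - keep_planes, 0)              # planes to discard
    mask = ~((1 << drop) - 1)                           # clears the low `drop` bits
    return np.sign(coeffs) * (mags & mask)
```

Skipping the dropped planes entirely is where the encoding-time saving comes from: the bit-plane coder never visits them.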

Strain elastography of tongue carcinoma using intraoral ultrasonography: A preliminary study to characterize normal tissues and lesions

  • Ogura, Ichiro;Sasaki, Yoshihiko;Sue, Mikiko;Oda, Takaaki
    • Imaging Science in Dentistry
    • /
    • v.48 no.1
    • /
    • pp.45-49
    • /
    • 2018
  • Purpose: The aim of this study was to evaluate quantitative strain elastography of tongue carcinoma using intraoral ultrasonography. Materials and Methods: Two patients with squamous cell carcinoma (SCC) who underwent quantitative strain elastography for the diagnosis of tongue lesions using intraoral ultrasonography were included in this prospective study. Strain elastography was performed using a linear 14 MHz transducer (Aplio 300; Canon Medical Systems, Otawara, Japan). Manual light compression and decompression of the tongue by the transducer were performed to achieve optimal and consistent color coding. The variation in tissue strain over time caused by the compression exerted by the probe was displayed as a strain graph. The integrated strain elastography software allowed the operator to place circular regions of interest (ROIs) of various diameters within the elastography window, and automatically displayed the quantitative strain (%) for each ROI. Quantitative strain indices (%) were measured for normal tissues and lesions in the tongue. Results: The average strains of normal tissue and tongue SCC in a 50-year-old man were 1.468% and 0.000%, respectively. The average strains of normal tissue and tongue SCC in a 59-year-old man were 1.007% and 0.000%, respectively. Conclusion: Strain elastography using intraoral ultrasonography is a promising technique for characterizing and differentiating normal tissues and SCC in the tongue.
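The quantitative ROI readout described here, the mean strain inside a circular region, is simple enough to sketch. This is an illustrative computation on a precomputed strain map, not the scanner's integrated software; center and radius are assumed inputs in pixel units.

```python
import numpy as np

def roi_mean_strain(strain_map, center, radius):
    """Average strain (%) inside a circular ROI placed on a 2-D strain
    map, mimicking the quantitative ROI readout described above."""
    h, w = strain_map.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # boolean mask of pixels within `radius` of the ROI center
    mask = (yy - center[0])**2 + (xx - center[1])**2 <= radius**2
    return float(strain_map[mask].mean())
```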

Complete genome sequence of Deinococcus puniceus DY1T, a radiation resistant bacterium (방사선 내성 세균 Deinococcus puniceus DY1T의 완전한 게놈 서열 분석)

  • Srinivasan, Sathiyaraj;Sohn, Eun-Hwa;Jung, Hee-Young;Kim, Myung Kyum
    • Korean Journal of Microbiology
    • /
    • v.54 no.1
    • /
    • pp.84-86
    • /
    • 2018
  • Cells of Deinococcus puniceus $DY1^T$ are Gram-positive, coccus-shaped, and pigmented crimson. Strain $DY1^T$ was isolated from soil irradiated with 5 kGy of gamma radiation and shows resistance to UVC and gamma radiation. In this study, we report the complete genome sequence of the bacterium Deinococcus puniceus $DY1^T$, which consists of a circular chromosome of 2,971,983 bp with a G + C content of 62.5%. The complete genome sequence was obtained using the PacBio RS II platform; it includes 2,617 coding sequences (CDSs), 2,762 genes, and 88 pseudogenes.

Towards Group-based Adaptive Streaming for MPEG Immersive Video (MPEG Immersive Video를 위한 그룹 기반 적응적 스트리밍)

  • Jong-Beom Jeong;Soonbin Lee;Jaeyeol Choi;Gwangsoon Lee;Sangwoon Kwak;Won-Sik Cheong;Bongho Lee;Eun-Seok Ryu
    • Journal of Broadcast Engineering
    • /
    • v.28 no.2
    • /
    • pp.194-212
    • /
    • 2023
  • The MPEG immersive video (MIV) coding standard achieves high compression efficiency by removing inter-view redundancy and merging the residuals of immersive video, which consists of multiple texture (color) and geometry (depth) pairs. Grouping views that represent similar spaces enables quality improvement and selective streaming, but this has not been actively discussed recently. This paper introduces an implementation of group-based encoding in the recent version of the MIV reference software, provides experimental results on the optimal views and videos per group, and proposes a decision method for the optimal number of videos for global immersive video representation based on the proportion of residual videos.

Perfusion MR Imaging of the Brain Tumor: Preliminary Report (뇌종양의 관류 자기공명영상: 예비보고)

  • 김홍대;장기현;성수옥;한문희;한만청
    • Investigative Magnetic Resonance Imaging
    • /
    • v.1 no.1
    • /
    • pp.119-124
    • /
    • 1997
  • Purpose: To assess the utility of the magnetic resonance (MR) cerebral blood volume (CBV) map in the evaluation of brain tumors. Materials and Methods: We performed perfusion MR imaging preoperatively in 15 consecutive patients with intracranial masses (3 meningiomas, 2 glioblastoma multiformes, 3 low-grade gliomas, 1 lymphoma, 1 germinoma, 1 neurocytoma, 1 metastasis, 2 abscesses, 1 radionecrosis). The average age of the patients was 42 years (22-68 years); 10 were male and 5 female. All MR images were obtained on a 1.5T imager (Signa, GE Medical Systems, Milwaukee, Wisconsin). The regional CBV map was obtained on the theoretical basis of the susceptibility difference induced by the first-pass circulation of contrast media (15 cc of gadopentetate dimeglumine, injected by hand at about 2 ml/sec, starting 10 seconds after the first baseline scan). For each patient, a total of 480 images (6 slices, 80 images/slice in 160 sec) were obtained using a gradient-echo (GE) single-shot echo-planar imaging (EPI) sequence (TR 2000 ms, TE 50 ms, flip angle $90^{\circ}$, FOV $240{\times}240mm$, matrix $128{\times}128$, slice thickness/gap 5/2.5 mm). After data collection, the raw data were transferred to a GE workstation and rCBV maps were generated by numerical integration of ${\Delta}R2^{*}$ on a voxel-by-voxel basis with home-made software (${\Delta}R2^{*}=-\ln(S/S_0)/TE$). For easy visual interpretation, relative rCBV color coding with reference to normal white matter was applied and color rCBV maps were obtained. The findings of the perfusion MR images were retrospectively correlated with Gd-enhanced images, focusing on the degree and extent of perfusion and contrast enhancement. Results: Two cases of glioblastoma multiforme with rim enhancement on Gd-enhanced T1-weighted images showed increased perfusion in the peripheral rim and decreased perfusion in the central necrotic portion. The low-grade gliomas appeared as low-perfusion areas with poorly defined margins.
In 2 cases of brain abscess, the degree of perfusion was similar to that of normal white matter in the peripheral enhancing rim and was low in the central portion. All meningiomas showed diffuse, homogeneous, increased perfusion of moderate or high degree. One lymphoma and one germinoma each showed homogeneously decreased perfusion with a well-defined margin. The central neurocytoma showed multifocal areas of moderately or highly increased perfusion. A few nodules of the multiple metastases showed moderately increased perfusion. One radionecrosis revealed multiple foci of increased perfusion within an area of decreased perfusion. Conclusion: The rCBV map appears to correlate well with the perfusion state of brain tumors and may be helpful in discriminating between low-grade and high-grade gliomas. Further study is needed to clarify the role of perfusion MR imaging in the evaluation of brain tumors.
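The map construction described above, ΔR2*(t) = -ln(S(t)/S0)/TE integrated voxel-by-voxel and then referenced to normal white matter, can be sketched as follows. This assumes a (frames × voxels) signal array; the TE, frame spacing, and baseline length are illustrative, and the first-pass bolus windowing used clinically is omitted.

```python
import numpy as np

def rcbv_map(signal, te, baseline_frames=5, dt=2.0):
    """Voxel-wise relative CBV from a dynamic susceptibility series:
    DeltaR2*(t) = -ln(S(t)/S0)/TE, integrated over the acquisition
    (rectangle-rule integration; bolus windowing omitted)."""
    s0 = signal[:baseline_frames].mean(axis=0)      # pre-bolus baseline S0
    ratio = np.clip(signal / s0, 1e-6, None)        # guard against log(0)
    delta_r2s = -np.log(ratio) / te                 # per-frame DeltaR2*
    return delta_r2s.sum(axis=0) * dt               # numerical integration over time

def normalize_to_wm(cbv, wm_mask):
    """Express the map relative to normal white matter, as done for the
    color-coded display described above."""
    return cbv / cbv[wm_mask].mean()
```

A voxel whose signal never drops integrates to zero CBV, while a first-pass signal dip produces a positive value, which is what the color coding visualizes.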

Pickprimer: A Graphic User Interface Program for Primer Design on the Gene Target Region (픽프라이머 : 유전자 목표 구간 탐색 모듈을 포함한 프라이머 제작 그래픽 프로그램)

  • Chung, Hee;Mun, Jeong-Hwan;Lee, Seung-Chan;Yu, Hee-Ju
    • Horticultural Science & Technology
    • /
    • v.29 no.5
    • /
    • pp.461-466
    • /
    • 2011
  • In genetic and molecular breeding studies of plants, researchers need to design various kinds of primers depending on their research purposes. Many web- or script-based non-commercial programs for primer design are available, but because most of them lack a user interface for multipurpose usage, including gene structure prediction and direct target selection on sequences, designing primers that target the exon or intron regions of genes of interest has been laborious. Here we report Pickprimer, a graphic user interface program for primer design that includes gene structure prediction and primer design modules built by combining source code from the Spidey and Primer3 programs. The program provides a simple graphic user interface for inputting sequences and designing primers. Genomic sequences and mRNA or coding sequences of genes can be copied and pasted or input as FASTA or text files. Based on alignment of the input sequences by the Spidey module, a putative gene structure is graphically visualized along with color-coded exon-intron sequences. Primers can be designed easily by dragging the mouse over the displayed sequences or by inputting the primer target position with the desired primer parameters. The output of the designed primers, with detailed information, is provided by the Primer3 module. PCR evaluation of 24 selected primer sets successfully amplified single amplicons from six Brassica rapa cultivars. Pickprimer will be a convenient tool for genetic and molecular breeding studies of plants.
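A first-pass primer screen of the kind such tools automate can be sketched with the classic Wallace rule, Tm = 2(A+T) + 4(G+C). This is only an illustration of window-based candidate filtering on a target region; Pickprimer itself delegates primer picking to Primer3's thermodynamic model, which is far more sophisticated.

```python
def wallace_tm(primer):
    """Rough melting temperature by the Wallace rule: Tm = 2(A+T) + 4(G+C).
    A common first-pass filter, not the Primer3 model used by Pickprimer."""
    p = primer.upper()
    return 2 * (p.count('A') + p.count('T')) + 4 * (p.count('G') + p.count('C'))

def candidate_primers(seq, length=20, tm_range=(55, 65)):
    """Slide a window over a target region (e.g. a selected exon) and keep
    windows whose rough Tm falls within the desired range."""
    hits = []
    for i in range(len(seq) - length + 1):
        sub = seq[i:i + length]
        if tm_range[0] <= wallace_tm(sub) <= tm_range[1]:
            hits.append((i, sub))
    return hits
```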

3D Visual Attention Model and its Application to No-reference Stereoscopic Video Quality Assessment (3차원 시각 주의 모델과 이를 이용한 무참조 스테레오스코픽 비디오 화질 측정 방법)

  • Kim, Donghyun;Sohn, Kwanghoon
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.4
    • /
    • pp.110-122
    • /
    • 2014
  • As multimedia technologies develop, three-dimensional (3D) technologies are attracting increasing attention from researchers. In particular, video quality assessment (VQA) has become a critical issue in stereoscopic image/video processing applications. The human visual system (HVS) could play an important role in measuring stereoscopic video quality, yet existing VQA methods have done little to model the HVS for stereoscopic video. We address this by proposing a 3D visual attention (3DVA) model that simulates the HVS for stereoscopic video by combining multiple perceptual stimuli such as depth, motion, color, intensity, and orientation contrast. We use this 3DVA model for pooling over significant regions of very poor video quality, and propose a no-reference (NR) stereoscopic VQA (SVQA) method. We validated the proposed SVQA method using subjective test scores from our own experiments and those reported by others. Our approach yields high correlation with the measured mean opinion score (MOS) as well as consistent performance under asymmetric coding conditions. Additionally, the 3DVA model is used to extract region-of-interest (ROI) information; subjective evaluations indicate that 3DVA-based ROI extraction outperforms the compared extraction methods that use spatial and/or temporal terms.
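The combine-then-pool pattern described here can be sketched as follows. This is a linear-combination stand-in under assumed equal weights, not the paper's 3DVA fusion, and the percentile pooling over most-attended pixels is likewise a simplification of its region pooling.

```python
import numpy as np

def visual_attention_map(features, weights=None):
    """Combine normalized per-stimulus maps (depth, motion, color, ...)
    into one attention map (linear-combination sketch of the fusion)."""
    maps = []
    for f in features:
        span = f.max() - f.min()
        maps.append((f - f.min()) / span if span > 0 else np.zeros_like(f))
    w = weights if weights is not None else [1.0 / len(maps)] * len(maps)
    return sum(wi * m for wi, m in zip(w, maps))

def attention_pooled_score(quality_map, attention, top_frac=0.2):
    """Pool local quality over the most-attended regions: keep the
    top-attention pixels and average their quality scores."""
    q, a = quality_map.ravel(), attention.ravel()
    k = max(1, int(top_frac * a.size))
    top = np.argsort(a)[-k:]            # indices of the most-attended pixels
    return float(q[top].mean())
```

Pooling over attended regions is what lets poor quality in salient areas dominate the final score, matching the abstract's motivation for attention-driven pooling.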

View Synthesis Error Removal for Comfortable 3D Video Systems (편안한 3차원 비디오 시스템을 위한 영상 합성 오류 제거)

  • Lee, Cheon;Ho, Yo-Sung
    • Smart Media Journal
    • /
    • v.1 no.3
    • /
    • pp.36-42
    • /
    • 2012
  • Recently, smart applications such as smart phones and smart TVs have become a hot issue in IT consumer markets. In particular, smart TVs provide 3D video services, so efficient coding methods for 3D video data are required. Three-dimensional (3D) video involves stereoscopic or multi-view images that provide a depth experience through 3D display systems. Binocular cues are perceived by rendering proper viewpoint images obtained at slightly different view angles. Since the number of viewpoints of a multi-view video is limited, 3D display devices must generate arbitrary viewpoint images from the available adjacent view images. In this paper, after briefly explaining a view synthesis method, we propose a new algorithm to compensate for view synthesis errors around object boundaries. We describe a 3D warping technique that exploits the depth map for viewpoint shifting, and a hole-filling method that uses multi-view images. We then propose an algorithm to remove boundary noise generated by mismatches between object edges in the color and depth images. The proposed method reduces annoying boundary noise near object edges by replacing erroneous textures with alternative textures from the other reference image. Using the proposed method, we can generate perceptually improved images for 3D video systems.
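The warping and hole-filling steps described above can be sketched as follows. This is a deliberately simplified model, assuming integer horizontal disparities proportional to a depth-derived value and a single co-located reference view for filling; the paper's boundary-noise detection around object edges is not reproduced.

```python
import numpy as np

def warp_view(color, depth, shift_scale=0.1):
    """Forward 3D warping along the horizontal baseline: each pixel moves
    by a disparity derived from its depth; positions no source pixel
    lands on become holes (valid == False)."""
    h, w = depth.shape
    warped = np.zeros_like(color)
    valid = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            d = int(round(shift_scale * depth[y, x]))   # integer disparity
            nx = x + d
            if 0 <= nx < w:
                warped[y, nx] = color[y, x]
                valid[y, nx] = True
    return warped, valid

def fill_from_other_view(synth, valid_mask, other_view):
    """Replace holes (and pixels flagged as boundary noise) in the
    synthesized view with co-located texture from the other reference
    view, a simplified stand-in for the paper's replacement step."""
    out = synth.copy()
    out[~valid_mask] = other_view[~valid_mask]
    return out
```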
