• Title/Summary/Keyword: Image-to-image Translation


2-D Conditional Moment for Recognition of Deformed Letters

  • Yoon, Myoong-Young
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.6 no.2
    • /
    • pp.16-22
    • /
    • 2001
  • In this paper we propose a new scheme for the recognition of deformed letters by extracting feature vectors based on Gibbs distributions, which are well suited to representing spatial continuity. The extracted feature vectors are composed of 2-D conditional moments that are invariant under translation, rotation, and scaling of an image. The algorithm for pattern recognition of deformed letters has two parts: feature-vector extraction and the recognition process. (i) We extract a feature vector consisting of improved 2-D conditional moments on the basis of an estimated conditional Gibbs distribution for the image. (ii) In the recognition phase, minimizing the discrimination cost function for a deformed letter determines the corresponding template pattern. To evaluate the performance of the proposed scheme, recognition experiments with a generated document were conducted on a workstation. The experimental results reveal that the proposed scheme achieves a high recognition rate of over 96%.
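As a rough illustration of the moment invariants this abstract relies on, the sketch below computes scale-normalized central moments of an image. This is a generic stand-in, not the paper's Gibbs-based conditional moments: central moments give translation invariance, and Hu-style normalization by the zeroth moment gives scale invariance (full rotation invariance would need the further Hu combinations, omitted here).

```python
import numpy as np

def invariant_moments(img):
    """Translation- and scale-normalized central moments of a 2-D image.

    A simplified stand-in for the paper's 2-D conditional moments:
    subtracting the centroid gives translation invariance, and dividing
    by m00**gamma (Hu's normalization) gives scale invariance.
    """
    img = np.asarray(img, dtype=float)
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    cx, cy = (xs * img).sum() / m00, (ys * img).sum() / m00
    feats = {}
    for p in range(4):
        for q in range(4):
            if p + q < 2:
                continue
            mu = ((xs - cx) ** p * (ys - cy) ** q * img).sum()  # central moment
            gamma = (p + q) / 2 + 1
            feats[(p, q)] = mu / m00 ** gamma                   # scale-normalized
    return feats
```

Shifting a pattern inside the frame leaves every feature unchanged, which is the property the recognition step depends on.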


Evaluation of accuracy in the ExacTrac 6D image-guided radiotherapy using CBCT (CBCT을 이용한 ExacTrac 6D 영상유도방사선치료법의 정확도 평가)

  • Park, Ho Chun;Kim, Hyo Jung;Kim, Jong Deok;Ji, Dong Hwa;Song, Ju Young
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.28 no.2
    • /
    • pp.109-121
    • /
    • 2016
  • To verify the accuracy of image-guided radiotherapy using the ExacTrac 6D couch, error values in six directions were randomly assigned and corrected, and the corrected values were then compared with CBCT images to check the accuracy of ExacTrac. The treatment coordinate values of the Rando head phantom were moved along the X, Y, and Z directions for the translation group, and along the pitch, roll, and yaw directions for the rotation group. The corrected values were moved in all six directions with combined, mutually interacting shifts. The Z correction values ranged from 1 mm to 23 mm. In the analysis of errors between the CBCT image of the phantom corrected with the treatment coordinates and the 3D/3D matching error values, the rotation group showed larger errors than the translation group. In the dose distributions for the error values of the treatment coordinates corrected with CBCT, the dose constraints for normal organs in both groups met the prescription dose. In terms of the PHI and PCI values, which measure the dose homogeneity of the cancerous tissue, the rotation group was slightly higher in the low-dose distribution range. This study was designed to verify the accuracy of the ExacTrac 6D couch using CBCT. For simple movements the couch showed comparatively accurate correction capability, but for movements in which an angle was applied to the couch it showed inaccurate correction values. Therefore, if the patient's body is likely to change considerably in the rotational directions, or there are large pitch, roll, and yaw errors in the ExacTrac correction, it is better to use CBCT image guidance to correct the treatment coordinates in order to minimize any side effects.
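A 6D couch correction of the kind evaluated here combines three translations with pitch/roll/yaw rotations into one rigid transform. The sketch below shows one way to compose such a transform; the axis conventions and rotation order are assumptions for illustration, not ExacTrac's documented conventions.

```python
import numpy as np

def couch_correction(tx, ty, tz, pitch, roll, yaw):
    """4x4 rigid transform for a 6-DoF couch shift (angles in degrees).

    Illustrative only: the rotation order (yaw * roll * pitch) and axis
    assignment are assumptions, not taken from the ExacTrac system.
    """
    p, r, y = np.radians([pitch, roll, yaw])
    Rx = np.array([[1, 0, 0], [0, np.cos(p), -np.sin(p)], [0, np.sin(p), np.cos(p)]])
    Ry = np.array([[np.cos(r), 0, np.sin(r)], [0, 1, 0], [-np.sin(r), 0, np.cos(r)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0], [np.sin(y), np.cos(y), 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx      # combined rotation
    T[:3, 3] = [tx, ty, tz]       # translation part
    return T
```

With all six values zero the transform is the identity, and the rotation block is always orthonormal, which is what makes the correction invertible.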


Fine-Motion Estimation Using Ego/Exo-Cameras

  • Uhm, Taeyoung;Ryu, Minsoo;Park, Jong-Il
    • ETRI Journal
    • /
    • v.37 no.4
    • /
    • pp.766-771
    • /
    • 2015
  • Robust motion estimation for human-computer interaction plays an important role in novel methods of interacting with electronic devices. Existing pose estimation using a monocular camera employs either ego-motion or exo-motion, neither of which is sufficiently accurate for estimating fine motion due to the ambiguity between rotation and translation. This paper presents a hybrid vision-based pose estimation method for fine-motion estimation that is specifically capable of extracting human body motion accurately. The method uses an ego-camera attached to a point of interest and exo-cameras located in the immediate surroundings of the point of interest. The exo-cameras can easily track the exact position of the point of interest by triangulation. Once the position is given, the ego-camera can accurately obtain the point of interest's orientation. In this way, any ambiguity between rotation and translation is eliminated, and the exact motion of a target point (that is, the ego-camera) can then be obtained. The proposed method is expected to provide a practical solution for robustly estimating fine motion in a non-contact manner, such as in interactive games designed for special purposes (for example, remote rehabilitation care systems).

A study on correlation-based fingerprint recognition method (광학적 상관관계를 기반으로 하는 지문인식 방법에 관한 연구)

  • 김상백;주성현;정만호
    • Korean Journal of Optics and Photonics
    • /
    • v.13 no.6
    • /
    • pp.493-500
    • /
    • 2002
  • Fingerprint recognition involves fingerprint acquisition and matching. Our research focused on a fingerprint matching method using an inkless fingerprint input sensor at the acquisition step. Since an inkless fingerprint sensor produces a digitally processed fingerprint image, we did not consider noise that can occur during acquisition. Because users input fingerprints at arbitrary positions, we considered complex image distortions that include translation and rotation. The NJTC algorithm is used for fingerprint identification and verification. A method for finding the center of the fingerprint is added to the NJTC algorithm to improve the discrimination of fingerprint recognition. From this center point, we determined the optimum cropping size in pixels for effective matching and demonstrated that the proposed method has high discrimination and high efficiency.
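The similarity measure underlying optical correlators of this kind can be illustrated in software with zero-mean normalized cross-correlation. This is only the underlying measure, not the NJTC optical architecture itself:

```python
import numpy as np

def normalized_correlation(a, b):
    """Zero-mean normalized cross-correlation of two equal-size patches.

    Returns 1.0 for identical patterns (up to brightness and contrast)
    and values near 0 for unrelated patterns.
    """
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0
```

Because the measure is invariant to brightness and contrast changes, centering the crop on a stable reference point (here, the fingerprint core) is what makes the comparison meaningful.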

Producing Stereoscopic Video Contents Using Transformation of Character Objects (캐릭터 객체의 변환을 이용하는 입체 동영상 콘텐츠 제작)

  • Lee, Kwan-Wook;Won, Ji-Yeon;Choi, Chang-Yeol;Kim, Man-Bae
    • Journal of Broadcast Engineering
    • /
    • v.16 no.1
    • /
    • pp.33-43
    • /
    • 2011
  • Recently, 3D displays have been supplied to the 3D market, so the demand for stereoscopic 3D content is increasing. In general, a simple production method is to use a stereoscopic camera. The production of 3D content from 2D material is also regarded as an important technology, and such conversion work has gained much interest in the field of 3D conversion. However, stereoscopic image generation from a single 2D image is limited to simple 2D-to-3D conversion, making it difficult to deliver a more realistic perception to users. This paper presents a new stereoscopic content production method in which foreground objects undergo lively action events, and the resulting stereoscopic animation is viewed on 3D displays. Given a 2D image, the production pipeline is composed of background image generation, foreground object extraction, object/background depth maps, and stereoscopic image generation. The animated objects are made using geometric transformations (e.g., translation, rotation, scaling). The proposed method was applied to a Korean traditional painting, Danopungjung, as well as to Pixar's Up. The animated video showed that, through the use of simple object transformations, a more realistic perception can be delivered to viewers.
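The object animation step uses exactly the three transforms named above. A minimal sketch of applying rotate/scale/translate to an object's 2-D points (parameter names are illustrative, not the paper's API):

```python
import numpy as np

def object_transform(points, angle_deg=0.0, scale=1.0, shift=(0.0, 0.0)):
    """Rotate, scale, and translate an Nx2 array of 2-D object points.

    Order of operations: rotate about the origin, then scale, then shift.
    """
    t = np.radians(angle_deg)
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return scale * np.asarray(points, float) @ R.T + np.asarray(shift, float)
```

Interpolating the angle, scale, and shift parameters over time yields the per-frame object poses that are then composited over the background at their assigned depths.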

Calibration of Thermal Camera with Enhanced Image (개선된 화질의 영상을 이용한 열화상 카메라 캘리브레이션)

  • Kim, Ju O;Lee, Deokwoo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.22 no.4
    • /
    • pp.621-628
    • /
    • 2021
  • This paper proposes a method to calibrate a thermal camera with three different perspectives. In particular, the intrinsic parameters of the camera and the re-projection errors are provided to quantify the accuracy of the calibration result. The three lenses of the camera capture the same scene, but their views do not overlap, and the image resolution is worse than that of an RGB camera. In computer vision, camera calibration is one of the most important and fundamental tasks for calculating the distance between camera(s) and a target object, or the three-dimensional (3D) coordinates of a point on a 3D object. Once calibration is complete, the intrinsic and extrinsic parameters of the camera(s) are available. The intrinsic parameters are composed of the focal length, skew factor, and principal point, and the extrinsic parameters are composed of the relative rotation and translation of the camera(s). This study estimated the intrinsic parameters of thermal cameras that have three lenses with different perspectives. In particular, image enhancement based on a deep learning algorithm was carried out to improve the quality of the calibration results. Experimental results are provided to substantiate the proposed method.
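The intrinsic parameters and re-projection error mentioned in the abstract follow the standard pinhole model. A small sketch under that assumption (not the paper's specific estimation code):

```python
import numpy as np

def intrinsic_matrix(fx, fy, cx, cy, skew=0.0):
    """3x3 intrinsic matrix K from focal lengths, principal point, and skew."""
    return np.array([[fx, skew, cx],
                     [0.0, fy,  cy],
                     [0.0, 0.0, 1.0]])

def reprojection_error(K, R, t, X, x_obs):
    """Pixel distance between an observed 2-D point and the projection of
    the 3-D point X under intrinsics K and extrinsics (R, t)."""
    p = K @ (R @ X + t)          # project into homogeneous pixel coords
    return float(np.linalg.norm(p[:2] / p[2] - x_obs))
```

Averaging this error over all calibration-target points is the usual way to quantify calibration accuracy, which is how the paper reports its results.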

Direct RTI Fingerprint Identification Based on GCMs and Gabor Features Around Core point

  • Cho, Sang-Hyun;Sung, Hyo-Kyung;Park, Jin-Geun;Park, Heung-Moon
    • Proceedings of the IEEK Conference
    • /
    • 2000.07a
    • /
    • pp.446-449
    • /
    • 2000
  • A direct RTI (rotation- and translation-invariant) fingerprint identification method is proposed using GCMs (generalized complex moments) and Gabor filter-based features extracted from the grey-level fingerprint around the core point. The core point is located as the reference point for translation-invariant matching, and its symmetry axis is detected for rotation-invariant matching from the neighboring region centered at the core point. The fingerprint is then divided into non-overlapping blocks with respect to the core point and, in contrast to minutiae-based methods that require various processing steps, features are directly extracted from the blocked grey-level fingerprint using Gabor filters, which capture information contained in a particular orientation in the image. The proposed fingerprint identification is based on the Euclidean distance between the corresponding Gabor features of the input and the template fingerprint. Experiments were conducted on 300 × 300 fingerprints obtained from a CMOS sensor with 500 dpi resolution, and the proposed method achieved a 97% identification rate.
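Orientation-selective Gabor features of the kind described can be sketched as follows. The wavelength, sigma, and the four orientations are illustrative assumptions; the paper's exact filter bank and block layout are not reproduced here.

```python
import numpy as np

def gabor_kernel(half, theta, wavelength=8.0, sigma=3.0):
    """Real Gabor kernel of shape (2*half+1, 2*half+1) tuned to angle theta
    (radians): a cosine carrier along theta under a Gaussian envelope."""
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xs * np.cos(theta) + ys * np.sin(theta)     # coordinate along theta
    return (np.exp(-(xs**2 + ys**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / wavelength))

def gabor_features(block, thetas=(0.0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """One feature per orientation: |inner product| of an odd-sized square
    block with the Gabor kernel of the same size."""
    half = block.shape[0] // 2
    return np.array([abs(float((block * gabor_kernel(half, t)).sum()))
                     for t in thetas])
```

A block of ridges oriented along one filter's direction responds most strongly to that filter, which is what lets per-block feature vectors discriminate fingerprints by Euclidean distance.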


Stereoscopic Free-viewpoint Tour-Into-Picture Generation from a Single Image (단안 영상의 입체 자유시점 Tour-Into-Picture)

  • Kim, Je-Dong;Lee, Kwang-Hoon;Kim, Man-Bae
    • Journal of Broadcast Engineering
    • /
    • v.15 no.2
    • /
    • pp.163-172
    • /
    • 2010
  • Free-viewpoint video delivers interactive content in which users can see images rendered from viewpoints they choose. Its applications are found in broad areas, especially museum tours and entertainment. As a new free-viewpoint application, this paper presents a stereoscopic free-viewpoint TIP (Tour Into Picture) in which users can navigate the inside of a single image by controlling a virtual camera and utilizing depth data. Unlike conventional TIP methods that provide 2D images or video, the proposed method provides users with stereoscopic, free-viewpoint 3D content. Navigating a picture with stereoscopic viewing delivers a more realistic and immersive perception. The method uses semi-automatic processing to make a foreground mask, a background image, and a depth map. The second step is to navigate the single picture and obtain rendered images by perspective projection. For free-viewpoint viewing, a virtual camera supporting translation, rotation, look-around, and zooming operations is used. In experiments, the proposed method was tested with 'Danopungjun', one of the famous paintings made in the Chosun Dynasty. The free-viewpoint software was developed based on MFC Visual C++ and the OpenGL libraries.
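The navigation step above reduces to perspective projection through a movable virtual camera. A minimal sketch, with illustrative parameter names (translating `cam_pos` walks into the picture, changing `yaw_deg` looks around):

```python
import numpy as np

def project(points, cam_pos, yaw_deg=0.0, focal=1.0):
    """Project Nx3 world points through a virtual camera at cam_pos
    rotated by yaw_deg about the vertical axis, via perspective division.

    Assumes all points lie in front of the camera (positive depth).
    """
    t = np.radians(yaw_deg)
    R = np.array([[np.cos(t), 0, -np.sin(t)],
                  [0,         1,  0],
                  [np.sin(t), 0,  np.cos(t)]])
    pc = (np.asarray(points, float) - cam_pos) @ R.T   # world -> camera frame
    return focal * pc[:, :2] / pc[:, 2:3]              # perspective division
```

Moving the camera forward halves the depth of a point and doubles its projected offset, which is the zoom-in effect the TIP navigation produces.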

A Fingerprint Identification System using Large Database (대용량 DB를 사용한 지문인식 시스템)

  • Cha, Jeong-Hee;Seo, Jeong-Man
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.4 s.36
    • /
    • pp.203-211
    • /
    • 2005
  • In this paper, we propose a new automatic fingerprint identification system that identifies individuals in large databases. The algorithm consists of three steps: preprocessing, classification, and matching. In the classification step, we present a new classification technique based on a statistical approach to the directional image distribution. In matching, we also describe an improved minutiae candidate pair extraction algorithm that is faster and more accurate than the existing algorithm. In the matching stage, we extract fingerprint minutiae from the thinned image for accuracy and introduce a matching process using minutiae linking information. Introducing linking information into the minutiae matching process is a simple but accurate approach that quickly solves the problem of reference minutiae pair selection in the comparison stage of two fingerprints. This algorithm is invariant to translation and rotation of the fingerprint. The proposed system was tested on 1000 fingerprint images from a semiconductor chip style scanner. Experimental results reveal that the false acceptance rate is decreased and the genuine acceptance rate is increased compared with the existing method.
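One way such linking information can yield translation- and rotation-invariant matching is to describe each linked minutiae pair by their distance and by each minutia's direction measured relative to the linking line. The sketch below is a simplified illustration of that idea, not the paper's exact algorithm:

```python
import numpy as np

def pair_features(minutiae):
    """Rigid-motion-invariant features from all minutiae pairs.

    Each minutia is (x, y, direction). For a pair we keep the distance
    between the points and each direction relative to the linking line;
    both are unchanged when the whole fingerprint is translated/rotated.
    """
    m = np.asarray(minutiae, float)
    feats = []
    for i in range(len(m)):
        for j in range(i + 1, len(m)):
            dx, dy = m[j, 0] - m[i, 0], m[j, 1] - m[i, 1]
            link = np.arctan2(dy, dx)           # angle of the linking line
            d = np.hypot(dx, dy)                # pair distance
            a1 = (m[i, 2] - link) % (2 * np.pi)  # directions relative to link
            a2 = (m[j, 2] - link) % (2 * np.pi)
            feats.append((d, a1, a2))
    return feats
```

Because the features are already invariant, two fingerprints can be compared pair-by-pair without first searching for a global alignment, which is what makes the reference-pair selection fast.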


Evaluation of the Usefulness of Exactrac in Image-guided Radiation Therapy for Head and Neck Cancer (두경부암의 영상유도방사선치료에서 ExacTrac의 유용성 평가)

  • Baek, Min Gyu;Kim, Min Woo;Ha, Se Min;Chae, Jong Pyo;Jo, Guang Sub;Lee, Sang Bong
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.32
    • /
    • pp.7-15
    • /
    • 2020
  • Purpose: In modern radiotherapy, several image-guided radiation therapy (IGRT) methods are used to deliver accurate doses to tumor targets and normal organs, including CBCT (Cone Beam Computed Tomography) mounted on linear accelerators and other devices such as the ExacTrac System. Previous studies comparing the two systems analysed positional errors retrospectively using Offline-view, or evaluated them only with yaw rotation along the X, Y, and Z axes. In this study, when CBCT and ExacTrac are used to perform 6-degree-of-freedom (DoF) online IGRT in a treatment center equipped with both systems, the difference between the set-up correction values of each system, the time taken for patient set-up, and the radiation dose from each imaging device are evaluated. Materials and Methods: Glass dosimeters and a Rando phantom were used to evaluate the difference between the set-up corrections and the exposure dose, together with 11 head and neck cancer patients treated from March to October 2017, for whom the time taken from set-up until just before IGRT was measured. CBCT and ExacTrac were used for IGRT of all patients. An average of 10 CBCT and ExacTrac images were obtained per patient over the total treatment period, and the difference in 6D online automatic correction values between the two systems was calculated within the ROI setting. The region of interest in the image obtained from CBCT was fixed to the same anatomical structure as in the image obtained through ExacTrac. The differences in positional values along the six axes (translation group: SI, AP, LR; rotation group: pitch, roll, Rtn) between the two systems, the total time taken from patient set-up to just before IGRT, and the exposure dose were measured and compared with the Rando phantom.
Results: The set-up errors in the phantom and patients were less than 1 mm in the translation group and less than 1.5° in the rotation group, and the RMS values of all axes except Rtn were less than 1 mm and 1°. The time taken to correct the set-up error was on average 256±47.6 s for IGRT using CBCT and 84±3.5 s for ExacTrac. The radiation exposure dose per IGRT session was 37 times higher for CBCT than for ExacTrac, measured at 2.468 mGy and 0.066 mGy, respectively, at the oral mucosa among the 7 measurement locations in the head and neck area. Conclusion: Through 6D online automatic positioning between the CBCT and ExacTrac systems, the set-up error was found to be less than 1 mm and 1.02°, including the patient's movement (random error) as well as the systematic error of the two systems. This error range is considered reasonable given that the PTV margin was 3 mm for the head and neck IMRT treatments in this study. However, considering the changes in the target and organs at risk due to changes in patient weight during the treatment period, ExacTrac is considered appropriate to use in combination with CBCT.
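The per-axis RMS values quoted in the Results section are the standard root-mean-square of the per-fraction set-up errors. A one-line sketch of that statistic:

```python
import numpy as np

def axis_rms(errors):
    """Root-mean-square of a series of per-fraction set-up errors on one
    axis, the summary statistic used to compare the two systems."""
    e = np.asarray(errors, float)
    return float(np.sqrt(np.mean(e ** 2)))
```

Unlike the plain mean, the RMS penalizes occasional large deviations, which is why it is the preferred summary for set-up accuracy.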