• Title/Summary/Keyword: Image Translation


Evaluation of the accuracy of ExacTrac 6D image-guided radiotherapy using CBCT (CBCT을 이용한 ExacTrac 6D 영상유도방사선치료법의 정확도 평가)

  • Park, Ho Chun;Kim, Hyo Jung;Kim, Jong Deok;Ji, Dong Hwa;Song, Ju Young
    • The Journal of Korean Society for Radiation Therapy / v.28 no.2 / pp.109-121 / 2016
  • To verify the accuracy of image-guided radiotherapy using the ExacTrac 6D couch, error values in six directions were randomly assigned and corrected, and the corrected values were compared against CBCT images to check the accuracy of ExacTrac. The treatment coordinates of the Rando head phantom were shifted along the X, Y, and Z axes for the translation group, and rotated in pitch, roll, and yaw for the rotation group; corrections were then applied in all six directions in combination. The Z correction values ranged from 1 mm to 23 mm. In the analysis of errors between the CBCT image of the phantom corrected with the treatment coordinates and the 3D/3D matching error values, the rotation group showed larger errors than the translation group. In the dose distributions for the corrected treatment coordinates, the dose constraints for normal organs in both groups met the prescription dose; for PHI and PCI, which measure the dose homogeneity of the target volume, the rotation group was slightly higher in the low-dose range. This study was designed to verify the accuracy of the ExacTrac 6D couch using CBCT. Simple translational movements were corrected comparatively accurately, but movements that introduced an angle to the couch produced inaccurate corrections. Therefore, when a patient's body is likely to change substantially in the rotational directions, or when large pitch, roll, and yaw errors appear during ExacTrac correction, CBCT image guidance should be used to correct the treatment coordinates and minimize side effects.
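
As context for the six-direction correction this abstract describes, here is a minimal sketch, assuming a simple axis convention and rotation order (neither is specified by the paper), of how three translations and pitch/roll/yaw compose into one rigid transform applied to phantom coordinates:

```python
# Sketch only: axis assignments (pitch=X, roll=Y, yaw=Z) and the
# Rz*Ry*Rx composition order are assumptions, not the paper's convention.
import numpy as np

def rotation_matrix(pitch_deg, roll_deg, yaw_deg):
    """Build a rotation matrix from pitch (X), roll (Y), yaw (Z) angles."""
    p, r, y = np.radians([pitch_deg, roll_deg, yaw_deg])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(p), -np.sin(p)],
                   [0, np.sin(p),  np.cos(p)]])
    Ry = np.array([[ np.cos(r), 0, np.sin(r)],
                   [0, 1, 0],
                   [-np.sin(r), 0, np.cos(r)]])
    Rz = np.array([[np.cos(y), -np.sin(y), 0],
                   [np.sin(y),  np.cos(y), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx  # apply pitch, then roll, then yaw

def apply_6d_correction(points, tx, ty, tz, pitch, roll, yaw):
    """Apply a 6D correction to Nx3 phantom coordinates (mm, degrees)."""
    R = rotation_matrix(pitch, roll, yaw)
    return points @ R.T + np.array([tx, ty, tz])

# Example: a 23 mm Z shift combined with a 2-degree pitch error.
markers = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
print(apply_6d_correction(markers, 0, 0, 23, 2, 0, 0))
```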


Gendered innovation for algorithms through case studies (음성·영상 신호 처리 알고리즘 사례를 통해 본 젠더혁신의 필요성)

  • Lee, JiYeoun;Lee, Heisook
    • Journal of Digital Convergence / v.16 no.12 / pp.459-466 / 2018
  • Gendered innovations is a term used by policy makers and academics to refer to the process of creating better research and development (R&D) for both men and women. In this paper, we analyze the literature on image and speech signal processing applicable to ICT and examine the importance of gendered innovations through case studies. The latest domestic and foreign literature on gender-informed image and speech signal processing was searched, and a total of 9 papers were selected. For the gender analysis, the research subjects, research environment, and research design were examined separately. In particular, case analyses of algorithms for elderly voice signal processing, machine learning, machine translation, and facial gender recognition revealed gender bias in existing algorithms, which shows that gender analysis is required. We also propose a gendered-innovations method that integrates sex and gender analysis into algorithm development. Gendered innovations in ICT can contribute to the creation of new markets by developing products and services that reflect the needs of both men and women.

A Study on the Availability of the On-Board Imager (OBI) and Cone-Beam CT (CBCT) in the Verification of Patient Set-up (온보드 영상장치(On-Board Imager) 및 콘빔CT(CBCT)를 이용한 환자 자세 검증의 유용성에 대한 연구)

  • Bak, Jino;Park, Sung-Ho;Park, Suk-Won
    • Radiation Oncology Journal / v.26 no.2 / pp.118-125 / 2008
  • Purpose: For on-line image guided radiation therapy (on-line IGRT), kV X-ray images and cone beam CT images were obtained with an on-board imager (OBI) and cone beam CT (CBCT), respectively. The images were then compared with simulation images to evaluate the patient setup and correct deviations. The setup deviations between the acquired images (kV or CBCT) and the simulation images were computed with 2D/2D and 3D/3D matching programs, respectively, and we investigated the correctness of the calculated deviations. Materials and Methods: After simulation and treatment planning for the RANDO phantom, the phantom was positioned on the treatment table. The phantom setup was performed with the side-wall lasers, which standardize the treatment setup against the simulation images, after establishing tolerance limits for the laser line thickness. After a known translation or rotation was applied to the phantom, kV X-ray and CBCT images were obtained, 2D/2D and 3D/3D matches with the simulation CT images were performed, and the results were analyzed for the accuracy of the positional correction. Results: For the 2D/2D match using kV X-ray and simulation images, setup correction was possible within 0.06° for rotation only, 1.8 mm for translation only, and 2.1 mm and 0.3° for combined rotation and translation. For the 3D/3D match using CBCT images, correction was possible within 0.03° for rotation only, 0.16 mm for translation only, and 1.5 mm and 0.0° for combined translation and rotation. Conclusion: The use of the OBI or CBCT for on-line IGRT makes it possible to reproduce the simulated patient setup exactly in the treatment room. Fast detection and correction of a patient's positional error is possible in two dimensions via kV X-ray images from the OBI and, with higher accuracy, in three dimensions via CBCT. Consequently, on-line IGRT represents a promising and reliable treatment procedure.
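
For the 3D/3D match step, here is a minimal sketch assuming paired landmark coordinates from the planning CT and the CBCT; the Kabsch algorithm stands in for the vendor's matching software, which is not public:

```python
# Sketch only: real 3D/3D matching is intensity-based; Kabsch on paired
# landmarks illustrates how a rigid deviation (R, t) can be recovered.
import numpy as np

def rigid_match_3d(plan_pts, cbct_pts):
    """Return rotation R and translation t mapping plan_pts onto cbct_pts."""
    cp, cc = plan_pts.mean(axis=0), cbct_pts.mean(axis=0)
    H = (plan_pts - cp).T @ (cbct_pts - cc)  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cc - R @ cp
    return R, t

# Example: recover a deliberate 1.8 mm lateral shift applied to a phantom.
plan = np.array([[0, 0, 0], [50, 0, 0], [0, 50, 0], [0, 0, 50]], float)
cbct = plan + np.array([1.8, 0.0, 0.0])
R, t = rigid_match_3d(plan, cbct)
print(np.round(t, 2))  # -> [1.8 0.  0. ]
```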

Producing Stereoscopic Video Contents Using Transformation of Character Objects (캐릭터 객체의 변환을 이용하는 입체 동영상 콘텐츠 제작)

  • Lee, Kwan-Wook;Won, Ji-Yeon;Choi, Chang-Yeol;Kim, Man-Bae
    • Journal of Broadcast Engineering / v.16 no.1 / pp.33-43 / 2011
  • Recently, 3D displays have spread through the market, and the demand for stereoscopic 3D content is increasing. A simple production method is to use a stereoscopic camera; in addition, producing 3D from 2D material is regarded as an important technology, and such conversion work has gained much interest in the 3D conversion field. However, stereoscopic image generation from a single 2D image has been limited to simple 2D-to-3D conversion, which makes it difficult to deliver a convincing sense of realism to users. This paper presents a new stereoscopic content production method in which foreground objects undergo lively action events, and the resulting stereoscopic animation is viewed on 3D displays. Given a 2D image, the production pipeline consists of background image generation, foreground object extraction, object/background depth-map creation, and stereoscopic image generation. The animated objects are produced with geometric transformations (e.g., translation, rotation, scaling). The proposed method was applied to a Korean traditional painting, Danopungjung, as well as to Pixar's Up. The animated videos showed that simple object transformations can deliver a more realistic perception to viewers.
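
The pipeline above (object transformation, depth map, stereoscopic generation) can be illustrated with a minimal sketch; the disparity scale, array shapes, and toy data are illustrative assumptions, not the paper's parameters:

```python
# Sketch only: a foreground object is animated by a 2D similarity
# transform, and a left/right view pair is built by shifting each pixel
# horizontally in proportion to its depth.
import numpy as np

def transform_object(obj_xy, tx=0.0, ty=0.0, angle_deg=0.0, scale=1.0):
    """Translate/rotate/scale Nx2 object coordinates (an action event)."""
    a = np.radians(angle_deg)
    R = scale * np.array([[np.cos(a), -np.sin(a)],
                          [np.sin(a),  np.cos(a)]])
    return obj_xy @ R.T + np.array([tx, ty])

def stereo_pair(image, depth, max_disparity=8):
    """Shift pixels horizontally by depth to build left/right views."""
    h, w = depth.shape
    left, right = np.zeros_like(image), np.zeros_like(image)
    disparity = (depth / depth.max() * max_disparity).astype(int)
    for y in range(h):
        for x in range(w):
            d = disparity[y, x]
            left[y, min(w - 1, x + d)] = image[y, x]
            right[y, max(0, x - d)] = image[y, x]
    return left, right

# Example: animate a point set, then render a gray ramp whose depth
# increases from left to right as a stereo pair.
obj = transform_object(np.array([[10.0, 10.0]]), tx=2.0, angle_deg=15.0)
img = np.tile(np.linspace(0, 255, 64), (64, 1))
dep = np.tile(np.linspace(1, 10, 64), (64, 1))
left_view, right_view = stereo_pair(img, dep)
```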

Stereoscopic Free-viewpoint Tour-Into-Picture Generation from a Single Image (단안 영상의 입체 자유시점 Tour-Into-Picture)

  • Kim, Je-Dong;Lee, Kwang-Hoon;Kim, Man-Bae
    • Journal of Broadcast Engineering / v.15 no.2 / pp.163-172 / 2010
  • Free-viewpoint video delivers interactive content in which users see images rendered from viewpoints they choose; its applications span broad areas, notably museum tours and entertainment. As a new free-viewpoint application, this paper presents a stereoscopic free-viewpoint TIP (Tour Into Picture) in which users navigate the inside of a single image by controlling a virtual camera and utilizing depth data. Unlike conventional TIP methods that provide a 2D image or video, the proposed method provides users with stereoscopic, free-viewpoint 3D content; navigating a picture with stereoscopic viewing delivers a more realistic and immersive perception. The method uses semi-automatic processing to produce a foreground mask, a background image, and a depth map. The second step is to navigate the single picture and obtain rendered images by perspective projection. For free-viewpoint viewing, a virtual camera supporting translation, rotation, look-around, and zooming is operated. In experiments, the proposed method was tested with 'Danopungjun', one of the famous paintings of the Chosun Dynasty. The free-viewpoint software was developed with MFC Visual C++ and the OpenGL libraries.
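
Here is a minimal sketch of the perspective-projection step with a virtual camera supporting translation, rotation, and zoom; restricting rotation to a single yaw axis and the focal value are simplifying assumptions:

```python
# Sketch only: a pinhole virtual camera projects 3D scene points to the
# image plane; moving the camera navigates the picture, focal acts as zoom.
import numpy as np

def project(points_3d, cam_pos, yaw_deg=0.0, focal=500.0):
    """Project Nx3 world points through a camera rotated about Y (yaw)."""
    y = np.radians(yaw_deg)
    R = np.array([[ np.cos(y), 0, np.sin(y)],
                  [0, 1, 0],
                  [-np.sin(y), 0, np.cos(y)]])
    cam = (points_3d - cam_pos) @ R.T  # world -> camera coordinates
    z = cam[:, 2:3]
    return focal * cam[:, :2] / z      # perspective division

# Example: moving the camera forward (navigation) enlarges the scene;
# increasing `focal` would act as a zoom instead.
scene = np.array([[0.0, 0.0, 10.0], [1.0, 1.0, 10.0]])
print(project(scene, cam_pos=np.array([0.0, 0.0, 0.0])))
print(project(scene, cam_pos=np.array([0.0, 0.0, 5.0])))
```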

Calibration of Thermal Camera with Enhanced Image (개선된 화질의 영상을 이용한 열화상 카메라 캘리브레이션)

  • Kim, Ju O;Lee, Deokwoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.4 / pp.621-628 / 2021
  • This paper proposes a method to calibrate a thermal camera with three different perspectives. In particular, the intrinsic parameters of the camera and the re-projection errors are provided to quantify the accuracy of the calibration result. The camera's three lenses capture the same scene, but their views do not overlap, and the image resolution is lower than that of an RGB camera. In computer vision, camera calibration is one of the most important and fundamental tasks for calculating the distance between the camera(s) and a target object, or the three-dimensional (3D) coordinates of a point on a 3D object. Once calibration is complete, the intrinsic and extrinsic parameters of the camera(s) are available: the intrinsic parameters comprise the focal length, skew factor, and principal point, and the extrinsic parameters comprise the relative rotation and translation of the camera(s). This study estimated the intrinsic parameters of thermal cameras that have three lenses with different perspectives. In particular, image enhancement based on a deep learning algorithm was carried out to improve the quality of the calibration results. Experimental results are provided to substantiate the proposed method.
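
A standard checkerboard calibration with OpenCV illustrates how the intrinsics and re-projection error mentioned above are obtained; the file names and board size are assumptions, and the paper's deep-learning enhancement step is not reproduced here:

```python
# Sketch only: assumes the enhanced thermal images are readable by
# cv2.findChessboardCorners; file pattern and board size are hypothetical.
import glob
import cv2
import numpy as np

board = (9, 6)  # inner corners per row/column of the checkerboard
objp = np.zeros((board[0] * board[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("thermal_enhanced_*.png"):  # hypothetical files
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, board)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# K holds focal lengths, skew, and principal point (the intrinsics);
# rvecs/tvecs hold per-view rotation and translation (the extrinsics);
# rms is the overall re-projection error.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("re-projection RMS error:", rms)
```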

The Character Recognition System of Mobile Camera Based Image (모바일 이미지 기반의 문자인식 시스템)

  • Park, Young-Hyun;Lee, Hyung-Jin;Baek, Joong-Hwan
    • Journal of the Korea Academia-Industrial cooperation Society / v.11 no.5 / pp.1677-1684 / 2010
  • Recently, with the development of mobile phones and the spread of smartphones, many kinds of content have been developed. In particular, since small cameras are equipped in mobile devices, image-based content development has attracted interest and has become an important part of their practical use. Among such applications, character recognition systems can be widely used in blind-guidance systems, automatic robot navigation, automatic video retrieval and indexing, and automatic text translation. This paper therefore proposes a system that extracts text areas from natural images captured by a smartphone camera, recognizes the individual characters, and outputs the result as speech. Text areas are extracted using the AdaBoost algorithm, and individual characters are recognized using an error back-propagation neural network.
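
As a minimal sketch of the recognition stage, here is a one-hidden-layer network trained by error back-propagation to classify fixed-size character patches; the layer sizes and toy data are assumptions, and the AdaBoost text detector is a separate stage not shown:

```python
# Sketch only: softmax cross-entropy with tanh hidden units, trained by
# plain gradient descent on random toy patches standing in for characters.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 16 * 16, 64, 10  # 16x16 patch -> 10 classes (assumed)

W1 = rng.normal(0, 0.1, (n_in, n_hid))
W2 = rng.normal(0, 0.1, (n_hid, n_out))

def forward(x):
    h = np.tanh(x @ W1)
    z = h @ W2
    p = np.exp(z - z.max(axis=1, keepdims=True))
    return h, p / p.sum(axis=1, keepdims=True)  # softmax probabilities

# Toy training data: random patches with random labels.
x = rng.random((32, n_in))
y = np.eye(n_out)[rng.integers(0, n_out, 32)]
for _ in range(100):
    h, p = forward(x)
    dz = (p - y) / len(x)                # softmax cross-entropy gradient
    dh = (dz @ W2.T) * (1 - h ** 2)      # back-propagate through tanh
    W2 -= 0.5 * h.T @ dz
    W1 -= 0.5 * x.T @ dh
```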

Indian Traditional Clothing in Fashion Design of the 21st Century (2000년 이후 패션디자인에 나타난 인도 전통 복식)

  • Choi, Ho-Jeong
    • Journal of the Korean Society of Costume / v.56 no.9 s.109 / pp.127-142 / 2006
  • In this study, I analyzed Indian traditional clothing in 21st-century fashion design by comparing 1,286 fashion items by Indian designers with 722 Western fashion items presented from 2000 S/S to 2005 F/W. A formal analysis was made of changes in how the clothes are worn and of changes in items and ornaments. Change in traditional clothing was found in two directions: Western elements added to Indian tradition, and the Indian traditional image adopted into Western clothing. First, Western clothing forms with Indian traditional elements added were found in 83% of the Western collections and 27.2% of the Indian designers' collections. In the Indian designers' collections, traditional clothing forms account for 72.8%, which reflects the regional character of India, where traditional clothing is still worn in daily life, especially by women. Second, in the fashion designs of the Indian designers we find the modernization of the sari; the transformation of traditional items into more active, modern forms by adding Western clothing; varied changes in the form, color, and material of traditional items; and decorative aspects highlighted by adding Indian traditional colors, patterns, or decorations to Western clothing. In most cases, the Western collections are merely seasoned with an Indian traditional image rather than utilizing the forms of Indian clothing; even when the form of Indian traditional clothing is adopted, it can be considered a translation from the viewpoint of the West. Third, the Indian look is expressed in various ways: by reproducing Indian traditional ornaments such as earrings, bracelets, and henna, or by adopting Indian traditional fabric design and decoration in mufflers, bags, and so on.

Using Contour Matching for Omnidirectional Camera Calibration (투영곡선의 자동정합을 이용한 전방향 카메라 보정)

  • Hwang, Yong-Ho;Hong, Hyun-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.6 / pp.125-132 / 2008
  • Omnidirectional camera systems with a wide view angle are widely used in surveillance and robotics. In general, most previous studies on estimating a projection model and the extrinsic parameters from omnidirectional images assume that corresponding points have been established among the views in advance. This paper presents a novel omnidirectional camera calibration based on automatic contour matching. First, we estimate the initial parameters, the translations and rotations, by applying the epipolar constraint to the matched feature points. After choosing interest points adjacent to two or more contours, we establish a precise correspondence among the connected contours by using the initial parameters and active matching windows. The extrinsic parameters of the omnidirectional camera are then estimated by minimizing the angular errors between the epipolar planes of the endpoints and the inversely projected 3D vectors. Experimental results on synthetic and real images demonstrate that the proposed algorithm obtains more precise camera parameters than the previous method.
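
As a minimal sketch of the initial-parameter step, the essential matrix can be estimated from matched rays with the 8-point algorithm and decomposed into rotation and translation; it assumes pixels have already been back-projected to unit ray vectors, which depends on the (unspecified) projection model:

```python
# Sketch only: linear 8-point estimate of E from >= 8 ray correspondences,
# followed by the standard SVD decomposition into (R, t up to scale/sign).
import numpy as np

def essential_from_rays(r1, r2):
    """8-point estimate of E from Nx3 unit rays with r2' E r1 = 0."""
    A = np.stack([np.outer(b, a).ravel() for a, b in zip(r1, r2)])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt  # enforce rank 2

def decompose(E):
    """Return the two candidate rotations and the translation direction."""
    U, _, Vt = np.linalg.svd(E)
    W = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])
    R1, R2 = U @ W @ Vt, U @ W.T @ Vt
    if np.linalg.det(R1) < 0: R1 = -R1   # keep proper rotations
    if np.linalg.det(R2) < 0: R2 = -R2
    return R1, R2, U[:, 2]               # t known only up to scale/sign
```

A cheirality check over the four (R, t) combinations, not shown here, selects the physically valid pose.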

Vision-based Target Tracking for UAV and Relative Depth Estimation using Optical Flow (무인 항공기의 영상기반 목표물 추적과 광류를 이용한 상대깊이 추정)

  • Jo, Seon-Yeong;Kim, Jong-Hun;Kim, Jung-Ho;Lee, Dae-Woo;Cho, Kyeum-Rae
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.37 no.3 / pp.267-274 / 2009
  • Recently, UAVs (Unmanned Aerial Vehicles) have attracted much attention as unmanned systems for various missions, many of which are based on a vision system. In particular, missions such as surveillance and pursuit are carried out with vision data transmitted from the UAV. Small UAVs often use monocular vision to limit weight and cost. Research on performing missions with monocular vision continues, but because the ground and the target lie at different distances from the UAV, 3D distance measurement remains inaccurate. In this study, the mean-shift algorithm, optical flow, and the subspace method are employed to estimate relative depth. The mean-shift algorithm is used for target tracking and for determining the region of interest (ROI); optical flow captures image motion information from pixel intensities; the subspace method then computes the translation and rotation of the image and estimates the relative depth. Finally, we present results using images obtained from UAV experiments.
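
A minimal sketch of the two front-end stages using OpenCV follows; the video file and initial ROI are illustrative assumptions, and the subspace depth estimation itself is not shown:

```python
# Sketch only: a mean-shift tracker keeps the ROI on the target, and dense
# optical flow supplies the motion field a subspace method would consume.
import cv2
import numpy as np

cap = cv2.VideoCapture("uav_footage.mp4")   # hypothetical video file
ok, frame = cap.read()
x, y, w, h = 300, 200, 80, 60               # assumed initial target ROI
roi_hsv = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([roi_hsv], [0], None, [16], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Track the target: back-project the hue histogram, then mean-shift.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    prob = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    _, (x, y, w, h) = cv2.meanShift(prob, (x, y, w, h), term)

    # Dense optical flow from pixel intensities (Farneback).
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    prev_gray = gray
    roi_flow = flow[y:y+h, x:x+w]  # motion field inside the tracked ROI
```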