• Title/Summary/Keyword: Image rotation


Image Evaluation of Resolution Parameter and Reconstitution Filter in 256 Multi Detector Computed Tomography by Using Head Phantom (256 다중 검출기 전산화단층촬영에서 두개부 전용 팬톰을 이용한 분해능 파라메터와 재구성 필터의 영상 평가)

  • Gu, Bon-Seung;Seoung, Youl-Hun
    • The Journal of the Korea Contents Association / v.11 no.12 / pp.814-821 / 2011
  • The purpose of this study was to evaluate the resolution parameter and the reconstitution filter of 256 multi-detector computed tomography (MDCT) by using a head phantom. We used a Philips 256 MDCT system with its head phantom, and evaluated image quality using the Extended Brilliance Workspace. The protocol was an axial scan with 120 kVp, a rotation time of 0.5 sec, a slice thickness and increment of 5 mm, a field of view (FOV) of 250 mm, a matrix size of $512{\times}512$, a pitch of 1.0, and collimation of $128{\times}0.625$ mm. The resolution parameter was set to 'Standard', 'High', and 'Ultrahigh', and the reconstitution filter was varied over seven types: 'A', 'B', 'C', 'D', 'UA', 'UB', and 'UC'. The assessment factors of image quality were uniformity, noise, linearity, and the 50% and 10% values of the modulation transfer function (MTF). The 'High' resolution parameter showed good image quality in uniformity, linearity, and the 50% and 10% MTF. The 'UA' and 'UB' reconstitution filters showed good image quality in uniformity and noise, and the 'C' reconstitution filter showed the same result in linearity and the 50% and 10% MTF.
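The 50% and 10% MTF values used as assessment factors are, in practice, read off a sampled MTF curve. A minimal Python sketch of that readout, with illustrative sample values (not the paper's measurements):

```python
def mtf_crossing(freqs, mtf, level):
    """Find the spatial frequency where a sampled MTF curve first
    drops to `level`, by linear interpolation between samples.
    `freqs` (e.g. lp/cm) and `mtf` are illustrative sampled values."""
    for (f0, m0), (f1, m1) in zip(zip(freqs, mtf), zip(freqs[1:], mtf[1:])):
        if m0 >= level >= m1:
            # Interpolate between the two bracketing samples.
            return f0 + (m0 - level) * (f1 - f0) / (m0 - m1)
    return None

# Made-up MTF samples, not the paper's measured data.
freqs = [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]
mtf = [1.0, 0.8, 0.55, 0.3, 0.12, 0.05]
print(mtf_crossing(freqs, mtf, 0.5))   # 50% MTF frequency → 4.4
print(mtf_crossing(freqs, mtf, 0.1))   # 10% MTF frequency
```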

Producing Stereoscopic Video Contents Using Transformation of Character Objects (캐릭터 객체의 변환을 이용하는 입체 동영상 콘텐츠 제작)

  • Lee, Kwan-Wook;Won, Ji-Yeon;Choi, Chang-Yeol;Kim, Man-Bae
    • Journal of Broadcast Engineering / v.16 no.1 / pp.33-43 / 2011
  • Recently, 3D displays have spread in the market, so the demand for stereoscopic 3D content is increasing. In general, the simplest production method is to use a stereoscopic camera; in addition, producing 3D content from 2D materials is regarded as an important technology, and such conversion work has gained much interest in the field of 3D converting. However, stereoscopic image generation from a single 2D image has been limited to simple 2D-to-3D conversion, which makes it difficult to deliver a realistic perception to users. This paper presents a new stereoscopic content production method in which foreground objects undergo animated action events, and the resulting stereoscopic animation is viewed on 3D displays. Given a 2D image, the production consists of background image generation, foreground object extraction, object/background depth map creation, and stereoscopic image generation. The animated objects are made using geometric transformations (e.g., translation, rotation, scaling). The proposed method was applied to a Korean traditional painting, Danopungjung, as well as to Pixar's Up. The animated video showed that, through simple object transformations, a more realistic perception can be delivered to viewers.
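The geometric transformations named above (translation, rotation, scaling) compose into a single homogeneous 2D matrix that can be applied to every vertex of a character object. A minimal Python sketch with illustrative parameter values:

```python
import math

def make_transform(tx, ty, theta, s):
    """Compose scaling by s, rotation by theta, and translation by
    (tx, ty) into one 3x3 homogeneous 2D matrix (illustrative helper)."""
    c, si = math.cos(theta), math.sin(theta)
    return [
        [s * c, -s * si, tx],
        [s * si,  s * c, ty],
        [0.0,     0.0,   1.0],
    ]

def apply(m, p):
    """Apply a 3x3 homogeneous transform to a 2D point."""
    x, y = p
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

# Rotate a point 90 degrees, double its size, then shift it right by 1.
m = make_transform(tx=1.0, ty=0.0, theta=math.pi / 2, s=2.0)
print(apply(m, (1.0, 0.0)))  # approximately (1.0, 2.0)
```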

Manufacture of 3-Dimensional Image and Virtual Dissection Program of the Human Brain (사람 뇌의 3차원 영상과 가상해부 풀그림 만들기)

  • Chung, M.S.;Lee, J.M.;Park, S.K.;Kim, M.K.
    • Proceedings of the KOSOMBE Conference / v.1998 no.11 / pp.57-59 / 1998
  • For medical students and doctors, knowledge of the three-dimensional (3D) structure of the brain is very important in the diagnosis and treatment of brain diseases. Two-dimensional (2D) tools (e.g., anatomy books) and traditional 3D tools (e.g., plastic models) are not sufficient to understand the complex structures of the brain, yet dissecting the brain of a cadaver is not always possible when needed. To overcome this problem, virtual dissection programs of the brain have been developed. However, most programs include only 2D images, which do not permit free dissection or free rotation, and many are made of radiographs, which are not as realistic as a sectioned cadaver because they do not show true color and have limited resolution. It is also necessary to make virtual dissection programs for each race and ethnic group. We therefore attempted to make a virtual dissection program using a 3D image of the brain from a Korean cadaver. The purpose of this study is to present an educational tool for those interested in the anatomy of the brain. The procedures were as follows. A brain extracted from a 58-year-old male Korean cadaver was embedded in gelatin solution and serially sectioned at 1.4 mm thickness using a meat slicer. The 130 sectioned specimens were input to the computer using a scanner ($420\times456$ resolution, true color), and the 2D images were aligned with an alignment program written in the IDL language. Outlines of the brain components (cerebrum, cerebellum, brain stem, lentiform nucleus, caudate nucleus, thalamus, optic nerve, fornix, cerebral artery, and ventricle) were manually drawn from the 2D images in CorelDRAW. Multimedia data, including text and voice comments, were added to help the user learn about the brain components. 3D images of the brain were reconstructed through volume-based rendering of the 2D images. Using the 3D image of the brain as the main feature, the virtual dissection program was written in the IDL language. Various dissection functions were established, such as sectioning the 3D image of the brain at a free angle to show its cut plane, presenting multimedia data of brain components, and rotating the 3D image of the whole brain or of selected brain components at a free angle. This virtual dissection program is expected to become more advanced and to be widely used, through the Internet or on CD, as an educational tool for medical students and doctors.


Stereoscopic Free-viewpoint Tour-Into-Picture Generation from a Single Image (단안 영상의 입체 자유시점 Tour-Into-Picture)

  • Kim, Je-Dong;Lee, Kwang-Hoon;Kim, Man-Bae
    • Journal of Broadcast Engineering / v.15 no.2 / pp.163-172 / 2010
  • Free-viewpoint video delivers active content in which users see images rendered from viewpoints they choose. Its applications span broad areas, especially museum tours and entertainment. As a new free-viewpoint application, this paper presents a stereoscopic free-viewpoint TIP (Tour Into Picture) in which users can navigate the inside of a single image by controlling a virtual camera and utilizing depth data. Unlike conventional TIP methods, which provide a 2D image or video, the proposed method provides users with 3D stereoscopic free-viewpoint content; navigating a picture with stereoscopic viewing can deliver a more realistic and immersive perception. The method first uses semi-automatic processing to make a foreground mask, a background image, and a depth map. The second step is to navigate the single picture and obtain rendered images by perspective projection. For free-viewpoint viewing, a virtual camera is operated whose operations include translation, rotation, look-around, and zooming. In experiments, the proposed method was tested with 'Danopungjun', one of the famous paintings of the Chosun Dynasty. The free-viewpoint software was developed with MFC Visual C++ and the OpenGL libraries.
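The rendering step above, perspective projection through a translated and rotated virtual camera, can be sketched as follows. The single-axis rotation and the parameter values are simplifications for illustration, not the paper's full camera model:

```python
import math

def project(point, cam_pos, yaw, focal):
    """Project a 3D point through a virtual camera translated to
    `cam_pos` and rotated by `yaw` about the vertical axis
    (a simplified stand-in for a full look-around camera)."""
    # Move the point into camera coordinates: translate, then rotate.
    x = point[0] - cam_pos[0]
    y = point[1] - cam_pos[1]
    z = point[2] - cam_pos[2]
    c, s = math.cos(yaw), math.sin(yaw)
    xc = c * x - s * z
    zc = s * x + c * z
    # Perspective divide onto the image plane at distance `focal`.
    return (focal * xc / zc, focal * y / zc)

# A point 10 units ahead of an un-rotated camera at the origin.
print(project((1.0, 2.0, 10.0), (0.0, 0.0, 0.0), 0.0, 1.0))  # → (0.1, 0.2)
```

Translation and rotation of the camera move `cam_pos` and `yaw` over time; zooming corresponds to changing `focal`.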

Analysis and Compensation of Time Synchronization Error on SAR Image (시각 동기화 오차가 SAR 영상에 미치는 영향 분석 및 보상)

  • Lee, Soojeong;Park, Woo Jung;Park, Chan Gook;Song, Jong-Hwa;Bae, Chang-Sik
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.48 no.4 / pp.285-293 / 2020
  • In this paper, to improve Synthetic Aperture Radar (SAR) image quality, the effect of time synchronization error in an EGI/IMU (Embedded GPS/INS, Inertial Measurement Unit) integrated system is analyzed, and state augmentation is applied to compensate for it. The EGI/IMU integrated system, consisting of an EGI mounted to obtain the trajectory and an IMU mounted on the SAR antenna, is widely used for SAR motion measurement. In such a system, a time synchronization error occurs when the clocks of the sensors are not synchronized. Analysis of its effect on navigation solutions and SAR images confirmed that the time synchronization error deteriorates SAR image quality. State augmentation is applied to compensate for it, and as a result the SAR image quality does not degrade. In addition, analysis of the estimation performance and the observability of the time synchronization error under different maneuvers confirmed that a time-variant maneuver, such as rotational motion, is necessary to estimate the error adequately. Therefore, to reduce the influence of the time synchronization error on the SAR image, the error should be estimated and compensated by performing a time-varying maneuver such as a rotation before SAR operation.
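State augmentation here means adding the unknown clock offset to the filter state. A toy 1-D Kalman filter sketch, not the paper's EGI/IMU filter: the motion profile, noise levels, and offset value are made up. To first order a delayed measurement reads z = p + v·δt, so the offset enters only through velocity, which is why a time-varying (rotational-like) velocity is needed for observability, as the abstract argues:

```python
import math
import random

random.seed(0)
T = 0.1                      # sample period [s] (assumed)
true_dt = 0.05               # true time synchronization error [s] (assumed)
p_true = 0.0                 # true 1-D position
x = [0.0, 0.0]               # augmented state estimate [position, dt]
P = [[1.0, 0.0], [0.0, 1.0]] # state covariance
R = 0.01                     # measurement noise variance

for k in range(400):
    v = math.sin(0.5 * k * T)          # time-varying velocity profile
    p_true += v * T
    z = p_true + v * true_dt + random.gauss(0.0, math.sqrt(R))
    # Predict: position integrates the known velocity, offset is constant.
    x[0] += v * T
    # Update with H = [1, v]; a constant v would make dt unobservable.
    H = [1.0, v]
    PHt = [P[0][0] * H[0] + P[0][1] * H[1], P[1][0] * H[0] + P[1][1] * H[1]]
    S = H[0] * PHt[0] + H[1] * PHt[1] + R
    K = [PHt[0] / S, PHt[1] / S]
    innov = z - (H[0] * x[0] + H[1] * x[1])
    x[0] += K[0] * innov
    x[1] += K[1] * innov
    # Covariance update P <- (I - K H) P.
    A = [[1 - K[0] * H[0], -K[0] * H[1]], [-K[1] * H[0], 1 - K[1] * H[1]]]
    P = [[A[0][0] * P[0][0] + A[0][1] * P[1][0], A[0][0] * P[0][1] + A[0][1] * P[1][1]],
         [A[1][0] * P[0][0] + A[1][1] * P[1][0], A[1][0] * P[0][1] + A[1][1] * P[1][1]]]

print(round(x[1], 3))  # estimate should approach the true 0.05 s offset
```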

A Study on the Image and Visual Preference for the Seongpanak District at Mt. Hallasan (한라산 성판악 등산로 주변 경관이미지 및 선호도 특성에 관한 연구)

  • Kim, Sei-Cheon;Huh, Joon
    • Korean Journal of Environment and Ecology / v.21 no.2 / pp.134-140 / 2007
  • The purpose of this study is to investigate the landscape image and visual preference of ridges in the Seongpanak district of Mt. Hallasan. For this, evaluations of the artificial and natural landscapes were compared through the medium of color slides. The data were analyzed with descriptive statistics, and the spatial image was analyzed by factor analysis; principal component analysis and the Varimax method were applied for factor extraction and factor rotation, respectively. The results can be summarized as follows: the general visual images of the Seongpanak district of Mt. Hallasan are clean, beautiful, and attractive. The degree of visual preference increased commensurately with a lower rate of artificial factors. The landscape factors covering the spatial image were found to be 'aesthetic value', 'spatial scale', 'natural quality', and 'topography', which together account for 57.6% of the total variance. The aesthetic value variable is the most important factor in visual preference, and the unnatural factors were found to be negative elements of visual preference.
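Principal component extraction of the kind used above reduces, in the two-variable case, to a closed-form eigendecomposition of the covariance matrix. A toy Python sketch with made-up ratings (real factor analysis runs on many items and would be followed by a Varimax rotation, omitted here):

```python
import math

def principal_axes(xs, ys):
    """Closed-form principal component analysis of two variables: the
    eigenvalues of the 2x2 covariance matrix and the share of total
    variance explained by the first component."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) ** 2 for x in xs) / n                    # var(x)
    c = sum((y - my) ** 2 for y in ys) / n                    # var(y)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n  # cov(x, y)
    mean, diff = (a + c) / 2, math.hypot((a - c) / 2, b)
    lam1, lam2 = mean + diff, mean - diff                     # eigenvalues
    return lam1, lam2, lam1 / (lam1 + lam2)

# Hypothetical, strongly correlated ratings for two landscape items.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [1.1, 1.9, 3.2, 3.8]
lam1, lam2, share = principal_axes(xs, ys)
print(round(share, 3))  # first component carries almost all the variance
```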

Development of Convolutional Network-based Denoising Technique using Deep Reinforcement Learning in Computed Tomography (심층강화학습을 이용한 Convolutional Network 기반 전산화단층영상 잡음 저감 기술 개발)

  • Cho, Jenonghyo;Yim, Dobin;Nam, Kibok;Lee, Dahye;Lee, Seungwan
    • Journal of the Korean Society of Radiology / v.14 no.7 / pp.991-1001 / 2020
  • Supervised deep learning technologies for improving the image quality of computed tomography (CT) need a lot of training data, and when the input images have characteristics different from the training images, they cause structural distortion in the output images. In this study, an imaging model based on deep reinforcement learning (DRL) was developed to overcome these drawbacks of supervised deep learning and to reduce noise in CT images. The DRL model consisted of shared, value, and policy networks, and the networks included convolutional layers, rectified linear units (ReLU), dilation factors, and gated recurrent units (GRU) in order to extract noise features from CT images and improve the performance of the DRL model. The quality of the CT images obtained with the DRL model was compared to that obtained with a supervised deep learning model. The results showed that image accuracy was higher, and image noise smaller, for the DRL model than for the supervised model. The DRL model also reduced the noise of CT images whose characteristics differed from the training images. Therefore, the DRL model is able to reduce image noise while maintaining the structural information of CT images.

Accuracy Assessment of Aerial Triangulation of Network RTK UAV (네트워크 RTK 무인기의 항공삼각측량 정확도 평가)

  • Han, Soohee;Hong, Chang-Ki
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.38 no.6 / pp.663-670 / 2020
  • In the present study, we assessed the accuracy of aerial triangulation using a UAV (Unmanned Aerial Vehicle) capable of network RTK (Real-Time Kinematic) survey, in a disaster scenario that may occur in a semi-urban area mixed with buildings. For a reliable survey of the check points, they were installed on the roofs of buildings, and a static GNSS (Global Navigation Satellite System) survey was conducted for more than four hours. For objective accuracy assessment, coded aerial targets were installed on the check points so that software could recognize them automatically. At the instant of image acquisition, the 3D coordinates of the UAV camera were measured by the VRS (Virtual Reference Station) method, a kind of network RTK survey, and the three axial angles were obtained from the IMU (Inertial Measurement Unit) and the gimbal rotation measurement. After estimating and updating the interior and exterior orientation parameters in Agisoft Metashape, the 3D RMSE (Root Mean Square Error) of aerial triangulation ranged from 0.102 m to 0.153 m, depending on the combination of image overlap and image acquisition angle. Although it is common to increase the overlap of vertical images, incorporating oblique images proved more effective for achieving higher aerial triangulation accuracy. Therefore, for UAV mapping at an urgent disaster site, it is better to acquire oblique images as well than to increase image overlap.
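The 3D RMSE over check points is computed as below; the coordinates are hypothetical, chosen only to give residuals on the order of the reported values:

```python
import math

def rmse_3d(estimated, surveyed):
    """3D RMSE between aerial-triangulation coordinates and
    ground-surveyed check-point coordinates (illustrative values below)."""
    sq = [
        (xe - xs) ** 2 + (ye - ys) ** 2 + (ze - zs) ** 2
        for (xe, ye, ze), (xs, ys, zs) in zip(estimated, surveyed)
    ]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical check-point coordinates in metres (not the paper's data).
est = [(10.02, 20.01, 5.08), (30.00, 40.05, 6.01)]
ref = [(10.00, 20.00, 5.00), (30.02, 40.00, 6.00)]
print(round(rmse_3d(est, ref), 3))  # decimetre-level error, as in the study
```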

Calibration of Thermal Camera with Enhanced Image (개선된 화질의 영상을 이용한 열화상 카메라 캘리브레이션)

  • Kim, Ju O;Lee, Deokwoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.4 / pp.621-628 / 2021
  • This paper proposes a method to calibrate a thermal camera with three different perspectives. In particular, the intrinsic parameters of the camera and the re-projection errors are provided to quantify the accuracy of the calibration result. The three lenses of the camera capture the same scene, but their views do not overlap, and the image resolution is worse than that of an RGB camera. In computer vision, camera calibration is one of the most important and fundamental tasks for calculating the distance between the camera(s) and a target object, or the three-dimensional (3D) coordinates of a point on a 3D object. Once calibration is complete, the intrinsic and extrinsic parameters of the camera(s) are available: the intrinsic parameters comprise the focal length, the skew factor, and the principal point, while the extrinsic parameters comprise the relative rotation and translation of the camera(s). This study estimated the intrinsic parameters of thermal cameras that have three lenses with different perspectives. In particular, image enhancement based on a deep learning algorithm was carried out to improve the quality of the calibration results. Experimental results are provided to substantiate the proposed method.
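The intrinsic parameters listed above (focal length, skew, principal point) form the matrix that maps camera coordinates to pixels; the re-projection error is then the distance between such projected points and the detected ones. A sketch with illustrative values, not the calibrated thermal camera's parameters:

```python
def project_pixel(K, point_cam):
    """Map a 3D point in camera coordinates to pixel coordinates with the
    intrinsic matrix K = [[fx, skew, cx], [0, fy, cy], [0, 0, 1]]."""
    x, y, z = point_cam
    u = (K[0][0] * x + K[0][1] * y) / z + K[0][2]
    v = K[1][1] * y / z + K[1][2]
    return u, v

# Hypothetical intrinsics: focal lengths fx = fy = 500 px, zero skew,
# principal point at (320, 240).
K = [[500.0, 0.0, 320.0],
     [0.0, 500.0, 240.0],
     [0.0,   0.0,   1.0]]
print(project_pixel(K, (0.1, -0.05, 2.0)))  # → (345.0, 227.5)
```

The re-projection error reported by a calibration run is the RMS of these pixel distances over all detected calibration-pattern corners.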

A Study on the Availability of the On-Board Imager(OBI) and Cone-Beam CT(CBCT) in the Verification of Patient Set-up (온보드 영상장치(On-Board Imager) 및 콘빔CT(CBCT)를 이용한 환자 자세 검증의 유용성에 대한 연구)

  • Bak, Jino;Park, Sung-Ho;Park, Suk-Won
    • Radiation Oncology Journal / v.26 no.2 / pp.118-125 / 2008
  • Purpose: For on-line image guided radiation therapy (on-line IGRT), kV X-ray images and cone beam CT images were obtained with an on-board imager (OBI) and cone beam CT (CBCT), respectively. The images were then compared with simulated images to evaluate the patient's setup and correct for deviations. The setup deviations between the simulated images and the kV or CBCT images were computed with 2D/2D match or 3D/3D match programs, respectively, and we investigated the correctness of the calculated deviations. Materials and Methods: After simulation and treatment planning for the RANDO phantom, the phantom was positioned on the treatment table. The phantom setup was performed with side-wall lasers, which standardized the treatment setup of the phantom against the simulated images, after tolerance limits for the laser line thickness were established. After a known translation or rotation angle was applied to the phantom, kV X-ray images and CBCT images were obtained, and 2D/2D and 3D/3D matches with the simulation CT images were performed. Lastly, the results were analyzed for the accuracy of the positional correction. Results: For the 2D/2D match using kV X-ray and simulation images, setup correction was possible within $0.06^{\circ}$ for rotation only, 1.8 mm for translation only, and 2.1 mm and $0.3^{\circ}$ for combined rotation and translation. For the 3D/3D match using CBCT images, correction was possible within $0.03^{\circ}$ for rotation only, 0.16 mm for translation only, and 1.5 mm and $0.0^{\circ}$ for combined translation and rotation. Conclusion: The use of OBI or CBCT for on-line IGRT makes it possible to reproduce the simulated setup exactly for a patient in the treatment room. Fast detection and correction of a patient's positional error is possible in two dimensions via kV X-ray images from the OBI, and in three dimensions, with higher accuracy, via CBCT. Consequently, on-line IGRT represents a promising and reliable treatment procedure.
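The 2D/2D match step estimates exactly such a rotation and translation between corresponding images. A least-squares sketch over corresponding 2D marker points, assuming known correspondences (a simplified analogue of the matching, not the vendor's registration algorithm):

```python
import math

def rigid_2d(src, dst):
    """Estimate the rotation angle and translation that best align two
    corresponding 2-D point sets in the least-squares sense."""
    n = len(src)
    sx = sum(p[0] for p in src) / n
    sy = sum(p[1] for p in src) / n
    dx = sum(q[0] for q in dst) / n
    dy = sum(q[1] for q in dst) / n
    # Angle from the centered cross- and dot-product sums.
    num = sum((p[0] - sx) * (q[1] - dy) - (p[1] - sy) * (q[0] - dx)
              for p, q in zip(src, dst))
    den = sum((p[0] - sx) * (q[0] - dx) + (p[1] - sy) * (q[1] - dy)
              for p, q in zip(src, dst))
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    # Translation maps the rotated source centroid onto the target centroid.
    tx = dx - (c * sx - s * sy)
    ty = dy - (s * sx + c * sy)
    return theta, tx, ty

# Synthetic markers rotated by 0.3 rad and shifted by (2, 1).
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
c, s = math.cos(0.3), math.sin(0.3)
dst = [(c * x - s * y + 2.0, s * x + c * y + 1.0) for x, y in src]
theta, tx, ty = rigid_2d(src, dst)
print(round(theta, 3), round(tx, 3), round(ty, 3))  # → 0.3 2.0 1.0
```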