• Title/Summary/Keyword: camera image


The Evaluation of Lateral Scatter Ray of Gamma Camera (Gamma Camera에 있어 측면 선란선의 영향에 대한 평가)

  • Kim, Jae-Il;Lee, Eun-Byeol;Cho, Seong-Wook;Noh, Kyeong-Woon;Kang, Keon-Wook
    • The Korean Journal of Nuclear Medicine Technology / v.22 no.1 / pp.46-50 / 2018
  • Purpose: A collimator mounted in front of the detector collimates incoming gamma rays and rejects scattered radiation. Nevertheless, laterally or obliquely incident scattered rays can still reach the crystal through the collimator. In this study, we evaluated the count rates and energy spectra produced by lateral scatter. Materials and Methods: We used a SKYLight (Philips, Netherlands) gamma camera with a 1.11 GBq $^{99m}Tc$ point source as a phantom, placed 50 cm behind the detector. After a 1-minute acquisition, the detector was rotated by 10 degrees; in this way images were acquired at every 10 degrees from $0^{\circ}$ to $360^{\circ}$, and the images and spectra were analyzed. For the patient study, we selected a three-phase bone scan patient with a hand disorder, since scattered rays from the body would reach the crystal. After acquiring blood-flow and blood-pool images, we analyzed the images and spectra. In addition, a lead apron was placed on the patient's hand and then on the body, and the three resulting blood-pool images (no apron, apron on the hand, apron on the body) were compared. Results: In the phantom study, scatter counts from the backside ($270^{\circ}-90^{\circ}$) matched the background count, whereas scatter from the oblique sides ($0^{\circ}-50^{\circ}$, $220^{\circ}-270^{\circ}$) produced 100-600 cps, and counts at the frontside exceeded 4 Mcps. In the patient study, the hand blood-pool scan yielded 1,510 cps; with the lead apron on the hand and on the body, the counts were 1,554 cps and 1,299 cps, respectively. Conclusion: Even with a collimator in front of the detector, lateral scatter reaches the crystal and affects images and spectra. In particular, when imaging low-activity regions such as the hands or feet while a high-activity source lies outside the field of view, that source should be shielded or removed to obtain a good image.
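The angle sweep above can be sketched as a small summary script: count rates recorded at 10-degree steps are grouped into the angular sectors the abstract names, and the peak rate per sector is reported. The sector boundaries follow the abstract, but the cps values below are illustrative placeholders, not the measured data.

```python
def summarize_by_sector(counts, sectors):
    """counts: {angle_deg: cps}; sectors: {name: list of angles}.
    Returns the peak count rate observed in each named sector."""
    return {name: max(counts[a] for a in angles)
            for name, angles in sectors.items()}

# Hypothetical count-rate table (cps) at 10-degree steps: background
# everywhere, elevated oblique scatter, and a very high frontside rate.
counts = {a: 50 for a in range(0, 360, 10)}
for a in list(range(0, 60, 10)) + list(range(220, 270, 10)):
    counts[a] = 400                      # oblique scatter, 100-600 cps range
for a in range(90, 220, 10):
    counts[a] = 4_000_000                # detector facing the source

sectors = {
    "oblique": list(range(0, 60, 10)) + list(range(220, 270, 10)),
    "frontside": list(range(90, 220, 10)),
}
peaks = summarize_by_sector(counts, sectors)
```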

System Development for Measuring Group Engagement in the Art Center (공연장에서 다중 몰입도 측정을 위한 시스템 개발)

  • Ryu, Joon Mo;Choi, Il Young;Choi, Lee Kwon;Kim, Jae Kyeong
    • Journal of Intelligence and Information Systems / v.20 no.3 / pp.45-58 / 2014
  • Korean cultural content is spreading worldwide as the Korean Wave sweeps the globe, and such content stands at the center of that wave. Each country is working to develop its culture industry to improve its national brand and create high added value. Performing-arts content is an important driver of arousal in the entertainment industry. For advertisers, building strong confidence in a product and a positive public attitude is essential, and cultural content is no different: content that audiences trust is spread by word of mouth. Accordingly, many researchers have tried to measure individual arousal through statistical surveys, physiological responses, body movement, and facial expression. First, statistical surveys cannot measure each person's arousal in real time, and surveys taken after viewing yield poor results. Second, physiological measurement requires sensors installed at each subject's seat or surroundings, and it is difficult to process the resulting volume of sensor data in real time. Third, body movement is easy to capture with a camera, but it is difficult to set up the experimental conditions and to measure and interpret body language. Lastly, many researchers study facial expression, measuring expressions, eye tracking, and head pose. Most previous studies of arousal and interest are limited to the reaction of a single person and are hard to apply to multiple audience members: they demand particular conditions, such as controlled room lighting, and are restricted to one person in a special laboratory environment. Moreover, arousal must be measured during the content itself, which is hard to define, and it is not easy to collect reactions immediately from the many audience members watching a performance in a theater.
We propose a system that measures multi-audience reactions in real time during a performance. We use difference-image analysis for the audience, but this method is weak in a dark venue; to overcome the dark environment during recording, an IR camera captures images in the dark area. In addition, we present a Multi-Audience Engagement Index (MAEI), computed by an algorithm from the sound, audience movement, and eye-tracking values. The algorithm estimates audience arousal from the mobile survey, the sound level, audience reactions, and eye tracking. To improve the accuracy of the MAEI, we compare it against the mobile survey; the result is then sent to a reporting system and presented to interested parties. Mobile surveys are easy and fast, minimize visitor discomfort, and can provide additional information. The mobile application communicates with a database that stores real-time information on visitors' attitudes toward the content and can serve a different survey each time based on the information collected. Example survey items include: Impressive scene, Satisfied, Touched, Interested, and Didn't pay attention. The proposed system consists of three parts: an External Device, a Server, and an Internal Device. The External Device records the audience in the dark with an IR camera and captures the sound signal; the mobile survey application also sends its data to the server database. The Server holds the content data, such as per-scene weights and group-audience weight indices, together with the camera control program and the algorithm, and computes the MAEI. The Internal Device presents the MAEI through a web UI, in print, and on a field monitor. Our system was test-operated by MogenceLab in the DMC exhibition hall in Sangam-dong, Mapo-gu, Seoul, where visitor data is still being collected daily. If this system can identify audience arousal factors, it will be very useful for creating content.
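The combination step described above can be illustrated with a small sketch: the signals the abstract names (sound level, audience movement, eye tracking, mobile survey) are merged into one engagement index. The weights, the normalization to [0, 1], and the function name are assumptions for illustration; the paper's actual algorithm and per-scene weights are not given in the abstract.

```python
def maei(sound, movement, gaze, survey, weights=(0.3, 0.3, 0.2, 0.2)):
    """Hypothetical Multi-Audience Engagement Index: a weighted sum of
    signals, each normalised to [0, 1]; returns an index in [0, 1]."""
    signals = (sound, movement, gaze, survey)
    if not all(0.0 <= s <= 1.0 for s in signals):
        raise ValueError("signals must be normalised to [0, 1]")
    return sum(w * s for w, s in zip(weights, signals))

# One frame's worth of (synthetic) normalised signals.
index = maei(sound=0.8, movement=0.6, gaze=0.7, survey=0.9)
```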

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.1-25 / 2020
  • In this paper, we propose an application system architecture that provides accurate, fast, and efficient automatic gasometer reading. The system captures a gasometer image using a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount by selective optical character recognition based on deep learning. In general, an image contains many types of characters, and optical character recognition extracts all of them; some applications, however, need to ignore characters that are not of interest and focus only on specific types. For example, an automatic gasometer reading system needs to extract only the device ID and gas usage amount from gasometer images in order to bill users; strings such as the device type, manufacturer, manufacturing date, and specification are of no value to the application. The application therefore has to analyze only the region of interest and the specific character types to extract valuable information. We adopted CNN (Convolutional Neural Network)-based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the region of interest for selective character extraction. We built three neural networks for the application system.
The first is a convolutional neural network that detects the regions of interest containing the gas usage amount and device ID character strings; the second is another convolutional neural network that transforms the spatial information of a region of interest into sequential feature vectors; and the third is a bidirectional long short-term memory network that converts those sequential features into character strings via time-series mapping from feature vectors to characters. In this work, the strings of interest are the device ID and the gas usage amount: the device ID consists of 12 Arabic digits and the gas usage amount of 4-5 Arabic digits. All system components are implemented on Amazon Web Services with Intel Xeon E5-2686 v4 CPUs and NVIDIA Tesla V100 GPUs. The architecture adopts a master-slave processing structure for efficient, fast parallel processing that copes with about 700,000 requests per day. A mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process, running on the Intel Xeon CPU, pushes each reading request from a mobile device onto an input queue with a FIFO (First In First Out) structure. A slave process comprises the three deep neural networks that perform character recognition and runs on an NVIDIA GPU. Each slave process continuously polls the input queue for recognition requests; when a request arrives, the slave converts the queued image into the device ID string, the gas usage amount string, and the positions of those strings, returns this information to an output queue, and switches back to polling the input queue. The master process retrieves the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three deep neural networks: 22,985 images for training and validation and 4,135 for testing. The 22,985 images were randomly split 8:2 into training and validation sets for each training epoch. The 4,135 test images were categorized into five types (normal, noise, reflex, scale, and slant): normal means clean images, noise means images with noise, reflex means images with light reflections in the gasometer region, scale means images with a small object size due to long-distance capture, and slant means images that are not horizontally level. The final character-string recognition accuracies for the device ID and gas usage amount on normal data are 0.960 and 0.864, respectively.
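The master-slave queue structure described above can be sketched as follows: the master pushes reading requests onto a FIFO input queue, a slave worker polls the queue, runs recognition, and posts results to an output queue. The `recognize` function here is a stand-in for the three-network pipeline (detector, CNN feature extractor, bidirectional LSTM), and the file names and dummy strings are illustrative assumptions.

```python
import queue
import threading

input_q = queue.Queue()     # FIFO by construction: reading requests
output_q = queue.Queue()    # recognised (image, device ID, usage) tuples

def recognize(image_name):
    """Placeholder for detection + CRNN decoding of device ID / usage."""
    return ("123456789012", "0042")          # dummy 12-digit ID, 4-digit usage

def slave_worker():
    while True:
        image = input_q.get()                # blocking poll of the input queue
        if image is None:                    # sentinel -> shut down
            break
        device_id, usage = recognize(image)
        output_q.put((image, device_id, usage))
        input_q.task_done()

worker = threading.Thread(target=slave_worker)
worker.start()
for name in ["meter_001.jpg", "meter_002.jpg"]:
    input_q.put(name)                        # master pushes requests
input_q.join()                               # wait until all are processed
input_q.put(None)
worker.join()
results = [output_q.get() for _ in range(2)]
```

With a single worker draining a FIFO queue, results come back in request order; the real system runs many GPU slaves in parallel, so ordering would have to be recovered from the request ID.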

Moving Object Tracking Using MHI and M-bin Histogram (MHI와 M-bin Histogram을 이용한 이동물체 추적)

  • Oh, Youn-Seok;Lee, Soon-Tak;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology / v.9 no.1 / pp.48-55 / 2005
  • In this paper, we propose an efficient moving-object tracking technique for a multi-camera surveillance system. The color CCD cameras used in this system are network cameras with their own IP addresses. Input images are transmitted to the media server through wireless connections among the server, a bridge, and an Access Point (AP). The tracking system sends the received images over the network to the tracking module, which tracks moving objects in real time using a color-matching method. We compose two sets of cameras, and when the object leaves the field of view (FOV), a hand-over is performed so that tracking can continue. During hand-over, we use an MHI (Motion History Image) based on color information together with an M-bin histogram for accurate tracking. From the MHI we can calculate the direction and velocity of the object, and this information helps predict its next location. As a result, we obtain better speed and stability than template matching based on the M-bin histogram alone, and we verified this result experimentally.
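The MHI update that the abstract relies on can be sketched in a few lines: pixels where motion is detected are stamped with the current timestamp, and entries older than a fixed duration fade out, so the gradient of the resulting image encodes motion direction. The array shapes, timestamps, and duration below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def update_mhi(mhi, motion_mask, timestamp, duration):
    """Standard Motion History Image update: stamp moving pixels with the
    current time, then zero out entries older than `duration`."""
    mhi = np.where(motion_mask, timestamp, mhi)
    mhi[mhi < timestamp - duration] = 0.0    # forget stale motion
    return mhi

mhi = np.zeros((4, 4))
mask1 = np.zeros((4, 4), dtype=bool); mask1[1, 1] = True
mhi = update_mhi(mhi, mask1, timestamp=1.0, duration=2.0)
mask2 = np.zeros((4, 4), dtype=bool); mask2[2, 2] = True
mhi = update_mhi(mhi, mask2, timestamp=4.0, duration=2.0)
# the t=1.0 stamp at (1,1) has expired; (2,2) now holds t=4.0
```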


Estimating the Spatial Distribution of Rumex acetosella L. on Hill Pasture using UAV Monitoring System and Digital Camera (무인기와 디지털카메라를 이용한 산지초지에서의 애기수영 분포도 제작)

  • Lee, Hyo-Jin;Lee, Hyowon;Go, Han Jong
    • Journal of The Korean Society of Grassland and Forage Science / v.36 no.4 / pp.365-369 / 2016
  • Red sorrel (Rumex acetosella L.), one of the exotic weeds in Korea, dominates grassland and reduces forage quality. Improving pasture productivity through precision management requires practical tools for collecting site-specific weed data. Recent developments in unmanned aerial vehicle (UAV) technology offer cost-effective, real-time applications for site-specific data collection. To map red sorrel on a hill pasture, we tested the potential of a UAV system with digital cameras (visible and near-infrared (NIR)). Field measurements were conducted on a grazed hill pasture at the Hanwoo Improvement Office, Seosan City, Chungcheongnam-do Province, Korea, on May 17, 2014. Plant samples were collected at 20 sites. The UAV system acquired aerial photos from a height of approximately 50 m (approximately 30 cm spatial resolution). Normalized digital-number values of the red, green, blue, and NIR channels were extracted from the photos. Multiple linear regression analysis showed a correlation coefficient of 0.96 between Rumex content and the four UAV image bands, with a root mean square error of 9.3. Therefore, a UAV monitoring system can be a quick and cost-effective tool for mapping the spatial distribution of red sorrel for precision management of hilly grazing pasture.
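The regression step described above can be sketched as follows: weed content measured at the sampling sites is regressed on the normalized red, green, blue, and NIR digital numbers by ordinary least squares, and the multiple correlation coefficient and RMSE are reported. The band values, coefficients, and contents below are synthetic placeholders, not the field data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                                          # 20 sampling sites, as in the study
bands = rng.uniform(0.0, 1.0, size=(n, 4))      # normalised R, G, B, NIR per site
true_coef = np.array([40.0, -25.0, 10.0, 55.0]) # synthetic ground truth
content = bands @ true_coef + 5.0 + rng.normal(0.0, 1.0, n)

X = np.column_stack([np.ones(n), bands])        # add intercept column
coef, *_ = np.linalg.lstsq(X, content, rcond=None)
pred = X @ coef
r = float(np.corrcoef(content, pred)[0, 1])     # multiple correlation coefficient
rmse = float(np.sqrt(np.mean((content - pred) ** 2)))
```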

Analysis for Concentration Range of Fluorescein Sodium (플루오레신나트륨의 농도 범위 분석)

  • Lee, Da-Ae;Kim, Yong-Jae;Yoon, Ki-Cheol;Kim, Kwang-Gi
    • Journal of Biomedical Engineering Research / v.41 no.2 / pp.67-74 / 2020
  • Brain tumors and gliomas are fatal cancers with high recurrence rates due to their strong invasiveness, so the goal of surgery is complete tumor resection. During surgery, however, it is difficult to distinguish boundaries because tumors and blood vessels have the same color tone and shape. Fluorescein sodium is used as a fluorescence contrast agent for boundary separation: when an external light source irradiates the field, the tumor fluoresces yellow, which helps distinguish vessels from tumor boundaries. However, the fluorescence expression depends on the concentration of fluorescein sodium, and such analytical data are insufficient. Unclear fluorescence can obscure the boundaries between vessels and tumors and also reduces the efficiency of fluorescein sodium use. This paper proposes a concentration-range protocol for fluorescence expression conditions. Fluorescence expression was observed with a near-infrared (NIR) color camera after dilution with normal saline in 1 ml microtubes. The fluorescence emission concentration range examined was 1.00 mM to 0.15 mM: emission begins at 1.00 mM, while at 0.15 mM the solution discolors, making the fluorescence emission condition difficult to observe. Thus, the concentration range of the brightest fluorescence is 0.15 mM to 0.30 mM. When the concentration range of fluorescein sodium is analyzed from the gradient of fluorescence expression and the power measurement, the brightest fluorescence is expected to facilitate complete resection of the tumor. For the protocol, it is important to set concentration ranges and analyze the fluorescence images by saturation and brightness to find the optimal concentration. Such concentration-range protocols can be used to find optimal concentrations of substances whose expression pattern varies with concentration. This study is expected to aid the boundary classification and resection of brain tumors and gliomas.
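Preparing the dilution series described above is a simple C1·V1 = C2·V2 calculation per microtube. The 10 mM stock concentration below is an assumption for illustration; the abstract does not state the stock used.

```python
def stock_volume_ul(stock_mM, target_mM, final_ul=1000.0):
    """Volume of stock (microlitres) to pipette into one 1 ml tube,
    topped up with normal saline, so that C1*V1 = C2*V2 holds."""
    if target_mM > stock_mM:
        raise ValueError("cannot dilute upward")
    return target_mM / stock_mM * final_ul

# From a hypothetical 10 mM stock: 1.00 mM needs 100 ul stock + 900 ul saline,
# and the 0.15 mM endpoint needs only 15 ul stock per tube.
v_top = stock_volume_ul(10.0, 1.00)
v_bottom = stock_volume_ul(10.0, 0.15)
```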

Coastal Shallow-Water Bathymetry Survey through a Drone and Optical Remote Sensors (드론과 광학원격탐사 기법을 이용한 천해 수심측량)

  • Oh, Chan Young;Ahn, Kyungmo;Park, Jaeseong;Park, Sung Woo
    • Journal of Korean Society of Coastal and Ocean Engineers / v.29 no.3 / pp.162-168 / 2017
  • A shallow-water bathymetry survey was conducted using high-definition color images obtained by a drone at an altitude of 100 m above sea level. Shallow-water bathymetry is among the most important input data for research on beach erosion. Accurate bathymetry within the closure depth is especially critical, because most of the phenomena of interest occur in the surf zone; however, it is extremely difficult to obtain accurate data there due to wave-induced currents and breaking waves. Optical remote sensing using a small drone is therefore an attractive alternative. This paper presents the potential of image-processing algorithms that apply multi-variable linear regression to the red, green, blue, and grey band images to estimate shallow-water depth from a drone with an HD camera. Optical remote-sensing analysis conducted at Wolpo Beach showed promising results: estimated water depths within 5 m showed a correlation coefficient of 0.99 and a maximum error of 0.2 m compared with depths surveyed manually and by ship-board echo-sounder.
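The depth-estimation step above can be sketched as a per-pixel regression: a multi-variable linear model fitted on points of known depth is applied to the red, green, blue, and grey bands of every pixel to produce a depth map. The imagery, coefficients, and survey points below are synthetic placeholders (noiseless, so the fit recovers the synthetic model almost exactly), not the Wolpo Beach data.

```python
import numpy as np

rng = np.random.default_rng(1)
h, w = 32, 32
bands = rng.uniform(0.0, 1.0, size=(h, w, 4))      # R, G, B, grey per pixel
true_coef = np.array([-3.0, -1.5, 4.0, 2.0])
depth = bands @ true_coef + 1.0                    # synthetic "true" depth field

# Fit on a handful of surveyed pixels (echo-sounder / manual survey points).
ys, xs = rng.integers(0, h, 50), rng.integers(0, w, 50)
X = np.column_stack([np.ones(50), bands[ys, xs]])
coef, *_ = np.linalg.lstsq(X, depth[ys, xs], rcond=None)

# Predict depth at every pixel from its band values.
flat = bands.reshape(-1, 4)
depth_map = (np.column_stack([np.ones(flat.shape[0]), flat]) @ coef).reshape(h, w)
max_err = float(np.abs(depth_map - depth).max())
```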

Development of a Portable Device Based Wireless Medical Radiation Monitoring System (휴대용 단말 기반 의료용 무선 방사선 모니터링 시스템 개발)

  • Park, Hye Min;Hong, Hyun Seong;Kim, Jeong Ho;Joo, Koan Sik
    • Journal of Radiation Protection and Research / v.39 no.3 / pp.150-158 / 2014
  • Radiation-related practitioners and patients at medical institutions are inevitably exposed to radiation during diagnosis and treatment. Although maximum-dose standards are recommended by the International Commission on Radiological Protection (ICRP) and the International Atomic Energy Agency (IAEA), more direct and readily available measurement and analysis methods are needed for optimal exposure management of potentially exposed practitioners and patients. In this study we therefore developed a system for real-time remote radiation monitoring that works with existing portable devices. The monitoring system comprises three parts: detection, imaging, and transmission. For miniaturization of the detection part, a scintillation detector was designed around a silicon photomultiplier (SiPM). The imaging part uses a wireless charge-coupled device (CCD) camera module alongside the detection part to transmit a radiation image and measured data through the transmission part to a Bluetooth-enabled portable device. To evaluate the system, diagnostic X-ray generators and sources of $^{137}Cs$, $^{22}Na$, $^{60}Co$, $^{204}Tl$, and $^{90}Sr$ were used. We verified the response to gamma, beta, and X-ray radiation and determined that the error in response linearity with respect to radiation strength, and in the detection-accuracy evaluation over measurement distance using the MCNPX code, is less than 3%. We hope these results will contribute to cost savings in radiation detection system configuration and to individual exposure management.

Acquisition of Subcentimeter GSD Images Using UAV and Analysis of Visual Resolution (UAV를 이용한 Subcentimeter GSD 영상의 취득 및 시각적 해상도 분석)

  • Han, Soohee;Hong, Chang-Ki
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.35 no.6 / pp.563-572 / 2017
  • The purpose of this study is to investigate the effects of flight height, flight speed, camera shutter exposure time, and autofocusing on the visual resolution of images, in order to obtain ultra-high-resolution images with a GSD of less than 1 cm. It also evaluates how easily various types of aerial targets can be recognized. For this purpose, we measured visual resolution using a 7952×5304-pixel 35 mm CMOS sensor and a 55 mm prime lens at 20 m intervals from 20 m to 120 m above ground. With autofocusing, the measured visual resolution was 1.1-1.6 times the theoretical GSD; without autofocusing, 1.5-3.5 times. Next, the camera was operated at 80 m above ground at a constant flight speed of 5 m/s while the exposure time was halved step by step from 1/60 s to 1/2000 s. Assuming blur is allowed within one pixel, the visual resolution is 1.3-1.5 times the theoretical GSD when the exposure time stays within the longest allowable exposure, and 1.4-3.0 times when it does not. If aerial targets printed on A4 paper are shot from within 80 m above ground, coded targets can be recognized automatically by commercial software, and both general and coded targets can be recognized manually with ease.
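The two quantities the abstract works with can be computed directly: the theoretical GSD is pixel pitch × height / focal length, and motion blur stays within one pixel while speed × exposure time ≤ GSD. The 36 mm sensor width below is an assumption (a full-frame "35 mm" sensor is conventionally 36 × 24 mm); the pixel count and focal length come from the abstract.

```python
def theoretical_gsd_m(height_m, focal_mm=55.0, sensor_mm=36.0, px=7952):
    """Theoretical ground sample distance: pixel pitch * height / focal length."""
    pitch_mm = sensor_mm / px              # pixel pitch on the sensor
    return pitch_mm / focal_mm * height_m

def max_exposure_s(height_m, speed_mps):
    """Longest exposure keeping motion blur within one pixel at this height."""
    return theoretical_gsd_m(height_m) / speed_mps

gsd_80 = theoretical_gsd_m(80.0)     # ~0.0066 m: sub-centimetre at 80 m
gsd_120 = theoretical_gsd_m(120.0)   # still just under 1 cm at 120 m
t_max = max_exposure_s(80.0, 5.0)    # ~1/760 s at 5 m/s, so 1/1000 s is safe
```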

Illuminant Color Estimation Method Using Valuable Pixels (중요 화소들을 이용한 광원의 색 추정 방법)

  • Kim, Young-Woo;Lee, Moon-Hyun;Park, Jong-Il
    • Journal of Broadcast Engineering / v.18 no.1 / pp.21-30 / 2013
  • Most image processing is challenging when the light source is unknown: the color of the light source must be estimated in order to compensate for color changes. Estimating the illuminant color requires an additional assumption, so we assume a color distribution that depends on the light source; if pixels that do not satisfy the assumption are used, the estimation fails to produce an accurate result. The most popular color-distribution assumption is the Grey-World Assumption (GWA): in each scene, the surface reflectance averages to grey (an achromatic color) over the entire image. In this paper, we analyze the characteristics of the camera response function and the effect of the Grey-World Assumption on pixel values and chromaticity, based on the inherent characteristics of the light source. We also propose a novel method that detects the pixels that matter for estimating the illuminant color. Our method first assigns weights to pixels that satisfy the assumption, and then applies a modified max-RGB detection to the weighted pixels: the maximum-weighted pixels in the column and row directions of each channel are detected. The performance of our method is verified on several real scenes, and the proposed method estimates the color of the light more accurately than previous methods.
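The two classical estimators the abstract builds on can be stated in a few lines each: Grey-World takes the per-channel mean, and max-RGB takes the per-channel maximum, as the illuminant estimate. The proposed weighted-pixel refinement is not reproduced here; this is only a sketch of the baselines on a synthetic image lit by a known illuminant.

```python
import numpy as np

def grey_world(img):
    """Illuminant estimate: per-channel mean, normalised to unit maximum."""
    e = img.reshape(-1, 3).mean(axis=0)
    return e / e.max()

def max_rgb(img):
    """Illuminant estimate: per-channel maximum, normalised to unit maximum."""
    e = img.reshape(-1, 3).max(axis=0)
    return e / e.max()

rng = np.random.default_rng(2)
reflectance = rng.uniform(0.0, 1.0, size=(64, 64, 3))  # grey on average
illuminant = np.array([1.0, 0.8, 0.6])                 # warm light, unit max
img = reflectance * illuminant                         # rendered scene

est = grey_world(img)   # close to the true illuminant for a grey-world scene
```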