• Title/Summary/Keyword: Automatic measuring system


HVL Measurement of the Miniature X-Ray Tube Using Diode Detector (다이오드 검출기를 이용한 초소형 X선관(Miniature X-ray Tube)의 반가층 측정)

  • Kim, Ju-Hye;An, So-Hyeon;Oh, Yoon-Jin;Ji, Yoon-Seo;Huh, Jang-Yong;Kang, Chang-Mu;Suh, Hyunsuk;Lee, Rena
    • Progress in Medical Physics
    • /
    • v.23 no.4
    • /
    • pp.279-284
    • /
    • 2012
  • X-rays are widely used in both diagnosis and treatment. Recently, a miniature X-ray tube has been developed for radiotherapy. The miniature X-ray tube is inserted directly into the body to be irradiated, so that X-rays can be guided to a target at various incident angles according to the collimator geometry, thus minimizing patient dose. If these features of the miniature X-ray tube can be applied to X-ray imaging as well as radiation treatment, it is expected to open a new chapter in diagnostic X-ray. However, because it was designed for radiotherapy, the miniature X-ray tube requires an added filter and a collimator for diagnostic purposes. Therefore, a collimator and an added filter were manufactured for the miniature X-ray tube and mounted on it. In this study, we evaluated the beam characteristics of the miniature X-ray tube for a diagnostic X-ray system and the accuracy of measuring the HVL (half-value layer). We used a Si PIN photodiode detector (Piranha, RTI, Sweden) and estimated the HVL of the miniature X-ray tube with and without the added filter. Through a second measurement using Al filters, we evaluated the accuracy of the HVL obtained directly from the automatic HVL calculation function provided by the Piranha detector. As a result, the HVL of the miniature X-ray tube increased by about 1.9 times with the added filter mounted, demonstrating that the HVL was suitable for a diagnostic X-ray system. Without the added filter, the HVL obtained from the Piranha detector's automatic calculation function was 50% higher than the HVL estimated using Al filters, so the automatic measurement cannot be used for HVL calculation in that configuration. With the added filter mounted, however, the automatic HVL measurement was approximately 15% lower than the value estimated using Al filters.
This implies that the automatic measurement can be used to estimate the HVL of the miniature X-ray tube with the added filter mounted, without the more complicated measurement method using Al filters. The automatic HVL measurement provided by the Piranha detector is expected to make kV X-ray beam characterization easier.
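The HVL in the study above is the Al thickness that halves the measured dose. As a minimal sketch of how such a value is estimated from attenuation readings (the thickness/dose data and function name below are hypothetical illustrations, not measurements from the paper), the 50% point can be found by log-linear interpolation between the two bracketing filter thicknesses:

```python
import math

def hvl_from_attenuation(thicknesses_mm, doses):
    """Estimate the half-value layer by log-linear interpolation between
    the two measurements that bracket 50% transmission (assumes roughly
    exponential attenuation between adjacent points)."""
    half = doses[0] / 2.0  # doses[0] is the unattenuated reading
    pairs = list(zip(thicknesses_mm, doses))
    for (t1, d1), (t2, d2) in zip(pairs, pairs[1:]):
        if d1 >= half >= d2:
            frac = (math.log(d1) - math.log(half)) / (math.log(d1) - math.log(d2))
            return t1 + frac * (t2 - t1)
    raise ValueError("transmission never falls below 50%")

# illustrative readings (hypothetical): mm of added Al vs. relative dose
t = [0.0, 0.5, 1.0, 1.5, 2.0]
d = [100.0, 78.0, 61.0, 48.0, 38.0]
print(round(hvl_from_attenuation(t, d), 2))
```

An automatic HVL readout, such as the Piranha's, replaces this stepwise filter measurement, which is why the paper cross-checks the two.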

Automation of Dobson Spectrophotometer(No.124) for Ozone Measurements (돕슨 분광광도계(No.124)의 오존 자동관측시스템화)

  • Kim, Jhoon;Park, Sang-Seo;Moon, Kyung-Jung;Koo, Ja-Ho;Lee, Yun-Gon;Miyagawa, Koji;Cho, Hi-Ku
    • Atmosphere
    • /
    • v.17 no.4
    • /
    • pp.339-348
    • /
    • 2007
  • The Global Environment Laboratory at Yonsei University in Seoul ($37.57^{\circ}N$, $126.95^{\circ}E$) has carried out an ozone layer monitoring program in the framework of the Global Ozone Observing System of the World Meteorological Organization (WMO/GAW/GO3OS Station No. 252) since May 1984. Daily measurements of total ozone and of the vertical distribution of ozone amount have been made with the Dobson spectrophotometer (No. 124) on the roof of the Science Building on the Yonsei campus. From 2004 through 2006, major parts of the manual operation were automated: measuring total ozone amount and the vertical ozone profile through the Umkehr method, and calibrating the instrument by standard lamp tests, using new hardware and software including a step motor, rotary encoder, controller, and visual display. The system takes full advantage of the Windows interface and information technology to provide adaptability to the latest Windows PCs and a flexible data-processing system. It also utilizes the card slots of a desktop personal computer to control the various boards in the driving unit that operate the Dobson spectrophotometer and its testing devices. By automating most of the manual work in both instrument operation and data processing, subjective human errors and individual differences are eliminated. The ozone data quality has been distinctly upgraded since the automation of the Dobson instrument.

Development of an Integrated Measurement and Analysis System for DTV Field Test (DTV 필드테스트를 위한 통합 측정 및 분석 시스템 개발)

  • Kim Young-Min;Suh Young-Woo;Mok Ha-Kyun;Kwon Tae-Hoon;Lee Sang-Gil
    • Journal of Broadcast Engineering
    • /
    • v.10 no.4 s.29
    • /
    • pp.599-609
    • /
    • 2005
  • DTV measurement involves many test parameters and uses several test instruments and miscellaneous devices. Operating all of those devices and analysing the test results is a tedious and time-consuming process with a high error rate when performed by inexperienced test crews. In this paper, we propose an integrated DTV measurement and analysis system (IMAS) that remotely controls and manages any instrument with a standard network interface. The system can acquire, organize, and store field data in an integrated database and easily produce systematic output in user-defined forms. It can also measure several types of digital broadcasting signals, such as DTV, DMB, and DAB, with generalized measurement procedures. The proposed system was applied in KBS's DTV field test and proved that it could enhance the accuracy and efficiency of the entire test sequence and dramatically reduce measurement time compared with conventional measurement systems.

A New Method for Measuring the Dose Distribution of the Radiotherapy Domain using the IP

  • Homma, Mitsuhiko;Tabushi, Katsuyoshi;Obata, Yasunori;Tamiya, Tadashi;Koyama, Shuji;Kurooka, Masahiko;Shimomura, Kouhei;Ishigaki, Takeo
    • Proceedings of the Korean Society of Medical Physics Conference
    • /
    • 2002.09a
    • /
    • pp.237-240
    • /
    • 2002
  • Knowing the dose distribution in tissue is as important as measuring exposure or absorbed dose in radiotherapy. Since dry imagers became widespread, wet-type automatic film processors are no longer used, and the waste fluid from the film development process poses a serious pollution problem. We have therefore developed a method for measuring the dose distribution in a phantom (CR dosimetry) based on the imaging plate (IP) of computed radiography (CR). The IP is used as a dosimeter in place of the film used in film dosimetry. The data from the irradiated IP were processed at 10-bit depth on a personal computer and depicted as absorbed dose distributions in the phantom. The image of the dose distribution was obtained from the CR system in DICOM format. CR dosimetry applies the CR system currently employed in medical examinations to dosimetry in radiotherapy. A dose distribution can be easily displayed by the Dose Distribution Depiction System we developed, and the measurement method is simpler and yields results more quickly than film dosimetry.
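Turning IP readout values into absorbed dose requires a calibration curve. As a hedged sketch of one common approach (the logarithmic response model, the calibration readings, and the function names below are assumptions for illustration, not the paper's actual procedure): expose the IP to known doses, fit pixel value against log dose, then invert the fit for measured pixels.

```python
import math

def fit_ip_calibration(pixel_values, doses):
    """Least-squares fit of pv = g * log10(dose) + c, assuming the IP
    response is roughly logarithmic in dose. Returns (g, c)."""
    xs = [math.log10(d) for d in doses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(pixel_values) / n
    g = (sum((x - mx) * (y - my) for x, y in zip(xs, pixel_values))
         / sum((x - mx) ** 2 for x in xs))
    c = my - g * mx
    return g, c

def pv_to_dose(pv, g, c):
    """Invert the calibration: dose = 10 ** ((pv - c) / g)."""
    return 10 ** ((pv - c) / g)

# hypothetical calibration exposures: known doses (cGy) vs. mean pixel value
doses = [10, 20, 50, 100, 200]
pvs = [300, 390, 510, 600, 690]
g, c = fit_ip_calibration(pvs, doses)
print(round(pv_to_dose(555, g, c)))
```

Applying `pv_to_dose` pixel-by-pixel to the DICOM image would yield the dose distribution map described above.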


Development of a Series Hybrid Propulsion System for Bimodal Tram (바이모달 트램용 직렬형 하이브리드 추진시스템 개발)

  • Bae, Chang-Han;Lee, Kang-Won;Mok, Jai-Kyun;You, Doo-Young;Bae, Jong-Min
    • The Transactions of the Korean Institute of Power Electronics
    • /
    • v.16 no.5
    • /
    • pp.494-502
    • /
    • 2011
  • The bimodal tram is designed to run on a dedicated path in automatic mode using a magnetic track system, combining the accessibility of a bus with the schedule regularity of a railroad. This paper presents the design and test results of the series hybrid propulsion system of the bimodal tram, which uses a CNG (compressed natural gas) engine and a lithium-polymer battery pack, on both a test track and public roads. It also describes the real-time data measuring equipment for the propulsion system. Using this equipment, the driving performance of the prototype vehicle on the test track and public roads was verified, and the fuel consumption and efficiency of the CNG engine were investigated.

Development of Automatic Peach Grading System using NIR Spectroscopy

  • Lee, Kang-J.;Choi, Kyu H.;Choi, Dong S.
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference
    • /
    • 2001.06a
    • /
    • pp.1267-1267
    • /
    • 2001
  • Existing fruit sorters tilt the tray and extract fruit by the action of solenoids or springs. For peaches, most sorting is still done by hand because such sorters cause fatal damage to the fruit. To preserve the commodity value and quality of peaches, a non-destructive, non-contact, real-time sorter was needed. This study developed a peach sorter that uses near-infrared spectroscopy in real time and non-destructively. The prototype extracts the tray together with the fruit to reduce the internal and external damage caused by the sorter. To reduce the positioning error when measuring sugar content, a fiber optic probe diverging in two directions was developed and attached to the prototype. The program for sorting and operating the prototype was developed in Visual Basic 6.0 to measure several quality indices such as chlorophyll, defects, and sugar content. All sorting results are saved and returned to farmers as an index of good-quality production. Using the prototype, the program, and an MLR (multiple linear regression) model with 16 wavelengths, the sugar content of peaches could be estimated with a determination coefficient of 0.71 and an SEC of 0.42 °Brix. The developed MLR model had a determination coefficient of 0.69 and an SEP of 0.49 °Brix, a better result than the single-point measurement of 1999. The peach sweetness grading system, based on the NIR reflectance method and consisting of a photodiode-array sensor, a quartz-halogen lamp, and a fiber optic cable split into two bundles for transmitting the light and detecting the reflection, was developed and evaluated. The system could predict the soluble solids content of peaches in real time and non-destructively, with an accuracy of 91 percent and a capacity of 7,200 peaches per hour when grading into two sugar-content classes. Drainage is one of the important factors in producing good-quality peaches:
when one farm's product falls below others', the cause may be poor drainage, excessive nitrogen fertilizer, soil characteristics, and so on. The reports saved by the peach grading system should therefore be good material to help farmers produce high-quality peaches; they can share the results, compare with others, and diagnose their cultivation practices.
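The sugar-content model above is a multiple linear regression over reflectance at 16 wavelengths. A minimal sketch of fitting such a model and reporting R² and SEC, using synthetic spectra rather than the paper's data (the sample counts, noise level, and variable names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_bands = 60, 16                        # 16 NIR wavelengths, as in the abstract
X = rng.uniform(0.2, 0.8, (n_samples, n_bands))    # synthetic reflectance spectra
true_coef = rng.normal(0, 1, n_bands)
y = X @ true_coef + rng.normal(0, 0.1, n_samples)  # synthetic sugar contents (Brix)

# fit the MLR model: prepend an intercept column and solve least squares
A = np.hstack([np.ones((n_samples, 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

pred = A @ coef
ss_res = np.sum((y - pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                            # determination coefficient
sec = np.sqrt(ss_res / (n_samples - n_bands - 1))   # standard error of calibration
print(f"R^2 = {r2:.3f}, SEC = {sec:.3f}")
```

With real spectra, SEP would additionally be computed on a held-out prediction set, as the abstract's 0.49 °Brix figure is.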


A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.21 no.1
    • /
    • pp.103-122
    • /
    • 2011
  • Recently, numerous documents have become available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents, making it virtually impossible for users to examine each document in full to determine whether it might be useful. For this reason, some online documents are accompanied by a list of keywords specified by the authors to guide users and facilitate filtering. A set of keywords is thus often considered a condensed version of the whole document and plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents, including Web pages, email messages, news reports, magazine articles, and business papers, do not benefit from keywords. Although the potential benefit is large, implementation is the obstacle: manually assigning keywords to all documents is a daunting, even impractical task, extremely tedious and time-consuming and requiring a certain level of domain knowledge. It is therefore highly desirable to automate the keyword generation process. There are two main approaches: keyword assignment and keyword extraction. Both use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In keyword assignment, there is a given vocabulary, and the aim is to match its terms to the texts; that is, the approach selects the words from a controlled vocabulary that best describe a document.
Although this approach is domain dependent and not easy to transfer or expand, it can generate implicit keywords that do not appear in a document. In keyword extraction, on the other hand, the aim is to extract keywords according to their relevance in the text, without a prior vocabulary. Automatic keyword generation is treated as a classification task, and keywords are commonly extracted with supervised learning techniques: extraction algorithms classify candidate keywords in a document as positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as its keywords, so extraction is limited to terms that appear in the document and cannot generate implicit keywords. According to Turney's experimental results, about 64% to 90% of author-assigned keywords can be found in the full text of an article; inversely, 10% to 36% of author-assigned keywords do not appear in the article and cannot be generated by keyword extraction algorithms. Our preliminary experiment also shows that 37% of author-assigned keywords are not included in the full text. This is why we adopted the keyword assignment approach. In this paper, we propose a new approach for automatic keyword assignment, IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets.
The keyword assignment process of IVSM is as follows: (1) calculate the vector length of each keyword set based on each keyword weight; (2) preprocess and parse a target document that does not have keywords; (3) calculate the vector length of the target document based on term frequency; (4) measure the cosine similarity between each keyword set and the target document; and (5) generate the keywords with high similarity scores. Two keyword generation systems were implemented using IVSM: one for a Web-based community service and a stand-alone system. The first was implemented in a community service for sharing knowledge and opinions on current topics such as fashion, movies, social problems, and health information. The stand-alone system is dedicated to generating keywords for academic papers and has been tested on a number of papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between IVSM-generated keywords and author-assigned keywords. In our experiments, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. Both systems perform much better than baseline systems that generate keywords based on simple probability, and IVSM shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As electronic documents increase, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
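The five-step assignment process described in this abstract can be sketched directly. The toy keyword-set weights, document text, and helper names below are hypothetical illustrations, not the paper's implementation; the point is the cosine-similarity ranking of keyword-set vectors against a target document's term-frequency vector:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def assign_keywords(doc_text, keyword_sets, top_k=2):
    """keyword_sets maps each keyword to a term-weight vector built from
    training documents. Steps (2)-(3): parse the target document into a
    term-frequency vector; steps (4)-(5): rank keywords by similarity."""
    doc_vec = Counter(doc_text.lower().split())
    scored = [(kw, cosine(vec, doc_vec)) for kw, vec in keyword_sets.items()]
    scored.sort(key=lambda item: item[1], reverse=True)
    return [kw for kw, score in scored[:top_k] if score > 0]

# toy keyword sets with hypothetical term weights (step (1) happens
# inside cosine() via the vector norms)
ksets = {
    "logistics": Counter({"shipping": 3, "port": 2, "cargo": 2}),
    "retrieval": Counter({"query": 3, "index": 2, "document": 2}),
}
doc = "the port handles cargo shipping between terminals"
print(assign_keywords(doc, ksets, top_k=1))
```

Because the ranking is over keyword-set vectors rather than over terms in the document, a keyword can be assigned even when it never appears in the text, which is the "implicit keyword" advantage the abstract claims for the assignment approach.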

A Deep Learning Method for Cost-Effective Feed Weight Prediction of Automatic Feeder for Companion Animals (반려동물용 자동 사료급식기의 비용효율적 사료 중량 예측을 위한 딥러닝 방법)

  • Kim, Hoejung;Jeon, Yejin;Yi, Seunghyun;Kwon, Ohbyung
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.2
    • /
    • pp.263-278
    • /
    • 2022
  • With the recent advent of IoT technology, automatic pet feeders are being distributed so that owners can feed their companion animals while they are out. However, a scale-based weight sensor, which is important in automatic feeding, is easily damaged and broken by the pets' behavior. A 3D camera is costly, and a 2D camera gives relatively poor accuracy compared with a 3D camera. Hence, the purpose of this study is to propose a deep learning approach that can accurately estimate weight using only a 2D camera. Various convolutional neural networks were tested, and among them a ResNet101-based model showed the best performance: a mean absolute error of 3.06 grams and a mean absolute percentage error of 3.40%, which is commercially viable in both technical and financial terms. The result of this study can help practitioners predict the weight of a standardized object such as feed from a simple 2D image.

Analysis of the Optimal Window Size of Hampel Filter for Calibration of Real-time Water Level in Agricultural Reservoirs (농업용저수지의 실시간 수위 보정을 위한 Hampel Filter의 최적 Window Size 분석)

  • Joo, Dong-Hyuk;Na, Ra;Kim, Ha-Young;Choi, Gyu-Hoon;Kwon, Jae-Hwan;Yoo, Seung-Hwan
    • Journal of The Korean Society of Agricultural Engineers
    • /
    • v.64 no.3
    • /
    • pp.9-24
    • /
    • 2022
  • Currently, a vast amount of hydrologic data is accumulated in real time through automatic water level instruments in agricultural reservoirs; at the same time, false and missing data points are also increasing. The applicability and reliability of the quality control of hydrological data must be secured for efficient agricultural water management, water supply calculation, and disaster management. Considering the irregularities in hydrological data caused by irrigation water usage and rainfall patterns, the Korea Rural Community Corporation currently applies the Hampel filter as its water-level data quality management method. The filter's key parameter is the window size: if the window is too large, the data may be distorted, and if it is too small, many outliers are not removed, reducing the reliability of the corrected data. The optimal window size must therefore be selected for each reservoir. To ensure reliability, we compared the RMSE (root mean square error) and NSE (Nash-Sutcliffe model efficiency coefficient) of the corrected data against the daily water levels in the RIMS (Rural Infrastructure Management System) data and against the automatic outlier detection standards used by the Ministry of Environment. To select the optimal window size, we used the classification performance indices of the error matrix and the rainfall data of the irrigation period; the optimum was 3 h. This efficient automatic calibration technique can reduce the manpower and time required for manual calibration and is expected to improve the reliability of water level data and the value of water resources.
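The Hampel filter discussed above replaces a point with the rolling median when it deviates from that median by more than a threshold times the scaled MAD (median absolute deviation). A minimal sketch, using a hypothetical water-level series and taking the window as the number of points on each side:

```python
import statistics

def hampel(series, window=3, n_sigma=3.0):
    """Replace points deviating from the rolling median by more than
    n_sigma * 1.4826 * MAD with the median itself.
    window = number of neighbours on each side."""
    out = list(series)
    k = 1.4826  # scale factor: MAD -> standard deviation for Gaussian data
    for i in range(len(series)):
        lo, hi = max(0, i - window), min(len(series), i + window + 1)
        med = statistics.median(series[lo:hi])
        mad = statistics.median(abs(x - med) for x in series[lo:hi])
        if mad > 0 and abs(series[i] - med) > n_sigma * k * mad:
            out[i] = med
    return out

levels = [12.1, 12.2, 12.2, 25.0, 12.3, 12.4, 12.3]  # 25.0 is a sensor spike
print(hampel(levels, window=2))
```

The window-size trade-off the abstract describes is visible here: a very small window leaves clustered spikes untouched, while a very large one can drag genuine rises (e.g. during rainfall) toward the long-run median.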

Establishment of Thermal Infrared Observation System on Ieodo Ocean Research Station for Time-series Sea Surface Temperature Extraction (시계열 해수면온도 산출을 위한 이어도 종합해양과학기지 열적외선 관측 시스템 구축)

  • KANG, KI-MOOK;KIM, DUK-JIN;HWANG, JI-HWAN;CHOI, CHANGHYUN;NAM, SUNGHYUN;KIM, SEONGJUNG;CHO, YANG-KI;BYUN, DO-SEONG;LEE, JOOYOUNG
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.22 no.3
    • /
    • pp.57-68
    • /
    • 2017
  • Continuous monitoring of spatial and temporal changes in key marine environmental parameters such as SST (sea surface temperature) near the IORS (Ieodo Ocean Research Station) is needed to investigate ocean ecosystems, climate change, and sea-air interaction processes. In this study, we developed a system for continuously measuring SST using a TIR (thermal infrared) sensor mounted on the IORS. A new SST algorithm was developed to provide better-quality SST, including automatic atmospheric correction and emissivity calculation for different oceanic conditions. The TIR-based SST products were validated against in-situ water temperature measurements during May 17-26, 2015 and July 15-18, 2015 at the IORS, yielding R-squared values of 0.72-0.85 and RMSEs of $0.37-0.90^{\circ}C$. This TIR-based SST observing system can easily be installed at similar ocean research stations, such as Sinan Gageocho and Ongjin Socheongcho, and could serve as a calibration site for SST remotely sensed from satellites to be launched in the future.
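The TIR-to-SST step above requires removing the reflected sky contribution and dividing out the sea-surface emissivity before inverting radiance to temperature. A highly simplified single-band sketch (a broadband Stefan-Boltzmann inversion with assumed emissivity and sky-radiance values; the paper's actual algorithm is band-specific and includes full atmospheric correction):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def sst_from_tir(radiance_sensor, emissivity=0.98, radiance_sky=25.0):
    """Invert L = eps * sigma * T^4 + (1 - eps) * L_sky for T (kelvin).
    The emissivity and sky radiance defaults are illustrative assumptions;
    in practice both vary with sea state and atmospheric conditions."""
    surface = (radiance_sensor - (1.0 - emissivity) * radiance_sky) / emissivity
    return (surface / SIGMA) ** 0.25

# a plausible at-sensor radiance in W m^-2 (hypothetical value)
print(round(sst_from_tir(460.0), 2))
```

Validation then reduces to comparing such retrievals with in-situ water temperatures via R-squared and RMSE, as reported in the abstract.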