• Title/Summary/Keyword: 수작업 (manual work)

Search Result 1,204, Processing Time 0.028 seconds

A Study on Development of Gas Accident Management System based on GIS (GIS 기반의 가스사고 관리시스템 개발에 대한 연구)

  • Kim, Kye-Hyun;Kim, Tae-Il
    • 한국지형공간정보학회:학술대회논문집
    • /
    • 2002.03a
    • /
    • pp.106-112
    • /
    • 2002
  • With the recent rapid expansion of cities, new town construction, and industrial growth, gas facilities have been steadily expanding. Since the 1994 Ahyeon-dong city gas accident and the 1995 Daegu city gas accident, public concern over safety measures and facility management for city gas infrastructure has increased. Accordingly, gas companies have been conducting extensive research to introduce GIS technology, computerizing gas facility records that were previously managed manually so that they always reflect the current state, and to develop systems for rapid response and damage estimation when an accident occurs. The purpose of this study is to develop a GIS-based gas accident management system that can provide rapid response and handling measures when a gas accident occurs at safety-critical gas facilities. In the system, when an accident occurs, the facility manager selects the accident location; the system then identifies the pipelines whose supply should be shut off first and quickly provides information on the valves that must be closed, supporting the response. It also extracts information on the areas where the gas supply will be interrupted and estimates the extent of damage to support efficient accident management, and provides a function to calculate the residual gas volume so that post-accident measures can be prepared. As future work, linkage with a remote monitoring and control system (SCADA) for gas facilities should enable rapid damage prediction and minimization after an accident, and the use of GPS should be studied for prompt accident handling, so that gas accidents can be managed systematically and comprehensively. In addition, the system should be developed a step further to provide not only emergency response measures after an accident but also information for the rapid resupply of gas to customers based on the residual gas volume.

  • PDF
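
The shutoff logic the abstract describes (select the accident point, then identify the valves to close first) is essentially a graph search over the pipe network. Below is a minimal sketch under assumed data: the junction names, network topology, and valve locations are invented for illustration and do not come from the paper.

```python
from collections import deque

# Hypothetical pipe network: nodes are junctions, edges are pipe segments.
# Valves are modeled as a set of nodes; the search stops expanding past any
# valve, so the result is the minimal valve set isolating the accident segment.
network = {
    "J1": ["J2", "J3"],
    "J2": ["J1", "J4"],
    "J3": ["J1", "J5"],
    "J4": ["J2"],
    "J5": ["J3"],
}
valves = {"J2", "J3"}

def valves_to_close(accident_node):
    """Breadth-first search outward from the accident point; collect the
    first valve met on every path, which together isolate the segment."""
    closed, seen = set(), {accident_node}
    queue = deque([accident_node])
    while queue:
        node = queue.popleft()
        for neighbor in network[node]:
            if neighbor in seen:
                continue
            seen.add(neighbor)
            if neighbor in valves:
                closed.add(neighbor)  # close this valve; do not search past it
            else:
                queue.append(neighbor)
    return closed
```

In an actual system the network topology, valve attributes, and supply directions would come from the GIS database rather than a hard-coded dictionary.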

Optimization of Number of Training Documents in Text Categorization (문헌범주화에서 학습문헌수 최적화에 관한 연구)

  • Shim, Kyung
    • Journal of the Korean Society for Information Management
    • /
    • v.23 no.4 s.62
    • /
    • pp.277-294
    • /
    • 2006
  • This paper examines the level of categorization performance in a real-life collection of abstract articles in the fields of science and technology, and tests the optimal number of documents per category in a training set using a kNN classifier. The corpus is built by first choosing categories that hold more than 2,556 documents, and then randomly selecting 2,556 documents per category. It is further divided into eight subsets with different numbers of training documents: each set is randomly selected to build training sets ranging from 20 documents (Tr-20) to 2,000 documents (Tr-2000) per category. The categorization performances of the eight subsets are compared. The average performance of the eight subsets is 30% in the $F_1$ measure, which is relatively poor compared to the findings of previous studies. The experimental results suggest that among the eight subsets, Tr-100 appears to be the optimal size for training a kNN classifier. In addition, the correctness of the subject categories assigned to the training sets is probed by manually reclassifying the training sets, in order to support the above conclusion by establishing a relation between correctness and categorization performance.
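
The kNN categorization procedure the abstract evaluates can be illustrated with a minimal sketch: term-frequency vectors, cosine similarity, and a majority vote over the k most similar training documents. The toy training data below is invented for illustration, not drawn from the study's corpus.

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def knn_classify(train, doc, k=3):
    """train: list of (text, category) pairs; returns the majority category
    of the k training documents most similar to doc."""
    q = Counter(doc.split())
    scored = sorted(train, key=lambda tc: cosine(Counter(tc[0].split()), q),
                    reverse=True)
    votes = Counter(cat for _, cat in scored[:k])
    return votes.most_common(1)[0][0]

# Invented toy training set (two categories, four documents).
train = [
    ("gas pipeline valve accident", "engineering"),
    ("pipeline supply valve network", "engineering"),
    ("gene sequence cdna library", "biology"),
    ("est sequence unigene contig", "biology"),
]
```

Varying the number of training documents per category, as the paper does with Tr-20 through Tr-2000, amounts to changing how many such (text, category) pairs each category contributes.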

Reconstruction of body contour with digital camera image (Digital Camera의 영상을 이용한 신체 단면도 제작)

  • Kwon, KT;Kim, CM;Kang, TY;Park, CS;Song, HK
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.15 no.1
    • /
    • pp.53-60
    • /
    • 2003
  • I. Purpose: Correct body contour information is essential for the calculation of dose distribution. The role of CT images in the radiation oncology field has increased, but methods using a cast or lead wire for body contour drawing are still in use. This traditional method has drawbacks: it is inaccurate and time-consuming. This study was designed to overcome these problems. II. Materials and Methods: A digital camera was attached to a pole standing on the opposite side of the gantry. Positional information was acquired from an image of a phantom specially designed for this study and located at the isocenter level of the simulator. The laser line on the patient's skin or on the phantom surface was digitized and reconstructed as the contour. The usefulness of this technique was verified with phantoms of various shapes and a patient's chest. III. Results and Conclusions: Contours from the traditional cast or lead wire method and from the digital image method showed good agreement within the experimental error range. This technique proved more efficient in time and convenience. For irregularly shaped contours, such as the head-and-neck region, special care is needed, and the results suggest that further study is required. Use of another photogrammetric technique with a two-camera system may be better for actual clinical application.

  • PDF

Construction of 3D Digital Maps Using 3D Symbols (3차원 심볼을 활용한 3차원 수치지도 제작에 관한 연구)

  • Park, Seung-Yong;Lee, Jae-Bin;Yu, Ki-Yun;Kim, Yong-Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.24 no.5
    • /
    • pp.417-424
    • /
    • 2006
  • Despite many studies on creating 3D digital maps, the process remains time-consuming and costly because a large part of 3D digital mapping is conducted manually. To circumvent this limitation, we proposed methodologies to create 3D digital maps with 3D symbols automatically. For this purpose, firstly, a 3D symbol library representing 3D objects as 3D symbols was constructed. In this library, we stored the attribute and geometry information of 3D objects, which define the types and shapes of symbols respectively. This information, extracted from 2D digital maps and LiDAR (Light Detection and Ranging) data, was used to match 3D objects with 3D symbols. Then, to locate 3D symbols in a base map automatically, we used predefined parameters such as the size, height, rotation angle, and center of gravity of 3D objects, which are extracted from LiDAR data. Finally, a 3D digital map of an urban area was constructed and the results were tested. Through this research, we confirmed that the developed algorithms can serve as effective techniques for 3D digital mapping.
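
The matching and placement step the abstract describes (the attribute selects the symbol type; LiDAR-derived parameters place it) might be sketched as follows. The attribute codes, model file names, and placement-record layout are assumptions for illustration, not the paper's actual library format.

```python
import math

# Hypothetical symbol library keyed by a feature-attribute code from the
# 2D digital map; each entry names a 3D model used as the symbol.
symbol_library = {
    "building": {"model": "box.obj"},
    "tree": {"model": "conifer.obj"},
}

def place_symbol(attribute_code, center, height, rotation_deg):
    """Look up a 3D symbol by attribute and return a placement record a
    renderer could consume: model file, anchor point, scale, and rotation.
    center, height, and rotation would be derived from LiDAR data."""
    symbol = symbol_library[attribute_code]
    return {
        "model": symbol["model"],
        "anchor": center,                  # center of gravity of the object
        "scale": height,                   # vertical scale from LiDAR height
        "rotation_rad": math.radians(rotation_deg),
    }
```

The point of the design is that only the per-object parameters vary; the geometry itself is reused from the library, which is what makes the mapping automatic.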

A Study on the Effectiveness of a Hand-made Paraffin Thyroid Phantom (Paraffin을 이용한 Thyroid Phantom제작에 따른 유용성에 관한 연구)

  • Park, Soung-Ock;Lee, In-Ja
    • Journal of radiological science and technology
    • /
    • v.30 no.3
    • /
    • pp.237-243
    • /
    • 2007
  • Phantoms are necessary for the quality assurance of radionuclide imaging systems, to maintain standards and ensure the reproducibility of tests. General quality assurance and instrument quality control are essential in every hospital. Human tissue equivalent materials include aluminum, acrylic, water, and epoxy, and it is very important to select the optimum equivalent material for a QC phantom. In particular, paraffin is very similar to human soft tissue in its X-ray and gamma-ray physical characteristics and is economical and easy to obtain. We made a paraffin thyroid phantom and compared it with an acrylic thyroid phantom of the kind also used commercially in practice. Two small cold spots (3 and 6 mm diameter) and a hot spot (3 mm diameter) were embedded in the paraffin phantom, which was imaged with $^{99m}TcO_4$ by gamma camera to analyze spatial resolution and noise at the hot and cold spots. The results are as follows: 1. There was no difference in counting rate or noise between the acrylic and paraffin thyroid phantoms. 2. The best spatial resolution was seen at a 6 cm distance between the pinhole collimator and the thyroid phantoms (acrylic and paraffin). 3. Better spatial resolution could be acquired with the paraffin thyroid phantom. Paraffin is very similar to human soft tissue in atomic number, density, and relative absorption, and can easily be shaped as desired, so we can recommend paraffin for quality assurance phantoms because of its usefulness, economy, and availability.

  • PDF

The Line Feature Extraction for Automatic Cartography Using High Frequency Filters in Remote Sensing : A Case Study of Chinju City (위성영상의 형태추출을 통한 지도화 : 고빈도 공간필터 사용을 중심으로)

  • Jung, In-Chul
    • Journal of the Korean association of regional geographers
    • /
    • v.2 no.2
    • /
    • pp.183-196
    • /
    • 1996
  • The purpose of this paper is to explore the possibility of automatic extraction of line features from satellite imagery. The first part reviews the relationship between spatial filtering and cartographic interpretation, the second part describes the principal operations of high-frequency filters and their properties, and the third part presents the results of applying the filters to a SPOT panchromatic image of Chinju city. The experimental results indicate the high feasibility of the filtering technique and are summarized as follows. First, no good all-purpose filter exists: certain Laplacian filters and the Frei-Chen filter were very sensitive to noise and could not detect line features in our case. Second, summary filters and some other filters do an excellent job of identifying edges around urban objects; with the filtered image added to the original image, interpretation is easier. Third, compass gradient masks may be used to perform two-dimensional, discrete-differentiation directional edge enhancement; however, in our case the line extraction was not satisfactory. In general, wide masks detect broad edges and narrow masks detect sharper discontinuities, but in our case the difference between the $3{\times}3$ and $7{\times}7$ kernel filters was not remarkable, possibly because of the good spatial resolution of the SPOT scene. The filtering effect depends on local circumstances, and band and kernel size selection must also be considered. For skillful geographical interpretation, more subtle qualitative information needs to be taken into account.

  • PDF
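
The high-frequency filtering the paper applies is plain 2D convolution with small masks such as the Laplacian. Below is a minimal sketch; the border handling (skipping edge pixels) and the mask values follow common practice, not necessarily the paper's exact kernels.

```python
def convolve3x3(image, kernel):
    """Apply a 3x3 kernel to a 2D grayscale image (list of lists).
    Border pixels are left at zero, the simplest border policy."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    acc += kernel[dy + 1][dx + 1] * image[y + dy][x + dx]
            out[y][x] = acc
    return out

# A common 4-neighbor Laplacian mask: sharp intensity discontinuities (edges,
# lines) produce large magnitudes, flat areas produce zero.
LAPLACIAN = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]
```

Adding the filtered response back to the original image, as the paper notes, boosts high-frequency detail while retaining the scene context, which is what makes interpretation easier.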

Construction of a Full-length cDNA Library from Korean Stewartia (Stewartia koreana Nakai) and Characterization of EST Dataset (노각나무(Stewartia koreana Nakai)의 cDNA library 제작 및 EST 분석)

  • Im, Su-Bin;Kim, Joon-Ki;Choi, Young-In;Choi, Sun-Hee;Kwon, Hye-Jin;Song, Ho-Kyung;Lim, Yong-Pyo
    • Horticultural Science & Technology
    • /
    • v.29 no.2
    • /
    • pp.116-122
    • /
    • 2011
  • In this study, we report the generation and analysis of 1,392 expressed sequence tags (ESTs) from Korean Stewartia (Stewartia koreana Nakai). A cDNA library was generated from young leaf tissue, and a total of 1,392 cDNAs were partially sequenced. EST and unigene sequence quality were determined by computational filtering, manual review, and BLAST analyses. Finally, 1,301 ESTs were acquired after removal of the vector sequence and filtering at a minimum length of 100 nucleotides. A total of 893 unigenes, consisting of 150 contigs and 743 singletons, were identified after assembly. We also identified 95 new microsatellite-containing sequences from the unigenes and classified their structure according to the repeat unit. According to a homology search with BLASTX against the NCBI database, 65% of the ESTs were homologous to genes of known function and 11.6% matched genes of putative or unknown function; the remaining 23.2% showed no significant similarity to any protein sequences in the public database. Annotation-based searches against multiple databases, including wine grape and Populus sequences, helped to identify putative functions of the ESTs and unigenes. Gene ontology (GO) classification showed that the most abundant GO terms were transport, nucleotide binding, and plastid in the biological process, molecular function, and cellular component categories, respectively. The sequence data will be used to characterize the potential roles of new genes in Stewartia and will provide useful tools as a genetic resource.
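
The microsatellite screening step (finding tandem repeats of short units among the unigene sequences) can be sketched with a regular expression and a backreference. The minimum repeat count and unit lengths below are illustrative thresholds, not the study's exact criteria.

```python
import re

def find_microsatellites(seq, min_repeats=4, unit_lens=(2, 3)):
    """Scan a DNA sequence for simple sequence repeats (SSRs): a repeat
    unit of 2-3 bp occurring at least min_repeats times in tandem.
    Returns (start, unit, repeat_count) tuples."""
    hits = []
    for n in unit_lens:
        # ([ACGT]{n}) captures a candidate unit; \1{min_repeats-1,} then
        # requires that many further tandem copies of the same unit.
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (n, min_repeats - 1))
        for m in pattern.finditer(seq):
            hits.append((m.start(), m.group(1), len(m.group(0)) // n))
    return hits
```

Classifying the hits by their repeat unit, as the abstract describes, is then just grouping the returned tuples on the `unit` field.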

A Study of the Advanced Strategy for ICT-based Public Compensation Business (ICT 기반 공익사업 보상업무 첨단화 방안 연구)

  • Seo, Myoung Bae
    • Smart Media Journal
    • /
    • v.9 no.1
    • /
    • pp.75-83
    • /
    • 2020
  • Compensation services, which are indispensable in large-scale public utility projects, have been increasing with the recent growth in construction, but there is no systematic compensation service because of complicated procedures and manual work. For this reason, various problems are emerging, such as construction delays due to complaints, corruption in compensation work, and the inability to trace the history of past compensation data. In this paper, to solve these problems, in-depth interviews and questionnaires were conducted to identify the problems at each stage of compensation. Based on this, three ICT-based core technologies and ten technical needs for improving compensation work were selected through STEEP analysis and an issue tree. The three core technologies are big-data-based decision-making and prediction technology, advanced measurement technology, and open cloud-based compensation platform technology. To introduce the derived technologies to the institutions in charge of compensation, the potential for technology diffusion by project operators was assessed based on a survey of the current status of informatization at each institution. Based on the core technologies derived in this paper, it is necessary to build a prototype that can advance compensation work, apply it to each institution, and analyze the effects.

Development of Cleaning System of Electronic Components for the Remanufacturing of Laser Copy Machine (레이저 복합기의 재제조공정을 위한 전자부품 세정시스템의 개발)

  • Bae, Jae-Heum;Chang, Yoon-Sang
    • Clean Technology
    • /
    • v.18 no.3
    • /
    • pp.287-294
    • /
    • 2012
  • In this study, the performance of two cleaning methods was analyzed and a cleaning system was designed to develop a cleaning process for the electronic components of remanufactured laser copy machines. First, plasma cleaning, a dry method, was tested for cleaning ability. In plasma cleaning of printed circuit boards (PCBs), some damage was found near the metal parts, and considering productivity, this method was not adequate for cleaning electronic components. Ultrasonic cleaning tests with four different cleaning agents were then executed to select an optimal agent; aqueous agents showed cleaning performance superior to semi-aqueous and non-aqueous agents. Cleaning with aqueous agent A at a 28 kHz ultrasonic frequency could be completed in 30 seconds to 1 minute. Finally, an ultrasonic cleaning system was constructed based on the pre-test results, and optimal cleaning conditions of 40 kHz and $50^{\circ}C$ were found in the field test. The productivity and economic efficiency of laser copy machine remanufacturing are expected to increase by adopting the developed ultrasonic cleaning system.

A Design and Implementation of WML Compiler for WAP Gateway for Wireless Internet Services (무선 인터넷 서비스를 위한 WAP 게이트웨이용 WML 컴파일러의 설계 및 구현)

  • Choi, Eun-Jeong;Han, Dong-Won;Lim, Kyung-Shik
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.7 no.2
    • /
    • pp.165-182
    • /
    • 2001
  • In this paper, we describe the design and implementation of a Wireless Markup Language (WML) compiler for deploying wireless Internet services effectively. The WML compiler translates textual WML decks into binary ones in order to reduce traffic on wireless links, which have relatively low bandwidth compared to wireline links, and to mitigate the processing overhead of WML decks on wireless terminals, which have relatively low processing power compared to fixed workstations. In addition, it absorbs the overhead of the eXtensible Markup Language (XML) well-formedness and validation processes. The WML compiler consists of lexical analyzer and parser modules. The grammar for the WML parser module is an LALR(1) context-free grammar designed on the basis of XML 1.0 and the WML 1.2 DTD (Document Type Definition), with consideration of the Wireless Application Protocol Binary XML grammar. The grammar description is converted into a C program that parses the grammar by using a parser generator. Even when WML tags are extended or the WML DTD is upgraded, this approach has the advantage of flexibility, because the program is regenerated by modifying just the changed parts. We verified the functionality of the WML compiler by using a WML decompiler in the public domain and by using the Nokia WAP Toolkit as a WAP client. To measure the compression gain of the WML compiler, we tested a large number of textual WML decks and obtained a maximum of 85%. As the effect of compression is reduced when the portion of general textual strings increases relative to that of the tags and attributes in a WML deck, an extended encoding method might be needed for specific applications such as compiling WML decks into which Hyper Text Markup Language documents are translated dynamically.

  • PDF
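
The compilation step the abstract describes (textual WML to a binary deck) rests on replacing well-known tags with single-byte tokens while free text is carried inline, which is exactly why compression shrinks as the text-to-markup ratio grows. Below is a sketch of that idea; the tag code values are assumed for illustration, and while STR_I and END follow the WBXML convention of inline-string and element-end markers, the real token tables live in the WAP Binary XML specification.

```python
# Assumed single-byte tag codes for a few WML elements (illustrative only).
TAG_TOKENS = {"wml": 0x3F, "card": 0x27, "p": 0x20}
STR_I = 0x03  # inline-string marker (WBXML convention)
END = 0x01    # element-end marker (WBXML convention)

def encode(tag, text):
    """Encode one element with text content as a byte sequence: a tag token
    with the 0x40 'has content' flag, an inline null-terminated string, and
    an end marker. Markup costs one byte; text passes through verbatim."""
    out = bytearray([TAG_TOKENS[tag] | 0x40])
    out.append(STR_I)
    out += text.encode("utf-8") + b"\x00"
    out.append(END)
    return bytes(out)
```

A multi-character tag like `<card>...</card>` thus collapses to two bytes of markup regardless of its textual length, while every byte of character data survives unchanged, matching the abstract's observation about text-heavy decks.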