• Title/Summary/Keyword: automatic information extraction (자동정보 추출)

Search results: 2,000 (processing time: 0.031 seconds)

A Study on the Quantitative Analysis Method for CT Quality Control Imaging Evaluation Using the AAPM Phantom (CT 정도관리 영상의 정량적 분석방법에 관한 연구)

  • Kim, Young-su; Ko, Seong-Jin; Kang, Se-Sik; Ye, Soo-young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2016.05a / pp.271-274 / 2016
  • CT quality assurance is performed as a quantitative assessment based on phantom image evaluation. The evaluation items include the water attenuation coefficient, uniformity, noise, spatial resolution, 10 mm slice thickness, and contrast resolution. Because these items are judged subjectively by the tester, errors can arise; using an image processing program minimizes this subjective error and allows an objective, computed evaluation. The imaging conditions for the CT image quality control assessment were the same as those of the special medical equipment quality control checks, and the images were evaluated quantitatively using ImageJ. For the CT attenuation coefficient, uniformity, and noise, images whose digitally measured standard deviation was smaller were evaluated as more uniform and less noisy. Contrast resolution was assessed with circles of 1 inch, 0.75 inch, and 0.5 inch diameter, rating image quality by how closely the smaller circles remained true circles rather than ellipses. Spatial resolution was evaluated with the automatic feature-extraction function of the image processing program, which automatically extracted all bar groups satisfying the acceptance criteria and proved very useful for quantitative assessment. Based on these results, using an image processing program for CT image quality control assessment minimizes the error from the evaluator's subjective judgment and makes quantitative evaluation more efficient; a minimal sketch of the ROI-based noise and uniformity measurement follows this entry.

  • PDF
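
The entry above measures CT number, noise, and uniformity from the standard deviation of pixel values inside phantom regions of interest, using ImageJ. As a rough illustration only (not the paper's procedure), the Python/NumPy sketch below computes the same ROI statistics on a synthetic slice; the ROI positions, radius, and noise level are assumptions.

```python
import numpy as np

def roi_stats(image, cy, cx, radius):
    """Mean and standard deviation of pixel values inside a circular ROI."""
    yy, xx = np.ogrid[:image.shape[0], :image.shape[1]]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    roi = image[mask]
    return roi.mean(), roi.std()

# Illustrative phantom slice: water-equivalent background (~0 HU) plus noise.
rng = np.random.default_rng(0)
slice_img = rng.normal(loc=0.0, scale=5.0, size=(512, 512))

# Central ROI for noise, peripheral ROIs for uniformity (positions are assumptions).
center_mean, center_sd = roi_stats(slice_img, 256, 256, 40)
edge_means = [roi_stats(slice_img, cy, cx, 40)[0]
              for cy, cx in [(80, 256), (432, 256), (256, 80), (256, 432)]]

print(f"mean CT number (center ROI): {center_mean:.2f} HU")
print(f"noise (SD in center ROI):    {center_sd:.2f} HU")
print(f"uniformity (max |edge - center|): {max(abs(m - center_mean) for m in edge_means):.2f} HU")
```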

The Design and Implementation of the System for Processing Well-Formed XML Document on the Client-side (클라이언트 상의 Well-Formed XML 문서 처리 시스템의 설계 및 구현)

  • Song, Jong-Chul; Moon, Byung-Joo; Hong, Gi-Chai; Cheong, Hyun-Soo; Kim, Gyu-Tae; Lee, Soo-Youn
    • The Transactions of the Korea Information Processing Society / v.7 no.10 / pp.3236-3246 / 2000
  • XML is a meta-language like SGML and can be seen as a simplified Internet version of SGML used in conjunction with XLL, XPointer, and XSL. W3C also defined DTD-less well-formed XML documents for use of XML on the Web. However, no system had been offered that provides browsing, linking, and DTD generation facilities while efficiently processing DTD-less well-formed XML documents. This paper studies the design and implementation of a system that processes DTD-less well-formed XML documents on the client side. The system consists of a Well-Formed XML Viewer that displays well-formed XML documents, an XLL Processor that handles XLL, and an Auto DTD Generator that automatically constructs DTDs from multiple documents of the same class; a small illustration of the DTD-generation idea follows this entry. The study focuses on automatic DTD generation during hyperlink navigation and on an implementation of extended links based on XLL and XPointer. ID and XPointer location addressing are used as the address modes in links. The implemented system conforms to the validation of extended link facilities, extracts a DTD from well-formed XML documents sharing the same root element in the same class, and constructs a generalized DTD.

  • PDF
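
The system above includes an Auto DTD Generator that builds a DTD from several well-formed documents of the same class. The paper's actual algorithm is not reproduced here; the Python sketch below is only an illustration of the idea, inferring naive element, attribute, and content-model declarations from sample documents with the standard library.

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

def infer_dtd(xml_strings):
    """Infer a naive DTD from well-formed XML samples sharing the same root element."""
    children = defaultdict(set)   # element -> observed child element names
    attrs = defaultdict(set)      # element -> observed attribute names
    has_text = set()              # elements observed with character data

    def walk(elem):
        for child in elem:
            children[elem.tag].add(child.tag)
            walk(child)
        attrs[elem.tag].update(elem.attrib)
        if elem.text and elem.text.strip():
            has_text.add(elem.tag)

    for s in xml_strings:
        walk(ET.fromstring(s))

    lines = []
    for tag in sorted(set(children) | set(attrs) | has_text):
        kids = children.get(tag, set())
        if kids and tag in has_text:
            model = f"(#PCDATA | {' | '.join(sorted(kids))})*"
        elif kids:
            model = f"({' | '.join(sorted(kids))})*"
        elif tag in has_text:
            model = "(#PCDATA)"
        else:
            model = "EMPTY"
        lines.append(f"<!ELEMENT {tag} {model}>")
        for a in sorted(attrs.get(tag, ())):
            lines.append(f"<!ATTLIST {tag} {a} CDATA #IMPLIED>")
    return "\n".join(lines)

samples = ['<doc id="1"><title>A</title><link href="#p1"/></doc>',
           '<doc><title>B</title></doc>']
print(infer_dtd(samples))
```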

Anaphora Resolution System for Natural Language Requirements Document in Korean based on Syntactic Structure (한국어 자연어 요구문서에서 구문 구조 기반의 조응어 처리 시스템)

  • Park, Ki-Seon; An, Dong-Un; Lee, Yong-Seok
    • The KIPS Transactions: Part B / v.17B no.3 / pp.255-262 / 2010
  • When a system is developed, a requirements document is written by requirements analysts and then translated into formal specifications by specifiers. If a formal specification can be generated automatically from a natural language requirements document, development cost and faults caused by experts' misunderstanding can be reduced. Pronouns are classified into personal and demonstrative pronouns. In requirements documents, personal pronouns rarely occur, so we focus on deciding the antecedent of a demonstrative pronoun. For accurate automatic analysis of requirements documents, finding the antecedent of a demonstrative pronoun is essential to eliciting formal requirements from natural language requirements documents via natural language processing. The final goal of this research is to generate formal specifications automatically from natural language requirements documents. To this end, and based on previous research [3], this paper proposes an anaphora resolution system that decides the antecedent of a pronoun in Korean natural language requirements documents using natural language processing, and proposes heuristic rules for the system implementation; a rough sketch of such a rule-based decision follows this entry. In experiments on ten requirements documents, we obtained a recall of 92.45% and a precision of 69.98%.
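
The paper above resolves demonstrative pronouns with heuristic rules over syntactic structure; the rules themselves are not reproduced in the abstract. The Python sketch below shows only the general shape of such a heuristic (choose the nearest preceding candidate phrase, preferring sentence subjects); the candidate representation and scoring weights are illustrative assumptions, not the authors' rules.

```python
from dataclasses import dataclass

@dataclass
class Mention:
    text: str        # surface form of the candidate noun phrase
    sentence: int    # sentence index in the document
    position: int    # token offset within the sentence
    is_subject: bool # whether the phrase is the sentence subject

def resolve(pronoun_sentence, pronoun_position, candidates):
    """Pick an antecedent for a demonstrative pronoun from preceding mentions.

    Scoring is an illustrative heuristic: prefer nearby sentences and
    sentence subjects. A real system would add syntactic constraints.
    """
    best, best_score = None, float("-inf")
    for m in candidates:
        if (m.sentence, m.position) >= (pronoun_sentence, pronoun_position):
            continue  # the antecedent must precede the pronoun
        distance = pronoun_sentence - m.sentence
        score = -2 * distance + (3 if m.is_subject else 0)
        if score > best_score:
            best, best_score = m, score
    return best

mentions = [
    Mention("the login module", 0, 1, True),
    Mention("a user password", 0, 5, False),
]
antecedent = resolve(pronoun_sentence=1, pronoun_position=0, candidates=mentions)
print(antecedent.text if antecedent else "no antecedent found")
```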

Forward/Reverse Engineering Approaches of Java Source Code using JML (JML을 이용한 Java 원시 코드의 역공학/순공학적 접근)

  • 장근실; 유철중; 장옥배
    • Journal of KIISE: Software and Applications / v.30 no.1_2 / pp.19-30 / 2003
  • Building on XML, a standard document format on the Web, there have been many active studies in e-commerce, wireless communication, multimedia technology, and so forth. JML is an XML application suitable for understanding and reusing source code written in Java for various purposes, and it is a DTD that can effectively express information related to hierarchical class structures, class/method relationships, and so on. This paper describes a tool that, from the reverse engineering point of view, generates a JML document by extracting comment information from Java source code together with information helpful for reuse and understanding, and a tool that, from the forward engineering point of view, generates a skeleton code of a Java application program from the information contained in an automatically or manually generated JML document; a rough illustration of the reverse-engineering step follows this entry. Using the result of this study, the information needed for understanding, analyzing, or maintaining source code can be acquired easily, and the XML-format document makes it easy for developers and team members to share and modify that information. In addition, the Java skeleton code generated from JML documents is reliable and robust, which helps in developing complete source code and reduces the cost and time of a project.
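
The reverse-engineering tool above extracts class and method structure from Java source and records it as a JML (XML) document. The JML DTD itself is not given in this entry, so the sketch below uses an invented XML layout and a deliberately naive regex extractor; the element names, patterns, and sample source are assumptions for illustration only.

```python
import re
import xml.etree.ElementTree as ET

JAVA_SRC = """
/** Simple greeter. */
public class Greeter extends Object {
    /** Returns a greeting. */
    public String greet(String name) { return "Hello, " + name; }
    public int count() { return 0; }
}
"""

def java_to_xml(src):
    """Very rough extraction of class and method signatures into an XML skeleton."""
    root = ET.Element("jml")
    class_match = re.search(r"class\s+(\w+)(?:\s+extends\s+(\w+))?", src)
    cls = ET.SubElement(root, "class", name=class_match.group(1))
    if class_match.group(2):
        cls.set("extends", class_match.group(2))
    for m in re.finditer(r"public\s+([\w<>\[\]]+)\s+(\w+)\s*\(([^)]*)\)", src):
        ret_type, name, params = m.groups()
        method = ET.SubElement(cls, "method", name=name, returns=ret_type)
        for p in filter(None, (p.strip() for p in params.split(","))):
            ptype, pname = p.rsplit(" ", 1)
            ET.SubElement(method, "param", name=pname, type=ptype)
    return ET.tostring(root, encoding="unicode")

print(java_to_xml(JAVA_SRC))
```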

A Study on Shot Segmentation and Indexing of Language Education Videos by Content-based Visual Feature Analysis (교육용 어학 영상의 내용 기반 특징 분석에 의한 샷 구분 및 색인에 대한 연구)

  • Han, Heejun
    • Journal of the Korean Society for Information Management / v.34 no.1 / pp.219-239 / 2017
  • As IT technology develops rapidly and personal smart devices become more widespread, video in particular is used as a medium of information transmission among audiovisual materials. Video has become an indispensable element of information services and is used in various ways, such as unidirectional delivery through TV, interactive services through the Internet, and audiovisual library lending. In the Internet environment especially, information providers try to reduce the effort and cost of processing the information they provide for video services on smart devices. In addition, users want to use only the parts they need because of the burden of excessive network usage and constraints of time and space. It is therefore necessary to enhance the usability of video by automatically classifying, summarizing, and indexing similar parts of the content. In this paper, we propose a method that automatically segments the shots that make up a video by analyzing the content and characteristics of language education videos, and indexes the detailed content of those videos by combining visual features; a generic baseline for such shot segmentation is sketched after this entry. The accuracy of the semantics-based shot segmentation is high, and the method can be applied effectively to summary services for language education videos.
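
The paper above segments shots using content-based visual features of language-education videos; the specific features are not detailed in the abstract. The sketch below shows a common baseline that is an assumption rather than the authors' method: an HSV color-histogram difference between consecutive frames with a fixed threshold, using OpenCV. The video path and threshold are placeholders.

```python
import cv2

def detect_shot_boundaries(video_path, threshold=0.5):
    """Return frame indices where the histogram distance to the previous frame exceeds a threshold."""
    cap = cv2.VideoCapture(video_path)
    boundaries, prev_hist, index = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
        hist = cv2.normalize(hist, None).flatten()
        if prev_hist is not None:
            # Bhattacharyya distance: near 0 for similar frames, near 1 at hard cuts.
            dist = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
            if dist > threshold:
                boundaries.append(index)
        prev_hist, index = hist, index + 1
    cap.release()
    return boundaries

if __name__ == "__main__":
    print(detect_shot_boundaries("lecture.mp4"))  # path is a placeholder
```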

Efficient Management of Statistical Information of Keywords on E-Catalogs (전자 카탈로그에 대한 효율적인 색인어 통계 정보 관리 방법)

  • Lee, Dong-Joo; Hwang, In-Beom; Lee, Sang-Goo
    • The Journal of Society for e-Business Studies / v.14 no.4 / pp.1-17 / 2009
  • E-catalogs, which describe products or services, are among the most important data for electronic commerce. E-catalogs are created, updated, and removed to keep the e-catalog database up to date. However, as the number of catalogs grows, information integrity is violated for several reasons, such as catalog duplication and incorrect classification. Catalog search, duplication checking, and automatic classification are important functions for utilizing e-catalogs and keeping the integrity of the e-catalog database. To implement these functions, probabilistic models that use statistics of index words extracted from e-catalogs have been suggested, and their feasibility has been shown in several papers. Even though these functions are used together in an e-catalog management system, however, there has not been enough consideration of how to share the common data used by each function and how to manage index-word statistics effectively. In this paper, we suggest a method to implement these three functions using simple SQL supported by a relational database management system; a minimal sketch of the shared statistics idea follows this entry. In addition, we use materialized views to reduce the load of implementing an application that manages index-word statistics, which makes the management efficient by letting the database management system optimize statistics updates. An empirical evaluation showed that our method is feasible for implementing the three functions and effective for managing index-word statistics.

  • PDF
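
The paper above keeps index-word statistics in a relational database so that search, duplicate checking, and classification can share them through plain SQL, with materialized views offloading statistics maintenance. SQLite has no materialized views, so the Python sketch below only illustrates the underlying idea with an ordinary aggregate view over an index-word table; the schema and sample rows are assumptions.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE catalog (id INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE index_word (catalog_id INTEGER, word TEXT, tf INTEGER);

-- Per-word document frequency and total term frequency, shared by search,
-- duplicate checking, and classification. A real deployment might make this
-- a materialized view refreshed by the DBMS.
CREATE VIEW word_stats AS
SELECT word,
       COUNT(DISTINCT catalog_id) AS document_frequency,
       SUM(tf)                    AS total_term_frequency
FROM index_word
GROUP BY word;
""")

conn.executemany("INSERT INTO catalog VALUES (?, ?, ?)",
                 [(1, "USB cable 1m", "cables"), (2, "USB cable 2m", "cables")])
conn.executemany("INSERT INTO index_word VALUES (?, ?, ?)",
                 [(1, "usb", 1), (1, "cable", 1), (2, "usb", 1), (2, "cable", 2)])

for row in conn.execute("SELECT * FROM word_stats ORDER BY word"):
    print(row)
```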

Updating GIS Data using Linear Features of Imagery (영상의 선형 정보를 이용한 GIS 자료의 갱신에 대한 연구)

  • 손홍규; 최종현; 피문희; 이진화
    • Proceedings of the Korean Association of Geographic Information Studies Conference / 2003.04a / pp.388-393 / 2003
  • As urbanization accelerates and the sources of 3D data acquisition become more diverse, rapid updating of linear GIS information such as roads and building outlines is also required. To update target data from an arbitrary source, the positional relationship between the two datasets must first be determined. Most existing methods for determining the positional relationship between source data such as imagery and target data such as GIS data require accurate point matching entities, such as control points, that define the relationship between the two datasets. When accurate matching entities cannot be obtained, the positional relationship between the imagery and the GIS data cannot be determined, and the result depends heavily on the distribution and accuracy of the matching entities; moreover, defining such point matching entities is in most cases done manually. This study therefore presents a technique that uses the linear features of the imagery and the GIS data to determine their positional relationship even when accurate point matching entities are unknown. The algorithm, based on a Modified Hough Transform, automatically finds matching elements among multiple linear features and solves them by least squares to determine the geometric transformation between the two datasets; a minimal sketch of the least squares estimation step follows this entry. After determining the geometric transformation in this way, linear features that exist in the imagery but not in the GIS data are identified and updated, demonstrating the possibility of automatically generating 3D positional data.

  • PDF
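
The entry above matches linear features with a Modified Hough Transform and then solves for the geometric transformation by least squares. The matching step is omitted here; the Python sketch below only shows least squares estimation of a 2D affine transformation from already-matched point pairs, which is a simplification of the paper's line-based formulation, with purely illustrative coordinates.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least squares 2D affine transform mapping src_pts onto dst_pts.

    Solves [x y 1] @ A = [x' y'] for the 3x2 parameter matrix A.
    """
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    design = np.hstack([src, np.ones((len(src), 1))])   # n x 3
    params, *_ = np.linalg.lstsq(design, dst, rcond=None)
    return params                                        # 3 x 2

# Illustrative matched features: a shift of (+10, -5) with slight scaling.
src = [(0, 0), (100, 0), (0, 100), (100, 100)]
dst = [(10, -5), (111, -5), (10, 96), (111, 96)]
A = fit_affine(src, dst)
print("affine parameters:\n", A)
print("check:", np.hstack([np.asarray(src, float), np.ones((4, 1))]) @ A)
```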

Generation of Indoor Network by Crowdsourcing (크라우드 소싱을 이용한 실내 공간 네트워크 생성)

  • Kim, Bo Geun; Li, Ki-Joune; Kang, Hae-Kyong
    • Spatial Information Research / v.23 no.1 / pp.49-57 / 2015
  • Due to high population density and advances in high-rise construction technology, the number of tall buildings has been increasing. Several information services, such as indoor navigation and indoor map services, have been provided to help people understand the complex indoor structure of buildings. The most fundamental information for these services is the indoor network, which, unlike the geometric information of a building, provides topological connectivity between spaces. To make indoor network information, we either have to edit the network manually or derive it from the geometric data of the building, which is not easy for complex buildings. In this paper, we suggest a method to generate the indoor network automatically by crowdsourcing: from collected individual trajectories, we derive the indoor network information; a simplified stand-in for this derivation is sketched after this entry. We validate our method with a sample set of trajectory data, and the result shows that it is practical if the indoor positioning technology is reasonably accurate.
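
The paper above derives an indoor network (nodes and their topological connectivity) from crowdsourced pedestrian trajectories; the actual derivation is not described in the abstract. The Python sketch below is one simple stand-in: snapping trajectory points to a grid to form nodes and adding an edge for each consecutive pair of visited cells. Cell size and the sample walks are assumptions.

```python
from collections import defaultdict

def build_indoor_network(trajectories, cell_size=2.0):
    """Derive nodes and edges from (x, y) trajectories by grid snapping."""
    def cell(p):
        return (round(p[0] / cell_size), round(p[1] / cell_size))

    edges = defaultdict(int)  # undirected edge -> number of supporting traversals
    for traj in trajectories:
        cells = [cell(p) for p in traj]
        for a, b in zip(cells, cells[1:]):
            if a != b:
                edges[tuple(sorted((a, b)))] += 1

    nodes = {c for edge in edges for c in edge}
    return nodes, dict(edges)

# Two illustrative walks along the same corridor and one branching off it.
trajectories = [
    [(0, 0), (2, 0), (4, 0), (6, 0)],
    [(0, 0.3), (2, 0.1), (4, -0.2), (6, 0.2)],
    [(4, 0), (4, 2), (4, 4)],
]
nodes, edges = build_indoor_network(trajectories)
print(len(nodes), "nodes; edges:", edges)
```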

Background and Local Histogram-Based Object Tracking Approach (도로 상황인식을 위한 배경 및 로컬히스토그램 기반 객체 추적 기법)

  • Kim, Young Hwan; Park, Soon Young; Oh, Il Whan; Choi, Kyoung Ho
    • Spatial Information Research / v.21 no.3 / pp.11-19 / 2013
  • Compared with traditional video monitoring systems whose main service is video recording, an intelligent video monitoring system can extract and track objects and detect events such as car accidents, traffic congestion, and pedestrians. Object tracking is therefore an essential function for various intelligent video monitoring and surveillance systems. In this paper, we propose a background and local histogram-based object tracking approach for intelligent video monitoring systems. For robust object tracking in live situations, the results of optical flow and local histogram verification are combined with the result of background subtraction. In the proposed approach, local histogram verification allows the system to track target objects more reliably when the local histogram at the LK (Lucas-Kanade) position is not similar to the previous histogram; a partial sketch of this combination follows this entry. Experimental results show that the proposed tracking algorithm is robust to object occlusion and scale changes.
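
The approach above combines background subtraction, Lucas-Kanade optical flow, and local-histogram verification. The Python/OpenCV sketch below shows only the first and last of those pieces: a MOG2 background subtractor to find the dominant foreground region and a histogram comparison to check that the tracked patch still resembles the previous one. The thresholds, single-object handling, and video path are illustrative assumptions, not the authors' parameters.

```python
import cv2

def track(video_path, similarity_threshold=0.7):
    """Foreground extraction plus local-histogram verification (illustrative only)."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
    prev_hist = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            continue
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        patch = frame[y:y + h, x:x + w]
        hist = cv2.normalize(cv2.calcHist([patch], [0], None, [64], [0, 256]), None)
        if prev_hist is not None:
            similarity = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL)
            if similarity < similarity_threshold:
                print("local histogram changed; verification failed at this frame")
        prev_hist = hist
    cap.release()

if __name__ == "__main__":
    track("traffic.mp4")  # path is a placeholder
```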

Analysis and Utilization of Housing Information based on Open API and Web Scraping (오픈API와 웹스크래핑에 기반한 주택정보 분석 및 활용방안)

  • Shin-Hyeong Choi
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.17 no.5 / pp.323-329 / 2024
  • In an era of low interest rates around the world, interest in real estate has increased. Real estate information can be collected over the Internet, but finding it takes considerable time. In this paper, real estate information from January 2015 to April 2024 is collected from three sources to help users gather information on properties of interest more easily and use it for buying and selling. First, information on properties of interest is extracted automatically from the website of a platform company by analyzing HTML documents with web scraping techniques. Second, actual transaction prices are additionally collected through the open API provided by the Ministry of Land, Infrastructure and Transport. Third, real-estate-related news is provided so that users can learn about the future value and prospects of a property; a rough sketch of the scraping-and-forecasting workflow follows this entry. The simulation results for the collected data show that, among the next eight months, the lowest price predicted by the ARIMA model is expected in May 2024. By following this procedure, real estate buyers can therefore trade homes more efficiently by referring to related information, including the predicted transaction price.
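
The workflow above scrapes listing pages, pulls actual transaction prices from the Ministry of Land, Infrastructure and Transport open API, and forecasts with an ARIMA model. The actual endpoints, API keys, and HTML layout are not reproduced here; the Python sketch below only illustrates the scraping-plus-forecast shape with a placeholder URL and synthetic monthly prices, using requests, BeautifulSoup, and statsmodels. The (1, 1, 1) model order is an assumption, not the paper's configuration.

```python
import pandas as pd
import requests
from bs4 import BeautifulSoup
from statsmodels.tsa.arima.model import ARIMA

def scrape_prices(url, selector=".price"):
    """Extract listed prices from an HTML page; URL and CSS selector are placeholders."""
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    return [tag.get_text(strip=True) for tag in soup.select(selector)]

def forecast_prices(monthly_prices, steps=8):
    """Fit a small ARIMA model and forecast the next `steps` months."""
    series = pd.Series(
        monthly_prices,
        index=pd.date_range("2015-01", periods=len(monthly_prices), freq="MS"),
    )
    model = ARIMA(series, order=(1, 1, 1)).fit()
    return model.forecast(steps=steps)

if __name__ == "__main__":
    # Synthetic monthly transaction prices covering 2015-01 .. 2024-04 (not real data).
    prices = [300 + 2 * i + (i % 5) for i in range(112)]
    print(forecast_prices(prices))
    # scrape_prices("https://example.com/listings")  # placeholder URL
```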