• Title/Summary/Keyword: Data editing


A Watermarking Scheme Based on k-means++ for Design Drawings (k-means++ 기반의 설계도면 워터마킹 기법)

  • Lee, Suk-Hwan; Kwon, Ki-Ryong
    • Journal of the Institute of Electronics Engineers of Korea CI / v.46 no.5 / pp.57-70 / 2009
  • A CAD design drawing, being vector data of great value in industrial fields, has come to be regarded as content for which copyright protection is urgently needed. This paper presents a watermarking scheme based on k-means++ for CAD design drawings. A CAD design drawing consists of several layers, and each layer consists of various geometric objects such as LINE, POLYLINE, CIRCLE, ARC, 3DFACE and POLYGON. Fundamental objects such as POLYLINE, LINE, 3DFACE and ARC make up the majority of a CAD design drawing. The proposed scheme therefore selects, among the POLYLINE, 3DFACE and ARC objects in the drawing, the target object type with the highest distribution, and then selects the layers that contain the most target objects. We then cluster the target objects in the selected layers using k-means++ and embed the watermark into the geometric distribution of each group. The geometric distribution is the normalized length distribution for POLYLINE objects, the normalized area distribution for 3DFACE objects, and the angle distribution for ARC objects. Experimental results verified that the proposed scheme is robust against file-format conversion and layer attacks as well as the various geometric editing operations provided in CAD editing tools.
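As a rough illustration of the clustering-and-embedding step described in the abstract, the sketch below clusters normalized POLYLINE lengths with k-means++ and shifts each cluster's length distribution to encode one watermark bit. The function name, the additive embedding rule, and the scikit-learn dependency are assumptions made for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the clustering-and-embedding idea described above.
# Assumes POLYLINE lengths have already been extracted from the drawing.
import numpy as np
from sklearn.cluster import KMeans

def embed_watermark(lengths, watermark_bits, strength=0.01):
    """Cluster normalized polyline lengths with k-means++ and nudge each
    cluster's length distribution to encode one watermark bit."""
    lengths = np.asarray(lengths, dtype=float)
    norm = lengths / lengths.max()                      # normalized length distribution
    k = len(watermark_bits)
    km = KMeans(n_clusters=k, init="k-means++", n_init=10, random_state=0)
    labels = km.fit_predict(norm.reshape(-1, 1))
    marked = norm.copy()
    for g, bit in enumerate(watermark_bits):
        idx = labels == g
        # shift the group's distribution slightly up or down depending on the bit
        marked[idx] += strength if bit else -strength
    return marked * lengths.max(), labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    lengths = rng.uniform(1.0, 50.0, size=200)          # stand-in polyline lengths
    watermarked, groups = embed_watermark(lengths, [1, 0, 1, 1])
    print(np.abs(watermarked - lengths).max())
```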

Confidence Improvement of Serial Cadastral Map Edit Using Ortho Image (정사영상을 이용한 연속지적도 편집의 신뢰성 향상 방안)

  • Kim Kam Lae; Ra Yong Hwa; Ahn Byung Gu; Park Se Jin
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.22 no.3 / pp.253-259 / 2004
  • The sheetwise cadastral map data needs to become a Serial Cadastral Map (SCM) database to promote the reliability of cadastral surveying, to operate the Parcel Based Land Information System efficiently, and to make land information convenient to use. A large amount of money and time is required for the editing process of producing the SCM DB in accordance with the "Guideline for the Production of Serial Cadastral Maps" of the Ministry of Construction & Transportation if field surveying is involved. In addition, a boundary line that extends to a neat line often fails to meet its counterpart on the neighboring map sheet at a single point; such cases occur frequently and depend heavily on the judgment of the individuals in charge of editing or inspection. The core processes of the research are, first, to overlay the SCM produced by editing the sheetwise cadastral maps with Autodesk Map on orthophoto images; second, to adjust the parcel boundaries that are delineated across more than one map sheet; and last, to compare the original boundary coordinates and areas with the corresponding adjusted ones and calculate root mean square errors (RMSEs). The research aims at improving the quality of the SCM by minimizing the inconsistency of parcel boundaries through comparative analysis of the calculated RMSEs.
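The RMSE comparison mentioned above can be sketched as follows, assuming the original and adjusted parcel boundary vertices are available as matched (x, y) coordinate pairs; the coordinate values below are made up for illustration.

```python
# Minimal sketch of the boundary RMSE comparison; data are illustrative only.
import numpy as np

def boundary_rmse(original_xy, adjusted_xy):
    """Root mean square error between corresponding boundary vertices."""
    diff = np.asarray(adjusted_xy, dtype=float) - np.asarray(original_xy, dtype=float)
    return np.sqrt(np.mean(np.sum(diff**2, axis=1)))

original = [(1000.0, 2000.0), (1010.5, 2003.2), (1012.0, 2015.8)]
adjusted = [(1000.3, 2000.1), (1010.2, 2003.6), (1012.4, 2015.5)]
print(f"RMSE = {boundary_rmse(original, adjusted):.3f} m")
```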

Building Large-scale CityGML Feature for Digital 3D Infrastructure (디지털 3D 인프라 구축을 위한 대규모 CityGML 객체 생성 방법)

  • Jang, Hanme; Kim, HyunJun; Kang, HyeYoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.3 / pp.187-201 / 2021
  • Recently, the demand for a 3D urban spatial information infrastructure capable of storing, operating on, and analyzing the large volumes of digital data produced in cities has been increasing. CityGML is a 3D spatial information data standard of the OGC (Open Geospatial Consortium) with strengths in the exchange and attribute expression of city data. Cases of constructing 3D urban spatial data in CityGML format have emerged in several cities such as Singapore and New York. However, the current ecosystem for creating and editing CityGML data is of limited use for constructing CityGML data on a large scale, because it lacks the completeness of commercial 3D modeling programs such as SketchUp or 3ds Max. Therefore, this study proposes a method of constructing CityGML data using commercial 3D mesh data and 2D polygons that are produced rapidly and automatically through aerial LiDAR (Light Detection and Ranging) or RGB (Red Green Blue) cameras. During data construction, the original 3D mesh data were geometrically transformed so that each object could be expressed at various CityGML LoDs (Levels of Detail), and attribute information extracted from the 2D spatial data was added to increase their utility as spatial information. The 3D city features produced in this study are the CityGML building, bridge, cityFurniture, road, and tunnel features. Data conversion and attribute construction methods for each feature were presented, and visualization and validation were conducted.
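A minimal sketch of the LoD1 extrusion idea is given below, assuming a 2D building footprint and a height attribute are already available. The element and namespace prefixes loosely follow CityGML 2.0 naming conventions but are heavily simplified; a production workflow would typically rely on a dedicated library such as citygml4j or FME.

```python
# Simplified, illustrative only: extrude a 2D footprint into an LoD1 building
# element. Real CityGML output needs proper namespace handling and full solid
# geometry; here prefixes are written literally and only the base surface is
# emitted, to keep the idea visible.
import xml.etree.ElementTree as ET

def lod1_building(footprint, height, building_id="bldg_1"):
    bldg = ET.Element("bldg:Building", {"gml:id": building_id})
    ET.SubElement(bldg, "bldg:measuredHeight", {"uom": "m"}).text = str(height)
    solid = ET.SubElement(bldg, "bldg:lod1Solid")
    # Footprint ring at ground level; the remaining faces of the extrusion
    # (walls and roof at z = height) are omitted for brevity.
    ET.SubElement(solid, "gml:posList").text = " ".join(
        f"{x} {y} 0" for x, y in footprint
    )
    return bldg

footprint = [(0, 0), (10, 0), (10, 8), (0, 8), (0, 0)]
print(ET.tostring(lod1_building(footprint, 12.5), encoding="unicode"))
```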

Development of an Object-Oriented Framework Data Update System (객체 기반의 기본지리정보 갱신시스템 개발)

  • Lee, Jin-Soo; Choi, Yun-Soo; Seo, Chang-Wan; Jeon, Chang-Dong
    • Journal of the Korean Association of Geographic Information Studies / v.11 no.1 / pp.31-44 / 2008
  • The first-phase framework data implementation of the National Geographic Information Systems (NGIS) used 1:5,000 digital maps with a five-year update period, so the data lacked up-to-date information. This is a significant factor hindering the use of the framework data. This study proposed an efficient technical method for location-based, object-oriented data management and a system implementation for updating the framework data. First, we performed object-oriented data modeling and database design using a location-based Unique Feature IDentifier (UFID). Second, we developed a system with various functions, such as location-based UFID creation, input and output, spatial and attribute data editing, and object-based data processing, designed using UML (Unified Modeling Language). Finally, we applied the system to the study area and obtained high-quality data with 99% accuracy and a 35% saving in personnel expenses compared to the previous method. We expect that this study can contribute to the maintenance of the national framework data as well as to the revitalization of various GIS markets by providing users with up-to-date framework data, and that methods for feature-change modeling and monitoring can be developed on the basis of object-based data management.
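The object-based, UFID-keyed update idea can be sketched roughly as below; the record fields and the in-memory store are illustrative assumptions, not the study's actual data model or DBMS design.

```python
# Hedged sketch of an object-based feature record keyed by a UFID.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Feature:
    ufid: str                                   # location-based Unique Feature IDentifier
    feature_type: str                           # e.g. "road", "building"
    geometry: list                              # list of (x, y) vertices
    attributes: dict = field(default_factory=dict)
    updated: date = field(default_factory=date.today)

class FrameworkDataStore:
    """Minimal in-memory store supporting object-based insert/update by UFID."""
    def __init__(self):
        self._features = {}

    def upsert(self, feat: Feature):
        # An existing object with the same UFID is replaced; a new one is added.
        self._features[feat.ufid] = feat

    def get(self, ufid: str):
        return self._features.get(ufid)

store = FrameworkDataStore()
store.upsert(Feature("KR-5000-000123", "road", [(0, 0), (5, 3)]))
print(store.get("KR-5000-000123").feature_type)
```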


A Commissioning of 3D RTP System for Photon Beams

  • Kang, Wee-Saing
    • Proceedings of the Korean Society of Medical Physics Conference / 2002.09a / pp.119-120 / 2002
  • The aim is to urge the need for elaborate commissioning of a 3D RTP system, based on firsthand experience. A 3D RTP system requires a large amount of data, such as beam data and patient data. Most radiation beam data are transferred directly from a 3D dose scanning system, and some other data are input by editing; no error should occur in the process of inputting parameters and/or data. For an RTP system using an algorithm based on beam modeling, careless beam-data processing could also cause treatment errors. Beam data for three different photon beam qualities from two linear accelerators, patient data, and calculated results were commissioned. For PDD, the doses calculated by the Clarkson, convolution, superposition and fast-superposition methods at 10 cm depth for a 10×10 cm field at 100 cm SSD were compared with the measured values. An error in the SCD for one beam quality had been entered by the service engineer: the SCD defined by the physicist is SAD plus $d_{\max}$, but the entered value was just the SAD. That resulted in an increase of MU by $100\times\left((1+d_{\max}/\mathrm{SAD})^2-1\right)\%$. For a 10×10 cm open field at 1 m SSD and 10 cm depth in a uniform medium of relative electron density (RED) 1, the PDDs for the four dose-calculation algorithms (Clarkson, convolution, superposition and fast superposition) were compared with the measured values; the calculated PDDs were similar to the measured ones. For a 10×10 cm open field at 1 m SSD and 10 cm depth, with a 5 cm thick inhomogeneity of RED 0.2 under a 2 cm thick RED 1 medium, the PDDs for the four algorithms were compared: they ranged from 72.2% to 77.0% for the 4 MV X-ray and from 90.9% to 95.6% for the 6 MV X-ray, with the maximum for convolution and the minimum for superposition. For a 15×15 cm symmetric wedged field, the wedge factor was not constant across calculation modes, even for the same geometry, because the system's wedge factor takes beam hardening and the ray path into account; this definition requires users to revise their concept of the wedge factor. RTP users should elaborately review beam data and calculation algorithms during commissioning.
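The magnitude of the monitor-unit error described above follows directly from the inverse-square law; the short computation below uses an illustrative $d_{\max}$ of 1.5 cm and SAD of 100 cm, which are assumed values rather than figures reported in the abstract.

```python
# Worked example of the MU error caused by entering SCD as SAD instead of
# SAD + d_max; the numeric values are illustrative assumptions.
SAD = 100.0    # source-axis distance in cm
d_max = 1.5    # depth of dose maximum in cm

mu_increase_percent = 100.0 * (((SAD + d_max) / SAD) ** 2 - 1)
print(f"MU increase: {mu_increase_percent:.2f} %")   # about 3.02 %
```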


Participation Level in Online Knowledge Sharing: Behavioral Approach on Wikipedia (온라인 지식공유의 참여정도: 위키피디아에 대한 행태적 접근)

  • Park, Hyun Jung; Lee, Hong Joo; Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.97-121 / 2013
  • With the growing importance of knowledge for sustainable competitive advantage and innovation in a volatile environment, much research on knowledge sharing has been conducted. However, previous studies have mostly relied on questionnaire surveys, which carry the inherent perceptual errors of respondents. The current research derives the relationships between primary participant behaviors and the level of participation in knowledge sharing from observed user behavior on Wikipedia, a representative community for online knowledge collaboration. Without users' participation in knowledge sharing, knowledge collaboration for creating knowledge cannot succeed. However, the editing patterns of Wikipedia users are diverse, resulting in different revisiting periods for the same number of edits and thus varying amounts of shared knowledge. Therefore, we examine the participation level of knowledge sharing from two angles: the number of edits and the revisiting period. The behavioral dimensions affecting the level of participation in knowledge sharing include article talk for public discussion, user talk for private messaging, and community registration, all of which are observable on the Wiki platform. Public discussion takes place on article talk pages arranged for exchanging ideas about each article topic. An article talk page is often divided into several sections, each of which mainly addresses a specific type of issue raised during the article development procedure. Over the course of discussion, everything from opinions on relatively trivial matters, such as which text, links, or images should be added or removed and how they should be restructured, to profound professional insights is shared, negotiated, and improved. Wikipedia also provides personal user talk pages as a private messaging tool. On these pages, diverse personal messages, such as casual greetings, stories about activities on Wikipedia, and ordinary affairs of life, are exchanged; if anyone wants to communicate with another person, he or she visits that person's user talk page and leaves a message. Wikipedia articles are assessed according to seven quality grades, of which the featured article level is the highest. The dataset includes participants' behavioral data related to 2,978 articles that have reached the featured article level, with the editing histories of the articles, their article talk histories, and the user talk histories extracted from the user talk pages for each article. The time period for the analysis runs from the initiation of each article until its promotion to the featured article level. The number of edits represents the total number of participations in the editing of an article, and the revisiting period is the time difference between the first and last edits. First, the participation levels of the user categories classified according to the behavioral dimensions were analyzed and compared. Then, robust regressions were conducted on the relationships among the independent variables reflecting the degree of each behavioral characteristic and the dependent variable representing the participation level. In particular, by adopting a motivational theory appropriate to the online environment in setting up the research hypotheses, this work suggests a theoretical framework for the participation level of online knowledge sharing. Consequently, besides some theoretical implications, this work reached the following practical behavioral results.
First, both public discussion and private messaging positively affect the participation level in knowledge sharing. Second, public discussion exerts greater influence than private messaging on the participation level. Third, a synergy effect of public discussion and private messaging on the number of edits was found, whereas a weak negative interaction effect of the two on the revisiting period was observed. Fourth, community registration has a significant impact on the revisiting period, whereas it is insignificant for the number of edits. Fifth, for the relationships generated from private messaging, the frequency or depth of the relationship is shown to be more critical than the scope of the relationship for the participation level.
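A hedged sketch of the robust-regression step mentioned above is given below, using statsmodels' RLM with a Huber norm on synthetic data; the variable names and the data-generating process are illustrative and are not the study's dataset or model specification.

```python
# Illustrative robust regression of participation level on behavioral variables.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
article_talk = rng.poisson(3, n)        # public discussion activity per user
user_talk = rng.poisson(2, n)           # private messaging activity per user
registered = rng.integers(0, 2, n)      # community registration (0/1)
# synthetic participation level (number of edits) with heavy-tailed noise
edits = 5 + 2.0*article_talk + 1.2*user_talk + 0.5*registered + rng.standard_t(3, n)

X = sm.add_constant(np.column_stack([article_talk, user_talk, registered]))
model = sm.RLM(edits, X, M=sm.robust.norms.HuberT())   # robust to outliers
print(model.fit().summary())
```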

Color Correction Using Back Propagation Neural Network in Film Scanner (필름 스캐너에서 역전파 신경회로망을 이용한 색 보정)

  • 홍승범; 백중환
    • Journal of the Institute of Convergence Signal Processing / v.4 no.4 / pp.15-22 / 2003
  • A film scanner is an input device for acquiring high-resolution, high-quality digital images from existing optical film. Recently, demand for film scanners has risen among experts in the image printing and editing fields. However, due to the nonlinear characteristics of the light source and sensor, the colors of the original film image do not correspond to the colors of the scanned image, so color correction of the scanned digital image is essential in a film scanner. In this paper, a neural network method is applied for color correction to CIE L*a*b* color model data converted from RGB color model data. In addition, film scanner hardware with 12-bit color resolution for each of R, G, and B and 2400 dpi is implemented using the TMS320C32 DSP chip and a high-resolution line sensor. Experimental results show that the average color correction rate is 79.8%, an improvement of 43.5% over our previous method, the polygonal regression method.
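A minimal sketch of the back-propagation color-correction idea follows: a small multilayer perceptron is trained to map scanned RGB values to reference CIE L*a*b* values. The synthetic target mapping and the scikit-learn MLP are assumptions for illustration; the paper trains on measured film/scanner data.

```python
# Learn a nonlinear RGB -> Lab correction with a back-propagation-trained MLP.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
rgb_scanned = rng.uniform(0, 1, size=(1000, 3))             # scanner RGB (normalized)
# Stand-in "reference" Lab targets produced by an arbitrary nonlinear mapping.
lab_reference = np.column_stack([
    100 * rgb_scanned.mean(axis=1) ** 0.9,                   # L*
    120 * (rgb_scanned[:, 0] - rgb_scanned[:, 1]),            # a*
    120 * (rgb_scanned[:, 1] - rgb_scanned[:, 2]),            # b*
])

net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
net.fit(rgb_scanned, lab_reference)                           # trained via back-propagation
print("corrected Lab for RGB (0.5, 0.4, 0.3):", net.predict([[0.5, 0.4, 0.3]]))
```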


An Ontology Editor to describe the semantic association about Web Documents (웹 문서의 의미적 연관성 기술을 위한 온톨로지 에디터)

  • Lee Moo-Hun; Cho Hynu-Kyu; Cho Hyeon-Sung; Cho Sung-Hoon; Jang Chang-Bok; Choi Eui-In
    • The KIPS Transactions: Part D / v.12D no.6 s.102 / pp.881-888 / 2005
  • As the internet continues to grow, the quantity of information on the Web increases beyond measure, and internet users' abilities and requirements for using that information become varied and complicated. An ontology can describe the correct meaning of web resources and the relationships between them, and it can be used to extract the information that a user actually wants. Accordingly, we need ontologies to represent knowledge. The W3C announced OWL (Web Ontology Language), a technology for describing the meaning of such web resources. However, professional tools for composing and editing ontologies effectively have not yet been developed adequately. In this paper, we design and implement an ontology editor that generates and edits OWL documents through an intuitive interface, built around an OWL parser, an internal data model, and a serializer.
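A small sketch of the kind of OWL output such an editor might serialize is shown below, expressed here with rdflib; the example class, property, and document URIs are hypothetical, and the editor described in the paper does not necessarily use rdflib.

```python
# Produce OWL statements relating two web documents (illustrative names only).
from rdflib import Graph, Namespace, URIRef, RDF, RDFS, OWL

EX = Namespace("http://example.org/ontology#")
g = Graph()
g.bind("ex", EX)

# Declare a class and an object property describing a semantic association.
g.add((EX.WebDocument, RDF.type, OWL.Class))
g.add((EX.relatedTo, RDF.type, OWL.ObjectProperty))
g.add((EX.relatedTo, RDFS.domain, EX.WebDocument))
g.add((EX.relatedTo, RDFS.range, EX.WebDocument))

# Relate two documents through the property.
doc_a = URIRef("http://example.org/docs/a.html")
doc_b = URIRef("http://example.org/docs/b.html")
g.add((doc_a, RDF.type, EX.WebDocument))
g.add((doc_b, RDF.type, EX.WebDocument))
g.add((doc_a, EX.relatedTo, doc_b))

print(g.serialize(format="pretty-xml"))
```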

Design and Implementation of a TV-Anytime Metadata Authoring Tool for Personalized Broadcasting Services (개인형 데이터방송 서비스를 위한 TV-Anytime 메타데이터 저작도구 설계 및 구현)

  • Jun Dong-San; Kim Min-Je; Lee Han-Kyu; Yang Seung-Jun
    • Journal of Broadcast Engineering / v.11 no.3 s.32 / pp.284-301 / 2006
  • In this paper, we present the design and implementation of a TV-Anytime metadata authoring tool for providing personalized data-broadcasting services. TV-Anytime specifies metadata schema, metadata coding and delivery, and provides service models so that personalized broadcast content services can be offered whenever users want to consume them, using metadata including the ECG (Electronic Content Guide) and content-descriptive information in a PDR (Personal Digital Recorder)-centric environment. Despite the usefulness of services based on TV-Anytime metadata, metadata authoring remains a laborious and time-consuming task. For easier metadata authoring, the proposed tool provides the following key functionalities: metadata visualization, media access, and a semi-automatic method for editing segment-related metadata.
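A rough sketch of the segment-related metadata such a tool might author is shown below; the element names loosely echo TV-Anytime segmentation metadata but are simplified placeholders rather than the normative schema.

```python
# Build one simplified segment-metadata element (names are illustrative).
import xml.etree.ElementTree as ET

def segment_element(segment_id, title, start_seconds, duration_seconds):
    seg = ET.Element("SegmentInformation", {"segmentId": segment_id})
    ET.SubElement(seg, "Title").text = title
    locator = ET.SubElement(seg, "SegmentLocator")
    ET.SubElement(locator, "MediaTimePoint").text = f"PT{start_seconds}S"
    ET.SubElement(locator, "MediaDuration").text = f"PT{duration_seconds}S"
    return seg

print(ET.tostring(segment_element("seg_001", "Opening highlights", 0, 95),
                  encoding="unicode"))
```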

Small Scale Digital Mapping using Airborne Digital Camera Image Map (디지털 항공영상의 도화성과를 이용한 소축척 수치지도 제작)

  • Choi, Seok-Keun; Oh, Eu-Gene
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.29 no.2 / pp.141-147 / 2011
    • 2011
  • This study analyzed the issues and its usefulness of drawing small-scale digital map by using the large-scale digital map which was producted with high-resolution digital aerial photograph which are commonly photographed in recent years. To this end, correlation analysis of the feature categories on the digital map was conducted, and this map was processed by inputting data, organizing, deleting, editing, and supervising feature categories according to the generalization process. As a result, 18 unnecessary feature codes were deleted, and the accuracy of 1/5,000 for the digital map was met. Although the size of the data and the number of feature categories increased, this was proven to be shown due to the excellent description of the digital aerial photograph. Accordingly, it was shown that drawing a small-scale digital map with the large-scale digital map by digital aerial photograph provided excellent description and high-quality information for digital map.