• Title/Summary/Keyword: 3D file format

A Semi-fragile Watermarking Algorithm of 3D Mesh Model for Rapid Prototyping System Application (쾌속조형 시스템의 무결성 인증을 위한 3차원 메쉬 모델의 Semi-fragile 워터마킹)

  • Chi, Ji-Zhe; Kim, Jong-Weon; Choi, Jong-Uk
    • Journal of the Korea Institute of Information Security & Cryptology, v.17 no.6, pp.131-142, 2007
  • In this paper, a semi-fragile watermarking algorithm is proposed for application to RP (Rapid Prototyping) systems. Because an RP system requires high-precision measurement, any perceptual change or distortion of the original model affects the prototype produced by the process. Therefore, geometric transformations such as translation, rotation, and scaling, as well as mesh reordering and file-format changes, are allowed in an RP system because they do not change the basic shape of the 3D model, whereas decimation and smoothing are not used because they do change the model. The proposed semi-fragile watermarking algorithm is robust against the manipulations that preserve the original shape, since it takes the limitations of the RP system into account, but fragile against the other manipulations that change the original shape. Embedding the watermark information does not change the model shape; that is, there is no shape difference between the original model and the watermarked model. It is therefore useful for authenticating data integrity and hiding information in mechanical engineering applications that require high-precision measurement.
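
The abstract does not give the embedding details, so the following minimal sketch is an assumption, not the paper's algorithm; it only illustrates the kind of transform- and reordering-invariant shape signature on which such a semi-fragile integrity check must rest (shape-preserving edits leave the signature unchanged, shape-changing edits do not).

```python
import numpy as np

def shape_signature(vertices, bins=64):
    """Transform-invariant signature of a mesh's vertex cloud: centering removes
    translation, dividing by the mean radius removes scale, and a histogram of
    radial distances is insensitive to rotation and vertex reordering."""
    v = np.asarray(vertices, dtype=float)
    v = v - v.mean(axis=0)              # translation invariance
    r = np.linalg.norm(v, axis=1)
    r = r / r.mean()                    # scale invariance
    hist, _ = np.histogram(r, bins=bins, range=(0.0, 3.0), density=True)
    return hist

def shape_preserved(original, received, tol=1e-3):
    """Semi-fragile check: accept shape-preserving edits, flag shape changes."""
    return np.abs(shape_signature(original) - shape_signature(received)).mean() < tol
```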

A Watermarking Algorithm of 3D Mesh Model Using Spherical Parameterization (구면 파라미터기법을 이용한 3차원 메쉬 모델의 워터마킹 알고리즘)

  • Cui, Ji-Zhe; Kim, Jong-Weon; Choi, Jong-Uk
    • Journal of the Korea Institute of Information Security & Cryptology, v.18 no.1, pp.149-159, 2008
  • In this paper, we propose a blind watermarking algorithm for 3D mesh models using spherical parameterization. Spherical parameterization is a useful method for 3D data processing; in particular, features of the vertex coordinates of a 3D mesh model that cannot be analyzed in an orthogonal (Cartesian) coordinate system can be analyzed and processed in this representation. In this paper, the centroid of the 3D model is set as the origin of the spherical coordinate system, the orthogonal coordinate system is transformed into the spherical coordinate system, and then spherical parameterization is applied. The watermark is embedded by adding or modifying vertices after analyzing the geometric and topological information. The algorithm is robust against typical geometric attacks such as translation, scaling, and rotation, and also against mesh reordering, file-format changes, mesh simplification, and smoothing. Under these attacks, the algorithm can extract about 90-98% of the watermark information from the attacked model, which means it is applicable to games, virtual reality, and rapid prototyping.
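
As a concrete illustration of the coordinate transform described above (the watermark embedding itself is not specified in the abstract), a minimal sketch of moving the origin to the model centroid and converting vertices to spherical coordinates might look like this:

```python
import numpy as np

def to_spherical_about_centroid(vertices):
    """Convert Cartesian mesh vertices to spherical coordinates (r, theta, phi)
    about the model centroid, the first step described in the abstract."""
    v = np.asarray(vertices, dtype=float)
    c = v.mean(axis=0)                       # centroid becomes the origin
    x, y, z = (v - c).T
    r = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arccos(np.clip(z / np.maximum(r, 1e-12), -1.0, 1.0))  # polar angle
    phi = np.arctan2(y, x)                   # azimuth
    return np.column_stack([r, theta, phi]), c
```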

A Study on the Effects of BIM Adoption and Methods of Implementation in Landscape Architecture through an Analysis of Overseas Cases (해외사례 분석을 통한 조경분야에서의 BIM 도입효과 및 실행방법에 관한 연구)

  • Kim, Bok-Young; Son, Yong-Hoon
    • Journal of the Korean Institute of Landscape Architecture, v.45 no.1, pp.52-62, 2017
  • Overseas landscape practices have already benefited from an awareness of BIM, landscape-related organizations are encouraging its use, and the number of landscape projects using BIM is increasing. However, since BIM has not yet been introduced in the domestic field, this study investigated and analyzed overseas landscape projects and discussed the positive effects and implementation of BIM. For this purpose, landscape projects were selected to show three effects of BIM: improvement of design work efficiency, building of a platform for cooperation, and performance of topography design. These three projects were analyzed across four aspects of implementation methods: landscape information, 3D modeling, interoperability, and visualization uses of BIM. First, in terms of landscape information, a variety of building information was constructed in the form of 3D libraries or 2D CAD format, from detailed landscape elements to infrastructure. Second, for 3D modeling, a landscape space including simple terrain and trees was modeled with Revit, while elaborate and complex terrain was modeled with Maya, a professional 3D modeling tool. One integrated model was produced by periodically exchanging, reviewing, and finally combining the models from the interdisciplinary fields. Third, interoperability of data from different fields was achieved through the unification of file formats, conversion of differing formats, or compliance with information standards. Lastly, visualized 3D models helped coordination among project partners, approval of the design, and promotion through public media. A review of the case studies shows that BIM functions as a process to improve work efficiency and interdisciplinary collaboration, rather than simply as a design tool. It also verified that landscape architects could play an important role in integrated projects using BIM. Just as the introduction of BIM into the architecture, engineering, and construction industries brought great benefits and opportunities, BIM should also be introduced to landscape architecture.

Development of a Haptic Modeling and Editing Tool (촉감 모델링 및 편집 툴 개발)

  • Seo, Yong-Won; Lee, Beom-Chan; Cha, Jong-Eun; Kim, Jong-Phil; Ryu, Je-Ha
    • Proceedings of the HCI Society of Korea Conference, 2007.02a, pp.373-378, 2007
  • Recently, haptics has been widely studied in medicine, education, the military, entertainment, and broadcasting because it lets users touch digital content. However, although haptics offers many advantages, providing tactile feedback in addition to audiovisual information and thus more realistic and natural interaction, it is still unfamiliar to ordinary users. One reason is the lack of content that supports haptic interaction. In addition, interest in virtual environments (VR) has grown and many attempts are being made to combine haptics with virtual environments, so the demand for haptic modeling is also increasing. In general, a haptic model consists of graphic models that carry material properties. Graphic modeling can be done with common modeling tools (MAYA, 3D MAX, etc.), but the haptic properties must be added manually, one by one, after the content has been created. Graphic modeling can be done intuitively because the user sees the result directly while working; similarly, for haptic modeling to be intuitive, the user should be able to feel the tactile result while working. Because graphic modeling and haptic modeling are not carried out at the same time, creating haptic content takes a long time and is not intuitive. Furthermore, modeling that includes haptic modeling must be performed quickly for high productivity. For these reasons, a new interface for haptic modeling is needed. This paper describes a haptic modeler that lets users intuitively create and manipulate haptic content. In the haptic modeler, the user touches 3D content (static, dynamic, or deformable 2D, 2.5D, and 3D scenes) in real time with a 3-DOF haptic device and intuitively edits the surface haptic properties of the content through a Haptic User Interface (HUI). The HUI includes the conventional 2D mouse-driven graphical user interface and adds a 3D user interface composed of buttons, radio buttons, sliders, and a joystick that can be operated with the haptic device. By manipulating each component the user changes the surface haptic property values of the content, and by touching a part of the HUI the user feels the result in real time, so property values can be set intuitively. In addition, an XML-based file format is provided, so the created content can be saved, loaded again, or added to other content. Such a system should greatly help even people unfamiliar with haptics to perform haptic modeling intuitively.
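
The abstract mentions an XML-based file format for saving haptic surface properties but does not specify its schema; the element and attribute names below are hypothetical, a minimal sketch of how such per-object properties could be serialized.

```python
import xml.etree.ElementTree as ET

def save_haptic_scene(path, objects):
    """Write per-object surface haptic properties to a hypothetical XML format.

    `objects` maps an object name to a dict with keys such as 'mesh',
    'stiffness', 'damping', and 'static_friction' (all assumed names)."""
    root = ET.Element("HapticScene")
    for name, props in objects.items():
        node = ET.SubElement(root, "Object", name=name, mesh=props["mesh"])
        surface = ET.SubElement(node, "Surface")
        for key in ("stiffness", "damping", "static_friction"):
            surface.set(key, str(props[key]))
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)

save_haptic_scene("scene.xml", {
    "cube": {"mesh": "cube.obj", "stiffness": 0.8, "damping": 0.1, "static_friction": 0.3},
})
```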

Automation of Information Extraction from IFC-BIM for Indoor Air Quality Certification (IFC-BIM을 활용한 실내공기질 인증 요구정보 생성 자동화)

  • Hong, Simheee; Yeo, Changjae; Yu, Jungho
    • Korean Journal of Construction Engineering and Management, v.18 no.3, pp.63-73, 2017
  • In contemporary society, it is increasingly common to spend more time indoors, so there is a continually growing desire to build comfortable and safe indoor environments. Along with this trend, however, there are some serious indoor-environment challenges, such as indoor air quality and Sick House Syndrome. To address these concerns, the government operates various systems to supervise and manage indoor environments. For example, green building certification is now compulsory for public buildings. There are three categories of green building certification related to indoor air in Korea: Health-Friendly Housing Construction Standards, Green Standard for Energy & Environmental Design (G-SEED), and Indoor Air Certification. The first two, Health-Friendly Housing Construction Standards and G-SEED, evaluate data from the design drawings, whereas the Indoor Air Certification evaluates measured data. Certification based on drawing data takes a considerable amount of time compared to other work, because a 2D tool has to be employed to measure the areas manually. Thus, this study proposes an automatic assessment process using a Building Information Modeling (BIM) model based on 3D data. This process, using the open Industry Foundation Classes (IFC) format, exports the data required by the certification system and extracts it to create an Excel sheet for the certification. This is expected to improve the work process and reduce the workload associated with evaluating indoor air conditions.
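
The abstract describes extracting certification-related information from an IFC model and exporting it to a spreadsheet, but does not name the tools used. A minimal sketch with the open-source ifcopenshell library and CSV output (both assumptions, not the authors' toolchain) might look like this:

```python
import csv
import ifcopenshell  # open-source IFC parser (assumed, not named in the paper)

def export_space_list(ifc_path, csv_path):
    """Extract every IfcSpace from the model and write its identifiers to a CSV sheet."""
    model = ifcopenshell.open(ifc_path)
    with open(csv_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["GlobalId", "Name", "LongName"])
        for space in model.by_type("IfcSpace"):
            writer.writerow([space.GlobalId, space.Name, space.LongName])

export_space_list("building.ifc", "spaces.csv")
```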

Development of MPEG-4 IPMP Authoring Tool (MPEG-4 IPMP 저작 도구 개발)

  • Kim Kwangyong; Hong Jinwoo
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2003.11a, pp.75-78, 2003
  • The MPEG-4 standard makes it easy for an author to compose various types of objects individually, such as still images, text, 2D/3D graphics, audio, and even arbitrarily shaped video, and to arrange them spatially and temporally. Because of this object-based coding characteristic, it can be considered the most useful approach for producing interactive broadcast content. However, when content production, delivery, and consumption are considered, protection and management of the rights of content producers and copyright holders become necessary. Accordingly, international standards bodies such as OPIMA (Open Platform Initiative for Multimedia Access), SDMI (Secure Digital Music Initiative), and MPEG (Moving Picture Experts Group), with its IPMP (Intellectual Property Management & Protection) work, have taken an interest in content protection and management. MPEG in particular has standardized MPEG-4 IPMP and has been the most active in research on handling the protection of digital content and copyright systematically and effectively. In this paper, we propose an MPEG-4 IPMP authoring tool that lets MPEG-4 content authors easily and conveniently produce protected, object-based broadcast content that conforms to the MPEG-4 specification. The proposed authoring tool provides an author-friendly user interface and generates an IPMP-protected XMT (eXtensible MPEG-4 Textual format) file, a text format that is easy to edit and modify. For efficient delivery and storage of content, it can also generate an IPMP-protected MP4 file in binary format.
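
XMT is an XML-based textual representation of an MPEG-4 scene; the abstract does not show its structure, so the element names below are simplified placeholders rather than the normative XMT schema. A minimal sketch of emitting such a protected textual file might be:

```python
import xml.etree.ElementTree as ET

def write_protected_xmt(path, stream_id, ipmp_tool_id):
    """Emit a simplified, XMT-like XML file that tags a media stream with an
    IPMP descriptor. Element and attribute names are placeholders only."""
    root = ET.Element("XMT-A")  # placeholder root, not the normative schema
    od = ET.SubElement(root, "ObjectDescriptor", objectDescriptorID=str(stream_id))
    ET.SubElement(od, "IPMP_Descriptor", IPMPS_Type=str(ipmp_tool_id))
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)

write_protected_xmt("content_ipmp.xmt", stream_id=1, ipmp_tool_id=65535)
```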

A Study on Light-weight Algorithm of Large scale BIM data for Visualization on Web based GIS Platform (웹기반 GIS 플랫폼 상 가시화 처리를 위한 대용량 BIM 데이터의 경량화 알고리즘 제시)

  • Kim, Ji Eun; Hong, Chang Hee
    • Spatial Information Research, v.23 no.1, pp.41-48, 2015
  • BIM technology holds data from the whole life cycle of a facility in the form of 3D models, so a single building produces a huge file with massive amounts of data. One such format is IFC, the standard exchange format, and processing large-scale data based on the geometry and property information of its objects is an issue: it increases rendering time and loads the graphics card, so large-scale data is inefficient to visualize on screen for the user. Lightweighting of large-scale BIM data is therefore essential for the performance and quality of the program. This paper surveys and reviews lightweighting techniques from domestic and international research. To control and visualize large-scale BIM data effectively, we propose and verify a technique that optimizes the characteristics of BIM data. For operating large-scale facility data on a web-based GIS platform, the quality of screen transitions from the user's point of view and efficient memory use were secured.
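
The abstract does not detail the specific lightweighting algorithm. As a generic illustration of geometry reduction (an assumption, not the authors' method), a vertex-clustering simplification that snaps vertices to a coarse grid and merges duplicates might look like this:

```python
import numpy as np

def cluster_simplify(vertices, faces, cell_size):
    """Reduce mesh size by snapping vertices into grid cells of `cell_size`
    and merging all vertices that fall into the same cell
    (generic illustration, not the paper's algorithm)."""
    v = np.asarray(vertices, dtype=float)
    keys = np.floor(v / cell_size).astype(np.int64)
    # Map each unique cell to one representative vertex index.
    _, first_idx, inverse = np.unique(keys, axis=0, return_index=True, return_inverse=True)
    inverse = inverse.reshape(-1)
    new_vertices = v[first_idx]
    new_faces = inverse[np.asarray(faces)]
    # Drop faces that collapsed onto fewer than three distinct vertices.
    keep = np.array([len(set(f)) == 3 for f in new_faces])
    return new_vertices, new_faces[keep]
```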

Development of KML conversion technology for ENCs application (전자해도 활용을 위한 KML 변환기술 개발)

  • Oh, Se-Woong; Ko, Hyun-Joo; Park, Jong-Min; Lee, Moon-Jin
    • Proceedings of the Korean Institute of Navigation and Port Research Conference, 2010.04a, pp.135-138, 2010
  • The IMO adopted a revision of the SOLAS convention requiring ECDIS and regards ECDIS as a major system of the e-Navigation strategy for marine transportation safety and environmental protection. The ENC (Electronic Navigational Chart), the base map of ECDIS, is a principal information infrastructure essential for navigation tasks. However, ENCs are not easy to use because they are encoded according to the ISO/IEC 8211 file format, and since they are mainly used for navigational purposes, they also need to be made available to marine GIS and various other marine applications. Meanwhile, Google Earth, the satellite map service operated by Google, is widely used across industries and provides local information including satellite imagery, maps, terrain, and 3D building information. In this paper, we develop KML conversion technology for ENC application. The development consists of an ENC loading module and a KML conversion module. We applied this conversion technology to Korean ENCs and evaluated the results.
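
The ENC loading step depends on an S-57/ISO 8211 parser that the abstract does not name; assuming a chart feature has already been decoded into a list of coordinates, a minimal sketch of the KML conversion step could be:

```python
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def feature_to_kml(path, name, lonlat_points):
    """Write one chart feature (e.g. a coastline) as a KML LineString placemark.
    Assumes the ENC feature has already been decoded into (lon, lat) pairs."""
    ET.register_namespace("", KML_NS)
    kml = ET.Element(f"{{{KML_NS}}}kml")
    doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
    pm = ET.SubElement(doc, f"{{{KML_NS}}}Placemark")
    ET.SubElement(pm, f"{{{KML_NS}}}name").text = name
    line = ET.SubElement(pm, f"{{{KML_NS}}}LineString")
    coords = ET.SubElement(line, f"{{{KML_NS}}}coordinates")
    coords.text = " ".join(f"{lon},{lat},0" for lon, lat in lonlat_points)
    ET.ElementTree(kml).write(path, encoding="utf-8", xml_declaration=True)

feature_to_kml("coastline.kml", "COALNE", [(129.04, 35.10), (129.05, 35.11)])
```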

Efficacy and Accuracy of Patient Specific Customize Bolus Using a 3-Dimensional Printer for Electron Beam Therapy (전자선 빔 치료 시 삼차원프린터를 이용하여 제작한 환자맞춤형 볼루스의 유용성 및 선량 정확도 평가)

  • Choi, Woo Keun; Chun, Jun Chul; Ju, Sang Gyu; Min, Byung Jun; Park, Su Yeon; Nam, Hee Rim; Hong, Chae-Seon; Kim, MinKyu; Koo, Bum Yong; Lim, Do Hoon
    • Progress in Medical Physics, v.27 no.2, pp.64-71, 2016
  • We develop a manufacturing procedure for the production of a patient-specific customized bolus (PSCB) using a 3D printer (3DP), and evaluate the dosimetric accuracy of the 3D-PSCB for electron beam therapy. In order to cover the required planning target volume (PTV), we select the proper electron beam energy and field size through an initial dose calculation using a treatment planning system. The PSCB is delineated based on the initial dose distribution, and the dose calculation is repeated after applying the PSCB. We iteratively fine-tune the PSCB shape until the plan quality meets the required clinical criteria. The contour data of the PSCB is then transferred to in-house conversion software through the DICOM-RT protocol, converted into the 3DP data format (STereoLithography, STL), and printed on a 3DP. Two virtual patients, having concave and convex shapes, were generated with a virtual PTV and an organ at risk (OAR). Two corresponding electron treatment plans, with and without a PSCB, were then generated to evaluate the dosimetric effect of the PSCB. The dosimetric characteristics and dose-volume histograms of the PTV and OAR are compared in both plans. Film dosimetry is performed to verify the dosimetric accuracy of the 3D-PSCB: the calculated planar dose distribution is compared to that measured with film along the beam central axis, and we compare the percent depth dose curves and gamma analysis (dose difference 3%, distance to agreement 3 mm) results. No significant difference in the PTV dose is observed in the plan with the PSCB compared to that without the PSCB. The maximum, minimum, and mean doses of the OAR in the plan with the PSCB were significantly reduced by 9.7%, 36.6%, and 28.3%, respectively, compared to those in the plan without the PSCB. By applying the PSCB, the OAR volumes receiving 90% and 80% of the prescribed dose were reduced from 14.40 cm³ to 0.1 cm³ and from 42.6 cm³ to 3.7 cm³, respectively, in comparison to the plan without the PSCB. The gamma pass rates of the concave and convex plans were 95% and 98%, respectively. A new PSCB fabrication procedure is thus developed using a 3DP, and we confirm the usefulness and dosimetric accuracy of the 3D-PSCB for clinical use. Rapidly advancing 3DP technology can therefore ease and expand clinical implementation of the PSCB.
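
The contour-to-STL conversion is done by in-house software that the abstract does not detail; purely as an illustration of the target STL format (assumed triangle input, not the authors' tool), a minimal ASCII STL writer could be:

```python
import numpy as np

def write_ascii_stl(path, triangles, name="bolus"):
    """Write triangles (an (N, 3, 3) array of vertex coordinates in mm)
    to an ASCII STL file suitable for a 3D printer's slicing software."""
    tris = np.asarray(triangles, dtype=float)
    with open(path, "w", encoding="ascii") as f:
        f.write(f"solid {name}\n")
        for a, b, c in tris:
            n = np.cross(b - a, c - a)
            n = n / (np.linalg.norm(n) or 1.0)   # unit facet normal
            f.write(f"  facet normal {n[0]:e} {n[1]:e} {n[2]:e}\n")
            f.write("    outer loop\n")
            for v in (a, b, c):
                f.write(f"      vertex {v[0]:e} {v[1]:e} {v[2]:e}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")
```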

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan; Han, Nam-Gi; Song, Min
    • Journal of Intelligence and Information Systems, v.20 no.2, pp.109-122, 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, thereby greatly influencing society. This is an unmatched phenomenon in history, and now we live in the Age of Big Data. SNS data meets the conditions of Big Data in terms of the amount of data (volume), data input and output speeds (velocity), and the variety of data types (variety). If someone intends to discover the trend of an issue in SNS Big Data, this information can be used as an important new source for creating value because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the need for analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides the following four functions: (1) provide the topic keyword set that corresponds to the daily ranking; (2) visualize the daily time-series graph of a topic over the duration of a month; (3) present the importance of a topic through a treemap based on a scoring system and frequency; (4) visualize the daily time-series graph of keywords retrieved by keyword search. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including the removal of stop words and noun extraction, to process various unrefined forms of unstructured data. In addition, such analysis requires the latest big data technology to rapidly process large amounts of real-time data, such as the Hadoop distributed system or NoSQL, an alternative to relational databases. We built TITS on Hadoop to optimize the processing of big data because Hadoop is designed to scale from single-node computing up to thousands of machines. Furthermore, we use MongoDB, which is classified as a NoSQL database. MongoDB is an open-source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables, and its most important goals are data accessibility and data processing performance. In the Age of Big Data, visualization is attractive to the Big Data community because it helps analysts examine the data easily and clearly. Therefore, TITS uses the d3.js library as a visualization tool. This library is designed for creating Data Driven Documents that bind the document object model (DOM) to data, and it is useful for managing a real-time data stream with smooth animation. In addition, TITS uses Bootstrap, a set of pre-configured style sheets and JavaScript plug-ins, to build the web system. The TITS graphical user interface (GUI) is designed using these libraries and is capable of detecting issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the superiority of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique that is used in various research areas, including Library and Information Science (LIS); based on this, we confirm the utility of storytelling and time-series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets collected in Korea during March 2013.
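
The abstract names topic modeling but not a specific implementation; a minimal sketch of extracting daily topic keyword sets with scikit-learn's LDA (an assumed toolchain, not necessarily the one used by TITS) might look like this:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

def daily_topic_keywords(tweets, n_topics=5, n_keywords=10):
    """Fit LDA on one day's tweets and return a keyword set per topic."""
    vectorizer = CountVectorizer(max_features=5000, stop_words="english")
    doc_term = vectorizer.fit_transform(tweets)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(doc_term)
    vocab = vectorizer.get_feature_names_out()
    return [
        [vocab[i] for i in topic.argsort()[::-1][:n_keywords]]
        for topic in lda.components_
    ]

print(daily_topic_keywords([
    "big data analysis on twitter",
    "hadoop and mongodb for real time streams",
    "topic modeling finds daily issues",
]))
```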