• Title/Summary/Keyword: Virtual System


Economic Impact of HEMOS-Cloud Services for M&S Support (M&S 지원을 위한 HEMOS-Cloud 서비스의 경제적 효과)

  • Jung, Dae Yong;Seo, Dong Woo;Hwang, Jae Soon;Park, Sung Uk;Kim, Myung Il
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.10 no.10
    • /
    • pp.261-268
    • /
    • 2021
  • Cloud computing is a computing paradigm in which users pay for computing resources on a pay-as-you-go basis. In a cloud system, resources can be dynamically scaled up and down according to user demand, reducing the total cost of ownership. Modeling and Simulation (M&S) is a well-established method for obtaining engineering analyses and results through CAE software without physical experiments. M&S is generally applied in Finite Element Analysis (FEA), Computational Fluid Dynamics (CFD), Multibody Dynamics (MBD), and optimization. The M&S workflow is divided into pre-processing, analysis, and post-processing steps. Pre- and post-processing are GPU-intensive jobs consisting of 3D modeling in CAE software, whereas analysis is CPU- or GPU-intensive. Because a general-purpose desktop needs a long time to analyze complicated 3D models, CAE software requires a high-end CPU- and GPU-based workstation to run smoothly. In other words, executing M&S demands high-performance computing resources. To mitigate the cost of acquiring such resources, we propose the HEMOS-Cloud service, an integrated cloud and cluster computing environment that provides CAE software and computing resources to users in industry or academia who want to experience M&S. In this paper, the economic ripple effect of the HEMOS-Cloud service was analyzed using inter-industry (input-output) analysis. Using expert-guided coefficients, the estimated effects are a production inducement effect of KRW 7.4 billion, a value-added effect of KRW 4.1 billion, and an employment-inducing effect of 50 persons per KRW 1 billion.
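The production inducement figure above comes from input-output analysis, which propagates a final-demand change through the Leontief inverse. A minimal two-sector sketch is below; the coefficient matrix and demand values are invented for illustration and are not the paper's expert-guided coefficients.

```python
# Hypothetical 2-sector illustration of inter-industry (input-output)
# analysis. A[i][j] is the invented share of sector j's output bought
# from sector i; the paper's KRW figures use different, expert-guided data.

def leontief_inverse_2x2(a):
    """Invert (I - A) for a 2x2 technical-coefficient matrix A."""
    m = [[1 - a[0][0], -a[0][1]],
         [-a[1][0], 1 - a[1][1]]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det, m[0][0] / det]]

def production_inducement(a, final_demand):
    """Total output induced by a final-demand change: x = (I - A)^-1 f."""
    inv = leontief_inverse_2x2(a)
    return [sum(inv[i][j] * final_demand[j] for j in range(2))
            for i in range(2)]

A = [[0.20, 0.30],
     [0.10, 0.25]]
demand_change = [1.0, 0.5]  # e.g. KRW billions of new cloud-service demand
print(production_inducement(A, demand_change))
```

The induced output exceeds the direct demand change, which is the multiplier effect the abstract's "production inducement effect" quantifies.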

A Study on Metaverse Construction Based on 3D Spatial Information of Convergence Sensors using Unreal Engine 5 (언리얼 엔진 5를 활용한 융복합센서의 3D 공간정보기반 메타버스 구축 연구)

  • Oh, Seong-Jong;Kim, Dal-Joo;Lee, Yong-Chang
    • Journal of Cadastre & Land InformatiX
    • /
    • v.52 no.2
    • /
    • pp.171-187
    • /
    • 2022
  • Recently, demand for and development of non-face-to-face services have accelerated due to the COVID-19 pandemic, with the metaverse at the center of attention. Entering the era of the Fourth Industrial Revolution, the metaverse, a world beyond virtual and physical reality, combines various sensing technologies and 3D reconstruction technologies to provide information and services to users easily and quickly. In particular, thanks to the miniaturization and improved affordability of convergence sensors such as unmanned aerial vehicles (UAVs) capable of high-resolution imaging and high-precision LiDAR (Light Detection and Ranging) sensors, research on digital twins that create and simulate real-life counterparts is actively underway. In addition, game engines in the field of computer graphics are developing into metaverse engines by extending strong 3D graphics reconstruction and simulation based on dynamic operations. This study constructed a mirror-world metaverse that reflects real-world coordinate-based reality using Unreal Engine 5, a recently announced metaverse engine, together with accurate 3D spatial information from convergence sensors based on an unmanned aerial system (UAS) and LiDAR. Spatial information contents and simulations for users were then produced from various public data to verify reconstruction accuracy, confirming that a more realistic and highly usable metaverse could be built. Furthermore, when constructing a metaverse that users can access intuitively and easily through Unreal Engine, the utility and effectiveness of various contents could be confirmed through coordinate-based 3D spatial information with high reproducibility.

Utilization of Smart Farms in Open-field Agriculture Based on Digital Twin (디지털 트윈 기반 노지스마트팜 활용방안)

  • Kim, Sukgu
    • Proceedings of the Korean Society of Crop Science Conference
    • /
    • 2023.04a
    • /
    • pp.7-7
    • /
    • 2023
  • Currently, the main technologies of the various fourth-industrial-revolution fields are big data, the Internet of Things, artificial intelligence, blockchain, mixed reality (MR), and drones. In particular, the "digital twin," which has recently become a global technological trend, is a virtual model expressed identically in physical objects and in the computer. By creating and simulating a digital twin of software-virtualized assets instead of real physical assets, accurate information about the characteristics of real farming (current state, agricultural productivity, agricultural work scenarios, etc.) can be obtained. This study aims to streamline agricultural work through automatic water management, remote growth forecasting, drone control, and pest forecasting via the operation of an integrated control system, by constructing digital twin data on major open-field production areas and designing and building a smart farm complex. It also aims to disseminate digital environmental-control agriculture in Korea that can reduce labor and improve crop productivity while minimizing environmental load through the use of appropriate amounts of fertilizers and pesticides informed by big data analysis. These open-field agricultural technologies can reduce labor through digital farming and cultivation management, optimize water use and prevent soil pollution in preparation for climate change, and enable quantitative growth management of open-field crops by securing digital data on the national cultivation environment. They are also a way to directly implement carbon-neutral RED++ activities by improving agricultural productivity. Analysis and prediction of growth status from the acquired high-precision, high-definition image-based crop growth data are very effective for digital farm-work management.
The Southern Crop Department of the National Institute of Crop Science has conducted research and development on various types of open-field smart farms, such as subsurface drip irrigation and underground drainage. In particular, from this year, commercialization is underway in earnest through the establishment of smart farm facilities and technology distribution to agricultural technology complexes across the country. In this study, we describe a case of establishing an agricultural field that combines digital twin technology with open-field smart farm technology, and future utilization plans.


Methods for Integration of Documents using Hierarchical Structure based on the Formal Concept Analysis (FCA 기반 계층적 구조를 이용한 문서 통합 기법)

  • Kim, Tae-Hwan;Jeon, Ho-Cheol;Choi, Joong-Min
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.3
    • /
    • pp.63-77
    • /
    • 2011
  • The World Wide Web is a very large distributed digital information space. From its origins in 1991, the web has grown to encompass diverse information resources such as personal home pages, online digital libraries, and virtual museums. Some estimates suggest that the web currently includes over 500 billion pages in the deep web. The ability to search and retrieve information from the web efficiently and effectively is an enabling technology for realizing its full potential. With powerful workstations and parallel processing technology, efficiency is not a bottleneck; in fact, some existing search tools sift through gigabyte-size precompiled web indexes in a fraction of a second. But retrieval effectiveness is a different matter. Current search tools retrieve too many documents, of which only a small fraction are relevant to the user's query. Furthermore, the most relevant documents do not necessarily appear at the top of the query output order, and current search tools cannot retrieve documents related to a retrieved document from the gigantic mass of documents. The most important problem for many current search systems is to increase the quality of search: to provide related documents, or to keep the number of unrelated documents in the results as low as possible. To address this problem, CiteSeer proposed Autonomous Citation Indexing (ACI) of articles on the World Wide Web. A "citation index" indexes the links between articles that researchers make when they cite other articles. Citation indexes are very useful for a number of purposes, including literature search and analysis of the academic literature. In this scheme, references contained in academic articles are used to give credit to previous work in the literature and provide a link between the "citing" and "cited" articles; a citation index indexes the citations that an article makes, linking the article with the cited works.
Citation indexes were originally designed mainly for information retrieval. The citation links allow navigating the literature in unique ways: papers can be located independent of language and of the words in the title, keywords, or document. A citation index allows navigation backward in time (the list of cited articles) and forward in time (which subsequent articles cite the current article?). But CiteSeer cannot index links between articles that researchers do not make explicitly, since it only indexes the links created when researchers cite other articles, and for the same reason CiteSeer does not scale easily. All these problems motivate the design of a more effective search system. This paper presents a method that extracts a subject and predicate from each sentence in a document. Each document is converted into a tabular form in which the extracted predicates are checked against possible subjects and objects. We build a hierarchical graph of each document using this table and then integrate the graphs of the documents. Using the graph of all documents, we compute the area of each document relative to the integrated documents and mark the relations among documents by comparing these areas. We also propose a method for structural integration of documents that retrieves documents from the graph, making it easier for users to find information. We compared the performance of the proposed approaches with the Lucene search engine using ranking formulas. As a result, the F-measure is about 60%, which is about 15% better.
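The ~60% F-measure reported above is the standard harmonic mean of precision and recall. A minimal sketch follows; the retrieval counts are invented purely to reproduce a 60% score.

```python
# Standard F-measure used to evaluate retrieval quality; the counts
# below are illustrative, not the paper's experimental data.

def f_measure(relevant_retrieved, retrieved, relevant, beta=1.0):
    """F_beta = (1 + b^2) * P * R / (b^2 * P + R)."""
    precision = relevant_retrieved / retrieved
    recall = relevant_retrieved / relevant
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# e.g. 30 of 50 retrieved documents are relevant, out of 50 relevant total
print(round(f_measure(30, 50, 50), 2))  # 0.6, i.e. the ~60% reported above
```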

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.191-207
    • /
    • 2021
  • Mobile communications have evolved rapidly over the decades, mainly focusing on speed-ups from 2G to 5G to meet growing data demands. With the start of the 5G era, efforts are being made to provide customers with services such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our lives and industries as a whole. To deliver those services, reduced latency and high reliability, on top of high data rates, are critical for real-time services. Thus, 5G has paved the way for service delivery with a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10⁶ devices/㎢. In particular, in intelligent traffic control systems and services using vehicle-based Vehicle-to-X (V2X), such as traffic control, low delay and high reliability for real-time services are very important in addition to high data rates. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting their use indoors. It is therefore difficult to overcome these constraints with existing networks. The underlying centralized SDN also has limited capability to offer delay-sensitive services, because communication with many nodes overloads its processing. SDN, an architecture that separates control-plane signaling from data-plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major variable in delay.
Since SDNs with the usual centralized structure have difficulty meeting the desired delay level, the optimal size of SDNs for information processing should be studied. SDNs therefore need to be separated at a certain scale into a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of these networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In such SDN networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round-trip delay (RTD), and the data processing time of the SDN are highly correlated with the overall delay. Of these, the RTD is not a significant factor, because the link is fast enough to keep it under 1 ms, but the information change cycle and the SDN data processing time greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and, through simulation, analyze its correlation with the cell layer from which the vehicle should request relevant information according to the information flow. For the simulation, since the data rate of 5G is high enough, we assume that information supporting neighboring vehicles reaches the car without errors. Furthermore, we assumed 5G small cells of 50~250 m in radius and vehicle speeds of 30~200 km/h in order to examine the network architecture that minimizes the delay.
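A vehicle crossing a small cell sets an upper bound on how long cached cell-level information stays valid. A rough sketch using the abstract's parameter ranges (50~250 m radius, 30~200 km/h) is below; the diameter-crossing assumption is ours, for illustration only, not the paper's simulation model.

```python
# Worst-case cell dwell time: vehicle crosses a 5G small cell along its
# diameter. Parameter ranges follow the abstract; the crossing geometry
# is an assumption made for this sketch.

def dwell_time_s(cell_radius_m, speed_kmh):
    """Time (seconds) to cross a cell of the given radius at the given speed."""
    speed_ms = speed_kmh / 3.6  # km/h -> m/s
    return 2 * cell_radius_m / speed_ms

# Fastest car (200 km/h) in the smallest cell (50 m radius): the SDN's
# information change cycle plus processing time must fit inside this window.
print(round(dwell_time_s(50, 200), 2))   # seconds in the smallest cell
print(round(dwell_time_s(250, 30), 2))   # seconds in the largest, slowest case
```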

Clinical Usefulness of Implanted Fiducial Markers for Hypofractionated Radiotherapy of Prostate Cancer (전립선암의 소분할 방사선치료 시에 위치표지자 삽입의 유용성)

  • Choi, Young-Min;Ahn, Sung-Hwan;Lee, Hyung-Sik;Hur, Won-Joo;Yoon, Jin-Han;Kim, Tae-Hyo;Kim, Soo-Dong;Yun, Seong-Guk
    • Radiation Oncology Journal
    • /
    • v.29 no.2
    • /
    • pp.91-98
    • /
    • 2011
  • Purpose: To assess the usefulness of implanted fiducial markers in the setup of hypofractionated radiotherapy for prostate cancer patients by comparing a fiducial-marker-matched setup with a pelvic bone match. Materials and Methods: Four prostate cancer patients treated with definitive hypofractionated radiotherapy between September 2009 and August 2010 were enrolled in this study. Three gold fiducial markers were implanted into the prostate through the rectum under ultrasound guidance about a week before radiotherapy. Glycerin enemas were given prior to each radiotherapy planning CT and every radiotherapy session. Hypofractionated radiotherapy was planned for a total dose of 59.5 Gy in daily 3.5 Gy fractions using the Novalis system. Orthogonal kV X-rays were taken before radiotherapy. Treatment positions were adjusted according to the results of fusing the fiducial markers on the digitally reconstructed radiographs of the radiotherapy plan with those on the orthogonal kV X-rays. When the difference in coordinates from the fiducial marker fusion was less than 1 mm, the patient position was approved for radiotherapy. A virtual bone matching was carried out at the fiducial-marker-matched position, and the setup difference between the fiducial marker matching and the bone matching was then evaluated. Results: Three patients received the planned 17-fraction radiotherapy and the other underwent 16 fractions. The setup error of the fiducial marker matching was 0.94±0.62 mm (range, 0.09 to 3.01 mm; median, 0.81 mm), and the mean lateral, craniocaudal, and anteroposterior errors were 0.39±0.34 mm, 0.46±0.34 mm, and 0.57±0.59 mm, respectively.
The setup error of the pelvic bone matching was 3.15±2.03 mm (range, 0.25 to 8.23 mm; median, 2.95 mm), and the error in the craniocaudal direction (2.29±1.95 mm) was significantly larger than those in the anteroposterior (1.73±1.31 mm) and lateral (0.45±0.37 mm) directions (p<0.05). The incidences of setup differences over 3 mm and 5 mm among the fractions were 1.5% and 0% for the fiducial marker matching, respectively, and 49.3% and 17.9% for the pelvic bone matching, respectively. Conclusion: A more precise setup of hypofractionated radiotherapy for prostate cancer patients is feasible with implanted fiducial marker matching compared with pelvic bone matching. Therefore, a smaller margin expansion of the planning target volume produces less radiation exposure to adjacent normal tissues, which could ultimately make hypofractionated radiotherapy safer.
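The overall setup error above combines the lateral, craniocaudal, and anteroposterior components into a 3D displacement magnitude. A minimal sketch of that combination is below; note that applying it to the reported per-axis means understates the reported 0.94 mm overall mean, since the study averages per-fraction magnitudes rather than combining mean components.

```python
# 3D setup displacement magnitude from per-axis errors. Input values here
# are the per-axis mean errors quoted in the abstract (mm), used only to
# illustrate the combination formula.

import math

def vector_error_mm(lat, cc, ap):
    """Euclidean magnitude of a 3D setup displacement (mm)."""
    return math.sqrt(lat**2 + cc**2 + ap**2)

print(round(vector_error_mm(0.39, 0.46, 0.57), 2))
```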

Dose Planning of Forward Intensity Modulated Radiation Therapy for Nasopharyngeal Cancer using Compensating Filters (보상여과판을 이용한 비인강암의 전방위 강도변조 방사선치료계획)

  • Chu Sung Sil;Lee Sang-wook;Suh Chang Ok;Kim Gwi Eon
    • Radiation Oncology Journal
    • /
    • v.19 no.1
    • /
    • pp.53-65
    • /
    • 2001
  • Purpose: To improve local control in patients with nasopharyngeal cancer, we implemented 3-D conformal radiotherapy and forward intensity modulated radiation therapy (IMRT) using compensating filters. Three-dimensional conformal radiotherapy with intensity modulation is a new modality for cancer treatment. We designed 3-D treatment plans with a 3-D RTP (radiation treatment planning) system and evaluated the dose distributions with tumor control probability (TCP) and normal tissue complication probability (NTCP). Materials and Methods: We developed a treatment plan consisting of four intensity-modulated photon fields delivered through compensating filters, with block transmission for critical organs. We acquired full CT imaging of the head and neck in 3 mm slices, delineated the PTV (planning target volume) and surrounding critical organs, and reconstructed the 3D images on the computer. In the planning stage, the planner specifies the number of beams and their directions, including non-coplanar ones, the prescribed doses for the target volume, the permissible doses for normal organs, and the overlap regions. We designed compensating filters according to the tissue deficit and the shape of the PTV volume, with dose weighting for each field to obtain an adequate dose distribution and shielding-block weighting for transmission. Therapeutic gains were evaluated with numerical equations for tumor control probability and normal tissue complication probability. The TCP and NTCP from the DVH (dose-volume histogram) were compared between 3-D conformal radiotherapy and forward intensity-modulated conformal radiotherapy with compensator and block weighting. Optimization of the weight distribution was performed iteratively from an initial guess or an even weight distribution.
Results: Using a four-field IMRT plan, we customized the dose distribution to conform to and deliver a sufficient dose to the PTV. In addition, in the overlap regions between the PTV and the normal organs (spinal cord, salivary gland, pituitary, optic nerves), the dose was kept within the tolerance of the respective organs. Using the compensating filters, we obtained a sufficient TCP value and an acceptable NTCP. Quality assurance checks showed acceptable agreement between the planned and the implemented MLC (multi-leaf collimator). Conclusion: IMRT provides a powerful and efficient solution for complex planning problems where the surrounding normal tissues place severe constraints on the prescription dose. The intensity-modulated fields can be efficaciously and accurately delivered using compensating filters.
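TCP scoring of the kind used to compare the plans above is often done with a Poisson cell-kill model. A hedged sketch follows; the model form and all parameters are our assumptions for illustration, not the numerical equations the paper itself uses.

```python
# Poisson TCP model: TCP = exp(-N * S(D)), where N is the number of
# clonogenic cells and S(D) = exp(-alpha * D) is a simple linear
# cell-survival term. Parameters below are invented for the sketch.

import math

def poisson_tcp(n_clonogens, alpha, dose_gy):
    """Probability that no clonogenic cell survives the delivered dose."""
    surviving = n_clonogens * math.exp(-alpha * dose_gy)
    return math.exp(-surviving)

# Invented parameters: 1e7 clonogenic cells, alpha = 0.3 /Gy, 70 Gy total
print(round(poisson_tcp(1e7, 0.3, 70.0), 3))
```

Plan comparison then reduces to evaluating such a model (and an NTCP counterpart) on each plan's DVH and preferring the plan with higher TCP at acceptable NTCP.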
