• Title/Summary/Keyword: automatic modeling

A Process Programming Language and Its Runtime Support System for the SEED Process-centered Software Engineering Environment (SEED 프로세스 중심 소프트웨어 개발 환경을 위한 프로세스 프로그래밍 언어 및 수행지원 시스템)

  • Kim, Yeong-Gon;Choe, Hyeok-Jae;Lee, Myeong-Jun;Im, Chae-Deok;Han, U-Yong
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.5 no.6
    • /
    • pp.727-737
    • /
    • 1999
  • Process-centered Software Engineering Environments (PSEEs) support software development activities through the enactment of process models, providing a variety of services such as supplying information to software developers, automating routine tasks, invoking and controlling software development tools, and enforcing mandatory rules and practices. SEED (Software Engineering Environment for Development) is a PSEE developed by ETRI for effective software development and for controlling the enactment of process models. In this paper, we describe the implementation of the SimFlex process programming language used to design process models in SEED, and of its runtime support system, the SEED Engine. SimFlex is a process programming language that describes process models with simple language constructs, and it can be embedded into other PSEEs through appropriate customization. The SimFlex compiler analyzes process models described in SimFlex, checks them for errors, and produces intermediate process models referenced by the SEED Engine. Using the intermediate process models, the SEED Engine provides automatic enactment of the process models described in SimFlex, as well as useful information for agents linked to the external monitoring tool. With the help of the SimFlex language and its runtime support system, we can reduce the cost and time of modeling software processes and manage projects conveniently, producing high-quality software products.

Development of an Image Processing System for the Large Size High Resolution Satellite Images (대용량 고해상 위성영상처리 시스템 개발)

  • 김경옥;양영규;안충현
    • Korean Journal of Remote Sensing
    • /
    • v.14 no.4
    • /
    • pp.376-391
    • /
    • 1998
  • Images from satellites will have 1- to 3-meter ground resolution and will be very useful for analyzing the current status of the earth's surface. An image processing system named GeoWatch, with more intelligent image processing algorithms, has been designed and implemented to support detailed analysis of the land surface using high-resolution satellite imagery. GeoWatch is a valuable tool for satellite image processing tasks such as digitizing, geometric correction using ground control points, interactive enhancement, various transforms, arithmetic operations, and calculation of vegetation indices. It can be used for investigations such as change detection, land cover classification, capacity estimation of industrial complexes, and urban information extraction, using more intelligent analysis methods with a variety of visual techniques. The strong points of this system are a flexible algorithm-save method for efficient handling of large images (e.g., full scenes), automatic menu generation, and a powerful visual programming environment. Most existing image processing systems use general graphic user interfaces; in this paper we adopted a visual programming language for remotely sensed image processing for its powerful programmability and ease of use. The system is an integrated raster/vector analysis system equipped with many useful functions such as vector overlay, flight simulation, 3D display, and object modeling techniques. In addition to the modules for image and digital signal processing, the system provides many other utilities, such as a toolbox and an interactive image editor. This paper also presents several cases of image analysis with AI (artificial intelligence) techniques and the design concept of the visual programming environment.
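Among the operations listed above, vegetation-index calculation is the most self-contained. As an illustration only (the abstract does not specify which indices GeoWatch computes), the most common one, NDVI, can be sketched in plain Python over band rasters:

```python
def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index for one pixel pair.

    NDVI = (NIR - RED) / (NIR + RED), ranging from about -1 (water,
    clouds) to +1 (dense vegetation). eps guards against division by zero.
    """
    return (nir - red) / (nir + red + eps)

def ndvi_band(nir_band, red_band):
    """Apply NDVI pixel-wise to two equally sized band rasters (lists of rows)."""
    return [[ndvi(n, r) for n, r in zip(nrow, rrow)]
            for nrow, rrow in zip(nir_band, red_band)]
```

A production system would run this over NumPy arrays or GDAL bands; the list-of-lists form just keeps the sketch dependency-free.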

An Ontology - based Transformation Method from Feature Model to Class Model (온톨로지 기반 Feature 모델에서 Class 모델로의 변환 기법)

  • Kim, Dong-Ri;Song, Chee-Yang;Kang, Dong-Su;Baik, Doo-Kwon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.13 no.5
    • /
    • pp.53-67
    • /
    • 2008
  • At present, for the reuse of similar domains between the feature model and the class model, research is being conducted on transformation at the model level and on transformation between the two models using an ontology, but consistent transformation through a metamodel has not been achieved. The modeling factors targeted for transformation are insufficient, and in particular no automatic transformation algorithm or supporting tools are provided, so reuse of domains between models has not taken hold. This paper proposes a method of transformation from a feature model to a class model using an ontology at the metamodel level. For this, it re-establishes the metamodels of the feature model, the class model, and the ontology, and defines the properties of the modeling factors for each metamodel. Based on these properties, it defines profiles of transformation rules between the feature model and the ontology, and between the ontology and the class model, using set theory and propositional calculus. For automation of the transformation, it provides a transformation algorithm and supporting tools. Using the proposed transformation rules and tools, a real application is demonstrated through an Electronic Approval System. Through this, it is possible to transform an existing feature model into a class model and reuse it in a different development method. In particular, the ontology removes the ambiguity of semantic transformation, and the automated transformation maintains consistency between models.
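The paper defines its transformation rules with set theory and propositional calculus over metamodels; the following is only a loose, hypothetical sketch of the rule-based idea (mandatory features become classes, parent-child feature relations become associations), not the authors' actual profiles, and it skips the ontology layer entirely:

```python
# Hypothetical sketch of a rule-based feature-to-class transformation.
# Rule 1: each mandatory feature becomes a class.
# Rule 2: a parent-child relation between mandatory features becomes
#         an association on the parent's class.

def features_to_classes(features):
    """features: list of dicts with keys 'name', 'parent', 'mandatory'."""
    classes = {f["name"]: {"name": f["name"], "associations": []}
               for f in features if f["mandatory"]}
    for f in features:
        parent = f.get("parent")
        if f["mandatory"] and parent in classes:
            classes[parent]["associations"].append(f["name"])
    return classes
```

An optional feature could instead map to a subclass or a nullable association; the real method resolves such semantic choices through the ontology.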

Finite Element Analysis (FEM) of the Fixed Position of the Velcro Band for the 3D-Printed Wrist Brace Made Using the DICOM File (CT Dicom 파일을 이용하여 제작한 3D Print 손목보호대용 Velcro band 고정위치의 유한요소해석(FEM))

  • Choi, Hyeun-Woo;Seo, An-Na;Lee, Jong-Min
    • Journal of the Korean Society of Radiology
    • /
    • v.15 no.5
    • /
    • pp.585-590
    • /
    • 2021
  • Wrist braces are used for patients with wrist trauma. Recently, many studies have sought to manufacture custom wrist braces using 3D printing technology. Such 3D-printed customized orthoses have the advantage of reflecting individual factors, such as each person's distinct anatomy, while securing breathability. In this paper, the stress on the orthosis induced by the number and position of the Velcro bands, which must be considered when manufacturing a 3D-printed custom wrist brace, was analyzed. For the customized orthosis, 3D modeling of the bone and skin regions was performed with automatic design software (Reconeasy 3D, Seeann Solution) based on CT images. Based on the 3D skin region, a wrist orthosis design suited to each treatment purpose was applied. For elasticity of the brace, the wrist brace was manufactured on an FDM-type 3D printer using TPU material. To evaluate the effect of the number and position of the Velcro bands on the custom 3D-printed wrist brace, the stress distribution of the brace was analyzed by the finite element method (FEM). The finite element analysis performed in this study confirmed the stress distribution of the orthosis and supported choosing the number and position of Velcro bands for orthosis production. These results will help provide quality treatment to patients.

Improved Performance of Image Semantic Segmentation using NASNet (NASNet을 이용한 이미지 시맨틱 분할 성능 개선)

  • Kim, Hyoung Seok;Yoo, Kee-Youn;Kim, Lae Hyun
    • Korean Chemical Engineering Research
    • /
    • v.57 no.2
    • /
    • pp.274-282
    • /
    • 2019
  • In recent years, big data analysis has expanded to include automatic control through reinforcement learning as well as prediction through modeling. Research on the utilization of image data is actively carried out in various industrial fields such as the chemical, manufacturing, agriculture, and bio industries. In this paper, we applied NASNet, an AutoML reinforcement learning algorithm, to the DeepU-Net neural network, a modification of U-Net, to improve image semantic segmentation performance. We used BRATS2015 MRI data for performance verification. Simulation results show that DeepU-Net outperforms the U-Net neural network. To improve segmentation performance further, the dropout typically applied to neural networks was removed, and the numbers of kernels and filters obtained through reinforcement learning in DeepU-Net were selected as hyperparameters of the neural network. The results show that the training accuracy is 0.5% and the validation accuracy 0.3% higher than DeepU-Net. The results of this study can be applied to various fields such as MRI brain imaging diagnosis, thermal imaging camera anomaly diagnosis, nondestructive inspection, chemical leakage monitoring, and forest fire monitoring through CCTV.
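The abstract reports accuracy figures; segmentation work of this kind is also commonly scored with overlap metrics such as IoU and the Dice coefficient (not necessarily the metrics this paper used). A minimal sketch over flat binary masks:

```python
def iou(pred, truth):
    """Intersection-over-Union for two binary masks (flat lists of 0/1)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0  # both empty: perfect match

def dice(pred, truth):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0
```

For multi-class segmentation such as BRATS tumor regions, these are typically computed per class and averaged.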

Image-Based Automatic Bridge Component Classification Using Deep Learning (딥러닝을 활용한 이미지 기반 교량 구성요소 자동분류 네트워크 개발)

  • Cho, Munwon;Lee, Jae Hyuk;Ryu, Young-Moo;Park, Jeongjun;Yoon, Hyungchul
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.41 no.6
    • /
    • pp.751-760
    • /
    • 2021
  • Most bridges in Korea are over 20 years old, and many problems linked to their deterioration are being reported. The current practice for bridge inspection mainly depends on expert evaluation, which can be subjective. Recent studies have introduced data-driven methods using building information modeling, which can be more efficient and objective, but these methods require manual procedures that consume time and money. To overcome this, this study developed an image-based automatic bridge component classification network to reduce the time and cost required for converting the visual information of bridges to a digital model. The proposed method comprises two convolutional neural networks. The first network estimates the type of the bridge based on the superstructure, and the second network classifies the bridge components. In a validation test, the proposed system automatically classified the components of 461 bridge images with 96.6 % accuracy. The proposed approach is expected to contribute to current bridge maintenance practice.
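The two-network cascade described above (bridge type first, then components) amounts to routing an image to a type-specific classifier. A minimal sketch with stand-in callables in place of the actual CNNs (all names here are hypothetical):

```python
# Two-stage cascade sketch: the first model predicts the bridge type,
# then a model trained for that type classifies the component.
# The "models" are plain callables standing in for trained CNNs.

def cascade(image, type_model, component_models):
    """Return (bridge_type, component) for one image."""
    bridge_type = type_model(image)
    return bridge_type, component_models[bridge_type](image)
```

The design choice worth noting is that per-type component classifiers only have to separate the components that actually occur for that superstructure, which is what makes the two-stage split attractive.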

Design and Implementation of Interface System for Swarm USVs Simulation Based on Hybrid Mission Planning (하이브리드형 임무계획을 고려한 군집 무인수상정 시뮬레이션 시스템의 연동 인터페이스 설계 및 구현)

  • Park, Hee-Mun;Joo, Hak-Jong;Seo, Kyung-Min;Choi, Young Kyu
    • Journal of the Korea Society for Simulation
    • /
    • v.31 no.3
    • /
    • pp.1-10
    • /
    • 2022
  • Defense fields widely operate unmanned systems to lower vulnerability and enhance combat effectiveness. In the navy, swarm unmanned surface vehicles (USVs) form a cluster within communication range, share situational awareness information among themselves, and cooperate to conduct military missions. This paper proposes an interface system, the Interface Adapter System (IAS), to achieve inter-USV and intra-USV interoperability. We focus on the mission planning subsystem (MPS), the core subsystem of the USV for deciding courses of action such as automatic path generation and weapon assignment. The central role of the proposed system is to exchange interface data between MPSs and other subsystems in real time. To this end, we analyzed the operational requirements of the MPS and identified the interface messages, then developed the IAS using distributed real-time middleware. As experiments, we conducted several integration tests in a swarm-USV simulation environment and measured the delay time and loss ratio of interface messages. We expect the proposed IAS to successfully bridge the mission planning system and the other subsystems.

Analysis of Research Trends in Deep Learning-Based Video Captioning (딥러닝 기반 비디오 캡셔닝의 연구동향 분석)

  • Lyu Zhi;Eunju Lee;Youngsoo Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.13 no.1
    • /
    • pp.35-49
    • /
    • 2024
  • Video captioning technology, a significant outcome of the integration of computer vision and natural language processing, has emerged as a key research direction in artificial intelligence. It aims at automatic understanding and linguistic expression of video content, enabling computers to transform the visual information in videos into textual form. This paper analyzes the research trends in deep learning-based video captioning and categorizes them into four main groups: CNN-RNN-based, RNN-RNN-based, multimodal-based, and Transformer-based models, explaining the concept of each type of model and discussing its features, pros, and cons. The paper also lists the datasets and performance evaluation methods commonly used in the video captioning field. The datasets cover diverse domains and scenarios, offering extensive resources for training and validating video captioning models. The discussion of performance evaluation covers the major evaluation indicators and provides practical references for researchers to evaluate model performance from various angles. Finally, as future research tasks for video captioning, major challenges that require continued improvement are identified, such as maintaining temporal consistency and accurately describing dynamic scenes, which increase the complexity of real-world applications, and new tasks to be studied are presented, such as temporal relationship modeling and multimodal data integration.
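Among the evaluation indicators such surveys refer to, n-gram overlap metrics like BLEU are the most common for captioning. A minimal sketch of clipped unigram precision (BLEU-1 without the brevity penalty, and for a single reference):

```python
from collections import Counter

def bleu1(candidate, reference):
    """Clipped unigram precision between two token lists.

    Each candidate word is credited at most as many times as it
    appears in the reference (the "clipping" in BLEU).
    """
    cand, ref = Counter(candidate), Counter(reference)
    clipped = sum(min(n, ref[w]) for w, n in cand.items())
    return clipped / max(len(candidate), 1)
```

Full BLEU also averages higher-order n-gram precisions and applies a brevity penalty; captioning benchmarks additionally report METEOR and CIDEr, which weight content words differently.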

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

  • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.109-122
    • /
    • 2014
  • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, greatly influencing society. This is an unmatched phenomenon in history, and we now live in the age of big data. SNS data satisfies the defining conditions of big data: the amount of data (volume), the data input and output speed (velocity), and the variety of data types (variety). If the trend of an issue can be discovered in SNS big data, this information can serve as an important new source for the creation of value, because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the needs of analyzing SNS big data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides four functions: (1) provide the topic keyword set corresponding to the daily ranking; (2) visualize the daily time-series graph of a topic over a month; (3) show the importance of a topic through a treemap based on a scoring system and frequency; (4) visualize the daily time-series graph of keywords matching a search. The present study analyzes the big data generated by SNS in real time. SNS big data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, to process various unrefined forms of unstructured data. Such analysis also requires the latest big data technology to rapidly process a large amount of real-time data, such as the Hadoop distributed system or NoSQL, an alternative to relational databases. We built TITS on Hadoop to optimize big data processing, because Hadoop is designed to scale from single-node computing to thousands of machines. Furthermore, we use MongoDB, a NoSQL database: an open-source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables, and its most important goals are data accessibility and data processing performance. In the age of big data, visualization is attractive to the big data community because it helps analysts examine data easily and clearly. TITS therefore uses the d3.js library as a visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to arbitrary data; interaction with the data is easy, and it is useful for managing a real-time data stream with smooth animation. In addition, TITS uses Bootstrap, a set of pre-configured plug-in style sheets and JavaScript libraries, to build the web system. The TITS graphical user interface (GUI) is designed with these libraries and is capable of detecting issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the effectiveness of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique used in various research areas, including Library and Information Science (LIS), and confirm the utility of storytelling and time-series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets in Korea during March 2013.
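The daily topic-keyword function can be illustrated with a much-simplified sketch: frequency ranking after stop-word removal. The real TITS pipeline applies noun extraction and topic modeling rather than raw counts, and the stop-word list here is purely illustrative:

```python
from collections import Counter

# Illustrative stop-word list; a real system would use a curated
# Korean/English list and proper morphological analysis.
STOP = {"the", "a", "is", "of", "to", "and", "rt"}

def daily_keywords(tweets, top_n=3):
    """Rank one day's tweet vocabulary by frequency after stop-word removal.

    A stand-in for the topic-modeling step: returns the top_n most
    frequent non-stop-word tokens.
    """
    counts = Counter(w for t in tweets for w in t.lower().split()
                     if w not in STOP)
    return [w for w, _ in counts.most_common(top_n)]
```

Swapping this counting step for a topic model (e.g., LDA over daily tweet batches) is what turns keyword lists into the topic rankings TITS visualizes.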

Analysis of the Effect of Objective Functions on Hydrologic Model Calibration and Simulation (목적함수에 따른 매개변수 추정 및 수문모형 정확도 비교·분석)

  • Lee, Gi Ha;Yeon, Min Ho;Kim, Young Hun;Jung, Sung Ho
    • Journal of Korean Society of Disaster and Security
    • /
    • v.15 no.1
    • /
    • pp.1-12
    • /
    • 2022
  • An automatic optimization technique is used to estimate the optimal parameters of a hydrologic model, and different hydrologic responses can result depending on the objective function. In this study, the parameters of an event-based rainfall-runoff model were estimated using various objective functions, the reproducibility of the hydrograph under each objective function was evaluated, and appropriate objective functions were proposed. As the rainfall-runoff model, the storage function model (SFM), a lumped hydrologic model used for runoff simulation in the current Korean flood forecasting system, was selected. To evaluate hydrograph reproducibility for each objective function, 9 rainfall events were selected for the Cheoncheon basin, upstream of Yongdam Dam, and 7 widely used objective functions were selected for parameter estimation of the SFM for each rainfall event. The reproducibility of the hydrographs simulated with the optimal parameter sets from the different objective functions was then analyzed. As a result, RMSE, NSE, and RSR, which include a squared-error term in the objective function, showed the highest accuracy for all rainfall events except Event 7. PBIAS and VE, which include an error term relative to the observed flow, also showed relatively stable hydrograph reproducibility. However, MIA, which adjusts parameters sensitive to high flow and low flow simultaneously, showed very low hydrograph reproducibility.
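The objective functions compared above are standard goodness-of-fit measures; four of them (RMSE, NSE, PBIAS, RSR) have simple closed forms and can be sketched directly. The definitions follow common hydrology usage, not necessarily the paper's exact formulations:

```python
def rmse(obs, sim):
    """Root mean square error between observed and simulated flows."""
    n = len(obs)
    return (sum((o - s) ** 2 for o, s in zip(obs, sim)) / n) ** 0.5

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations.

    1.0 is a perfect fit; 0.0 means no better than the observed mean.
    """
    mean_o = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    var = sum((o - mean_o) ** 2 for o in obs)
    return 1 - sse / var

def pbias(obs, sim):
    """Percent bias: positive values indicate underestimation."""
    return 100 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

def rsr(obs, sim):
    """RMSE standardized by the standard deviation of observations."""
    mean_o = sum(obs) / len(obs)
    sd = (sum((o - mean_o) ** 2 for o in obs) / len(obs)) ** 0.5
    return rmse(obs, sim) / sd
```

Because RMSE, NSE, and RSR all square the errors, they weight flood peaks heavily, which is consistent with their strong performance on the event hydrographs reported above.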