• Title/Summary/Keyword: computing model


A Basic Guide to Network Simulation Using OMNeT++ (OMNeT++을 이용한 네트워크 시뮬레이션 기초 가이드)

  • Sooyeon Park
    • Journal of Internet Computing and Services
    • /
    • v.25 no.4
    • /
    • pp.1-6
    • /
    • 2024
• OMNeT++ (Objective Modular Network Testbed in C++) is an extensible, modular C++ simulation library and framework for building network simulators. OMNeT++ provides independently developed simulation models for various fields, including sensor networks and Internet protocols, enabling researchers to use the tools and features required for their desired simulations. OMNeT++ uses the NED (Network Description) language to define nodes and network topologies, and the creation and behavior of the defined network objects are implemented in C++. Moreover, the INET framework is an open-source model library for the OMNeT++ simulation environment that contains models of various networking protocols and components, making it convenient for designing and validating new network protocols. This paper explains the concepts of OMNeT++ and the procedures for network simulation using the INET framework, to assist novice researchers in modeling and analyzing various network scenarios.
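The abstract's description of NED can be made concrete with a minimal, hypothetical sketch of a two-node network; the module names (TxNode, RxNode) and the channel parameter are illustrative only, not taken from the paper or the INET framework:

```ned
// Hypothetical minimal NED sketch: two simple modules joined by a
// delayed connection. Names and parameters are illustrative only.
simple TxNode
{
    gates:
        output out;
}

simple RxNode
{
    gates:
        input in;
}

network TwoNodeNetwork
{
    submodules:
        tx: TxNode;
        rx: RxNode;
    connections:
        tx.out --> { delay = 10ms; } --> rx.in;
}
```

The behavior of each simple module (message creation, handling) would then be implemented in a matching C++ class, as the abstract notes.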

A Lifelog Management System Based on the Relational Data Model and its Applications (관계 데이터 모델 기반 라이프로그 관리 시스템과 그 응용)

  • Song, In-Chul;Lee, Yu-Won;Kim, Hyeon-Gyu;Kim, Hang-Kyu;Haam, Deok-Min;Kim, Myoung-Ho
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.15 no.9
    • /
    • pp.637-648
    • /
    • 2009
• As the cost of disks decreases, PCs are soon expected to be equipped with disks of 1TB or more. Assuming that a single person generates 1GB of data per month, 1TB is enough to store data for the entire lifetime of a person. This has led to the growth of research on lifelog management, which manages what people see and listen to in everyday life. Although many different lifelog management systems have been proposed, including those based on the relational data model, on ontology, and on file systems, each has advantages and disadvantages: those based on the relational data model provide good query processing performance but do not support complex queries properly; those based on ontology handle more complex queries but their performance is not satisfactory; those based on file systems support only keyword queries. Moreover, these systems lack support for lifelog group management and do not provide a convenient user interface for modifying and adding tags (metadata) to lifelogs for effective lifelog search. To address these problems, we propose a lifelog management system based on the relational data model. The proposed system models lifelogs using the relational data model and transforms queries on lifelogs into SQL statements, which results in good query processing performance. It also supports a simplified relationship query that finds a lifelog based on other lifelogs directly related to it, to overcome the disadvantage of not supporting complex queries properly. In addition, the proposed system supports the management of lifelog groups by providing ways to create, edit, search, play, and share them. Finally, it is equipped with a tagging tool that helps the user modify and add tags conveniently through the recommendation of various tags. This paper describes the design and implementation of the proposed system and its various applications.
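As a rough illustration of the approach the abstract describes, modeling lifelogs relationally and translating lifelog queries into SQL, here is a minimal sketch using Python's sqlite3; the schema, table names, and sample data are invented, not the paper's actual design:

```python
import sqlite3

# Hypothetical relational lifelog schema; tables and columns are
# illustrative, not the paper's actual design.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE lifelog (
        id INTEGER PRIMARY KEY,
        kind TEXT,          -- e.g. 'photo', 'music', 'document'
        path TEXT,
        created_at TEXT,
        location TEXT
    )""")
conn.execute("""
    CREATE TABLE tag (
        lifelog_id INTEGER REFERENCES lifelog(id),
        name TEXT
    )""")

conn.executemany(
    "INSERT INTO lifelog VALUES (?, ?, ?, ?, ?)",
    [(1, "photo", "/img/0001.jpg", "2009-03-14", "Daejeon"),
     (2, "music", "/mp3/song.mp3", "2009-03-14", "Daejeon")])
conn.executemany("INSERT INTO tag VALUES (?, ?)",
                 [(1, "conference"), (2, "conference")])

# A lifelog query ("photos tagged 'conference'") translated to SQL:
rows = conn.execute("""
    SELECT l.path FROM lifelog l
    JOIN tag t ON t.lifelog_id = l.id
    WHERE l.kind = 'photo' AND t.name = 'conference'
""").fetchall()
print(rows)  # [('/img/0001.jpg',)]
```

Because the query compiles to a plain SQL join, it benefits from the relational engine's optimizer, which is the performance advantage the abstract claims for this design.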

Regeneration of a defective Railroad Surface for defect detection with Deep Convolution Neural Networks (Deep Convolution Neural Networks 이용하여 결함 검출을 위한 결함이 있는 철도선로표면 디지털영상 재 생성)

  • Kim, Hyeonho;Han, Seokmin
    • Journal of Internet Computing and Services
    • /
    • v.21 no.6
    • /
    • pp.23-31
    • /
    • 2020
• This study generates varied images of railroad surfaces with random defects as training data to improve defect detection. Defects on railroad surfaces are caused by factors such as friction between track-binding devices and adjacent tracks, and can lead to accidents such as broken rails, so railroad maintenance for defects is necessary. Accordingly, various studies on defect detection and inspection that apply image processing or machine learning to railway surface images have been conducted to automate railroad inspection and reduce maintenance costs. In general, the performance of image-processing analysis and machine learning is affected by the quantity and quality of data. For this reason, some studies require dedicated devices or vehicles that acquire images of the track surface at regular intervals to build a database of varied railway surface images. In contrast, to reduce the operating cost of image acquisition, this study constructs a 'Defective Railroad Surface Regeneration Model' by applying methods presented in related studies on Generative Adversarial Networks (GANs), aiming to detect defects on railroad surfaces even without a dedicated database. The model learns to generate railroad surfaces by combining different railroad surface textures with the original surface, taking the ground truth of the railroad defects into account. The generated railroad surface images were used as training data for a defect detection network based on a Fully Convolutional Network (FCN). To validate its performance, we clustered and divided the railroad data into three subsets: one of original railroad texture images and the remaining two of other railroad surface texture images. In the first experiment, only original texture images were used as the training set for the defect detection model. In the second, we trained on generated images produced by combining the original images with a few railroad textures from the other images. Each defect detection model was evaluated in terms of intersection over union (IoU) and F1-score against the ground truths. As a result, the scores increased by about 10~15% when the generated images were used, compared to using only the original images. This shows that defects can be detected using existing data and a few different texture images, even for railroad surfaces for which no dedicated training database has been constructed.
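The evaluation measures named above can be illustrated with a small, self-contained sketch of pixelwise IoU and F1-score between a predicted defect mask and a ground truth, assuming binary masks flattened to 0/1 lists (this is standard metric arithmetic, not the paper's evaluation code):

```python
# Minimal sketch: pixelwise IoU and F1 between a predicted defect mask
# and a ground-truth mask, both flattened to 0/1 lists. Illustrative only.
def iou_f1(pred, truth):
    tp = sum(p and t for p, t in zip(pred, truth))          # true positives
    fp = sum(p and not t for p, t in zip(pred, truth))      # false positives
    fn = sum(t and not p for p, t in zip(pred, truth))      # false negatives
    union = tp + fp + fn
    iou = tp / union if union else 1.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return iou, f1

pred  = [1, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 1, 1, 0]
iou, f1 = iou_f1(pred, truth)
print(round(iou, 3), round(f1, 3))  # tp=2, fp=1, fn=1 -> IoU=0.5, F1=2/3
```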

Multi-Variate Tabular Data Processing and Visualization Scheme for Machine Learning based Analysis: A Case Study using Titanic Dataset (기계 학습 기반 분석을 위한 다변량 정형 데이터 처리 및 시각화 방법: Titanic 데이터셋 적용 사례 연구)

  • Juhyoung Sung;Kiwon Kwon;Kyoungwon Park;Byoungchul Song
    • Journal of Internet Computing and Services
    • /
    • v.25 no.4
    • /
    • pp.121-130
    • /
    • 2024
• As information and communication technology (ICT) improves rapidly, the types and amount of available data also increase. Although data analysis, including statistics, is essential for utilizing this large amount of data, there are inevitable limits to processing varied and complex data with conventional methods. Meanwhile, with enhanced computational performance and growing demand for autonomous systems, there have been many attempts to apply machine learning (ML) in various fields. In particular, processing the data for model input and designing the model to solve the objective function are critical to achieving good model performance. Data processing methods for different data types and properties have been presented in many studies, and ML performance varies greatly depending on the method. Nevertheless, it is difficult to decide which data processing method to use, since the types and characteristics of data have become more diverse. In particular, multi-variate data processing is essential for solving non-linear problems with ML. In this paper, we present a multi-variate tabular data processing scheme for ML-aided data analysis using the Titanic dataset from Kaggle, which includes various kinds of data. We present methods such as input variable filtering based on statistical analysis and normalization according to data properties. In addition, we analyze the data structure using visualization. Lastly, we design an ML model, train it with the proposed multi-variate data processing, and analyze the trained model's passenger survival prediction performance. We expect that the proposed multi-variate data processing and visualization can be extended to various environments for ML-based analysis.
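Two typical preprocessing steps of the kind described above, missing-value handling and normalization, can be sketched on a toy slice of Titanic-like rows; the columns and values are illustrative, and the paper's actual pipeline may differ:

```python
# Sketch of two common tabular preprocessing steps on Titanic-like rows:
# mean imputation of missing values, then min-max normalization of a
# numeric column. Column handling is illustrative, not the paper's code.
rows = [
    {"Age": 22.0, "Fare": 7.25,  "Survived": 0},
    {"Age": 38.0, "Fare": 71.28, "Survived": 1},
    {"Age": None, "Fare": 8.05,  "Survived": 0},  # missing Age
    {"Age": 35.0, "Fare": 53.10, "Survived": 1},
]

# 1) Mean imputation for missing Age values.
ages = [r["Age"] for r in rows if r["Age"] is not None]
mean_age = sum(ages) / len(ages)
for r in rows:
    if r["Age"] is None:
        r["Age"] = mean_age

# 2) Min-max normalization of Age into [0, 1].
lo = min(r["Age"] for r in rows)
hi = max(r["Age"] for r in rows)
for r in rows:
    r["Age"] = (r["Age"] - lo) / (hi - lo)

print([round(r["Age"], 3) for r in rows])
```

Scaling like this matters because many ML models are sensitive to the relative magnitudes of input variables; which transform to apply depends on each column's distribution, which is the decision the paper's statistical analysis and visualization support.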

Application of Two-Dimensional Boundary Condition to Three-Dimensional Magnetotelluric Modeling (3차원 MT 탐사 모델링에서 2차원 경계조건의 적용)

  • Han, Nu-Ree;Nam, Myung-Jin;Kim, Hee-Joon;Lee, Tae-Jong;Song, Yoon-Ho;Suh, Jung-Hee
    • Geophysics and Geophysical Exploration
    • /
    • v.11 no.4
    • /
    • pp.318-325
    • /
    • 2008
• Assigning an exact boundary condition is of great importance in three-dimensional (3D) magnetotelluric (MT) modeling, in which no source is present in the computing domain. This paper presents a 3D MT modeling algorithm that applies a Dirichlet condition for a 2D host. To compute boundary values for a model with a 2D host, an additional 2D MT modeling run is required. The 2D modeling consists of transverse magnetic and transverse electric modes, which are determined from the relationship between the polarization of the plane wave and the strike direction of the 2D structure. Since the 3D MT modeling algorithm solves Maxwell's equations for electric fields using a finite-difference method on a staggered grid that defines electric fields along cell edges, the electric fields in the 2D modeling are calculated at the same locations. The algorithm developed in this study produces reliable MT responses for a 3D model with a 2D host.
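For background, the standard MT relations that such modeling codes evaluate (textbook material, not taken from this abstract) express the surface impedance and the apparent resistivity and phase derived from it:

```latex
% Standard magnetotelluric relations, assumed as background:
Z_{xy} = \frac{E_x}{H_y}, \qquad
\rho_a = \frac{1}{\omega \mu_0}\,\lvert Z_{xy} \rvert^{2}, \qquad
\phi = \arg Z_{xy}
```

Here $E_x$ and $H_y$ are the horizontal electric and magnetic fields at the surface, $\omega$ the angular frequency, and $\mu_0$ the free-space permeability; the "MT responses" the abstract refers to are quantities of this kind.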

Performance Evaluation of Workstation System within ATM Integrated Service Switching System using Mean Value Analysis Algorithm (MVA 알고리즘을 이용한 ATM 기반 통합 서비스 교환기 내 워크스테이션의 성능 평가)

  • Jang, Seung-Ju;Kim, Gil-Yong;Lee, Jae-Hum;Park, Ho-Jin
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.6 no.4
    • /
    • pp.421-429
    • /
    • 2000
• The ATM integrated-service switching system has been developed as a set of mixed modules: a complex switching system providing maintenance and operation functions based on B-ISDN/LAN services, plus a plug-in module that runs on a workstation computer system. The workstation provides HMI operation features, including file system management, time management, graphic processing, and a TMN agent function, and communicates between the ATM switching module and clients. This architecture carries a heavy message-communication burden among processes and processors, and this message traffic consumes system resources such as sockets, message queues, I/O device files, and regular files. In this paper, we therefore propose a new performance model of this system architecture, analyze the system bottleneck, and improve system performance. Moreover, since many additional features are expected to be migrated to the workstation system in the future, the bottleneck needs to be evaluated in advance so that the system can be redesigned. The performance model uses a queueing network model, and the simulation is implemented with the PDQ package and a C program.
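The exact Mean Value Analysis algorithm named in the title can be sketched for a closed queueing network; the service demands and job count below are invented, not the paper's measured values:

```python
# Sketch of exact Mean Value Analysis (MVA) for a closed queueing
# network with load-independent service centers. Demands are made up.
def mva(demands, n_jobs, think_time=0.0):
    queue = [0.0] * len(demands)          # Q_k(0) = 0
    for n in range(1, n_jobs + 1):
        # Residence time at each center: R_k(n) = D_k * (1 + Q_k(n-1))
        resid = [d * (1.0 + q) for d, q in zip(demands, queue)]
        # System throughput: X(n) = n / (Z + sum of residence times)
        x = n / (think_time + sum(resid))
        # Queue lengths for the next population: Q_k(n) = X(n) * R_k(n)
        queue = [x * r for r in resid]
    return x, resid, queue

# Example: CPU and disk service demands of 0.05 s and 0.03 s, 10 jobs.
x, resid, queue = mva([0.05, 0.03], n_jobs=10)
print(round(x, 3))  # throughput (jobs/s), bounded above by 1/0.05 = 20
```

With zero think time the queue lengths at population n sum to n (Little's law), and throughput saturates at the reciprocal of the largest service demand, which identifies the bottleneck center, the kind of analysis the paper performs on the workstation.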


Segmentation and Visualization of Human Anatomy using Medical Imagery (의료영상을 이용한 인체장기의 분할 및 시각화)

  • Lee, Joon-Ku;Kim, Yang-Mo;Kim, Do-Yeon
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.8 no.1
    • /
    • pp.191-197
    • /
    • 2013
• Conventional CT and MRI scans produce cross-sectional slices of the body that are viewed sequentially by radiologists, who must imagine or extrapolate from these views what the three-dimensional anatomy should be. With sophisticated algorithms and high-performance computing, these cross-sections can be rendered as direct 3D representations of human anatomy. 2D medical image analysis forces the use of time-consuming, subjective, error-prone manual techniques, such as slice tracing and region painting, for extracting regions of interest. To overcome these drawbacks, 3D visualization combined with medical image processing is essential for extracting anatomical structures and making measurements. We used gray-level thresholding, region growing, contour following, and deformable models to segment human organs, and used feature vectors from texture analysis to detect cancer. We used perspective projection and the marching cubes algorithm to render surfaces from volumetric MR and CT image data. The 3D visualization of human anatomy and segmented organs provides valuable benefits for radiation treatment planning, surgical planning, surgery simulation, image-guided surgery, and interventional imaging applications.
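One of the segmentation methods listed above, region growing, can be sketched in a self-contained form: flood outward from a seed pixel, accepting 4-connected neighbors whose intensity lies within a threshold of the seed value. The toy image and threshold are illustrative, not the paper's data:

```python
# Minimal 2D region-growing sketch: breadth-first flood from a seed,
# accepting 4-connected neighbors within `thresh` of the seed intensity.
from collections import deque

def region_grow(img, seed, thresh):
    h, w = len(img), len(img[0])
    sy, sx = seed
    seed_val = img[sy][sx]
    region = {seed}
    frontier = deque([seed])
    while frontier:
        y, x = frontier.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and abs(img[ny][nx] - seed_val) <= thresh):
                region.add((ny, nx))
                frontier.append((ny, nx))
    return region

# Toy "image": a bright organ region (values ~100) on a dark background.
img = [[10, 10,  10,  10],
       [10, 98, 102,  10],
       [10, 99, 101,  10],
       [10, 10,  10,  10]]
print(sorted(region_grow(img, (1, 1), thresh=5)))
```

In clinical volumes the same idea runs in 3D with 6- or 26-connectivity, and the homogeneity test is usually richer than a fixed intensity threshold.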

A Study on the Plans for Effective Use of Public Data: From the Perspectives of Benefit, Opportunity, Cost, and Risk (인터넷기반 공공데이터 활용방안 연구: 혜택, 기회, 비용, 그리고 위험요소 관점에서)

  • Song, In Kuk
    • Journal of Internet Computing and Services
    • /
    • v.16 no.4
    • /
    • pp.131-139
    • /
    • 2015
• With demands for a new engine of economic growth, interest in disclosing public-owned data has been increasing. The Korean government is pushing to open public-owned data and to utilize it in solving various social problems and promoting public welfare. In contrast, distrusting the effectiveness of the policy, many public organizations hesitate to open their data. Despite this communication gap between the government and public organizations, the Ministry of Government Administration and the National Information Society Agency recently planned to accelerate information disclosure. This study aims to analyze public organizations' perceptions of public data utilization and to provide proper recommendations. The research identified the relative weights that organizations assign, in opening and sharing public data, to benefit, opportunity, cost, and risk. The ANP decision-making tool and the BOCR model were applied in the analyses. The results show significant differences between the government and public organizations in how risk and opportunity elements are perceived. Finally, the study proposes alternatives based on the four elements. The study will hopefully provide a guideline to public organizations and assist the related authorities with the information disclosure policy in drawing up the relevant regulations.
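As background on the BOCR model mentioned above, one common synthesis used with ANP is multiplicative: an alternative's overall score is B·O/(C·R), the product of its Benefit and Opportunity priorities divided by the product of its Cost and Risk priorities. The alternatives and priority values below are invented for illustration and are not the study's results:

```python
# Hypothetical multiplicative BOCR synthesis. The priority values would
# come from ANP pairwise comparisons; these numbers are invented.
def bocr_score(b, o, c, r):
    return (b * o) / (c * r)

alternatives = {
    "open all data now": bocr_score(0.50, 0.40, 0.30, 0.35),
    "staged disclosure": bocr_score(0.35, 0.40, 0.20, 0.25),
}
best = max(alternatives, key=alternatives.get)
print(best, round(alternatives[best], 3))
```

Because cost and risk sit in the denominator, an alternative with modest benefits but low cost and risk can outrank a higher-benefit, higher-risk one, which is why the four elements are weighed jointly rather than benefit alone.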

Frame-Layer H.264 Rate Control for Scene-Change Video at Low Bit Rate (저 비트율 장면 전환 영상에 대한 향상된 H.264 프레임 단위 데이터율 제어 알고리즘)

  • Lee, Chang-Hyun;Jung, Yun-Ho;Kim, Jae-Seok
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.44 no.11
    • /
    • pp.127-136
    • /
    • 2007
• An abrupt scene-change frame is one that is hardly correlated with the previous frames. In that case, because an intra-coded frame has less distortion than an inter-coded one, almost all macroblocks are encoded in intra mode. This breaks up the rate control flow and increases the number of bits used. Since the reference software for H.264 takes no special action for a scene-change frame, several studies have been conducted to solve the problem using the quadratic R-D model. However, since this model is more suitable for inter frames, the existing schemes are unsuitable for computing the QP of a scene-change intra frame. In this paper, an improved rate control scheme accounting for the characteristics of intra coding is proposed for scene-change frames. The proposed scheme was validated using 16 test sequences. The results showed that the proposed scheme performed better than the existing H.264 rate control schemes: the PSNR was improved by an average of 0.4-0.6 dB and a maximum of 1.1-1.6 dB, and the PSNR fluctuation was also improved by an average of 18.6%.
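The quadratic R-D model the existing schemes rely on is commonly written T = X1·MAD/Q + X2·MAD/Q², relating the bit budget T, the residual complexity MAD, and the quantization step Q. It can be sketched by solving for Q given a bit target; the coefficient values below are invented for illustration:

```python
# Sketch of the quadratic rate-distortion model,
#   T = X1 * MAD / Q + X2 * MAD / Q^2 ,
# solved for the quantization step Q > 0 given a bit target T.
# Coefficients X1, X2 are illustrative, not calibrated values.
import math

def q_from_quadratic_model(target_bits, mad, x1, x2):
    # Rearranged: -T*Q^2 + X1*MAD*Q + X2*MAD = 0, take the positive root.
    a, b, c = -target_bits, x1 * mad, x2 * mad
    disc = b * b - 4 * a * c   # always positive for T, MAD, X1, X2 > 0
    return (-b - math.sqrt(disc)) / (2 * a)

q = q_from_quadratic_model(target_bits=4000, mad=8.0, x1=100.0, x2=500.0)
print(round(q, 3))  # plugging q back into the model reproduces T = 4000
```

The model's coefficients are fitted from previously coded inter frames, which is why, as the abstract argues, it extrapolates poorly to a scene-change frame coded almost entirely in intra mode.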

A Study on the Problems and Policy Implementation for Open-Source Software Industry in Korea: Soft System Methodology Approach (소프트시스템 모델 방법론을 통해 진단한 국내 공개 SW 산업의 문제점과 정책전략 연구)

  • Kang, Songhee;Shim, Dongnyok;Pack, Pill Ho
    • The Journal of Society for e-Business Studies
    • /
    • v.20 no.4
    • /
    • pp.193-208
    • /
    • 2015
• In a knowledge-based society, information technology (IT) has played a key role in economic growth. In recent years, it is notable that the source of value creation in the IT industry has moved from hardware to software. In particular, among many kinds of software products, the economic potential of open source has been recognized by many government agencies. Open source refers to software code developed through the voluntary and open participation of IT developers worldwide, and many policies promoting open source activities have been implemented with the aim of fast growth in the IT industry. But in many cases, especially in Korea, the policies promoting the open source industry and its ecosystem have not been considered successful. Therefore, this study provides practical reasons for the low performance of the Korean open source industry and suggests pragmatic requisites for an effective open source policy. For this purpose, the study applies the soft system model (SSM), frequently used in academia and industry as a methodology for problem solving, and links the problems with corresponding policy solutions based on SSM. Given the concerns the Korean open source industry now faces, this study suggests the need for three kinds of government policies promoting multiple dimensions of the industry: research and development (R&D)-side, supply-side, and computing-environment-side. The implications suggested by this research will contribute to implementing practical policy solutions to boost the open source industry in Korea.