• Title/Summary/Keyword: optimization research model (최적화 연구모델)


Performance Evaluation and Analysis on Single and Multi-Network Virtualization Systems with Virtio and SR-IOV (가상화 시스템에서 Virtio와 SR-IOV 적용에 대한 단일 및 다중 네트워크 성능 평가 및 분석)

  • Jaehak Lee;Jongbeom Lim;Heonchang Yu
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.2
    • /
    • pp.48-59
    • /
    • 2024
  • As hardware-level virtualization support functions have matured, user applications with various workloads now run efficiently in virtualization systems. SR-IOV is a virtualization support function that gives instances direct access to PCI devices, delivering high I/O performance by minimizing hypervisor and operating system intervention. With SR-IOV, network I/O acceleration can be realized in virtualization systems, which have relatively long I/O paths compared to bare-metal systems and frequent context switches between the user and kernel areas. To exploit the performance advantages of SR-IOV, network resource management policies that can derive optimal network performance when SR-IOV is applied to an instance such as a virtual machine (VM) or container are being actively studied. This paper evaluates and analyzes the network performance of SR-IOV, which implements I/O acceleration, against Virtio in terms of 1) network delay, 2) network throughput, 3) network fairness, 4) performance interference, and 5) multi-network. The contributions of this paper are as follows. First, the network I/O processes of Virtio and SR-IOV in a virtualization system are clearly explained; second, the network performance of Virtio and SR-IOV is evaluated and analyzed using various performance metrics; third, the system overhead and the optimization potential of SR-IOV networking in a virtualization system with high VM density are experimentally confirmed. The experimental results and analysis are expected to inform network resource management policies for virtualization systems that operate network-intensive services such as smart factories, connected cars, deep learning inference models, and crowdsourcing.
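The delay metric above can be illustrated with a minimal loopback echo micro-benchmark. This is a sketch in Python, not the paper's measurement setup; `N_PINGS` and `PAYLOAD` are assumed parameters, and real Virtio/SR-IOV comparisons would measure across a VM boundary rather than loopback.

```python
# Illustrative round-trip latency micro-benchmark over loopback.
import socket, threading, time, statistics

N_PINGS = 200            # assumed probe count
PAYLOAD = b"x" * 64      # assumed 64-byte probe

def echo_server(srv):
    """Accept one connection and echo everything back until EOF."""
    conn, _ = srv.accept()
    with conn:
        while True:
            data = conn.recv(65536)
            if not data:
                break
            conn.sendall(data)

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

cli = socket.socket()
cli.connect(srv.getsockname())
rtts = []
for _ in range(N_PINGS):
    t0 = time.perf_counter()
    cli.sendall(PAYLOAD)
    cli.recv(65536)
    rtts.append(time.perf_counter() - t0)
cli.close()

median_rtt_us = statistics.median(rtts) * 1e6
print(f"median RTT: {median_rtt_us:.1f} us over {N_PINGS} probes")
```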

Automation of Online to Offline Stores: Extremely Small Depth-Yolov8 and Feature-Based Product Recognition (Online to Offline 상점의 자동화 : 초소형 깊이의 Yolov8과 특징점 기반의 상품 인식)

  • Jongwook Si;Daemin Kim;Sungyoung Kim
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.17 no.3
    • /
    • pp.121-129
    • /
    • 2024
  • The rapid advancement of digital technology and the COVID-19 pandemic have significantly accelerated the growth of online commerce, highlighting the need for support mechanisms that enable small business owners to respond effectively to these market changes. In response, this paper presents a foundational technology that leverages the Online to Offline (O2O) strategy to automatically capture products displayed on retail shelves and use these images to create virtual stores. The essence of this research lies in precisely detecting and recognizing the locations and names of displayed products, for which a single-class, lightweight model based on YOLOv8, named ESD-YOLOv8, is proposed. Detected products are identified by name through feature-point-based matching, which allows the system to be updated swiftly by simply adding photos of new products. In experiments, product name recognition achieved an accuracy of 74.0%, and location detection achieved an F2-score of 92.8% using only 0.3M parameters. These results confirm that the proposed method combines high performance with optimized efficiency.
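The feature-point-based identification step can be sketched as nearest-neighbor descriptor matching with a ratio test. Everything here is hypothetical: the product names, the random vectors standing in for real ORB/SIFT descriptors, and the ratio threshold.

```python
# Sketch: identify a product by matching query descriptors against
# per-product descriptor sets; the product with the most ratio-test
# matches wins. Descriptors are random stand-ins for real features.
import numpy as np

rng = np.random.default_rng(0)
product_db = {  # hypothetical: product name -> (n_desc, 32) descriptors
    "cola_350ml": rng.normal(size=(40, 32)),
    "chips_60g": rng.normal(size=(40, 32)),
}

def match_count(query, ref, ratio=0.75):
    """Count query descriptors whose best match passes Lowe's ratio test."""
    d = np.linalg.norm(query[:, None, :] - ref[None, :, :], axis=2)
    d.sort(axis=1)  # d[:, 0] = nearest, d[:, 1] = second nearest
    return int(np.sum(d[:, 0] < ratio * d[:, 1]))

def identify(query):
    scores = {name: match_count(query, ref) for name, ref in product_db.items()}
    return max(scores, key=scores.get)

# A query resembling "cola_350ml" descriptors plus slight noise:
query = product_db["cola_350ml"][:15] + rng.normal(scale=0.05, size=(15, 32))
print(identify(query))
```

Adding a new product is just adding another entry to the database, which mirrors the paper's claim that the system can be updated by simply adding photos.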

Three-dimensional finite element analysis of stress distribution for different implant thread slope and implant angulation (임플란트 나사선 경사각과 식립 각도에 따른 3차원 유한요소 응력분석)

  • Seo, Young-Hun;Lim, Hyun-Pil;Yun, Kwi-Dug;Yoon, Suk-Ja;Vang, Mong-Sook
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.51 no.1
    • /
    • pp.1-10
    • /
    • 2013
  • Purpose: The purpose of this study was to find the thread slope inclination that most favorably distributes stress to the alveolar bone, using three-dimensional finite element analysis. Materials and methods: Three implant models with a fixed thread pitch of 0.8 mm were created: a single-thread implant with a 3.8° inclination, a double-thread implant with a 7.7° inclination, and a triple-thread implant with an 11.5° inclination. Each was placed at three angulations to the alveolar bone: 0°, 10°, and 15°. The resulting nine models, restored with prosthetic crowns, were fabricated for three-dimensional finite element analysis, and a 200 N vertical load and a 15° tilting load were applied at the center of the crown. Results: 1. The greater the implant angulation, the more the von Mises stress and maximum principal stress increased. 2. Von Mises stress and maximum principal stress were higher under the 15° tilting load than under the vertical load. 3. As the number of threads increased, von Mises stress and maximum principal stress decreased, since the generated stress was distributed more effectively. 4. Because the maximum principal stress acting on the alveolar bone can strongly influence the longevity of an implant, the magnitudes were compared; the triple-thread implant showed the least stress and thus the best result. Conclusion: A triple-thread implant, with its increased thread slope inclination and thread number, distributes stress more effectively than single- and double-thread implants, especially when the implant angulation exceeds 10° to the alveolar bone. An effective combination of thread number and thread slope inclination can thus help prolong implant longevity.
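The von Mises stress compared across the models follows directly from the three principal stresses; a small sketch of the standard formula, with illustrative values rather than the study's FEA results:

```python
# Von Mises equivalent stress from principal stresses (MPa).
import math

def von_mises(s1, s2, s3):
    """sigma_vm = sqrt(0.5 * ((s1-s2)^2 + (s2-s3)^2 + (s3-s1)^2))"""
    return math.sqrt(0.5 * ((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2))

# Sanity check: for a uniaxial state (only s1 nonzero), the von Mises
# stress equals |s1|; for a purely hydrostatic state it is zero.
print(von_mises(120.0, 0.0, 0.0))  # → 120.0
print(von_mises(100.0, 100.0, 100.0))  # → 0.0
```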

Evaluation of Approximate Exposure to Low-dose Ionizing Radiation from Medical Images using a Computed Radiography (CR) System (전산화 방사선촬영(CR) 시스템을 이용한 근사적 의료 피폭 선량 평가)

  • Yu, Minsun;Lee, Jaeseung;Im, Inchul
    • Journal of the Korean Society of Radiology
    • /
    • v.6 no.6
    • /
    • pp.455-464
    • /
    • 2012
  • This study proposes an approximate evaluation of low-dose ionizing radiation exposure from medical images using a computed radiography (CR) system in standard X-ray examinations, together with an experimental model that can be compared against diagnostic reference levels (DRLs) to suggest optimized protection conditions for low-dose medical radiation. Entrance surface dose (ESD) was cross-measured with a standard dosimeter and optically stimulated luminescence dosimeters (OSLDs) under varied tube voltage and current conditions of the X-ray generator. Hounsfield unit (HU) values were also measured for each condition in the CR system, and after tabulating a characteristic relationship table and graph relating ESD and HU, approximate doses for the head, neck, thorax, abdomen, and pelvis were derived. The measured average ESDs for the head, neck, thorax, abdomen, and pelvis were 2.10, 2.01, 1.13, 2.97, and 1.95 mGy, respectively, with corresponding HU values in the CR images of 3,276±3.72, 3,217±2.93, 2,768±3.13, 3,782±5.19, and 2,318±4.64. Using the characteristic relationship table and graph, the ESDs were estimated at approximately 2.16, 2.06, 1.19, 3.05, and 2.07 mGy, respectively. The average error between measured and estimated ESD was smaller than about 3%, within the 5% measurement reliability accepted in radiology. In conclusion, this study suggests a new experimental model that can approximately assess patient dose in standard X-ray examinations and can be applied to CR, digital radiography, and even film-cassette systems.
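The characteristic-relationship idea, estimating ESD from a HU reading, can be sketched as linear interpolation on a calibration table. The table below is hypothetical and monotonic for illustration; it is not the authors' calibration data, which is built per exposure condition.

```python
# Sketch: estimate entrance surface dose (mGy) from a HU value by
# linear interpolation on a hypothetical (HU, ESD) calibration table.
from bisect import bisect_left

calib = [(2300, 1.1), (2800, 1.6), (3300, 2.2), (3800, 3.0)]  # (HU, mGy)

def esd_from_hu(hu):
    """Piecewise-linear lookup; clamps outside the table range."""
    hus = [h for h, _ in calib]
    i = bisect_left(hus, hu)
    if i == 0:
        return calib[0][1]
    if i == len(calib):
        return calib[-1][1]
    (h0, d0), (h1, d1) = calib[i - 1], calib[i]
    return d0 + (d1 - d0) * (hu - h0) / (h1 - h0)

print(round(esd_from_hu(3050), 3))  # midway between 2800 and 3300 → 1.9
```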

On Method for LBS Multi-media Services using GML 3.0 (GML 3.0을 이용한 LBS 멀티미디어 서비스에 관한 연구)

  • Jung, Kee-Joong;Lee, Jun-Woo;Kim, Nam-Gyun;Hong, Seong-Hak;Choi, Beyung-Nam
    • Korea Spatial Information System Society: Conference Proceedings
    • /
    • 2004.12a
    • /
    • pp.169-181
    • /
    • 2004
  • SK Telecom constructed the GIMS system as the common base framework of its LBS/GIS service system, based on the OGC (OpenGIS Consortium) international standard, for the first mobile vector map service in 2002. As service content has become more complex, however, renovation has been needed to satisfy growing requirements for multi-purpose use, multi-function support, and maximum efficiency. This research prepares a GML 3.0-based platform to upgrade the service from the GML 2-based GIMS system, making it possible for a variety of application services to provide location and geographic data easily and freely. From GML 3.0, animation, event handling, resources for style mapping, topology specification for 3D, and telematics features were selected for the mobile LBS multimedia service, and the schema and transfer protocol were developed and organized to optimize data transfer to the MS (Mobile Station). The upgrade to the GML 3.0-based GIMS system provides an innovative framework, in terms of both construction and service, that has been implemented and applied beyond previous research and systems. A GIMS channel interface was also implemented to simplify access to the GIMS system, and the internal GIMS service components, WFS and WMS, were given enhanced and expanded functions.
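A minimal sketch of emitting a GML point feature for a mobile payload with Python's standard library; the namespace URI follows the OGC GML convention, but the coordinates are illustrative and this is not the GIMS schema or transfer protocol.

```python
# Sketch: build a GML Point (gml:Point/gml:pos) with xml.etree.
import xml.etree.ElementTree as ET

GML = "http://www.opengis.net/gml"
ET.register_namespace("gml", GML)  # serialize with the gml: prefix

point = ET.Element(f"{{{GML}}}Point", {"srsName": "EPSG:4326"})
pos = ET.SubElement(point, f"{{{GML}}}pos")
pos.text = "37.5665 126.9780"  # illustrative lat/lon pair

xml_bytes = ET.tostring(point)
print(xml_bytes.decode())
```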


Development of a Korean Standard Structural Brain Template in Cognitive Normals and Patients with Mild Cognitive Impairment and Alzheimer's Disease (정상노인 및 경도인지장애 및 알츠하이머성 치매 환자에서의 한국인 뇌 구조영상 표준판 개발)

  • Kim, Min-Ji;Jahng, Geon-Ho;Lee, Hack-Young;Kim, Sun-Mi;Ryu, Chang-Woo;Shin, Won-Chul;Lee, Soo-Yeol
    • Investigative Magnetic Resonance Imaging
    • /
    • v.14 no.2
    • /
    • pp.103-114
    • /
    • 2010
  • Purpose : To generate a Korean-specific brain template, especially for patients with Alzheimer's disease (AD), to optimize voxel-based analysis. Materials and Methods : Three-dimensional T1-weighted images were obtained from 123 subjects: 43 cognitively normal subjects, 44 patients with mild cognitive impairment (MCI), and 36 with AD. The template and the corresponding a priori maps were created using a matched-pairs approach that considered differences in age, gender, and differential diagnosis (DDX). Several characteristics, including ventricle size, were measured in both our template and the MNI template, and the fractions of gray matter and white matter voxels normalized by the total intracranial volume were evaluated. Results : A high-resolution template and the corresponding a priori maps of gray matter, white matter (WM), and CSF were created with a voxel size of 1×1×1 mm. Mean distance measures and ventricle sizes differed between the two templates. Our brain template had smaller gray matter and white matter areas than the MNI template, with larger volume differences in gray matter than in white matter. Conclusion : Gray matter and white matter integrity in Korean elderly populations and patients with AD should be investigated with this template.
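The normalization used when comparing templates, tissue volume fractions relative to total intracranial volume (TIV), is a one-line computation. The voxel counts below are hypothetical; with 1×1×1 mm voxels, counts equal volumes in mm³.

```python
# Sketch: tissue volume fractions normalized by total intracranial
# volume (TIV = GM + WM + CSF). Voxel counts are hypothetical.
gm_voxels, wm_voxels, csf_voxels = 650_000, 520_000, 280_000

tiv = gm_voxels + wm_voxels + csf_voxels
fractions = {t: v / tiv for t, v in
             [("GM", gm_voxels), ("WM", wm_voxels), ("CSF", csf_voxels)]}
print({t: round(f, 3) for t, f in fractions.items()})
```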

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.173-198
    • /
    • 2020
  • For a long time, many academic studies have been conducted on predicting the success of campaigns targeted at customers, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded in various ways due to the rapid growth of online business, companies carry out campaigns of various types on a scale that cannot be compared to the past. However, customers increasingly perceive campaigns as spam as fatigue from duplicate exposure grows, and from a corporate standpoint the effectiveness of campaigns is decreasing while investment costs rise, leading to low actual campaign success rates. Accordingly, various studies are ongoing to improve campaign effectiveness in practice. A campaign system has the ultimate purpose of increasing the success rate of various campaigns by collecting and analyzing customer-related data and using it for campaigns; in particular, recent attempts have been made to predict campaign response using machine learning. Selecting appropriate features is very important given the varied features of campaign data. If all input data are used when classifying a large amount of data, learning time grows as the classification classes expand, so a minimal input data set must be extracted from the entire data. In addition, when a model is trained with too many features, prediction accuracy may be degraded by overfitting or correlation between features. Therefore, to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary step in analyzing a high-dimensional data set.
Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques, but when there are many features they suffer from poor classification prediction performance and long learning times. Therefore, in this study, we propose an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing sequential SFFS method in searching for the feature subsets that underpin machine learning model performance, using statistical characteristics of the data processed in the campaign system. Features with a strong influence on performance are derived first and features with a negative effect are removed, after which the sequential method is applied, increasing search efficiency and enabling generalized prediction. The proposed model showed better search and prediction performance than the traditional greedy algorithms: campaign success prediction was higher than with the original data set, the greedy algorithms, a genetic algorithm (GA), and recursive feature elimination (RFE). In addition, the improved feature selection algorithm proved helpful in analyzing and interpreting prediction results by providing the importance of the derived features, which included features such as age, customer rating, and sales whose importance was already known statistically.
Unexpectedly, features that campaign planners had rarely used to select campaign targets, such as the combined product name, the average three-month data consumption rate, and wireless data usage over the last three months, were also selected as important features for campaign response, confirming that base attributes can be very important depending on the campaign type. This makes it possible to analyze and understand the important characteristics of each campaign type.
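The greedy baseline the study improves on can be sketched as plain Sequential Forward Selection (SFS): repeatedly add whichever remaining feature most improves a scoring function. The feature names, weights, and toy score standing in for model accuracy are hypothetical.

```python
# Sketch of Sequential Forward Selection with a toy scoring function.
def sfs(features, score, k):
    """Greedily select k features maximizing score(subset)."""
    selected = []
    while len(selected) < k:
        best = max((f for f in features if f not in selected),
                   key=lambda f: score(selected + [f]))
        selected.append(best)
    return selected

# Toy score: additive feature values, with a penalty when two
# correlated features (sales and sales_3m) are both present.
weights = {"age": 0.30, "rating": 0.25, "sales": 0.20, "sales_3m": 0.19}

def score(subset):
    s = sum(weights[f] for f in subset)
    if "sales" in subset and "sales_3m" in subset:
        s -= 0.15  # correlated-pair penalty
    return s

print(sfs(list(weights), score, 3))  # → ['age', 'rating', 'sales']
```

SFFS extends this with floating backward steps that can drop a previously chosen feature; the paper's contribution is a further pre-filtering of influential and harmful features before the sequential search.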

Prediction of Acer pictum subsp. mono Distribution using Bioclimatic Predictor Based on SSP Scenario Detailed Data (SSP 시나리오 상세화 자료 기반 생태기후지수를 활용한 고로쇠나무 분포 예측)

  • Kim, Whee-Moon;Kim, Chaeyoung;Cho, Jaepil;Hur, Jina;Song, Wonkyong
    • Ecology and Resilient Infrastructure
    • /
    • v.9 no.3
    • /
    • pp.163-173
    • /
    • 2022
  • Climate change is a key factor that greatly influences changes in biological seasons and the geographical distribution of species. In the ecological field, BioClimatic predictors (BioClim), which are closely related to the physiological characteristics of organisms, are used for vulnerability assessment. However, BioClim values are not provided for the Shared Socio-economic Pathways (SSPs) scenarios beyond the future-period climate averages of each GCM. In this study, BioClim data suited to domestic conditions were produced using 1 km resolution SSP-scenario detailed data from the Rural Development Administration, and a species distribution model was applied to Acer pictum subsp. mono, which mainly grows in southern regions, Gyeongsangbuk-do, Gangwon-do, and humid areas. Suitable habitat distributions were predicted in 30-year windows for the base period (1981 - 2010) and future periods (2011 - 2100). Occurrence data for Acer pictum subsp. mono were collected from a total of 819 points through national natural environment survey data. To improve the performance of the MaxEnt model, its parameters (LQH-1.5) were optimized, and 7 detailed BioClim indices and 5 topographical indices were applied. Drainage, annual precipitation (Bio12), and slope contributed significantly to the distribution of Acer pictum subsp. mono in Korea; reflecting the species' preference for moist, fertile soil, the influence of climatic factors alone was not large. Accordingly, in the base period, highly suitable habitat for Acer pictum subsp. mono covered 3.41% of Korea's area; under SSP1-2.6 it accounts for 0.01% and 0.02% in the near future (2011 - 2040) and far future (2071 - 2100), respectively, a gradual decrease from the base period. Under SSP5-8.5, however, it was 0.01% and 0.72%, respectively, showing a decrease in the near future compared with the base period but a gradual increase toward the far future.
This study confirms the future distribution of vegetation that adapts more easily to climate change and has significance as a basic study that can be used for future forest restoration with climate-change-adapted species.
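Deriving a BioClim index from monthly climate fields is straightforward; for example, Bio12 (annual precipitation) is simply the sum of the twelve monthly totals, and Bio14 is the driest month's total. The monthly values below are illustrative, not the study's SSP data.

```python
# Sketch: compute BioClim precipitation indices from monthly totals (mm).
monthly_precip = [25, 30, 55, 80, 95, 150, 290, 270, 140, 50, 45, 20]

bio12 = sum(monthly_precip)   # Bio12: annual precipitation (mm)
bio14 = min(monthly_precip)   # Bio14: precipitation of the driest month (mm)
print(bio12, bio14)
```

In practice this is computed per 1 km grid cell per 30-year window, then fed to MaxEnt alongside the topographical indices.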

A Numerical Study for Effective Operation of MSW Incinerator for Waste of High Heating Value by the Addition of Moisture Air (함습공기를 이용한 고발열량 도시폐기물 소각로의 효율적 운전을 위한 수치 해석적 연구)

  • Shin, Mi-Soo;Shin, Na-Ra;Jang, Dong-Soon
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.35 no.2
    • /
    • pp.115-123
    • /
    • 2013
  • Stoker-type incinerators are among the most widely used for municipal solid waste (MSW) incineration because they are generally well suited to large capacities and require no preprocessing facility. Nowadays, however, since the combustible portion of incoming MSW has increased while its moisture content has decreased, owing to the prohibition on directly burying food waste in landfills, the heating value of the waste has risen remarkably compared with the early stage of incinerator installation. The increased heating value causes a number of serious operational problems, such as a reduction in the amount of waste that can be burned, due to the limits of boiler heat capacity, together with significant NO generation in the high-temperature environment. In this study, therefore, a series of numerical simulations was carried out with the waste amount and the moisture fraction of the air stream as parameters, in order to find optimal operating conditions that resolve these problems. Specifically, a detailed turbulent reacting flow field calculation with an NO model was performed for a full-scale incinerator in D city, and the injection of moisturized air as the oxidizer was examined for added moisture amounts of 10% and 20%. The results generally showed a consistent reduction of the maximum flame temperature, due to the combined effects of the increased specific heat of the combustion air and the heat of vaporization added by the water moisture; consequently, NOx generation was substantially reduced. Furthermore, for the 20% moisture stream, the afterburner region fell within a temperature range quite appropriate for SNCR operation, suggesting that the SNCR facility, currently out of service because of the increased heating value of the MSW, could be brought back into operation.
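The specific-heat effect of moisturized air can be sketched with a back-of-envelope mixture calculation; the property constants are rough textbook values near ambient conditions, not the paper's CFD inputs.

```python
# Sketch: mass-weighted specific heat of moist air, showing why adding
# water vapor raises the heat absorbed per degree of temperature rise
# (and latent heat adds a further load).
CP_AIR = 1.005    # kJ/(kg*K), dry air (rough constant)
CP_H2O = 1.86     # kJ/(kg*K), water vapor (rough constant)
H_VAP = 2257.0    # kJ/kg, latent heat of vaporization of water

def cp_mix(moisture_mass_frac):
    """Specific heat of the air/vapor mixture by mass-weighted mixing."""
    w = moisture_mass_frac
    return (1 - w) * CP_AIR + w * CP_H2O

print(round(cp_mix(0.10), 4), round(cp_mix(0.20), 4))
```

Even before latent heat is counted, the 20% moisture stream absorbs noticeably more sensible heat per kilogram than dry air, consistent with the computed drop in maximum flame temperature.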

An Outlier Detection Using Autoencoder for Ocean Observation Data (해양 이상 자료 탐지를 위한 오토인코더 활용 기법 최적화 연구)

  • Kim, Hyeon-Jae;Kim, Dong-Hoon;Lim, Chaewook;Shin, Yongtak;Lee, Sang-Chul;Choi, Youngjin;Woo, Seung-Buhm
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.33 no.6
    • /
    • pp.265-274
    • /
    • 2021
  • Outlier detection in ocean data has traditionally been performed with statistical and distance-based machine learning algorithms. Recently, AI-based methods have received much attention, mainly so-called supervised learning methods that require classification information for the data. Supervised learning requires a lot of time and cost because classification information (labels) must be manually assigned to all training data. In this study, an autoencoder based on unsupervised learning was applied for outlier detection to overcome this problem. Two experiments were designed: univariate learning, in which only SST was used among the observation data of Deokjeok Island, and multivariate learning, in which SST, air temperature, wind direction, wind speed, air pressure, and humidity were used. The data cover 25 years, from 1996 to 2020, and pre-processing that considers the characteristics of ocean data was applied. Outlier detection on real SST data was then attempted with the trained univariate and multivariate autoencoders. To compare model performance, various outlier detection methods were applied to synthetic data with artificially inserted errors. Quantitative evaluation showed multivariate/univariate accuracies of about 96%/91%, respectively, indicating that the multivariate autoencoder had better outlier detection performance. Outlier detection using an unsupervised autoencoder is expected to be used in various ways, as it can reduce subjective classification errors and the cost and time required for data labeling.
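The reconstruction-error principle behind autoencoder outlier detection can be sketched with its linear special case: a rank-1 SVD projection, which is the closed-form optimum of a linear autoencoder. The correlated sinusoidal series and the injected outlier are synthetic; a real deployment would train a nonlinear autoencoder on the observational series.

```python
# Sketch: fit a rank-1 linear "autoencoder" on mostly-normal data, then
# flag points whose reconstruction error exceeds a percentile threshold.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 4 * np.pi, 400)
x = np.column_stack([np.sin(t), 0.5 * np.sin(t)])  # correlated pair
x += rng.normal(scale=0.02, size=x.shape)          # measurement noise
x[50] += np.array([0.0, 1.5])                      # inject one outlier

mu = x.mean(axis=0)
xc = x - mu
_, _, vt = np.linalg.svd(xc, full_matrices=False)
code = xc @ vt[:1].T            # "encode" to a 1-D latent variable
recon = code @ vt[:1] + mu      # "decode" back to data space
err = np.linalg.norm(x - recon, axis=1)

threshold = np.percentile(err, 99)  # assumed 99th-percentile cutoff
outliers = np.where(err > threshold)[0]
print(outliers)
```

The injected point breaks the learned correlation between the two channels and reconstructs poorly, which is exactly how the multivariate autoencoder in the study gains its advantage over the univariate one.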