• Title/Summary/Keyword: 성능모델 (performance model)


Automatic Interpretation of Epileptogenic Zones in F-18-FDG Brain PET using Artificial Neural Network (인공신경회로망을 이용한 F-18-FDG 뇌 PET의 간질원인병소 자동해석)

  • 이재성;김석기;이명철;박광석;이동수
    • Journal of Biomedical Engineering Research / v.19 no.5 / pp.455-468 / 1998
  • For the objective interpretation of cerebral metabolic patterns in epilepsy patients, we developed a computer-aided classifier using an artificial neural network. We studied interictal brain FDG PET scans of 257 epilepsy patients who were diagnosed as normal (n=64), L TLE (n=112), or R TLE (n=81) by visual interpretation. Automatically segmented volumes of interest (VOIs) were used to reliably extract the features representing patterns of cerebral metabolism. All images were spatially normalized to the MNI standard PET template and smoothed with a 16 mm FWHM Gaussian kernel using SPM96. The mean count in the cerebral region was normalized. The VOIs for 34 cerebral regions were previously defined on the standard template, and 17 counts of regions mirrored about the hemispheric midline were extracted from the spatially normalized images. A three-layer feed-forward error back-propagation neural network classifier with 7 input nodes and 3 output nodes was used. The network was trained to interpret metabolic patterns and produce diagnoses identical to those of expert viewers. The performance of the neural network was optimized by testing with 5~40 nodes in the hidden layer. Forty randomly selected images from each group were used to train the network, and the remaining images were used to test the trained network. The optimized neural network gave a maximum agreement rate of 80.3% with expert viewers; it used 20 hidden nodes and was trained for 1508 epochs. The network also gave agreement rates of 75~80% with 10 or 30 hidden nodes. We conclude that the artificial neural network performed as well as human experts and could be potentially useful as a clinical decision-support tool for the localization of epileptogenic zones.
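A minimal sketch of the classifier described above, assuming the per-scan features have already been extracted into a feature matrix with class labels; scikit-learn's MLPClassifier stands in for the original back-propagation network, and the hidden-layer sweep mirrors the 5~40-node optimization. Variable names and training settings are illustrative, not the authors' implementation.

```python
# Sketch only: X is an (n_scans x n_features) array of extracted VOI features,
# y holds labels {0: normal, 1: L TLE, 2: R TLE}. MLPClassifier is a stand-in
# for the original three-layer back-propagation network.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

def train_and_select(X_train, y_train, X_test, y_test, hidden_sizes=range(5, 41, 5)):
    """Sweep the hidden-layer width (5~40 nodes) and keep the best network."""
    best_model, best_agreement = None, -1.0
    for h in hidden_sizes:
        model = MLPClassifier(hidden_layer_sizes=(h,),
                              activation="logistic",  # sigmoid units, as in classic BP nets
                              solver="sgd",
                              max_iter=2000,
                              random_state=0)
        model.fit(X_train, y_train)
        agreement = accuracy_score(y_test, model.predict(X_test))  # agreement with expert labels
        if agreement > best_agreement:
            best_model, best_agreement = model, agreement
    return best_model, best_agreement
```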


A Basic Study for Sustainable Analysis and Evaluation of Energy Environment in Buildings : Focusing on Energy Environment Historical Data of Residential Buildings (빌딩의 지속가능 에너지환경 분석 및 평가를 위한 기초 연구 : 주거용 건물의 에너지환경 실적정보를 중심으로)

  • Lee, Goon-Jae
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.1 / pp.262-268 / 2017
  • The energy consumption of buildings is approximately 20.5% of total energy consumption, and interest in energy efficiency and low consumption in buildings is increasing. Several studies have performed energy analysis and evaluation. Energy analysis and evaluation are most effective when applied in the initial design phase. In the initial design phase, however, energy performance is evaluated using general-level information, such as glazing area and surface area. The evaluation results will therefore differ from those of the detailed design stage, which is based on drawings that include detailed information on materials and facilities. Thus far, most studies have reported analysis and evaluation at the detailed design stage, where detailed information about the materials installed in the building becomes clear. Therefore, the accuracy of energy environment analysis can be improved if the energy environment information generated during the life cycle of a building is established and accurate information is provided to the initial-design-stage analysis using probabilistic/statistical methods. On the other hand, historical data on energy use has not been established in Korea. Therefore, this study performed an energy environment analysis to construct energy environment historical data. As a result, an information classification system, an information model, and a service model for acquiring and providing energy environment information over the building life cycle are presented and used as basic data. The results can be utilized in a historical data management system so that the reliability of analysis can be improved by supplementing the input information at the initial design stage. As the historical data accumulate, they can be used as learning data for methods such as probability/statistics or artificial intelligence for energy environment analysis in the initial design stage.

Timing Driven Analytic Placement for FPGAs (타이밍 구동 FPGA 분석적 배치)

  • Kim, Kyosun
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.7 / pp.21-28 / 2017
  • Practical models for FPGA architectures, which include performance- and/or density-enhancing components such as carry chains, wide function multiplexers, and memory/multiplier blocks, are being applied to academic FPGA placement tools that used to rely on simple imaginary models. Previously, techniques such as pre-packing and multi-layer density analysis were proposed to remedy issues related to such practical models, and wire length is effectively minimized during initial analytic placement. Since timing should be optimized rather than wire length, most previous work takes the timing constraints into account. However, instead of the initial analytic placement, timing-driven techniques are mostly applied to subsequent steps such as placement legalization and iterative improvement. This paper incorporates timing-driven techniques, which check whether the placement meets the timing constraints given in the standard SDC format and minimize the detected violations, into the existing analytic placer that implements pre-packing and multi-layer density analysis. First, a static timing analyzer is used to check the timing of the wire-length-minimized placement results. To minimize the detected violations, a function that minimizes the largest arrival time at end points is added to the objective function of the analytic placer. Since each clock has a different period, this function is evaluated for each clock and added to the objective function. Because this function can unnecessarily shorten paths that have no violations, a second function that calculates and minimizes the largest negative slack at end points is also proposed and compared. Since the existing, non-timing-driven legalization is used before the timing analysis, any timing improvement is entirely due to the functions added to the objective function. Experiments on twelve industrial examples show that the minimum arrival time function improves the worst negative slack by 15% on average, whereas the minimum worst negative slack function improves the negative slacks by an additional 6% on average.
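As a concrete illustration of the two objective terms described above, the following hedged sketch computes a per-clock maximum-arrival-time penalty and a per-clock worst-negative-slack penalty from timing-analysis results. The data layout and function names are assumptions for illustration, not the placer's actual code.

```python
# Illustrative sketch: the static timing analyzer is assumed to supply, for
# each endpoint, the name of its constraining clock, its arrival time, and
# that clock's period (all names hypothetical).
from collections import defaultdict

def max_arrival_per_clock(endpoints):
    """Term 1: for each clock, the largest arrival time at its endpoints.
    endpoints: iterable of (clock_name, arrival_time, clock_period)."""
    worst = defaultdict(float)
    for clock, arrival, _period in endpoints:
        worst[clock] = max(worst[clock], arrival)
    # One such term per clock is added to the placer's objective.
    return sum(worst.values())

def worst_negative_slack_per_clock(endpoints):
    """Term 2: penalize only endpoints that violate their clock period, so
    paths that already meet timing are not shortened unnecessarily."""
    wns = defaultdict(float)
    for clock, arrival, period in endpoints:
        violation = max(0.0, arrival - period)  # negative slack, as a positive amount
        wns[clock] = max(wns[clock], violation)
    return sum(wns.values())
```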

An Analysis of Soil Pressure Gauge Result from KHC Test Road (시험도로 토압계 계측결과 분석)

  • In Byeong-Eock;Kim Ji-Won;Kim Kyong-Ha;Lee Kwang-Ho
    • International Journal of Highway Engineering / v.8 no.3 s.29 / pp.129-141 / 2006
  • The vertical soil pressure developed in the granular layer of an asphalt pavement system is influenced by various factors, including the wheel load magnitude, the loading speed, and the asphalt pavement temperature. This research observed the distribution of vertical soil pressure in the pavement supporting layers by investigating measured data from soil pressure gauges in the KHC Test Road. The existing specification for subbase and subgrade compaction was also evaluated against the measured vertical pressure. A finite element analysis was conducted to verify the accuracy of the results against the measured data because it can maximize research capacity without extensive field testing. The test data were collected from the A5, A7, A14, and A15 test sections in August, September, and November 2004 and in August 2005. These test sections and test data were selected because they had the best quality. The size of the influence area was evaluated, and the vertical pressure variation was investigated with respect to load level, load speed, and pavement temperature. Lower speeds, higher load levels, and higher pavement temperatures increased the vertical pressure and reduced the area of influence. The finite element results showed a trend of vertical pressure variation similar to the measured data. The specification of compaction quality for subbase and subgrade is higher than the level of vertical pressure measured under truck loading, so it should be further investigated.
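The paper's finite element model is not reproduced here; as a hedged illustration of how vertical pressure attenuates with depth and lateral offset under a wheel load, the sketch below uses the classical Boussinesq point-load solution for an elastic half-space (an idealization, not the layered test-road structure).

```python
# Not the paper's FE model: classical Boussinesq point-load solution for an
# elastic half-space, shown only to illustrate how vertical stress decreases
# with depth z and radial offset r under a concentrated load P.
import math

def boussinesq_vertical_stress(P, z, r):
    """Vertical stress at depth z and radial offset r under point load P:
       sigma_z = 3 P z^3 / (2 pi (r^2 + z^2)^(5/2))."""
    R2 = r * r + z * z
    return 3.0 * P * z**3 / (2.0 * math.pi * R2**2.5)

# Example (hypothetical numbers): a 40 kN wheel load, stress directly below
# the load at 0.3 m and 0.6 m depth, in Pa.
for depth in (0.3, 0.6):
    print(depth, boussinesq_vertical_stress(40e3, depth, 0.0))
```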


Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.105-122 / 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. For dimensionality reduction, we should consider the density of the data, which has a significant influence on the performance of sentence classification. Higher-dimensional data require many computations, which can lead to high computational cost and overfitting in the model. Thus, a dimension reduction process is necessary to improve the performance of the model. Diverse methods have been proposed, from merely reducing noise in the data, such as misspellings or informal text, to incorporating semantic and syntactic information. In addition, the representation and selection of text features affect the performance of the classifier for sentence classification, which is one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data from the observation space. Existing methods utilize various algorithms for dimensionality reduction, such as feature extraction and feature selection. In addition to these algorithms, word embeddings, which learn low-dimensional vector space representations of words that capture semantic and syntactic information, are also utilized. To improve performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations. Once the feature selection algorithm selects words that are not important, we assume that words similar to the selected words also have no impact on sentence classification. This study proposes two ways to achieve more accurate classification: conducting selective word elimination under specific rules and constructing word embeddings based on Word2Vec. To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words that have comparatively low information gain values from the raw text and form word embeddings. Second, we additionally remove words that are similar to the words with low information gain values and build word embeddings. Finally, the filtered text and word embeddings are applied to the deep learning models: a Convolutional Neural Network and an Attention-Based Bidirectional LSTM. This study uses customer reviews on Kindle in Amazon.com, IMDB, and Yelp as datasets and classifies each dataset using the deep learning models. Reviews that received more than five helpful votes and whose ratio of helpful votes was over 70% were classified as helpful reviews. Since Yelp only shows the number of helpful votes, we extracted 100,000 reviews that received more than five helpful votes by random sampling from 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters from the text data, was applied to each dataset. To evaluate the proposed methods, we compared their performance with Word2Vec and GloVe word embeddings that used all the words. We showed that one of the proposed methods outperforms the embeddings that use all the words: removing unimportant words yields better performance, but removing too many words lowers it.
For future research, it is necessary to consider diverse preprocessing approaches and an in-depth analysis of word co-occurrence to measure similarity values among words. Also, we only applied the proposed method with Word2Vec. Other embedding methods such as GloVe, fastText, and ELMo can be applied with the proposed methods, and it is possible to identify the possible combinations between word embedding methods and elimination methods.
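A hedged sketch of the two-step elimination described above: information gain is approximated with scikit-learn's mutual_info_classif, and cosine similarity is computed over Word2Vec vectors assumed to be available as a {word: vector} mapping. All names, ratios, and thresholds are illustrative, not the authors' code.

```python
# Sketch only: docs is a list of raw review strings, labels the class labels,
# and word_vectors a {word: numpy vector} dict from a separately trained
# Word2Vec model. mutual_info_classif serves as a proxy for information gain.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

def low_information_words(docs, labels, bottom_ratio=0.1):
    """Step 1: rank words by (approximate) information gain and return the
    lowest-scoring fraction for removal."""
    vec = CountVectorizer()
    X = vec.fit_transform(docs)
    scores = mutual_info_classif(X, labels, discrete_features=True)
    vocab = np.array(vec.get_feature_names_out())
    order = np.argsort(scores)                     # ascending: least informative first
    k = int(len(vocab) * bottom_ratio)
    return set(vocab[order[:k]])

def expand_by_similarity(removed, word_vectors, threshold=0.8):
    """Step 2: also remove words whose cosine similarity to any removed word
    exceeds the threshold."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    extra = set()
    for w, v in word_vectors.items():
        if w in removed:
            continue
        if any(r in word_vectors and cos(v, word_vectors[r]) > threshold for r in removed):
            extra.add(w)
    return removed | extra
```

The filtered vocabulary would then be dropped from the corpus before building the embeddings fed to the CNN and Attention-Based Bidirectional LSTM classifiers.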

Environmental Prediction in Greenhouse According to Modified Greenhouse Structure and Heat Exchanger Location for Efficient Thermal Energy Management (효율적인 열에너지 관리를 위한 온실 형상 및 열 교환 장치 위치 개선에 따른 온실 내부 환경 예측)

  • Jeong, In Seon;Lee, Chung Geon;Cho, La Hoon;Park, Sun Yong;Kim, Seok Jun;Kim, Dae Hyun;Oh, Jae-Heun
    • Journal of Bio-Environment Control / v.30 no.4 / pp.278-286 / 2021
  • In this study, based on the Computational Fluid Dynamics (CFD) simulation model developed in a previous study, the inner environment of the modified glass greenhouse was predicted. In addition, the optimal shape of the greenhouse and the locations of the heat exchangers for thermal energy management were suggested using the developed model. For efficient heating energy management, the glass greenhouse was modified by changing the cross-section design and the location of the heat exchangers. The optimal cross-section design was selected based on the cross-section design standard for glass greenhouses in the Republic of Korea, and the Fan Coil Unit (FCU) and the radiating pipe were repositioned based on the "Standard of greenhouse environment design" to enhance energy-saving efficiency. A simulation analysis was performed to predict the inner temperature distribution and heat transfer of the modified greenhouse structure using the developed inner environment prediction model. As a result of the simulation, the mean temperature and uniformity of the modified greenhouse were 0.65℃ and 0.75%p higher than those of the control greenhouse, respectively. The maximum deviation decreased by an average of 0.25℃, and the mean age of air was 18 seconds lower than that of the control greenhouse. It was confirmed that efficient heating energy management was possible in the modified greenhouse when considering the temperature uniformity and the ventilation performance.

A Study on Training Dataset Configuration for Deep Learning Based Image Matching of Multi-sensor VHR Satellite Images (다중센서 고해상도 위성영상의 딥러닝 기반 영상매칭을 위한 학습자료 구성에 관한 연구)

  • Kang, Wonbin;Jung, Minyoung;Kim, Yongil
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1505-1514 / 2022
  • Image matching is a crucial preprocessing step for the effective utilization of multi-temporal and multi-sensor very high resolution (VHR) satellite images. Deep learning (DL) methods, which are attracting widespread interest, have proven to be an efficient approach to measuring the similarity between image pairs quickly and accurately by extracting complex and detailed features from satellite images. However, image matching of VHR satellite images remains challenging due to the limitations of DL models, whose results depend on the quantity and quality of the training dataset, as well as the difficulty of creating training datasets from VHR satellite images. Therefore, this study examines the feasibility of a DL-based method for matching pair extraction, which is the most time-consuming process during image registration. This paper also aims to analyze the factors that affect accuracy based on the configuration of the training dataset when developing a training dataset, with bias, from an existing multi-sensor VHR image database for DL-based image matching. For this purpose, the training dataset was composed of correct matching pairs and incorrect matching pairs by assigning true and false labels to image pairs extracted using a grid-based Scale Invariant Feature Transform (SIFT) algorithm from a total of 12 multi-temporal and multi-sensor VHR images. The Siamese convolutional neural network (SCNN), proposed for matching pair extraction, is trained on the constructed training dataset and measures similarity by passing two images in parallel through two identical convolutional neural network branches. The results of this study confirm that data acquired from a VHR satellite image database can be used as a DL training dataset and indicate the potential to improve the efficiency of the matching process through an appropriate configuration of multi-sensor images. DL-based image matching techniques using multi-sensor VHR satellite images are expected to replace existing manual feature extraction methods owing to their stable performance, and thus to develop further into an integrated DL-based image registration framework.
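A minimal PyTorch sketch of the Siamese arrangement described above: two patches pass through the same convolutional branch, and the absolute difference of their embeddings is classified as match or non-match. The branch architecture, patch size, and loss are assumptions for illustration, not the network reported in the paper.

```python
# Sketch only: shared-weight Siamese CNN for match/non-match classification
# of image patch pairs (e.g., the true/false-labeled SIFT pairs).
import torch
import torch.nn as nn

class SiameseCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # One shared branch; both patches pass through the same weights.
        self.branch = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
            nn.Linear(64 * 4 * 4, 128),
        )
        # Classify the absolute difference of the two embeddings.
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(128, 1))

    def forward(self, patch_a, patch_b):
        emb_a = self.branch(patch_a)
        emb_b = self.branch(patch_b)
        return self.head(torch.abs(emb_a - emb_b)).squeeze(1)  # logit: match vs. non-match

# Training would use BCEWithLogitsLoss on the true/false pair labels, e.g.:
#   loss = nn.BCEWithLogitsLoss()(model(a, b), labels.float())
```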

Landscape Object Classification and Attribute Information System for Standardizing Landscape BIM Library (조경 BIM 라이브러리 표준화를 위한 조경객체 및 속성정보 분류체계)

  • Kim, Bok-Young
    • Journal of the Korean Institute of Landscape Architecture / v.51 no.2 / pp.103-119 / 2023
  • Since the Korean government decided to apply the policy of BIM (Building Information Modeling) to the entire construction industry, adoption and utilization have shown a positive trend. BIM can reduce workloads by building model objects into libraries that conform to standards, enabling consistent quality, data integrity, and compatibility. In the domestic architecture and civil engineering sectors and the overseas landscape architecture sector, many BIM library standardization studies have been conducted, and guidelines have been established based on them. Currently, basic research and attempts to introduce BIM are being made in the Korean landscape architecture field, but diffusion has been delayed due to difficulties in application. This can be addressed by enhancing the efficiency of BIM work using standardized libraries. Therefore, this study aims to provide a starting point for discussion and presents a classification system for objects and attribute information that can be referred to when creating landscape libraries in practice. The standardization of the landscape BIM library was explored from two directions: object classification and attribute information items. First, the Korean construction information classification system, the product inventory classification system, landscape design and construction standards, and the BIM object classification of the NLA (Norwegian Association of Landscape Architects) were consulted to classify landscape objects. As a result, the objects were divided into 12 subcategories, including 'trees', 'shrubs', 'ground cover and others', 'outdoor installation', 'outdoor lighting facility', 'stairs and ramp', 'outdoor wall', 'outdoor structure', 'pavement', 'curb', 'irrigation', and 'drainage', under five major categories: 'landscape plant', 'landscape facility', 'landscape structure', 'landscape pavement', and 'irrigation and drainage'. Next, the attribute information for the objects was extracted and structured. To do this, the common attribute information items of the KBIMS (Korean BIM Standard) were included, and the object attribute information items that vary according to the type of object were included by referring to the PDT (Product Data Template) of the LI (UK Landscape Institute). As a result, the common attributes include 'identification', 'distribution', 'classification', and 'manufacture and supply' information, while the object attributes include 'naming', 'specifications', 'installation or construction', 'performance', 'sustainability', and 'operations and maintenance' information. The significance of this study lies in establishing the foundation for the introduction of landscape BIM through the standardization of library objects, which will enhance the efficiency of modeling tasks and improve the data consistency of BIM models across the various disciplines of the construction industry.
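To make the proposed classification easier to reference, the sketch below encodes the five major categories, twelve subcategories, and the two attribute groups listed above as a small Python data structure; the class and field names are illustrative, not a normative schema from the study.

```python
# Illustrative encoding of the classification proposed above; not a normative schema.
from dataclasses import dataclass, field

LANDSCAPE_OBJECT_CLASSES = {
    "landscape plant": ["trees", "shrubs", "ground cover and others"],
    "landscape facility": ["outdoor installation", "outdoor lighting facility",
                           "stairs and ramp"],
    "landscape structure": ["outdoor wall", "outdoor structure"],
    "landscape pavement": ["pavement", "curb"],
    "irrigation and drainage": ["irrigation", "drainage"],
}

COMMON_ATTRIBUTE_GROUPS = ["identification", "distribution", "classification",
                           "manufacture and supply"]
OBJECT_ATTRIBUTE_GROUPS = ["naming", "specifications", "installation or construction",
                           "performance", "sustainability", "operations and maintenance"]

@dataclass
class LandscapeLibraryObject:
    major_category: str
    subcategory: str
    common_attributes: dict = field(default_factory=dict)  # keyed by COMMON_ATTRIBUTE_GROUPS
    object_attributes: dict = field(default_factory=dict)  # keyed by OBJECT_ATTRIBUTE_GROUPS

    def __post_init__(self):
        # Guard against subcategories placed under the wrong major category.
        assert self.subcategory in LANDSCAPE_OBJECT_CLASSES.get(self.major_category, [])
```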

Gap-Filling of Sentinel-2 NDVI Using Sentinel-1 Radar Vegetation Indices and AutoML (Sentinel-1 레이더 식생지수와 AutoML을 이용한 Sentinel-2 NDVI 결측화소 복원)

  • Youjeong Youn;Jonggu Kang;Seoyeon Kim;Yemin Jeong;Soyeon Choi;Yungyo Im;Youngmin Seo;Myoungsoo Won;Junghwa Chun;Kyungmin Kim;Keunchang Jang;Joongbin Lim;Yangwon Lee
    • Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1341-1352 / 2023
  • The normalized difference vegetation index (NDVI) derived from satellite images is a crucial tool for monitoring forests and agriculture over broad areas because the periodic acquisition of the data is ensured. However, optical sensor-based vegetation indices (VI) are not accessible in areas covered by clouds. This paper presents a synthetic aperture radar (SAR) based approach to retrieving the optical sensor-based NDVI using machine learning. SAR systems can observe the land surface day and night in all weather conditions. Radar vegetation indices (RVI) from the Sentinel-1 vertical-vertical (VV) and vertical-horizontal (VH) polarizations, surface elevation, and air temperature are used as the input features for an automated machine learning (AutoML) model to conduct the gap-filling of the Sentinel-2 NDVI. The mean bias error (MBE) was 7.214E-05, and the correlation coefficient (CC) was 0.878, demonstrating the feasibility of the proposed method. This approach can be applied to the construction of gap-free nationwide NDVI using Sentinel-1 and Sentinel-2 images for environmental monitoring and resource management.
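The sketch below illustrates the overall gap-filling setup under stated assumptions: the dual-polarization RVI form 4·VH/(VV+VH) is a commonly used Sentinel-1 index and is assumed here rather than taken from the paper, and a RandomForestRegressor stands in for the AutoML model; all array names are hypothetical.

```python
# Sketch only: vv, vh, elevation, air_temp, and ndvi are co-registered 2-D
# arrays; cloud_free_mask marks pixels with valid optical NDVI. A random
# forest stands in for the AutoML model used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def radar_vegetation_index(vv, vh):
    """Dual-pol RVI from linear-scale backscatter (assumed form, not from the paper)."""
    return 4.0 * vh / (vv + vh)

def fit_gap_filler(vv, vh, elevation, air_temp, ndvi, cloud_free_mask):
    """Train on cloud-free pixels, then predict NDVI for every pixel."""
    features = np.column_stack([
        radar_vegetation_index(vv, vh).ravel(),
        vv.ravel(), vh.ravel(),
        elevation.ravel(), air_temp.ravel(),
    ])
    mask = cloud_free_mask.ravel()
    model = RandomForestRegressor(n_estimators=200, n_jobs=-1, random_state=0)
    model.fit(features[mask], ndvi.ravel()[mask])
    return model.predict(features).reshape(ndvi.shape)  # gap-filled NDVI map
```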

Ameliorative Effects of Soybean Leaf Extract on Dexamethasone-Induced Muscle Atrophy in C2C12 Myotubes and a C57BL/6 Mouse Model (콩잎 추출물의 근위축 개선 효과)

  • Hye Young Choi;Young-Sool Hah;Yeong Ho Ji;Jun Young Ha;Hwan Hee Bae;Dong Yeol Lee;Won Min Jeong;Dong Kyu Jeong;Jun-Il Yoo;Sang Gon Kim
    • Journal of Life Science / v.33 no.12 / pp.1036-1045 / 2023
  • Sarcopenia, a condition characterized by the insidious loss of skeletal muscle mass and strength, represents a significant and growing healthcare challenge, impacting the mobility and quality of life of aging populations worldwide. This study investigated the therapeutic potential of soybean leaf extract (SL) for dexamethasone (Dexa)-induced muscle atrophy in vitro and in an in vivo model. In vitro experiments showed that SL significantly alleviated Dexa-induced atrophy in C2C12 myotube cells, as evidenced by preserved myotube morphology, density, and size. Moreover, SL treatment significantly reduced the mRNA and protein levels of muscle RING-finger protein-1 (MuRF1) and muscle atrophy F-box (MAFbx), key factors regulating muscle atrophy. In a Dexa-induced atrophy mouse model, SL administration significantly inhibited Dexa-induced weight loss and muscle wasting, preserving the mass of the gastrocnemius and tibialis anterior muscles. Furthermore, mice treated with SL exhibited significant improvements in muscle function compared to their counterparts suffering from Dexa-induced muscle atrophy, as evidenced by a notable increase in grip strength and extended endurance on treadmill tests. Moreover, SL suppressed the expression of muscle atrophy-related proteins in skeletal muscle, highlighting its protective role against Dexa-induced muscle atrophy. These results suggest that SL has potential as a natural treatment for muscle-wasting conditions, such as sarcopenia.