• Title/Summary/Keyword: feature model validation

Detection of Pulmonary Region in Medical Images through Improved Active Contour Model

  • Kwon Yong-Jun;Won Chul-Ho;Kim Dong-Hun;Kim Pil-Un;Park Il-Yong;Park Hee-Jun;Lee Jyung-Hyun;Kim Myoung-Nam;Cho Jin-Ho
    • Journal of Biomedical Engineering Research / v.26 no.6 / pp.357-363 / 2005
  • Active contour models have been extensively used to segment, match, and track objects of interest in computer vision and image processing applications, particularly to locate object boundaries. In conventional methods, an object boundary is extracted by controlling the internal and external energy based on energy minimization. However, this approach still suffers from several problems, such as initialization and poor convergence in concave regions. In particular, a contour is unable to enter a concave region because of the stretching and bending characteristics of the internal energy. Therefore, this study proposes a method that controls the internal energy by moving the local perpendicular bisector point of each control point on the contour, and determines the object boundary by minimizing the energy relative to the external energy. Convergence in a concave region can then be effectively achieved for the feature of interest using the internal energy, and several objects can be detected using a multi-detection method based on the initial contour. The proposed method is compared with conventional methods through objective validation and subjective evaluation. The results indicate that the proposed method can be efficiently applied to the detection of the pulmonary parenchyma region in medical images.
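
For context, the conventional snake baseline that this paper improves on can be sketched with scikit-image's active_contour; the synthetic image, initial contour, and parameter values below are illustrative assumptions, not the authors' settings or method.

```python
import numpy as np
from skimage import filters, segmentation

# Stand-in grayscale image; in practice this would be a lung CT/X-ray slice.
rng = np.random.default_rng(0)
img = rng.random((256, 256))

# Circular initial contour around the presumed region of interest (row, col).
s = np.linspace(0, 2 * np.pi, 200)
init = np.column_stack([128 + 80 * np.sin(s), 128 + 80 * np.cos(s)])

# alpha penalizes stretching and beta penalizes bending; a stiff contour
# (large beta) is exactly what keeps a conventional snake out of concave regions.
snake = segmentation.active_contour(
    filters.gaussian(img, sigma=3),
    init, alpha=0.015, beta=10.0, gamma=0.001,
)
print(snake.shape)  # (200, 2) boundary points after energy minimization
```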

Revolutionizing Traffic Sign Recognition with YOLOv9 and CNNs

  • Muteb Alshammari;Aadil Alshammari
    • International Journal of Computer Science & Network Security / v.24 no.8 / pp.14-20 / 2024
  • Traffic sign recognition is an essential feature of intelligent transportation systems and Advanced Driver Assistance Systems (ADAS), which are necessary for improving road safety and advancing the development of autonomous cars. This research investigates the incorporation of the YOLOv9 model into traffic sign recognition systems, utilizing its sophisticated functionalities, such as Programmable Gradient Information (PGI) and the Generalized Efficient Layer Aggregation Network (GELAN), to tackle enduring difficulties in object detection. We employed a publicly accessible dataset obtained from Roboflow, consisting of 3130 images classified into five distinct categories: speed_40, speed_60, stop, green, and red. The dataset was systematically split into training (68%), validation (21%), and testing (12%) subsets. Our experiments show that YOLOv9 achieves a mean Average Precision (mAP@0.5) of 0.959, indicating excellent precision and recall for the majority of traffic sign classes, though the red traffic sign class still leaves room for improvement. We also analyzed the distribution of instances among traffic sign categories and the size variation within the dataset to help ensure that the model performs well in real-world conditions. The findings confirm that YOLOv9 substantially improves the accuracy and reliability of traffic sign recognition, establishing it as a dependable option for deployment in intelligent transportation systems and ADAS, and demonstrating its promise for making roadways safer and more efficient.
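
As a rough illustration of the described workflow (not the authors' released code), training and validating a YOLOv9 model on such a dataset with the Ultralytics API might look like the following; the dataset file name "signs.yaml" and the training settings are assumptions.

```python
# Requires: pip install ultralytics
from ultralytics import YOLO

model = YOLO("yolov9c.pt")  # pretrained YOLOv9 checkpoint

# "signs.yaml" (hypothetical) would point at the Roboflow export with the five
# classes (speed_40, speed_60, stop, green, red) and the 68/21/12 split.
model.train(data="signs.yaml", epochs=100, imgsz=640)

metrics = model.val()     # evaluates on the validation split
print(metrics.box.map50)  # mAP@0.5; the paper reports 0.959
```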

Corpus of Eye Movements in L3 Spanish Reading: A Prediction Model

  • Hui-Chuan Lu;Li-Chi Kao;Zong-Han Li;Wen-Hsiang Lu;An-Chung Cheng
    • Asia Pacific Journal of Corpus Research / v.5 no.1 / pp.23-36 / 2024
  • This research centers on the Taiwan Eye-Movement Corpus of Spanish (TECS), a specially created corpus comprising eye-tracking data from Chinese-speaking learners of Spanish as a third language in Taiwan. Its primary purpose is to explore the broad utility of TECS in understanding language learning processes, particularly the initial stages of language learning. Constructing the corpus involves gathering eye-tracking, reading comprehension, and language proficiency data to develop a machine-learning model that predicts learner behaviors and subsequently undergoes a predictability test for validation. The focus is on examining attention during input processing and its relationship to language learning outcomes. The TECS eye-tracking data consist of indicators derived from eye movement recordings made while reading Spanish sentences with temporal references. These indicators are obtained from eye movement experiments focusing on tense verbal inflections and temporal adverbs. Chinese expresses tense using aspect markers, lexical references, and contextual cues, differing significantly from inflectional languages like Spanish, so Chinese-speaking learners of Spanish face particular challenges in learning verbal morphology and tenses. The data from the eye movement experiments were structured into feature vectors, with learner behaviors serving as class labels. After categorizing the collected data, we used two machine learning methods for classification and regression: Random Forests and the k-nearest neighbors algorithm (KNN). By leveraging these algorithms, we predicted learner behaviors and conducted performance evaluations to enhance our understanding of the nexus between learner behaviors and the language learning process. Future research may further enrich TECS by gathering data from subsequent eye-movement experiments, specifically targeting various Spanish tenses and temporal lexical references during text reading. These endeavors promise to broaden and refine the corpus, advancing our understanding of language processing.
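
A minimal sketch of the classification step described above, with stand-in data in place of the TECS indicators (the abstract does not publish the actual feature set or labels):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((120, 8))     # stand-in eye-movement feature vectors
y = rng.integers(0, 2, 120)  # stand-in learner-behavior class labels

# The two algorithm families named in the abstract: Random Forests and KNN.
for clf in (RandomForestClassifier(n_estimators=200, random_state=0),
            KNeighborsClassifier(n_neighbors=5)):
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(type(clf).__name__, round(score, 3))
```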

An SVR-Based Pseudo Modified Einstein Procedure Incorporating the H-ADCP Model for Real-Time Total Sediment Discharge Monitoring (실시간 총유사량 모니터링을 위한 H-ADCP 연계 수정 아인슈타인 방법의 의사 SVR 모형)

  • Noh, Hyoseob;Son, Geunsoo;Kim, Dongsu;Park, Yong Sung
    • KSCE Journal of Civil and Environmental Engineering Research / v.43 no.3 / pp.321-335 / 2023
  • Monitoring sediment loads in natural rivers is a key process in river engineering, but it is costly and dangerous. In practice, suspended loads are measured directly, and total loads, the sum of suspended loads and bed loads, are estimated. This study proposes a real-time sediment discharge monitoring system using a horizontal acoustic Doppler current profiler (H-ADCP) and support vector regression (SVR). The proposed system comprises two SVR models: one for suspended sediment concentration (SVR-SSC) and one for total loads (SVR-QTL). SVR-SSC estimates the SSC, and SVR-QTL mimics the modified Einstein procedure. Grid search with K-fold cross-validation (Grid-CV) and recursive feature elimination (RFE) were employed to determine the SVR hyperparameters and input variables. The two SVR models showed reasonable cross-validation scores (R2) of 0.885 (SVR-SSC) and 0.860 (SVR-QTL). During the time-series sediment load monitoring period, we successfully detected various sediment transport phenomena in natural streams, such as hysteresis loops and sensitive sediment fluctuations. The newly proposed sediment monitoring system depends only on the features gauged by the H-ADCP, without additional assumptions about hydraulic variables (e.g., friction slope and suspended sediment size distribution). The method can be applied economically to any ADCP-equipped discharge monitoring station and is expected to enhance the temporal resolution of sediment monitoring.
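
The Grid-CV plus RFE pipeline described here can be sketched with scikit-learn as follows; the feature count, grid values, and the use of a linear kernel for the RFE ranking step are assumptions, not the paper's exact settings.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.feature_selection import RFE
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.random((300, 6))  # stand-in H-ADCP features (velocities, backscatter, stage, ...)
y = rng.random(300)       # stand-in SSC or total-load target

# RFE needs a ranking criterion (coef_), hence the linear kernel here.
selector = RFE(SVR(kernel="linear"), n_features_to_select=4).fit(X, y)
X_sel = selector.transform(X)

# Grid search with K-fold cross-validation, scored by R^2 as in the paper.
grid = GridSearchCV(SVR(kernel="rbf"),
                    {"C": [1, 10, 100], "gamma": [0.01, 0.1, 1]},
                    cv=5, scoring="r2")
grid.fit(X_sel, y)
print(grid.best_params_, grid.best_score_)
```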

A modified U-net for crack segmentation by Self-Attention-Self-Adaption neuron and random elastic deformation

  • Zhao, Jin;Hu, Fangqiao;Qiao, Weidong;Zhai, Weida;Xu, Yang;Bao, Yuequan;Li, Hui
    • Smart Structures and Systems / v.29 no.1 / pp.1-16 / 2022
  • Despite recent breakthroughs in deep learning and computer vision, the pixel-wise identification of tiny objects in high-resolution images with complex disturbances remains challenging. This study proposes a modified U-net for tiny crack segmentation in real-world steel-box-girder bridges. The modified U-net adopts the common U-net framework and a novel Self-Attention-Self-Adaption (SASA) neuron as the fundamental computing element. The Self-Attention module applies softmax and gate operations to obtain the attention vector, enabling the neuron to focus on the most significant receptive fields when processing large-scale feature maps. The Self-Adaption module consists of a multilayer perceptron subnet and achieves deeper feature extraction inside a single neuron. For data augmentation, a grid-based crack random elastic deformation (CRED) algorithm is designed to enrich the diversity and irregular shapes of distributed cracks: grid-based uniform control nodes are first set on both the input images and the binary labels, random offsets are then applied to these control nodes, and bilinear interpolation is performed for the remaining pixels. The proposed SASA neuron and CRED algorithm are deployed together to train the modified U-net. 200 raw images with a high resolution of 4928 × 3264 are collected, 160 for training and the remaining 40 for testing. 512 × 512 patches are generated from the original images by a sliding window with an overlap of 256 as inputs. Results show that the average IoU between the recognized and ground-truth cracks reaches 0.409, which is 29.8% higher than that of the regular U-net. A five-fold cross-validation study verifies that the proposed method is robust to different training and test images. Ablation experiments further demonstrate the effectiveness of the proposed SASA neuron and CRED algorithm: the IoU gains from using SASA and CRED individually add up to the gain of the full model, indicating that the two modules contribute to different stages (model and data) of the training process.
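
The CRED idea is simple enough to sketch directly; the grid size and offset magnitude below are illustrative assumptions (the patch size should be divisible by the grid so the shapes line up exactly):

```python
import numpy as np
from scipy.ndimage import map_coordinates, zoom

def cred(image, label, grid=8, max_offset=10.0, seed=0):
    """Grid-based crack random elastic deformation (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    # Random offsets at uniform control nodes, bilinearly upsampled to full size.
    dy = zoom(rng.uniform(-max_offset, max_offset, (grid, grid)), (h / grid, w / grid), order=1)
    dx = zoom(rng.uniform(-max_offset, max_offset, (grid, grid)), (h / grid, w / grid), order=1)
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = [yy + dy, xx + dx]
    # Identical warp for image and label; nearest-neighbor keeps labels binary.
    return (map_coordinates(image, coords, order=1, mode="reflect"),
            map_coordinates(label, coords, order=0, mode="reflect"))

img, lbl = cred(np.random.rand(512, 512), np.zeros((512, 512)))
print(img.shape, lbl.shape)  # (512, 512) (512, 512)
```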

A Development of Torsional Analysis Model and Parametric Study for PSC Box Girder Bridge with Corrugated Steel Web (복부 파형강판을 사용한 PSC 복합 교량의 비틀림 해석모델의 제안 및 변수해석)

  • Lee, Han-Koo;Kim, Kwang-Soo
    • KSCE Journal of Civil and Environmental Engineering Research / v.28 no.2A / pp.281-288 / 2008
  • Prestressed concrete (PSC) box girder bridges with corrugated steel webs have been drawing attention as a new type of PSC bridge that fully utilizes the respective features of concrete and steel. However, previous studies focused on the shear buckling of the corrugated steel web and on the connection between the concrete flange and the steel web; the torsional behavior still needs to be studied and a rational torsional analysis model developed for PSC box girders with corrugated steel webs. In this study, a torsional analysis model is developed using Rausch's equation based on the space truss model, an equilibrium equation that accounts for the softening effect of reinforced concrete elements, and a compatibility equation. The developed model is validated by comparison with experimental results from loading tests on PSC box girders with corrugated steel webs. Parametric studies are also performed to investigate the effects of prestressing force and concrete strength on the torsional behavior of PSC box girders with corrugated steel webs, and a modified correction factor for the torsional coefficient is derived from the parametric study using the proposed analytical model.
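
For orientation only, the space-truss (thin-walled tube) analogy underlying Rausch-type torsion models reduces, in its simplest design form, to T = 2·Ao·At·fyt·cot(θ)/s; the sketch below evaluates that basic relation and does not reproduce the paper's softening and compatibility equations.

```python
import math

def space_truss_torsion(Ao_mm2, At_mm2, fyt_MPa, s_mm, theta_deg=45.0):
    """Nominal torsional strength (N*mm) from the basic space-truss analogy."""
    # T = 2 * Ao * (At * fyt / s) * cot(theta), i.e. shear flow times 2*Ao.
    return 2.0 * Ao_mm2 * At_mm2 * fyt_MPa / (s_mm * math.tan(math.radians(theta_deg)))

# Illustrative section: Ao = 2e5 mm^2, one 71 mm^2 stirrup leg at 150 mm spacing.
print(space_truss_torsion(2.0e5, 71.0, 400.0, 150.0) / 1e6, "kN*m")
```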

Prediction of Key Variables Affecting NBA Playoffs Advancement: Focusing on 3 Points and Turnover Features (미국 프로농구(NBA)의 플레이오프 진출에 영향을 미치는 주요 변수 예측: 3점과 턴오버 속성을 중심으로)

  • An, Sehwan;Kim, Youngmin
    • Journal of Intelligence and Information Systems / v.28 no.1 / pp.263-286 / 2022
  • This study acquired NBA statistical information for a total of 32 years, from 1990 to 2022, using web crawling, observed variables of interest through exploratory data analysis, and generated related derived variables. Unused variables were removed through a purification process on the input data, and correlation analysis, t-tests, and ANOVA were performed on the remaining variables. For the variables of interest, the difference in means between the groups that did and did not advance to the playoffs was tested, and, to corroborate this, the mean differences among three groups (upper/middle/lower) based on ranking were also confirmed. Of the input data, only the current season's data was used as a test set, and 5-fold cross-validation was performed by dividing the rest into training and validation sets for model training. Overfitting was checked by comparing the cross-validation results with the final results on the test set and confirming that there was no difference in the performance metrics. Because the quality of the raw data is high and the statistical assumptions are satisfied, most of the models showed good results despite the small dataset. This study not only predicts NBA game results and classifies playoff advancement using machine learning, but also examines whether the variables of interest are among the most important predictors by assessing the importance of the input attributes. Visualizing SHAP values made it possible to overcome the limited interpretability of feature-importance scores alone and to compensate for the inconsistency of importance calculations when variables are entered or removed. A number of variables related to three-pointers and turnovers, the features of interest in this study, were found to be among the major variables affecting playoff advancement in the NBA. Although this study covers topics already treated in sports data analysis, such as match results, playoffs, and championship prediction, and comparatively analyzes several machine learning models, it differs in that the features of interest were specified in advance and statistically verified before being compared with the machine learning results. It is further differentiated from existing studies by presenting explanatory visualizations using SHAP, one of the XAI methods.
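
The cross-validation plus SHAP workflow described above can be sketched as follows; the model choice and feature names are illustrative stand-ins for the study's three-point and turnover attributes.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.random((200, 4)),
                 columns=["fg3_pct", "fg3a_rate", "tov_pct", "opp_tov_pct"])
y = rng.integers(0, 2, 200)  # 1 = advanced to playoffs (stand-in labels)

clf = GradientBoostingClassifier(random_state=0)
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# SHAP values give signed per-feature contributions, which is what lets the
# study go beyond a bare feature-importance ranking.
clf.fit(X, y)
shap_values = shap.TreeExplainer(clf).shap_values(X)
shap.summary_plot(shap_values, X)
```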

A Bayesian Estimation of Price for Commercial Property: Using subjective priors and a kriging technique (상업용 토지 가격의 베이지안 추정: 주관적 사전지식과 크리깅 기법의 활용을 중심으로)

  • Lee, Chang Ro;Eum, Young Seob;Park, Key Ho
    • Journal of the Korean Geographical Society / v.49 no.5 / pp.761-778 / 2014
  • There has been relatively little research on modeling prices for commercial property because of its low transaction volume in the market. Despite this thin-market characteristic, this paper attempts to estimate prices for commercial lots as accurately as possible. We constructed a model whose components are a mean structure (global trend), an exponential covariance function, and a pure error term, and applied it to actual sales price data for Seoul. We explicitly accounted for the spatial autocorrelation of land prices by utilizing kriging, a representative method of spatial interpolation, because commercial lot prices form differently depending on the submarket to which the lots belong. In addition, we applied Bayesian kriging to overcome data scarcity by incorporating experts' knowledge into the prior probability distribution. The model's excellent performance was verified by the results on validation data, and we confirmed that this performance is attributable to incorporating both experts' knowledge and spatial autocorrelation in the model construction. This paper is differentiated from previous studies in that it applies Bayesian kriging to estimate prices for commercial lots and explicitly combines experts' knowledge with data. The results should provide a useful guide for circumstances in which property prices must be estimated reliably from sparse transaction data.
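
Stripped of the Bayesian prior-elicitation step, the kriging core (mean structure plus exponential covariance plus nugget) can be sketched in a few lines; all parameter values below are illustrative assumptions.

```python
import numpy as np

def exp_cov(d, sill=1.0, rng_m=500.0):
    """Exponential covariance as a function of distance d (meters)."""
    return sill * np.exp(-d / rng_m)

def simple_krige(xy, z, xy0, sill=1.0, rng_m=500.0, nugget=0.1):
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    K = exp_cov(d, sill, rng_m) + nugget * np.eye(len(xy))     # data covariance
    k0 = exp_cov(np.linalg.norm(xy - xy0, axis=1), sill, rng_m)
    w = np.linalg.solve(K, k0)                                 # kriging weights
    mu = z.mean()  # crude stand-in for the paper's global-trend mean structure
    return mu + w @ (z - mu)

rng = np.random.default_rng(0)
xy = rng.random((50, 2)) * 1000.0  # stand-in lot coordinates (m)
z = rng.random(50)                 # stand-in (log) prices
print(simple_krige(xy, z, np.array([500.0, 500.0])))
```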

Spatial Big Data Query Processing System Supporting SQL-based Query Language in Hadoop (Hadoop에서 SQL 기반 질의언어를 지원하는 공간 빅데이터 질의처리 시스템)

  • Joo, In-Hak
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.10 no.1 / pp.1-8 / 2017
  • In this paper we present a spatial big data query processing system that can store spatial data in Hadoop and query the data with an SQL-based query language. The system stores large-scale spatial data in an HDFS-based storage system and supports spatial queries expressed in an SQL-based query language extended for spatial data processing. The query language supports the standard spatial data types and functions defined in the OGC simple feature model. This paper presents the development of the system's core functions, including query language parsing, query validation, query planning, and the connection with the storage system. We compare the performance of the proposed system with an existing system; our experiments show that it improves query execution time by about 58% over the existing system when executing region queries on spatial data stored in Hadoop.
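
The paper does not publish its exact dialect, but a region query of the kind described, using standard OGC simple-feature functions, would plausibly look like this hypothetical example (shown as a Python string for illustration):

```python
# Hypothetical OGC-style region query; the table, columns, and coordinates are
# invented for illustration and are not from the paper.
region_query = """
SELECT id, name
FROM poi
WHERE ST_Within(geom, ST_GeomFromText(
    'POLYGON((126.9 37.5, 127.1 37.5, 127.1 37.6, 126.9 37.6, 126.9 37.5))'))
"""
print(region_query)
```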

Analyzing Machine Learning Techniques for Fault Prediction Using Web Applications

  • Malhotra, Ruchika;Sharma, Anjali
    • Journal of Information Processing Systems / v.14 no.3 / pp.751-770 / 2018
  • Web applications are indispensable in the software industry and continuously evolve, either to meet new criteria and/or to include new functionality. However, despite quality assurance via testing, the presence of defects hinders straightforward development. Several factors contribute to defects, and minimizing them is often expensive in terms of man-hours. Thus, detecting fault proneness in the early phases of software development is important, and a fault prediction model for identifying fault-prone classes in a web application is highly desirable. In this work, we compare 14 machine learning techniques to analyse the relationship between object-oriented metrics and fault prediction in web applications. The study is carried out using various releases of the Apache Click and Apache Rave datasets. En route to the predictive analysis, the input basis set for each release is first optimized using the filter-based correlation feature selection (CFS) method. We find that the LCOM3, WMC, NPM, and DAM metrics are the most significant predictors. The statistical analysis of these metrics also shows good conformity with the CFS evaluation and affirms the role of these metrics in the defect prediction of web applications. The overall predictive ability of the different fault prediction models is first ranked using the Friedman technique and then statistically compared using Nemenyi post-hoc analysis. The results not only uphold the predictive capability of machine learning models for fault-prone classes in web applications, but also show that ensemble algorithms are most appropriate for defect prediction in the Apache datasets. Further, we derive a consensus between the metrics selected by the CFS technique and the statistical analysis of the datasets.
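
A minimal sketch of the Friedman ranking step (the Nemenyi post-hoc test needs an extra package such as scikit-posthocs and is omitted); the score matrix is invented for illustration.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Rows: dataset releases; columns: scores of three hypothetical models.
scores = np.array([[0.81, 0.84, 0.79],
                   [0.78, 0.82, 0.80],
                   [0.85, 0.88, 0.83],
                   [0.80, 0.83, 0.78]])

stat, p = friedmanchisquare(scores[:, 0], scores[:, 1], scores[:, 2])
ranks = np.apply_along_axis(lambda r: rankdata(-r), 1, scores)  # rank 1 = best
print(f"Friedman chi2={stat:.3f}, p={p:.3f}")
print("average ranks per model:", ranks.mean(axis=0))
```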