• Title/Summary/Keyword: Mean vector


Development of Simulation Software for EEG Signal Accuracy Improvement (EEG 신호 정확도 향상을 위한 시뮬레이션 소프트웨어 개발)

  • Jeong, Haesung;Lee, Sangmin;Kwon, Jangwoo
    • Journal of rehabilitation welfare engineering & assistive technology
    • /
    • v.10 no.3
    • /
    • pp.221-228
    • /
    • 2016
  • In this paper, we introduce simulation software for improving EEG signal accuracy. Users can check and train the accuracy of their own EEG signals with the software. Subjects were shown an emotional imagination condition with landscape photographs and a logical imagination condition with a mathematical problem. We apply an Independent Component Analysis algorithm to the recorded EEG data for noise removal, and then obtain beta-wave (${\beta}$, 14-30 Hz) data through a band-pass filter. Features are extracted with a Root Mean Square algorithm and classified with a Support Vector Machine. The classification accuracy is 78.21% before EEG signal accuracy improvement training, but rises to 91.67% after successive training, so users can improve their own EEG signal accuracy using our simulation software. We expect this to support efficient use of EEG-based BCI systems.
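The pipeline described in this abstract (band-pass filtering to the beta band, RMS feature extraction, SVM classification) can be sketched as follows. The sampling rate, filter order, and the synthetic two-class signals are assumptions for illustration only, not details taken from the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC

FS = 256  # assumed sampling rate (Hz); not stated in the abstract

def beta_band(signal, fs=FS, low=14.0, high=30.0):
    """Zero-phase band-pass filter to the beta band (14-30 Hz)."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

def rms_feature(signal):
    """Root-mean-square amplitude of one epoch."""
    return np.sqrt(np.mean(signal ** 2))

# Synthetic example: two classes that differ in beta-band power.
rng = np.random.default_rng(0)
t = np.arange(FS) / FS
epochs, labels = [], []
for i in range(40):
    amp = 1.0 if i % 2 else 3.0            # the two classes differ in 20 Hz amplitude
    x = amp * np.sin(2 * np.pi * 20 * t) + rng.normal(0, 0.5, FS)
    epochs.append([rms_feature(beta_band(x))])
    labels.append(i % 2)

clf = SVC(kernel="rbf").fit(epochs, labels)
print(clf.score(epochs, labels))
```

On this toy data the single RMS feature separates the classes cleanly; the paper's real gain comes from user training on their own EEG.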

Hierarchical Search-based Fast Schemes for Consecutive Block Error Concealment (연속된 블록 오류 은닉을 위한 계층 탐색 기반의 고속 알고리즘)

  • Jeon Soo-Yeol;Sohn Chae-Bong;Oh Seoung-Jun;Ahn Chang-Beom
    • Journal of Broadcast Engineering
    • /
    • v.9 no.4 s.25
    • /
    • pp.446-454
    • /
    • 2004
  • With the growth of multimedia systems, compressing image data has become more important in the area of multimedia services. Since a compressed image bitstream can often be seriously distorted by various types of channel noise, error concealment has become a very important issue. To solve this problem, Hsia proposed an error concealment algorithm that recovers lost block data using 1-D boundary matching vectors. His algorithm, however, requires high computational complexity, since each matching vector needs MAD (Mean Absolute Difference) values over all pixels of either the top or the bottom boundary line of a damaged block. We propose a hierarchical search-based fast error concealment scheme, as well as an approximated version of it, to reduce computational time. In the proposed scheme, a hierarchical search reduces the number of checking points when searching for a matching vector. The error concealment schemes proposed in this paper are about 3 times faster than Hsia's while preserving visual quality and PSNR.
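As a rough illustration of the idea, a coarse-to-fine search over candidate positions, scored by a 1-D boundary MAD cost, might look like the sketch below. The block layout, step sizes, and the toy gradient frame are assumptions for illustration; this is not Hsia's or the paper's exact formulation:

```python
import numpy as np

def mad(a, b):
    """Mean absolute difference between two pixel rows."""
    return np.mean(np.abs(a.astype(float) - b.astype(float)))

def boundary_cost(frame, top_row, bottom_row, x, size):
    """1-D boundary matching cost of a candidate block starting at column x:
    MAD of the candidate's top/bottom rows against the rows bordering the
    damaged block (hypothetical layout)."""
    cand = frame[:, x:x + size]
    return mad(cand[0], top_row) + mad(cand[-1], bottom_row)

def hierarchical_search(frame, top_row, bottom_row, size, span, steps=(8, 2, 1)):
    """Coarse-to-fine search: each pass refines around the current best
    candidate with a smaller stride, so far fewer positions are checked
    than in an exhaustive search."""
    best_x, lo, hi = 0, 0, span
    for step in steps:
        xs = range(lo, hi + 1, step)
        best_x = min(xs, key=lambda x: boundary_cost(frame, top_row, bottom_row, x, size))
        lo, hi = max(0, best_x - step), min(span, best_x + step)
    return best_x

# Toy frame with a horizontal gradient, so the cost is unimodal in x.
frame = np.tile(np.arange(64.0), (16, 1))
size = 8
top, bot = frame[0, 20:28], frame[-1, 20:28]   # boundary rows of the "damaged" block
x = hierarchical_search(frame, top, bot, size, span=56)
print(x)
```

With steps (8, 2, 1) over a 57-position span, only about 25 candidates are evaluated instead of 57, which is the source of the reported speed-up.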

Swimming speed measurement of Pacific saury (Cololabis saira) using Acoustic Doppler Current Profiler (음향도플러유향유속계를 이용한 꽁치어군의 유영속도 측정)

  • Lee, Kyoung-Hoon;Lee, Dae-Jae;Kim, Hyung-Seok;Park, Seong-Wook
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.46 no.2
    • /
    • pp.165-172
    • /
    • 2010
  • This study estimated the swimming velocity of Pacific saury (Cololabis saira) migrating off Funka Bay, Hokkaido, using an acoustic Doppler current profiler (OceanSurveyor, RDI, 153.6 kHz) installed on the T/S Ushio-maru of Hokkaido University, on September 27, 2003. From the raw Doppler-shift data of the ADCP, the maximum swimming velocity was measured as 163.0 cm/s, and the horizontal swimming speed and direction were $72.4{\pm}24.1\;cm/s$ and $160.1^{\circ}{\pm}22.3^{\circ}$, while the surrounding current speed and direction were $19.6{\pm}8.4\;cm/s$ and $328.1^{\circ}{\pm}45.3^{\circ}$. To calculate the actual swimming speed of Pacific saury in each bin, the mean surrounding current velocity vector measured for each stratified bin must be compared with the mean swimming velocity vector, estimated with a reference threshold (> -70 dB) and a 5 dB margin among the four ADCP beams. As a result, the actual averaged swimming velocity was 88.6 cm/s, and the averaged 3-D swimming velocity obtained from the 3-D velocity vector was 91.3 cm/s.
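The core vector operation here, subtracting the mean current vector from the apparent fish velocity, can be sketched with the mean values reported in the abstract. The direction convention (degrees clockwise from north) is an assumption, and only the horizontal components are used:

```python
import numpy as np

def polar_to_vec(speed, deg):
    """Speed (cm/s) and direction (assumed degrees clockwise from north) -> (east, north)."""
    rad = np.deg2rad(deg)
    return np.array([speed * np.sin(rad), speed * np.cos(rad)])

def vec_to_polar(v):
    """(east, north) components -> (speed, direction in degrees clockwise from north)."""
    return np.hypot(*v), np.rad2deg(np.arctan2(v[0], v[1])) % 360

# Mean horizontal values reported in the abstract
apparent = polar_to_vec(72.4, 160.1)   # fish motion as measured by the ADCP
current = polar_to_vec(19.6, 328.1)    # surrounding current

# Actual swimming = apparent motion minus the current the fish are carried by
swim = apparent - current
speed, heading = vec_to_polar(swim)
print(round(speed, 1), round(heading, 1))
```

With these mean values the subtraction yields a swimming speed on the order of 90 cm/s, consistent with the 88.6-91.3 cm/s range reported in the abstract.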

FORECAST OF DAILY MAJOR FLARE PROBABILITY USING RELATIONSHIPS BETWEEN VECTOR MAGNETIC PROPERTIES AND FLARING RATES

  • Lim, Daye;Moon, Yong-Jae;Park, Jongyeob;Park, Eunsu;Lee, Kangjin;Lee, Jin-Yi;Jang, Soojeong
    • Journal of The Korean Astronomical Society
    • /
    • v.52 no.4
    • /
    • pp.133-144
    • /
    • 2019
  • We develop forecast models of the daily probability of major flares (M- and X-class) based on empirical relationships between photospheric magnetic parameters and daily flaring rates from May 2010 to April 2018. In this study, we consider ten magnetic parameters characterizing the size, distribution, and non-potentiality of vector magnetic fields from the Solar Dynamics Observatory (SDO)/Helioseismic and Magnetic Imager (HMI), together with Geostationary Operational Environmental Satellites (GOES) X-ray flare data. The magnetic parameters are classified into three types: total unsigned parameters, total signed parameters, and mean parameters. We divide the data chronologically into two sets: 70% for training and 30% for testing. The empirical relationships between the parameters and flaring rates are used to predict flare occurrence probabilities for a given magnetic parameter value. The major results of this study are as follows. First, major flare occurrence rates are well correlated with all ten parameters, with correlation coefficients above 0.85. Second, logarithmic values of the flaring rates are well approximated by linear equations. Third, the total unsigned and signed parameters achieve better flare-prediction performance than the mean parameters, in terms of verification measures for probabilistic and converted binary forecasts. We conclude that, among the magnetic parameters considered in this study, the total quantity of non-potentiality of the magnetic fields is crucial for flare forecasting. When applied operationally, the model can be run with data at 21:00 TAI, with a slight underestimation of 2-6.3%.
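The abstract's second result, that log flaring rates are well approximated by linear functions of a magnetic parameter, suggests a simple sketch of how such a rate model converts to a daily probability. The calibration numbers below are hypothetical, and the rate-to-probability step assumes Poisson flare arrivals, which the abstract does not state:

```python
import numpy as np

# Hypothetical calibration data: a magnetic parameter (arbitrary units)
# versus observed daily major-flare rates.
param = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
rate = np.array([0.01, 0.03, 0.1, 0.32, 1.0])   # flares per day

# Per the abstract, log(rate) is well fit by a linear equation in the parameter.
slope, intercept = np.polyfit(param, np.log10(rate), 1)

def daily_flare_probability(p):
    """Probability of at least one major flare in a day, assuming flares
    arrive as a Poisson process with the fitted daily rate."""
    r = 10 ** (slope * p + intercept)
    return 1.0 - np.exp(-r)

print(round(daily_flare_probability(4.5), 3))
```

The `1 - exp(-rate)` step keeps probabilities below 1 even when the fitted daily rate exceeds one flare per day.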

Evaluating flexural strength of concrete with steel fibre by using machine learning techniques

  • Sharma, Nitisha;Thakur, Mohindra S.;Upadhya, Ankita;Sihag, Parveen
    • Composite Materials and Engineering
    • /
    • v.3 no.3
    • /
    • pp.201-220
    • /
    • 2021
  • In this study, the potential of three machine learning techniques, i.e., M5P, support vector machines, and Gaussian processes, was evaluated to find the best algorithm for predicting the flexural strength of concrete mixes with steel fibre. The study compares the results obtained from the above techniques on a given dataset. The dataset consists of 124 observations from past research studies and is randomly divided into training and testing subsets in a 70-30% proportion. Cement, fine aggregates, coarse aggregates, water, super plasticizer/high-range water reducer, steel fibre, fibre length, and curing days were taken as input parameters, whereas the flexural strength of the concrete mix was the output parameter. Performance of the techniques was checked with statistical evaluation parameters. Results show that the Gaussian process technique works better than the other techniques, with the minimum error bandwidth. Statistical analysis shows that the Gaussian process predicts better results, with a higher coefficient of correlation (0.9138), minimum mean absolute error (1.2954), and root mean square error (1.9672). Sensitivity analysis shows that steel fibre is the most significant parameter for predicting the flexural strength of the concrete mix. Regarding fibre shape, the mixed type performs better for this data than the hooked shape, with a higher CC of 0.9649, which shows that the shape of the fibres does affect the flexural strength of the concrete. However, the intricacy of the mixed fibres needs further investigation. For future mixes, the most favourable range for the increase in flexural strength of the concrete mix was found to be 1-3%.
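A minimal sketch of the winning approach, Gaussian process regression evaluated by CC and RMSE, is shown below. The real 124-row mix dataset is not reproduced here; the inputs and response are synthetic stand-ins loosely named after the abstract's parameters:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the mix-design data: four scaled inputs
# (e.g. cement, water, fibre content, curing days) and a made-up
# flexural-strength response.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(124, 4))
y = 4 + 3 * X[:, 2] + 2 * X[:, 0] - X[:, 1] + rng.normal(0, 0.1, 124)

# 70-30 split, mirroring the abstract's proportion
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X_tr, y_tr)
pred = gpr.predict(X_te)

cc = np.corrcoef(y_te, pred)[0, 1]                 # coefficient of correlation
rmse = np.sqrt(np.mean((y_te - pred) ** 2))        # root mean square error
print(round(cc, 3), round(rmse, 3))
```

The `WhiteKernel` term lets the GP estimate the noise level itself, which is one reason GPs often give narrow error bands on small tabular datasets like this one.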

A Study on the Development of Model for Estimating the Thickness of Clay Layer of Soft Ground in the Nakdong River Estuary (낙동강 조간대 연약지반의 지역별 점성토층 두께 추정 모델 개발에 관한 연구)

  • Seongin, Ahn;Dong-Woo, Ryu
    • Tunnel and Underground Space
    • /
    • v.32 no.6
    • /
    • pp.586-597
    • /
    • 2022
  • In this study, a model was developed for estimating the locational thickness of the upper clay layer, to be used in consolidation vulnerability evaluation in the Nakdong river estuary. To estimate ground-layer thickness, we developed four spatial estimation models using the machine learning algorithms RF (Random Forest), SVR (Support Vector Regression), and GPR (Gaussian Process Regression), and a geostatistical technique, Ordinary Kriging. Among the 4,712 borehole data in the study area collected for model development, 2,948 boreholes with an upper clay layer were used, and the Pearson correlation coefficient and mean squared error were used to quantitatively evaluate the performance of the developed models. In addition, for qualitative evaluation, each model was applied throughout the study area to estimate the upper clay layer, and the resulting thickness distribution characteristics were compared.
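Of the four estimators, Ordinary Kriging is the one with a compact closed form, so a minimal sketch is given below. The exponential variogram and its parameters are assumptions (the abstract does not give the fitted variogram), and the four toy boreholes stand in for the real 2,948:

```python
import numpy as np

def ordinary_kriging(coords, values, targets, sill=1.0, rng_=500.0, nugget=0.0):
    """Minimal ordinary kriging with an assumed exponential variogram."""
    def gamma(h):
        return nugget + sill * (1.0 - np.exp(-3.0 * h / rng_))

    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    # Kriging system with a Lagrange-multiplier row/column enforcing
    # that the weights sum to one (unbiasedness).
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0
    out = []
    for t in targets:
        b = np.ones(n + 1)
        b[:n] = gamma(np.linalg.norm(coords - t, axis=1))
        w = np.linalg.solve(A, b)[:n]
        out.append(w @ values)
    return np.array(out)

# Toy boreholes: clay-layer thickness (m) at four locations (m coordinates)
coords = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
thick = np.array([5.0, 7.0, 6.0, 8.0])
est = ordinary_kriging(coords, thick, np.array([[50.0, 50.0]]))[0]
print(est)
```

At the symmetric centre point all four weights are equal, so the estimate is the plain mean (6.5 m); at off-centre targets nearer boreholes receive larger weights.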

Predicting rock brittleness indices from simple laboratory test results using some machine learning methods

  • Davood Fereidooni;Zohre Karimi
    • Geomechanics and Engineering
    • /
    • v.34 no.6
    • /
    • pp.697-726
    • /
    • 2023
  • Brittleness, as an important property of rock, plays a crucial role both in the failure process of intact rock and in the response of a rock mass to excavation in engineering geological and geotechnical projects. Generally, rock brittleness indices are calculated from mechanical properties of rocks such as uniaxial compressive strength, tensile strength, and modulus of elasticity; these properties are determined from complicated, expensive, and time-consuming laboratory tests. For this reason, the present research attempts to predict rock brittleness indices from simple, inexpensive, and quick laboratory test results, namely dry unit weight, porosity, slake-durability index, P-wave velocity, Schmidt rebound hardness, and point load strength index, using multiple linear regression, exponential regression, support vector machines (SVM) with various kernels, a generated fuzzy inference system, and regression tree ensembles (RTE) with a boosting framework; this could be considered an innovative aspect of the present research. For this purpose, 39 rock samples (five igneous, twenty-six sedimentary, and eight metamorphic) were collected from different regions of Iran. Mineralogical, physical, and mechanical properties, as well as five well-known rock brittleness indices (i.e., B1, B2, B3, B4, and B5), were measured for the selected rock samples before application of the above-mentioned machine learning techniques. The performance of the developed models was evaluated with several statistical metrics, such as mean square error, relative absolute error, root relative absolute error, determination coefficient, variance accounted for, mean absolute percentage error, and standard deviation of the error. Comparison of the results revealed that, among the studied methods, SVM is the most suitable for predicting B1, B2, and B5, while RTE predicts B3 and B4 better than the other methods.
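As a minimal illustration of how brittleness indices are derived from strength values, the sketch below uses definitions often attributed to Hucka and Das. The abstract does not specify which formulas its B1-B5 use, so these particular forms, and the example strength values, are assumptions:

```python
def brittleness_indices(ucs, tensile):
    """Common brittleness indices from uniaxial compressive strength (ucs)
    and tensile strength, both in the same units (e.g. MPa)."""
    b1 = ucs / tensile                       # strength ratio
    b2 = (ucs - tensile) / (ucs + tensile)   # normalized strength difference
    b3 = ucs * tensile / 2.0                 # area-based index
    return b1, b2, b3

# e.g. a strong sandstone: UCS = 120 MPa, tensile strength = 10 MPa (illustrative)
b1, b2, b3 = brittleness_indices(120.0, 10.0)
print(b1, round(b2, 3), b3)
```

The appeal of the paper's approach is that, once trained, such indices can be approximated without measuring UCS or tensile strength at all.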

Few-shot learning using the median prototype of the support set (Support set의 중앙값 prototype을 활용한 few-shot 학습)

  • Eu Tteum Baek
    • Smart Media Journal
    • /
    • v.12 no.1
    • /
    • pp.24-31
    • /
    • 2023
  • Meta-learning is a form of metacognition that instantly distinguishes the known from the unknown; it is a learning approach that adapts to and solves new problems by self-learning from a small amount of data. Few-shot learning is a type of meta-learning that accurately predicts query data even from a very small support set. In this study, we propose a method to overcome the limitations of the prototype formed from the mean vector of each class: we replace the mean prototype used in conventional few-shot learning with the median of the support set. For quantitative evaluation, a handwriting recognition dataset and the mini-ImageNet dataset were used and compared with the existing method. The experimental results confirm that the performance is improved compared to the existing mean-prototype method.
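The mean-versus-median prototype distinction can be shown in a few lines. The 2-D embeddings below are a toy construction (real prototypical networks operate on learned high-dimensional embeddings); the outlier in class 0 illustrates the limitation of the mean prototype that this paper targets:

```python
import numpy as np

def prototypes(support, labels, reducer):
    """One prototype per class from the support set (reducer: np.mean or np.median)."""
    classes = np.unique(labels)
    protos = np.stack([reducer(support[labels == c], axis=0) for c in classes])
    return classes, protos

def classify(query, classes, protos):
    """Nearest-prototype classification in the embedding space."""
    return classes[np.argmin(np.linalg.norm(protos - query, axis=1))]

# Toy 2-way 5-shot support set in a 2-D embedding; class 0 contains one outlier.
support = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [-0.1, 0.0], [20.0, 20.0],
                    [3.0, 3.0], [3.1, 3.0], [3.0, 3.1], [2.9, 3.0], [3.0, 3.0]])
labels = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
query = np.array([0.2, 0.2])   # clearly a class-0 embedding

classes, p_mean = prototypes(support, labels, np.mean)
_, p_median = prototypes(support, labels, np.median)
print(classify(query, classes, p_mean))    # outlier drags the mean prototype toward class 1
print(classify(query, classes, p_median))  # the median prototype stays with the cluster
```

With the outlier present, the mean prototype misclassifies the query while the per-dimension median, being robust to the outlier, classifies it correctly.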

A study of glass and carbon fibers in FRAC utilizing machine learning approach

  • Ankita Upadhya;M. S. Thakur;Nitisha Sharma;Fadi H. Almohammed;Parveen Sihag
    • Advances in materials Research
    • /
    • v.13 no.1
    • /
    • pp.63-86
    • /
    • 2024
  • Asphalt concrete (AC) is a mixture of bitumen and aggregates and is very sensitive in the design of flexible pavement. In this study, the Marshall stability of glass- and carbon-fibre bituminous concrete was predicted using Artificial Neural Network (ANN), Support Vector Machine (SVM), Random Forest (RF), and M5P tree machine learning algorithms. To predict the Marshall stability, nine input parameters, i.e., bitumen, glass and carbon fibres mixed in 100:0, 75:25, 50:50, 25:75, and 0:100 percentages (designated as 100GF:0CF, 75GF:25CF, 50GF:50CF, 25GF:75CF, 0GF:100CF), bitumen grade (VG), fibre length (FL), and fibre diameter (FD), were utilized from experimental and literature data. Seven statistical indices, i.e., coefficient of correlation (CC), mean absolute error (MAE), root mean squared error (RMSE), relative absolute error (RAE), root relative squared error (RRSE), scattering index (SI), and BIAS, were applied to assess the effectiveness of the developed models. According to the performance evaluation results, the ANN outperformed the other models, with CC values of 0.9147 and 0.8648, MAE values of 1.3757 and 1.978, RMSE values of 1.843 and 2.6951, RAE values of 39.88 and 49.31, RRSE values of 40.62 and 50.50, SI values of 0.1379 and 0.2027, and BIAS values of -0.1290 and -0.2357 in the training and testing stages, respectively. The Taylor diagram (testing stage) also confirmed that the ANN-based model outperforms the other models. Sensitivity analysis showed that fibre length is the most influential of the nine input parameters, whereas the 25GF:75CF fibre combination was the most effective of all the fibre mixes for Marshall stability.
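A minimal sketch of the winning ANN approach, scored with some of the abstract's indices (CC, MAE, RMSE, BIAS), is shown below. The real fibre-asphalt mixes are not reproduced; the inputs and Marshall-stability response are synthetic stand-ins loosely following the abstract's parameters:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in: four scaled inputs (bitumen content, fibre mix ratio,
# fibre length, fibre diameter) and a made-up Marshall-stability response.
rng = np.random.default_rng(3)
n = 200
X = rng.uniform(0, 1, (n, 4))
y = 10 + 4 * X[:, 1] + 3 * X[:, 2] - 2 * X[:, 3] + rng.normal(0, 0.2, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs",
                   max_iter=2000, random_state=0).fit(X_tr, y_tr)
pred = ann.predict(X_te)

cc = np.corrcoef(y_te, pred)[0, 1]             # coefficient of correlation
mae = np.mean(np.abs(y_te - pred))             # mean absolute error
rmse = np.sqrt(np.mean((y_te - pred) ** 2))    # root mean squared error
bias = np.mean(pred - y_te)                    # BIAS (mean signed error)
print(round(cc, 3), round(mae, 3), round(rmse, 3), round(bias, 3))
```

The training/testing split mirrors the paper's two-stage evaluation; reporting both stages exposes overfitting, which is why the abstract quotes each index twice.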

The Prediction of Purchase Amount of Customers Using Support Vector Regression with Separated Learning Method (Support Vector Regression에서 분리학습을 이용한 고객의 구매액 예측모형)

  • Hong, Tae-Ho;Kim, Eun-Mi
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.213-225
    • /
    • 2010
  • With the rapid growth of information technology, data mining has empowered managers in charge of marketing tasks to present personalized and differentiated marketing programs to their customers. Most studies on customer response have focused on predicting whether customers would respond to a marketing promotion, as marketing managers have been eager to identify who would respond. Many studies utilizing data mining have thus addressed binary decision problems such as bankruptcy prediction, network intrusion detection, and fraud detection in credit card usage, and the prediction of customer response, itself a dichotomous decision problem, has been studied with similar methods. A number of competitive data mining techniques, such as neural networks, SVM (support vector machine), decision trees, logit, and genetic algorithms, have been applied to predicting customer response to marketing promotions. Marketing managers have also tried to classify their customers with quantitative measures such as recency, frequency, and monetary value acquired from their transaction databases; these measures capture whether customers purchased recently, how frequently they purchased in a period, and how much they spent at once. Using segmented customers, we propose an approach that can differentiate customers within the same rating. Our approach employs support vector regression to forecast the purchase amount of customers for each customer rating. Our study used a sample of 41,924 customers, extracted from the DMEF04 data set, who had purchased at least once in the last two years. We classified customers from first to fifth rating based on the purchase amount after a marketing promotion, where the first rating contains customers with a large purchase amount and the fifth rating contains non-respondents to the promotion. Our proposed model forecasts the purchase amount of the customers within the same rating, so marketing managers can build a differentiated and personalized marketing program for each customer even when customers belong to the same rating. In addition, we propose a more efficient learning method that separates the learning samples. We employed two learning methods to compare the proposed learning method with the general learning method for SVRs: LMW (Learning Method using Whole data for purchasing customers) is the general method for forecasting the purchase amount of customers, while our proposed method, LMS (Learning Method using Separated data for classifying purchasing customers), builds four different SVR models, one for each class of customers. To evaluate the models, we calculated MAE (Mean Absolute Error) and MAPE (Mean Absolute Percent Error) for each model's prediction of the purchase amount. For LMW, the overall performance was 0.670 MAPE and the best performance was 0.327 MAPE. The proposed LMS model generally performed better than the LMW model: its best performance was 0.275 MAPE, and its performance exceeded LMW's in each class of customers. Comparing LMS to LMW, our proposed model thus showed significantly better performance for forecasting the purchase amount of customers in each class. Our approach will be useful for marketing managers when they need to select customers for a promotion: even when customers belong to the same class, managers can offer them a differentiated and personalized marketing promotion.
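The LMW-versus-LMS contrast, one SVR over all customers versus a separate SVR per rating, can be sketched as below. The RFM-style inputs, the rating assignment, and the purchase-amount response are synthetic stand-ins, not the DMEF04 data:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)

# Synthetic stand-in for the customer data: scaled recency/frequency/monetary
# inputs, customers pre-assigned to ratings 1..4, purchase amount as output.
n = 400
X = rng.uniform(0, 1, (n, 3))
rating = rng.integers(1, 5, n)
y = rating * 50 + 100 * X[:, 2] + rng.normal(0, 5, n)

def mape(y_true, y_pred):
    """Mean absolute percent error."""
    return np.mean(np.abs((y_true - y_pred) / y_true))

# LMW: a single SVR trained on all purchasing customers
lmw = SVR(C=100).fit(X, y)
mape_lmw = mape(y, lmw.predict(X))

# LMS: a separate SVR per customer rating
mape_parts = []
for r in (1, 2, 3, 4):
    m = rating == r
    model = SVR(C=100).fit(X[m], y[m])
    mape_parts.append(mape(y[m], model.predict(X[m])))
mape_lms = np.mean(mape_parts)

print(round(mape_lmw, 3), round(mape_lms, 3))
```

Because each rating's purchase amounts cluster around a different baseline, the per-rating models face a much easier regression problem, which is the intuition behind LMS outperforming LMW in the paper.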