• Title/Summary/Keyword: Data-driven based Method

Deep learning-based approach to improve the accuracy of time difference of arrival-based sound source localization (도달시간차 기반의 음원 위치 추정법의 정확도 향상을 위한 딥러닝 적용 연구)

  • Iljoo Jeong;Hyunsuk Huh;In-Jee Jung;Seungchul Lee
    • The Journal of the Acoustical Society of Korea
    • /
    • v.43 no.2
    • /
    • pp.178-183
    • /
    • 2024
  • This study introduces an enhanced sound source localization technique, bolstered by a data-driven deep learning approach, to improve the precision and accuracy of direction-of-arrival estimation. Focused on refining Time Difference Of Arrival (TDOA)-based sound source localization, the research hinges on accurately estimating TDOA from cross-correlation functions. Accurate TDOA estimation remains a limitation in this field because the values measured from actual microphones are mixed with a large amount of noise. Additionally, the digitization of acoustic signals introduces quantization errors, associated with the sampling frequency of the measurement system, that limit the precision of TDOA estimation. A deep learning-based approach is designed to overcome these limitations in TDOA accuracy and precision. To validate the method, we conduct comprehensive evaluations using both two- and three-microphone array configurations. Moreover, the feasibility and real-world applicability of the suggested method are further substantiated through experiments conducted in an anechoic chamber.
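A minimal sketch of the conventional cross-correlation TDOA estimator that the deep-learning approach refines, showing the 1/fs quantization limit the abstract mentions; the signals, sampling rate, and function name below are illustrative, not the paper's setup.

```python
import numpy as np

def estimate_tdoa(x_ref, x_delayed, fs):
    """Delay of x_delayed relative to x_ref, from the peak of their
    cross-correlation. The result is quantized to multiples of 1/fs,
    which is the precision limit the deep-learning model targets."""
    corr = np.correlate(x_delayed, x_ref, mode="full")   # full cross-correlation
    lag = int(np.argmax(corr)) - (len(x_ref) - 1)        # lag index of the peak
    return lag / fs                                      # TDOA in seconds

# Toy usage: white noise delayed by 5 samples at fs = 16 kHz.
fs = 16000
rng = np.random.default_rng(0)
x1 = rng.standard_normal(4096)
x2 = np.concatenate([np.zeros(5), x1[:-5]])              # x1 delayed by 5 samples
print(estimate_tdoa(x1, x2, fs))                         # ~5 / 16000 = 3.125e-4 s
```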

Development of GIS Application Component for Supporting Administration Business of Local Government (지자체 행정업무 지원을위한 GIS 응용 컴포넌트 개발 : 토지 민원서비스 컴포넌트)

  • 서창완;김태현;이덕호;김일석
    • Spatial Information Research
    • /
    • v.8 no.1
    • /
    • pp.15-29
    • /
    • 2000
  • In the recent, rapidly changing technology environment, the computerization of administrative business is being driven, or will be driven, by local and central governments with huge budgets in order to provide improved information services to the public. This study investigates the possibility of applying GIS application components to the computerization of administrative business, so that local governments can avoid redundant investment and reuse existing investments. The land civil service application component was developed as part of the 「Development of Open GIS Component S/W」 project managed by the Ministry of Information and Communication. The GIS application component was based on the OpenGIS OLE/COM specification for standard interfaces, USD (Unified System Development) as the development method, UML (Unified Modeling Language) for system design, and Visual C++ for component implementation. The implemented components were the Process Control, Map, Print, and Statistics components, and they were verified using Visual Basic and Delphi. This study shows that component development is very useful for GIS application development in local governments, but standardization of business processes, data, and systems is an essential prerequisite for maximizing business applications.

On the Implementation of a Facial Animation Using the Emotional Expression Techniques (FAES : 감성 표현 기법을 이용한 얼굴 애니메이션 구현)

  • Kim Sang-Kil;Min Yong-Sik
    • The Journal of the Korea Contents Association
    • /
    • v.5 no.2
    • /
    • pp.147-155
    • /
    • 2005
  • In this paper, we present FAES (a Facial Animation with Emotion and Speech system) for speech-driven face animation with emotions. We animate face cartoons not only from the input speech, but also based on emotions derived from the speech signal. Our system also ensures smooth transitions and exact representation in the animation. To do this, after collecting training data, we built a database and used an SVM (Support Vector Machine) to recognize four categories of emotion: neutral, dislike, fear, and surprise. This enables speech-driven animation with emotions. The system was trained on young Korean speakers and focuses only on Korean emotional facial expressions. Experimental results demonstrate that more emotional areas are expanded, and the accuracies of emotion recognition and continuous speech recognition increase by 7% and 5%, respectively, compared with the previous method.
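A minimal sketch, under assumed inputs, of the SVM emotion-classification step described above: fixed-length acoustic feature vectors per utterance are classified into the four emotion categories. The feature dimensionality and the random training data are placeholders, not the paper's data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

EMOTIONS = ["neutral", "dislike", "fear", "surprise"]

# Placeholder training data: one fixed-length acoustic feature vector
# (e.g., MFCC statistics) per utterance, with an emotion label each.
rng = np.random.default_rng(0)
X_train = rng.standard_normal((200, 26))          # 200 utterances, 26-dim features
y_train = rng.integers(0, len(EMOTIONS), 200)     # emotion class indices

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)

x_new = rng.standard_normal((1, 26))              # features of a new utterance
print(EMOTIONS[clf.predict(x_new)[0]])            # predicted emotion for the animation
```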

Improving Performance of Recommendation Systems Using Topic Modeling (사용자 관심 이슈 분석을 통한 추천시스템 성능 향상 방안)

  • Choi, Seongi;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.3
    • /
    • pp.101-116
    • /
    • 2015
  • Recently, due to the development of smart devices and social media, vast amounts of information in various forms have accumulated. In particular, considerable research effort is being directed toward analyzing unstructured big data to resolve various social problems. Accordingly, the focus of data-driven decision-making is moving from structured data analysis to unstructured data analysis. In the field of recommendation systems, a typical area of data-driven decision-making, the need to use unstructured data to improve system performance has also steadily increased. Approaches to improving the performance of recommendation systems can be found in two aspects: improving algorithms and acquiring useful, high-quality data. Traditionally, most efforts to improve performance were made through the former approach, while the latter has attracted relatively little attention. In this sense, efforts to utilize unstructured data from various sources are timely and necessary. In particular, as the interests of users are directly connected with their needs, identifying user interests through unstructured big data analysis can be a clue for improving the performance of recommendation systems. Accordingly, this study proposes a methodology for improving recommendation systems by measuring user interests. Specifically, it proposes a method to quantify user interests by analyzing users' internet usage patterns and to predict users' repurchases based upon the discovered preferences. There are two important modules in this study. The first module predicts the repurchase probability of each category by analyzing users' purchase history. We include the first module in our research scope to compare the accuracy of the traditional purchase-based prediction model with that of the new model presented in the second module. This procedure extracts the purchase history of users. The core of our methodology lies in the second module, which extracts users' interests by analyzing the news articles the users have read. The second module constructs a correspondence matrix between topics and news articles by performing topic modeling on real-world news articles. The module then analyzes users' news access patterns and constructs a correspondence matrix between articles and users. By merging the results of these processes, we obtain a correspondence matrix between users and topics, which describes users' interests in a structured manner. Finally, using this matrix, the second module builds a model for predicting the repurchase probability of each category. This paper also provides experimental results from our performance evaluation. The outline of the data used in our experiments is as follows. We acquired web transaction data of 5,000 panels from a company that specializes in analyzing the rankings of internet sites. First, we extracted 15,000 URLs of news articles published from July 2012 to June 2013 from the original data and crawled the main contents of the news articles. We then selected 2,615 users who had read at least one of the extracted news articles. Among these 2,615 users, 359 target users had purchased at least one item from our target shopping mall 'G'. In the experiments, we analyzed the purchase history and news access records of these 359 internet users.
From the performance evaluation, we found that our prediction model using both users' interests and purchase history outperforms a prediction model using only users' purchase history in terms of misclassification ratio. In detail, our model outperformed the traditional one in the appliance, beauty, computer, culture, digital, fashion, and sports categories when artificial neural network-based models were used. Similarly, our model outperformed the traditional one in the beauty, computer, digital, fashion, food, and furniture categories when decision tree-based models were used, although the improvement is very small.
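A minimal sketch of the core idea of the second module, under assumed data: an article-topic matrix from topic modeling (LDA used here as a stand-in for the paper's topic-modeling setup) is merged with a user-article access matrix to yield the user-topic interest matrix. The toy corpus, matrices, and dimensions below are illustrative.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy corpus standing in for the crawled news articles.
articles = [
    "new smartphone camera review",
    "football league final score",
    "laptop processor benchmark results",
    "tennis player wins championship",
]

# Article-topic correspondence matrix from topic modeling.
counts = CountVectorizer().fit_transform(articles)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
article_topics = lda.fit_transform(counts)            # shape: (articles, topics)

# User-article correspondence matrix from news access logs (1 = read).
user_articles = np.array([[1, 0, 1, 0],               # user 0: tech reader
                          [0, 1, 0, 1]])              # user 1: sports reader

# Merging the two matrices yields the user-topic interest matrix.
user_topics = user_articles @ article_topics
print(user_topics)                                    # per-user topic interest scores
```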

A vibration-based approach for detecting arch dam damage using RBF neural networks and Jaya algorithms

  • Ali Zar;Zahoor Hussain;Muhammad Akbar;Bassam A. Tayeh;Zhibin Lin
    • Smart Structures and Systems
    • /
    • v.32 no.5
    • /
    • pp.319-338
    • /
    • 2023
  • The study presents a new hybrid data-driven method that combines radial basis function neural networks (RBF-NN) with the Jaya algorithm (JA) to provide effective structural health monitoring of arch dams. The novelty of this approach lies in the fact that only one user-defined parameter is required, which increases its effectiveness and efficiency compared to other machine learning techniques that often require tuning a large number of model parameters and hyper-parameters for training and testing, which is highly time-consuming. This approach seeks rapid damage detection in arch dams under dynamic conditions, to prevent potential disasters, by utilizing the RBF-NN to seamlessly integrate the dynamic elastic modulus (DEM) and modal parameters (such as natural frequency and mode shape) as damage indicators. To determine the dynamic characteristics of the arch dam, the JA sequentially optimizes an objective function rooted in vibration-based data sets. Two case studies of hyperbolic concrete arch dams were carefully designed using finite element simulation to demonstrate the effectiveness of the RBF-NN model in conjunction with the Jaya algorithm. The testing results demonstrated that the proposed method achieves significant computational time savings while effectively detecting damage in arch dam structures with complex nonlinearities. Furthermore, despite training data contaminated with a high level of noise, the fusion of RBF-NN and JA remained robust, with high accuracy.
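For illustration, a minimal sketch of the Jaya update rule the abstract relies on: population members move toward the current best solution and away from the worst, with no algorithm-specific tuning parameters. The toy objective, bounds, and modal-frequency model below are assumptions, not the paper's dam model.

```python
import numpy as np

def jaya_minimize(objective, bounds, pop_size=20, iterations=200, seed=0):
    """Minimal Jaya optimizer: candidates move toward the best member and
    away from the worst; a candidate replaces its parent only if it improves."""
    rng = np.random.default_rng(seed)
    low, high = np.array(bounds).T
    pop = rng.uniform(low, high, size=(pop_size, len(low)))
    for _ in range(iterations):
        scores = np.array([objective(x) for x in pop])
        best, worst = pop[scores.argmin()], pop[scores.argmax()]
        r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
        cand = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
        cand = np.clip(cand, low, high)
        improved = np.array([objective(x) for x in cand]) < scores
        pop[improved] = cand[improved]
    scores = np.array([objective(x) for x in pop])
    return pop[scores.argmin()]

# Illustrative objective: mismatch between "measured" natural frequencies and
# those predicted by a damage-parameter vector x (all numbers made up here).
measured = np.array([2.1, 5.3, 9.8])
predict = lambda x: np.array([2.4, 5.9, 10.5]) * (1.0 - 0.5 * x)   # toy model
objective = lambda x: np.sum((predict(x) - measured) ** 2)
print(jaya_minimize(objective, bounds=[(0, 1)] * 3))
```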

Wireless Caching Techniques Based on Content Popularity for Network Resource Efficiency and Quality of Experience Improvement (네트워크 자원효율 및 QoE 향상을 위한 콘텐츠 인기도 기반 무선 캐싱 기술)

  • Kim, Geun-Uk;Hong, Jun-Pyo
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.21 no.8
    • /
    • pp.1498-1507
    • /
    • 2017
  • According to a recent report, global mobile data traffic is expected to increase elevenfold from 2016 to 2020. Moreover, this growth is expected to be driven mainly by mobile video traffic, which is expected to account for about 70% of total mobile data traffic. To cope with this enormous mobile traffic, we need to understand the characteristics of video traffic. Recently, repetitive requests for some popular content, such as popular YouTube videos, have caused enormous network traffic overhead. If we construct a network with nodes capable of caching content based on its popularity, we can reduce the network overhead by serving requests from the cached content. Through device-to-device communication, multicast, and helpers, video throughput can improve by about 1.5~2 times, and prefix caching reduces the playback delay by about 0.2~0.5 times compared with the conventional method. In this paper, we introduce some recent work on content popularity-based caching techniques in wireless networks.
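A minimal sketch of the popularity-based caching idea under an assumed Zipf request distribution: a node caches the most popular items and the fraction of requests served locally is estimated. The catalogue size, Zipf exponent, and cache size below are illustrative, not figures from the paper.

```python
import numpy as np

def zipf_popularity(num_contents, alpha=0.8):
    """Request probability of each content under a Zipf popularity law,
    a common model for the skewed video popularity described above."""
    ranks = np.arange(1, num_contents + 1)
    weights = ranks ** (-alpha)
    return weights / weights.sum()

def hit_ratio_most_popular(popularity, cache_size):
    """Hit ratio when a node simply caches the cache_size most popular items."""
    top = np.argsort(popularity)[::-1][:cache_size]
    return popularity[top].sum()

pop = zipf_popularity(num_contents=10_000, alpha=0.8)
print(hit_ratio_most_popular(pop, cache_size=500))   # fraction of requests served locally
```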

An AODV Re-route Methods for Low-Retransmission in Wireless Sensor Networks (무선센서네트워크에서 저-재전송율을 위한 AODV 경로 재설정 방법)

  • Son, Nam-Rye;Jung, Min-A;Lee, Sung-Ro
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.9A
    • /
    • pp.844-851
    • /
    • 2010
  • Recently, the AODV routing protocol, which is one of the table-driven methods for data transmission between nodes, has been broadly used in mobile wireless sensor networks. The existing AODV incurs a small routing-packet overhead because it keeps a routing table for active routes and re-routes to recover a route when the route breaks. However, it has the drawbacks that it wastes network bandwidth to recover a route and takes a long time to do so. This paper proposes an efficient route recovery method for AODV-based wireless sensor networks when connections break. The proposed method controls the number of RREQ messages, considering node energy and the distance between nodes, to restrict the flooding range of RREQ messages while expanding the range of local repair. In the test results, compared to the existing method, the proposed method decreases the number of drops by 15.43% and the re-route delay time by 0.20 sec.
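For illustration only, a toy forwarding rule in the spirit of the proposed scheme: during local repair, a node rebroadcasts a route request (RREQ) only if its residual energy and its distance from the sender meet thresholds. The thresholds, units, and function name are hypothetical, not taken from the paper.

```python
def should_forward_rreq(residual_energy, distance_to_sender,
                        energy_threshold=0.3, max_hop_distance=30.0):
    """Illustrative RREQ forwarding decision: forward only if the node has
    enough residual energy (fraction of full charge) and is close enough to
    the sender (meters). Both thresholds are hypothetical placeholders."""
    return (residual_energy >= energy_threshold and
            distance_to_sender <= max_hop_distance)

# A node at 45 m with 80% battery does not forward; one at 20 m does.
print(should_forward_rreq(0.8, 45.0))   # False
print(should_forward_rreq(0.8, 20.0))   # True
```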

Data-centric XAI-driven Data Imputation of Molecular Structure and QSAR Model for Toxicity Prediction of 3D Printing Chemicals (3D 프린팅 소재 화학물질의 독성 예측을 위한 Data-centric XAI 기반 분자 구조 Data Imputation과 QSAR 모델 개발)

  • ChanHyeok Jeong;SangYoun Kim;SungKu Heo;Shahzeb Tariq;MinHyeok Shin;ChangKyoo Yoo
    • Korean Chemical Engineering Research
    • /
    • v.61 no.4
    • /
    • pp.523-541
    • /
    • 2023
  • As accessibility to 3D printers increases, exposure to chemicals associated with 3D printing is becoming more frequent. However, research on the toxicity and harmfulness of chemicals generated by 3D printing is insufficient, and the performance of toxicity prediction using in silico techniques is limited due to missing molecular structure data. In this study, a quantitative structure-activity relationship (QSAR) model based on a data-centric AI approach was developed to predict the toxicity of new 3D printing materials by imputing missing values in molecular descriptors. First, the MissForest algorithm was utilized to impute missing values in the molecular descriptors of hazardous 3D printing materials. Then, based on four different machine learning models (decision tree, random forest, XGBoost, SVM), a machine learning (ML)-based QSAR model was developed to predict the bioconcentration factor (Log BCF), the octanol-air partition coefficient (Log Koa), and the partition coefficient (Log P). Furthermore, the reliability of the data-centric QSAR model was validated through the Tree-SHAP (SHapley Additive exPlanations) method, one of the explainable artificial intelligence (XAI) techniques. The proposed MissForest-based imputation enlarged the molecular structure data by approximately 2.5 times compared to the existing data. Based on the imputed molecular descriptor dataset, the developed data-centric QSAR model achieved prediction performance of approximately 73%, 76%, and 92% for Log BCF, Log Koa, and Log P, respectively. Lastly, the Tree-SHAP analysis demonstrated that the data-centric QSAR model achieved high prediction performance for toxicity information by identifying key molecular descriptors highly correlated with the toxicity indices. Therefore, the proposed QSAR model based on the data-centric XAI approach can be extended to predict the toxicity of potential pollutants in emerging printing chemicals and in chemical, semiconductor, or display processes.
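A minimal sketch of the imputation-then-QSAR pipeline. MissForest itself is not part of scikit-learn, so the sketch approximates it with IterativeImputer driven by a random-forest estimator and then fits a random-forest QSAR regressor; the descriptor matrix, target values, and dimensions are synthetic placeholders, not the study's data.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

# Toy molecular-descriptor matrix with missing entries (NaN).
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))
X[rng.random(X.shape) < 0.2] = np.nan                 # ~20% missing descriptors
y = rng.standard_normal(100)                          # stand-in for Log P values

# Random-forest-driven iterative imputation (an approximation of MissForest).
imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=20, random_state=0),
    max_iter=5, random_state=0)
X_imputed = imputer.fit_transform(X)                  # filled descriptor matrix

# ML-based QSAR model trained on the imputed descriptors.
qsar = RandomForestRegressor(random_state=0).fit(X_imputed, y)
print(qsar.predict(X_imputed[:3]))                    # predicted Log P for 3 molecules
```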

Automatic Extraction of Abstract Components for supporting Model-driven Development of Components (모델기반 컴포넌트 개발방법론의 지원을 위한 추상컴포넌트 자동 추출기법)

  • Yun, Sang Kwon;Park, Min Gyu;Choi, Yunja
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.8
    • /
    • pp.543-554
    • /
    • 2013
  • Model-Driven Development (MDD) helps developers verify requirements and design issues of a software system in the early stages of the development process by taking advantage of a software model, which is the most highly abstracted form of a software system. In practice, however, many software systems have been developed through a code-centric method that builds a software system bottom-up rather than top-down. So, without the support of appropriate tools, it is not easy to introduce MDD into a real development process. Although there is much research on extracting a model from code to help developers introduce MDD to code-centrically developed systems, most of it only extracts base-level models. However, using the concept of an abstract component, one can continuously extract higher-level models from the base-level model. In this paper, we propose a practical method for automatically extracting base-level abstract components from source code, which is the first stage of the continuous extraction process of abstract components, and validate the method by implementing an extraction tool based on it. The target code chosen is the source code of TinyOS, an operating system for wireless sensor networks written in the nesC language, and the tool is applied to it.
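As a purely illustrative fragment (not the paper's tool), the kind of base-level information such an extractor starts from can be obtained by scanning nesC source text for component declarations; the regular expression and sample source below are assumptions.

```python
import re

# Scan nesC source text for module/configuration declarations, the base-level
# component information an abstract-component extractor would start from.
NESC_COMPONENT = re.compile(r"^\s*(module|configuration)\s+(\w+)", re.MULTILINE)

source = """
configuration BlinkAppC {}
module BlinkC { uses interface Timer<TMilli> as Timer0; }
"""

for kind, name in NESC_COMPONENT.findall(source):
    print(kind, name)   # prints "configuration BlinkAppC" and "module BlinkC"
```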

Uncertainty analysis for Section-by-Section method of ADCP discharge measurement based on GUM standard (GUM 표준안 기반 ADCP 지점 측정 방법 유량 측정 불확도 분석)

  • Kim, Dongsu;Kim, Jongmin;Byeon, Hyunhyuk;Kang, Junkoo
    • Journal of Korea Water Resources Association
    • /
    • v.50 no.8
    • /
    • pp.521-535
    • /
    • 2017
  • Acoustic Doppler Current Profilers (ADCPs) have been widely utilized for assessing streamflow discharge, yet few comprehensive studies have been conducted to evaluate discharge uncertainty in consideration of individual uncertainty components. This is mostly because it has not been easy to determine which uncertainty framework is appropriate for rigorously analyzing streamflow discharge derived from ADCPs. In this regard, considerable efforts have been made by scientific and engineering societies to develop a standardized theoretical framework for uncertainty analysis in hydrometry. One well-established UA methodology based on sound statistical and engineering concepts is the Guide to the Expression of Uncertainty in Measurement (GUM), adopted widely by various scientific and research communities. This research fundamentally adapted the GUM framework to assess the individual uncertainty components of ADCP discharge measurements, and subsequently provided results from a customized experiment in a controllable, real-scale artificial river channel. We focused particularly on the sensitivities of the uncertainty components in the GUM framework driven by ADCPs' direct measurements, such as depths, edge distance, submerged depth, velocity gap, sampling time, repeatability, bed roughness, and so on. The Section-by-Section method for ADCP discharge measurement was applied for the uncertainty analysis in this study. All measurements were carefully compared with data from other instruments, such as ADVs, to evaluate the individual uncertainty components.
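For reference, a minimal sketch of the GUM law of propagation of uncertainty with which such an analysis combines the individual components, assuming uncorrelated inputs; the sensitivity coefficients and standard uncertainties below are hypothetical numbers, not results from the study.

```python
import numpy as np

def combined_standard_uncertainty(sensitivities, standard_uncertainties):
    """GUM law of propagation of uncertainty for uncorrelated inputs:
    u_c(Q) = sqrt( sum_i (c_i * u(x_i))^2 ), where c_i = dQ/dx_i."""
    c = np.asarray(sensitivities)
    u = np.asarray(standard_uncertainties)
    return np.sqrt(np.sum((c * u) ** 2))

# Hypothetical example: sensitivity coefficients and standard uncertainties
# for three discharge inputs (e.g., depth, edge distance, velocity).
c = [1.8, 0.4, 2.5]      # dQ/dx_i in consistent units
u = [0.02, 0.05, 0.03]   # u(x_i)
u_c = combined_standard_uncertainty(c, u)
print(u_c, 2 * u_c)      # combined and expanded (k = 2) uncertainty
```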