• Title/Summary/Keyword: computational algorithm


Voice Activity Detection using Motion and Variation of Intensity in The Mouth Region (입술 영역의 움직임과 밝기 변화를 이용한 음성구간 검출 알고리즘 개발)

  • Kim, Gi-Bak;Ryu, Je-Woong;Cho, Nam-Ik
    • Journal of Broadcast Engineering / v.17 no.3 / pp.519-528 / 2012
  • Voice activity detection (VAD) is generally performed by extracting features from the acoustic signal and applying a decision rule, so the performance of such acoustically driven VAD algorithms degrades heavily in acoustic noise. When a video signal is also available, VAD performance can be enhanced with visual information, which is unaffected by acoustic noise. Previous visual VAD algorithms usually detect lip activity with a single visual feature, such as active appearance models, optical flow, or intensity variation. Based on an analysis of each feature's weaknesses, we propose combining an intensity-change measure with optical flow in the mouth region, so that the two compensate for each other's weaknesses. To minimize computational complexity, we develop simple measures that avoid statistical estimation or modeling: the optical flow is the averaged motion vector over grid regions, and the intensity variation is detected by simple thresholding. To extract the mouth region, we propose a simple algorithm that first detects the two eyes and then uses the intensity profile to locate the center of the mouth. Experiments show that the proposed combination of two simple measures yields higher detection rates at a given false-positive rate than methods using a single feature.
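The combination rule in the abstract above can be sketched as follows. The thresholds, block size, and the block-average motion proxy (standing in for the paper's averaged grid optical-flow vectors) are illustrative assumptions, not the authors' implementation.

```python
def frame_features(prev, curr, block=4):
    """Two simple lip-activity measures between consecutive grayscale
    mouth-region frames (lists of equal-length rows, values 0-255)."""
    h, w = len(curr), len(curr[0])
    # Intensity variation: mean absolute pixel difference over the region.
    diff = sum(abs(curr[y][x] - prev[y][x]) for y in range(h) for x in range(w))
    intensity_change = diff / (h * w)
    # Crude motion proxy: mean absolute change of block averages, standing in
    # for the averaged motion vectors of grid regions.
    motions = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            a = sum(curr[y][x] for y in range(by, by + block) for x in range(bx, bx + block))
            b = sum(prev[y][x] for y in range(by, by + block) for x in range(bx, bx + block))
            motions.append(abs(a - b) / (block * block))
    motion = sum(motions) / len(motions) if motions else 0.0
    return intensity_change, motion

def is_speech(prev, curr, t_int=5.0, t_mot=3.0):
    """Declare lip activity when either measure exceeds its (hypothetical)
    threshold, so each measure covers the other's blind spots."""
    ic, mv = frame_features(prev, curr)
    return ic > t_int or mv > t_mot
```

Using an OR of two cheap measures keeps the decision robust: a feature that misses slow intensity drift can still be caught by the motion proxy, and vice versa.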

Machinability investigation and sustainability assessment in FDHT with coated ceramic tool

  • Panda, Asutosh;Das, Sudhansu Ranjan;Dhupal, Debabrata
    • Steel and Composite Structures / v.34 no.5 / pp.681-698 / 2020
  • This paper addresses the modeling and optimization of the major machinability parameters (cutting force, surface roughness, and tool wear) in finish dry hard turning (FDHT) of hardened AISI D3 die steel with a PVD-TiN-coated (Al2O3+TiCN) mixed ceramic tool insert. The turning trials follow Taguchi's L18 orthogonal-array design of experiments, with tool approach angle, nose radius, cutting speed, feed rate, and depth of cut as the machining parameters, and predictive models are developed by multiple regression analysis (MRA). A statistical technique (response surface methodology) and two computational approaches (genetic algorithm and particle swarm optimization) are then employed for multi-response optimization. The effectiveness of the three optimization techniques (RSM, GA, PSO) is evaluated by confirmation tests, and the best results are used to estimate energy consumption, including carbon-footprint savings toward green machining, followed by tool-life estimation and cost analysis to justify the economic feasibility of the coated ceramic tool in FDHT. Finally, energy savings, economic analysis, and sustainability assessment are performed using carbon-footprint analysis, the Gilbert approach, and a Pugh matrix, respectively.
Novel aspects of the present work: (i) it supports practical industrial application of finish hard turning, helping shaft and die makers select optimum cutting conditions in the 45-60 HRC hardness range; (ii) it demonstrates replacement of the expensive, time-consuming conventional cylindrical grinding process and proposes the ceramic tool as an alternative to costlier CBN tools in hard turning, considering technological, economic, and ecological aspects; (iii) it offers environmentally friendly, cleaner production for machining hardened steels; (iv) it helps improve desirable machinability characteristics; and (v) it contributes to a common language for sustainable manufacturing in both research and industrial practice.
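As an illustration of the computational optimization step, here is a minimal genetic-algorithm sketch minimizing a hypothetical surface-roughness regression in cutting speed and feed. The coefficients, parameter bounds, and GA settings are invented for the example; they are not the paper's fitted MRA model.

```python
import random

def ra_model(v, f):
    """Hypothetical regression for surface roughness Ra (um) in cutting
    speed v (m/min) and feed f (mm/rev); coefficients are illustrative."""
    return 0.8 - 0.004 * v + 6.0 * f + 0.00002 * v * v + 10.0 * f * f

def genetic_minimize(obj, bounds, pop=30, gens=60, seed=1):
    """Elitist real-coded GA: averaging crossover plus Gaussian mutation."""
    rng = random.Random(seed)
    lo, hi = zip(*bounds)
    P = [[rng.uniform(lo[i], hi[i]) for i in range(len(bounds))] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=lambda x: obj(*x))          # rank by fitness (lower Ra)
        elite = P[:pop // 2]                   # keep the better half
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]   # crossover
            i = rng.randrange(len(child))                      # mutate one gene
            child[i] = min(hi[i], max(lo[i], child[i] + rng.gauss(0, 0.05 * (hi[i] - lo[i]))))
            children.append(child)
        P = elite + children
    best = min(P, key=lambda x: obj(*x))
    return best, obj(*best)

(v, f), ra = genetic_minimize(ra_model, [(80, 200), (0.04, 0.20)])
```

For this toy surface the analytic optimum sits at v = 100 m/min with feed pinned at its lower bound, which the GA should approach after a few dozen generations.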

Computational estimation of the earthquake response for fibre reinforced concrete rectangular columns

  • Liu, Chanjuan;Wu, Xinling;Wakil, Karzan;Jermsittiparsert, Kittisak;Ho, Lanh Si;Alabduljabbar, Hisham;Alaskar, Abdulaziz;Alrshoudi, Fahed;Alyousef, Rayed;Mohamed, Abdeliazim Mustafa
    • Steel and Composite Structures / v.34 no.5 / pp.743-767 / 2020
  • Owing to its impressive flexural performance, enhanced compressive strength, and more constrained crack propagation, fibre-reinforced concrete (FRC) has been widely employed in construction, and the majority of experimental studies have focused on the seismic behavior of FRC columns. Using valid experimental data from previous studies, the present study evaluates the seismic response and compressive strength of FRC rectangular columns with hybrid metaheuristic techniques. Because of the non-linearity of seismic data, an adaptive neuro-fuzzy inference system (ANFIS) is combined with metaheuristic algorithms: 317 datasets from FRC column tests form a single database used to determine the most influential factors on the ultimate strengths of FRC rectangular columns under simulated seismic loading. ANFIS is used with particle swarm optimization (PSO) and a genetic algorithm (GA), and an extreme learning machine (ELM) is used concurrently as an established reference prediction method. The results show that ANFIS-PSO predicts the seismic lateral load with R2 = 0.857 and 0.902 for the test and training phases, respectively, making it the nominated lateral-load estimator, whereas for compressive strength ELM achieves R2 = 0.657 and 0.862 for the test and training phases. The seismic lateral force is thus more predictable than the compressive strength of FRC rectangular columns, with the best results belonging to lateral-force prediction.
Compressive-strength prediction shows significant deviation above 40 MPa, which may stem from considerable non-linearity and possible empirical shortcomings. Overall, ANFIS-GA and ANFIS-PSO are promising, reliable approaches for evaluating the seismic response of FRC, offering a replacement for costly and time-consuming experimental tests.
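A minimal extreme learning machine (ELM) of the kind used here as the reference predictor can be sketched as follows: a fixed random hidden layer with the output weights solved in closed form by least squares. The network size and the synthetic data standing in for the FRC column database are illustrative assumptions.

```python
import numpy as np

class ELM:
    """Minimal ELM regressor: random tanh hidden layer, no backpropagation."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        n_in = X.shape[1]
        self.W = self.rng.normal(size=(n_in, self.n_hidden))  # fixed random input weights
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                      # hidden activations
        self.beta = np.linalg.pinv(H) @ y                     # least-squares output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 3))
y = X[:, 0] ** 2 + 0.5 * X[:, 1] - 0.2 * X[:, 2]   # smooth synthetic target
model = ELM().fit(X[:150], y[:150])
pred = model.predict(X[150:])
r2 = 1 - np.sum((y[150:] - pred) ** 2) / np.sum((y[150:] - y[150:].mean()) ** 2)
```

Because only the output layer is trained, and in closed form, ELM fits in one shot, which is why it is attractive as a fast baseline against iterative hybrids like ANFIS-PSO.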

Integration of Ontology Open-World and Rule Closed-World Reasoning (온톨로지 Open World 추론과 규칙 Closed World 추론의 통합)

  • Choi, Jung-Hwa;Park, Young-Tack
    • Journal of KIISE:Software and Applications / v.37 no.4 / pp.282-296 / 2010
  • OWL is an ontology language for the Semantic Web, suited to modeling the knowledge of a specific real-world domain, and an ontology can infer new implicit knowledge from explicit knowledge. However, the modeled knowledge cannot be complete, since the whole of human common sense cannot be fully represented. Ontologies do not support the non-monotonic reasoning needed to detect incomplete modeling, such as integrity constraints and exceptions: a default rule can handle exceptions for a specific class, and an integrity constraint can make explicit which relationships, and how many of them, the instances of a class must hold. In this paper, we propose a practical reasoning system for open- and closed-world reasoning that supports a novel hybrid integration of ontologies based on the open-world assumption (OWA) with non-monotonic rules based on the closed-world assumption (CWA). The system solves problems that arise when dealing with incomplete knowledge under the OWA by using answer set programming (ASP), a logic-programming formalism that can be seen as the computational embodiment of non-monotonic reasoning and that enables CWA-based queries over a description-logic knowledge base (KB). Our system not only finds practical cases requiring non-monotonic reasoning in examples built with Protege, but also derives novel reasoning results for those cases on a KB that realizes a transparent integration of rules and ontologies, as supported by several well-known projects.
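The OWA/CWA contrast at the heart of this integration can be illustrated with a toy query function over one fact base. The facts and predicate names are invented for the example; a real system like the paper's would query a description-logic reasoner and an ASP solver.

```python
# One shared fact base; the triple format is an illustrative convention.
facts = {("hasChild", "john", "mary")}

def owa_query(fact):
    """Ontology-style open-world answering: a fact absent from the KB is
    merely unknown (None), not false."""
    return True if fact in facts else None

def cwa_query(fact):
    """Rule/ASP-style closed-world answering (negation as failure): a fact
    that cannot be derived is taken to be false."""
    return fact in facts
```

The hybrid system in the paper routes a query to one regime or the other, so that integrity constraints and defaults (which need CWA) can coexist with ordinary ontology entailment (which needs OWA).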

Real-time 3D Feature Extraction Combined with 3D Reconstruction (3차원 물체 재구성 과정이 통합된 실시간 3차원 특징값 추출 방법)

  • Hong, Kwang-Jin;Lee, Chul-Han;Jung, Kee-Chul;Oh, Kyoung-Su
    • Journal of KIISE:Software and Applications / v.35 no.12 / pp.789-799 / 2008
  • Gesture recognition has been studied vigorously for communication between human and computer in interactive computing environments. Algorithms that use 2D features for feature extraction and comparison are fast, but environmental limitations constrain their recognition accuracy. Algorithms using 2.5D features provide higher accuracy than 2D features but are sensitive to object rotation, while algorithms using 3D features are slow because they require 3D object reconstruction as a preprocessing step before feature extraction. In this paper, we propose a method that extracts 3D features combined with 3D object reconstruction in real time. The method generates three kinds of 3D projection maps using a modified GPU-based visual-hull generation algorithm, executes only the data-generation steps needed for gesture recognition, and calculates the Hu moments corresponding to each projection map. In the experimental results, we compare the computational time of the proposed method with previous methods and show that it is applicable to real-time gesture recognition.
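The per-projection-map descriptor can be sketched as a plain central-moment computation. This shows only Hu's first invariant on a binary map as an illustration; the paper computes the full Hu-moment set on GPU-generated projection maps.

```python
def hu1(img):
    """Hu's first invariant (eta20 + eta02) of a binary projection map,
    given as a list of rows of 0/1 values. Translation-invariant by
    construction (central moments) and normalized by the area."""
    h, w = len(img), len(img[0])
    m00 = sum(img[y][x] for y in range(h) for x in range(w))   # area
    if m00 == 0:
        return 0.0
    cx = sum(x * img[y][x] for y in range(h) for x in range(w)) / m00
    cy = sum(y * img[y][x] for y in range(h) for x in range(w)) / m00
    mu20 = sum((x - cx) ** 2 * img[y][x] for y in range(h) for x in range(w))
    mu02 = sum((y - cy) ** 2 * img[y][x] for y in range(h) for x in range(w))
    # Normalized central moments: eta_pq = mu_pq / m00^((p+q)/2 + 1).
    return (mu20 + mu02) / (m00 ** 2)
```

Because the moments are taken about the centroid, the same silhouette at a different position in the projection map yields an identical descriptor, which is what makes the features usable for matching gestures.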

Prediction of Lung Cancer Based on Serum Biomarkers by Gene Expression Programming Methods

  • Yu, Zhuang;Chen, Xiao-Zheng;Cui, Lian-Hua;Si, Hong-Zong;Lu, Hai-Jiao;Liu, Shi-Hai
    • Asian Pacific Journal of Cancer Prevention / v.15 no.21 / pp.9367-9373 / 2014
  • In the diagnosis of lung cancer, rapid distinction between small cell lung cancer (SCLC) and non-small cell lung cancer (NSCLC) tumors is very important. Serum markers, including lactate dehydrogenase (LDH), C-reactive protein (CRP), carcino-embryonic antigen (CEA), neuron-specific enolase (NSE), and Cyfra21-1, are reported to reflect lung cancer characteristics. In this study, lung tumors were classified from biomarkers (measured in 120 NSCLC and 60 SCLC patients) by building optimal joint biomarker models with a powerful computational tool, gene expression programming (GEP). GEP is a learning algorithm that combines the advantages of genetic programming (GP) and genetic algorithms (GA); it focuses on relationships between variables in data sets, builds models to explain those relationships, and has been used successfully in formula finding and function mining. Three explicit predictive models were constructed: since CEA and NSE are frequently used lung cancer markers in clinical trials, and CRP, LDH, and Cyfra21-1 are also significant in lung cancer, we set up three GEP models on the basis of CEA and NSE - GEP1 (CEA, NSE, Cyfra21-1), GEP2 (CEA, NSE, LDH), and GEP3 (CEA, NSE, CRP). The best classification was obtained when CEA, NSE, and Cyfra21-1 were combined: 128 of 135 subjects in the training set (94.8%) and 40 of 45 subjects in the test set (88.9%) were classified correctly. With GEP2, accuracy decreased by 1.5% and 6.6% in the training and test sets, respectively, and with GEP3 by 0.82% and 4.45%. Serum Cyfra21-1 is thus a useful and sensitive biomarker for discriminating between NSCLC and SCLC, and GEP modeling is a promising and excellent tool in the diagnosis of lung cancer.
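How a GEP-evolved expression classifies a patient from serum markers can be sketched as prefix-expression evaluation plus a cut-off. The discriminant expression, marker names, and threshold below are hypothetical placeholders, not the fitted GEP1 model.

```python
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def eval_prefix(tokens, markers):
    """Evaluate a prefix-notation expression tree over named marker values,
    consuming tokens left to right (as a decoded GEP chromosome would be)."""
    tok = tokens.pop(0)
    if tok in OPS:
        return OPS[tok](eval_prefix(tokens, markers), eval_prefix(tokens, markers))
    return markers[tok] if isinstance(tok, str) else tok   # variable or constant

def classify(markers, expr, cutoff=0.0):
    """Label 'SCLC' when the evolved discriminant exceeds the cut-off."""
    return "SCLC" if eval_prefix(list(expr), markers) > cutoff else "NSCLC"

# Hypothetical discriminant: NSE - 0.5 * Cyfra21-1
expr = ["-", "NSE", "*", 0.5, "CYFRA"]
```

In real GEP the expression itself is evolved: fitness is the classification accuracy over the training set, and the search explores the space of such trees rather than a fixed formula.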

A Study on the Robust Double Talk Detector for Acoustic Echo Cancellation System (음향반항 제거 시스템을 위한 강인한 동시통화 검출기에 관한 연구)

  • 백수진;박규식
    • The Journal of the Acoustical Society of Korea / v.22 no.2 / pp.121-128 / 2003
  • Acoustic echo cancellation (AEC) is a very active research topic with many applications, such as teleconferencing and hands-free communication, and it employs a double-talk detector (DTD) to indicate whether the near-end speaker is active. However, the DTD is very sensitive to variations in the acoustic environment and sometimes provides wrong information about the near-end speaker. This paper focuses on the development of a robust DTD algorithm as a basic building block for a reliable AEC system. The proposed AEC system consists of a delayless subband AEC and a narrowband DTD. Delayless subband AEC has proven to cancel echo well with low complexity and fast convergence, and it avoids the signal-delay problem of conventional subband AEC. The proposed narrowband DTD operates on a low-frequency subband and exploits the advantages of the narrow subband: low computational complexity due to down-sampling, and a reliable decision procedure thanks to the low-frequency nature of the subband signal. Simulations of the proposed narrowband DTD and a wideband DTD confirm that the proposed DTD outperforms the wideband DTD in avoiding false decisions about near-end speaker activity.
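A double-talk decision of the simple magnitude-comparison kind can be sketched as a Geigel-style test: flag near-end activity when the microphone sample exceeds a fraction of the recent far-end peak. The window and threshold are illustrative, and unlike this sketch the paper's DTD operates on the down-sampled low-frequency subband.

```python
def geigel_dtd(far, mic, window=8, thresh=0.5):
    """Per-sample double-talk flags: compare the microphone magnitude
    against the recent far-end peak (far-end assumed active)."""
    flags = []
    for n in range(len(mic)):
        lo = max(0, n - window + 1)
        peak = max(abs(v) for v in far[lo:n + 1])   # recent far-end peak
        # Echo alone stays well below the far-end level; a near-end burst
        # pushes the mic signal above the (hypothetical) threshold.
        flags.append(abs(mic[n]) > thresh * peak)
    return flags
```

When a flag is raised, the AEC freezes its adaptive filter so the near-end speech does not corrupt the echo-path estimate; the paper's contribution is making that decision reliable despite changing acoustics.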

Real-Time Hybrid Testing Using a Fixed Iteration Implicit HHT Time Integration Method for a Reinforced Concrete Frame (고정반복법에 의한 암시적 HHT 시간적분법을 이용한 철근콘크리트 골조구조물의 실시간 하이브리드실험)

  • Kang, Dae-Hung;Kim, Sung-Il
    • Journal of the Earthquake Engineering Society of Korea / v.15 no.5 / pp.11-24 / 2011
  • A real-time hybrid test of a 3-story, 3-bay reinforced concrete frame, divided into numerical and physical substructure models under uniaxial earthquake excitation, was run using a fixed-iteration implicit HHT time integration method. The first-story inner non-ductile column was selected as the physical substructure model, and uniaxial earthquake excitation was applied to the numerical model until the specimen failed due to severe damage. A finite-element analysis program, Mercury, was newly developed and optimized for real-time hybrid testing. The drift ratio based on the top horizontal displacement of the physical substructure model was compared with a numerical simulation in OpenSees and with the result of a shaking-table test. The experiment in this paper is among the most complex real-time hybrid tests, and the hardware, algorithm, and models are described in detail. With improvements to the numerical model, to the evaluation of the physical substructure's tangent stiffness matrix in the finite-element program, and to software reducing the computational time of element state determination for the force-based beam-column element, the comparison between the real-time hybrid test and the shaking-table test would support firm recommendations. In addition, toward the goal of numerical simulation of complex structures under dynamic loading, the real-time hybrid test has sufficient merit as an alternative to dynamic experiments on large and complex structures.
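The implicit HHT (alpha) time integration used to advance the numerical substructure can be sketched for a linear single-degree-of-freedom system. The alpha value, the oscillator, and the step size are illustrative; in the actual test the restoring force comes from iterating against the physical specimen, not from a closed-form stiffness.

```python
import math

def hht_alpha(m, c, k, force, u0, v0, h, steps, alpha=-0.1):
    """Advance m*u'' + c*u' + k*u = F(t) with the HHT-alpha scheme
    (beta and gamma chosen for unconditional stability and second order)."""
    beta = (1 - alpha) ** 2 / 4
    gamma = (1 - 2 * alpha) / 2
    u, v = u0, v0
    a = (force(0.0) - c * v - k * u) / m          # consistent initial acceleration
    for n in range(steps):
        t1 = (n + 1) * h
        up = u + h * v + h * h * (0.5 - beta) * a  # displacement predictor
        vp = v + h * (1 - gamma) * a               # velocity predictor
        # Equilibrium enforced at the alpha-shifted time point:
        lhs = m + (1 + alpha) * (c * gamma * h + k * beta * h * h)
        rhs = ((1 + alpha) * force(t1) - alpha * force(n * h)
               - (1 + alpha) * (c * vp + k * up) + alpha * (c * v + k * u))
        a = rhs / lhs
        u = up + beta * h * h * a                  # corrector updates
        v = vp + gamma * h * a
    return u, v

w = 2 * math.pi   # 1 Hz undamped oscillator, free vibration from u0 = 1
u_end, v_end = hht_alpha(1.0, 0.0, w * w, lambda t: 0.0, 1.0, 0.0, 0.01, 100)
```

The negative alpha introduces controlled numerical damping of spurious high-frequency modes while barely touching the fundamental response, which is why the HHT family is favored in hybrid testing.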

Design and Performance Analysis of the Efficient Equalization Method for OFDM system using QAM in multipath fading channel (다중경로 페이딩 채널에서 QAM을 사용하는 OFDM시스템의 효율적인 등화기법 설계 및 성능분석)

  • 남성식;백인기;조성호
    • The Journal of Korean Institute of Communications and Information Sciences / v.25 no.6B / pp.1082-1091 / 2000
  • In this paper, an efficient equalization method is proposed for an OFDM (Orthogonal Frequency Division Multiplexing) system using QAM (Quadrature Amplitude Modulation) in a multipath fading channel, in order to equalize the received signals faster and more efficiently. Conventionally, one-tap linear equalizers are used in the frequency domain for OFDM systems, but when the channel characteristics change rapidly they cannot compensate for the distortion caused by time-variant multipath channels. We therefore use one-tap non-linear equalizers in the frequency domain, together with a linear equalizer in the time domain to compensate for the rapid performance degradation at low SNR (signal-to-noise ratio) that is the main disadvantage of the non-linear equalizer. In the frequency domain, when QAM signals consisting of in-phase and quadrature components are sent over a complex channel, the in-phase and quadrature components distorted by multipath fading change in the same way as signals distorted by noise, so the cross components are canceled in the frequency-domain equalizer. A time-domain equalizer with an adaptive algorithm of low error probability and fast convergence compensates for the error caused by canceling the cross components. In the time domain, Gold codes inserted into the guard signal at the transmitter are used as a training sequence at the receiver to equalize the distorted signals frame by frame, complementing the frequency-domain equalizer. The proposed equalization method achieves faster and more efficient equalization with reduced computational complexity and improved performance.
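The baseline the paper improves on, a one-tap frequency-domain (zero-forcing) equalizer, can be sketched end to end for a toy OFDM link. The 4-subcarrier setup and two-tap static channel are illustrative assumptions; a sufficient cyclic prefix is assumed so the channel acts as a circular convolution.

```python
import cmath

def dft(x, inverse=False):
    """Naive DFT/IDFT, enough for a toy OFDM example."""
    N, s = len(x), (1 if inverse else -1)
    out = [sum(x[n] * cmath.exp(s * 2j * cmath.pi * k * n / N) for n in range(N))
           for k in range(N)]
    return [v / N for v in out] if inverse else out

def ofdm_one_tap(symbols, channel):
    """Send QAM symbols over a static multipath channel and equalize each
    subcarrier with one complex tap (zero-forcing)."""
    N = len(symbols)
    tx = dft(symbols, inverse=True)                      # IFFT to time domain
    # Circular convolution models the multipath channel with cyclic prefix.
    rx = [sum(channel[m] * tx[(n - m) % N] for m in range(len(channel)))
          for n in range(N)]
    H = dft(channel + [0] * (N - len(channel)))          # channel frequency response
    R = dft(rx)                                          # back to frequency domain
    return [R[k] / H[k] for k in range(N)]               # one-tap equalizer per carrier

syms = [1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]                # 4-QAM constellation
eq = ofdm_one_tap(syms, [1.0, 0.4])
```

For a static channel this single complex division per subcarrier recovers the symbols exactly; the paper's point is that under fast channel variation and low SNR this baseline breaks down, motivating the combined non-linear/time-domain scheme.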


Fast Motion Estimation for Variable Motion Block Size in H.264 Standard (H.264 표준의 가변 움직임 블록을 위한 고속 움직임 탐색 기법)

  • 최웅일;전병우
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.6 / pp.209-220 / 2004
  • The main feature of H.264 standard against conventional video standards is the high coding efficiency and the network friendliness. In spite of these outstanding features, it is not easy to implement H.264 codec as a real-time system due to its high requirement of memory bandwidth and intensive computation. Although the variable block size motion compensation using multiple reference frames is one of the key coding tools to bring about its main performance gain, it demands substantial computational complexity due to SAD (Sum of Absolute Difference) calculation among all possible combinations of coding modes to find the best motion vector. For speedup of motion estimation process, therefore, this paper proposes fast algorithms for both integer-pel and fractional-pel motion search. Since many conventional fast integer-pel motion estimation algorithms are not suitable for H.264 having variable motion block sizes, we propose the motion field adaptive search using the hierarchical block structure based on the diamond search applicable to variable motion block sizes. Besides, we also propose fast fractional-pel motion search using small diamond search centered by predictive motion vector based on statistical characteristic of motion vector.