• Title/Summary/Keyword: Fast Computation

Stress Level Based Emotion Classification Using Hybrid Deep Learning Algorithm

  • Sivasankaran Pichandi;Gomathy Balasubramanian;Venkatesh Chakrapani
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.17 no.11
    • /
    • pp.3099-3120
    • /
    • 2023
  • The present fast-moving era brings serious stress issues that affect both elders and youngsters, and everyone undergoes stress factors at least once in their lifetime. Stress is higher among youngsters, as they are new to the working environment, whereas stress factors for elders affect both the individual and the overall performance of an organization. Electroencephalogram (EEG) based stress level classification is one of the widely used methodologies for stress detection. However, the signal processing methods evolved so far have limitations, as most stress classification models compute the stress level in a predefined environment to detect individual stress factors. Specifically, machine learning based stress classification models require an additional algorithm for feature extraction, which increases the computation cost. Also, due to the limited feature learning characteristics of machine learning algorithms, classification performance is reduced and sometimes inaccurate. It is evident from numerous research works that deep learning models outperform machine learning techniques. Thus, to classify all the emotions based on stress level, this research work presents a hybrid deep learning algorithm. Compared to conventional deep learning models, hybrid models perform better in feature handling: better feature extraction and selection can be made through deep learning models, and adding machine learning classifiers to a deep learning architecture enhances classification performance. Thus, a hybrid convolutional neural network model is presented which extracts features using a CNN and classifies them with a machine learning support vector machine. Simulation analysis on benchmark datasets demonstrates the performance of the proposed model. Finally, existing methods are comparatively analyzed to demonstrate the better performance of the proposed model as a result of the proposed hybrid combination.
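
As a rough illustration of the CNN-feature-extractor plus SVM-classifier pattern the abstract describes, the sketch below pairs a small 1D CNN with scikit-learn's SVC. The layer sizes, EEG segment shape, and class count are illustrative assumptions, not the paper's architecture, and the CNN would normally be trained before its features are reused.

```python
# Minimal sketch of the CNN-feature-extractor + SVM-classifier pattern
# described above. Layer sizes, segment length, and class count are
# illustrative assumptions, not the paper's exact architecture.
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

class EEGFeatureCNN(nn.Module):
    """1D CNN that maps an EEG segment to a compact feature vector."""
    def __init__(self, n_channels=32, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # global pooling over time
        )
        self.fc = nn.Linear(128, feat_dim)

    def forward(self, x):                     # x: (batch, channels, time)
        return self.fc(self.conv(x).squeeze(-1))

# Toy data standing in for labelled EEG segments (4 stress classes).
X = torch.randn(200, 32, 512)
y = np.random.randint(0, 4, size=200)

cnn = EEGFeatureCNN().eval()                  # assumed already trained
with torch.no_grad():
    feats = cnn(X).numpy()                    # CNN does the feature extraction ...

svm = SVC(kernel="rbf").fit(feats, y)         # ... and the SVM does the classification
print(svm.score(feats, y))
```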

Automatic Indexing Algorithm of Golf Video Using Audio Information (오디오 정보를 이용한 골프 동영상 자동 색인 알고리즘)

  • Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea
    • /
    • v.28 no.5
    • /
    • pp.441-446
    • /
    • 2009
  • This paper proposes an automatic indexing algorithm for golf video using audio information. In the proposed algorithm, the input stream is demultiplexed into video and audio streams. By means of an AdaBoost-cascade classifier, the continuous audio stream is classified into announcer's speech segments recorded in the studio, music segments accompanying players' names on the TV screen, audience reaction segments responding to the play, reporter's speech segments with field background, and field noise segments such as wind or waves. Golf swing sounds, including drive shots, iron shots, and putting shots, are detected by impulse onset detection and modulation spectrum verification. The detected swings and applause are used effectively to index action or highlight units. Compared with video based semantic analysis, the main advantage of the proposed system is its small computation requirement, which makes it easy to apply the technology to embedded consumer electronic devices for fast browsing.
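
The impulse-onset step for detecting swing sounds can be illustrated with a simple short-time-energy detector, as sketched below. The frame length, background-smoothing window, and threshold ratio are illustrative assumptions, and the paper's modulation spectrum verification stage is omitted.

```python
# Minimal sketch of the impulse-onset-detection step mentioned above:
# a short, sharp rise in frame energy marks a candidate swing sound.
import numpy as np

def impulse_onsets(audio, sr, frame_ms=10, ratio=8.0):
    frame = int(sr * frame_ms / 1000)
    n = len(audio) // frame
    energy = np.square(audio[: n * frame].reshape(n, frame)).sum(axis=1)
    # slowly varying background level used as the reference for the impulse test
    background = np.convolve(energy, np.ones(20) / 20, mode="same") + 1e-12
    onset_frames = np.where(energy > ratio * background)[0]
    return onset_frames * frame / sr        # onset times in seconds

# Toy example: silence with one click at t = 1.0 s
sr = 16000
audio = np.zeros(sr * 2)
audio[sr:sr + 80] = 1.0
print(impulse_onsets(audio, sr))            # -> [1.0]
```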

A Study on the Optimal Convolution Neural Network Backbone for Sinkhole Feature Extraction of GPR B-scan Grayscale Images (GPR B-scan 회색조 이미지의 싱크홀 특성추출 최적 컨볼루션 신경망 백본 연구)

  • Park, Younghoon
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.44 no.3
    • /
    • pp.385-396
    • /
    • 2024
  • To enhance the accuracy of sinkhole detection using GPR, this study derived a convolutional neural network that can optimally extract sinkhole characteristics from GPR B-scan grayscale images. The pre-trained convolutional neural network is evaluated to be more than twice as effective as the vanilla convolutional neural network. In pre-trained convolutional neural networks, fast feature extraction is found to cause less overfitting than feature extraction. The top-1 validation accuracy and computation time are found to differ depending on the architecture type and simulation conditions. Among the pre-trained convolutional neural networks, InceptionV3 is evaluated as the most robust for sinkhole detection in GPR B-scan grayscale images. When considering both top-1 validation accuracy and the architecture efficiency index, VGG19 and VGG16 are analyzed to have high efficiency as backbones for extracting sinkhole features from GPR B-scan grayscale images. The MobileNetV3-Large backbone is found to be suitable for extracting sinkhole features in real time when mounted on GPR equipment.
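
A minimal sketch of the pretrained-backbone feature-extraction setup the abstract evaluates is given below, using a frozen ImageNet VGG16 from torchvision with a small classification head. The backbone choice, head size, and grayscale-to-RGB handling are illustrative assumptions rather than the study's exact configuration.

```python
# Minimal sketch of pretrained-backbone feature extraction for GPR B-scan
# grayscale images: a frozen ImageNet backbone feeds a small sinkhole head.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features
for p in backbone.parameters():
    p.requires_grad = False                  # "fast feature extraction": backbone frozen

head = nn.Sequential(nn.Flatten(), nn.Linear(512 * 7 * 7, 256), nn.ReLU(),
                     nn.Linear(256, 2))      # sinkhole vs. background (assumed classes)

# A grayscale B-scan replicated to 3 channels to match the ImageNet backbone.
bscan = torch.randn(1, 1, 224, 224).repeat(1, 3, 1, 1)
with torch.no_grad():
    feats = backbone(bscan)                  # (1, 512, 7, 7)
logits = head(feats)
print(logits.shape)                          # torch.Size([1, 2])
```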

Hardware Approach to Fuzzy Inference―ASIC and RISC―

  • Watanabe, Hiroyuki
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.975-976
    • /
    • 1993
  • This talk presents an overview of the author's research and development activities on fuzzy inference hardware. We have pursued two distinct approaches. The first approach is to use application specific integrated circuit (ASIC) technology: the fuzzy inference method is directly implemented in silicon. The second approach, which is in its preliminary stage, is to use a more conventional microprocessor architecture. Here, we use a quantitative technique employed by designers of reduced instruction set computers (RISC) to modify the architecture of a microprocessor. In the ASIC approach, we implemented the most widely used fuzzy inference mechanism directly on silicon. The mechanism is based on the max-min compositional rule of inference and Mamdani's method of fuzzy implication. Two VLSI fuzzy inference chips were designed, fabricated, and fully tested; both used a full-custom CMOS technology. The second and more elaborate chip was designed at the University of North Carolina (UNC) in cooperation with MCNC. Both VLSI chips had multiple datapaths for rule evaluation and executed multiple fuzzy if-then rules in parallel. The AT&T chip is the first digital fuzzy inference chip in the world. It ran with a 20 MHz clock cycle and achieved approximately 80,000 Fuzzy Logical Inferences Per Second (FLIPS). It stored and executed 16 fuzzy if-then rules. Since it was designed as a proof-of-concept prototype chip, it had a minimal amount of peripheral logic for system integration. The UNC/MCNC chip consists of 688,131 transistors, of which 476,160 are used for RAM memory. It ran with a 10 MHz clock cycle. The chip has a 3-stage pipeline and initiates the computation of a new inference every 64 cycles. This chip achieved approximately 160,000 FLIPS. The new architecture has the following important improvements over the AT&T chip: programmable rule set memory (RAM); on-chip fuzzification by a table lookup method; on-chip defuzzification by a centroid method; a reconfigurable architecture for processing two rule formats; and RAM/datapath redundancy for higher yield. It can store and execute 51 if-then rules of the following format: IF A and B and C and D THEN Do E, and THEN Do F. With this format, the chip takes four inputs and produces two outputs. By software reconfiguration, it can store and execute 102 if-then rules of the following simpler format using the same datapath: IF A and B THEN Do E. With this format the chip takes two inputs and produces one output. We have built two VME-bus board systems based on this chip for Oak Ridge National Laboratory (ORNL). The board is now installed in a robot at ORNL, where researchers use it for experiments in autonomous robot navigation. The Fuzzy Logic system board places the fuzzy chip into a VMEbus environment. High-level C language functions hide the operational details of the board from the application programmer. The programmer treats rule memories and fuzzification function memories as local structures passed as parameters to the C functions. ASIC fuzzy inference hardware is extremely fast, but it is limited in generality; many aspects of the design are limited or fixed. We have therefore proposed designing a fuzzy information processor as an application-specific processor using a quantitative approach. The quantitative approach was developed by RISC designers.
In effect, we are interested in evaluating the effectiveness of a specialized RISC processor for fuzzy information processing. As the first step, we measured the possible speed-up of a fuzzy inference program based on if-then rules by the introduction of specialized instructions, i.e., min and max instructions. The minimum and maximum operations are heavily used in fuzzy logic applications as fuzzy intersection and union. We performed measurements using a MIPS R3000 as a base microprocessor. The initial result is encouraging: we can achieve as high as a 2.5-fold increase in inference speed if the R3000 had min and max instructions. They are also useful for speeding up other fuzzy operations such as bounded product and bounded sum. An embedded processor's main task is to control some device or process, and it usually runs a single or a few dedicated programs, so modifying an existing microprocessor to create an embedded processor for fuzzy control is very effective. Table I shows the measured speed of inference by a MIPS R3000 microprocessor, a fictitious MIPS R3000 microprocessor with min and max instructions, and the UNC/MCNC ASIC fuzzy inference chip. The software used on the microprocessors is a simulator of the ASIC chip. The first row is the computation time in seconds of 6000 inferences using 51 rules, where each fuzzy set is represented by an array of 64 elements. The second row is the time required to perform a single inference. The last row is the fuzzy logical inferences per second (FLIPS) measured for each device. There is a large gap in run time between the ASIC and software approaches even if we resort to a specialized fuzzy microprocessor. As for design time and cost, these two approaches represent two extremes; an ASIC approach is extremely expensive. It is, therefore, an important research topic to design a specialized computing architecture for fuzzy applications that falls between these two extremes both in run time and in design time/cost.

    TABLE I. INFERENCE TIME BY 51 RULES

                        MIPS R3000 (regular)   MIPS R3000 (with min/max)   ASIC
    6000 inferences     125 s                  49 s                        0.0038 s
    1 inference         20.8 ms                8.2 ms                      6.4 µs
    FLIPS               48                     122                         156,250
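
For orientation, the sketch below works through max-min compositional inference with Mamdani implication and centroid defuzzification over discretized fuzzy sets, the same min/max-heavy arithmetic that the ASIC datapaths and the proposed min/max RISC instructions accelerate. Rule contents, set shapes, and inputs are illustrative.

```python
# Minimal sketch of max-min compositional inference with Mamdani implication
# over discretized fuzzy sets (64-element membership arrays, as in the abstract).
import numpy as np

U = np.linspace(0.0, 1.0, 64)                       # common universe of discourse

def tri(center, width):
    """Triangular membership function sampled on U."""
    return np.clip(1.0 - np.abs(U - center) / width, 0.0, 1.0)

# Two rules of the simpler format: IF A and B THEN Do E
rules = [
    {"A": tri(0.2, 0.2), "B": tri(0.3, 0.2), "E": tri(0.25, 0.2)},
    {"A": tri(0.7, 0.2), "B": tri(0.8, 0.2), "E": tri(0.75, 0.2)},
]

def infer(a_val, b_val):
    agg = np.zeros_like(U)
    for r in rules:
        # firing strength: min of the antecedent memberships at the crisp inputs
        w = min(np.interp(a_val, U, r["A"]), np.interp(b_val, U, r["B"]))
        agg = np.maximum(agg, np.minimum(w, r["E"]))  # Mamdani clip, max aggregation
    return np.sum(agg * U) / (np.sum(agg) + 1e-12)    # centroid defuzzification

print(infer(0.25, 0.3))   # output near 0.25
print(infer(0.7, 0.85))   # output near 0.75
```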

Study on Improvement of Weil Pairing IBE for Secret Document Distribution (기밀문서유통을 위한 Weil Pairing IBE 개선 연구)

  • Choi, Cheong-Hyeon
    • Journal of Internet Computing and Services
    • /
    • v.13 no.2
    • /
    • pp.59-71
    • /
    • 2012
  • The PKI-based public key scheme is outstanding in terms of authenticity and privacy. Nevertheless, its application brings a heavy burden due to certificate/key management, and it is difficult to apply to limited computing devices in a WSN because of its high encryption complexity. Bilinear pairing, which emerged from the original IBE to eliminate the certificate, is a significant future cryptosystem: it is based on the DDH (Decisional DH) problem, is efficient in terms of computation, and is secure enough for authentication while also being faster. The practical EC Weil pairing shows that its encryption algorithm is simple and that it satisfies IND/NM security constraints against CCA. From an operational perspective, a Random Oracle Model based IBE PKG is appropriate to the structure of our target system with one secret file server. Our work proposes a modification of the Weil pairing IBE suited to a closed network for secret file distribution [2]. First, we propose an improved scheme that computes both encryption and message/user authentication as fast as the O(DES) level, so that our scheme satisfies privacy, authenticity, and integrity. Secondly, by using the public key ID as effectively as PKI, our improved IBE variant reduces the key exposure risk.
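
For reference, the display below sketches the textbook Weil-pairing IBE construction (Boneh-Franklin BasicIdent) that random-oracle-model IBE schemes of this kind build on; it is background notation, not the paper's modified variant.

```latex
% Boneh-Franklin BasicIdent (textbook form): pairing e: G1 x G1 -> G2,
% generator P, master secret s, hash functions H1, H2 modeled as random oracles.
\begin{align*}
\text{Setup:}   \quad & P_{\mathrm{pub}} = sP \\
\text{Extract:} \quad & Q_{\mathrm{ID}} = H_1(\mathrm{ID}), \qquad d_{\mathrm{ID}} = s\,Q_{\mathrm{ID}} \\
\text{Encrypt:} \quad & C = (U, V) = \bigl(rP,\; M \oplus H_2\bigl(e(Q_{\mathrm{ID}}, P_{\mathrm{pub}})^{r}\bigr)\bigr),
                        \quad r \in \mathbb{Z}_q^{*} \text{ random} \\
\text{Decrypt:} \quad & M = V \oplus H_2\bigl(e(d_{\mathrm{ID}}, U)\bigr)
\end{align*}
```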

THE LORENTZ FORCE IN ATMOSPHERES OF CP STARS: θ AUR

  • VALYAVIN G.;KOCHUKHOV O.;SHULYAK D.;LEE B.-C.;GALAZUTDINOV G.;KIM K.-M.;HAN I.
    • Journal of The Korean Astronomical Society
    • /
    • v.38 no.2
    • /
    • pp.283-287
    • /
    • 2005
  • The slow evolution of global magnetic fields and other dynamical processes in the atmospheres of magnetic CP stars lead to the development of induced electric currents in all conductive atmospheric layers. The Lorentz force, which results from the interaction between the magnetic field and the induced currents, may modify the atmospheric structure and provide insight into the formation and evolution of stellar magnetic fields. This modification of the pressure-temperature structure influences the formation of absorption spectral features, producing characteristic rotational variability of some spectral lines, especially the Balmer lines (Valyavin et al., 2004 and references therein). In order to study these theoretical predictions we began a systematic spectroscopic survey of Balmer line variability in the spectra of the brightest magnetic CP stars. Here we present the first results of the program. The A0p star $\theta$ Aur revealed significant variability of the Balmer profiles during the star's rotation. The character of this variability corresponds to that classified by Kroll (1989) as the result of a significant Lorentz force. From the obtained data we estimate that the amplitudes of the variation in the H$\alpha$, H$\beta$, H$\gamma$ and H$\delta$ profiles reach up to $2.4\%$ during the full rotation cycle of the star. Using computations of our model atmospheres (Valyavin et al., 2004) we interpret these data within the framework of the simplest model of the evolution of global magnetic fields in chemically peculiar stars. Assuming that the field is represented by a dipole, we estimate the characteristic e.m.f. induced by the field-decay electric current (and hence the Lorentz force) to be on the order of $E \sim 10^{-11}$ cgs units, which may indicate a very fast ($\ll 10^{10}$ years) evolution rate of the field. This result strongly contradicts the theoretical point of view that the global magnetic fields of CP stars are fossil and that their characteristic decay time is about $10^{10}$ yr. Alternatively, we briefly discuss competing effects (such as ambipolar diffusion) which may also lead to significant atmospheric currents producing the observable Lorentz force.
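
For orientation, the pressure-temperature modification discussed above enters through a Lorentz-force term in the hydrostatic equilibrium equation; the display below gives the standard magnetohydrostatic form (cf. Valyavin et al., 2004), quoted here as background rather than taken from this abstract.

```latex
% Hydrostatic equilibrium with the Lorentz-force term (cgs units):
% the induced current density j crossed with the magnetic field B adds a
% body force to the usual gas-pressure / gravity balance.
\frac{dP}{dr} \;=\; -\rho\, g \;+\; \frac{1}{c}\,\left(\mathbf{j} \times \mathbf{B}\right)_{r}
```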

Detection of Gradual Transitions in MPEG Compressed Video using Hidden Markov Model (은닉 마르코프 모델을 이용한 MPEG 압축 비디오에서의 점진적 변환의 검출)

  • Choi, Sung-Min;Kim, Dai-Jin;Bang, Sung-Yang
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.3
    • /
    • pp.379-386
    • /
    • 2004
  • Video segmentation is a fundamental task in video indexing, and it includes two kinds of shot change detection: abrupt transitions and gradual transitions. Abrupt shot boundaries are detected by computing an image-based distance between adjacent frames and comparing this distance with a pre-determined threshold value. However, gradual shot boundaries are difficult to detect with this approach. To overcome this difficulty, we propose a method that detects gradual transitions in MPEG compressed video using an HMM (Hidden Markov Model). We use two different HMMs: a discrete HMM and a continuous HMM with a Gaussian mixture model. As image features for the HMM's observations, we use two distinct features: the difference of the histograms of DC images between two adjacent frames, and the difference of the deviations of corresponding macroblocks between two adjacent frames, where the deviation is the arithmetic difference of each macroblock's DC value from the mean of the DC values in the given frame. Furthermore, we obtain the DC sequences of P and B frames by a first-order approximation for fast and effective computation. Experimental results show that the best detection and classification performance for gradual transitions is obtained when a continuous HMM with a single Gaussian model is used and the two image features are used together.
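
The two observation features described above can be sketched as in the code below, which computes the DC-image histogram difference and the macroblock-deviation difference for a pair of adjacent frames. The bin count and normalization are illustrative assumptions.

```python
# Minimal sketch of the two HMM observation features described above,
# computed from DC images (one DC value per 8x8 macroblock).
import numpy as np

def hmm_observations(dc_prev, dc_curr, bins=32):
    """dc_prev, dc_curr: 2-D arrays of macroblock DC values for adjacent frames."""
    # Feature 1: histogram difference of the two DC images
    h1, _ = np.histogram(dc_prev, bins=bins, range=(0, 255))
    h2, _ = np.histogram(dc_curr, bins=bins, range=(0, 255))
    hist_diff = np.abs(h1 - h2).sum() / dc_prev.size

    # Feature 2: difference of per-macroblock deviations from the frame mean
    dev_prev = dc_prev - dc_prev.mean()
    dev_curr = dc_curr - dc_curr.mean()
    dev_diff = np.abs(dev_prev - dev_curr).mean()
    return hist_diff, dev_diff

# Toy adjacent DC images (e.g., 30x40 macroblocks)
prev = np.random.randint(0, 256, (30, 40)).astype(float)
curr = prev + np.random.normal(0, 2, prev.shape)
print(hmm_observations(prev, curr))
```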

Comparative Performance Analysis of Feature Detection and Matching Methods for Lunar Terrain Images (달 지형 영상에서 특징점 검출 및 정합 기법의 성능 비교 분석)

  • Hong, Sungchul;Shin, Hyu-Soung
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.40 no.4
    • /
    • pp.437-444
    • /
    • 2020
  • A lunar rover's optical camera is used to provide navigation and terrain information in an exploration zone. However, due to the near absence of an atmosphere, the Moon has homogeneous terrain with dark soil. Also, in this extreme environment, the rover has limited data storage and low computation capability. Thus, for successful exploration, it is necessary to examine feature detection and matching methods that are robust to lunar terrain and environmental characteristics. In this research, SIFT, SURF, BRISK, ORB, and AKAZE are comparatively analyzed with lunar terrain images from a lunar rover. Experimental results show that SIFT and AKAZE are the most robust to lunar terrain characteristics. AKAZE detects fewer feature points than SIFT, but its feature points are detected and matched with high precision and the least computational cost, so AKAZE is adequate for fast and accurate navigation information. Although SIFT has the highest computational cost, the largest number of feature points is stably detected and matched. The rover periodically sends terrain images to Earth, so SIFT is suitable for global 3D terrain map construction in that a large number of terrain images can be processed on Earth. The study results are expected to provide a guideline for utilizing feature detection and matching methods in future lunar exploration rovers.
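
A minimal sketch of the detect-and-match comparison is shown below using OpenCV's SIFT and AKAZE with brute-force matching and Lowe's ratio test; the image paths and ratio threshold are placeholders, and the evaluation protocol of the study is not reproduced.

```python
# Minimal sketch of detect-and-match with SIFT or AKAZE in OpenCV.
import cv2

def match_features(img1_path, img2_path, detector_name="AKAZE", ratio=0.75):
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)

    if detector_name == "SIFT":
        det, norm = cv2.SIFT_create(), cv2.NORM_L2        # float descriptors
    else:
        det, norm = cv2.AKAZE_create(), cv2.NORM_HAMMING  # binary descriptors

    kp1, des1 = det.detectAndCompute(img1, None)
    kp2, des2 = det.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(norm)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]           # Lowe's ratio test
    return len(kp1), len(kp2), len(good)

# e.g. match_features("terrain_t0.png", "terrain_t1.png", "SIFT")  # paths are placeholders
```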

Real-time Fluid Animation using Particle Dynamics Simulation and Pre-integrated Volume Rendering (입자 동역학 시뮬레이션과 선적분 볼륨 렌더링을 이용한 실시간 유체 애니메이션)

  • Lee Jeongjin;Kang Moon Koo;Kim Dongho;Shin Yeong Gil
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.32 no.1
    • /
    • pp.29-38
    • /
    • 2005
  • The fluid animation procedure consists of physical simulation and visual rendering. In the physical simulation of fluids, the most frequently used practices are the numerical simulation of fluid particles using particle dynamics equations and the continuum analysis of flow via the Navier-Stokes equations. The particle dynamics method is fast in calculation, but the resulting fluid motion is conditionally unrealistic. The method using the Navier-Stokes equations, on the contrary, yields lifelike fluid motion when properly conditioned, yet the complexity of calculation restrains this method from being used in real-time applications. Global illumination is generally successful in producing premium-quality rendered images, but it is also excessively slow for real-time applications. In this paper, we propose a rapid fluid animation method incorporating an enhanced particle dynamics simulation and a pre-integrated volume rendering technique. The particle dynamics simulation of fluid flow was conducted in real time using the Lennard-Jones model, and the computational efficiency was enhanced so that a small number of particles can represent a significant volume. For real-time rendering, the pre-integrated volume rendering method was used so that fewer slices than before can construct seamless inter-laminar shading. The proposed method could successfully simulate and render fluid motion in real time at an acceptable speed and visual quality.
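
The Lennard-Jones particle-dynamics stage can be illustrated with the short sketch below, which evaluates pairwise Lennard-Jones forces and advances the particles by one explicit step. The particle count, potential parameters, and time step are illustrative, and neighbor lists, boundaries, and the rendering stage are omitted.

```python
# Minimal sketch of one Lennard-Jones particle-dynamics step.
import numpy as np

def lj_forces(pos, epsilon=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces for particle positions pos (N, 3)."""
    diff = pos[:, None, :] - pos[None, :, :]               # (N, N, 3) displacement vectors
    r2 = np.sum(diff * diff, axis=-1) + np.eye(len(pos))   # avoid division by zero on the diagonal
    inv_r6 = (sigma * sigma / r2) ** 3
    # F_i = sum_j 24*eps*(2*(sigma/r)^12 - (sigma/r)^6) / r^2 * (r_i - r_j)
    fmag = 24.0 * epsilon * (2.0 * inv_r6 ** 2 - inv_r6) / r2
    np.fill_diagonal(fmag, 0.0)
    return np.sum(fmag[:, :, None] * diff, axis=1)          # (N, 3) net force per particle

# One explicit Euler step for a small particle cloud (unit mass assumed)
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 5.0, (64, 3))
vel = np.zeros_like(pos)
dt = 1e-4
vel += lj_forces(pos) * dt
pos += vel * dt
print(pos.shape)
```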

GPS L5 Signal Tracking Scheme Using GPS L1 Signal Tracking Results (GPS L1 신호추적 결과를 이용한 GPS L5 신호추적 기법)

  • Joo, Inone;Lee, Sanguk
    • Journal of Satellite, Information and Communications
    • /
    • v.7 no.3
    • /
    • pp.99-104
    • /
    • 2012
  • The United States will proceed with its effort to modernize the GPS system, and one of its main elements is the provision of the L5 signal. L5 will be transmitted in a radio band reserved exclusively for aviation safety services, and L5, in combination with L1, will improve position accuracy via ionospheric correction and robustness via signal redundancy. However, the acquisition of L5 takes longer than that of L1, as the code length of L5 is 10 times that of L1. To reduce this acquisition time, a larger number of correlators should be used in the acquisition module, but this increases the complexity of the correlator configuration and the required computation power. Therefore, in this paper, we propose an L5 signal tracking scheme that uses L1 tracking results in a GPS L1/L5 receiver. The proposed scheme reduces hardware complexity, as a separate GPS L5 signal acquisition module is not needed, and provides fast and stable tracking of the L5 signal by using L1 tracking results such as the PRN, code phase synchronization, and Doppler frequency as aiding information. The feasibility of the proposed scheme is demonstrated through simulation results.
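
The L1-aided initialization idea can be sketched as below: the L5 Doppler is predicted by scaling the tracked L1 Doppler with the carrier-frequency ratio, and the L5 code phase is predicted from the L1 code phase since both ranging codes repeat every 1 ms. The constants are nominal GPS values; the receiver's actual handover logic is not reproduced.

```python
# Minimal sketch of L1-aided L5 initialization: scale the L1 Doppler by the
# carrier-frequency ratio and map the L1 code phase into L5 chips (both
# ranging codes repeat every 1 ms; L5 uses 10x more chips at 10x the rate).
F_L1 = 1575.42e6           # Hz, nominal L1 carrier
F_L5 = 1176.45e6           # Hz, nominal L5 carrier
L1_CHIP_RATE = 1.023e6     # chips/s, C/A code (1023 chips per 1 ms)
L5_CHIP_RATE = 10.23e6     # chips/s, L5 code (10230 chips per 1 ms)

def l5_aiding_from_l1(l1_doppler_hz, l1_code_phase_chips):
    """Predict L5 carrier Doppler and code phase from tracked L1 values."""
    l5_doppler = l1_doppler_hz * (F_L5 / F_L1)
    # Convert the L1 code phase to time, then to L5 chips (same 1 ms epoch).
    tau = l1_code_phase_chips / L1_CHIP_RATE
    l5_code_phase = tau * L5_CHIP_RATE
    return l5_doppler, l5_code_phase

print(l5_aiding_from_l1(2500.0, 511.5))   # ~1866.9 Hz Doppler, 5115.0 chips
```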