• Title/Summary/Keyword: Information input algorithm

Search Results: 2,444

Enhancing Wind Speed and Wind Power Forecasting Using Shape-Wise Feature Engineering: A Novel Approach for Improved Accuracy and Robustness

  • Mulomba Mukendi Christian;Yun Seon Kim;Hyebong Choi;Jaeyoung Lee;SongHee You
    • International Journal of Advanced Culture Technology
    • /
    • v.11 no.4
    • /
    • pp.393-405
    • /
    • 2023
  • Accurate prediction of wind speed and power is vital for enhancing the efficiency of wind energy systems. Numerous solutions have been implemented to date, demonstrating their potential to improve forecasting. Among these, deep learning is perceived as a revolutionary approach in the field. However, despite their effectiveness, the noise present in the collected data remains a significant challenge. This noise can degrade the performance of these algorithms and lead to inaccurate predictions. In response, this study explores a novel feature engineering approach that alters the data input shape for both Convolutional Neural Network-Long Short-Term Memory (CNN-LSTM) and autoregressive models across various forecasting horizons. The results reveal substantial enhancements in model resilience against noise introduced by step increases in the data. The approach achieves 83% accuracy in predicting unseen data up to 24 steps ahead. Furthermore, it consistently provides high accuracy for short-, mid-, and long-term forecasts, outperforming the individual models. These findings pave the way for further research on noise reduction strategies at different forecasting horizons through shape-wise feature engineering.
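A minimal sketch (not the authors' code) of the kind of input-shape change the abstract describes: slicing a wind-speed series into a (samples, timesteps, features) tensor that a CNN-LSTM forecaster would consume. The window length and forecast horizon below are assumptions for illustration.

```python
# Window a 1-D wind-speed series into overlapping inputs and multi-step targets.
import numpy as np

def make_windows(series, window=24, horizon=24):
    """Return X of shape (samples, timesteps, 1) and y of shape (samples, horizon)."""
    X, y = [], []
    for start in range(len(series) - window - horizon + 1):
        X.append(series[start:start + window])
        y.append(series[start + window:start + window + horizon])
    X = np.asarray(X)[..., np.newaxis]   # add a trailing feature axis for the CNN-LSTM
    y = np.asarray(y)
    return X, y

if __name__ == "__main__":
    wind_speed = np.sin(np.linspace(0, 50, 1000)) + 0.1 * np.random.randn(1000)
    X, y = make_windows(wind_speed)
    print(X.shape, y.shape)   # (953, 24, 1) (953, 24)
```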

Implementation of the BLDC Motor Drive System using PFC converter and DTC (PFC 컨버터와 DTC를 이용한 BLDC 모터의 구동 시스템 구현)

  • Yang, Oh
    • Journal of the Institute of Electronics Engineers of Korea SC
    • /
    • v.44 no.5
    • /
    • pp.62-70
    • /
    • 2007
  • In this paper, the boost Power Factor Correction (PFC) technique for Direct Torque Control (DTC) of a brushless DC motor drive in the constant torque region is implemented on a TMS320F2812 DSP. Unlike conventional six-step PWM current control, by properly selecting the inverter voltage space vectors of the two-phase conduction mode from a simple look-up table at a predefined sampling time, the desired quasi-square wave current is obtained, and therefore a much faster torque response is achieved than with conventional current control. Furthermore, to eliminate the low-frequency torque oscillations caused by the non-ideal trapezoidal shape of the actual back-EMF waveform of the BLDC motor, a pre-stored back-EMF versus position look-up table is designed. The duty cycle of the boost converter is determined by a control algorithm based on the input voltage, the output voltage (which is the dc-link of the BLDC motor drive), and the inductor current, using the average current control method with input-voltage feed-forward compensation during each sampling period of the drive system. With the emergence of high-speed digital signal processors (DSPs), both the PFC and the simple DTC algorithms can be executed within a single sampling period of the BLDC motor drive. In the proposed method, since no PWM algorithm is required for the DTC of the BLDC motor drive, only one PWM output, for the boost converter with an 80 kHz switching frequency, is used in the TMS320F2812 DSP. The validity and effectiveness of the proposed DTC of the BLDC motor drive with PFC are verified through experimental results. The test results show that the proposed PFC improves the power factor considerably, from 0.77 to as high as 0.9997, under both load and no-load conditions.
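A minimal sketch (my own simplification, not the paper's DSP code) of one sampling-period update of average-current-mode PFC control with input-voltage feed-forward, the structure the abstract attributes to the duty-cycle computation. The gains are assumptions, and the voltage loop is P-only for brevity; a real controller adds integral action.

```python
def pfc_duty_cycle(v_in, v_out, i_L, v_out_ref, kp_v=0.05, kp_i=0.2):
    """One-period duty computation: voltage loop -> current reference -> current loop."""
    # Outer voltage loop: the dc-link voltage error sets the current amplitude.
    i_amp = kp_v * (v_out_ref - v_out)
    # The current reference follows the rectified input-voltage shape for a high power factor.
    i_ref = i_amp * v_in / max(v_out_ref, 1e-6)
    # Inner current loop plus feed-forward term (steady-state boost duty = 1 - Vin/Vout).
    d = (1.0 - v_in / max(v_out, 1e-6)) + kp_i * (i_ref - i_L)
    return min(max(d, 0.0), 1.0)   # clamp to a valid duty cycle

# Example call for one sampling period (illustrative numbers only).
print(pfc_duty_cycle(v_in=150.0, v_out=390.0, i_L=2.0, v_out_ref=400.0))
```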

Image Contrast Enhancement by Illumination Change Detection (조명 변화 감지에 의한 영상 콘트라스트 개선)

  • Odgerel, Bayanmunkh;Lee, Chang Hoon
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.24 no.2
    • /
    • pp.155-160
    • /
    • 2014
  • There are many image-processing algorithms and applications that fail when an illumination change occurs. The illumination change therefore has to be detected, and the affected images enhanced, so that the subsequent algorithm can keep operating correctly. In this paper, a new method for efficiently detecting illumination changes in real time using local region information and fuzzy logic is introduced. To detect illumination changes, the lighting area and the edge of that area are analyzed: the mean and variance of the histogram of each area are computed, and their changing trends relative to the previous frame's mean and variance are used as inputs. The changes of mean and variance form distinct patterns when an illumination change occurs, and fuzzy rules were defined over these input patterns to detect the change. The proposed method was tested on different datasets using standard evaluation metrics; in particular, specificity, recall, and precision were high. An automatic parameter selection method is also proposed for the contrast limited adaptive histogram equalization (CLAHE) method, using the entropy of the image through an adaptive neuro-fuzzy inference system. The results show that the contrast of the images could be enhanced. The proposed algorithm is robust in detecting global illumination change and is computationally efficient in real applications.
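A minimal sketch (my own crisp simplification of the paper's fuzzy rules): track the frame-to-frame change of mean and variance in a local region, flag an illumination change when both jump past assumed thresholds, then enhance the frame with OpenCV's CLAHE. Thresholds and CLAHE parameters are assumptions.

```python
import cv2
import numpy as np

def region_stats(gray, region):
    """Mean and variance of a rectangular region (x, y, w, h) of a uint8 grayscale frame."""
    x, y, w, h = region
    patch = gray[y:y + h, x:x + w].astype(np.float32)
    return patch.mean(), patch.var()

def detect_and_enhance(prev_gray, cur_gray, region, dm_th=20.0, dv_th=400.0):
    m0, v0 = region_stats(prev_gray, region)
    m1, v1 = region_stats(cur_gray, region)
    # Crisp thresholds stand in for the fuzzy rules over the mean/variance change patterns.
    changed = abs(m1 - m0) > dm_th and abs(v1 - v0) > dv_th
    if changed:
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        return changed, clahe.apply(cur_gray)
    return changed, cur_gray
```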

Real-Time Object Tracking Algorithm based on Pattern Classification in Surveillance Networks (서베일런스 네트워크에서 패턴인식 기반의 실시간 객체 추적 알고리즘)

  • Kang, Sung-Kwan;Chun, Sang-Hun
    • Journal of Digital Convergence
    • /
    • v.14 no.2
    • /
    • pp.183-190
    • /
    • 2016
  • This paper proposes an algorithm that reduces the computing time of a neural network used for tracking mobile objects in surveillance networks, lowering both the detection cost and the communication load. Object detection can be defined as follows: given an image sequence, the goal is to determine whether any object is present in an image and, if so, to return its location, direction, size, and so on. Detecting an object in a given image is considerably difficult because location, size, lighting conditions, obstacles, and so on change the overall appearance of objects, making it hard to detect them rapidly and exactly. Therefore, this paper proposes a fast and exact object detection method that overcomes some of these restrictions by using a neural network. The proposed system can detect objects rapidly regardless of obstacles, background, and pose. The neural network's calculation time is decreased by reducing the size of its input vector, using Principal Component Analysis (PCA) to reduce the dimension of the data. Experiments were run on real-time video input from a CCTV camera; for color segmentation, the success rate differed depending on the camera settings. Experimental results show that the proposed method attains 30% higher recognition performance than the conventional method.
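A minimal sketch (assumed dimensions and toy data, not the authors' pipeline) of the mechanism the abstract credits for the reduced computation time: PCA shrinks the input vector before it is fed to a neural-network classifier.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1024))      # e.g. flattened 32x32 image patches (toy data)
y = rng.integers(0, 2, size=500)      # object / non-object labels (toy data)

pca = PCA(n_components=50).fit(X)     # 1024-D patch -> 50-D feature vector
X_small = pca.transform(X)

# The classifier now sees a 50-D input instead of 1024-D, cutting its computation.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300).fit(X_small, y)
print(clf.score(X_small, y))
```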

An Object Detection and Tracking System using Fuzzy C-means and CONDENSATION (Fuzzy C-means와 CONDENSATION을 이용한 객체 검출 및 추적 시스템)

  • Kim, Jong-Ho;Kim, Sang-Kyoon;Hang, Goo-Seun;Ahn, Sang-Ho;Kang, Byoung-Doo
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.16 no.4
    • /
    • pp.87-98
    • /
    • 2011
  • Detecting a moving object in video and tracking it are basic and necessary preprocessing steps in many video systems, such as object recognition, context awareness, and intelligent visual surveillance. In this paper, we propose a method that detects a moving object quickly and accurately under background and lighting changes in real time. Furthermore, our system detects an object robustly even when the target object is partly occluded by other objects. For effective detection, an eigen-space and Fuzzy C-means (FCM) clustering are combined, and a CONDENSATION algorithm is used to track the detected object robustly. First, training data collected from background images are linearly transformed using Principal Component Analysis (PCA). Second, an eigen-background is constructed from the selected principal components that best discriminate between object and background. Next, an object is detected with FCM applied to the convolution of the eigen-vectors from the previous step with the input image. Finally, the object is tracked by using the coordinates of the detected object as the input of the CONDENSATION algorithm. Images containing various moving objects at the same time were collected and used as training data, so that the system adapts to changes of lighting and background under a fixed camera. Test results show that the proposed method detects an object robustly under changes of lighting and background and under partial movement of the object.
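A minimal sketch (not the paper's system) of the eigen-background idea: PCA is fit on background frames, and a pixel whose reconstruction error is large is treated as foreground. The FCM clustering and CONDENSATION tracking stages are omitted here; the component count and threshold are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def build_eigen_background(bg_frames, n_components=5):
    """bg_frames: array of shape (n_frames, h, w) grayscale background images (n_frames >= n_components)."""
    n, h, w = bg_frames.shape
    pca = PCA(n_components=n_components).fit(bg_frames.reshape(n, h * w))
    return pca, (h, w)

def foreground_mask(pca, shape, frame, threshold=25.0):
    """True where the eigen-background fails to explain the pixel."""
    flat = frame.reshape(1, -1).astype(np.float64)
    recon = pca.inverse_transform(pca.transform(flat))
    err = np.abs(flat - recon).reshape(shape)
    return err > threshold
```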

Learning-based Super-resolution for Text Images (글자 영상을 위한 학습기반 초고해상도 기법)

  • Heo, Bo-Young;Song, Byung Cheol
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.52 no.4
    • /
    • pp.175-183
    • /
    • 2015
  • The proposed algorithm consists of two stages: learning and synthesis. At the learning stage, we first collect various high-resolution (HR)-low-resolution (LR) text image pairs, quantize the LR images, and extract HR-LR block pairs. Based on the quantized LR blocks, the LR-HR block pairs are clustered into a pre-determined number of classes. For each class, an optimal 2D-FIR filter is computed and stored in a dictionary together with the corresponding LR block for indexing. At the synthesis stage, each quantized LR block in an input LR image is compared with every LR block in the dictionary, and the FIR filter of the best-matched LR block is selected. Finally, an HR block is synthesized with the chosen filter, and a final HR image is produced. Also, in order to cope with noisy environments, we generate multiple dictionaries according to noise level at the learning stage; the dictionary corresponding to the noise level of the input image is then chosen, and the final HR image is produced using the selected dictionary. Experimental results show that the proposed algorithm outperforms previous works for noisy as well as noise-free images.
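A minimal sketch (toy dictionary, not the authors' trained filters) of the synthesis step: the best-matched quantized LR block is looked up in the dictionary and the 2-D FIR filter stored with that class is applied. The quantization step is an assumption, and the LR block is assumed already interpolated to HR size for simplicity.

```python
import numpy as np
from scipy.signal import convolve2d

def synthesize_block(lr_block, dictionary):
    """dictionary: list of (quantized_lr_block, fir_filter) pairs learned per class."""
    q = np.round(lr_block / 32) * 32                          # coarse quantization (assumed step)
    best = min(dictionary, key=lambda e: np.sum((e[0] - q) ** 2))
    return convolve2d(lr_block, best[1], mode="same")         # HR block from the class filter

# Toy usage: a single dictionary entry whose filter is the 3x3 identity kernel.
identity = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], dtype=float)
dictionary = [(np.zeros((5, 5)), identity)]
hr_block = synthesize_block(np.random.rand(5, 5) * 255, dictionary)
print(hr_block.shape)   # (5, 5)
```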

Test Case Generation for Simulink/Stateflow Model Based on a Modified Rapidly Exploring Random Tree Algorithm (변형된 RRT 알고리즘 기반 Simulink/Stateflow 모델 테스트 케이스 생성)

  • Park, Han Gon;Chung, Ki Hyun;Choi, Kyung Hee
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.12
    • /
    • pp.653-662
    • /
    • 2016
  • This paper describes a test case generation algorithm for Simulink/Stateflow models based on the Rapidly exploring Random Tree (RRT) algorithm, which has been successfully applied to path finding. An important factor influencing the performance of the RRT algorithm is the metric used for calculating the distance between nodes in the RRT space. Since a test case for a Simulink/Stateflow (SL/SF) model is an input sequence that checks a specific condition (called a test target in this paper) at a specific status of the model, it is necessary to drive the model to that status before checking the condition. A status maps to a node of the RRT. It is usually necessary to check various conditions at a specific status: for example, when the status represents an SL/SF model state from which multiple transitions are made, we must check multiple conditions to measure transition coverage. We propose a distance calculation metric based on the observation that test targets tend to gather around specific statuses, such as SL/SF states, named key nodes in this paper. The proposed metric increases the probability that the RRT is extended from key nodes by imposing penalties on non-key nodes. A test case generation algorithm utilizing the proposed metric is then presented. Three models of Electronic Control Units (ECUs) embedded in a commercial vehicle are used for the performance evaluation. The performance is evaluated in terms of penalties and compared with that of an algorithm using the standard RRT.
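A minimal sketch (the penalty weight is an assumption, not the paper's value) of a nearest-node selection that penalizes non-key nodes so the tree tends to extend from key nodes, in the spirit of the proposed metric.

```python
import math

def penalized_distance(node, sample, key_nodes, penalty=3.0):
    """Euclidean distance, inflated when the tree node is not a key node."""
    d = math.dist(node, sample)
    return d if node in key_nodes else penalty * d

def nearest_node(tree, sample, key_nodes):
    """Pick the tree node from which the RRT will be extended toward the sample."""
    return min(tree, key=lambda n: penalized_distance(n, sample, key_nodes))

# Usage: the key node wins against a non-key node that is otherwise competitive.
tree = [(0.0, 0.0), (1.0, 1.0)]
key_nodes = {(1.0, 1.0)}
print(nearest_node(tree, (2.0, 2.0), key_nodes))   # (1.0, 1.0)
```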

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.71-88
    • /
    • 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model correlations between input units efficiently because they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning has developed, recurrent neural network (RNN) and long short-term memory (LSTM) models have been widely used for neural language modeling (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can capture dependencies between the objects entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally contains a huge number of words or morphemes, the dictionary becomes very large and model complexity increases. In addition, word-level or morpheme-level models can only generate vocabulary contained in the training set. Furthermore, for highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to introduce errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that composes Korean text. We construct the language model using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm and more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted on Old Testament texts using the deep learning package Keras with the Theano backend. After pre-processing the texts, the dataset contained 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters and outputs consisting of the following (21st) character. In total, 1,023,411 input-output pairs were included in the dataset, divided into training, validation, and test sets in the proportion 70:15:15. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss evaluated on the validation set, the perplexity evaluated on the test set, and the time taken to train each model. As a result, all optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm; the stochastic gradient algorithm also took the longest training time for both the 3- and 4-layer LSTM models. On average, the 4-layer LSTM model took 69% longer to train than the 3-layer model, yet its validation loss and perplexity were not significantly improved and even worsened under some conditions. On the other hand, when comparing the automatically generated sentences, the 4-layer LSTM model tended to generate sentences closer to natural language than the 3-layer model. Although there were slight differences in the completeness of the generated sentences between the models, the sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost grammatically perfect. The results of this study are expected to be widely used for the processing of Korean in the fields of language processing and speech recognition, which are the basis of artificial intelligence systems.
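A minimal sketch of the described setup: a 3-layer LSTM that reads 20 one-hot character/phoneme vectors over a 74-symbol alphabet and predicts the 21st symbol. The layer width is an assumption, and the sketch is written against the current tensorflow.keras API rather than the Keras/Theano combination used in the paper.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

seq_len, vocab = 20, 74                      # 20-character window, 74 unique symbols
model = Sequential([
    LSTM(256, return_sequences=True, input_shape=(seq_len, vocab)),
    LSTM(256, return_sequences=True),
    LSTM(256),                               # last LSTM layer returns a single vector
    Dense(vocab, activation="softmax"),      # probability over the next character
])
# "adam" is one of the optimizers compared in the paper.
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()
```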

Design and Implementation of a Query Processing Algorithm for Distributed Semistructured Documents Retrieval with Metadata Interface (메타데이타 인터페이스를 이용한 분산된 반구조적 문서 검색을 위한 질의처리 알고리즘 설계 및 구현)

  • Choe Cuija;Nam Young-Kwang
    • Journal of KIISE:Software and Applications
    • /
    • v.32 no.6
    • /
    • pp.554-569
    • /
    • 2005
  • In semistructured distributed documents, it is very difficult to formalize and implement a query processing system because the data lack structure and rules. To precisely retrieve and process heterogeneous semistructured documents, it is necessary to handle multiple mappings such as 1:1, 1:N, and N:1 on an element simultaneously and to generate the schema from the distributed documents. In this paper, we propose a query processing algorithm for querying and answering over heterogeneous semistructured data or documents in distributed systems, and implement it with a metadata interface. The algorithm for generating local queries from a global query consists of mapping between global and local nodes, data transformation according to the mapping types, path substitution, and resolving the heterogeneity among nodes of the global input query using metadata information. The mapping, transformation, and path substitution algorithms between the global schema and the local schemas have been implemented in the metadata interface called DBXMI (Distributed Documents XML Metadata Interface). Nodes with the same name but different mappings or meanings are resolved by automatically extracting node identification information from the local schema. The system uses Quilt as its XML query language. Experimental testing is reported over three different OEM-model semistructured restaurant documents. The prototype system was developed on Windows with Java and the JavaCC compiler.
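A minimal sketch (illustrative mapping table, not the DBXMI implementation) of the path-substitution step: element paths of a global query are rewritten into a local document's paths via a metadata mapping before local queries are issued. The paths and mapping below are hypothetical.

```python
# Hypothetical global-to-local path mapping, standing in for the metadata interface.
GLOBAL_TO_LOCAL = {
    "/restaurant/name": "/eatery/title",
    "/restaurant/menu/item": "/eatery/dishes/dish",
}

def to_local_path(global_path):
    """Substitute a global element path with the local path, if a mapping exists."""
    return GLOBAL_TO_LOCAL.get(global_path, global_path)

print(to_local_path("/restaurant/menu/item"))   # /eatery/dishes/dish
```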

Study on the method of safety diagnosis of electrical equipments using fuzzy algorithm (퍼지알고리즘을 이용한 전기전자기기의 안전진단방법에 대한 연구)

  • Lee, Jae-Cheol
    • Journal of Digital Convergence
    • /
    • v.16 no.7
    • /
    • pp.223-229
    • /
    • 2018
  • Recently, the necessity of safety diagnosis of electrical devices has been increasing as fires caused by electrical devices have increased rapidly. This study concerns the safety diagnosis of electrical equipment using intelligent fuzzy technology. Multiple electrical safety factors, such as operating current, cumulative usage time, deterioration, and arc characteristics inherent to the equipment, are used as diagnostic inputs. To extract this information in real time, a device composed of various sensor circuits, DSP signal processing, and a communication circuit is implemented. A fuzzy logic algorithm using a Gaussian membership function for each input is designed and compiled to run on a small DSP board. The fuzzy logic receives the four diagnostic inputs, infers with the fuzzy engine, and outputs the overall safety status of the device as a 100-step analog fuzzy value that is familiar to human sensibility. Experiments with a device combining the implemented hardware and fuzzy algorithm verify that it can run on a small DSP board and diagnose real-time safety conditions, expressed as human-friendly fuzzy values, during operation of the electrical equipment. In the future, we expect more intelligent diagnostic systems to be studied, based on artificial intelligence with an AI-dedicated microcontroller.
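A minimal sketch (membership centers and widths are assumptions, not the paper's tuned values) of Gaussian membership functions over the four diagnostic inputs and a simple averaged defuzzification yielding a 0-100 safety score, in the spirit of the described fuzzy engine.

```python
import math

def gaussian(x, center, sigma):
    """Gaussian membership degree of x in a fuzzy set centered at `center`."""
    return math.exp(-((x - center) ** 2) / (2 * sigma ** 2))

def safety_score(current_a, use_hours, deterioration, arc_level):
    # Degree of membership in each input's "dangerous" set (toy parameters).
    danger = [
        gaussian(current_a, 30.0, 8.0),        # amperes near an assumed rated limit
        gaussian(use_hours, 20000.0, 5000.0),  # cumulative usage time
        gaussian(deterioration, 0.8, 0.2),     # normalized 0..1
        gaussian(arc_level, 0.9, 0.15),        # normalized 0..1
    ]
    # Defuzzify: 100 = fully safe, 0 = fully dangerous.
    return 100.0 * (1.0 - sum(danger) / len(danger))

print(round(safety_score(12.0, 3000.0, 0.1, 0.05), 1))
```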