• Title/Summary/Keyword: sliding window

Search Results: 236

Implementation of an Efficient Requirements Analysis supporting System using Similarity Measure Techniques (유사도 측정 기법을 이용한 효율적인 요구 분석 지원 시스템의 구현)

  • Kim, Hark-Soo; Ko, Young-Joong; Park, Soo-Yong; Seo, Jung-Yun
    • Journal of KIISE: Software and Applications / v.27 no.1 / pp.13-23 / 2000
  • As software becomes more complicated and large-scale, users' demands become more varied and their expectations of software products rise. It is therefore very important that a software engineer analyzes users' requirements precisely and applies them effectively in the development step. This paper presents a requirements analysis system that effectively reduces and revises errors in requirements specification analysis. By measuring the similarity among requirements documents and sentences, the system assists users in analyzing the dependency among requirements specifications and in finding traceability, redundancy, inconsistency, and incompleteness among requirements sentences. It also extracts sentences that contain ambiguous words. The indexing method for the similarity measurement combines a sliding window model and a dependency structure model, so that each model's weaknesses are complemented. This paper verifies the efficiency of the similarity measure techniques through experiments and presents a process of requirements specification analysis using the implemented system.
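
The abstract does not spell out the indexing details, so the following is only a minimal sketch of the sliding-window half of the scheme: overlapping word windows (bigrams here) serve as index terms, and two requirement sentences are compared by cosine similarity, which is one simple way redundancy between sentences could be scored. The window size and the example sentences are assumptions for illustration.

```python
from collections import Counter
from math import sqrt

def window_features(tokens, size=2):
    """Collect overlapping word windows (n-grams) as indexing terms."""
    return Counter(tuple(tokens[i:i + size])
                   for i in range(len(tokens) - size + 1))

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

s1 = "the system shall log every failed login attempt".split()
s2 = "every failed login attempt shall be logged by the system".split()
print(cosine(window_features(s1), window_features(s2)))  # higher score -> likely redundancy
```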


An adaptive Fuzzy Binarization (적응 퍼지 이진화)

  • Jeon, Wang-Su; Rhee, Sang-Yong
    • Journal of the Korean Institute of Intelligent Systems / v.26 no.6 / pp.485-492 / 2016
  • Binarization plays a very important role in separating the foreground from the background in computer vision. In this study, an adaptive fuzzy binarization is proposed. An α-cut control ratio is obtained from the distribution of grey levels of the pixels in a sliding window, and binarization is performed using this value. To obtain the α-cut, existing thresholding methods with fast execution speed are used. The threshold values are set as the centers of the membership functions, and the fuzzy intervals of the functions are specified from the distribution of grey levels of the pixels. The α-cut control ratio is then calculated using the specified functions, and binarization is performed according to the membership degree of the pixels. The experimental results show that the proposed method segments the foreground and background better than existing binarization methods and reduces loss of the foreground.
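
As a rough illustration only, the sketch below binarizes each window by a membership degree against an α-cut. The window mean stands in for the fast threshold (e.g. Otsu) used as the membership center, the fuzzy interval is taken from the window's grey-level spread, and windows are tiled rather than overlapped; all of these choices are assumptions for brevity, not the paper's exact formulation.

```python
import numpy as np

def fuzzy_binarize(img, win=32, alpha_cut=0.5):
    """Binarize each window of a grey-scale image by a fuzzy membership degree.

    The window mean stands in for a fast threshold (e.g. Otsu) as the
    membership-function center, and the fuzzy interval is taken from the
    window's grey-level spread; pixels whose foreground membership reaches
    the alpha-cut are set to 255.
    """
    out = np.zeros_like(img, dtype=np.uint8)
    for y in range(0, img.shape[0], win):
        for x in range(0, img.shape[1], win):
            w = img[y:y + win, x:x + win].astype(float)
            center = w.mean()                        # stand-in threshold
            spread = max(w.max() - w.min(), 1.0)     # fuzzy interval width
            membership = np.clip((w - center) / spread + 0.5, 0.0, 1.0)
            out[y:y + win, x:x + win] = np.where(membership >= alpha_cut, 255, 0)
    return out

binary = fuzzy_binarize(np.random.randint(0, 256, (128, 128), dtype=np.uint8))
```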

Prediction of Baltic Dry Index by Applications of Long Short-Term Memory (Long Short-Term Memory를 활용한 건화물운임지수 예측)

  • HAN, Minsoo; YU, Song-Jin
    • Journal of Korean Society for Quality Management / v.47 no.3 / pp.497-508 / 2019
  • Purpose: The purpose of this study is to overcome the limitations of conventional studies on predicting the Baltic Dry Index (BDI). The study applies an Artificial Neural Network (ANN) architecture, Long Short-Term Memory (LSTM), to predict the BDI. Methods: The BDI time-series prediction was carried out with eight variables related to the dry bulk market. The prediction was conducted in two steps. First, the goodness of fit of specific ANN models for the BDI time series was identified and the network structures to be used in the next step were determined. Exploiting the generalization capability of ANNs, the structures determined in the previous step were then used in the empirical prediction step, and the sliding-window method was applied to make a daily (one-day-ahead) prediction. Results: At the empirical prediction step, it was possible to predict the variable y (the BDI time series) at time t from the eight dry-bulk-market variables x at time t-1. LSTM, known to be good at learning over long periods of time, showed the best performance, with higher predictive accuracy than the Multi-Layer Perceptron (MLP) and the Recurrent Neural Network (RNN). Conclusion: Applying this study to real business would require long-term predictions made with more detailed forecasting techniques. We hope that the research can provide a point of reference in the dry bulk market, and furthermore in decision-making and investment in the future of the shipping business as a whole.
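
A minimal sketch of the sliding-window setup for one-day-ahead prediction, assuming a Keras LSTM. The window length, layer size, and the synthetic stand-in data are assumptions for illustration, not the paper's configuration (the abstract itself describes predicting from only the previous day's variables).

```python
import numpy as np
import tensorflow as tf

def make_windows(features, target, window=20):
    """Slide a fixed-length window over the series: the past `window` days of
    the eight market variables become one sample, the next-day BDI its label."""
    X, y = [], []
    for t in range(window, len(target)):
        X.append(features[t - window:t])
        y.append(target[t])
    return np.array(X, dtype=np.float32), np.array(y, dtype=np.float32)

# Synthetic stand-ins: 1000 days, 8 dry-bulk-market variables, one BDI series.
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 8))
bdi = features.sum(axis=1).cumsum()

X, y = make_windows(features, bdi, window=20)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=X.shape[1:]),   # (window, 8) -> hidden state
    tf.keras.layers.Dense(1),                            # one-day-ahead BDI estimate
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```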

Vehicle Detection in Aerial Images Based on Hyper Feature Map in Deep Convolutional Network

  • Shen, Jiaquan; Liu, Ningzhong; Sun, Han; Tao, Xiaoli; Li, Qiangyi
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.4 / pp.1989-2011 / 2019
  • Vehicle detection based on aerial images is an interesting and challenging research topic. Most traditional vehicle detection methods are based on the sliding-window search algorithm, but these methods are not sufficient for extracting object features and are accompanied by heavy computational costs. Recent studies have shown that convolutional neural network algorithms have made significant progress in computer vision, especially Faster R-CNN. However, this algorithm mainly detects objects in natural scenes and is not suitable for detecting small objects in aerial views. In this paper, an accurate and effective vehicle detection algorithm based on Faster R-CNN is proposed. Our method fuses a hyper feature map network with Eltwise and Concat models, which is more conducive to the extraction of small-object features. Moreover, suitable anchor boxes based on the size of the objects are set in our model, which also effectively improves detection performance. We evaluate the detection performance of our method on the Munich dataset and our collected dataset, showing improvements in accuracy and efficiency compared with other methods. Our model achieves an 82.2% recall rate and a 90.2% accuracy rate on the Munich dataset, increases of 2.5 and 1.3 percentage points respectively over the state-of-the-art methods.
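
The hyper feature map fusion is specific to the paper, but the anchor-box idea carries over to off-the-shelf detectors. Below is a sketch of configuring small, object-sized anchor boxes on a stock torchvision Faster R-CNN; the MobileNet backbone and the anchor sizes are assumptions for illustration, not the paper's network.

```python
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

# Lightweight backbone as a stand-in; the paper builds its own fused feature map.
backbone = torchvision.models.mobilenet_v2(weights=None).features
backbone.out_channels = 1280

# Small anchors sized for vehicles seen from above (hypothetical values).
anchor_generator = AnchorGenerator(
    sizes=((16, 24, 32, 48, 64),),
    aspect_ratios=((0.5, 1.0, 2.0),),
)
roi_pooler = torchvision.ops.MultiScaleRoIAlign(
    featmap_names=["0"], output_size=7, sampling_ratio=2
)

# Two classes: background and vehicle.
model = FasterRCNN(
    backbone,
    num_classes=2,
    rpn_anchor_generator=anchor_generator,
    box_roi_pool=roi_pooler,
)
```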

Comparison of Deep Learning-based CNN Models for Crack Detection (콘크리트 균열 탐지를 위한 딥 러닝 기반 CNN 모델 비교)

  • Seol, Dong-Hyeon; Oh, Ji-Hoon; Kim, Hong-Jin
    • Journal of the Architectural Institute of Korea Structure & Construction / v.36 no.3 / pp.113-120 / 2020
  • The purpose of this study is to compare Deep Learning-based Convolutional Neural Network (CNN) models for concrete crack detection. The compared models are AlexNet, GoogLeNet, VGG16, VGG19, ResNet-18, ResNet-50, ResNet-101, and SqueezeNet, which have won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). To train, validate, and test these models, we constructed 3,000 training images and 12,000 validation images at 256×256 pixel resolution, consisting of cracked and non-cracked images, and 5 test images at 4,160×3,120 pixel resolution consisting of concrete images with cracks. To increase training efficiency, transfer learning was performed by taking the weights from the pre-trained networks provided by MATLAB. Using the trained networks, the validation data were classified into crack and non-crack images, yielding True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) counts, from which six performance indicators, False Negative Rate (FNR), False Positive Rate (FPR), Error Rate, Recall, Precision, and Accuracy, were calculated. Each test image was scanned twice with a sliding window of 256×256 pixel resolution to classify the cracks, resulting in a crack map. From the comparison of the performance indicators and the crack maps, it was concluded that VGG16 and VGG19 were the most suitable for detecting concrete cracks.
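
A minimal sketch of how such a crack map can be built by scanning a large test image with a 256×256 window. The overlapping stride and the dummy classifier are assumptions for illustration; in practice a trained CNN such as VGG16 would stand behind `classify`.

```python
import numpy as np

def crack_map(image, classify, win=256, stride=128):
    """Scan a large image with a 256x256 window in overlapping passes and mark
    every window the classifier calls cracked; `classify` is any callable, in
    practice a wrapper around a trained CNN forward pass."""
    h, w = image.shape[:2]
    heat = np.zeros((h, w), dtype=np.uint16)
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            if classify(image[y:y + win, x:x + win]):
                heat[y:y + win, x:x + win] += 1
    return heat > 0   # boolean crack map of the full image

# Toy usage with a dummy classifier: "cracked" if the patch is dark on average.
img = np.random.randint(0, 256, size=(3120, 4160), dtype=np.uint8)
cmap = crack_map(img, classify=lambda patch: patch.mean() < 60)
```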

Real-time Moving Object Detection Based on RPCA via GD for FMCW Radar

  • Nguyen, Huy Toan; Yu, Gwang Hyun; Na, Seung You; Kim, Jin Young; Seo, Kyung Sik
    • The Journal of Korean Institute of Information Technology / v.17 no.6 / pp.103-114 / 2019
  • Moving-target detection using frequency-modulated continuous-wave (FMCW) radar systems has recently attracted attention. Detection tasks are more challenging with noise resulting from signals reflected from strong static objects or small moving objects (clutter) within radar range. A Robust Principal Component Analysis (RPCA) approach for FMCW radar to detect moving objects in noisy environments is employed in this paper. In detail, compensation and calibration are first applied to the raw input signals. Then, RPCA via Gradient Descent (RPCA-GD) is adopted to model the low-rank noisy background. A novel update algorithm for RPCA is proposed to reduce the computational cost. Finally, moving targets are localized using an Automatic Multiscale-based Peak Detection (AMPD) method. All processing steps are based on a sliding-window approach. The proposed scheme shows impressive results in both processing time and accuracy in comparison with other RPCA-based approaches in various experimental scenarios.
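
The paper's RPCA-GD update and AMPD detector are beyond a short example, so the sketch below only illustrates the sliding-window structure: a truncated SVD stands in for the low-rank clutter model and a simple mean-plus-k-sigma threshold on the residual stands in for AMPD. Both substitutions, and the synthetic frames, are assumptions.

```python
import numpy as np

def detect_moving(frames, rank=1, k_sigma=3.0):
    """Approximate static clutter in a window of radar frames with a rank-`rank`
    SVD background (a stand-in for the paper's RPCA-GD low-rank model) and flag
    range bins whose residual energy exceeds a simple threshold (a stand-in for
    AMPD peak detection)."""
    X = np.stack(frames)                          # (n_frames, n_range_bins)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    background = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    residual = np.abs(X - background).mean(axis=0)
    thr = residual.mean() + k_sigma * residual.std()
    return np.flatnonzero(residual > thr)         # candidate moving-target bins

def sliding_detection(stream, win=32, hop=8):
    """Process the frame stream with an overlapping sliding window of frames."""
    buf = []
    for i, frame in enumerate(stream):
        buf.append(frame)
        if len(buf) == win:
            yield i, detect_moving(buf)
            buf = buf[hop:]                       # slide the window forward

# Toy usage on synthetic frames of 256 range bins each.
stream = (np.random.randn(256) for _ in range(100))
for frame_idx, bins in sliding_detection(stream):
    print(frame_idx, bins)
```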

The evaluation of usefulness of Electronic Portal Imaging Device(EPID) (Electronic Portal Imaging Device(EPID)의 유용성 평가)

  • Lee, Yang-Hoon; Kim, Bo-Kyoum; Jung, Chi-Hoon; Lee, Je-Hee; Park, Heung-Deuk
    • The Journal of Korean Society for Radiation Therapy / v.17 no.1 / pp.19-31 / 2005
  • Purpose: To provide information about the EPID system and to analyze the possibility of substituting the EPID for film dosimetry. Materials & Methods: With an amorphous silicon (aSi) type EPID and a liquid-filled ionization chamber (LC) type EPID, the reproducibility with respect to focus-detector distance (FDD) changes and gantry rotation was analyzed, and the possible range of image acquisition was analyzed with an Alderson Rando phantom. The resolution and contrast of the aSi type EPID images were analyzed with a Las Vegas phantom and a water phantom. DMLC images were analyzed with X-Omat V film and the EPID to see whether the EPID could be applied to the quality assurance (QA) of IMRT. Results: The reproducibility of the FDD position was within 1 mm, but the reproducibility under gantry rotation was ±2 mm and ±3 mm respectively. The resolution and contrast of the EPID images were affected by dose rate, image acquisition time, image acquisition method, and frame number. Regarding the possible range of image acquisition, the EPID was verified to be easier to use than film. There was no difference between X-Omat V film and EPID images for the QA of IMRT. Conclusion: Through these evaluations we obtained much useful information about the EPID. Because the EPID provides digital data, we also found that the EPID is more useful than film dosimetry for the periodic quality assurance of IMRT. Especially when point-dose measurement with a diode or ionization chamber is difficult, the EPID can be a very useful substitute. We also found that the diode and ionization chamber have difficulty evaluating the sliding-window deliveries of IMRT, whereas the EPID was more useful for this purpose.


Study on the Various Size Dependence of Ionization Chamber in IMRT Measurement to Improve Dose-accuracy (세기조절 방사선치료(IMRT)의 환자 정도관리에서 다양한 이온전리함 볼륨이 정확도에 미치는 영향)

  • Kim, Sun-Young; Lee, Doo-Hyun; Cho, Jung-Keun; Jung, Do-Hyeung; Kim, Ho-Sick; Choi, Gye-Sook
    • The Journal of Korean Society for Radiation Therapy / v.18 no.1 / pp.1-5 / 2006
  • Purpose: IMRT quality assurance (QA) consists of absolute dosimetry using an ionization chamber and relative dosimetry using film. In general we have used a 0.015 cc ionization chamber because of its small size and its ability to measure the point dose, but this chamber is too small to give an accurate measurement value. In this study, we examined the degree of difference between calculated and measured dose in intensity-modulated radiotherapy (IMRT), based on the observed/expected ratio, using various kinds of ion chambers for absolute dosimetry. Materials and Methods: We performed 6 cases of sliding-window IMRT for head and neck patients. Radiation was delivered using a Clinac 21EX unit (Varian, USA) generating a 6 MV x-ray beam and equipped with an integrated multileaf collimator. The dose rate for IMRT treatment was set to 300 MU/min. The ion chamber was located 5 cm below the surface of the phantom with a source-axis distance (SAD) of 100 cm. Various types of ion chambers were used, including 0.015 cc (pinpoint type 31014, PTW, Germany), 0.125 cc (micro type 31002, PTW, Germany), and 0.6 cc (farmer type 30002, PTW, Germany). The measurement point was carefully chosen to lie in a low-gradient area. Results: The experimental results show that the average differences between the planned and measured values are ±0.91% for the 0.015 cc pinpoint chamber, ±0.52% for the 0.125 cc micro type chamber, and ±0.76% for the 0.6 cc farmer type chamber. The 0.125 cc micro type chamber is an appropriate size for dose measurement in IMRT. Conclusion: IMRT QA is an important procedure. Based on measurements with the various types of ion chambers, we have demonstrated that the dose discrepancy between the calculated and measured dose distributions for IMRT plans depends on the size of the ion chamber: a very small ionization chamber suffers from a poor signal-to-noise ratio, while a large ionization chamber cannot be positioned precisely at the measurement point. Therefore our results suggest that the 0.125 cc micro type chamber is an appropriate size for dose measurement in IMRT.
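
As a small arithmetic aid only: the plan-versus-measurement comparison quoted above amounts to a signed percentage deviation of the chamber reading from the planned point dose. The readings in the example are hypothetical.

```python
def percent_deviation(measured_cgy, planned_cgy):
    """Signed percentage difference of a measured point dose from the plan,
    i.e. the observed/expected comparison quoted in the abstract."""
    return 100.0 * (measured_cgy - planned_cgy) / planned_cgy

# Hypothetical reading: 201.0 cGy measured against a planned 200.0 cGy -> +0.5 %
print(round(percent_deviation(201.0, 200.0), 2))
```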


Continuous Query Processing in Data Streams Using Duality of Data and Queries (데이타와 질의의 이원성을 이용한 데이타스트림에서의 연속질의 처리)

  • Lim Hyo-Sang; Lee Jae-Gil; Lee Min-Jae; Whang Kyu-Young
    • Journal of KIISE: Databases / v.33 no.3 / pp.310-326 / 2006
  • In this paper, we deal with a method for efficiently processing continuous queries in a data stream environment. We classify previous query processing methods into two dual categories - data-initiative and query-initiative - depending on whether query processing is initiated by selecting a data element or a query. This classification stems from the fact that data and queries have been treated asymmetrically. For processing continuous queries, only data-initiative methods have traditionally been employed, and thus the performance gain that could be obtained by query-initiative methods has been overlooked. To solve this problem, we focus on the observation that data and queries can be treated symmetrically. In this paper, we propose the duality model of data and queries and, based on this model, present a new viewpoint that transforms the continuous query processing problem into a multi-dimensional spatial join problem. We also present a continuous query processing algorithm based on spatial join, named Spatial Join CQ. Spatial Join CQ processes continuous queries by finding the pairs of overlapping regions from a set of data elements and a set of queries, both defined as regions in the multi-dimensional space. The algorithm achieves the effects of both dual methods by using the spatial join, which is a symmetric operation. Experimental results show that the proposed algorithm outperforms earlier methods by up to 36 times for simple selection continuous queries and by up to 7 times for sliding window join continuous queries.
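
A minimal sketch of the core idea of treating data and queries symmetrically as regions and joining them by overlap. The nested loop below is only a naive stand-in for the paper's index-based Spatial Join CQ algorithm, and the regions in the usage example are hypothetical.

```python
from itertools import product

def overlaps(a, b):
    """True if two axis-aligned regions overlap; each region is a list of
    (low, high) intervals, one per dimension."""
    return all(al <= bh and bl <= ah for (al, ah), (bl, bh) in zip(a, b))

def spatial_join(data_regions, query_regions):
    """Naive spatial join: report every (data, query) pair whose regions overlap.
    Data and queries are treated symmetrically, which is the point of the duality
    model; an index-based join would replace this nested loop in practice."""
    return [(d, q) for d, q in product(data_regions, query_regions)
            if overlaps(data_regions[d], query_regions[q])]

# A data element arriving on a 2-D stream and two continuous queries over it.
data = {"d1": [(5, 5), (10, 10)]}                       # a point as a degenerate region
queries = {"q1": [(0, 6), (8, 12)], "q2": [(20, 30), (0, 5)]}
print(spatial_join(data, queries))                      # -> [('d1', 'q1')]
```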

Estimation of Populations of Moth Using Object Segmentation and an SVM Classifier (객체 분할과 SVM 분류기를 이용한 해충 개체 수 추정)

  • Hong, Young-Ki; Kim, Tae-Woo
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.11 / pp.705-710 / 2017
  • This paper proposes a method for estimating populations of Grapholita molestas using object segmentation and an SVM classifier on moth images. Object segmentation and moth classification were performed on images of Grapholita molestas moths acquired from a pheromone trap installed in an orchard. Object segmentation consisted of pre-processing, thresholding, morphological filtering, and object labeling. The classification of Grapholita molestas in the moth images consisted of training and applying an SVM classifier and estimating the moth population. Object segmentation simplifies the moth classification process by segmenting the individual objects before passing an input image to the SVM classifier. Image blocks were extracted around the center point and principal axis of the segmented objects and fed into the SVM classifier. In the experiments, the proposed method estimated the moth populations in 10 moth images and achieved an average estimation precision of 97%, demonstrating an effective method for monitoring populations of Grapholita molestas in the orchard. In addition, the mean processing times of the proposed method and of the sliding-window technique were 2.4 seconds and 5.7 seconds respectively, so the proposed method is about 2.4 times faster than the latter.
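
A minimal sketch of the segmentation-then-classify pipeline, assuming OpenCV for the segmentation steps and scikit-learn for the SVM. The Otsu threshold, the 3×3 opening kernel, the 32-pixel block size, and the random training data are placeholders, and the paper's alignment of blocks to each object's principal axis is omitted for brevity.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def segment_objects(gray, min_area=50):
    """Thresholding, morphological filtering, and labeling: return the centers
    of candidate moth blobs (assumes dark moths on a lighter trap surface)."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    n, _, stats, centroids = cv2.connectedComponentsWithStats(binary)
    return [tuple(map(int, centroids[i])) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]

def count_moths(gray, svm, block=32):
    """Cut a fixed-size block around each segmented object and count the blocks
    the SVM labels as Grapholita molestas (class 1)."""
    count = 0
    for cx, cy in segment_objects(gray):
        y0, x0 = max(cy - block // 2, 0), max(cx - block // 2, 0)
        patch = gray[y0:y0 + block, x0:x0 + block]
        if patch.shape == (block, block) and svm.predict([patch.ravel().astype(float)])[0] == 1:
            count += 1
    return count

# Toy usage: train the SVM on random patches standing in for labelled blocks.
rng = np.random.default_rng(0)
svm = SVC(kernel="rbf").fit(rng.integers(0, 256, (40, 32 * 32)).astype(float),
                            np.array([1, 0] * 20))
gray = rng.integers(0, 256, (480, 640), dtype=np.uint8)
print(count_moths(gray, svm))
```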