• Title/Summary/Keyword: Convolution algorithm comparison


Low-power Radix-4 FFT Structure for OFDM using Distributed Arithmetic

  • Jang Young-Beom;Lee Won-Sang;Kim Do-Han;Kim Bee-Chul;Hur Eun-Sung
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.1 s.307 / pp.101-108 / 2006
  • In this paper, an efficient butterfly structure for the Radix-4 FFT algorithm using DA (Distributed Arithmetic) is proposed. It is shown that DA can be used efficiently in the twiddle-factor calculation of the Radix-4 FFT algorithm. Verilog-HDL coding results for the proposed DA butterfly structure show a 61.02% cell-area reduction compared with the conventional multiplier-based butterfly structure. Furthermore, a 64-point Radix-4 pipeline structure using the proposed butterfly and delay commutators is compared with other conventional structures; implementation coding results show a 46.1% cell-area reduction. Due to its efficient processing scheme, the proposed FFT structure can be widely used in large FFTs such as OFDM modems.
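For readers unfamiliar with the structure being optimized, the sketch below shows one standard radix-4 decimation-in-frequency butterfly in Python (NumPy). The three twiddle-factor products at the end are the complex multiplications that distributed arithmetic can replace with lookup-table accumulation; this is a generic illustration of the algorithm, not the paper's Verilog-HDL design.

```python
import numpy as np

def radix4_dif_butterfly(a, b, c, d, w1, w2, w3):
    """One radix-4 decimation-in-frequency butterfly.

    a..d are complex inputs; w1..w3 are the twiddle factors W^k, W^2k, W^3k.
    Only the three products by w1..w3 need complex multipliers, which is
    where a DA lookup table can substitute for a hardware multiplier.
    """
    t0 = a + c
    t1 = a - c
    t2 = b + d
    t3 = -1j * (b - d)          # -j(b - d)
    return (t0 + t2,            # output 0 carries no twiddle factor
            (t1 + t3) * w1,
            (t0 - t2) * w2,
            (t1 - t3) * w3)
```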

Comparison of Intensity Modulated Radiation Therapy Dose Calculations with the PBC and AAA Algorithms in Lung Cancer

  • Oh, Se-An;Kang, Min-Kyu;Yea, Ji-Woon;Kim, Sung-Hoon;Kim, Ki-Hwan;Kim, Sung-Kyu
    • Progress in Medical Physics / v.23 no.1 / pp.48-53 / 2012
  • Pencil beam convolution (PBC) algorithms in radiation treatment planning systems have been widely used to calculate radiation dose. A newer photon dose calculation algorithm, the anisotropic analytical algorithm (AAA), was released by Varian Medical Systems. The aim of this paper was to investigate the difference in dose calculation between the AAA and the PBC algorithm in intensity modulated radiation therapy (IMRT) plans for lung cancer, where the target region is inhomogeneous and of low density. We quantitatively analyzed the dose differences using the Eclipse planning system (Varian Medical Systems, Palo Alto, CA) and I'mRT MatriXX equipment (IBA, Schwarzenbruck, Germany) for gamma evaluation. Eleven patients with lung cancer at various sites were included in this study. We also used TLD-100 (LiF) dosimeters to measure the differences between calculated and measured doses in an Alderson Rando phantom. The maximum, mean, and minimum doses for normal tissue did not change significantly, but the volume of the PTV covered by the 95% isodose curve decreased by 6% in the lung due to the difference between the algorithms. The differences between the doses calculated by the PBC and AAA algorithms and the doses measured with TLD-100 (LiF) in the Alderson Rando phantom were -4.6% and -2.7%, respectively. Based on these results, treatment plans calculated using the AAA algorithm are more accurate in low-density lung sites than those calculated using the PBC algorithm.
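The gamma evaluation mentioned above is a standard way to compare a calculated dose distribution with a measured one. Below is a minimal 1-D sketch of a global gamma index (e.g., 3%/3 mm) in Python; the paper does not publish its evaluation code, so the function name, array layout, and default criteria are illustrative assumptions.

```python
import numpy as np

def gamma_index_1d(ref_pos, ref_dose, eval_pos, eval_dose,
                   dose_tol=0.03, dta_mm=3.0):
    """Global 1-D gamma index. dose_tol is a fraction of the reference
    maximum dose; dta_mm is the distance-to-agreement criterion in mm.
    A point passes when its gamma value is <= 1."""
    d_max = ref_dose.max()
    gammas = np.empty(len(ref_dose))
    for i, (x, d) in enumerate(zip(ref_pos, ref_dose)):
        dist2 = ((eval_pos - x) / dta_mm) ** 2
        dose2 = ((eval_dose - d) / (dose_tol * d_max)) ** 2
        gammas[i] = np.sqrt(dist2 + dose2).min()
    return gammas
```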

The Impact of the PCA Dimensionality Reduction for CNN based Hyperspectral Image Classification

  • Kwak, Taehong;Song, Ahram;Kim, Yongil
    • Korean Journal of Remote Sensing / v.35 no.6_1 / pp.959-971 / 2019
  • CNN (Convolutional Neural Network) is a representative deep learning algorithm that can extract high-level spatial and spectral features, and it has been applied to hyperspectral image classification. However, one significant drawback of applying CNNs to hyperspectral images is the high dimensionality of the data, which increases training time and processing complexity. To address this problem, several CNN-based hyperspectral image classification studies have exploited PCA (Principal Component Analysis) for dimensionality reduction. One limitation of this approach is that spectral information in the original image can be lost through PCA. Although it is clear that the use of PCA affects accuracy and CNN training time, the impact of PCA on CNN-based hyperspectral image classification has been understudied. The purpose of this study is to analyze the quantitative effect of PCA in CNNs for hyperspectral image classification. The hyperspectral images were first transformed through PCA and fed into the CNN model while varying the size of the reduced dimensionality. In addition, 2D-CNN and 3D-CNN frameworks were applied to analyze the sensitivity of PCA with respect to the convolution kernel in the model. Experimental results were evaluated based on classification accuracy, learning time, variance ratio, and training process. The reduced dimensionality was most efficient when the explained variance ratio reached 99.7%-99.8%. Because, compared with the 2D kernel, the 3D kernel achieved higher classification accuracy with the original-image CNN than with the PCA-CNN, the results revealed that dimensionality reduction is relatively less effective for 3D kernels.
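As a concrete illustration of the preprocessing step the study varies, the sketch below reduces the spectral dimension of a hyperspectral cube with scikit-learn's PCA, keeping just enough components to reach a target explained variance ratio (the study found 99.7%-99.8% most efficient). The cube shape and function name are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

def reduce_hsi_bands(cube, variance_target=0.997):
    """Reduce the spectral dimension of an (H, W, B) hyperspectral cube.

    Passing a float in (0, 1) as n_components makes scikit-learn keep the
    smallest number of components whose cumulative explained variance
    ratio exceeds that fraction (requires svd_solver='full')."""
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b)                 # one spectrum per row
    pca = PCA(n_components=variance_target, svd_solver="full")
    reduced = pca.fit_transform(pixels)
    return reduced.reshape(h, w, -1), pca.explained_variance_ratio_
```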

Skin Dose Comparison of CyberKnife and Helical Tomotherapy for Head-and-Neck Stereotactic Body Radiotherapy

  • Yoon, Jeongmin;Park, Kwangwoo;Kim, Jin Sung;Kim, Yong Bae;Lee, Ho
    • Progress in Medical Physics / v.30 no.1 / pp.1-6 / 2019
  • Purpose: This study comparatively evaluates the skin dose in CyberKnife (CK) and Helical Tomotherapy (HT) to predict the radiation dose accurately and minimize skin burns in head-and-neck stereotactic body radiotherapy. Materials and Methods: An arbitrarily defined planning target volume (PTV) close to the skin was drawn on planning computed tomography acquired from a head-and-neck phantom with 19 optically stimulated luminescent dosimeters (OSLDs) attached to the surface (3 OSLDs were positioned on the skin close to the PTV, and 16 OSLDs were near the sideburns and forehead, away from the PTV). For CK, calculated doses were obtained from the MultiPlan 5.1.2 treatment planning system using the ray tracing (RT), finite size pencil beam (FSPB), and Monte Carlo (MC) algorithms. For HT, the skin dose was estimated via the convolution superposition (CS) algorithm in the Tomotherapy planning station 5.0.2.5. The prescribed dose was 8 Gy for 95% coverage of the PTV. Results and Conclusions: The mean differences between calculated and measured values were -1.2±3.1%, 2.5±7.9%, -2.8±3.8%, -6.6±8.8%, and -1.4±1.8% for CS, RT, RT with contour correction (CC), FSPB, and MC, respectively. FSPB showed a dose error comparable to RT. CS and RT with CC led to smaller errors than FSPB and RT. Considering the OSLDs close to the PTV, MC minimized the uncertainty of the skin dose compared with the other algorithms.
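The per-algorithm figures above are means ± standard deviations of the percent difference between calculated and OSLD-measured doses. A minimal sketch of how such statistics are plausibly computed follows; the paper does not publish its analysis code, so this is an assumption about the arithmetic, not the authors' script.

```python
import numpy as np

def dose_discrepancy(calc, meas):
    """Percent differences between calculated and measured doses,
    summarized as (mean, sample standard deviation)."""
    calc, meas = np.asarray(calc, float), np.asarray(meas, float)
    diff = 100.0 * (calc - meas) / meas
    return diff.mean(), diff.std(ddof=1)

# Example with made-up numbers: dose_discrepancy([7.8, 8.1], [8.0, 8.0])
```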

Effects of Spatial Resolution on PSO Target Detection Results of Airplane and Ship

  • Yeom, Jun Ho;Kim, Byeong Hee;Kim, Yong Il
    • Journal of Korean Society for Geospatial Information Science / v.22 no.1 / pp.23-29 / 2014
  • The emergence of high resolution satellite images and improvements in spatial resolution have facilitated a wide range of studies using high resolution satellite imagery. In particular, target detection algorithms are effective for monitoring traffic flow and for military surveillance and reconnaissance, because vehicles, airplanes, and ships over broad areas can be detected easily in high resolution satellite images. Recently, many satellites have been launched around the world, and the diversity of satellite imagery has also increased. Nevertheless, comparative studies on spatial resolution, especially for target detection, remain insufficient both domestically and abroad. Therefore, in this study, the effects of spatial resolution on target detection are analyzed using the PSO target detection algorithm. Resampling techniques such as nearest neighbor, bilinear, and cubic convolution are adopted to resize the original image to 0.5 m, 1 m, 2 m, and 4 m spatial resolutions, and the accuracy of target detection is then assessed according to both spatial resolution and resampling method. The 0.5 m resolution combined with nearest neighbor resampling yielded the best accuracy. In addition, resolutions of at least 2 m and 4 m are required to detect airplanes and ships, respectively; airplane detection needs higher spatial resolution than ship detection because of the more complex shape of airplanes. This research suggests appropriate spatial resolutions for airplane and ship target detection and contributes to criteria for satellite sensor design.
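The resampling step described above can be reproduced with standard tools. The sketch below uses OpenCV (an assumed tool choice; the paper does not name its software) to resample a scene to the four ground sample distances with nearest neighbor, bilinear, and cubic convolution interpolation; the file name and native resolution are placeholders.

```python
import cv2

image = cv2.imread("scene.tif")     # hypothetical input scene
native_gsd = 0.5                    # metres per pixel, assumed

methods = {"nearest": cv2.INTER_NEAREST,
           "bilinear": cv2.INTER_LINEAR,
           "cubic": cv2.INTER_CUBIC}   # cubic convolution

resampled = {}
for gsd in (0.5, 1.0, 2.0, 4.0):
    scale = native_gsd / gsd        # <1 shrinks the image
    size = (int(image.shape[1] * scale), int(image.shape[0] * scale))
    for name, flag in methods.items():
        resampled[(gsd, name)] = cv2.resize(image, size, interpolation=flag)
```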

Development of Fender Segmentation System for Port Structures using Vision Sensor and Deep Learning

  • Min, Jiyoung;Yu, Byeongjun;Kim, Jonghyeok;Jeon, Haemin
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.26 no.2 / pp.28-36 / 2022
  • As port structures are exposed to various extreme external loads such as wind (typhoons), sea waves, and collisions with ships, it is important to evaluate their structural safety periodically. To monitor port structures, especially rubber fenders, a fender segmentation system using a vision sensor and deep learning is proposed in this study. For fender segmentation, a new deep learning network is proposed that improves the encoder-decoder framework by integrating a receptive field block convolution module, inspired by the eccentricity of the human visual system, into a DenseNet format. To train the network, various fender images of BP, V, cell, cylindrical, and tire types were collected, and the images were augmented with four methods: elastic distortion, horizontal flip, color jitter, and affine transforms. The proposed algorithm was trained and verified with the collected fender images, and the results showed that the system segments precisely in real time, with a higher IoU (84%) and F1 score (90%) than a conventional segmentation model, VGG16 with U-Net. The trained network was then applied to real images taken at a port in the Republic of Korea, and the fenders were segmented with high accuracy even with a small dataset.
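The four augmentations listed above are available in common libraries. The sketch below uses albumentations (an assumed tool choice; the paper does not name its implementation) to apply them jointly to an image and its segmentation mask; the arrays and parameter ranges are placeholders.

```python
import numpy as np
import albumentations as A

image = np.zeros((256, 256, 3), dtype=np.uint8)  # placeholder fender image
mask = np.zeros((256, 256), dtype=np.uint8)      # placeholder label mask

augment = A.Compose([
    A.ElasticTransform(p=0.5),                          # elastic distortion
    A.HorizontalFlip(p=0.5),                            # horizontal flip
    A.ColorJitter(p=0.5),                               # color jitter
    A.Affine(scale=(0.9, 1.1), rotate=(-15, 15), p=0.5) # affine transform
])

out = augment(image=image, mask=mask)  # geometric ops move image and mask together
aug_image, aug_mask = out["image"], out["mask"]
```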

Dose Comparison According to Smooth Thickness Application of the Range Compensator during Proton Therapy for Brain Tumor Patients

  • Kim, Tae Woan;Kim, Dae Woong;Kim, Jae Weon;Jeong, Kyeong Sik
    • The Journal of Korean Society for Radiation Therapy / v.28 no.2 / pp.139-148 / 2016
  • Purpose: The range compensator used in proton therapy shapes the proton beam dose delivered toward normal tissue according to the target's distal-margin dose. We examine the improvement in target dose by comparing PTV and OAR doses under different Smooth Thickness settings of the range compensator used in brain tumor therapy. Materials and Methods: For 10 brain tumor patients undergoing proton therapy at the National Cancer Center, the Smooth Thickness of the range compensator was applied in steps from one to five using the Compensator Editor of the Eclipse Proton Planning System (version 10.0, Varian, USA). The treatment planning algorithm was Proton Convolution Superposition (version 8.1.20 or 10.0.28), and we compared Dmax, Dmin, Homogeneity Index, Conformity Index, and the dose to OARs around the tumor as Smooth Thickness was increased stepwise. Results: When Smooth Thickness was applied from one to five, the Dmax of the PTV decreased by a maximum of 4.3%, a minimum of 0.8%, and an average of 1.81%, while Dmin increased by a maximum of 1.8% and a minimum of 1.8%. The difference between maximum and minimum dose decreased by a maximum of 5.9%, a minimum of 1.4%, and an average of 2.6%. The Homogeneity Index decreased by 0.018 on average, and the Conformity Index showed no meaningful change. OAR dose decreased in the brain stem by a maximum of 1.6%, a minimum of 0.1%, and an average of 0.6%, and in the optic chiasm by a maximum of 1.3%, a minimum of 0.3%, and an average of 0.5%; however, patients C and E showed increases of 0.3% and 0.6%, respectively. In the right optic nerve, dose decreased by a maximum of 1.5%, a minimum of 0.3%, and an average of 0.8%, although patient B showed a 0.1% increase. In the left optic nerve, dose decreased by a maximum of 1.8%, a minimum of 0.3%, and an average of 0.7%, although patient H showed a 0.4% increase. Conclusion: As the Smooth Thickness of the range compensator used in proton treatment of brain tumor patients is applied in stages, the resolution of the compensator increases, so a more optimized proton beam dose can be delivered. This is expected to deliver a uniform dose to the PTV while reducing unnecessary dose to OARs, thereby reducing side effects.
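For reference, the two plan-quality metrics compared above are often computed as sketched below. The paper does not state its exact formulas, so these are common textbook definitions offered as assumptions, not necessarily the study's.

```python
def homogeneity_index(d_max, d_min, d_prescribed):
    """One common HI definition: (Dmax - Dmin) / prescribed dose.
    Lower values indicate a more homogeneous PTV dose."""
    return (d_max - d_min) / d_prescribed

def conformity_index(v_prescription_isodose, v_ptv):
    """RTOG-style CI: volume enclosed by the prescription isodose
    divided by the PTV volume; 1.0 is ideal."""
    return v_prescription_isodose / v_ptv
```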


Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.141-154 / 2019
  • Internet technology and social media continue to grow rapidly, and data mining technology has evolved to enable unstructured document representations in a variety of applications. Sentiment analysis is an important technology that can distinguish low-quality from high-quality content through the text data of products, and it has proliferated within text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative, and it has been studied from various angles in terms of accuracy, from simple rule-based methods to dictionary-based approaches using predefined labels. It is one of the most active research areas in natural language processing and is widely studied in text mining. Because online reviews are openly available, they are easy to collect, and they directly affect business: in marketing, real-world customer information is gathered from websites rather than surveys, and whether posts are positive or negative is reflected in sales. However, many reviews on a website are not well organized and are difficult to classify. Earlier studies in this area used review data from the Amazon.com shopping mall, whereas recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, and so on. Accuracy remains limited, however, because sentiment calculations change according to subject, paragraph, sentiment lexicon direction, and sentence strength. This study aims to classify sentiment polarity into positive and negative categories and to increase the prediction accuracy of polarity analysis using the pretrained IMDB review data set. First, for text classification related to sentiment analysis, popular machine learning algorithms such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting were adopted as comparative models. Second, deep learning can extract complex, discriminative features from data; representative algorithms are CNNs (convolutional neural networks), RNNs (recurrent neural networks), and LSTM (long short-term memory). A CNN can process a sentence in vector form similarly to a bag-of-words model, but it does not consider the sequential attributes of the data. An RNN handles order well because it takes the temporal information of the data into account, but it suffers from long-term dependency problems; LSTM is used to address this. For comparison, CNN and LSTM were chosen as simple deep learning models, and in addition to the classical machine learning algorithms, CNN, LSTM, and the integrated model were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and precision to find the optimal combination, and we investigated how well, and why, the models work for sentiment analysis. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features of text. The reasons for combining the two algorithms are as follows. A CNN can extract classification features automatically by applying convolution layers with massively parallel processing; an LSTM is not capable of such highly parallel processing. Like faucets, the LSTM's input, output, and forget gates can be opened and closed at the desired time, which has the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it compensates for the CNN's inability to model long-range sequence dependencies. Furthermore, when an LSTM is placed after the CNN's pooling layer, the model has an end-to-end structure, so spatial and temporal features can be learned simultaneously. The combined CNN-LSTM achieved 90.33% accuracy; it is slower than the CNN but faster than the LSTM, and it was more accurate than the other models. In addition, each word embedding layer can be improved when training the kernel step by step. The CNN-LSTM can compensate for the weaknesses of each model, and the end-to-end structure offers the advantage of layer-wise learning. For these reasons, this study seeks to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
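Below is a minimal Keras sketch of the kind of integrated CNN-LSTM the abstract describes, with the LSTM placed after the convolution and pooling layers so the network is end-to-end. Layer sizes, vocabulary size, and hyperparameters are illustrative assumptions, not the paper's reported configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

vocab_size = 20000  # assumed vocabulary size for the IMDB reviews

model = models.Sequential([
    layers.Embedding(vocab_size, 128),          # word vectors
    layers.Conv1D(64, 5, activation="relu"),    # local n-gram features
    layers.MaxPooling1D(4),                     # shorten the sequence
    layers.LSTM(64),                            # order-aware aggregation
    layers.Dense(1, activation="sigmoid"),      # positive vs. negative
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Training data can come from tf.keras.datasets.imdb.load_data(
# num_words=vocab_size), with reviews padded to equal length before fitting.
```

Placing the LSTM after pooling, as here, lets the convolution compress the sequence first, so the recurrent layer runs over far fewer steps than raw tokens, which matches the abstract's point that the combination trades some of the CNN's speed for the LSTM's order sensitivity.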