• Title/Summary/Keyword: A* Algorithm


Performance Evaluation of LSTM-based PM2.5 Prediction Model for Learning Seasonal and Concentration-specific Data (계절별 데이터와 농도별 데이터의 학습에 대한 LSTM 기반의 PM2.5 예측 모델 성능 평가)

  • Yong-jin Jung;Chang-Heon Oh
    • Journal of Advanced Navigation Technology / v.28 no.1 / pp.149-154 / 2024
  • Research on particulate matter is advancing in real time, and various methods are being studied to improve the accuracy of prediction models. In addition, studies that consider diverse factors are actively being pursued to understand the precise causes and impacts of particulate matter. This paper trains one LSTM model on seasonal data and another LSTM model on concentration-based data, and compares and analyzes the PM2.5 prediction performance of the two models. Weather data and air pollutant data were collected to train the models, and the collected data were used to confirm their correlation with PM2.5. Based on the results of the correlation analysis, the data were structured for training and evaluation. The seasonal prediction model and the concentration-specific prediction model were designed using the LSTM algorithm, and their performance was evaluated using accuracy, RMSE, and MAPE. In the performance evaluation, the concentration-specific prediction model achieved an accuracy of 91.02% in the "bad" AQI range and, overall, performed better than the model trained by season.
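
A minimal sketch of the kind of LSTM regressor described in this abstract, written in Python with TensorFlow/Keras (the paper does not state its framework; the 24-hour window, feature count, and column layout below are illustrative assumptions, not the authors' setup):

```python
# Sketch only: next-step PM2.5 prediction from windowed weather/pollutant features.
import numpy as np
import tensorflow as tf

def make_windows(series, window=24):
    """Slice a (time, features) array into (samples, window, features) inputs and the
    PM2.5 value (assumed to be column 0) one step ahead as the regression target."""
    X, y = [], []
    for t in range(len(series) - window):
        X.append(series[t:t + window])
        y.append(series[t + window, 0])
    return np.array(X), np.array(y)

# Toy stand-in for a seasonal or concentration-specific training split.
data = np.random.rand(1000, 6).astype("float32")   # [PM2.5, weather/pollutant features...]
X, y = make_windows(data)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=X.shape[1:]),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# RMSE on the training windows, as one of the metrics mentioned in the abstract.
rmse = float(np.sqrt(np.mean((model.predict(X, verbose=0).ravel() - y) ** 2)))
```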

Evaluation of the Bending Moment of FRP Reinforced Concrete Using Artificial Neural Network (인공신경망을 이용한 FRP 보강 콘크리트 보의 휨모멘트 평가)

  • Park, Do Kyong
    • Journal of the Korea institute for structural maintenance and inspection / v.10 no.5 / pp.179-186 / 2006
  • In this study, a Multi-Layer Perceptron (MLP), one of the Artificial Neural Network (ANN) models, is used to develop a model that evaluates the bending capacities of concrete beams reinforced with FRP rebar, with data from existing studies used as the training material for the ANN model. The main determinants of bending capacity (width, effective depth, compressive strength, FRP reinforcement ratio, and FRP balanced reinforcement ratio) are used as the independent variables of the input layer, and the moment capacity measured in the experiments is used as the dependent variable of the output layer. The developed ANN model is applicable to GFRP, CFRP, and AFRP rebar and is verified using data published by other researchers. The ANN(0.05) model produced comparatively precise estimates of bending capacity, whereas considerable errors were observed in the ANN(0.1) model. Verification of the ANN model confirmed that the estimated values correspond reasonably well to the experimental data. In addition, a sensitivity analysis of the bending-performance variables showed that effective depth has the greatest influence, followed by the FRP reinforcement ratio, balanced reinforcement ratio, compressive strength, and width.
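
A hedged sketch of an MLP regressor in the spirit of the model described above, using scikit-learn (the paper's actual network architecture, data, and the meaning of ANN(0.05)/ANN(0.1) labels are not given here; the feature order follows the abstract, and all values below are placeholders):

```python
# Sketch only: MLP regression of beam bending moment from five input variables.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Columns (assumed units): width (mm), effective depth (mm), concrete strength (MPa),
# FRP reinforcement ratio, FRP balanced reinforcement ratio.
X = np.random.rand(60, 5)
y = np.random.rand(60)          # measured bending moment (kN*m), placeholder values

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), learning_rate_init=0.05,
                 max_iter=5000, random_state=0),
)
model.fit(X, y)
print(model.predict(X[:3]))     # estimated bending capacities for three beams
```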

GPR Development for Landmine Detection (지뢰탐지를 위한 GPR 시스템의 개발)

  • Sato, Motoyuki;Fujiwara, Jun;Feng, Xuan;Zhou, Zheng-Shu;Kobayashi, Takao
    • Geophysics and Geophysical Exploration / v.8 no.4 / pp.270-279 / 2005
  • Under a research project supported by the Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT), we have conducted the development of GPR systems for landmine detection. By 2005, we had finished the development of two prototype GPR systems, namely ALIS (Advanced Landmine Imaging System) and SAR-GPR (Synthetic Aperture Radar-Ground Penetrating Radar). ALIS is a novel landmine detection sensor system that combines a metal detector with GPR. It is a hand-held instrument that has a sensor-position tracking system and can visualize the sensor output in real time. To achieve sensor tracking, ALIS needs only one CCD camera attached to the sensor handle. The CCD image is superimposed with the GPR and metal detector signals, which makes the detection and identification of buried targets easy and reliable. A field evaluation test of ALIS was conducted in December 2004 in Afghanistan, where we demonstrated that it can detect buried antipersonnel landmines and can also discriminate metal fragments from landmines. SAR-GPR is a machine-mounted sensor system composed of a GPR and a metal detector. The GPR employs an array antenna for advanced signal processing and better subsurface imaging. Combined with a synthetic aperture radar algorithm, SAR-GPR can suppress clutter and image buried objects in strongly inhomogeneous material. SAR-GPR is a stepped-frequency radar system whose RF components are newly developed compact vector network analyzers. The system measures 30 cm x 30 cm x 30 cm and is composed of six Vivaldi antennas and three vector network analyzers. It weighs 17 kg and can be mounted on a robotic arm on a small unmanned vehicle. The field test of this system was carried out in March 2005 in Japan.

Simultaneous Optimization of a KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.139-157 / 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy; bankruptcy prediction is therefore an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models, and ensemble learning techniques are known to be very useful for improving the generalization ability of a classifier. The base classifiers in an ensemble must be as accurate and diverse as possible in order to enhance its generalization ability. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and the random subspace method. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers: each ensemble member is trained on a randomly chosen feature subspace of the original feature set, and the predictions of the members are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space, which makes it a good base classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective at improving on an individual KNN model. The k parameter of the KNN base classifiers and the feature subsets selected for them play an important role in determining the performance of the KNN ensemble model, yet few studies have focused on optimizing them. This study proposed a new ensemble method that improves the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve its prediction accuracy. The proposed model was applied to a bankruptcy prediction problem using a real dataset from Korean companies. The research data included 1,800 externally non-audited firms that filed for bankruptcy (900 cases) or did not (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent-sample t-test of each financial ratio as an input variable against bankruptcy or non-bankruptcy as the output variable; of these, 24 financial ratios were selected using a logistic regression backward feature selection method. The complete dataset was separated into training and validation parts, and the training dataset was further divided into two portions: one for training the model and the other for guarding against overfitting. The prediction accuracy on the latter portion was used as the fitness value in order to avoid overfitting, while the validation dataset was used to evaluate the effectiveness of the final model. A 10-fold cross-validation was implemented to compare the performance of the proposed model with that of other models. To evaluate the effectiveness of the proposed model, its classification accuracy was compared with that of the other models, and the Q-statistic values and average classification accuracies of the base classifiers were investigated. The experimental results showed that the proposed model outperformed the other models, such as the single model and the random subspace ensemble model.
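
An illustrative sketch, not the paper's implementation, of a random-subspace KNN ensemble whose k values and feature subsets are tuned by a small genetic algorithm; the dataset, population size, mutation rates, and fitness split below are all toy assumptions:

```python
# Sketch only: GA over (k, feature-mask) pairs for a KNN random-subspace ensemble.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=24, random_state=0)
# Hold out part of the training data as the fitness split (to limit overfitting).
X_tr, X_fit, y_tr, y_fit = train_test_split(X, y, test_size=0.3, random_state=0)

N_BASE, N_FEAT = 10, X.shape[1]

def random_member():
    # One base classifier = (k, boolean feature mask).
    return [rng.integers(1, 16), rng.random(N_FEAT) < 0.5]

def ensemble_accuracy(ensemble):
    votes = []
    for k, mask in ensemble:
        if mask.sum() == 0:
            mask = mask.copy(); mask[0] = True
        clf = KNeighborsClassifier(n_neighbors=int(k)).fit(X_tr[:, mask], y_tr)
        votes.append(clf.predict(X_fit[:, mask]))
    majority = (np.mean(votes, axis=0) >= 0.5).astype(int)   # simple majority vote
    return (majority == y_fit).mean()

# Tiny GA: truncation selection plus uniform mutation of one base classifier per child.
population = [[random_member() for _ in range(N_BASE)] for _ in range(8)]
for generation in range(10):
    population.sort(key=ensemble_accuracy, reverse=True)
    parents = population[:4]
    children = []
    for p in parents:
        child = [[k, mask.copy()] for k, mask in p]
        i = rng.integers(N_BASE)
        child[i][0] = rng.integers(1, 16)                 # mutate k
        child[i][1] ^= rng.random(N_FEAT) < 0.1           # flip a few feature bits
        children.append(child)
    population = parents + children

print("fitness-split accuracy of best ensemble:", ensemble_accuracy(population[0]))
```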

Clinical Application of Dose Reconstruction Based on Full-Scope Monte Carlo Calculations: Composite Dose Reconstruction on a Deformed Phantom (몬테칼로 계산을 통한 흡수선량 재구성의 임상적 응용: 변형된 팬텀에서의 총제적 선량재구성)

  • Yeo, Inhwan;Xu, Qianyi;Chen, Yan;Jung, Jae Won;Kim, Jong Oh
    • Progress in Medical Physics / v.25 no.3 / pp.139-142 / 2014
  • The purpose of this study was to develop a system for the clinical application of reconstructed dose that includes dose reconstruction, registration of reconstructed dose between fractions of treatment, and dose-volume histogram generation, and to demonstrate the system on a deformable prostate phantom. To achieve this purpose, a deformable prostate phantom was embedded into a 20 cm-deep and 40 cm-wide water phantom. The phantom was CT scanned, and the anatomical models of the prostate, seminal vesicles, and rectum were contoured. A coplanar 4-field intensity modulated radiation therapy (IMRT) plan was used for this study. Organ deformation was simulated by inserting a "transrectal" balloon containing 20 ml of water; a new CT scan was obtained and the deformed structures were contoured. Dose responses in the phantoms and in the electronic portal imaging device (EPID) were calculated using the XVMC Monte Carlo code. The IMRT plan was delivered to the two phantoms, and integrated EPID images were acquired for each. Dose reconstruction was performed on these images using the calculated responses. The deformed phantom was registered to the original phantom using in-house software based on the Demons algorithm. The transfer matrix for each voxel was obtained and used to correlate the two sets of reconstructed dose and generate a cumulative reconstructed dose on the original phantom. The forwardly calculated planning dose in the original phantom was compared with the cumulative reconstructed dose from the EPID in the original phantom. The prescribed 200 cGy isodose lines showed little difference with respect to the "prostate" and "seminal vesicles", but an appreciable difference (3%) was observed at dose levels greater than 210 cGy. In the rectum, the reconstructed dose showed volume coverage a few percent lower than the planned dose in the dose range of 150 to 200 cGy. Through this study, the system for the clinical application of reconstructed dose was successfully developed and demonstrated. The organ deformation simulated in this study resulted in small but observable dose changes in the target and critical structure.
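
A conceptual sketch, not the authors' in-house tool, of the accumulation step described above: warping the deformed-fraction dose back onto the original phantom with a per-voxel displacement field (as a Demons-style registration would provide) and summing the two fractions. All arrays, grid sizes, and the uniform displacement below are synthetic assumptions:

```python
# Sketch only: composite dose on the original phantom via a displacement-field warp.
import numpy as np
from scipy.ndimage import map_coordinates

shape = (40, 40, 40)
dose_original = np.random.rand(*shape)     # fraction reconstructed on the original phantom
dose_deformed = np.random.rand(*shape)     # fraction reconstructed on the deformed phantom
dvf = np.zeros((3, *shape))                # displacement (in voxels), original -> deformed
dvf[2] = 1.5                               # e.g. a uniform shift along one axis as a stand-in

# Sample the deformed-phantom dose at the displaced coordinates of each original voxel.
grid = np.indices(shape).astype(float)
warped = map_coordinates(dose_deformed, grid + dvf, order=1, mode="nearest")

cumulative = dose_original + warped        # composite reconstructed dose on the original phantom
```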

Latent topics-based product reputation mining (잠재 토픽 기반의 제품 평판 마이닝)

  • Park, Sang-Min;On, Byung-Won
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.39-70 / 2017
  • Data-driven analytics techniques have recently been applied to public surveys. Instead of simply gathering survey results or expert opinions to research the preference for a recently launched product, enterprises need a way to collect and analyze various types of online data and then accurately figure out customer preferences. In the main concept of existing data-based survey methods, the sentiment lexicon for a particular domain is first constructed by domain experts who judge the positive, neutral, or negative meanings of the words frequently used in the collected text documents. In order to research the preference for a particular product, the existing approach (1) collects review posts related to the product from several product review web sites; (2) extracts sentences (or phrases) from the collection after pre-processing steps such as stemming and stop-word removal; (3) classifies the polarity (positive or negative) of each sentence (or phrase) based on the sentiment lexicon; and (4) estimates the positive and negative ratios of the product by dividing the numbers of positive and negative sentences (or phrases) by the total number of sentences (or phrases) in the collection. Furthermore, the existing approach automatically finds important sentences (or phrases) carrying positive or negative meaning for or against the product. As a motivating example, given a product like the Sonata made by Hyundai Motors, customers often want to see a summary note of the positive points in the 'car design' aspect as well as the negative points in the same aspect. They also want more useful information regarding other aspects such as 'car quality', 'car performance', and 'car service'. Such information will enable customers to make good choices when they attempt to purchase brand-new vehicles. In addition, automobile makers will be able to figure out the preferences and positive/negative points for new models on the market, and in the near future the weak points of the models can be improved based on the sentiment analysis. For this, the existing approach computes the sentiment score of each sentence (or phrase) and then selects the top-k sentences (or phrases) with the highest positive and negative scores. However, the existing approach has several shortcomings and is of limited use in real applications. Its main disadvantages are as follows: (1) The main aspects (e.g., car design, quality, performance, and service) of a product (e.g., Hyundai Sonata) are not considered. With sentiment analysis that ignores aspects, the summary note reported to customers and car makers contains only the positive and negative ratios of the product and the top-k sentences (or phrases) with the highest sentiment scores over the entire corpus; this is not enough, and the main aspects of the target product need to be considered in the sentiment analysis. (2) In general, since the same word has different meanings across different domains, a sentiment lexicon appropriate to each domain needs to be constructed, and an efficient way to construct the lexicon per domain is required because sentiment lexicon construction is labor intensive and time consuming. To address the above problems, in this article we propose a novel product reputation mining algorithm that (1) extracts topics hidden in review documents written by customers; (2) mines the main aspects based on the extracted topics; (3) measures the positive and negative ratios of the product using the aspects; and (4) presents a digest in which a few important sentences with positive and negative meanings are listed for each aspect. Unlike the existing approach, using hidden topics lets experts construct the sentiment lexicon easily and quickly. Furthermore, by reinforcing topic semantics, we can improve the accuracy of product reputation mining considerably compared with the existing approach. In the experiments, we collected a large set of review documents on domestic vehicles such as the K5, SM5, and Avante; measured the positive and negative ratios of the three cars; showed the top-k positive and negative summaries per aspect; and conducted statistical analysis. Our experimental results clearly show the effectiveness of the proposed method compared with the existing method.
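
A simplified sketch of an aspect/reputation pipeline of the kind outlined above: LDA topics over review sentences stand in for the hidden aspects, and a tiny hand-made lexicon stands in for the domain sentiment lexicon. The sentences, lexicon, and topic count are toy assumptions, not the paper's data or exact algorithm:

```python
# Sketch only: topic-based aspect grouping plus per-aspect positive/negative ratios.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

sentences = [
    "the design of the car looks modern and clean",
    "engine performance is strong on the highway",
    "interior quality feels cheap and noisy",
    "service center response was slow and unhelpful",
    "sleek exterior design but the paint quality is poor",
    "fuel economy and performance exceeded expectations",
]
positive = {"modern", "clean", "strong", "sleek", "exceeded"}   # toy lexicon
negative = {"cheap", "noisy", "slow", "unhelpful", "poor"}

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(sentences)
lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
topic_of = lda.transform(X).argmax(axis=1)      # dominant topic (aspect) per sentence

for topic in range(3):
    members = [s for s, t in zip(sentences, topic_of) if t == topic]
    pos = sum(any(w in positive for w in s.split()) for s in members)
    neg = sum(any(w in negative for w in s.split()) for s in members)
    total = max(pos + neg, 1)
    print(f"aspect {topic}: positive {pos/total:.0%}, negative {neg/total:.0%}, sample: {members[:1]}")
```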

Rear Vehicle Detection Method in Harsh Environment Using Improved Image Information (개선된 영상 정보를 이용한 가혹한 환경에서의 후방 차량 감지 방법)

  • Jeong, Jin-Seong;Kim, Hyun-Tae;Jang, Young-Min;Cho, Sang-Bok
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.1 / pp.96-110 / 2017
  • Most vehicle detection studies that use a conventional or wide-angle lens suffer from blind spots in rear detection, and the image is vulnerable to noise and a variety of external environments. In this paper, we propose a method for detection in harsh external environments with noise, blind spots, and similar problems. First, a fish-eye lens is used to help minimize blind spots compared with a wide-angle lens. Because nonlinear radial distortion also increases as the angle of the lens grows, calibration is applied after initializing and optimizing the distortion constant in order to ensure accuracy. In addition, the original image is analyzed along with the calibration to remove fog and calibrate brightness, thereby enabling detection even when visibility is obstructed by light and dark adaptation in foggy situations or by sudden changes in illumination. Fog removal generally takes a considerable amount of time to compute; to reduce the computation time, the widely used Dark Channel Prior algorithm was adopted for fog removal. Gamma correction was used to calibrate brightness, and a brightness and contrast evaluation was conducted on the image to determine the gamma value needed for correction. To reduce computation time, the evaluation used only a part of the image instead of the entire image; once the brightness and contrast values were calculated, they were used to decide the gamma value and correct the entire image. The brightness correction and fog removal were processed in parallel, and the images were registered as a single image to minimize the calculation time of the whole pipeline. Then the HOG feature extraction method was used to detect the vehicle in the corrected image. As a result, it took 0.064 seconds per frame to detect a vehicle using the proposed image correction, which showed a 7.5% improvement in detection rate compared with the existing vehicle detection method.
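
A rough sketch, not the paper's pipeline, of the two correction steps named above: Dark Channel Prior dehazing followed by gamma correction. Patch size, omega, the atmospheric-light estimate, and the gamma value are generic assumptions, and the input is a synthetic image:

```python
# Sketch only: Dark Channel Prior dehazing plus gamma correction on an RGB float image.
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel_dehaze(img, patch=15, omega=0.95, t0=0.1):
    """img: float RGB in [0, 1]. Returns a dehazed estimate via the Dark Channel Prior."""
    dark = minimum_filter(img.min(axis=2), size=patch)                  # dark channel
    a = img.reshape(-1, 3)[np.argsort(dark.ravel())[-10:]].max(axis=0)  # crude atmospheric light
    transmission = 1.0 - omega * minimum_filter((img / a).min(axis=2), size=patch)
    t = np.clip(transmission, t0, 1.0)[..., None]
    return np.clip((img - a) / t + a, 0.0, 1.0)

def gamma_correct(img, gamma):
    return np.clip(img, 0.0, 1.0) ** (1.0 / gamma)

hazy = np.random.rand(120, 160, 3) * 0.5 + 0.4    # bright, low-contrast stand-in for a foggy frame
restored = gamma_correct(dark_channel_dehaze(hazy), gamma=1.4)
```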

Local Shape Analysis of the Hippocampus using Hierarchical Level-of-Detail Representations (계층적 Level-of-Detail 표현을 이용한 해마의 국부적인 형상 분석)

  • Kim Jeong-Sik;Choi Soo-Mi;Choi Yoo-Ju;Kim Myoung-Hee
    • The KIPS Transactions:PartA / v.11A no.7 s.91 / pp.555-562 / 2004
  • Both global volume reduction and local shape changes of the hippocampus within the brain indicate abnormal neurological states. Hippocampal shape analysis consists of two main steps: first, construct a hippocampal shape representation model; second, compute shape similarity from this representation. This paper proposes a novel method for the analysis of hippocampal shape using an integrated Octree-based representation containing meshes, voxels, and skeletons. First, we create multi-level meshes by applying the Marching Cubes algorithm to the hippocampal region segmented from MR images. This model is converted to an intermediate binary voxel representation, and the 3D skeleton is extracted from these voxels using a slice-based skeletonization method. Then, in order to acquire a multiresolution shape representation, we hierarchically store the meshes, voxels, and skeletons in the nodes of the Octree and extract sample meshes using a ray-tracing based mesh sampling technique. Finally, as a similarity measure between shapes, we compute the $L_2$ norm and the Hausdorff distance for each sampled mesh pair by shooting rays from the extracted skeleton. Using a mouse-picking interface to analyze local shape interactively, we provide interaction- and multiresolution-based analysis of local shape changes. Our experiments show that the approach is robust to rotation and scale, is especially effective at discriminating changes between local shapes of the hippocampus, and moreover increases the speed of analysis without degrading accuracy by using a hierarchical level-of-detail approach.
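
A small sketch of the two measurement ideas mentioned above: extracting surface meshes with Marching Cubes and comparing them with Hausdorff and L2-style distances. This is not the paper's Octree/skeleton pipeline; the binary "hippocampus" volumes below are synthetic spheres, and the nearest-vertex mean is only a stand-in for the ray-based $L_2$ measure:

```python
# Sketch only: Marching Cubes surfaces plus Hausdorff / mean nearest-vertex distances.
import numpy as np
from scipy.spatial import cKDTree
from scipy.spatial.distance import directed_hausdorff
from skimage import measure

def sphere_volume(radius, shape=(48, 48, 48)):
    z, y, x = np.indices(shape)
    c = np.array(shape) / 2
    return ((z - c[0])**2 + (y - c[1])**2 + (x - c[2])**2 <= radius**2).astype(float)

verts_a, _, _, _ = measure.marching_cubes(sphere_volume(14), level=0.5)
verts_b, _, _, _ = measure.marching_cubes(sphere_volume(16), level=0.5)

# Symmetric Hausdorff distance between the two sampled surfaces.
hausdorff = max(directed_hausdorff(verts_a, verts_b)[0],
                directed_hausdorff(verts_b, verts_a)[0])

# Crude L2-style measure: mean nearest-vertex distance from surface A to surface B.
l2_like = cKDTree(verts_b).query(verts_a)[0].mean()
print(hausdorff, l2_like)
```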

A Collaborative Filtering System Combined with Users' Review Mining : Application to the Recommendation of Smartphone Apps (사용자 리뷰 마이닝을 결합한 협업 필터링 시스템: 스마트폰 앱 추천에의 응용)

  • Jeon, ByeoungKug;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.1-18 / 2015
  • The collaborative filtering (CF) algorithm has been popularly used for recommender systems in both academic and practical applications. A general CF system compares users based on how similar they are and creates recommendation results from the items favored by other people with similar tastes. It is therefore very important for CF to measure the similarities between users, because the recommendation quality depends on it. In most cases, only users' explicit numeric ratings of items (i.e., quantitative information) have been used to calculate the similarities between users in CF. However, several studies have indicated that qualitative information such as users' reviews of the items may contribute to measuring these similarities more accurately. Considering that many people are likely to share their honest opinions on items they purchased recently due to the advent of Web 2.0, user reviews can be regarded as an informative source for identifying user preferences with accuracy. Against this background, this study proposes a new hybrid recommender system combined with users' review mining. The proposed system is based on conventional memory-based CF, but it is designed to use both users' numeric ratings and their text reviews of the items when calculating similarities between users. Specifically, the system creates not only a user-item rating matrix but also a user-item review term matrix. It then calculates a rating similarity and a review similarity from each matrix, and computes the final user-to-user similarity from these two similarities. As methods for calculating the review similarity between users, we propose two alternatives: one uses the frequency of the commonly used terms, and the other uses the sum of the importance weights of the commonly used terms in users' reviews. For the importance weights of terms, we propose using average TF-IDF (Term Frequency - Inverse Document Frequency) weights. To validate the applicability of the proposed system, we applied it to the implementation of a recommender system for smartphone applications (hereafter, apps). At present, over a million apps are offered in each of the app stores operated by Google and Apple. Because of this information overload, users have difficulty selecting the apps they really want, and app store operators like Google and Apple have by now accumulated a huge number of user reviews of apps. Thus, we chose smartphone app stores as the application domain of our system. In order to collect the experimental data set, we built and operated a Web-based data collection system for about two weeks. As a result, we obtained 1,246 valid responses (ratings and reviews) from 78 users. The experimental system was implemented using Microsoft Visual Basic for Applications (VBA) and SAS Text Miner, and, to avoid distortion due to human intervention, no manual refinement was applied during the review mining process. To examine the effectiveness of the proposed system, we compared its performance with that of a conventional CF system. The performance of the recommender systems was evaluated using the average MAE (mean absolute error). The experimental results showed that our proposed system (MAE = 0.7867 ~ 0.7881) slightly outperformed a conventional CF system (MAE = 0.7939). They also showed that calculating the review similarity between users based on TF-IDF weights (MAE = 0.7867) led to better recommendation accuracy than calculating it based on the frequency of the commonly used terms in reviews (MAE = 0.7881). The results of a paired-samples t-test showed that the proposed system with review similarity calculated from the frequency of commonly used terms outperformed the conventional CF system at the 10% statistical significance level. Our study sheds light on the use of users' review information to facilitate electronic commerce by recommending proper items to users.
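
A minimal illustration, not the study's VBA/SAS implementation, of combining rating-based and review-based user similarities: cosine similarity over a user-item rating matrix and over a TF-IDF user-term matrix built from each user's concatenated reviews, averaged with equal weights (the equal weighting, toy ratings, and toy reviews are assumptions):

```python
# Sketch only: hybrid user-to-user similarity from ratings and review text.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

ratings = np.array([[5, 3, 0, 1],        # toy user-item rating matrix (0 = unrated)
                    [4, 0, 0, 1],
                    [1, 1, 5, 4]])
reviews = ["great camera app fast ui",   # one concatenated review text per user
           "fast app but drains battery",
           "battery hungry game lagging ui"]

rating_sim = cosine_similarity(ratings)
review_sim = cosine_similarity(TfidfVectorizer().fit_transform(reviews))
user_sim = 0.5 * rating_sim + 0.5 * review_sim    # combined user-to-user similarity

# Predict user 0's rating for item 2 from the other users' ratings, weighted by similarity.
others = [1, 2]
weights = user_sim[0, others]
pred = np.dot(weights, ratings[others, 2]) / weights.sum()
print(pred)
```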

Noise-robust electrocardiogram R-peak detection with adaptive filter and variable threshold (적응형 필터와 가변 임계값을 적용하여 잡음에 강인한 심전도 R-피크 검출)

  • Rahman, MD Saifur;Choi, Chul-Hyung;Kim, Si-Kyung;Park, In-Deok;Kim, Young-Pil
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.12 / pp.126-134 / 2017
  • There have been numerous studies on extracting the R-peak from electrocardiogram (ECG) signals. However, most of the detection methods are complicated to implement in a real-time portable electrocardiograph and have the disadvantage of requiring a large amount of calculation. R-peak detection requires pre-processing and post-processing related to baseline drift and the removal of commercial power-supply noise from the ECG data. An adaptive filter technique is widely used for R-peak detection, but the R-peak cannot be detected when the input is lower than the threshold value; moreover, there is a problem of P-peaks and T-peaks being detected when noise leads to an erroneous threshold value. We propose a robust R-peak detection algorithm with low complexity and simple computation to solve these problems. The proposed scheme removes the baseline drift in the ECG signal using an adaptive filter to resolve the problems involved in threshold extraction. We also propose a technique to extract an appropriate threshold value automatically using the minimum and maximum values of the filtered ECG signal. To detect the R-peak from the ECG signal, we propose a threshold neighborhood search technique. Through experiments, we confirmed the improved R-peak detection accuracy of the proposed method and achieved a detection speed suitable for a mobile system by reducing the amount of calculation. The experimental results show that the heart-rate detection accuracy and sensitivity were very high (about 100%).
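
A schematic sketch of the detection idea described above, not the authors' algorithm: remove baseline wander, derive a threshold from the filtered signal's minimum and maximum, and keep local maxima above it separated by a refractory period. The moving-average baseline removal stands in for the adaptive filter, and the sampling rate, threshold fraction, refractory period, and synthetic ECG are all assumptions:

```python
# Sketch only: baseline removal, min/max-derived threshold, and local-maximum R-peak search.
import numpy as np

fs = 250                                              # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
ecg = 0.3 * np.sin(2 * np.pi * 0.3 * t)               # baseline wander
ecg[::fs] += 1.0                                      # crude 60-bpm "R-peaks"
ecg += 0.02 * np.random.randn(t.size)

# Baseline removal: subtract a moving average (stand-in for the adaptive filter).
kernel = np.ones(int(0.6 * fs)) / int(0.6 * fs)
filtered = ecg - np.convolve(ecg, kernel, mode="same")

# Variable threshold from the filtered signal's extremes.
threshold = filtered.min() + 0.6 * (filtered.max() - filtered.min())

# Neighborhood search: local maxima above threshold, at least 200 ms apart.
refractory = int(0.2 * fs)
peaks, last = [], -refractory
for i in range(1, len(filtered) - 1):
    if filtered[i] > threshold and filtered[i] >= filtered[i - 1] and filtered[i] > filtered[i + 1]:
        if i - last >= refractory:
            peaks.append(i)
            last = i
print("detected R-peaks at samples:", peaks)
```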