• Title/Summary/Keyword: $A^*$ Algorithm

Investigating Data Preprocessing Algorithms of a Deep Learning Postprocessing Model for the Improvement of Sub-Seasonal to Seasonal Climate Predictions (계절내-계절 기후예측의 딥러닝 기반 후보정을 위한 입력자료 전처리 기법 평가)

  • Uran Chung;Jinyoung Rhee;Miae Kim;Soo-Jin Sohn
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.25 no.2
    • /
    • pp.80-98
    • /
    • 2023
  • This study explores the effectiveness of various data preprocessing algorithms for improving subseasonal to seasonal (S2S) climate predictions from six climate forecast models and their Multi-Model Ensemble (MME) using a deep learning-based postprocessing model. A pipeline of data transformation algorithms was constructed to convert raw S2S prediction data into training data transformed with several statistical distributions. A dimensionality reduction algorithm was also applied to select features through rankings of the correlation coefficients between the observed and the input data. The training model in the study was designed with a TimeDistributed wrapper applied to all convolutional layers of a U-Net: the TimeDistributed wrapper allows a U-Net convolutional layer, whose inputs must be at least 3-dimensional, to be applied directly to 5-dimensional time series data while maintaining the time axis of the data. We found that the Robust and Standard transformation algorithms are most suitable for improving S2S predictions. The dimensionality reduction based on feature selection did not significantly improve predictions of daily precipitation for the six climate models and even worsened predictions of daily maximum and minimum temperatures. While deep learning-based postprocessing also improved MME S2S precipitation predictions, it did not have a significant effect on temperature predictions, particularly for lead times of weeks 1 and 2. Further research is needed to develop an optimal deep learning model for improving S2S temperature predictions by testing various models and parameters.
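As a rough illustration of the preprocessing and TimeDistributed wrapping described in this abstract, the sketch below applies a scikit-learn Robust scaler to hypothetical 5-dimensional S2S data and wraps 2-D convolutions in Keras's TimeDistributed layer; the array shapes, layer sizes, and variable names are illustrative assumptions, not the authors' actual configuration.

```python
import numpy as np
from sklearn.preprocessing import RobustScaler
from tensorflow.keras import layers, models

# Hypothetical raw S2S predictions: (samples, lead_time, lat, lon, variables)
raw = np.random.rand(100, 2, 32, 32, 3).astype("float32")

# Robust transformation, fit per variable on the flattened grid.
flat = raw.reshape(-1, raw.shape[-1])
scaled = RobustScaler().fit_transform(flat).reshape(raw.shape)

# TimeDistributed applies the same convolution to every lead-time slice,
# so a 2-D U-Net-style layer can consume 5-D (time series) input.
inputs = layers.Input(shape=raw.shape[1:])          # (time, lat, lon, vars)
x = layers.TimeDistributed(layers.Conv2D(16, 3, padding="same",
                                         activation="relu"))(inputs)
x = layers.TimeDistributed(layers.Conv2D(1, 1, padding="same"))(x)
model = models.Model(inputs, x)
model.compile(optimizer="adam", loss="mse")

# Dummy fit just to show the shapes line up (targets are a placeholder channel).
model.fit(scaled, scaled[..., :1], epochs=1, verbose=0)
```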

Export Control System based on Case Based Reasoning: Design and Evaluation (사례 기반 지능형 수출통제 시스템 : 설계와 평가)

  • Hong, Woneui;Kim, Uihyun;Cho, Sinhee;Kim, Sansung;Yi, Mun Yong;Shin, Donghoon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.3
    • /
    • pp.109-131
    • /
    • 2014
  • As the demand for nuclear power plant equipment continues to grow worldwide, the importance of handling nuclear strategic materials is also increasing. While the number of cases submitted for the export of nuclear-power commodities and technologies is dramatically increasing, preadjudication (prescreening, in short) of strategic materials has so far been done by experts with long experience and extensive field knowledge. However, there is a severe shortage of experts in this domain, not to mention that it takes a long time to develop an expert. Because human experts must manually evaluate all the documents submitted for export permission, the current practice of nuclear material export control is neither time-efficient nor cost-effective. To alleviate the problem of relying only on costly human experts, our research proposes a new system designed to help field experts make their decisions more effectively and efficiently. The proposed system is built upon case-based reasoning, which in essence extracts key features from existing cases, compares those features with the features of a new case, and derives a solution for the new case by referencing similar cases and their solutions. Our research proposes a framework for a case-based reasoning system, designs a case-based reasoning system for the control of nuclear material exports, and evaluates the performance of alternative keyword extraction methods (fully automatic, fully manual, and semi-automatic). A keyword extraction method is an essential component of the case-based reasoning system, as it is used to extract the key features of the cases. The fully automatic method was implemented using TF-IDF, a widely used de facto standard method for representative keyword extraction in text mining. TF (Term Frequency) is based on the frequency count of a term within a document, showing how important the term is within that document, while IDF (Inverse Document Frequency) is based on the infrequency of the term across the document set, showing how uniquely the term represents the document. The results show that the semi-automatic approach, which is based on the collaboration of machine and human, is the most effective solution regardless of whether the human is a field expert or a student majoring in nuclear engineering. Moreover, we propose a new approach to computing nuclear document similarity along with a new framework of document analysis. The proposed algorithm for nuclear document similarity considers both document-to-document similarity ($\alpha$) and document-to-nuclear-system similarity ($\beta$) in order to derive a final score ($\gamma$) for the decision of whether the presented case concerns strategic material or not. The final score ($\gamma$) represents the document similarity between the past cases and the new case. The score is induced not only by exploiting conventional TF-IDF but also by utilizing a nuclear system similarity score, which takes the context of the nuclear system domain into account. Finally, the system retrieves the top-3 documents stored in the case base that are considered the most similar to the new case and provides them together with a degree of credibility. With this final score and the credibility score, it becomes easier for a user to see which documents in the case base are most worth looking up, so that the user can make a proper decision at relatively lower cost. The evaluation of the system was conducted by developing a prototype and testing it with field data.
The system workflows and outcomes were verified by the field experts. This research is expected to contribute to the growth of the knowledge service industry by proposing a new system that can effectively reduce the burden of relying on costly human experts for the export control of nuclear materials and that can be considered a meaningful example of knowledge service application.
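A minimal sketch of the TF-IDF retrieval step described above is given below. The abstract does not specify how the final score $\gamma$ combines the document-to-document similarity $\alpha$ and the document-to-nuclear-system similarity $\beta$, so the weighted sum, the example documents, and the $\beta$ values are assumptions for illustration only.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical case base and a new export-screening document (contents are made up).
case_docs = [
    "reactor coolant pump specification for pressurized water reactor",
    "zirconium alloy cladding tube purchase order",
    "general purpose industrial valve quotation",
]
new_doc = "coolant pump assembly for pressurized water reactor export"

vec = TfidfVectorizer()
tfidf = vec.fit_transform(case_docs + [new_doc])
# alpha: document-to-document similarity between the new case and each past case.
alpha = cosine_similarity(tfidf[len(case_docs)], tfidf[:len(case_docs)]).ravel()

# beta: document-to-nuclear-system similarity, assumed to come from a separate
# domain model; these values are placeholders.
beta = np.array([0.9, 0.7, 0.1])

w = 0.5                              # assumed weighting between alpha and beta
gamma = w * alpha + (1 - w) * beta   # final decision score for each past case

# Retrieve the top-3 most similar past cases for the reviewer.
for idx in gamma.argsort()[::-1][:3]:
    print(f"{gamma[idx]:.3f}  {case_docs[idx]}")
```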

A Study of Factors Associated with Software Developers Job Turnover (데이터마이닝을 활용한 소프트웨어 개발인력의 업무 지속수행의도 결정요인 분석)

  • Jeon, In-Ho;Park, Sun W.;Park, Yoon-Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.2
    • /
    • pp.191-204
    • /
    • 2015
  • According to the '2013 Performance Assessment Report on the Financial Program' from the National Assembly Budget Office, the unfilled recruitment ratio of Software (SW) developers in South Korea was 25% in the 2012 fiscal year. Moreover, the unfilled recruitment ratio of highly qualified SW developers reaches almost 80%. This phenomenon is intensified in small and medium enterprises with fewer than 300 employees. Young job-seekers in South Korea increasingly avoid becoming SW developers, and even current SW developers want to change careers, which hinders the national development of IT industries. The Korean government has recently recognized the problem and implemented policies to foster young SW developers. Due to this effort, it has become easier to find young beginning-level SW developers. However, it is still hard for many IT companies to recruit highly qualified SW developers. This is because long-term experience is essential to become an expert SW developer. Thus, improving the job continuity intentions of current SW developers is more important than fostering new SW developers. Therefore, this study surveyed the job continuity intentions of SW developers and analyzed the factors associated with them. As a method, we carried out a survey from September 2014 to October 2014, targeting 130 SW developers working in IT industries in South Korea. We gathered demographic information and characteristics of the respondents, the work environment of the SW industry, and the social position of SW developers. Afterward, a regression analysis and a decision tree method were performed to analyze the data. These two methods are widely used data mining techniques, which have explanatory power and are mutually complementary. We first performed a linear regression to find the important factors associated with the job continuity intention of SW developers. The result showed that the 'expected age' to work as a SW developer was the most significant factor associated with the job continuity intention. We suppose that the major cause of this phenomenon is the structural problem of IT industries in South Korea, which requires SW developers to move from development to management as they are promoted. Also, the 'motivation' to become a SW developer and the 'personality (introverted tendency)' of a SW developer are highly important factors associated with the job continuity intention. Next, the decision tree method was performed to extract the characteristics of highly motivated developers and less motivated ones. We used the well-known C4.5 algorithm for the decision tree analysis. The results showed that 'motivation', 'personality', and 'expected age' were also important factors influencing the job continuity intentions, which is similar to the results of the regression analysis. In addition, the 'ability to learn' new technology was a crucial factor in the decision rules for job continuity. In other words, a person with a high ability to learn new technology tends to work as a SW developer for a longer period of time. The decision rules also showed that the 'social position' of SW developers and the 'prospects' of the SW industry were minor factors influencing job continuity intentions. On the other hand, 'type of employment (regular/non-regular position)' and 'type of company (ordering company/service providing company)' did not affect the job continuity intention in either method.
In this research, we surveyed the job continuity intentions of SW developers actually working at IT companies in South Korea and analyzed the factors associated with them. These results can be used for human resource management in IT companies when recruiting or fostering highly qualified SW experts. They can also help to build SW developer fostering policies and to solve the problem of unfilled recruitment of SW developers in South Korea.
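The two analyses described above can be sketched roughly as follows; the survey columns and values are invented placeholders, and scikit-learn's CART-based DecisionTreeClassifier stands in for the C4.5 algorithm used in the study.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical survey data; column names are illustrative, not the actual questionnaire items.
df = pd.DataFrame({
    "expected_age":  [45, 38, 55, 40, 50, 35],
    "motivation":    [4, 2, 5, 3, 4, 1],       # 1-5 Likert scale
    "introversion":  [2, 4, 1, 3, 2, 5],
    "learn_ability": [5, 2, 4, 3, 5, 1],
    "continuity":    [4, 2, 5, 3, 4, 1],       # job continuity intention
})

X, y = df.drop(columns="continuity"), df["continuity"]

# Regression: which factors are associated with the continuity intention?
reg = LinearRegression().fit(X, y)
print(dict(zip(X.columns, reg.coef_.round(2))))

# Decision rules for "high continuity" respondents (CART stands in for C4.5 here).
tree = DecisionTreeClassifier(max_depth=3).fit(X, (y >= 4).astype(int))
print(export_text(tree, feature_names=list(X.columns)))
```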

GPU Based Feature Profile Simulation for Deep Contact Hole Etching in Fluorocarbon Plasma

  • Im, Yeon-Ho;Chang, Won-Seok;Choi, Kwang-Sung;Yu, Dong-Hun;Cho, Deog-Gyun;Yook, Yeong-Geun;Chun, Poo-Reum;Lee, Se-A;Kim, Jin-Tae;Kwon, Deuk-Chul;Yoon, Jung-Sik;Kim, Dae-Woong;You, Shin-Jae
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2012.08a
    • /
    • pp.80-81
    • /
    • 2012
  • Recently, one of the critical issues in the etching processes of nanoscale devices is to achieve an ultra-high aspect ratio contact (UHARC) profile without anomalous behaviors such as sidewall bowing and twisting profiles. To achieve this goal, fluorocarbon plasmas, whose major advantage is sidewall passivation, have commonly been used with numerous additives to obtain ideal etch profiles. However, they still suffer from formidable challenges such as tight limits on sidewall bowing and controlling randomly distorted features in nanoscale etch profiles. Furthermore, the absence of available plasma simulation tools has made it difficult to develop revolutionary technologies to overcome these process limitations, including novel plasma chemistries and plasma sources. As an effort to address these issues, we performed fluorocarbon surface kinetic modeling based on experimental plasma diagnostic data for a silicon dioxide etching process under inductively coupled C4F6/Ar/O2 plasmas. For this work, the SiO2 etch rates were investigated with bulk plasma diagnostic tools such as a Langmuir probe, a cutoff probe, and a Quadrupole Mass Spectrometer (QMS). The surface chemistries of the etched samples were measured by X-ray Photoelectron Spectrometry. To measure plasma parameters, a self-cleaning RF Langmuir probe was used to cope with the polymer deposition environment on the probe tip, and the results were double-checked with the cutoff probe, which is known to be a precise diagnostic tool for electron density measurement. In addition, neutral and ion fluxes from the bulk plasma were monitored with appearance methods using the QMS signal. Based on these experimental data, we proposed a phenomenological, realistic two-layer surface reaction model of the SiO2 etch process under an overlying polymer passivation layer, considering the material balance of deposition and etching through a steady-state fluorocarbon layer. The predicted surface reaction modeling results showed good agreement with the experimental data. With the above studies of plasma surface reactions, we developed a 3D topography simulator using a multi-layer level set algorithm and a new memory-saving technique, which is suitable for 3D UHARC etch simulation. Ballistic transport of neutral and ion species inside the feature profile was considered by deterministic and Monte Carlo methods, respectively. In the case of ultra-high aspect ratio contact hole etching, it is already well known that a huge computational burden is required for realistic consideration of this ballistic transport. To address this issue, the related computational codes were efficiently parallelized for GPU (Graphics Processing Unit) computing, so that the total computation time could be improved by more than a few hundred times compared to the serial version. Finally, the 3D topography simulator was integrated with the ballistic transport module and the etch reaction model. Realistic etch-profile simulations considering the sidewall polymer passivation layer were demonstrated.
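As a toy illustration of the Monte Carlo treatment of ballistic ion transport mentioned above, the sketch below estimates how many ions launched with a narrow angular spread reach the bottom of a high-aspect-ratio hole by simple geometric shadowing; the aspect ratio, angular spread, and sample size are arbitrary assumptions and are unrelated to the simulator described in the abstract.

```python
import numpy as np

# Minimal Monte Carlo sketch of ballistic ion transport into a contact hole,
# assuming a narrow Gaussian angular distribution for ions (illustrative values).
rng = np.random.default_rng(0)
aspect_ratio = 20.0                                  # hole depth / diameter
n_ions = 100_000

theta = rng.normal(0.0, np.radians(3.0), n_ions)     # off-normal launch angle
r0 = rng.uniform(-0.5, 0.5, n_ions)                  # entry position across the opening

# An ion reaches the hole bottom only if its lateral drift stays inside the sidewalls
# (no reflections considered in this simplified line-of-sight model).
lateral = r0 + aspect_ratio * np.tan(theta)
reach_bottom = np.abs(lateral) <= 0.5
print("fraction of ions reaching the bottom:", reach_bottom.mean())
```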

Prediction of Isothermal and Reacting Flows in Widely-Spaced Coaxial Jet, Diffusion-Flame Combustor (큰 지름비를 가지는 동축제트 확산화염 연소기내의 등온 및 연소 유동장의 예측)

  • O, Gun-Seop;An, Guk-Yeong;Kim, Yong-Mo;Lee, Chang-Sik
    • Transactions of the Korean Society of Mechanical Engineers B
    • /
    • v.20 no.7
    • /
    • pp.2386-2396
    • /
    • 1996
  • A numerical simulation has been performed for isothermal and reacting flows in an axisymmetric, bluff-body research combustor. The present formulation is based on the density-weighted averaged Navier-Stokes equations together with a k-epsilon turbulence model and a modified eddy-breakup combustion model. The PISO algorithm is employed for the solution of the Navier-Stokes system. Comparisons between measurements and predictions are made for centerline axial velocities, the location of stagnation points, the strength of the recirculation zone, and temperature profiles. Even though the numerical simulation gives acceptable agreement with experimental data in many respects, the present model is deficient in predicting the recovery rate of the central near-wake region, non-isotropic turbulence effects, and the variation of the turbulent Schmidt number. Several possible explanations for these discrepancies are discussed.

A Review of Multivariate Analysis Studies Applied for Plant Morphology in Korea (국내 식물 형태 연구에 사용된 다변량분석 논문에 대한 재고)

  • Chang, Kae Sun;Oh, Hana;Kim, Hui;Lee, Heung Soo;Chang, Chin-Sung
    • Journal of Korean Society of Forest Science
    • /
    • v.98 no.3
    • /
    • pp.215-224
    • /
    • 2009
  • A review was given of the role of traditional morphometrics in plant morphological studies, using 54 studies published from 1997 to 2008 in three major journals and others in Korea, such as the Journal of Korean Forestry Society, Korean Journal of Plant Taxonomy, Korean Journal of Breeding, Korean Journal of Apiculture, Journal of Life Science, and Korean Journal of Plant Resources. The two most commonly used techniques of data analysis, cluster analysis (CA) and principal components analysis (PCA), together with other statistical tests, were discussed. A common problem with PCA is its underlying assumptions, such as random sampling and a multivariate normal distribution of the data. The procedure is intended mainly for continuous data and is not efficient for data that are not well summarized by variances or covariances. Likewise, CA is most appropriate for categorical rather than continuous data. Also, CA produces clusters whether or not natural groupings exist, and the results depend on both the similarity measure chosen and the algorithm used for clustering. Additional problems with PCA and CA arise when both qualitative and quantitative data are used with a limited number of variables and/or too few samples. Some of these problems may be avoided if a sufficient number of variables (at least 20) and sufficient samples (at least 40-50) are used for morphometric analyses, but we do not think that these methods are almighty tools for data analysts. Instead, we believe that reasonable applications, combined with attention to the objectives and limitations of each procedure, would be a step forward.
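For readers unfamiliar with the two techniques, a minimal sketch of PCA and cluster analysis on a hypothetical morphometric matrix (here 45 specimens by 22 continuous characters, meeting the sample and variable counts recommended above) might look as follows; the data are random placeholders, not measurements from any of the reviewed studies.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical morphometric matrix: 45 specimens x 22 continuous characters
# (leaf length, petiole width, ...); values here are random placeholders.
rng = np.random.default_rng(1)
X = rng.normal(size=(45, 22))

# PCA assumes roughly multivariate-normal data that is well summarized by variances.
scores = PCA(n_components=2).fit_transform(X)

# Agglomerative clustering (UPGMA linkage) will always return clusters,
# whether or not natural groups exist -- interpret with care.
Z = linkage(X, method="average", metric="euclidean")
groups = fcluster(Z, t=3, criterion="maxclust")

print(scores[:3])     # first three specimens in PC space
print(groups[:10])    # cluster labels for the first ten specimens
```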

Improvement of 2-pass DInSAR-based DEM Generation Method from TanDEM-X bistatic SAR Images (TanDEM-X bistatic SAR 영상의 2-pass 위성영상레이더 차분간섭기법 기반 수치표고모델 생성 방법 개선)

  • Chae, Sung-Ho
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_1
    • /
    • pp.847-860
    • /
    • 2020
  • The 2-pass DInSAR (Differential Interferometric SAR) processing steps for DEM generation consist of co-registration of the SAR image pair, interferogram generation, phase unwrapping, calculation of DEM errors, geocoding, etc. The procedure requires complicated steps, and the accuracy of the data processing at each step affects the performance of the finally generated DEM. In this study, we developed an improved method for enhancing the performance of DEM generation based on the 2-pass DInSAR technique applied to TanDEM-X bistatic SAR images. The developed method can significantly reduce both the DEM error in the unwrapped phase image and the error that may occur during the geocoding step. The performance of the developed algorithm was analyzed by comparing the vertical accuracy (Root Mean Square Error, RMSE) of the existing method and the newly proposed method using ground control points (GCPs) generated from a GPS survey. The vertical accuracy of the DInSAR-based DEM generated without correction for the unwrapped phase error and the geocoding error is 39.617 m, whereas the vertical accuracy of the DEM generated through the proposed method is 2.346 m, confirming that the DEM accuracy was improved by the proposed correction method. Through the proposed 2-pass DInSAR-based DEM generation method, the SRTM DEM error observed by DInSAR was compensated relative to the SRTM 30 m DEM (vertical accuracy 5.567 m) used as a reference. Through this, it was possible to finally create a DEM with a spatial resolution improved by a factor of about 5 and a vertical accuracy improved by a factor of about 2.4. In addition, the spatial resolution of the DEM generated through the proposed method was matched to the SRTM 30 m DEM and the TanDEM-X 90 m DEM, and the vertical accuracies were compared. As a result, it was confirmed that the vertical accuracy was improved by about 1.7 and 1.6 times, respectively, so that more accurate DEM generation is possible with the proposed method. If the method derived in this study is used to continuously update DEMs for regions with frequent morphological changes, it will be possible to update DEMs effectively, in a short time and at low cost.
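The vertical-accuracy (RMSE) comparison against GPS-surveyed ground control points mentioned above reduces to a simple calculation; the heights below are illustrative placeholders, not the study's GCP data.

```python
import numpy as np

# Vertical accuracy (RMSE) of a generated DEM against GPS-surveyed ground control points.
gcp_height = np.array([132.4, 98.7, 210.3, 156.1])   # GPS-surveyed elevations (m), made up
dem_height = np.array([134.1, 96.9, 212.8, 154.0])   # DEM elevations at the same points (m)

rmse = np.sqrt(np.mean((dem_height - gcp_height) ** 2))
print(f"vertical RMSE: {rmse:.3f} m")
```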

Functional MR Imaging of Cerebral Motor Cortex: Comparison between Conventional Gradient Echo and EPI Techniques (뇌 운동피질의 기능적 영상: 고식적 Gradient Echo기법과 EPI기법간의 비교)

  • 송인찬
    • Investigative Magnetic Resonance Imaging
    • /
    • v.1 no.1
    • /
    • pp.109-113
    • /
    • 1997
  • Purpose: To evaluate the differences in functional imaging patterns between conventional spoiled gradient echo (SPGR) and echo planar imaging (EPI) methods in cerebral motor cortex activation. Materials and Methods: Functional MR imaging of cerebral motor cortex activation was examined on a 1.5T MR unit with SPGR (TR/TE/flip angle = 50 ms/40 ms/$30^{\circ}$, FOV = 300 mm, matrix size = $256{\times}256$, slice thickness = 5 mm) and interleaved single-shot gradient echo EPI (TR/TE/flip angle = 3000 ms/40 ms/$90^{\circ}$, FOV = 300 mm, matrix size = $128{\times}128$, slice thickness = 5 mm) techniques in five healthy male volunteers. A total of 160 images in one slice and 960 images in 6 slices were obtained with SPGR and EPI, respectively. Right finger movement was performed with a paradigm of 8 activation/8 rest periods. Cross-correlation was used as the statistical mapping algorithm. We evaluated any differences in the time series and in the signal intensity changes between the rest and activation periods obtained with the two techniques. The locations and areas of the activation sites were also compared between the two techniques. Results: The activation sites in the motor cortex were accurately localized with both methods. In the signal intensity changes between the rest and activation periods at the activation regions, no significant differences were found between EPI and SPGR. The signal-to-noise ratio (SNR) of the time series data was twofold higher in EPI than in SPGR. Also, more pixels with small p-values were distributed over the activation sites in EPI. Conclusions: Good-quality functional MR imaging of cerebral motor cortex activation could be obtained with both SPGR and EPI. However, EPI is preferable because, due to its higher sensitivity, it provides more precise information on the hemodynamics related to neural activity than SPGR.
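A minimal sketch of the cross-correlation mapping used as the statistical algorithm is shown below; the synthetic slice, the planted activation, and the correlation threshold are illustrative assumptions, not the study's acquisition or analysis parameters.

```python
import numpy as np

# Cross-correlation mapping: correlate each voxel time series with the
# 8-on/8-off task paradigm (160 time points, as in the SPGR series).
n_timepoints = 160
paradigm = np.tile(np.r_[np.zeros(8), np.ones(8)], n_timepoints // 16)

rng = np.random.default_rng(2)
voxels = rng.normal(size=(64, 64, n_timepoints))   # synthetic image slice
voxels[30:34, 30:34] += 0.8 * paradigm             # planted "activation" region

# Pearson correlation of every voxel time series with the paradigm.
v = voxels - voxels.mean(axis=-1, keepdims=True)
p = paradigm - paradigm.mean()
cc = (v @ p) / (np.linalg.norm(v, axis=-1) * np.linalg.norm(p))

activation_map = cc > 0.5                          # assumed significance threshold
print(activation_map.sum(), "voxels above threshold")
```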

Study of Crustal Structure in North Korea Using 3D Velocity Tomography (3차원 속도 토모그래피를 이용한 북한지역의 지각구조 연구)

  • So Gu Kim;Jong Woo Shin
    • The Journal of Engineering Geology
    • /
    • v.13 no.3
    • /
    • pp.293-308
    • /
    • 2003
  • New results on the crustal structure down to a depth of 60 km beneath North Korea were obtained using the seismic tomography method. About 1013 P- and S-wave travel times from local earthquakes recorded by the Korean stations and their vicinity were used in the research. All earthquakes were relocated on the basis of an algorithm proposed in this study. Parameterization of the velocity structure is realized with a set of nodes distributed in the study volume according to the ray density. 120 nodes located at four depth levels were used to obtain the resulting P- and S-wave velocity structures. As a result, it is found that the P- and S-wave velocity anomalies of the Rangnim Massif at a depth of 8 km are high and low, respectively, whereas those of the Pyongnam Basin are low down to 24 km. This indicates that the Rangnim Massif contains Archean to early Lower Proterozoic massif foldings with many faults and fractures, which may be saturated with groundwater and/or hot springs. On the other hand, the Pyongyang-Sariwon area in the Pyongnam Basin is an intraplatform depression filled with sediments of Upper Proterozoic, Silurian and Upper Paleozoic, and Lower Mesozoic origin. In particular, high P- and S-wave velocity anomalies are observed at depths of 8, 16, and 24 km beneath Mt. Backdu, indicating that they may be the shallow conduits of solidified magma bodies, while the low P- and S-wave velocity anomalies at a depth of 38 km must be related to a magma chamber of low-velocity bodies with partial melting. We also found the Moho discontinuity beneath the Origin Basin, including Sariwon, to be about 55 km deep, whereas that of Mt. Backdu is found to be about 38 km. The high ratio of P-wave velocity to S-wave velocity at the Moho suggests that there must be a partially melting body near the boundary of the crust and mantle. Consequently, we may well consider Mt. Backdu a dormant volcano holding an intermediate magma chamber near the Moho discontinuity. This study also brought the interesting and important finding that materials with very high P- and S-wave velocity anomalies exist at a depth of about 40 km near the Mt. Myohyang area at the edge of the Rangnim Massif shield.

A review of Deepwater Horizon Oil Budget Calculator for its Application to Korea (딥워터 호라이즌호 유출유 수지분석 모델의 국내 적용성 검토)

  • Kim, Choong-Ki;Oh, Jeong-Hwan;Kang, Seong-Gil
    • Journal of the Korean Society for Marine Environment & Energy
    • /
    • v.19 no.4
    • /
    • pp.322-331
    • /
    • 2016
  • An oil budget calculator identifies the removal pathways of spilled oil by both natural processes and response methods, and estimates the remaining oil that requires response activities. An oil budget calculator was newly developed as a response tool for the Deepwater Horizon oil spill incident in the Gulf of Mexico in 2010 to inform cleanup decisions within the Incident Command System, and it was also successfully utilized to communicate oil spill response activities to the media and the general public. This study analyzed the theoretical background of the oil budget calculator and explored its future application to Korea. The oil budget calculations for four catastrophic marine pollution incidents indicate that 3~8% of the spilled oil was removed mechanically by skimmers, 1~5% by in-situ burning, 4.8~16% by chemical dispersion due to dispersant operations, and 37~56% by weathering processes such as evaporation, dissolution, and natural dispersion. The results show that in-situ burning and chemical dispersion remove spilled oil more effectively than mechanical removal by skimming, and that natural weathering processes are also very effective at removing spilled oil. To apply the oil budget calculator in Korea, its parameters need to be optimized for the seasonal characteristics of the marine environment, the characteristics of the spilled oil, and the available response technologies. A new algorithm also needs to be developed to estimate the oil budget attributable to shoreline cleanup activities. An oil budget calculator optimized for Korea can play a critical role in informing decisions on oil spill response activities and in communicating spill prevention and response activities to the media and the general public.
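A back-of-the-envelope mass-balance sketch of the kind of bookkeeping an oil budget calculator performs is shown below; the spilled volume and the pathway fractions are single illustrative values picked from within the ranges quoted above, not a validated parameterization for Korea.

```python
# Minimal oil mass-balance sketch for an oil budget calculator.
spilled = 100_000.0            # barrels spilled (hypothetical)

# Removal fractions: illustrative mid-range values from the review above.
removed = {
    "mechanical_skimming": 0.05 * spilled,
    "in_situ_burning":     0.03 * spilled,
    "chemical_dispersion": 0.10 * spilled,
    "weathering":          0.45 * spilled,   # evaporation, dissolution, natural dispersion
}

remaining = spilled - sum(removed.values())   # oil still requiring response
for pathway, amount in removed.items():
    print(f"{pathway:>20s}: {amount:10.0f} bbl")
print(f"{'remaining':>20s}: {remaining:10.0f} bbl")
```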