• Title/Summary/Keyword: Design algorithm

Search Results: 10,362

Running Safety and Ride Comfort Prediction for a High-Speed Railway Bridge Using Deep Learning (딥러닝 기반 고속철도교량의 주행안전성 및 승차감 예측)

  • Kim, Minsu;Choi, Sanghyun
    • Journal of the Computational Structural Engineering Institute of Korea / v.35 no.6 / pp.375-380 / 2022
  • High-speed railway bridges carry a risk of dynamic response amplification due to resonance caused by train loads, so running safety and ride comfort must be reviewed through dynamic analysis in accordance with design codes. This calculation procedure, however, is time-consuming and expensive because dynamic analyses must be performed at every 10 km/h interval up to 110% of the design speed, including the critical speed for each train type. In this paper, a deep-learning-based system that predicts running safety and ride comfort in advance, without performing dynamic analysis, is proposed. The system is built on a neural network trained on the dynamic analysis results for each train and speed on the railway bridge, and it predicts running safety and ride comfort from input parameters such as train speed and bridge characteristics. To confirm the performance of the proposed system, running safety and ride comfort are predicted for a single-span, straight simple-beam bridge. The results confirm that the deck vertical displacement and deck vertical acceleration used to calculate running safety and ride comfort can be predicted with high accuracy.
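A minimal sketch of the kind of surrogate model the abstract describes: a small neural network regressor mapping train speed and a few bridge parameters to deck vertical displacement and acceleration. The feature names, data shapes, and network size below are illustrative assumptions, not the authors' actual configuration.

```python
# Hypothetical surrogate for dynamic-analysis results (illustrative only).
# Inputs: train speed and simple bridge descriptors; outputs: deck vertical
# displacement and acceleration, which running safety / ride comfort checks use.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Placeholder training data standing in for precomputed dynamic-analysis runs:
# columns = [train_speed_kmh, span_length_m, natural_freq_hz, mass_per_m_t]
X = rng.uniform([100, 20, 2.0, 20], [440, 50, 8.0, 60], size=(500, 4))
# targets = [deck_vertical_displacement_mm, deck_vertical_acceleration_m_s2]
y = np.column_stack([
    0.02 * X[:, 1] + 0.01 * X[:, 0] / X[:, 2],   # toy displacement relation
    0.001 * X[:, 0] * X[:, 2] / X[:, 3],          # toy acceleration relation
])

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 64),
                                   max_iter=2000, random_state=0))
model.fit(X, y)

# Predict for a new (speed, bridge) combination instead of running a full
# dynamic analysis at every 10 km/h step.
print(model.predict([[300, 40, 4.5, 35]]))
```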

Graph Convolutional-Network Architecture Search: Network Architecture Search Using Graph Convolutional Neural Networks (그래프 합성곱-신경망 구조 탐색 : 그래프 합성곱 신경망을 이용한 신경망 구조 탐색)

  • Choi, Su-Youn;Park, Jong-Youel
    • The Journal of the Convergence on Culture Technology / v.9 no.1 / pp.649-654 / 2023
  • This paper proposes the design of a neural architecture search model that uses graph convolutional neural networks. Because deep learning models learn as black boxes, it is difficult to verify whether a designed model has a structure with optimized performance. A neural architecture search model consists of a controller network that generates candidate models and the convolutional neural network that is generated. Conventional neural architecture search models use a recurrent neural network as the controller; in this paper we propose GC-NAS, which uses graph convolutional neural networks instead of recurrent neural networks to generate convolutional neural network models. The proposed GC-NAS uses a Layer Extraction Block to explore depth and a Hyper Parameter Prediction Block to explore spatial and temporal information (hyperparameters) in parallel, based on the depth information. Because the depth information is reflected, the search space is wider, and because the search proceeds in parallel with the depth information, the purpose of each part of the search space is clear; GC-NAS is therefore judged to be superior in theoretical structure to existing architecture search models. Through its graph convolutional network block and graph generation algorithm, GC-NAS is expected to mitigate the problems of the long time axis and the limited spatial search range of the recurrent neural networks used in existing neural architecture search models. We also hope that the GC-NAS proposed in this paper will encourage active research on applying graph convolutional neural networks to neural architecture search.
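A rough sketch of the graph-convolution step such a controller might rely on: one GCN layer that propagates features over a candidate network's layer-connectivity graph and pools them into an embedding. The adjacency matrix, feature layout, and pooling head are assumptions for illustration; the paper's actual GC-NAS blocks are not reproduced here.

```python
# Illustrative graph-convolution pass over a candidate architecture graph.
# Nodes = layers of the candidate CNN, edges = connections between layers.
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy candidate: 4 layers chained 0->1->2->3 (symmetric adjacency).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Node features, e.g. layer-type flags, kernel size, channel count (made up).
H = np.array([[1, 0, 3, 16],
              [1, 0, 3, 32],
              [0, 1, 2, 32],
              [1, 0, 3, 64]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 8))

H1 = gcn_layer(A, H, W1)
H2 = gcn_layer(A, H1, W2)

# Mean-pool node embeddings into a single vector a search controller could
# feed to heads that predict depth / hyperparameter choices.
graph_embedding = H2.mean(axis=0)
print(graph_embedding.shape)   # (8,)
```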

Analysis of Transfer Learning Effect for Automatic Dog Breed Classification (반려견 자동 품종 분류를 위한 전이학습 효과 분석)

  • Lee, Dongsu;Park, Gooman
    • Journal of Broadcast Engineering / v.27 no.1 / pp.133-145 / 2022
  • Compared to the continuously increasing dog population and industry size in Korea, systematic analysis of the related data and research on breed classification methods are very limited. In this paper, an automatic breed classification method using deep learning is proposed for 14 major dog breeds raised domestically. Dog images are collected and a dataset is built for training, and a breed classification model is created by performing transfer learning with VGG-16 and ResNet-34 as backbone networks. To examine the transfer learning effect of the two models on dog images, we compared using the pre-trained weights as-is with updating (fine-tuning) the weights. When fine-tuning was performed with the VGG-16 backbone, the final model reached a Top-1 accuracy of about 89% and a Top-3 accuracy of about 94%. The domestic dog breed classification method and dataset construction proposed in this paper can be used for various applications, such as classifying abandoned and lost dogs in animal protection centers or supporting the pet-feed industry.
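A minimal transfer-learning sketch in the spirit of the abstract, using a torchvision VGG-16 backbone with its final layer replaced for 14 breed classes. The dataset path, transforms, and training hyperparameters are placeholders, not the authors' setup.

```python
# Illustrative VGG-16 transfer learning for 14 dog breeds (placeholder paths).
import torch
import torch.nn as nn
from torchvision import datasets, transforms
from torchvision.models import vgg16, VGG16_Weights

model = vgg16(weights=VGG16_Weights.IMAGENET1K_V1)  # ImageNet pre-training

# Feature-extraction variant: freeze convolutional features ...
for p in model.features.parameters():
    p.requires_grad = False
# ... a fine-tuning variant would instead leave all parameters trainable.

# Replace the last classifier layer: 1000 ImageNet classes -> 14 breeds.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 14)

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("dogs/train", transform=transform)  # hypothetical folder
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad,
                                    model.parameters()), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:               # one epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```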

Data analysis by Integrating statistics and visualization: Visual verification for the prediction model (통계와 시각화를 결합한 데이터 분석: 예측모형 대한 시각화 검증)

  • Mun, Seong Min;Lee, Kyung Won
    • Design Convergence Study / v.15 no.6 / pp.195-214 / 2016
  • Predictive analysis is based on probabilistic learning algorithms for pattern recognition or machine learning. If users want to extract more information from the data, they therefore need a high level of statistical knowledge, and it is also difficult to discern the patterns and characteristics of the data. This study combined statistical data analysis with visual data analysis to supplement these weaknesses of predictive analysis. Through this combination we found implications that had not appeared in previous studies. First, we could identify data patterns when adjusting the data selection according to the splitting criteria of the decision tree method. Second, we could see what types of data were included in the final prediction model. In the statistical analysis we found relationships among the multiple variables and derived a prediction model for high box-office performance; in the visualization analysis we proposed a visual analysis method with various interactive functions. Finally, through this study we verified the final prediction model and suggested an analysis method that extracts a variety of information from the data.
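A small sketch of the decision-tree step the abstract refers to: fitting a tree on tabular features and inspecting the splitting criteria visually. The box-office-style feature names and data are invented placeholders; the study's actual variables are not reproduced.

```python
# Illustrative decision tree whose split criteria can be inspected visually
# (placeholder box-office-style features, not the study's data).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeClassifier, plot_tree

rng = np.random.default_rng(0)
n = 400
# Hypothetical predictors: opening-week screens, star-power score, franchise flag.
X = np.column_stack([
    rng.integers(100, 1500, n),      # screens
    rng.uniform(0, 10, n),           # star_power
    rng.integers(0, 2, n),           # is_franchise
])
# Toy target: "high box office" if screens and star power are both large.
y = ((X[:, 0] > 800) & (X[:, 1] > 6)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, criterion="gini", random_state=0)
tree.fit(X, y)

# Visual verification: which variables the splits actually use, and where.
plot_tree(tree, feature_names=["screens", "star_power", "is_franchise"],
          class_names=["low", "high"], filled=True)
plt.show()
```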

Performance Analysis of Receiver for Underwater Acoustic Communications Using Acquisition Data in Shallow Water (천해역 취득 데이터를 이용한 수중음향통신 수신기 성능분석)

  • Kim, Seung-Geun;Kim, Sea-Moon;Yun, Chang-Ho;Lim, Young-Kon
    • The Journal of the Acoustical Society of Korea / v.29 no.5 / pp.303-313 / 2010
  • This paper describes an acoustic communication receiver structure designed for a QPSK (Quadrature Phase Shift Keying) signal with a 25 kHz carrier frequency and a 5 kHz symbol rate, sampled at 100 kHz. Based on this receiver structure, the optimum design parameters of the joint equalizer, such as the number of taps of the FF (feed-forward) and FB (feed-back) filters and the forgetting factor of the RLS (recursive least-squares) algorithm, are determined so as to minimize the BER (bit error rate) of the equalizer output symbols when data acquired in shallow water with the implemented acoustic transducers are decimated at a rate of 2:1 and then fed to the receiver input. The transmission distances are 1.4 km, 2.9 km, and 4.7 km. The analysis shows that the optimum numbers of FF and FB filter taps differ with the distance between source and destination, but the optimum or near-optimum forgetting factor is 0.997. We therefore conclude that a practical receiver could vary the number of FF and FB filter taps with transmission distance while keeping the forgetting factor fixed at 0.997. A further result is that the performance degradation is acceptable when a 16-tap simple filter is used as the receiver's low-pass filter instead of the 161-tap matched filter.
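A compact sketch of the RLS tap update at the heart of such an adaptive equalizer, using the forgetting factor the abstract identifies (0.997). The tap count, channel, and training symbols are toy values, not the paper's measured configuration, and only a simple linear equalizer is shown rather than the joint FF/FB structure.

```python
# Toy RLS-adapted linear equalizer (illustrative; the paper uses a joint
# feed-forward / feed-back structure on real shallow-water QPSK data).
import numpy as np

rng = np.random.default_rng(0)
lam = 0.997                 # forgetting factor reported as (near) optimal
n_taps = 8                  # placeholder tap count
n_sym = 2000

# QPSK training symbols through a toy multipath channel plus noise.
symbols = (rng.integers(0, 2, n_sym) * 2 - 1) + 1j * (rng.integers(0, 2, n_sym) * 2 - 1)
channel = np.array([1.0, 0.4 + 0.2j, 0.1])
received = np.convolve(symbols, channel)[:n_sym] + 0.05 * (
    rng.normal(size=n_sym) + 1j * rng.normal(size=n_sym))

w = np.zeros(n_taps, dtype=complex)          # equalizer taps
P = np.eye(n_taps, dtype=complex) / 0.01     # inverse correlation matrix

for k in range(n_taps, n_sym):
    x = received[k - n_taps:k][::-1]         # regressor (most recent first)
    y = np.vdot(w, x)                        # equalizer output (w^H x)
    e = symbols[k] - y                       # error against known training symbol
    Px = P @ x
    g = Px / (lam + np.vdot(x, Px))          # RLS gain vector
    w = w + g * np.conj(e)                   # tap update
    P = (P - np.outer(g, np.conj(x) @ P)) / lam

print("final error magnitude:", abs(e))
```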

Development of Market Growth Pattern Map Based on Growth Model and Self-organizing Map Algorithm: Focusing on ICT products (자기조직화 지도를 활용한 성장모형 기반의 시장 성장패턴 지도 구축: ICT제품을 중심으로)

  • Park, Do-Hyung;Chung, Jaekwon;Chung, Yeo Jin;Lee, Dongwon
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.1-23 / 2014
  • Market forecasting aims to estimate the sales volume of a product or service sold to consumers over a specific selling period. From the perspective of the enterprise, accurate market forecasting assists in timing new product introductions, designing products, and establishing production plans and marketing strategies, enabling a more efficient decision-making process; it also enables governments to organize the national budget efficiently. This study aims to generate market growth curves for ICT (information and communication technology) goods from past time-series data, categorize products showing similar growth patterns, understand the markets in the industry, and forecast the future outlook of such products. The study suggests a useful and meaningful process (or methodology) for identifying market growth patterns with a quantitative growth model and a data mining algorithm. In the first stage, past time-series data are collected for the target products or services of the categorized industry. The data, such as sales volume and domestic consumption of a specific product or service, are collected from the relevant government ministry, the National Statistical Office, and other relevant government organizations; data that cannot be analyzed because of missing history or changed code names require pre-processing. In the second stage, an optimal model for market forecasting is selected, and the choice can vary with the characteristics of each categorized industry. Because this study focuses on the ICT industry, where new technologies appear frequently and change the market structure, the Logistic, Gompertz, and Bass models are selected. A hybrid model that combines different models can also be considered; the hybrid model considered here estimates the size of the market potential through the Logistic and Gompertz models and then uses that figure in the Bass model. The third stage evaluates which model explains the data most accurately: the parameters are estimated from the collected time series, the models' predicted values are generated, and the root-mean-square error (RMSE) is calculated; the model with the lowest average RMSE over all product types is taken as the best model. In the fourth stage, based on the parameter values estimated by the best model, a market growth pattern map is constructed with a self-organizing map algorithm. The self-organizing map is trained with the growth pattern parameters of all products or services as input data, and the products or services are organized onto an N×N map. The number of clusters is increased from 2 to M depending on the characteristics of the nodes on the map; the clusters are divided into zones, and the clustering that provides the most meaningful explanation is selected. Based on the final selection of clusters, the boundaries between the nodes are drawn and the market growth pattern map is completed. The last step determines the final characteristics of the clusters and their market growth curves: the average of the growth pattern parameters within each cluster is taken as a representative figure, a growth curve is drawn for each cluster, and its characteristics are analyzed. Considering the product types in each cluster, their characteristics can also be described qualitatively. We expect that the process and system suggested in this paper can be used as a tool for forecasting demand in the ICT and other industries.
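A condensed sketch of the second and third stages described above: fitting Logistic, Gompertz, and Bass growth models to a cumulative-sales series and picking the one with the lowest RMSE, whose parameters would then feed the self-organizing map. The series, starting values, and model forms below are synthetic placeholders.

```python
# Fit Logistic, Gompertz, and Bass models to a toy cumulative-adoption series
# and select the best by RMSE (the winner's parameters would go to the SOM).
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    return K / (1.0 + np.exp(-r * (t - t0)))

def gompertz(t, K, b, c):
    return K * np.exp(-b * np.exp(-c * t))

def bass(t, m, p, q):
    e = np.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

# Synthetic "observed" cumulative sales for one ICT product (placeholder data).
t = np.arange(1, 16, dtype=float)
y = logistic(t, 1000, 0.6, 8) + np.random.default_rng(0).normal(0, 15, t.size)

models = {
    "Logistic": (logistic, [1000, 0.5, 8]),
    "Gompertz": (gompertz, [1000, 5.0, 0.3]),
    "Bass":     (bass,     [1000, 0.03, 0.4]),
}

best = None
for name, (f, p0) in models.items():
    try:
        params, _ = curve_fit(f, t, y, p0=p0, maxfev=10000)
    except RuntimeError:
        continue                       # skip models that fail to converge
    rmse = np.sqrt(np.mean((f(t, *params) - y) ** 2))
    print(f"{name}: RMSE={rmse:.1f}, params={np.round(params, 3)}")
    if best is None or rmse < best[1]:
        best = (name, rmse, params)

print("best model:", best[0])  # its parameters form one SOM input vector
```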

Developments of Local Festival Mobile Application and Data Analysis System Applying Beacon (비콘을 활용한 위치기반 지역축제 모바일 애플리케이션과 데이터 분석 시스템 개발)

  • Kim, Song I;Kim, Won Pyo;Jeong, Chul
    • Korea Science and Art Forum / v.31 / pp.21-32 / 2017
  • Local festivals shape regional culture and create an atmosphere of communication; they increase demand for domestic tourism businesses and thus play an important role in ripple effects (e.g. improving the regional image, attracting tourists, creating jobs, developing regional content, and selling local products) and in economic revitalization. IoT (Internet of Things) technologies have matured, and beacons, one of the IoT services, have been applied in many types and forms both domestically and internationally. Nevertheless, despite the spread of digital mobile technology, it remains difficult for individuals to track information about all local festivals, and weak strategic approaches and advertising fail to satisfy tourists' desire to enjoy the festivals. Furthermore, current festival-related mobile applications do not deliver information well and have numerous content problems (e.g. how information is delivered within the festival grounds, a separate application for each festival, and one-time use because the event itself is one-time). Against this background, this research aims to develop a location-based local festival mobile application and data analysis system using beacon technology. Three algorithms were developed, namely a 'festival crowding algorithm', a 'visitor stats algorithm', and a 'customized information algorithm', and a beta test of the developed application and data analysis system followed. As a result, the system could build a database of visitor types and behaviors and provide functions and services such as personalized information, waiting times for festival contents, and a 'hot place' function. In the Google Play store the application recorded more than 13,000 downloads within the first three months and became the most exposed festival-related application, demonstrating its marketability and quality. The paper is organized as follows: chapter 2 reviews the literature on local festivals in relation to technology development, beacon services, and festival applications; chapter 3 describes the design plans and requirements for the local festival mobile application and data analysis system using beacons; chapter 4 evaluates the beta test results to verify the applicability of the developed application and data analysis system; and chapter 5 presents the conclusion and future research.
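A small illustration of what a 'festival crowding algorithm' over beacon detections could look like: counting distinct devices seen near each beacon zone within a recent time window. The event schema, zone names, and threshold are assumptions, not the paper's implementation.

```python
# Illustrative per-zone crowding estimate from beacon detection events.
# Each event: (timestamp, device_id, beacon_zone) -- a made-up schema.
from collections import defaultdict
from datetime import datetime, timedelta

events = [
    (datetime(2017, 10, 1, 14, 0), "dev-01", "food-street"),
    (datetime(2017, 10, 1, 14, 2), "dev-02", "food-street"),
    (datetime(2017, 10, 1, 14, 3), "dev-01", "main-stage"),
    (datetime(2017, 10, 1, 14, 5), "dev-03", "main-stage"),
    (datetime(2017, 10, 1, 14, 6), "dev-04", "main-stage"),
]

def crowding_by_zone(events, now, window=timedelta(minutes=10)):
    """Count distinct devices detected per zone within the recent window."""
    seen = defaultdict(set)
    for ts, device, zone in events:
        if now - window <= ts <= now:
            seen[zone].add(device)
    return {zone: len(devices) for zone, devices in seen.items()}

counts = crowding_by_zone(events, now=datetime(2017, 10, 1, 14, 7))
hot_places = [z for z, c in counts.items() if c >= 3]   # arbitrary threshold
print(counts, hot_places)
```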

Usability of Multiple Confocal SPECT System in the Myocardial Perfusion SPECT Using 99mTc (99mTc을 이용한 심근 관류 SPECT에서 Multiple Confocal SPECT System의 유용성)

  • Shin, Chae-Ho;Pyo, Sung-Jai;Kim, Bong-Su;Cho, Yong-Gyi;Jo, Jin-Woo;Kim, Chang-Ho
    • The Korean Journal of Nuclear Medicine Technology / v.15 no.2 / pp.65-71 / 2011
  • Purpose: The recently adopted multiple confocal SPECT system (hereinafter IQ SPECT™) differs substantially from conventional myocardial perfusion SPECT in collimator geometry, acquisition method, and image reconstruction. This study compared the new equipment with the conventional one in order to design a protocol suited to IQ SPECT and to determine its characteristics and usefulness. Materials and Methods: 1. For the LEHR (low-energy high-resolution) collimator and the multiple confocal collimator, 37 MBq of 99mTc was placed in an acrylic dish and the sensitivity (cpm/μCi) was measured at distances of 5 cm, 10 cm, 20 cm, 30 cm, and 40 cm. 2. Based on the sensitivity results, an IQ SPECT protocol was designed by reference to the conventional myocardial SPECT protocol; 278 kBq/mL, 7.4 kBq/mL, and 48 kBq/mL of 99mTc were injected into the myocardium, soft tissue, and liver sites of an anthropomorphic torso phantom, and myocardial perfusion SPECT was performed. 3. To compare the FWHM (full width at half maximum) after image reconstruction with the LEHR collimator, FWHM (mm) was measured with a 99mTc line source while changing only the algorithm: the FBP (filtered back-projection) method used in conventional myocardial perfusion SPECT versus the 3D OSEM (ordered-subsets expectation maximization) method used in IQ SPECT. Results: 1. The sensitivities (cpm/μCi) of the IQ SPECT collimator were 302, 382, 655, 816, and 1178, and those of the LEHR collimator were 204, 204, 202, 201, and 198 at 5 cm, 10 cm, 20 cm, 30 cm, and 40 cm, respectively; the sensitivity difference increases to about four times at 30 cm. 2. A myocardial perfusion SPECT protocol was designed according to the geometric characteristics of IQ SPECT based on the sensitivity results, and the phantom test of this protocol showed that the examination time can be reduced to about one quarter of the conventional time. 3. In the comparison of FWHM between the FBP and 3D OSEM reconstructions of the SPECT data acquired with the LEHR collimator, the FWHM approximately doubled with the 3D OSEM method. Conclusion: IQ SPECT uses the multiple confocal collimator for myocardial perfusion SPECT to enhance sensitivity, reduces examination time, and improves visual image quality through its myocardium-centered geometric acquisition and reconstruction methods. These benefits are expected to give patients more comfortable and more accurate examinations, and further study with additional clinical material is warranted.
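A brief sketch of how an FWHM value like those compared above can be computed from a line-source profile, using linear interpolation at the half-maximum crossings. The profile here is a synthetic Gaussian, not measured data.

```python
# Compute FWHM of a 1D line-source profile by interpolating the half-maximum
# crossings (synthetic Gaussian profile used as a stand-in for measured data).
import numpy as np

def fwhm(x, profile):
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    i_left, i_right = above[0], above[-1]

    # Linear interpolation on each side of the peak for sub-pixel crossings.
    def cross(i0, i1):
        x0, x1 = x[i0], x[i1]
        y0, y1 = profile[i0], profile[i1]
        return x0 + (half - y0) * (x1 - x0) / (y1 - y0)

    left = cross(i_left - 1, i_left)
    right = cross(i_right, i_right + 1)
    return right - left

x = np.linspace(-30, 30, 601)                      # position along profile, mm
sigma = 4.0
profile = np.exp(-x**2 / (2 * sigma**2))           # toy line-spread function
print(f"FWHM = {fwhm(x, profile):.2f} mm")         # ~2.355 * sigma ≈ 9.42 mm
```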


Development of Neural Network Based Cycle Length Design Model Minimizing Delay for Traffic Responsive Control (실시간 신호제어를 위한 신경망 적용 지체최소화 주기길이 설계모형 개발)

  • Lee, Jung-Youn;Kim, Jin-Tae;Chang, Myung-Soon
    • Journal of Korean Society of Transportation / v.22 no.3 s.74 / pp.145-157 / 2004
  • The cycle length design model of the Korean traffic-responsive signal control system is devised to vary the cycle length in response to changes in traffic demand in real time, using parameters specified by a system operator together with field information such as the degrees of saturation of the through phases. Since no explicit guideline is provided to the operator, the system tends to be ambiguous in terms of system optimization; moreover, the cycle lengths produced by the existing model have not yet been verified to be comparable to those minimizing delay. This paper presents studies conducted (1) to find shortcomings in the existing model by comparing its cycle lengths against delay-minimizing ones and (2) to propose a new way to design a delay-minimizing cycle length that excludes such operator-dependent parameters. The study found that the cycle lengths from the existing model fail to minimize delay and lead to unsatisfactory intersection operating conditions when traffic volume is low, owing to the changed target operational volume-to-capacity ratio embedded in the model. Sixty-four different neural-network-based cycle length design models were developed from simulation data serving as a surrogate for field data. The CORSIM optimal cycle lengths minimizing delay were found with the COST software developed for this study, which searches for the delay-minimizing CORSIM cycle length with a heuristic search method, a hybrid genetic algorithm. Among the 64 models, the one producing cycle lengths closest to the optimum was selected through statistical tests. Verification showed that the best model designs cycle lengths following a pattern similar to the delay-minimizing ones, and the cycle lengths from the proposed model are comparable to those from TRANSYT-7F.
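A toy version of the idea behind those candidate models: a small neural network regressor trained to map observed traffic conditions to a delay-minimizing cycle length. The feature set and the synthetic targets are placeholder assumptions standing in for the CORSIM/COST results.

```python
# Toy neural-network cycle length design model: traffic conditions in,
# delay-minimizing cycle length (seconds) out. Synthetic data stands in for
# the CORSIM-optimized targets described in the abstract.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n = 800
# Hypothetical inputs: approach volumes (veh/h) and degrees of saturation.
volumes = rng.uniform(200, 1800, size=(n, 4))
saturation = rng.uniform(0.2, 1.0, size=(n, 2))
X = np.hstack([volumes, saturation])

# Placeholder "optimal" cycle length loosely increasing with demand (60-160 s).
y = np.clip(40 + 0.05 * volumes.sum(axis=1) / 4 + 60 * saturation.mean(axis=1)
            + rng.normal(0, 5, n), 60, 160)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16),
                                   max_iter=3000, random_state=1))
model.fit(X, y)

# Query: current volumes and saturations -> recommended cycle length.
sample = np.array([[900, 1100, 700, 1300, 0.75, 0.65]])
print(f"recommended cycle length: {model.predict(sample)[0]:.0f} s")
```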

Development of the forecasting model for import volume by item of major countries based on economic, industrial structural and cultural factors: Focusing on the cultural factors of Korea (경제적, 산업구조적, 문화적 요인을 기반으로 한 주요 국가의 한국 품목별 수입액 예측 모형 개발: 한국의, 한국에 대한 문화적 요인을 중심으로)

  • Jun, Seung-pyo;Seo, Bong-Goon;Park, Do-Hyung
    • Journal of Intelligence and Information Systems / v.27 no.4 / pp.23-48 / 2021
  • The Korean economy has achieved continuous growth for the past several decades thanks to the government's export-driven strategy. This growth in exports plays a leading role in driving Korea's economic growth by improving economic efficiency, creating jobs, and promoting technology development. Traditionally, the main factors affecting Korea's exports have been viewed from two perspectives: economic factors and industrial-structural factors. First, economic factors relate to exchange rates and global economic fluctuations; the impact of the exchange rate on Korea's exports depends on the exchange rate level and its volatility, while global economic fluctuations affect global import demand, an absolute factor influencing Korea's exports. Second, industrial-structural factors are characteristics that arise in particular industries or products, such as the slowing international division of labor, increased domestic substitution of certain imported goods in China, and changes in the overseas production patterns of major export industries. Recent studies on global exchange also show the importance of cultural factors alongside economic and industrial-structural ones. This study therefore attempted to develop a forecasting model that considers cultural factors together with economic and industrial-structural factors in estimating each country's imports from Korea. In particular, the study approaches the influence of cultural factors on imports of Korean products through a PUSH-PULL framework. The PUSH dimension captures how actively Korea develops and promotes its own brand, and can be defined as each country's degree of interest in Korean brands represented by K-POP, K-FOOD, and K-CULTURE. The PULL dimension centers on the cultural and psychological characteristics of the people of each country, and can be defined as how inclined they are to accept the Korean Wave as part of their cultural code, represented by the country's governance system, masculinity, risk avoidance, and short-term/long-term orientation. A distinctive feature of this study is that the final prediction model is selected based on explicit design principles: 1) the model reflects interest in Korea and cultural characteristics through newly added data sources; 2) it is designed to be practical and convenient, so that a forecast can be recalculated immediately by entering changes in economic factors, item code, and country code; and 3) to derive theoretically meaningful results, an algorithm was selected that can interpret the relationship between the inputs and the target variable. The study offers meaningful technical, economic, and policy implications and is expected to contribute to the export support strategies of small and medium-sized enterprises through the proposed import forecasting model.
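A schematic illustration of combining economic, industrial-structural, and PUSH/PULL cultural features in an interpretable regressor, as the design principles describe. The feature names and data are invented placeholders; the study's actual variables, sources, and chosen algorithm are not reproduced here.

```python
# Illustrative interpretable import-forecasting model mixing economic,
# structural, and cultural (PUSH/PULL) features. All data are placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    # economic factors
    "exchange_rate_level": rng.uniform(900, 1400, n),
    "global_import_demand": rng.uniform(80, 120, n),
    # industrial-structural factor (toy index)
    "overseas_production_share": rng.uniform(0, 0.6, n),
    # PUSH: interest in Korean brands (e.g. a search-volume-style index)
    "kbrand_interest": rng.uniform(0, 100, n),
    # PULL: cultural receptiveness (e.g. Hofstede-style indices)
    "uncertainty_avoidance": rng.uniform(20, 90, n),
    "long_term_orientation": rng.uniform(20, 90, n),
})
# Toy target: import volume from Korea for one item-country pair.
y = (0.5 * df["global_import_demand"] + 0.3 * df["kbrand_interest"]
     - 20 * df["overseas_production_share"] + rng.normal(0, 5, n))

model = GradientBoostingRegressor(random_state=0).fit(df, y)

# Interpretation step the design principles call for: which inputs matter.
importance = pd.Series(model.feature_importances_, index=df.columns)
print(importance.sort_values(ascending=False))

# Immediate recalculation of a forecast after changing an economic input.
scenario = df.iloc[[0]].copy()
scenario["exchange_rate_level"] = 1300.0
print("forecast:", model.predict(scenario)[0])
```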