• Title/Summary/Keyword: layer approach

Search results: 1,226

History of Disease Control of Korean Ginseng over the Past 50 Years (과거 50년간 고려인삼 병 방제 변천사)

  • Dae-Hui Cho
    • Journal of Ginseng Culture
    • /
    • v.6
    • /
    • pp.51-79
    • /
    • 2024
  • In the 1970s and 1980s, during the nascent phase of ginseng disease research, efforts concentrated on isolating and identifying pathogens. Subsequently, their physiological ecology and pathogenic characteristics were scrutinized. This led to the establishment of a comprehensive control approach for safeguarding against major aerial-part diseases such as Alternaria blight, anthracnose, and Phytophthora blight, along with underground-part diseases such as Rhizoctonia seedling damping-off, Pythium seedling damping-off, and Sclerotinia white rot. In the 1980s, the sunshade was changed from traditional rice straw to polyethylene (PE) net. From 1987 to 1989, research focused on enhancing disease control methods. Notably, the introduction of a four-layer woven PE light-shading net minimized rainwater leakage, curbing the occurrence of Alternaria blight. Since 1990, identification of the bacterial soft stem rot pathogen has facilitated the establishment of a flower-stem removal method to mitigate outbreaks. Concurrently, efforts were directed towards identifying the root rot pathogens causing continuous cropping failure, employing soil fumigation and soil-filling methods for sustainable use of crop land. In the 2000s, adapting to rapid climate change became imperative, prompting modifications and supplements to control methods. New approaches were devised, including a crop-protection-agent method for Alternaria stem blight triggered by excessive rainfall during sprouting and a control method for gray mold. A comprehensive plan to enhance control methods for Rhizoctonia seedling damping-off and Rhizoctonia damping-off was also devised. Over the past 50 years, the initial emphasis was on understanding the causes and control of ginseng diseases, followed by refining established control methods.
Drawing on these findings, future ginseng cultivation and disease control methods should be innovatively developed to proactively address evolving factors such as climate fluctuations, diminishing cultivation areas, escalating labor costs, and heightened consumer safety awareness.

A Study on Sea Surface Temperature Changes in South Sea (Tongyeong coast), South Korea, Following the Passage of Typhoon KHANUN in 2023 (2023년 태풍 카눈 통과에 따른 한국 남해 통영해역 수온 변동 연구)

  • Jae-Dong Hwang;Ji-Suk Ahn;Ju-Yeon Kim;Hui-Tae Joo;Byung-Hwa Min;Ki-Ho Nam;Si-Woo Lee
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.30 no.1
    • /
    • pp.13-19
    • /
    • 2024
  • An analysis of coastal water temperature in the Tongyeong waters, in the eastern part of the South Sea of Korea, revealed that the water temperature rose sharply before the typhoon made landfall. The rise occurred throughout the entire water column. An analysis of sea surface temperature data observed by NOAA (National Oceanic and Atmospheric Administration) satellites indicated that seawater with a temperature of 30℃ existed in the eastern waters of the eastern South Sea of Korea before the typhoon landed. The southeastern sea of Korea is an area where ocean currents prevail from west to east owing to the Tsushima Warm Current. However, the satellite data showed that seawater at 30℃ moved from east to west, indicating that it was affected by Ekman transport driven by the typhoon before landfall. In addition, because the eastern waters of the South Sea are not as deep as those of the East Sea, the temperature of the entire water column can remain nearly uniform owing to wind-driven vertical mixing. Because the temperature rise in each water layer occurred on the same day, the rise in bottom water temperature can be attributed to vertical mixing. Indeed, the southeastern sea of Korea is an area where the water temperature can rise rapidly depending on the direction of approach of a typhoon and the location where high temperatures form.
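The Ekman-transport reasoning above can be sketched numerically. This is a minimal illustration, not the paper's calculation; the wind-stress value and latitude below are assumed purely for illustration. It shows why, in the Northern Hemisphere, the depth-integrated transport is directed 90° to the right of the surface wind stress:

```python
import math

RHO = 1025.0       # seawater density, kg/m^3 (assumed)
OMEGA = 7.2921e-5  # Earth's rotation rate, rad/s

def ekman_transport(tau_x, tau_y, lat_deg):
    """Depth-integrated Ekman transport (m^2/s per unit width).

    In the Northern Hemisphere the transport vector is 90 degrees to
    the right of the surface wind stress (tau_x, tau_y in N/m^2).
    """
    f = 2.0 * OMEGA * math.sin(math.radians(lat_deg))  # Coriolis parameter
    return tau_y / (RHO * f), -tau_x / (RHO * f)

# Illustrative case near Tongyeong (~34.8 N): a southward wind stress
# (tau_y < 0) drives westward transport (Mx < 0), consistent with warm
# surface water being pushed from east to west ahead of the storm.
mx, my = ekman_transport(0.0, -0.2, 34.8)
print(f"Mx = {mx:.3f} m^2/s, My = {my:.3f} m^2/s")
```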

Corporate Bond Rating Using Various Multiclass Support Vector Machines (다양한 다분류 SVM을 적용한 기업채권평가)

  • Ahn, Hyun-Chul;Kim, Kyoung-Jae
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.157-178
    • /
    • 2009
  • Corporate credit rating is a very important factor in the market for corporate debt. Information concerning corporate operations is often disseminated to market participants through the changes in credit ratings that are published by professional rating agencies, such as Standard and Poor's (S&P) and Moody's Investor Service. Since these agencies generally require a large fee for the service, and the periodically provided ratings sometimes do not reflect the default risk of the company at the time, it may be advantageous for bond-market participants to be able to classify credit ratings before the agencies actually publish them. As a result, it is very important for companies (especially, financial companies) to develop a proper model of credit rating. From a technical perspective, the credit rating constitutes a typical, multiclass, classification problem because rating agencies generally have ten or more categories of ratings. For example, S&P's ratings range from AAA for the highest-quality bonds to D for the lowest-quality bonds. The professional rating agencies emphasize the importance of analysts' subjective judgments in the determination of credit ratings. However, in practice, a mathematical model that uses the financial variables of companies plays an important role in determining credit ratings, since it is convenient to apply and cost efficient. These financial variables include the ratios that represent a company's leverage status, liquidity status, and profitability status. Several statistical and artificial intelligence (AI) techniques have been applied as tools for predicting credit ratings. Among them, artificial neural networks are most prevalent in the area of finance because of their broad applicability to many business problems and their preeminent ability to adapt. 
However, artificial neural networks also have many defects, including the difficulty of determining the values of the control parameters and the number of processing elements in each layer, as well as the risk of over-fitting. Of late, because of their robustness and high accuracy, support vector machines (SVMs) have become popular as a solution for problems requiring accurate prediction. An SVM's solution may be globally optimal because SVMs seek to minimize structural risk, whereas artificial neural network models tend to find locally optimal solutions because they seek to minimize empirical risk. In addition, no parameters need to be tuned in SVMs, barring the upper bound for non-separable cases in linear SVMs. However, since SVMs were originally devised for binary classification, they are not intrinsically geared for multiclass classification tasks such as credit rating. Thus, researchers have tried to extend the original SVM to multiclass classification. Hitherto, a variety of techniques for extending standard SVMs to multiclass SVMs (MSVMs) has been proposed in the literature, but only a few types of MSVM have been tested in prior studies applying MSVMs to credit rating. In this study, we examined six different MSVM techniques: (1) One-Against-One, (2) One-Against-All, (3) DAGSVM, (4) ECOC, (5) the method of Weston and Watkins, and (6) the method of Crammer and Singer. In addition, we examined the prediction accuracy of some modified versions of conventional MSVM techniques. To find the most appropriate MSVM technique for corporate bond rating, we applied all of these techniques to a real-world case of credit rating in Korea. Corporate bond rating is the most frequently studied area of credit rating for specific debt issues or other financial obligations. For our study, the research data were collected from National Information and Credit Evaluation, Inc., a major bond-rating company in Korea.
The data set comprises the bond ratings for the year 2002 and various financial variables for 1,295 companies from the manufacturing industry in Korea. We compared the results of these techniques with one another, and with those of traditional methods for credit rating, such as multiple discriminant analysis (MDA), multinomial logistic regression (MLOGIT), and artificial neural networks (ANNs). As a result, we found that DAGSVM with an ordered list was the best approach for predicting bond ratings. In addition, we found that the modified version of the ECOC approach can yield higher prediction accuracy for cases showing clear patterns.
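As a rough illustration of the multiclass strategies compared above, the sketch below contrasts One-Against-One and One-Against-All SVMs using scikit-learn. It is not the paper's implementation: the features are synthetic stand-ins for the financial ratios, and the four classes stand in for rating grades.

```python
# Contrast two of the six multiclass-SVM strategies the study compares:
# One-Against-One (SVC trains all pairwise classifiers internally) and
# One-Against-All (OneVsRestClassifier trains one classifier per class).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the financial-ratio features; the real study
# used 1,295 Korean manufacturing firms rated in 2002.
X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ovo = SVC(kernel="rbf", C=1.0, decision_function_shape="ovo").fit(X_tr, y_tr)
ova = OneVsRestClassifier(SVC(kernel="rbf", C=1.0)).fit(X_tr, y_tr)

acc_ovo = ovo.score(X_te, y_te)
acc_ova = ova.score(X_te, y_te)
print(f"One-Against-One accuracy: {acc_ovo:.3f}")
print(f"One-Against-All accuracy: {acc_ova:.3f}")
```

On real rating data the relative ranking of these strategies is an empirical question, which is exactly what the study evaluates.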

User-Perspective Issue Clustering Using Multi-Layered Two-Mode Network Analysis (다계층 이원 네트워크를 활용한 사용자 관점의 이슈 클러스터링)

  • Kim, Jieun;Kim, Namgyu;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.93-107
    • /
    • 2014
  • In this paper, we report what we have observed with regard to user-perspective issue clustering based on multi-layered two-mode network analysis. This work is significant in the context of data collection by companies about customer needs. Most companies have failed to uncover such needs for products or services properly using demographic data such as age, income level, and purchase history. Because of excessive reliance on limited internal data, most recommendation systems do not provide decision makers with appropriate business information for current business circumstances. Compounding the problem is the increasing regulation of personal data gathering and privacy. This makes demographic or transaction data collection more difficult, and is a significant hurdle for traditional recommendation approaches because these systems demand a great deal of personal data or transaction logs. Our motivation for presenting this paper is our strong belief, and evidence, that most customers' requirements for products can be effectively and efficiently analyzed from unstructured textual data such as Internet news text. In order to derive users' requirements from textual data obtained online, the approach proposed in this paper constructs two two-mode networks, a user-news network and a news-issue network, and integrates them into one quasi-network as the input for issue clustering. One contribution of this research is a methodology that utilizes enormous amounts of unstructured textual data for user-oriented issue clustering by leveraging existing text mining and social network analysis. In order to build multi-layered two-mode networks from news logs, we need tools such as text mining and topic analysis. We used SAS Enterprise Miner 12.1, which provides text-miner and cluster modules for textual data analysis, as well as NetMiner 4 for network visualization and analysis.
Our approach to user-perspective issue clustering is composed of six main phases: crawling, topic analysis, access pattern analysis, network merging, network conversion, and clustering. In the first phase, we collect visit logs for news sites with a crawler. After gathering unstructured news article data, the topic analysis phase extracts issues from each news article in order to build an article-issue network. For simplicity, 100 topics are extracted from 13,652 articles. In the third phase, a user-article network is constructed from access patterns derived from web transaction logs. The double two-mode networks are then merged into a user-issue quasi-network. Finally, in the user-oriented issue-clustering phase, we classify issues through structural equivalence and compare the results with clustering results from statistical tools and network analysis. An experiment with a large dataset was performed to build a multi-layered two-mode network, after which we compared the issue-clustering results from SAS with those from network analysis. The experimental dataset came from a website-ranking service and the biggest portal site in Korea. The sample dataset contains 150 million transaction logs and 13,652 news articles from 5,000 panel users over one year. User-article and article-issue networks were constructed and merged into a user-issue quasi-network using NetMiner. Our issue clustering applied the Partitioning Around Medoids (PAM) algorithm and Multidimensional Scaling (MDS), and its results are consistent with the results from SAS clustering. In spite of extensive efforts to provide user information with recommendation systems, most projects succeed only when companies have sufficient data about users and transactions. Our proposed methodology, user-perspective issue clustering, can provide practical support to decision-making in companies because it enriches user-related data with unstructured textual data.
To overcome the problem of insufficient data from traditional approaches, our methodology infers customers' real interests by utilizing web transaction logs. In addition, we suggest topic analysis and issue clustering as a practical means of issue identification.
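The network-merging step described above can be sketched with toy matrices: the two biadjacency matrices (user-article and article-issue) are multiplied to obtain the user-issue quasi-network. This is an illustrative sketch with made-up data, not the authors' NetMiner/SAS workflow:

```python
import numpy as np

# user-article network: rows = 4 users, cols = 5 articles
# (1 = the user visited the article)
UA = np.array([[1, 1, 0, 0, 0],
               [0, 1, 1, 0, 0],
               [0, 0, 0, 1, 1],
               [0, 0, 0, 0, 1]])

# article-issue network: rows = 5 articles, cols = 3 issues
# (1 = the article covers the issue)
AI = np.array([[1, 0, 0],
               [1, 0, 0],
               [0, 1, 0],
               [0, 1, 1],
               [0, 0, 1]])

# user-issue quasi-network: entry (u, i) counts the articles linking
# user u to issue i; issues can then be clustered by the structural
# equivalence of their columns (the paper uses PAM and MDS here).
UI = UA @ AI
print(UI)
```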

Recent research activities on hybrid rocket in Japan

  • Harunori, Nagata
    • Proceedings of the Korean Society of Propulsion Engineers Conference
    • /
    • 2011.04a
    • /
    • pp.1-2
    • /
    • 2011
  • Hybrid rockets have lately attracted attention as strong candidates for small, low-cost, safe, and reliable launch vehicles. A notable example is that SpaceShipOne, the first commercially sponsored spaceship, chose a hybrid rocket. The main factors in that choice were safety of operation, system cost, quick turnaround, and thrust termination. In Japan, five universities including Hokkaido University and three private companies organized the "Hybrid Rocket Research Group" from 1998 to 2002. Their main purpose was to reduce the cost and scale of rocket experiments. In 2002, UNISEC (University Space Engineering Consortium) and HASTIC (Hokkaido Aerospace Science and Technology Incubation Center) took over the educational and R&D rocket activities, respectively, and the research group dissolved. In 2008, JAXA/ISAS and eleven universities formed the "Hybrid Rocket Research Working Group" as a subcommittee of the Steering Committee for Space Engineering in ISAS. Their goal is to demonstrate the technical feasibility of low-cost, high-frequency launches of nano/micro satellites into sun-synchronous orbits. Hybrid rockets use a combination of solid and liquid propellants; usually the fuel is in the solid phase. A serious problem of hybrid rockets is the low regression rate of the solid fuel. In single-port hybrids, a regression rate below 1 mm/s leads to an L/D exceeding one hundred and a fuel loading ratio falling below 0.3. Multi-port hybrids are a typical solution to this problem, but they are not the mainstream in Japan. Another approach is to use high-regression-rate fuels. For example, a fuel regression rate of 4 mm/s decreases L/D to around 10 and increases the loading ratio to around 0.75. Liquefying fuels such as paraffins are strong candidates for high-regression fuels and the subject of active research in Japan as well. Nakagawa et al.
at Tokai University employed EVA (ethylene vinyl acetate) to modify the viscosity of paraffin-based fuels and investigated the effect of viscosity on regression rates. Wada et al. at Akita University employed LTP (low-melting thermoplastic) as another candidate liquefying fuel and demonstrated high regression rates comparable to those of paraffin fuels. Hori et al. at JAXA/ISAS employed glycidyl azide polymer-poly(ethylene glycol) (GAP-PEG) copolymers as high-regression-rate fuels and modified their combustion characteristics by changing the PEG mixing ratio. Improving the regression rate by changing the internal ballistics is another stream of research. The author proposed a new fuel configuration named "CAMUI" in 1998. CAMUI is an abbreviation of "cascaded multistage impinging-jet," describing the distinctive flow field. A CAMUI-type fuel grain consists of several cylindrical fuel blocks, each with two ports in the axial direction. The port alignment of adjacent blocks is rotated 90 degrees so that the jets from the ports impinge on the upstream end face of the downstream fuel block, resulting in intense heat transfer to the fuel. Yuasa et al. at Tokyo Metropolitan University employed a swirling injection method and improved regression rates by more than a factor of three. However, the regression rate distribution along the axis is not uniform owing to the decay of the swirl strength. Aso et al. at Kyushu University employed multi-swirl injection to solve this problem. Combinations of swirling injection and paraffin-based fuels have been tried, and some results show very high regression rates exceeding ten times the conventional value. High fuel regression rates achieved with new fuels, new internal ballistics, or a combination of the two require faster fuel-oxidizer mixing to maintain combustion efficiency. Nakagawa et al. succeeded in improving the combustion efficiency of a paraffin-based fuel from 77% to 96% with a baffle plate. Another effective approach some researchers are trying is to use an aft-chamber to increase residence time.
A better understanding of the new flow fields is necessary to reveal the basic mechanisms of regression enhancement. Yuasa et al. visualized the combustion field in a swirling-injection-type motor. Nakagawa et al. observed the boundary layer combustion of wax-based fuels. To understand the detailed flow structures in swirling-flow-type hybrids, Sawada et al. (Tohoku Univ.), Teramoto et al. (Univ. of Tokyo), Shimada et al. (ISAS), and Tsuboi et al. (Kyushu Inst. Tech.) are attempting to simulate the flow field numerically. The main challenges are turbulent reaction, stiffness due to low-Mach-number flow, the fuel regression model, and other non-steady phenomena. Oshima et al. at Hokkaido University simulated CAMUI-type flow fields and discussed the correspondence between the regression distribution over the burning surface and the vortex structure above it.
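The link between regression rate and port L/D mentioned above can be illustrated with a back-of-envelope model. This is not from the paper, and every number below (fuel density, required fuel flow, port diameter) is an assumption for illustration: in a single-port grain the fuel mass flow is mdot_f = rho_f * rdot * pi * D * L, so for a fixed required fuel flow and diameter the needed port length, and hence L/D, scales inversely with the regression rate.

```python
import math

def port_length(mdot_f, rho_f, rdot, d):
    """Port length (m) needed to supply fuel mass flow mdot_f (kg/s)
    at regression rate rdot (m/s) for a port of diameter d (m),
    assuming mdot_f = rho_f * rdot * (pi * d * L)."""
    return mdot_f / (rho_f * rdot * math.pi * d)

RHO_F = 920.0   # kg/m^3, paraffin-like fuel density (illustrative)
MDOT_F = 0.5    # kg/s, required fuel mass flow (illustrative)
D = 0.08        # m, port diameter (illustrative)

# Quadrupling the regression rate cuts the required L/D by a factor
# of four, mirroring the trend described in the abstract.
for rdot_mm in (1.0, 4.0):
    L = port_length(MDOT_F, RHO_F, rdot_mm * 1e-3, D)
    print(f"rdot = {rdot_mm} mm/s -> L/D ~ {L / D:.0f}")
```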


A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference
    • /
    • 2003.07a
    • /
    • pp.60-61
    • /
    • 2003
  • A new approach to reducing the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms having a diffraction efficiency of 75.8% and a uniformity of 5.8% are proven in computer simulation and demonstrated experimentally. Recently, computer-generated holograms (CGHs) having high diffraction efficiency and flexibility of design have been widely developed for many applications such as optical information processing, optical computing, and optical interconnection. Among the proposed optimization methods, the GA has become popular due to its capability of reaching nearly global optima. However, there exists a drawback to consider when using the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One of the major reasons the GA's operation can be time-intensive is the expense of computing the cost function, which must Fourier transform the parameters encoded on the hologram into the fitness value. To remedy this drawback, artificial neural networks (ANNs) have been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications requiring high precision. For that reason, we attempt to find a new approach combining the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is a process of iteration, including selection, crossover, and mutation operators [2]. It is worth noting that the evaluation of the cost function, with the aim of selecting better holograms, plays an important role in the implementation of the GA. However, this evaluation process wastes much time Fourier transforming the parameters encoded on the hologram into the value to be solved. Depending on the speed of the computer, this process can last up to ten minutes.
It is more effective if, instead of merely generating random holograms in the initial step, a set of approximately desired holograms is employed. By doing so, the initial population contains fewer trial holograms, with an equivalent reduction in the GA's computation time. Accordingly, a hybrid algorithm that uses a trained neural network to initiate the GA's procedure is proposed. Consequently, the initial population contains fewer random holograms and is compensated by approximately desired ones. Figure 1 is the flowchart of the hybrid algorithm in comparison with the classical GA. The procedure of synthesizing a hologram on a computer is divided into two steps. First, the simulation of holograms based on the ANN method [1] is carried out to acquire approximately desired holograms. With a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained artificial neural network enables us to attain approximately desired holograms in fairly good agreement with what the theory suggests. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the operation of the hybrid algorithm and the GA are the same except for the modified initial step. Hence, the values verified in Ref. [2] for parameters such as the probabilities of crossover and mutation, the tournament size, and the crossover block size remain unchanged, aside from the reduced population size. A reconstructed image of 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the iteration number is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is also evaluated, as shown in Fig. 2.
With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared to the GA method, the hybrid algorithm demonstrates its efficient performance. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows pictures of the diffracted patterns of the letter "0" from holograms generated using the hybrid algorithm. A diffraction efficiency of 75.8% and a uniformity of 5.8% are measured. The simulation and experimental results are in fairly good agreement with each other. In this paper, the genetic algorithm and the neural network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared to the GA method while still achieving holograms of high diffraction efficiency and uniformity. This work was supported by grant No. M01-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.
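The core idea of the hybrid algorithm, seeding the GA's initial population with approximately desired solutions so fewer random individuals and generations are needed, can be sketched as follows. This toy sketch replaces both the ANN and the Fourier-transform cost with simple stand-ins (a noisy copy of a target bit pattern, and a bit-match fitness), so it illustrates only the seeding mechanism, not the paper's hologram synthesis:

```python
import random

random.seed(0)
N = 64                                    # hologram as N binary phase pixels
TARGET = [random.randint(0, 1) for _ in range(N)]

def fitness(h):
    """Stand-in cost: fraction of pixels matching the target pattern."""
    return sum(a == b for a, b in zip(h, TARGET)) / N

def seeded_population(size, n_seeds):
    """n_seeds individuals come from an 'ANN' approximation (here: the
    target with ~20% of pixels flipped); the rest are random."""
    pop = [[b ^ (random.random() < 0.2) for b in TARGET] for _ in range(n_seeds)]
    pop += [[random.randint(0, 1) for _ in range(N)] for _ in range(size - n_seeds)]
    return pop

def evolve(pop, generations=50):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]        # truncation selection (elitist)
        children = []
        for _ in range(len(pop) - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(N)          # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(N)] ^= 1    # point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve(seeded_population(30, 10))
print(f"best fitness: {fitness(best):.2f}")
```

Because ten of the thirty starting individuals are already close to the target, the search begins near a good region of the space, which is exactly the time saving the hybrid scheme exploits.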


Estimation of Addition and Removal Processes of Nutrients from Bottom Water in the Saemangeum Salt-Water Lake by Using Mixing Model (혼합모델을 이용한 새만금호 저층수 내 영양염의 공급과 제거에 관한 연구)

  • Jeong, Yong Hoon;Kim, Chang Shik;Yang, Jae Sam
    • Journal of the Korean Society for Marine Environment & Energy
    • /
    • v.17 no.4
    • /
    • pp.306-317
    • /
    • 2014
  • This study was executed to understand the addition and removal processes of nutrients in the Saemangeum Salt-water Lake, discussed together with other monthly collected environmental parameters such as water temperature, salinity, dissolved oxygen, suspended solids, and Chl-a from 2008 to 2010. NO3-N, TP, PO4-P, and DISi showed removal processes along the salinity gradients in the surface water of the lake, whereas NO2-N, NH4-N, and Chl-a showed an addition trend. In the bottom water, all water quality parameters except NO3-N showed addition processes, indicating continuous nutrient supply into the bottom layer. The mixing-model approach revealed that the biogeochemical processes in the lake consume NO3-N and consequently add NH4-N and PO4-P to the bottom water during the summer seasons. NH4-N and PO4-P showed a strong increase in the bottom water on the river side of the lake, and a strong concentration gradient of dissolved oxygen also appeared at the same time. DISi exhibited continuous seasonal supply from spring to summer. Internal addition of NH4-N and PO4-P on the river side of the lake was much higher than on the dike side, while the increase of DISi was at a similar level on both the dike and river sides. The temporal distribution of the benthic flux of DISi indicates that the addition of nutrients to the bottom water was strongly affected by other sources, for example, submarine groundwater discharge (SGD) through the bottom sediment.
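The mixing-model idea used above can be sketched as a two-endmember conservative mixing calculation. The endmember salinities and concentrations below are purely illustrative, not the study's values: a nutrient's conservative concentration is interpolated between the river and seawater endmembers along the salinity gradient, and observed values above that line indicate internal addition, below it removal.

```python
def conservative_mix(salinity, s_river, c_river, s_sea, c_sea):
    """Expected concentration at a given salinity if mixing were the
    only process (no biogeochemical addition or removal)."""
    frac_sea = (salinity - s_river) / (s_sea - s_river)
    return c_river + frac_sea * (c_sea - c_river)

def anomaly(observed, salinity, s_river=0.0, c_river=50.0,
            s_sea=32.0, c_sea=5.0):
    """Deviation from the conservative mixing line.
    Positive -> internal addition (e.g. benthic flux or SGD);
    negative -> removal (e.g. biological uptake)."""
    return observed - conservative_mix(salinity, s_river, c_river,
                                       s_sea, c_sea)

# At salinity 16 the conservative value is midway between the
# endmembers (27.5); an observed 35.0 implies addition, 20.0 removal.
print(anomaly(35.0, 16.0))   # positive
print(anomaly(20.0, 16.0))   # negative
```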

Acoustic images of the submarine fan system of the northern Kumano Basin obtained during the experimental dives of the Deep Sea AUV URASHIMA (심해 자율무인잠수정 우라시마의 잠항시험에서 취득된 북 구마노 분지 해저 선상지 시스템의 음향 영상)

  • Kasaya, Takafumi;Kanamatsu, Toshiya;Sawa, Takao;Kinosita, Masataka;Tukioka, Satoshi;Yamamoto, Fujio
    • Geophysics and Geophysical Exploration
    • /
    • v.14 no.1
    • /
    • pp.80-87
    • /
    • 2011
  • Autonomous underwater vehicles (AUVs) present the important advantage of being able to approach the seafloor more closely than surface vessel surveys can. To collect bathymetric data, bottom material information, and sub-surface images, the multibeam echosounder, sidescan sonar (SSS), and subbottom profiler (SBP) equipment mounted on an AUV are powerful tools. The 3000 m class AUV URASHIMA was developed by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC). After the engineering development and examination phase of a fuel-cell system used for the vehicle's power supply was finished, a renovated lithium-ion battery power system was installed in URASHIMA, and the AUV was redeployed from its prior engineering tasks to scientific use. Various scientific instruments were loaded on the vehicle, and experimental dives for science-oriented missions were conducted from 2006. During the experimental cruise of 2007, high-resolution acoustic images were obtained by the SSS and SBP on URASHIMA around the northern Kumano Basin off Japan's Kii Peninsula. The map of backscatter intensity data revealed many debris objects, and SBP images revealed the subsurface structure around the north-eastern end of our study area. These features suggest a structure related to the formation of the latest submarine fan. However, a strong reflection layer exists at ~20 ms below the seafloor in the south-western area, which we interpret as a denudation feature, now covered with younger surface sediments. We continue to improve the vehicle's performance and expect that many fruitful results will be obtained using URASHIMA.

TREATMENT OF MALOCCLUSION, AS RELATED TO FINGER SUCKING : CASE REPORT (손가락 빨기로 인한 부정교합의 치험례)

  • Moon, Sang-Jin;Choi, Yeong-Chul
    • Journal of the korean academy of Pediatric Dentistry
    • /
    • v.31 no.1
    • /
    • pp.1-10
    • /
    • 2004
  • The habit of finger sucking is a reflex occurring in the oral stage, due to nutritive and psychological desire. The habit is considered normal until 3 years of age. Its dento-skeletal effect on the maxillo-mandibular complex, including the occlusion, corrects naturally when the habit stops before age 3. If finger sucking continues until 3-4 years of age, it leads to severe malocclusion and a remarkable maxillo-mandibular discrepancy, for which natural correction is difficult to expect, and active treatment becomes necessary. Treatment of malocclusion related to finger sucking is classified into two methods: psychological approaches and orthodontic appliances. To stop the habit and to correct a severe skeletal discrepancy and malocclusion, the Fränkel appliance is a very effective device. This study reports two cases of treatment of malocclusion related to finger sucking. A girl aged 2 years 10 months with severe overjet, maxillo-mandibular skeletal discrepancy, and a severely convex facial profile was treated with an FR-II appliance. The finger sucking habit stopped immediately. After 16 months, the severe overjet, maxillo-mandibular skeletal discrepancy, and severely convex facial profile were corrected. A girl aged 4 years 2 months with midline deviation, mandibular right shift, unilateral posterior crossbite, and facial asymmetry was treated with an FR-III appliance. The finger sucking habit stopped immediately. After 10 months, the midline deviation, mandibular right shift, unilateral posterior crossbite, and facial asymmetry were corrected. The Fränkel appliance is recommendable as a habit breaker and for the correction of skeletal discrepancy.


An Experimental Study on Dynamic Behavior Evaluation of Transitional Track (접속부 궤도의 동적거동분석을 위한 실험적 연구)

  • Cho, Sung-Jung;Choi, Jung-Youl;Chun, Dae-Sung;Kim, Man-Cheol;Park, Yong-Gul
    • Proceedings of the KSR Conference
    • /
    • 2007.11a
    • /
    • pp.1379-1385
    • /
    • 2007
  • In domestic transitional-zone design, regulations exist to prevent the irregular substructure behaviors that hinder the prevention of plastic settlement in approach and contact sections, and to relieve abrupt changes in overall track rigidity by reducing the difference in foundation and track stiffness between sections. However, design guidance that considers the dynamic behavior of transitional track in actual service lines is very limited. Therefore, in this study, the characteristics of the dynamic behavior of transitional track according to substructure stiffness were investigated. The dynamic response of the transitional track was measured for different substructure stiffnesses in order to establish the correlation between substructure and track, and the elasticity (stiffness) and track load of the transitional track were calculated from the measurements and formulas, providing basic information for developing a design guideline that considers the dynamic behavior of transitional track in service lines.
