• Title/Summary/Keyword: recursive

Search Results: 1,608

Has Container Shipping Industry been Fixing Prices in Collusion?: A Korean Market Case

  • Jaewoong Yoon;Yunseok Hur
    • Journal of Korea Trade
    • /
    • v.27 no.1
    • /
    • pp.79-100
    • /
    • 2023
  • Purpose - The purpose of this study is to analyze the market power of the Korean container shipping market (Intra-Asia, Korea-Europe, and Korea-U.S.) to verify the existence of collusion empirically, and to answer whether the joint actions of liner market participants in Korea have formed market dominance on each route. Specifically, the Lerner index is used to determine whether each regional market is a monopoly, an oligopoly, or perfectly competitive. Design/methodology - This study used a Lerner index adjusted with elasticity, as presented in New Empirical Industrial Organization (NEIO) studies. NEIO refers to a series of empirical studies that estimate parameters from industrial data to judge market power. This study uses the B-L empirical models of Bresnahan (1982) and Lau (1982). In addition, because NEIO data are price and quantity time series, they statistically suffer from autocorrelation and non-stationarity problems; a dynamic model following Steen and Salvanes' Error Correction Model (ECM) was used to address this. Findings - The empirical results are as follows. First, λ, representing market power, is nearly zero in all three markets. Second, the Korean shipping market shows low demand elasticity on average; nevertheless, the markup is low, a combination rarely seen in other industries. Third, the Korean shipping market generally remained close to perfect competition from 2014 to 2022, although extreme market power appears in specific periods such as COVID-19. Fourth, there was no market power in the Intra-Asia market from 2008 to 2014. Originality/value - Doubts about perfect competition in the liner market have persisted, but empirical evidence has been scarce. This paper confirms that the Korean liner market is perfectly competitive. It is the first to implement dynamics using ECM and recursive regression to demonstrate market power in the Korean liner market, dividing the shipping market into Deep Sea and Intra-Asia segments.
It is also the first to address numerically and academically one of the most controversial issues in the current shipping industry.
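The Lerner index and the recursive (expanding-window) regression the abstract relies on can be illustrated with a small sketch; the freight-rate series and coefficients below are synthetic placeholders, not the paper's data or estimates:

```python
import numpy as np

def lerner_index(price, marginal_cost):
    """Lerner index L = (P - MC) / P; near zero under perfect competition."""
    return (price - marginal_cost) / price

def recursive_ols(y, X, min_window=8):
    """Expanding-window OLS: re-estimate the coefficients as each new
    observation arrives, the idea behind recursive regression diagnostics."""
    paths = []
    for t in range(min_window, len(y) + 1):
        beta, *_ = np.linalg.lstsq(X[:t], y[:t], rcond=None)
        paths.append(beta)
    return np.array(paths)

# Illustrative series (synthetic): intercept 2.0, slope 0.5, small noise
rng = np.random.default_rng(0)
n = 40
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([2.0, 0.5]) + 0.1 * rng.normal(size=n)
paths = recursive_ols(y, X)
print(lerner_index(100.0, 95.0))  # 0.05: close to competitive pricing
print(paths.shape)
```

Watching how the coefficient path stabilizes (or shifts) over time is what lets a recursive regression flag period-specific market power, such as the COVID-19 episode the paper reports.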

A novel adaptive unscented Kalman Filter with forgetting factor for the identification of the time-variant structural parameters

  • Yanzhe Zhang ;Yong Ding ;Jianqing Bu;Lina Guo
    • Smart Structures and Systems
    • /
    • v.32 no.1
    • /
    • pp.9-21
    • /
    • 2023
  • The parameters of civil engineering structures have time-variant characteristics during their service life. When extremely large external excitations, such as earthquakes acting on buildings or overweight vehicles on bridges, are applied to structures, sudden or gradual damage may be caused. It is crucial to detect the occurrence time and severity of such damage. The unscented Kalman filter (UKF), an efficient estimator, is usually used to conduct recursive identification of parameters. However, the conventional UKF algorithm has weak tracking ability for time-variant structural parameters. To improve the identification of time-variant parameters, an adaptive UKF with forgetting factor (AUKF-FF) algorithm is proposed, in which the state covariance, innovation covariance, and cross covariance are updated simultaneously with the help of the forgetting factor. To verify the effectiveness of the method, this paper conducted two case studies: the identification of time-variant parameters of a simply supported bridge during vehicle passage, and the model updating of a six-story concrete frame structure with field test data recorded during the Yangbi earthquake in Yunnan Province, China. The comparison results of the numerical studies show that the proposed method is superior to the conventional UKF algorithm for time-variant parameter identification in convergence speed, accuracy, and adaptability to the sampling frequency. The field test studies demonstrate that the proposed method can provide guidance for solving practical problems.
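The forgetting-factor idea behind the AUKF-FF can be illustrated on a linear toy problem: a scalar Kalman filter tracking a parameter that jumps abruptly (standing in for sudden damage). This is not the authors' UKF; the noise levels and the λ value are assumptions chosen for illustration:

```python
import numpy as np

def fading_memory_kf(z, q=1e-6, r=0.05, lam=1.05):
    """Scalar Kalman filter tracking a slowly varying parameter.
    lam > 1 is the forgetting factor: inflating the predicted covariance
    each step discounts old data, so the filter can follow abrupt
    parameter changes -- the core idea the AUKF-FF applies to the UKF's
    state, innovation, and cross covariances."""
    x, P = 0.0, 1.0
    est = []
    for zk in z:
        P = lam * P + q          # prediction with covariance inflation
        K = P / (P + r)          # Kalman gain
        x = x + K * (zk - x)     # measurement update
        P = (1 - K) * P
        est.append(x)
    return np.array(est)

# Parameter jumps from 1.0 to 2.0 halfway through (sudden damage)
rng = np.random.default_rng(1)
truth = np.concatenate([np.ones(100), 2 * np.ones(100)])
z = truth + 0.1 * rng.normal(size=200)
with_ff = fading_memory_kf(z, lam=1.05)
no_ff = fading_memory_kf(z, lam=1.0)
# Shortly after the jump, the fading-memory filter is much closer to truth
print(abs(with_ff[120] - 2.0) < abs(no_ff[120] - 2.0))
```

Without the forgetting factor the covariance shrinks toward zero and the gain vanishes, so the filter effectively stops learning; the inflation keeps the gain bounded away from zero.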

Force-deformation relationship prediction of bridge piers through stacked LSTM network using fast and slow cyclic tests

  • Omid Yazdanpanah;Minwoo Chang;Minseok Park;Yunbyeong Chae
    • Structural Engineering and Mechanics
    • /
    • v.85 no.4
    • /
    • pp.469-484
    • /
    • 2023
  • A deep recursive bidirectional Cuda Deep Neural Network Long Short-Term Memory (Bi-CuDNNLSTM) layer is employed in this paper to predict the entire force time histories, and the corresponding hysteresis and backbone curves, of reinforced concrete (RC) bridge piers using experimental fast and slow cyclic tests. The proposed stacked Bi-CuDNNLSTM layers involve multiple uncertain input variables, including horizontal actuator displacements, vertical actuator axial loads, the effective height of the bridge pier, the moment of inertia, and mass. The functional application programming interface in the Keras Python library is utilized to develop a deep learning model considering all the above input attributes. For a robust and reliable prediction, the dataset for both the fast and slow cyclic tests is split into three mutually exclusive subsets: training, validation, and testing (unseen). The whole dataset includes 17 RC bridge piers tested experimentally: ten for fast and seven for slow cyclic tests. The results show that the mean absolute error, used as the loss function, decreases monotonically toward zero for both the training and validation datasets after 5000 epochs, and a high level of correlation (more than 90%) is observed between the predicted and the experimentally measured force time histories for all the datasets. The maximum mean of the normalized error, obtained through box-whisker plots and the Gaussian distribution of the normalized error, associated with unseen data is about 10% and 3% for the fast and slow cyclic tests, respectively. In summary, the stacked Bi-CuDNNLSTM layer implemented in this study can substantially reduce the time and cost of conducting new fast and slow cyclic tests and provides fast and accurate insight into the hysteretic behavior of bridge piers.
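The recurrence any stacked (Bi-)LSTM builds on is a single LSTM cell. The NumPy sketch below shows one cell's forward step with hypothetical sizes (5 input features, 8 hidden units); it is a toy illustration of the mechanics, not the paper's Keras Bi-CuDNNLSTM model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step. Gates are computed jointly; rows of W/U/b
    are ordered [input, forget, cell candidate, output]."""
    n = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[:n])            # input gate
    f = sigmoid(z[n:2*n])         # forget gate
    g = np.tanh(z[2*n:3*n])       # candidate cell state
    o = sigmoid(z[3*n:])          # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Hypothetical sizes: 5 input features (displacement, axial load, ...),
# 8 hidden units; a real model stacks bidirectional layers of such cells.
rng = np.random.default_rng(2)
d, n = 5, 8
W = 0.1 * rng.normal(size=(4*n, d))
U = 0.1 * rng.normal(size=(4*n, n))
b = np.zeros(4*n)
h, c = np.zeros(n), np.zeros(n)
for t in range(10):               # unroll over a short input sequence
    h, c = lstm_step(rng.normal(size=d), h, c, W, U, b)
print(h.shape)  # (8,)
```

A bidirectional layer runs one such recurrence forward and another backward over the sequence and concatenates the two hidden states at each step.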

A study on the time-varying causal relationship between the housing sales market and the jeonse market in Seoul (서울 주택 매매시장과 전세시장의 시간가변적인 인과관계에 관한 연구)

  • Min, Chul hong;Park, Jinbaek
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.3
    • /
    • pp.281-286
    • /
    • 2023
  • This study analyzed the causal relationship between housing sales prices and jeonse prices in Seoul, specifically in the Gangnam and Gangbuk neighborhoods. The time-invariant Granger causality test showed bidirectional causality between the sales price and the jeonse price in Seoul and Gangbuk, but no bidirectional causality was found in Gangnam. However, the time-varying Granger causality test showed a Granger causal relationship between the housing jeonse price and the sales price for the entire period after 1993 in all three areas. Notably, the causal effect of jeonse prices on sales prices has been continuous in Gangnam since 2010. These analysis results suggest that an increase in liquidity supply to the jeonse market could increase volatility throughout the housing market, given the strong influence between the sales and jeonse markets in both directions.
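A Granger causality F-test of the kind the abstract applies can be sketched as below; a time-varying version simply re-runs the same test over rolling or recursive windows. The data here are synthetic, constructed so that x leads y:

```python
import numpy as np

def granger_f(y, x, p=2):
    """F-statistic testing whether lags of x Granger-cause y:
    compare an AR(p) model of y against the same model augmented
    with p lags of x."""
    n = len(y)
    Y = y[p:]
    lags_y = np.column_stack([y[p-k:n-k] for k in range(1, p+1)])
    lags_x = np.column_stack([x[p-k:n-k] for k in range(1, p+1)])
    ones = np.ones((n - p, 1))
    Xr = np.hstack([ones, lags_y])             # restricted model
    Xu = np.hstack([ones, lags_y, lags_x])     # unrestricted model
    rss = lambda M: np.sum((Y - M @ np.linalg.lstsq(M, Y, rcond=None)[0])**2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    df = len(Y) - Xu.shape[1]
    return ((rss_r - rss_u) / p) / (rss_u / df)

# Synthetic example: x leads y by one period, so x should "cause" y
rng = np.random.default_rng(3)
x = rng.normal(size=300)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.3 * y[t-1] + 0.8 * x[t-1] + 0.1 * rng.normal()
print(granger_f(y, x) > granger_f(x, y))  # causality runs x -> y
```

Comparing the F-statistic to its critical value window by window is what produces the time-varying causality picture, e.g. a jeonse-to-sales effect that switches on after 2010.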

Network Operation Support System on Graph Database (그래프데이터베이스 기반 통신망 운영관리 방안)

  • Jung, Sung Jae;Choi, Mi Young;Lee, Hwasik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.22-24
    • /
    • 2022
  • Recently, the Graph Database (GDB) has come into use in a wide range of industrial fields. A GDB is a database system that adopts a graph structure for storing information, handling it in the form of a graph consisting of vertices and edges. In contrast to a relational database system, which requires a pre-defined table schema, a GDB does not need a pre-defined structure for storing data, allowing a very flexible way of thinking about and using the data. With a GDB, we can handle a large volume of heavily interconnected data. A network service provider provides its services based on heavily interconnected communication network facilities. In many cases, this information is hosted in relational databases, where it is not easy to process a query that requires a recursive graph traversal operation. In this study, we suggest a way to store an example set of interconnected network facilities in a GDB, then show how to graph-query them efficiently.
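The kind of reachability query that needs a recursive common table expression in SQL becomes a plain graph walk once the data is modeled as vertices and edges. A minimal Python sketch over hypothetical facility names:

```python
from collections import defaultdict

# Hypothetical network facilities: a core switch feeds routers,
# routers feed ports, a port feeds customer premises equipment.
edges = [
    ("core-switch", "router-A"), ("core-switch", "router-B"),
    ("router-A", "port-1"), ("router-A", "port-2"),
    ("router-B", "port-3"), ("port-3", "cpe-9"),
]

adj = defaultdict(list)
for src, dst in edges:
    adj[src].append(dst)

def downstream(node, seen=None):
    """All facilities reachable from `node` (depth-first traversal)."""
    seen = set() if seen is None else seen
    for nxt in adj[node]:
        if nxt not in seen:
            seen.add(nxt)
            downstream(nxt, seen)
    return seen

print(sorted(downstream("router-B")))  # ['cpe-9', 'port-3']
```

In a graph database the same traversal is expressed as a path pattern query; the point is that reachability follows edges directly instead of joining a table to itself repeatedly.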


Object Tracking Using Adaptive Scale Factor Neural Network (적응형 스케일조절 신경망을 이용한 객체 위치 추적)

  • Sun-Bae Park;Do-Sik Yoo
    • Journal of Advanced Navigation Technology
    • /
    • v.26 no.6
    • /
    • pp.522-527
    • /
    • 2022
  • Object tracking is a field of signal processing that sequentially tracks the location of an object based on previous location estimates and present observation data. In this paper, we propose an adaptive scaling neural network that can track an object and adjust the scale of the input data with three recursive neural network (RNN) submodules. To evaluate object tracking performance, we compare the proposed system with the Kalman filter and the maximum likelihood object tracking scheme under a one-dimensional object movement model in which the object moves with piecewise constant acceleration. We show that the proposed scheme is generally better, in terms of root mean square error (RMSE), than the maximum likelihood scheme and the Kalman filter, and that the performance gaps grow with increased observation noise.
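The Kalman filter baseline under piecewise constant acceleration can be sketched as a 1-D constant-acceleration tracker; the process and measurement noise values and the trajectory below are assumptions for illustration, not the paper's setup:

```python
import numpy as np

def kalman_track(z, dt=1.0, q=0.01, r=1.0):
    """1-D constant-acceleration Kalman tracker over position
    observations z: the state is [position, velocity, acceleration]."""
    F = np.array([[1, dt, 0.5*dt**2],
                  [0, 1,  dt],
                  [0, 0,  1]])
    H = np.array([[1.0, 0.0, 0.0]])
    Q = q * np.eye(3)
    R = np.array([[r]])
    x = np.zeros(3)
    P = np.eye(3) * 10.0
    out = []
    for zk in z:
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                  # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + (K @ (np.array([zk]) - H @ x)).ravel()
        P = (np.eye(3) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

# Piecewise-constant-acceleration trajectory: accelerating, then coasting
rng = np.random.default_rng(4)
t = np.arange(100.0)
truth = np.where(t < 50, 0.05 * t**2, 0.05 * 50**2 + 5.0 * (t - 50))
z = truth + rng.normal(size=100)
est = kalman_track(z)
rmse_est = np.sqrt(np.mean((est - truth)**2))
rmse_raw = np.sqrt(np.mean((z - truth)**2))
print(rmse_est < rmse_raw)  # filtering beats the raw observations
```

The paper's RNN-based tracker plays the same role as `kalman_track` here: map a noisy observation stream to position estimates, but without assuming a fixed motion model.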

Policy evaluation of the rice market isolation system and production adjustment system

  • Dae Young Kwak;Sukho Han
    • Korean Journal of Agricultural Science
    • /
    • v.50 no.4
    • /
    • pp.629-643
    • /
    • 2023
  • The purpose of this study was to examine the effectiveness and efficiency of policy by comparing and analyzing the impact of the rice market isolation system and the production adjustment system (a strategic-crops direct payment system that induces the cultivation of other crops instead of rice) on rice supply, rice price, and the government's financial expenditure. To achieve this purpose, a rice supply and demand forecasting and policy simulation model was developed using a partial equilibrium model limited to a single item (rice), a dynamic equation model system, and a structural equation system that reflects the causal relationships between variables in line with economic theory. The rice policy analysis model used a recursive model rather than a simultaneous equation model. The model is distinct from those of previous studies in that changes in government policy affect the price of rice during harvest and the lean season before the next harvest, and these price changes in turn affect the supply and demand of rice, allowing a more specific policy effect analysis. The analysis showed that the market isolation system increased the government's financial expenditure compared to the production adjustment system, suggesting low financial efficiency, low policy effectiveness on target, and an increased harvest-season price. In particular, the market isolation system temporarily increased the price during the harvest season but decreased the price during the lean season due to an increase in ending stock caused by increased production and government stock. Therefore, a decrease in price during the lean season may decrease annual farm-gate prices, and the reverse seasonal amplitude is expected to intensify.
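The difference between a recursive and a simultaneous system can be seen in a toy cobweb-style sketch, where each equation is solved in order within a year rather than jointly; all coefficients below are illustrative placeholders, not the study's estimates:

```python
# A recursive (block-triangular) system is solved equation by equation:
# this year's plantings respond to last year's price, production follows
# plantings, and the price clears the market given production.

def simulate(years=10, p0=100.0, isolation=0.0):
    """`isolation` is the quantity the government removes from the
    market each year (market isolation); 0 means no intervention."""
    price, path = p0, []
    for _ in range(years):
        area = 50 + 0.2 * price            # acreage responds to lagged price
        supply = 1.5 * area - isolation    # production minus isolated stock
        price = 200 - 1.0 * supply         # inverse demand clears the market
        path.append(price)
    return path

base = simulate()
isolated = simulate(isolation=5.0)
# Removing stock raises the market-clearing price in every simulated year
print(all(b < i for b, i in zip(base, isolated)))
```

Because each equation depends only on variables already determined, the system needs no simultaneous solution; this is what makes the policy shock's path through acreage, supply, and price easy to trace year by year.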

Survival Analysis of Patients with Brain Metastasis by Weighting According to the Primary Tumor Oncotype (전이성 뇌종양 환자에서 원발 종양 가중치에 따른 생존율 분석)

  • Gwak, Hee-Keun;Kim, Woo-Chul;Kim, Hun-Jung;Park, Jung-Hoon;Song, Chang-Hoon
    • Radiation Oncology Journal
    • /
    • v.27 no.3
    • /
    • pp.140-144
    • /
    • 2009
  • Purpose: This study was performed to retrospectively analyze patient survival, weighted according to the primary tumor oncotype, in 160 patients with brain metastasis who underwent whole brain radiotherapy. Materials and Methods: A total of 160 metastatic brain cancer patients who were treated with whole brain radiotherapy of 30 Gy between 2002 and 2008 were retrospectively analyzed. The primary tumor oncotype of 20 patients was breast cancer, and that of 103 patients was lung cancer. Excluding 18 patients with leptomeningeal seeding, a total of 142 patients were analyzed according to the prognostic factors and the Recursive Partitioning Analysis (RPA) class. Weighted Partitioning Analysis (WPA), with weighting according to the primary tumor oncotype, was performed; the results were correlated with survival and compared with the RPA class. Results: The median survival of patients in RPA Class I (8 patients) was 20.0 months, that for Class II (76 patients) was 10.0 months, and that for Class III (58 patients) was 3.0 months (p<0.003). The median survival of patients in WPA Class I (3 patients) was 36 months, that for Class II (9 patients) was 23.7 months, that for Class III (70 patients) was 10.9 months, and that for Class IV (60 patients) was 8.6 months (p<0.001). The WPA class appears to assess survival more accurately than the RPA class. Conclusion: The new prognostic index, the WPA class, has more prognostic value than the RPA class for the treatment of patients with metastatic brain cancer and may be useful for guiding the appropriate treatment of metastatic brain lesions.

Self-optimizing feature selection algorithm for enhancing campaign effectiveness (캠페인 효과 제고를 위한 자기 최적화 변수 선택 알고리즘)

  • Seo, Jeoung-soo;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.173-198
    • /
    • 2020
  • For a long time, many studies in academia have addressed predicting the success of customer campaigns, and prediction models applying various techniques are still being studied. Recently, as campaign channels have expanded due to the rapid growth of online business, companies carry out various types of campaigns on a scale that cannot be compared to the past. However, customers tend to perceive campaigns as spam as fatigue from duplicate exposure increases. From a corporate standpoint, the effectiveness of campaigns is also decreasing, as investment costs rise while actual success rates remain low. Accordingly, various studies are ongoing to improve campaign effectiveness in practice. A campaign system has the ultimate purpose of increasing the success rate of campaigns by collecting and analyzing customer-related data and using it for targeting. In particular, recent attempts have been made to predict campaign response using machine learning. Selecting appropriate features is very important given the many features of campaign data. If all input data are used when classifying a large amount of data, learning time grows as the number of classes expands, so a minimal input data set must be extracted from the entire data. In addition, when a model is trained with too many features, prediction accuracy may be degraded by overfitting or correlation between features. Therefore, to improve accuracy, a feature selection technique that removes features close to noise should be applied; feature selection is a necessary step in analyzing a high-dimensional data set.
Among greedy algorithms, SFS (Sequential Forward Selection), SBS (Sequential Backward Selection), and SFFS (Sequential Floating Forward Selection) are widely used as traditional feature selection techniques, but when there are many features their classification performance is limited and learning takes a long time. Therefore, this study proposes an improved feature selection algorithm to enhance the effectiveness of existing campaigns. The purpose of this study is to improve the existing SFFS sequential method in searching for the feature subsets that underpin machine learning model performance, using statistical characteristics of the data processed in the campaign system. Features with a strong influence on performance are derived first and features with a negative effect are removed; the sequential method is then applied to increase search efficiency and enable generalized prediction. The proposed model showed better search and prediction performance than the traditional greedy algorithm: campaign success prediction was higher than with the original data set, the greedy algorithm, the genetic algorithm (GA), and recursive feature elimination (RFE). In addition, the improved feature selection algorithm helped analyze and interpret the prediction results by providing the importance of the derived features. These include features already known to be statistically important, such as age, customer rating, and sales.
Unlike previous campaign planners' choices, features such as the combined product name, the average three-month data consumption rate, and the last three months' wireless data usage were unexpectedly selected as important for campaign response, although they had rarely been used to select campaign targets. It was confirmed that base attributes can also be very important features depending on the campaign type, making it possible to analyze and understand the important characteristics of each campaign type.
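Plain sequential forward selection, the greedy base that SFFS extends with a "floating" backward step, can be sketched as follows on synthetic data; the scorer and data are illustrative, not the paper's campaign data or proposed algorithm:

```python
import numpy as np

def sfs(X, y, k):
    """Greedy Sequential Forward Selection: repeatedly add the feature
    that most improves an OLS fit (scored by residual sum of squares).
    SFFS would additionally try removing features after each addition."""
    selected, remaining = [], list(range(X.shape[1]))

    def score(cols):
        A = np.column_stack([np.ones(len(y)), X[:, cols]])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        return np.sum((y - A @ beta) ** 2)

    while len(selected) < k:
        best = min(remaining, key=lambda j: score(selected + [j]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Synthetic campaign-style data: only features 0 and 3 drive the response
rng = np.random.default_rng(5)
X = rng.normal(size=(200, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.normal(size=200)
print(sorted(sfs(X, y, 2)))  # [0, 3]
```

The weakness the paper targets is visible in the structure: each step re-scores every remaining feature, so cost grows quickly with dimensionality, and a purely greedy path can miss interacting feature pairs.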

Trace-Back Viterbi Decoder with Sequential State Transition Control (순서적 역방향 상태천이 제어에 의한 역추적 비터비 디코더)

  • 정차근
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.40 no.11
    • /
    • pp.51-62
    • /
    • 2003
  • This paper presents novel survivor memory management and decoding techniques with sequential backward state transition control for the trace-back Viterbi decoder. The Viterbi algorithm is a maximum likelihood decoding scheme that estimates the likelihood of encoder states for channel error detection and correction. The scheme is applied to a broad range of digital communication tasks such as intersymbol interference removal and channel equalization. To achieve area-efficient VLSI chip design with high throughput in the Viterbi decoder, in which recursive operation is implied, more research is required into simple, systematic parallel ACS architectures and survivor memory management. As a solution, this paper presents a progressive decoding algorithm with sequential backward state transition control in the trace-back Viterbi decoder. Compared to conventional trace-back decoding techniques, the required total memory can be greatly reduced in the proposed method. Furthermore, the proposed method can be implemented as a simple pipelined structure with a systolic-array-type architecture. No peripheral logic circuit for memory access control is required, and memory access bandwidth can be reduced. Therefore, the proposed method offers high area efficiency and low power consumption with high throughput. Finally, decoding results for received data with channel noise and an application example are provided to evaluate the efficiency of the proposed method.
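The survivor-memory and trace-back mechanics that such decoders optimize can be seen in a minimal hard-decision Viterbi decoder for the standard rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 octal). This is a textbook software sketch, not the paper's VLSI architecture:

```python
# Hard-decision Viterbi decoding with explicit survivor memory and a
# final trace-back pass over the recorded back-pointers.

G = [0b111, 0b101]  # generator polynomials (7, 5 octal)

def parity(x):
    return bin(x).count("1") & 1

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state           # [current bit, previous two bits]
        out += [parity(reg & g) for g in G]
        state = reg >> 1
    return out

def viterbi(received):
    n_states, INF = 4, float("inf")
    metric = [0.0] + [INF] * (n_states - 1)   # start in the all-zero state
    survivors = []   # per step: (predecessor state, input bit) per state
    for t in range(0, len(received), 2):
        r = received[t:t+2]
        new = [INF] * n_states
        back = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):                  # ACS: add, compare, select
                reg = (b << 2) | s
                branch = sum(parity(reg & g) != ri for g, ri in zip(G, r))
                ns = reg >> 1
                if metric[s] + branch < new[ns]:
                    new[ns] = metric[s] + branch
                    back[ns] = (s, b)
        survivors.append(back)
        metric = new
    # Trace back from the best final state to recover the input bits
    state = metric.index(min(metric))
    bits = []
    for back in reversed(survivors):
        state, b = back[state]
        bits.append(b)
    return bits[::-1]

msg = [1, 0, 1, 1, 0, 0, 1]
rx = encode(msg)
rx[3] ^= 1                     # inject one channel bit error
print(viterbi(rx) == msg)      # the error is corrected
```

The `survivors` list is exactly the memory whose size and access pattern trace-back architectures fight over: a hardware decoder cannot keep the whole trellis history, so it manages this structure in windows, which is where sequential backward state transition control comes in.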