The characteristics of the static shift are discussed by comparing three-dimensional MT inversions with and without static shift parameterization. Galvanic distortion caused by small-scale shallow features often leads to severe artifacts in inverted resistivity structures. The new inversion algorithm is applied to four numerical data sets contaminated by different amounts of static shift. In real field data interpretation, we generally have no a priori information about how much static shift the data contain. In this study, we developed an algorithm that simultaneously determines both the Lagrange multiplier for smoothness and the trade-off parameter for static shift in 3-D MT inversion. Applying this inversion routine to the numerical data sets yielded quite reasonable estimates of the static shift parameters without any a priori information. The inversion scheme was successfully applied to all four data sets, even when the static shift did not follow a Gaussian distribution. By allowing the static shift parameters a non-zero degree of freedom in the inversion, we obtained more accurate block resistivities as well as static shifts. When the inversion does not treat the static shifts as inversion parameters (conventional MT inversion), the block resistivities at the surface are modified considerably to accommodate the possible static shift. Inhomogeneous blocks at the surface can generate static shift at low frequencies. Through these mechanisms, conventional 3-D MT inversion can reconstruct the resistivity structure in the deeper parts to some extent even when moderate static shifts are present in the data. At higher frequencies, however, the galvanic distortion is no longer frequency-independent, and the conventional inversion fails to fit the apparent resistivity and phase, especially when strong static shift is added.
Even in such cases, however, reasonable estimates of the block resistivities as well as the static shift parameters were obtained by 3-D MT inversion with static shift parameterization.
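In notation of our own choosing (the abstract does not give the paper's exact symbols), an inversion of this kind typically minimizes an objective function of the form

```latex
\Phi(\mathbf{m}, \mathbf{s}) =
\left\| \mathbf{W}_d \left( \mathbf{d} - F(\mathbf{m}) - \mathbf{s} \right) \right\|^{2}
+ \lambda \left\| \partial \mathbf{m} \right\|^{2}
+ \beta \left\| \mathbf{s} \right\|^{2}
```

where \(\mathbf{m}\) holds the block resistivities, \(F\) is the forward modeling operator, and \(\mathbf{s}\) holds the static shift parameters, which act additively on log apparent resistivity because a static shift multiplies apparent resistivity by a frequency-independent factor. Here \(\lambda\) is the Lagrange multiplier for smoothness and \(\beta\) is the trade-off parameter for static shift; the algorithm described above searches for both automatically.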
Most drought assessments are based on a drought index that depends on a single variable such as precipitation or soil moisture. However, single variables are limited in representing drought conditions because of their complexity. It has been acknowledged that a multivariate drought index can describe the complex drought state more effectively. In this context, this study proposes a copula-based drought index that jointly considers precipitation and soil moisture. Unlike precipitation data, long-term soil moisture data are not readily available, so this study utilized a Gaussian Mixture Non-Homogeneous Hidden Markov Model (GM-NHMM) to simulate soil moisture from precipitation and temperature observed from 1973 to 2014. The GM-NHMM showed better performance in reproducing key statistics of soil moisture than a multiple regression model. Finally, a bivariate frequency analysis was performed on drought duration and severity, and it was confirmed that the recent 2015 drought over Jeollabuk-do has a 20-year return period.
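The joint-probability idea behind a copula-based index can be illustrated with a simplified empirical-copula stand-in (not the fitted parametric copula of the study; function and variable names are ours):

```python
import numpy as np

def empirical_joint_index(precip, soil_moisture):
    """Empirical-copula joint drought indicator.

    For each time step t, estimate P(P <= p_t, S <= s_t) from the
    empirical copula of the two series; low values flag periods in which
    precipitation and soil moisture are jointly low (bivariate drought).
    This is a simplified sketch of the joint treatment described above.
    """
    p = np.asarray(precip, dtype=float)
    s = np.asarray(soil_moisture, dtype=float)
    n = len(p)
    # Fraction of the record with both variables at or below this step's values
    return np.array([np.mean((p <= p[t]) & (s <= s[t])) for t in range(n)])

# Toy usage: perfectly co-monotone series give probabilities 1/n, 2/n, ...
joint = empirical_joint_index([1, 2, 3, 4], [1, 2, 3, 4])
```

In practice the joint probability would then be mapped to a standardized index (e.g. via an inverse normal transform), analogously to how the SPI standardizes marginal precipitation probabilities.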
Recently, advancement and commercialization in the field of maritime autonomous surface ships (MASS) have progressed rapidly. Concurrently, studies are underway to develop methods for automatically and remotely surveying the condition of various on-board equipment to ensure the navigational safety of MASS. One key issue that has gained prominence is how to obtain readings from analog gauges installed in various equipment through image processing. This approach has the advantage of enabling non-contact detection of gauge values without modifying or replacing equipment that is already installed or planned, eliminating the need for type approval changes from classification societies. The objective of this study was to identify a dynamically changing indicator needle within noisy images of analog gauges. The needle must be identified because its position directly determines the accuracy of the gauge reading. An analog pressure gauge attached to an emergency fire pump model was used for image capture. The acquired images were pre-processed through Gaussian filtering, thresholding, and morphological operations, and the needle was then identified using the Hough transform. The experimental results confirmed that the center and the needle object could be identified in noisy images of analog gauges. The findings suggest that the image processing method applied in this study can be utilized for shape identification in analog gauges installed on ships and is expected to be applicable to the automatic remote survey of MASS.
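The line-finding step of such a pipeline can be sketched with a minimal standard Hough transform in NumPy, applied here to a synthetic binary image standing in for the pre-processed gauge photo (a generic illustration, not the study's implementation):

```python
import numpy as np

def hough_line_peak(binary, n_theta=180):
    """Return (rho, theta) of the strongest straight line in a binary image.

    Standard Hough voting: each foreground pixel (x, y) votes for all
    (rho, theta) with rho = x*cos(theta) + y*sin(theta). The accumulator
    peak corresponds to the dominant line, e.g. a gauge needle after
    Gaussian filtering, thresholding, and morphological cleanup.
    """
    h, w = binary.shape
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(h, w)))          # max possible |rho|
    accumulator = np.zeros((2 * diag, n_theta), dtype=int)
    ys, xs = np.nonzero(binary)
    for ti, theta in enumerate(thetas):
        rhos = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int)
        np.add.at(accumulator[:, ti], rhos + diag, 1)  # shift rho to be non-negative
    ri, ti = np.unravel_index(accumulator.argmax(), accumulator.shape)
    return ri - diag, thetas[ti]

# Synthetic "needle": a vertical segment at x = 20 in a 64x64 image.
img = np.zeros((64, 64), dtype=bool)
img[10:50, 20] = True
rho, theta = hough_line_peak(img)  # expect rho ~ 20, theta ~ 0 (vertical line)
```

A real gauge reader would additionally locate the dial center (e.g. via a circular Hough transform) and convert the needle angle into a physical reading using the gauge's scale calibration.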
For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Venture companies have shown a tendency to give high returns to investors, generally by making the best use of information technology. For this reason, many venture companies are keen on attracting avid investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard and Poor's, Moody's, and Fitch is a crucial source regarding such pivotal concerns as a company's stability, growth, and risk status. However, this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses, presenting recent empirical results based on financial data of Korean venture companies listed on KOSDAQ in the Korea Exchange. In addition, this paper uses a multi-class SVM to predict the DEA-based efficiency rating of venture businesses derived from the proposed method. Our approach sheds light on ways to locate efficient companies that generate high levels of profit. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. Therefore, this paper is built on the following two ideas for classifying which companies are the more efficient venture companies: i) constructing a DEA-based multi-class rating for the sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies.
First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption on the shape or parameters of the underlying production function. DEA has already been widely applied to evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and DEA has also been applied to corporate credit ratings. In this study, we utilized DEA to sort venture companies into efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the results of DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is grounded in statistical learning theory. Thus far, the method has shown good performance, especially in generalization capacity for classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum margin hyperplane, i.e., the hyperplane with the maximum separation between classes; the support vectors are the training points closest to this hyperplane. When the classes are not linearly separable, a kernel function can be used: the original input space is mapped into a high-dimensional dot-product feature space in which a linear boundary can be found. Many studies have applied SVM to the prediction of bankruptcy, the forecasting of financial time series, and the estimation of credit ratings. In this study, we employed SVM to develop a data mining-based efficiency prediction model.
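As a concrete reference, the input-oriented CCR model, the most common DEA formulation (shown here in its linear-programming form; the abstract does not state which DEA variant the study used), evaluates DMU \(o\) with inputs \(x_{io}\) and outputs \(y_{ro}\) by

```latex
\begin{aligned}
\max_{u, v} \quad & \sum_{r} u_r \, y_{ro} \\
\text{s.t.} \quad & \sum_{i} v_i \, x_{io} = 1, \\
& \sum_{r} u_r \, y_{rj} - \sum_{i} v_i \, x_{ij} \le 0 \qquad (j = 1, \dots, n), \\
& u_r \ge 0, \quad v_i \ge 0,
\end{aligned}
```

where the optimal objective value is the relative efficiency of DMU \(o\): a DMU is efficient if no weighting of inputs and outputs makes any other DMU look better by the same weights.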
We used the Gaussian radial basis function as the kernel function of the SVM. For multi-class SVM, we adopted the one-against-one binary classification approach and the two all-together methods proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information on 154 companies listed on the KOSDAQ market of the Korea Exchange. We obtained the companies' financial information for 2005 from KIS (Korea Information Service, Inc.). Using these data, we constructed a multi-class rating based on DEA efficiency and built a data mining-based multi-class prediction model. Among the three multi-classification methods, the Weston and Watkins method achieved the best hit ratio on the test data set. In multi-classification problems such as efficiency ratings of venture businesses, it is very useful for investors to know the class within a one-class error when it is difficult to determine the exact class in the actual market. We therefore also report accuracy within one-class errors, for which the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification of efficiency levels. We believe this model can help investors in decision making, as it provides a reliable tool for evaluating venture companies in the financial domain. For future research, we perceive the need to enhance such areas as the variable selection process, the parameter selection of the kernel function, generalization, and the sample size for multi-class classification.
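The two accuracy measures reported above, the exact hit ratio and the hit ratio within a one-class error, can be computed as follows (a generic sketch; the rating labels are assumed to be consecutive integers):

```python
def hit_ratios(y_true, y_pred):
    """Exact and within-one-class hit ratios for ordinal rating predictions.

    y_true, y_pred: sequences of integer class labels (e.g. DEA-based
    efficiency ratings). "Within one class" counts a prediction as a hit
    when it misses the true rating by at most one grade.
    """
    n = len(y_true)
    exact = sum(t == p for t, p in zip(y_true, y_pred)) / n
    within_one = sum(abs(t - p) <= 1 for t, p in zip(y_true, y_pred)) / n
    return exact, within_one

# Toy usage: two exact hits, one off-by-one, one off-by-two.
exact, within_one = hit_ratios([1, 2, 3, 4], [1, 3, 3, 2])  # -> 0.5, 0.75
```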
Purpose: Factor analysis and independent component analysis (ICA) have been used for handling dynamic image sequences. The theoretical advantages of a newly suggested ICA method, ensemble ICA, led us to consider applying this method to the analysis of dynamic myocardial
Purpose: Quantification of myocardial blood flow (MBF) using dynamic PET imaging has the potential to assess coronary artery disease. Rb-82 plays a key role in the clinical assessment of myocardial perfusion using PET. However, MBF can be overestimated because the left ventricular input function is underestimated at the beginning of the acquisition when the scanner exhibits non-linearity between count rate and activity concentration due to scanner dead time. Therefore, in this study, we evaluated the count rate linearity as a function of activity concentration in PET data acquired in list mode. Materials & methods: A cylindrical phantom (diameter 12 cm, length 10.5 cm) filled with 296 MBq of F-18 solution and 800 mL of water was used to estimate the linearity of the Biograph 40 True Point PET/CT scanner. PET data were acquired in list mode with 10 min per frame for one bed position at different activity concentration levels over 7 half-lives. The images were reconstructed with the OSEM and FBP algorithms. Prompt, net true, and random counts of the PET data were measured as a function of activity concentration. Total and background counts were measured by drawing ROIs on the phantom images, and linearity was assessed with background correction. Results: The prompt count rates in list mode increased linearly in proportion to the activity concentration. At low activity concentrations (<30 kBq/mL), the prompt, net true, and random count rates increased with the activity concentration. At high activity concentrations (>30 kBq/mL), the rate of increase of the net true counts decreased slightly, while the rate of increase of the random counts increased. There was no difference in image intensity linearity between the OSEM and FBP algorithms.
Conclusion: The Biograph 40 True Point PET/CT scanner showed good count rate linearity even at a high activity concentration (~370 kBq/mL). The result indicates that the scanner is suitable for quantitative analysis of dynamic cardiac studies using Rb-82, N-13, O-15, and F-18.
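A linearity check of this kind can be sketched numerically as follows (hypothetical numbers; treating the low-activity range as the linear reference and fitting a line through the origin are our assumptions, not the paper's exact procedure):

```python
import numpy as np

def linearity_deviation(activity, count_rate, low_frac=0.3):
    """Percent deviation of measured count rate from an ideal linear response.

    Fit rate = slope * activity (through the origin) on the low-activity
    points, where dead-time losses are negligible, then report each point's
    percent deviation from that line; large negative values at high
    activity indicate dead-time-related count loss.
    """
    a = np.asarray(activity, dtype=float)
    r = np.asarray(count_rate, dtype=float)
    low = a <= a.max() * low_frac                    # assumed-linear range
    slope = (a[low] @ r[low]) / (a[low] @ a[low])    # least squares through origin
    return 100.0 * (r - slope * a) / (slope * a)

# Toy usage: the last point loses 10% of its expected counts.
dev = linearity_deviation([1, 2, 3, 10], [2, 4, 6, 18])  # -> [0, 0, 0, -10]
```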
The KOSPI200 index is a Korean stock price index consisting of 200 actively traded stocks in the Korean stock market. Its base value of 100 was set on January 3, 1990. The Korea Exchange (KRX) developed derivatives markets on the KOSPI200 index. The KOSPI200 index futures market, introduced in 1996, has become one of the most actively traded index futures markets in the world. Traders can profit by entering a long position in a KOSPI200 index futures contract if the index subsequently rises; likewise, they can profit by entering a short position if the index subsequently declines. Basically, KOSPI200 index futures trading is a short-term zero-sum game, and therefore most futures traders use technical indicators. Advanced traders make stable profits by using system trading techniques, also known as algorithmic trading. Algorithmic trading uses computer programs to receive real-time stock market data, analyze stock price movements with various technical indicators, and automatically enter trading orders, deciding timing, price, and quantity without any human intervention. Recent studies have shown the usefulness of artificial intelligence systems in forecasting stock prices and investment risk. KOSPI200 index data are numerical time-series data, a sequence of data points measured at successive uniform time intervals such as a minute, day, week, or month. KOSPI200 index futures traders use technical analysis to find patterns in the time-series chart. Although there are many technical indicators, their outputs indicate the market state: bull, bear, or flat. Most strategies based on technical analysis are divided into trend-following and non-trend-following strategies. Both decide the market state based on patterns in the KOSPI200 index time series. This fits well with a Markov model (MM).
Everybody knows that the next price will be higher than, lower than, or similar to the last price, and that the next price is influenced by the last price; however, nobody knows the exact state of the next price, whether it will go up, down, or stay flat. Therefore, a hidden Markov model (HMM) fits better than an MM. HMMs are divided into discrete HMMs (DHMM) and continuous HMMs (CHMM). The only difference between them is the representation of the observation probabilities: a DHMM uses a discrete probability density function, while a CHMM uses a continuous one, such as a Gaussian mixture model. KOSPI200 index values are real numbers following a continuous probability density function, so a CHMM is more appropriate than a DHMM for the KOSPI200 index. In this paper, we present an artificial intelligence trading system based on a CHMM for KOSPI200 index futures system traders. Traders have accumulated experience in technical trading ever since the introduction of the KOSPI200 index futures market. They have applied many strategies to profit from trading KOSPI200 index futures: some are based on technical indicators such as moving averages or stochastics, and others on candlestick patterns such as three outside up, three outside down, harami, or doji star. We present a trading system implementing a moving average cross strategy based on a CHMM and compare it to a traditional algorithmic trading system, setting the moving average parameters to values commonly used by market practitioners. Empirical results compare the simulation performance with that of the traditional algorithmic trading system using more than 20 years of daily KOSPI200 index data. Our suggested trading system shows higher trading performance than the naive system trading.
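The moving average cross rule underlying the baseline strategy can be sketched as follows (parameter values are placeholders, and the CHMM regime filter the paper layers on top of such a rule is not shown):

```python
import numpy as np

def sma(x, w):
    """Simple moving average; each output is aligned to the last index of its window."""
    c = np.cumsum(np.insert(np.asarray(x, dtype=float), 0, 0.0))
    return (c[w:] - c[:-w]) / w

def ma_cross_positions(prices, short=5, long=20):
    """Return +1/-1 positions from a moving-average cross rule.

    Hold a long position (+1) while the short-window moving average is
    above the long-window one, and a short position (-1) otherwise; a
    cross of the two averages is the trade signal.
    """
    s = sma(prices, short)[long - short:]  # trim so both MAs end on the same dates
    l = sma(prices, long)
    return np.where(s > l, 1, -1)

# Toy usage: 30 rising days then 30 falling days flips the position.
prices = list(range(1, 31)) + list(range(30, 0, -1))
pos = ma_cross_positions(prices, short=3, long=10)  # starts long, ends short
```

A CHMM-based variant would feed the same price series to the model, infer the hidden market state (bull, bear, or flat), and only take the cross signals that agree with the inferred state.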
The wall shear stress in the vicinity of end-to-end anastomoses under steady flow conditions was measured using a flush-mounted hot-film anemometer (FMHFA) probe. The experimental measurements were in good agreement with numerical results except in flows with low Reynolds numbers. The wall shear stress increased proximal to the anastomosis in flow from the Penrose tubing (simulating an artery) to the PTFE graft. In flow from the PTFE graft to the Penrose tubing, low wall shear stress was observed distal to the anastomosis. Abnormal distributions of wall shear stress in the vicinity of the anastomosis, resulting from the compliance mismatch between the graft and the host artery, might be an important factor in ANFH formation and graft failure. The present study suggests a correlation between regions of low wall shear stress and the development of anastomotic neointimal fibrous hyperplasia (ANFH) in end-to-end anastomoses.

Air pressure decay (APD) rate and ultrafiltration rate (UFR) tests were performed on new and saline-rinsed dialyzers as well as those reused in patients several times. C-DAK 4000 (Cordis Dow) and CF IS-11 (Baxter Travenol) reused dialyzers obtained from the dialysis clinic were used in the present study. The new dialyzers exhibited a relatively flat APD, whereas saline-rinsed and reused dialyzers showed a considerable amount of decay. C-DAK dialyzers had a larger APD (11.70