• Title/Summary/Keyword: measure matrix


Development of Thermoplastic Carbon Composite Hybrid Bipolar Plate for Vanadium Redox Flow Batteries (VRFB) (바나듐 레독스 흐름전지용 열가소성 탄소 복합재료 하이브리드 분리판 개발)

  • Jun Woo Lim
    • Composites Research
    • /
    • v.36 no.6
    • /
    • pp.422-428
    • /
    • 2023
  • The electrical contact resistance between the bipolar plate (BP) and the carbon felt electrode (CFE), which are pressed into contact by the stack clamping pressure, has a great impact on stack efficiency because of the relatively low clamping pressure of the vanadium redox flow battery (VRFB) stack. In this study, a polyethylene (PE) composite-CFE hybrid bipolar plate structure is developed through a local heat welding process to reduce this contact resistance and improve cell performance. The PE matrix of the carbon fiber composite BP is locally melted to create a direct contact structure between the carbon fibers of the CFE and those of the BP, thereby reducing the electrical contact resistance. Area specific resistance (ASR) and gas permeability are measured to evaluate the performance of the PE composite-CFE hybrid bipolar plate. In addition, an acid aging test is performed to assess stack reliability. Finally, a VRFB unit cell charge/discharge test is performed to compare and analyze the performance of the developed PE composite-CFE hybrid BP and a conventional BP.
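As a rough illustration of the area specific resistance measurement mentioned above: ASR is conventionally reported as the measured through-plane ohmic resistance multiplied by the contact area. The sketch below uses hypothetical measurement values, not figures from the paper.

```python
def area_specific_resistance(voltage_drop_v, current_a, contact_area_cm2):
    """ASR (ohm*cm^2): ohmic resistance R = V/I scaled by the contact area.

    A lower ASR indicates a better electrical contact between the bipolar
    plate and the carbon felt electrode. All values here are hypothetical.
    """
    resistance_ohm = voltage_drop_v / current_a
    return resistance_ohm * contact_area_cm2

# Hypothetical reading: 50 mV drop at 1 A across a 4 cm^2 contact
asr = area_specific_resistance(0.05, 1.0, 4.0)  # 0.2 ohm*cm^2
```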

A comparison study of canonical methods: Application to -Omics data (오믹스 자료를 이용한 정준방법 비교)

  • Seungsoo Lee;Eun Jeong Min
    • The Korean Journal of Applied Statistics
    • /
    • v.37 no.2
    • /
    • pp.157-176
    • /
    • 2024
  • Integrative analysis for a better understanding of complex biological systems is gaining attention. Observing subjects from various perspectives and conducting integrative analysis of those multiple datasets enables a deeper understanding of the subject. In this paper, we compared two methods that simultaneously consider two datasets gathered from the same objects: canonical correlation analysis (CCA) and co-inertia analysis (CIA). Since CCA cannot handle high-dimensional data, two strategies were considered instead: utilization of a ridge constant (CCA-ridge), and substitution of the covariance matrix of each dataset with the identity matrix followed by penalized singular value decomposition (CCA-PMD). To illustrate CIA and CCA, both extensions were applied to NCI60 cell line data. Both methods are shown to yield biologically meaningful and significant results by identifying important genes that enhance our comprehension of the data. Their results show some dissimilarities arising from the different criteria each method uses to measure the relationship between the two sets of data. Additionally, CIA exhibits variation depending on the weight matrices employed.
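The CCA-ridge strategy described above can be sketched in a few lines: add a ridge constant to each within-set covariance so it stays invertible in high dimensions, whiten both sets, and read the (shrunken) canonical correlations off an SVD. This is a minimal illustration with simulated data, not the authors' code.

```python
import numpy as np

def cca_ridge(X, Y, lam=1e-3):
    """CCA with a ridge constant on each within-set covariance matrix.

    Returns the canonical correlations and the canonical weight matrices.
    The ridge term lam keeps Sxx and Syy invertible when p or q is large.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / n + lam * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + lam * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n
    # Whiten via Cholesky factors, then SVD the whitened cross-covariance
    Lx = np.linalg.cholesky(Sxx)
    Ly = np.linalg.cholesky(Syy)
    M = np.linalg.solve(Lx, Sxy) @ np.linalg.inv(Ly).T
    U, s, Vt = np.linalg.svd(M)
    a = np.linalg.solve(Lx.T, U)      # canonical weights for X
    b = np.linalg.solve(Ly.T, Vt.T)   # canonical weights for Y
    return s, a, b

# Simulated example: both views share one latent signal z
rng = np.random.default_rng(0)
z = rng.normal(size=(500, 1))
X = z @ np.ones((1, 3)) + 0.1 * rng.normal(size=(500, 3))
Y = z @ np.ones((1, 2)) + 0.1 * rng.normal(size=(500, 2))
corrs, a, b = cca_ridge(X, Y)
```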

Development and Validation of Korean Composit Burn Index(KCBI) (한국형 산불피해강도지수(KCBI)의 개발 및 검증)

  • Lee, Hyunjoo;Lee, Joo-Mee;Won, Myoung-Soo;Lee, Sang-Woo
    • Journal of Korean Society of Forest Science
    • /
    • v.101 no.1
    • /
    • pp.163-174
    • /
    • 2012
  • CBI (Composite Burn Index), developed by the USDA Forest Service, is an index to measure burn severity based on remote sensing. In Korea, the CBI has been used to investigate the burn severity of fire sites for the last few years. However, it has been argued that the CBI is not adequate to capture the unique characteristics of Korean forests, and there has been a demand to develop a KCBI (Korean Composite Burn Index). In this regard, this study aimed to develop the KCBI by adjusting the CBI and to validate its applicability using remote sensing techniques. Uljin and Youngduk, two large fire sites burned in 2011, were selected as study areas, and forty-four sampling plots were assigned in each study area for field survey. Burn severity (BS) of the study areas was estimated by analyzing NDVI from SPOT images taken one month after the fires. The applicability of the KCBI was validated with correlation analysis between KCBI index values and NDVI values and their confusion matrix. The results showed that KCBI values and NDVI values were closely correlated in both Uljin (r = -0.54, p < 0.01) and Youngduk (r = -0.61, p < 0.01). Thus, the results support that the proposed KCBI is an adequate index to measure burn severity of fire sites in Korea. There were a number of limitations, such as the low correlation coefficients between BS and KCBI and the skewed distribution of KCBI sampling plots toward the High and Extreme classes. Despite these limitations, the proposed KCBI showed high potential for estimating burn severity of fire sites in Korea and could be improved by addressing these limitations in further studies.
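The validation step described above, correlating field index values against satellite-derived NDVI and then cross-tabulating the severity classes each one implies, can be sketched as follows. The data and class breakpoints here are hypothetical, not the study's.

```python
import numpy as np

def validate_severity_index(index_vals, ndvi_vals, index_bins, ndvi_bins):
    """Pearson r between index and NDVI values, plus a confusion matrix
    of the severity classes implied by each. Bins are hypothetical
    class breakpoints, not the paper's thresholds."""
    r = np.corrcoef(index_vals, ndvi_vals)[0, 1]
    idx_cls = np.digitize(index_vals, index_bins)
    ndvi_cls = np.digitize(ndvi_vals, ndvi_bins)
    n_cls = len(index_bins) + 1
    cm = np.zeros((n_cls, n_cls), dtype=int)
    for i, j in zip(idx_cls, ndvi_cls):
        cm[i, j] += 1
    return r, cm

# Hypothetical plots: as severity rises, NDVI falls (hence the negative r)
rng = np.random.default_rng(1)
kcbi = rng.uniform(0, 3, 44)
ndvi = 0.8 - 0.2 * kcbi + rng.normal(0, 0.05, 44)
r, cm = validate_severity_index(kcbi, ndvi, [1.0, 2.0], [0.4, 0.6])
```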

A Study on the Asia Container Ports Clustering Using Hierarchical Clustering(Single, Complete, Average, Centroid Linkages) Methods with Empirical Verification of Clustering Using the Silhouette Method and the Second Stage(Type II) Cross-Efficiency Matrix Clustering Model (계층적 군집분석(최단, 최장, 평균, 중앙연결)방법에 의한 아시아 컨테이너 항만의 클러스터링 측정 및 실루엣방법과 2단계(Type II) 교차효율성 메트릭스 군집모형을 이용한 실증적 검증에 관한 연구)

  • Park, Ro-Kyung
    • Journal of Korea Port Economic Association
    • /
    • v.37 no.1
    • /
    • pp.31-70
    • /
    • 2021
  • The purpose of this paper is to measure the clustering change, analyze empirical results, and choose the clustering ports for Busan, Incheon, and Gwangyang by using hierarchical clustering (single, complete, average, and centroid linkage), Silhouette, and 2SCE [the Second Stage (Type II) cross-efficiency] matrix clustering models on Asian container ports over the period 2009-2018. The models use number of cranes, depth, berth length, and total area as inputs and container TEU as output. The main empirical results are as follows. First, the ranking order according to the efficiency increase over the 10-year analysis is Silhouette (0.4052 up), hierarchical clustering (0.3097 up), and 2SCE (0.1057 up). Second, according to empirical verification of the Silhouette and 2SCE models, the three Korean ports should be clustered as follows: Busan Port (Dubai, Hong Kong, and Tanjung Priok), while Incheon Port and Gwangyang Port are required to cluster with most ports. Third, in terms of ASEAN, it would be good to cluster as Busan Port (Singapore), Incheon Port (Tanjung Priok, Tanjung Perak, Manila, Tanjung Pelepas, Laem Chabang, and Bangkok), and Gwangyang Port (Tanjung Priok, Tanjung Perak, Port Klang, Tanjung Pelepas, Laem Chabang, and Bangkok). Fourth, Wilcoxon's signed-ranks test of the models shows that all p-values are significant at an average level of 0.852, meaning that the average efficiency figures and ranking orders of the models match each other. The policy implication is that port policy makers and port operation managers should select benchmarking ports by introducing the models used in this study into the clustering of ports, compare and analyze the port development and operation plans of their ports, and quickly introduce and implement the parts that require benchmarking.
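The pairing of hierarchical linkage methods with silhouette validation used above can be sketched as follows, with hypothetical standardized port features (cranes, depth, berth length, area) standing in for the study's data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

def mean_silhouette(X, labels):
    """Mean silhouette coefficient: for each point, cohesion a(i) vs.
    separation b(i), combined as (b - a) / max(a, b)."""
    D = squareform(pdist(X))
    scores = np.zeros(len(labels))
    for i in range(len(labels)):
        same = labels == labels[i]
        same[i] = False
        a = D[i, same].mean() if same.any() else 0.0
        b = min(D[i, labels == k].mean() for k in set(labels) if k != labels[i])
        scores[i] = (b - a) / max(a, b) if max(a, b) > 0 else 0.0
    return scores.mean()

# Two hypothetical groups of ports in a 4-feature space
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (10, 4)), rng.normal(3, 0.3, (10, 4))])

# Compare the four linkage methods by their silhouette score at 2 clusters
methods = ['single', 'complete', 'average', 'centroid']
scores = {m: mean_silhouette(X, fcluster(linkage(X, m), 2, 'maxclust'))
          for m in methods}
best = max(scores, key=scores.get)
```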

A Study on Containerports Clustering Using Artificial Neural Network(Multilayer Perceptron and Radial Basis Function), Social Network, and Tabu Search Models with Empirical Verification of Clustering Using the Second Stage(Type IV) Cross-Efficiency Matrix Clustering Model (인공신경망모형(다층퍼셉트론, 방사형기저함수), 사회연결망모형, 타부서치모형을 이용한 컨테이너항만의 클러스터링 측정 및 2단계(Type IV) 교차효율성 메트릭스 군집모형을 이용한 실증적 검증에 관한 연구)

  • Park, Ro-Kyung
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.9 no.6
    • /
    • pp.757-772
    • /
    • 2019
  • The purpose of this paper is to measure the clustering change, analyze empirical results, and choose the clustering ports for Busan, Incheon, and Gwangyang by using Artificial Neural Network, Social Network, and Tabu Search models on 38 Asian container ports over the period 2007-2016. The models consider number of cranes, depth, berth length, and total area as inputs and container throughput as output. The main empirical results are as follows. First, the ranking of variables affecting the clustering according to the artificial neural network is TEU, berth length, depth, total area, and number of cranes. Second, social network analysis shows the same clustering in the benevolent and aggressive models. Third, the efficiency of domestic ports worsens after clustering using the social network analysis and tabu search models. Fourth, the social network and tabu search models can increase efficiency by 37% compared to the general CCR model. Fifth, according to the social network analysis and tabu search models, the three Korean ports could be clustered with Asian ports as follows: Busan Port (Kobe, Osaka, Port Klang, Tanjung Pelepas, and Manila), Incheon Port (Shahid Rajaee and Gwangyang), and Gwangyang Port (Aqaba, Port Sultan Qaboos, Dammam, Khor Fakkan, and Incheon). The Korean seaport authority should introduce port improvement plans using the methods presented in this paper.

Perceptional Change of a New Product, DMB Phone

  • Kim, Ju-Young;Ko, Deok-Im
    • Journal of Global Scholars of Marketing Science
    • /
    • v.18 no.3
    • /
    • pp.59-88
    • /
    • 2008
  • Digital convergence means integration between industry, technology, and contents; in marketing, it usually comes with the creation of new types of products and services on the base of digital technology, as digitalization progresses in electro-communication industries including the telecommunication, home appliance, and computer industries. One can see digital convergence not only in instruments such as PCs, AV appliances, and cellular phones, but also in the contents, networks, and services required in the production, modification, distribution, and re-production of information. Convergence in contents started around 1990. Convergence in networks and services began as broadcasting and telecommunication integrated, and DMB (digital multimedia broadcasting), born in May 2005, is the symbolic icon of this trend. There are both positive and negative expectations about DMB. The reason why two opposite expectations exist is that DMB did not come from customer need but from technology development. Therefore, customers may have a hard time interpreting the real meaning of DMB. Time is quite critical for a high-tech product like DMB, because another product with the same function from a different technology can replace the existing product within a short period of time. If DMB is not positioned well in customers' minds quickly, other products like WiBro, IPTV, or HSDPA could replace it before it even spreads out. Therefore, positioning strategy is critical for the success of DMB. To build a correct positioning strategy, one needs to understand how consumers interpret DMB and how that interpretation can be changed via communication strategy. In this study, we investigate how consumers perceive a new product like DMB and how advertising strategy changes consumer perception. More specifically, the paper segments consumers into sub-groups based on their DMB perceptions and compares their characteristics in order to understand how they perceive DMB.
We then expose them to different printed ads whose messages guide consumers to think of DMB in specific ways, either as a cellular phone or as a personal TV. Research Question 1: Segment consumers according to perceptions about DMB and compare the characteristics of the segments. Research Question 2: Compare perceptions about DMB after an ad that induces categorization of DMB in a given direction for each segment. If one can understand and predict the direction in which consumers perceive a new product, a firm can select target customers easily. We segment consumers according to their perception and analyze their characteristics in order to find variables that can influence perceptions, like prior experience, usage, or habit. Marketing people can then use these variables to identify target customers and predict their perceptions. If one knows how customer perception is changed via ad messages, communication strategy can be constructed properly. In particular, information from segmented customers helps to develop an efficient ad strategy for a segment that holds a prior perception. The research framework consists of two measurements and one treatment, O1 X O2. The first observation collects information about consumer perceptions and characteristics. Based on the first observation, the paper segments consumers into two groups: one group perceives DMB as similar to a cellular phone, and the other perceives DMB as similar to a TV. We compare the characteristics of the two segments in order to find out why they perceive DMB differently. Next, we expose two kinds of ads to subjects. One ad describes DMB as a cellular phone and the other describes DMB as a personal TV. When the two ads are exposed to subjects, we do not know their prior perception of DMB, in other words, whether a subject belongs to the 'similar-to-cellular-phone' segment or the 'similar-to-TV' segment. However, we analyze the ad's effect separately for each segment. In the research design, the final observation investigates the ad effect.
Perception before the ad is compared with perception after the ad. Comparisons are made for each segment and for each ad. For the segment that perceives DMB as similar to TV, the ad that describes DMB as a cellular phone could change the prior perception, while the ad that describes DMB as a personal TV could reinforce it. For data collection, subjects were selected from undergraduate students because they have basic knowledge about most digital equipment and an open attitude toward new products and media. The total number of subjects is 240. In order to measure perception about DMB, we use an indirect measurement: comparison with other similar digital products. To select similar digital products, we pre-surveyed students and finally selected PDA, Car-TV, cellular phone, MP3 player, TV, and PSP. A quasi-experiment was conducted in several classes with the instructors' permission. After a brief introduction, prior knowledge, awareness, and usage of DMB and the other digital instruments were asked, and their similarities and perceived characteristics were measured. Then two kinds of manipulated color-printed ads were distributed, and the similarities and perceived characteristics for DMB were re-measured. Finally, purchase intention, ad attitude, manipulation checks, and demographic variables were asked. Subjects were given a small gift for participation. The stimuli are color-printed advertisements, A4 in actual size, made after several pre-tests with advertising professionals and students. As a result, consumers are segmented into two subgroups based on their perceptions of DMB. The similarity measure between DMB and cellular phone and the similarity measure between DMB and TV are used to classify consumers. If a subject's first measure is less than the second, she is classified into segment A, which is characterized as perceiving DMB like a TV. Otherwise, she is classified into segment B, which perceives DMB like a cellular phone.
Discriminant analysis on these groups with their characteristics of usage and attitude shows that segment A knows much about DMB and uses many digital instruments. Segment B, which thinks of DMB as a cellular phone, does not know DMB well and is not familiar with other digital instruments. So, consumers with higher knowledge perceive DMB as similar to TV, because the launch advertising for DMB led consumers to think of DMB as TV. Consumers with less interest in digital products do not know the DMB ads well and therefore think of DMB as a cellular phone. In order to investigate perceptions of DMB as well as the other digital instruments, we apply Proxscal analysis, a multidimensional scaling technique, in the SPSS statistical package. In the first step, subjects are presented with 21 pairs of 7 digital instruments and make similarity judgments on a 7-point scale. For each segment, the similarity judgments are averaged and a similarity matrix is made. Second, Proxscal analyses of segments A and B are done. In the third stage, similarity judgments between DMB and the other digital instruments are obtained after ad exposure. Lastly, the similarity judgments of groups A-1, A-2, B-1, and B-2 are labeled 'after DMB' and put into the matrix made at the first stage. Proxscal analysis is then applied to these matrices to check the positional difference between DMB and 'after DMB'. The results show that the map of segment A, which perceives DMB as similar to TV, places DMB closer to TV than to cellular phone, as expected. The map of segment B, which perceives DMB as similar to a cellular phone, places DMB closer to cellular phone than to TV, as expected. The stress value and R-square are acceptable. The post-stimuli results show that the ad bends the DMB perception toward cellular phone when the cellular-phone-like ad is exposed, and that the DMB position moves toward Car-TV, a more personalized product, when the TV-like ad is exposed. This holds consistently for both segments A and B.
Furthermore, the paper applies correspondence analysis to the same data and finds almost the same results. The paper answers two main research questions. The first is that perception of a new product is formed mainly from prior experience. The second is that advertising is effective in changing and reinforcing perception. In addition, we extend perception change to purchase intention. Purchase intention is high when the ad reinforces the original perception; the ad that shows DMB as a TV yields the worst intention. This paper has limitations and issues to be pursued in the near future. Methodologically, the current methodology cannot provide a statistical test of the perceptual change, since classical MDS models like Proxscal and correspondence analysis are not probability models. So, a new probabilistic MDS model for testing hypotheses about configurations needs to be developed. Next, the advertising messages need to be developed more rigorously from theoretical and managerial perspectives. The experimental procedure could also be improved for more realistic data collection: for example, web-based experiments, real product stimuli, and multimedia presentation could be employed, or products could be displayed together in a simulated shop. In addition, demand and social desirability threats to internal validity could influence the results. In order to handle these threats, results of the model-intended advertising and other "pseudo" advertising could be compared. Furthermore, one could try various levels of innovativeness in order to check whether they make any difference in the results (cf. Moon 2006). Finally, if one can create a hypothetical product that is truly innovative and new for research, it helps to create a vacant impression status and then to study how impressions form in a more rigorous way.
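The perceptual maps above come from multidimensional scaling. SPSS's Proxscal is an iterative, stress-minimizing algorithm; as a simpler stand-in, classical (Torgerson) MDS recovers a 2-D map directly from a dissimilarity matrix, which is enough to illustrate how product positions such as DMB's are plotted. The small dissimilarity matrix below is hypothetical.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed an n x n dissimilarity matrix D
    into k dimensions via double centering plus an eigendecomposition.
    This is a stand-in for Proxscal, which minimizes stress iteratively."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:k]            # keep the k largest eigenvalues
    return V[:, order] * np.sqrt(np.maximum(w[order], 0))

# Hypothetical averaged dissimilarities among 4 products (e.g. DMB, phone, TV, MP3)
D = np.array([[0.0, 2.0, 5.0, 6.0],
              [2.0, 0.0, 5.5, 6.5],
              [5.0, 5.5, 0.0, 3.0],
              [6.0, 6.5, 3.0, 0.0]])
coords = classical_mds(D)  # 4 points in the 2-D perceptual map
```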

  • PDF

Glass Dissolution Rates From MCC-1 and Flow-Through Tests

  • Jeong, Seung-Young
    • Proceedings of the Korean Radioactive Waste Society Conference
    • /
    • 2004.06a
    • /
    • pp.257-258
    • /
    • 2004
  • The dose from radionuclides released from high-level radioactive waste (HLW) glasses as they corrode must be taken into account when assessing the performance of a disposal system. In the performance assessment (PA) calculations conducted for the proposed Yucca Mountain, Nevada, disposal system, the release of radionuclides is conservatively assumed to occur at the same rate the glass matrix dissolves. A simple model was developed to calculate the glass dissolution rate of HLW glasses in these PA calculations [1]. For the PA calculations that were conducted for Site Recommendation, it was necessary to identify ranges of parameter values that bounded the dissolution rates of the wide range of HLW glass compositions that will be disposed. The values and ranges of the model parameters for the pH and temperature dependencies were extracted from the results of SPFT, static leach, and Soxhlet tests available in the literature. Static leach tests were conducted with a range of glass compositions to measure values for the glass composition parameter. The glass dissolution rate depends on temperature, pH, and the compositions of the glass and solution. The dissolution rate is calculated using Eq. 1: $rate = k_{0}\,10^{\eta\,\mathrm{pH}}\cdot e^{-E_{a}/RT}\cdot(1-Q/K) + k_{long}$, where $k_{0}$, $\eta$, and $E_{a}$ are the parameters for glass composition, pH, and temperature dependence, respectively, and R is the gas constant. The term $(1-Q/K)$ is the affinity term, where Q is the ion activity product of the solution and K is the pseudo-equilibrium constant for the glass.
Values of the parameters $k_0$, $\eta$, and $E_a$ are determined under test conditions where the value of Q is maintained near zero, so that the value of the affinity term remains near 1. The dissolution rate under conditions in which the affinity term is near 1 is referred to as the forward rate. This is the highest dissolution rate that can occur at a particular pH and temperature. The value of the parameter K is determined from experiments in which the value of the ion activity product approaches the value of K; this results in a decrease in the affinity term and the dissolution rate. The highly dilute solutions required to measure the forward rate and extract values for $k_0$, $\eta$, and $E_a$ can be maintained by conducting dynamic tests in which the test solution is removed from the reaction cell and replaced with fresh solution. In the single-pass flow-through (SPFT) test method, this is done by continuously pumping the test solution through the reaction cell. Alternatively, static tests can be conducted with sufficient solution volume that the concentrations of dissolved glass components do not increase significantly during the test. Both the SPFT and static tests can be conducted for a wide range of pH values and temperatures. Both static and SPFT tests have shortcomings. The SPFT test requires analysis of several solutions (typically 6-10) at each of several flow rates to determine the glass dissolution rate at each pH and temperature. As will be shown, the rate measured in an SPFT test depends on the solution flow rate. The solutions in static tests will eventually become concentrated enough to affect the dissolution rate. In both the SPFT and static test methods, a compromise is required between the need to minimize the effects of dissolved components on the dissolution rate and the need to attain solution concentrations that are high enough to analyze.
In this paper, we compare the results of static leach tests and SPFT tests conducted with a simple 5-component glass to confirm the equivalence of SPFT tests and static tests conducted with pH buffer solutions. Tests were conducted over the range of pH values that are most relevant for waste glass dissolution in a disposal system. The glass and temperature used in the tests were selected to allow direct comparison with SPFT tests conducted previously. The ability to measure parameter values with more than one test method, and an understanding of how the rate measured in each test is affected by various test parameters, provide added confidence in the measured values. The dissolution rate of a simple 5-component glass was measured at pH values of 6.2, 8.3, and 9.6 and $70^{\circ}C$ using static tests and single-pass flow-through (SPFT) tests. Similar rates were measured with the two methods. However, the measured rates are about 10X higher than the rates measured previously for a glass having the same composition using an SPFT test method. The differences are attributed to effects of the solution flow rate on the glass dissolution rate and to how the specific surface area of the crushed glass is estimated. This comparison indicates the need to standardize the SPFT test procedure.
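Eq. 1 above can be written directly as a function. The parameter values in the example are placeholders chosen for illustration, not measured values from the paper.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def dissolution_rate(pH, T, Q, K, k0, eta, Ea, k_long=0.0):
    """Glass dissolution rate per Eq. 1:
    rate = k0 * 10^(eta*pH) * exp(-Ea / (R*T)) * (1 - Q/K) + k_long
    All parameter values passed below are hypothetical."""
    return k0 * 10 ** (eta * pH) * math.exp(-Ea / (R * T)) * (1 - Q / K) + k_long

# Forward rate: a dilute solution (Q ~ 0) keeps the affinity term near 1
forward = dissolution_rate(pH=9.6, T=343.15, Q=0.0, K=1e-3,
                           k0=1e4, eta=0.4, Ea=75e3)
# Near saturation (Q -> K) the affinity term, and hence the rate, drops
near_sat = dissolution_rate(pH=9.6, T=343.15, Q=9e-4, K=1e-3,
                            k0=1e4, eta=0.4, Ea=75e3)
```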


An Intelligence Support System Research on KTX Rolling Stock Failure Using Case-based Reasoning and Text Mining (사례기반추론과 텍스트마이닝 기법을 활용한 KTX 차량고장 지능형 조치지원시스템 연구)

  • Lee, Hyung Il;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.47-73
    • /
    • 2020
  • KTX rolling stock is a system consisting of several machines, electrical devices, and components. Maintenance of the rolling stock requires considerable expertise and experience from maintenance workers. In the event of a rolling stock failure, the knowledge and experience of the maintainer make a difference in the time and quality of the work to solve the problem, and the resulting availability of the vehicle varies accordingly. Although problem solving is generally based on fault manuals, experienced and skilled professionals can quickly diagnose and act by applying personal know-how. Since this knowledge exists in a tacit form, it is difficult to pass it on completely to a successor, and previous studies have developed case-based rolling stock expert systems to turn it into a data-driven form. Nonetheless, research on the KTX rolling stock most commonly used on the main line, and on systems that extract textual meaning and search for similar cases, is still lacking. Therefore, this study proposes an intelligent support system that provides an action guide for emerging failures by using the know-how of rolling stock maintenance experts as problem-solving examples. For this purpose, a case base was constructed by collecting rolling stock failure data generated from 2015 to 2017, and an integrated dictionary was built from the case base to include the essential terminology and failure codes, in consideration of the specialized character of the railway rolling stock sector. Based on the deployed case base, a new failure is matched against past cases and the top three most similar failure cases are extracted, with the actual actions from those cases proposed as a diagnostic guide.
In this study, various dimensionality reduction measures were applied to calculate similarity while taking into account the semantic relationships among failure details, in order to overcome the limitations of the keyword-matching case retrieval used in previous case-based-reasoning studies of rolling stock failure expert systems, and their usefulness was verified through experiments. Among the various dimensionality reduction techniques, similar cases were retrieved by applying three algorithms, Non-negative Matrix Factorization (NMF), Latent Semantic Analysis (LSA), and Doc2Vec, to extract the characteristics of each failure and measure the cosine distance between the resulting vectors. Precision, recall, and F-measure were used to assess the performance of the proposed actions. To compare the performance of the dimensionality reduction techniques, analysis of variance confirmed that the performance differences among five algorithms were statistically significant, including a baseline that randomly extracts failure cases with identical failure codes and a baseline that applies cosine similarity directly to word vectors. In addition, optimal techniques for practical application were derived by verifying the differences in performance depending on the number of dimensions used for reduction. The analysis showed that direct word-based cosine similarity outperformed NMF and LSA, and that the Doc2Vec-based algorithm performed best. Furthermore, in terms of the dimensionality reduction techniques, performance improved as the number of dimensions grew to an appropriate level.
Through this study, we confirmed the usefulness of effective methods for extracting the characteristics of data and converting unstructured data when applying case-based reasoning in the specialized field of KTX rolling stock, where most of the attributes are text. Text mining is being studied for use in many areas, but studies using such text data are still lacking in environments with many specialized terms and limited access to data, such as the one addressed here. In this regard, it is significant that this study first presented an intelligent diagnostic system that suggests actions by retrieving cases with text mining techniques that extract the characteristics of a failure, complementing keyword-based case search. It is expected to provide implications as a basic study for developing diagnostic systems that can be used immediately on site.
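One of the retrieval variants described above (LSA followed by cosine similarity) can be sketched with plain numpy: reduce a term-document matrix with a truncated SVD, project a new failure description into the latent space, and return the most similar past cases. The tiny matrix below is illustrative only, not the study's case base.

```python
import numpy as np

def lsa_top_cases(term_doc, query, k=2, top=3):
    """Rank stored failure cases by cosine similarity in an LSA space.

    term_doc: terms x cases count matrix; query: term-count vector for
    a new failure; k: latent dimensions kept after the truncated SVD.
    """
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    Uk = U[:, :k]                          # term -> latent-dimension map
    cases = (Uk.T @ term_doc).T            # past cases in latent space
    q = Uk.T @ query                       # query in latent space
    sims = cases @ q / (np.linalg.norm(cases, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(sims)[::-1][:top]    # indices of the most similar cases

# Four past cases over four hypothetical failure terms; cases 0-1 share a topic
tdm = np.array([[3., 2., 0., 0.],
                [2., 3., 0., 0.],
                [0., 0., 2., 1.],
                [0., 0., 1., 2.]])
ranked = lsa_top_cases(tdm, np.array([3., 2., 0., 0.]))
```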

A Study on the Evaluation Technique of Damage of Metal Matrix Composite Using X-Ray Fractography Method (X선 프렉토그래피기법을 이용한 금속복합재료의 피로손상 해석에 관한 연구)

  • Park, Young-Chul;Yun, Doo-Pyo;Park, Dong-Sung;Kim, Deug-Jin;Kim, Kwang-Young
    • Journal of the Korean Society for Nondestructive Testing
    • /
    • v.18 no.3
    • /
    • pp.172-180
    • /
    • 1998
  • This study attempts to verify the quantitative relationship between fracture mechanical parameters (${\Delta}K$, $K_{max}$) and X-ray parameters (residual stress, half-value breadth) of an Al2009-15v/o $SiC_w$ composite and normalized SS41 steel. Fatigue crack propagation tests were carried out, and X-ray diffraction was applied to the fatigue fracture surfaces in order to investigate the change of residual stress and half-value breadth on the fractured surface. In addition, prestrain was applied to tensile specimens of the Al2009-15v/o $SiC_w$ composite (0.3, 0.5, 1, 1.5, 2%) and normalized SS41 steel (0.63, 2.25, 7.50, 13.7, 20%) in order to investigate the plastic strain rate using a nondestructive measurement method. X-ray diffraction was applied to the prestrained tensile specimens in order to measure the change of residual stress and half-value breadth.


Multivariate conditional tail expectations (다변량 조건부 꼬리 기대값)

  • Hong, C.S.;Kim, T.W.
    • The Korean Journal of Applied Statistics
    • /
    • v.29 no.7
    • /
    • pp.1201-1212
    • /
    • 2016
  • Value at Risk (VaR) is a favorite market risk management measure used by financial companies; however, it cannot explain the magnitude of loss when a specific investment fails. Conditional Tail Expectation (CTE) is an alternative risk measure defined as the conditional expectation of losses exceeding VaR. In real financial markets, multivariate loss rates are transformed into a univariate distribution in order to obtain and estimate the CTE of a portfolio. We propose multivariate CTEs using multivariate quantile vectors. A relationship among multivariate CTEs is also derived by extending univariate CTEs. Multivariate CTEs are obtained from bivariate and trivariate normal distributions, and relationships among them are explored. We then discuss the extensibility to high dimensions and illustrate some examples. Multivariate CTEs (using the variance-covariance matrix and a multivariate quantile vector) are found to have smaller values than CTEs obtained from the univariate transformation. Therefore, it can be concluded that the proposed multivariate CTEs provide smaller estimates that represent less risk, and that a more aggressive investment using this CTE is possible when a diversified investment strategy includes many companies in a portfolio.
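The univariate CTE and a component-wise multivariate variant can be estimated empirically in a few lines. This Monte Carlo sketch uses a simulated bivariate normal loss distribution, not market data, and the component-wise tail condition is only one of several ways to build a multivariate CTE from a quantile vector.

```python
import numpy as np

def cte(losses, alpha=0.95):
    """Univariate CTE: mean loss beyond the VaR_alpha quantile."""
    var = np.quantile(losses, alpha)
    return var, losses[losses > var].mean()

def multivariate_cte(L, alpha=0.95):
    """Component-wise variant: mean loss vector given that every
    component exceeds its own marginal VaR_alpha quantile."""
    q = np.quantile(L, alpha, axis=0)          # multivariate quantile vector
    tail = (L > q).all(axis=1)                 # joint tail event
    return q, L[tail].mean(axis=0)

# Simulated correlated bivariate normal losses
rng = np.random.default_rng(7)
cov = np.array([[1.0, 0.8], [0.8, 1.0]])
L = rng.multivariate_normal([0.0, 0.0], cov, size=100_000)
var_1d, cte_1d = cte(L[:, 0])
q_vec, cte_vec = multivariate_cte(L)
```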