• Title/Summary/Keyword: complex analytical method

Search Results: 431

The Potential Energy Recovery and Thermal Degradation of Used Tire Using TGA (열분석법을 이용한 사용후 타이어의 열적 특성과 포텐셜 에너지의 회수)

  • Kim, Won-Il;Kim, Hyung-Jin;Hong, In-Kwon
    • Elastomers and Composites
    • /
    • v.34 no.2
    • /
    • pp.135-146
    • /
    • 1999
  • The thermal degradation kinetics of SBR and tire rubber were studied by conventional thermogravimetric analysis in a nitrogen stream at heating rates of 5, 10, 15, and $20^{\circ}C/min$. Thermogravimetric curves and their derivatives were analyzed using various analytical methods to determine the kinetic parameters. The degradation of the SBR and tire was found to be a complex, multi-stage process. The Friedman method gave average activation energies for the SBR and tire of 247.53 kJ/mol and 230.00 kJ/mol, respectively; the Ozawa method gave 254.80 kJ/mol and 215.76 kJ/mol. It would appear that either Friedman's differential method or Ozawa's integral method provides a satisfactory mathematical approach for determining the kinetic parameters of SBR and tire degradation. Approximately 86% and 55% of oil products were obtained at a final temperature of $700^{\circ}C$ and a heating rate of $20^{\circ}C/min$ for the SBR and tire, respectively.

  • PDF
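The isoconversional analysis described in the abstract can be sketched numerically. Below is a minimal illustration of the Ozawa (Flynn-Wall-Ozawa) method, in which log10 of the heating rate versus inverse temperature at a fixed conversion is linear with slope proportional to the activation energy. The temperatures here are synthetic, generated from an assumed activation energy near the paper's average for tire rubber; the `0.4567` factor is Doyle's approximation:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def ozawa_activation_energy(betas, temps_K):
    """Estimate Ea (J/mol) at a fixed conversion by the
    Flynn-Wall-Ozawa method: log10(beta) vs 1/T is linear with
    slope = -0.4567*Ea/R (Doyle's approximation)."""
    slope, _ = np.polyfit(1.0 / np.asarray(temps_K, float),
                          np.log10(np.asarray(betas, float)), 1)
    return -slope * R / 0.4567

# synthetic check: build temperatures consistent with an assumed Ea
# close to the paper's average for tire rubber (230 kJ/mol)
Ea_true = 230e3                             # J/mol (assumption)
betas = np.array([5.0, 10.0, 15.0, 20.0])   # heating rates, deg C/min
T_ref = 700.0                               # arbitrary reference temperature, K
inv_T = 1.0/T_ref + (np.log10(betas[0]) - np.log10(betas)) * R / (0.4567 * Ea_true)
Ea_est = ozawa_activation_energy(betas, 1.0 / inv_T)
```

Since the synthetic data are exactly linear, the fit recovers the assumed activation energy; with real TGA data the slope is taken at several fixed conversions and the estimates averaged, as in the paper.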

A Study of Reportable Range Setting through Concentrated Control Sample (약물검사에서 관리시료의 농축을 이용한 보고 가능 범위의 설정에 대한 연구)

  • Chang, Sang Wu;Kim, Nam Yong;Choi, Ho Sung;Park, Yong Won;Yun, Keun Young
    • Korean Journal of Clinical Laboratory Science
    • /
    • v.36 no.1
    • /
    • pp.13-18
    • /
    • 2004
  • This study was designed to establish a working range for the reportable range in our own laboratory, covering the upper and lower limits of each test method. For method evaluation, we ran the experiment ten times over 10 days with between-run replicates to set the reportable range. It is generally assumed that the analytical method produces a linear response and that test results between those upper and lower limits are then reportable. CLIA recommends that laboratories verify the reportable range of all moderate- and high-complexity tests. The Clinical Laboratory Improvement Amendments (CLIA) and the Laboratory Accreditation Program of the Korean Society for Laboratory Medicine state that a reportable range is only required for "modified" moderately complex tests. Although linearity requirements have been eliminated from the CLIA regulations and those of other accreditation agencies, many inspectors continue to feel that linearity studies are part of good laboratory practice and should be encouraged. It is important to assess the useful reportable range of a laboratory method, i.e., the lowest and highest test results that are reliable and can be reported. Manufacturers make claims for the reportable range of their methods by stating the upper and lower limits of the range; instrument manufacturers state both an operating range and a reportable range. Commercial linearity material can be used to verify this range if it adequately covers the stated linear interval. Under CLIA quality-control requirements, a laboratory must demonstrate, prior to reporting patient test results, that it can obtain performance specifications for accuracy, precision, and reportable range comparable to those established by the manufacturer. If applicable, the laboratory must also verify the reportable range of patient test results.
The reportable range of patient test results is the range of values over which the laboratory can establish or verify the accuracy of the instrument, kit, or test system measurement response. We need to define the usable reportable range of the method so that experiments can be properly planned and valid data collected. The reportable range is usually defined as the range over which the analytical response of the method is linear with respect to the concentration of the analyte being measured. In conclusion, the experimental reportable ranges obtained using concentrated control samples and zero calibrators were, at the minimum level, salicylate $8.8{\mu}g/dL$, phenytoin $0.67{\mu}g/dL$, phenobarbital $1.53{\mu}g/dL$, primidone $0.16{\mu}g/dL$, theophylline $0.2{\mu}g/dL$, vancomycin $1.3{\mu}g/dL$, valproic acid $3.2{\mu}g/dL$, digitoxin 0.17 ng/dL, carbamazepine $0.36{\mu}g/dL$, and acetaminophen $0.7{\mu}g/dL$; and, at the maximum level, salicylate $969.9{\mu}g/dL$, phenytoin $38.1{\mu}g/dL$, phenobarbital $60.4{\mu}g/dL$, primidone $24.57{\mu}g/dL$, theophylline $39.2{\mu}g/dL$, vancomycin $83.65{\mu}g/dL$, valproic acid $147.96{\mu}g/dL$, digitoxin 5.04 ng/dL, carbamazepine $19.76{\mu}g/dL$, and acetaminophen $300.92{\mu}g/dL$.

  • PDF
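The linearity verification the abstract describes can be sketched as a simple least-squares check on a dilution series of a concentrated control pool: fit measured against expected concentrations and confirm a slope near 1 with a high R². The dilution values below are hypothetical, chosen only to span the theophylline range reported in the abstract, and `r2_min` is an assumed acceptance threshold, not a CLIA figure:

```python
import numpy as np

def verify_linearity(expected, measured, r2_min=0.99):
    """Fit measured = a*expected + b by least squares and report
    slope, intercept, and R^2; the range is considered verified
    if R^2 exceeds r2_min over the tested interval."""
    expected = np.asarray(expected, float)
    measured = np.asarray(measured, float)
    a, b = np.polyfit(expected, measured, 1)
    pred = a * expected + b
    ss_res = np.sum((measured - pred) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return a, b, r2, r2 >= r2_min

# hypothetical dilution series of a concentrated theophylline pool (ug/dL),
# spanning the abstract's reported 0.2 to 39.2 ug/dL interval
expected = [0.2, 5.0, 10.0, 20.0, 30.0, 39.2]
measured = [0.25, 5.1, 9.8, 20.3, 29.6, 39.0]
slope, intercept, r2, ok = verify_linearity(expected, measured)
```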

Application of Back Analysis for Tunnel Design by Modified In Situ Rock Model (현장암반 모델을 적용한 터널의 역해석)

  • Kim, Hak-Mun;Lee, Bong-Yeol;Hwang, Ui-Seok;Kim, Tae-Hun
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.2 no.3
    • /
    • pp.25-36
    • /
    • 2000
  • The purpose of this research is to propose an analytical method of tunnel design based on reliable site data; the proposed design method therefore combines monitoring data with a Modified In Situ Rock Model. The Rock Mass Rating for very poor quality rock is very difficult to estimate, and the balance between the ratings may no longer give a reliable basis for the rock mass strength; in practice, however, the Rock Mass Rating is the only property that can be obtained from face mapping records of the exposed tunnel face during construction. Rock parameters evaluated for the design prior to tunnel construction should be corrected during tunnelling, particularly in complex ground conditions. This study investigates the application of the in-situ rock model to soft-rock (weathered rock) tunnelling, using face mapping results and site measurement data obtained at a construction site of the Seoul Subway Tunnel. To prepare more reliable ground parameters, the Rock Mass Rating values for the weathered rocks were modified and readjusted in accordance with the measurement data. The modified input parameters obtained by the proposed method are used to predict tunnel behavior at subsequent construction stages. The results of this study revealed that a more reasonable feedback tunnel analysis is possible as suggested; ample measurement data would confirm the newly proposed technique.

  • PDF

ADMM algorithms in statistics and machine learning (통계적 기계학습에서의 ADMM 알고리즘의 활용)

  • Choi, Hosik;Choi, Hyunjip;Park, Sangun
    • Journal of the Korean Data and Information Science Society
    • /
    • v.28 no.6
    • /
    • pp.1229-1244
    • /
    • 2017
  • In recent years, as demand for data-based analytical methodologies has increased in various fields, optimization methods have been developed to handle them. In particular, many constrained problems in statistics and machine learning can be solved by convex optimization. The alternating direction method of multipliers (ADMM) can effectively deal with linear constraints and can be used as a parallel optimization algorithm. ADMM is an approximation algorithm that solves a complex original problem by splitting it into subproblems that are easier to optimize and combining their solutions; it is useful for optimizing non-smooth or composite objective functions. It is widely used in statistics and machine learning because algorithms can be constructed systematically from duality theory and the proximal operator. In this paper, we examine applications of the ADMM algorithm in various fields related to statistics, focusing on two major points: (1) the splitting strategy for the objective function, and (2) the role of the proximal operator in explaining the Lagrangian method and its dual problem. We introduce methodologies that utilize regularization, and simulation results are presented to demonstrate the effectiveness of the lasso.
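The splitting strategy and proximal operator the abstract highlights can be sketched on the lasso itself: `f(x) = 0.5||Ax-b||^2` is handled by a cached linear solve, and `g(z) = lam*||z||_1` by its proximal operator, soft-thresholding. The penalty `rho`, iteration count, and test problem below are illustrative choices, not values from the paper:

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 by ADMM with the
    splitting f(x) = 0.5||Ax-b||^2, g(z) = lam||z||_1, constraint x = z."""
    m, n = A.shape
    Atb = A.T @ b
    # factor (A^T A + rho*I) once; reused every x-update
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    soft = lambda v, k: np.sign(v) * np.maximum(np.abs(v) - k, 0.0)
    for _ in range(n_iter):
        # x-update: smooth quadratic subproblem (ridge-like solve)
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: proximal operator of the l1 norm = soft-thresholding
        z = soft(x + u, lam / rho)
        # dual (scaled multiplier) update
        u = u + x - z
    return z  # z is exactly sparse

# small sparse-recovery test problem
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
x_true = np.zeros(10); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = admm_lasso(A, b, lam=0.5)
```

The z-update is where the proximal operator appears: each term of the objective is optimized in the variable it is easiest for, and the dual update enforces the consensus constraint x = z.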

Optimum Rake Processing for Multipath Fading in Direct-Sequence Spread-Spectrum Communication Systems (주파수대역 직접확산 통신시스템에서 다중경로 페이딩 보상을 위한 최적 레이크 신호처리에 관한 연구)

  • 장원석;이재천
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.10C
    • /
    • pp.995-1006
    • /
    • 2003
  • It is well known that in wireless communication systems the transmitted signals can suffer from multipath fading, due to the wave propagation characteristics and the obstacles along the paths, resulting in a serious reduction in the power of the received signals. However, it is possible to take advantage of the diversity inherent in multipath reception if the underlying channel can be properly estimated. One diversity reception method in this case is Rake processing. In this paper we study Rake receivers for direct-sequence spread-spectrum communication systems that use PN (pseudo-noise) sequences to achieve spectrum spreading. A conventional Rake receiver can use a finite-duration impulse response (FIR) filter followed by the PN-sequence demodulator, where the FIR filter coefficients are the reverse-ordered complex conjugates of the fading-channel impulse response estimates. Here, we propose a new Rake processing method that replaces the aforementioned PN code sequence with a new set of optimum demodulator coefficients. More specifically, the concept of the new optimum Rake processing is first introduced, the optimum demodulator coefficients are then derived theoretically, and the resulting performance is calculated. The analytical results are verified by computer simulation. As a result, the new optimum Rake processing method is shown to improve the MSE performance by more than 10 dB over the conventional method using the fixed PN-sequence demodulator, and by about 10 dB over the adaptive correlator that performs multipath combining and PN demodulation concurrently. Finally, the MSE performance of the optimum Rake demodulator is very close to that of the QPSK demodulator over the AWGN channel.
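The conventional Rake front end the abstract describes (an FIR filter whose taps are the reverse-ordered complex conjugates of the channel estimates, followed by PN despreading) can be sketched on a toy noise-free link. The PN length, channel taps, and symbol count below are arbitrary illustrative values, not the paper's simulation setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical toy link: 8-chip PN sequence, 3-tap multipath channel
pn = rng.choice([-1.0, 1.0], size=8)           # spreading (PN) sequence
h = np.array([1.0, 0.3 - 0.2j, 0.1])           # channel impulse response

bits = rng.choice([-1.0, 1.0], size=20)        # BPSK data symbols
tx = np.concatenate([b * pn for b in bits])    # spread chip stream
rx = np.convolve(tx, h)                        # noise-free multipath reception

# conventional Rake front end: FIR filter whose coefficients are the
# reverse-ordered complex conjugates of the channel estimates
mf = np.conj(h[::-1])
y = np.convolve(rx, mf)

# despread: the combined channel/matched filter peaks at lag len(h)-1,
# so correlate each symbol interval of y, offset by that lag, with pn
lag = len(h) - 1
decisions = np.array([
    np.sign(np.real(np.dot(y[lag + k*len(pn): lag + (k+1)*len(pn)], pn)))
    for k in range(len(bits))
])
```

The matched filter coherently combines the multipath echoes (the filter peak equals the channel energy), and the PN correlation then suppresses the residual inter-chip interference, which is the diversity gain Rake processing exploits.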

Analysis of comic 'Monster' based on J. Lacan's psychoanalytic theory - Focused on desire theory - (자끄 라깡의 정신분석 이론으로 본 만화 '몬스터' 분석 - 욕망이론을 중심으로 -)

  • Park, Hye Ri
    • Cartoon and Animation Studies
    • /
    • s.50
    • /
    • pp.153-185
    • /
    • 2018
  • This study analyzed the comic "Monster" based on J. Lacan's psychoanalytic theory. Lacan advocated a new psychoanalytic theory building on S. Freud's psychoanalysis and socio-cultural studies. The core of his theoretical background is the 'desire theory', which analyzes human desire: he distinguished desire from demand and held the basic proposition that 'human desire is the desire of the Other'. Lacan studied in depth the relationship between oneself and others, which he divided into the imaginary order (mirror stage), the symbolic order (Oedipus complex), and the real order (the desiring subject). Based on these theories, the main purpose of this study is to analyze Naoki Urasawa's comic "Monster" through Lacan's desire theory and examine what new meaning can be extracted. A qualitative research method was used to analyze "Monster", namely the descriptive phenomenological method devised by Giorgi (1985). Through this analytical method, the background, characters, and symbols of "Monster" were analyzed, and content analysis was performed based on Giorgi's (1985) theory and Lacan's desire theory. The analysis of meaning units and components related to desire theory yielded four components: identification, reproduction of desire, alienation, and unique desire and freedom. The results of this study are summarized as follows. First, on the basis of psychoanalysis, "Monster" is classified into four elements following Lacan's desire theory: identification of the twin characters, reproduction of desire, feelings of alienation, and the characters' unique desire and freedom. Second, the characters analyzed through Lacan's desire theory
attempted to reproduce their desires through identification with and projection onto the twins because of their traumatic childhood experiences. The characters who felt alienated in this process of reproduction brought about a tragic ending, completing human desire and freedom to fill their emptiness. These results show that Lacan's psychoanalytically grounded desire theory can serve as a new analytical framework for comics, suggesting new meanings. They further suggest that a new field of research, comic analysis using psychological theory, needs to be created, and that further studies in this field are required.

Effects of Geometric Characteristics on the Ultimate Behavior of Steel Cable-stayed Bridges (기하학적 특성이 강사장교의 극한 거동에 미치는 영향)

  • Kim, Seungjun;Shin, Do Hyoung;Choi, Byung Ho;Kang, Young Jong
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.32 no.6A
    • /
    • pp.327-336
    • /
    • 2012
  • This study presents the effects of various geometric properties on the ultimate behavior of steel cable-stayed bridges. Cable-stayed bridges are well known as a very efficient structural system because of their geometric characteristics, but the structure also shows complex behavior, including various nonlinearities that significantly affect its ultimate behavior. In this study, the effects of the geometric properties of the main members on the ultimate behavior under specific live-load cases, which had been studied in previous work, were investigated using a rational analytical method. In this parametric study, the sectional dimensions of the main members were considered as the main geometric parameters. For rational ultimate analysis under the specific live-load cases, a 2-step analysis method, consisting of initial shape analysis and live-load analysis, was used. As the analysis model, 920.0 m long steel cable-stayed bridges were used, and two different types of cable arrangement were considered to study the effect of the arrangement type. Through this study, the effects of various geometric properties on the ultimate behavior of steel cable-stayed bridges were intensively investigated.

Robust Design Method for Complex Stochastic Inventory Model

  • Hwang, In-Keuk;Park, Dong-Jin
    • Proceedings of the Korean Operations and Management Science Society Conference
    • /
    • 1999.04a
    • /
    • pp.426-426
    • /
    • 1999
  • There are many sources of uncertainty in a typical production and inventory system. There is uncertainty as to how many items customers will demand during the next day, week, month, or year, and uncertainty about delivery times of the product. Uncertainty exacts a toll from management in a variety of ways: a spurt in demand or a delay in production may lead to stockouts, with the potential for lost revenue and customer dissatisfaction. Firms typically hold inventory to provide protection against uncertainty; a cushion of inventory on hand allows management to face unexpected demands or delays in delivery with a reduced chance of incurring a stockout. The proposed strategies are used for the design of a probabilistic inventory system. In the traditional approach to the design of an inventory system, the goal is to find the best setting of the various inventory control policy parameters, such as the re-order level, review period, and order quantity, that minimizes the total inventory cost. The goals of the analysis need to be defined so that robustness becomes an important design criterion, and one has to conceptualize and identify appropriate noise variables. There are two main goals for the inventory policy design: one is to minimize the average inventory cost and the stockouts; the other is to minimize the variability of the average inventory cost and the stockouts. The total average inventory cost is the sum of three components: the ordering cost, the holding cost, and the shortage costs. The shortage costs include the cost of lost sales, loss of goodwill, customer dissatisfaction, etc. The noise factors for this design problem are identified as the mean demand rate and the mean lead time; both the demand and the lead time are assumed to be normal random variables.
Robustness for this inventory system is thus interpreted as insensitivity of the average inventory cost and the stockouts to uncontrollable fluctuations in the mean demand rate and mean lead time. To make this inventory system robust, the concept of utility theory is used. Utility theory is an analytical method for making a decision concerning an action to take, given a set of multiple criteria upon which the decision is to be based. It is appropriate for designs whose attributes have different scales, such as demand rate and lead time, since it maps attributes of different scales onto zero-to-one ranks, with higher preference modeled as a higher rank. Using utility theory, three design strategies for the robust inventory system will be developed: a distance strategy, a response strategy, and a priority-based strategy.

  • PDF
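The robustness notion above (insensitivity of average cost and stockouts to fluctuations in the mean demand rate and mean lead time) can be sketched with a small Monte Carlo of a continuous-review (Q, r) policy. All cost coefficients, noise levels, and policy parameters below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def simulate_qr(Q, r, mu_demand, mu_lead, n_days=365, seed=0,
                hold_cost=1.0, order_cost=50.0, shortage_cost=20.0):
    """One replication of a continuous-review (Q, r) policy with
    noisy daily demand and noisy lead times (both normal, truncated
    at zero / one day). Returns the average daily total cost."""
    rng = np.random.default_rng(seed)
    on_hand, pipeline = float(Q), []   # pipeline holds arrival days
    cost = 0.0
    for day in range(n_days):
        # receive any outstanding orders due today
        due = [d for d in pipeline if d <= day]
        on_hand += Q * len(due)
        pipeline = [d for d in pipeline if d > day]
        # noise factor 1: daily demand around the mean demand rate
        demand = max(0.0, rng.normal(mu_demand, 0.2 * mu_demand))
        if demand > on_hand:
            cost += shortage_cost * (demand - on_hand)  # stockout penalty
            on_hand = 0.0
        else:
            on_hand -= demand
        cost += hold_cost * on_hand                      # holding cost
        # reorder when inventory position falls to the re-order level r
        position = on_hand + Q * len(pipeline)
        if position <= r:
            # noise factor 2: lead time around the mean lead time
            lead = max(1.0, rng.normal(mu_lead, 0.3 * mu_lead))
            pipeline.append(day + lead)
            cost += order_cost                           # ordering cost
    return cost / n_days

# robustness check: mean and spread of daily cost across noise settings
costs = [simulate_qr(Q=200, r=60, mu_demand=d, mu_lead=L, seed=s)
         for d in (8, 10, 12) for L in (4, 5, 6) for s in range(3)]
avg_cost, cost_sd = np.mean(costs), np.std(costs)
```

A robust (Q, r) setting in this framing is one with both a low `avg_cost` and a low `cost_sd` across the grid of mean demand rates and mean lead times, which is the two-goal criterion the abstract describes.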

Incremental Maintenance of Horizontal Views Using a PIVOT Operation and a Differential File in Relational DBMSs (관계형 데이터베이스에서 PIVOT 연산과 차등 파일을 이용한 수평 뷰의 점진적인 관리)

  • Shin, Sung-Hyun;Kim, Jin-Ho;Moon, Yang-Sae;Kim, Sang-Wook
    • The KIPS Transactions:PartD
    • /
    • v.16D no.4
    • /
    • pp.463-474
    • /
    • 2009
  • To analyze multidimensional data conveniently and efficiently, OLAP (On-Line Analytical Processing) and e-business systems widely use views in a horizontal form to represent measurement values over multiple dimensions. These views can be stored as materialized views derived from several sources in order to support access to the integrated data, and they provide effective access for complex OLAP or e-business queries. However, maintaining the horizontal views is a problem, since the data sources are distributed over remote sites: we need a method that propagates changes of the source tables to the corresponding horizontal views. In this paper, we address incremental maintenance of horizontal views that reflects changes of the source tables efficiently. We first propose an overall framework that processes queries over horizontal views transformed from source tables in a vertical form. Under the proposed framework, we propagate the changes of the vertical tables to the corresponding horizontal views. To execute this view maintenance process efficiently, we keep every change of the vertical tables in a differential file and then modify the horizontal views using that file. Because the differential file is in a vertical form, its tuples are converted to a horizontal form before being applied to the out-of-date horizontal view. With this mechanism, horizontal views can be refreshed efficiently from the changes in the differential file without accessing the source tables. Experimental results show that the proposed method improves average performance by 1.2$\sim$5.0 times over the existing methods.
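The maintenance mechanism above can be sketched with pandas standing in for the relational PIVOT operation: pivot the vertical source once to materialize the horizontal view, then pivot only the differential-file tuples and apply them to the view without re-reading the source. The tables and the differential-file schema (an `op` column marking updates) are hypothetical illustrations, not the paper's implementation:

```python
import pandas as pd

# vertical source table: one (entity, dimension, measure) tuple per row
vertical = pd.DataFrame({
    "item":  ["A", "A", "B", "B"],
    "month": ["jan", "feb", "jan", "feb"],
    "sales": [10, 20, 30, 40],
})

# materialized horizontal view built via a PIVOT-style operation
horizontal = vertical.pivot(index="item", columns="month", values="sales")

# differential file: changes logged against the vertical source
# ('U' marks an updated value for an existing cell)
diff = pd.DataFrame({
    "item":  ["A", "B"],
    "month": ["feb", "jan"],
    "sales": [25, 35],
    "op":    ["U", "U"],
})

# incremental maintenance: pivot only the differential tuples into
# horizontal form and overwrite the affected cells of the view,
# leaving untouched cells intact and never re-scanning the source
delta = diff[diff.op == "U"].pivot(index="item", columns="month", values="sales")
horizontal.update(delta)
```

`DataFrame.update` skips the NaN cells of the pivoted delta, so only the cells named in the differential file are refreshed, which mirrors the paper's goal of touching the view only where changes occurred.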

A Review on Alkalinity Analysis Methods Suitable for Korean Groundwater (우리나라 지하수에 적합한 알칼리도 분석법에 대한 고찰)

  • Kim, Kangjoo;Hamm, Se-Yeong;Kim, Rak-Hyeon;Kim, Hyunkoo
    • Economic and Environmental Geology
    • /
    • v.51 no.6
    • /
    • pp.509-520
    • /
    • 2018
  • Alkalinity is one of the basic variables that determine the geochemical characteristics of natural waters; it participates, directly or indirectly, in processes that change the concentrations of various contaminants. Nevertheless, quite a few Korean laboratories and researchers still use alkalinity-measurement methods that are not appropriate for groundwater, which is one of the major reasons for poor ion-balance errors in geochemical analyses. This study was performed to review alkalinity-measurement methods and discuss their advantages and disadvantages, and thus to help researchers and analytical specialists analyze the alkalinity of groundwater. The pH-titration-curve inflection-point (PTC-IP) methods, which find the alkalinity end point from the inflection point of the pH titration curve, are shown to be the most accurate, and among them the Gran titration technique is likely the most appropriate for accurately estimating the titrant volume at the end point. In contrast, other titration methods that are still commonly used, such as the pH-indicator method and the pre-selected-pH method, are likely to give erroneous results, especially for groundwater of low ionic strength and alkalinity.
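The Gran technique recommended above can be sketched numerically: past the equivalence point, the Gran function F = (V0 + V) * 10^(-pH) is linear in the added titrant volume V, and extrapolating the fitted line back to F = 0 recovers the end-point volume Ve, from which alkalinity = Ve * C_acid / V0. The sample volume, acid strength, and end point below are synthetic values for illustration:

```python
import numpy as np

def gran_endpoint(V0, V_titrant, pH):
    """Locate the alkalinity end point by the Gran method:
    past the equivalence point, F = (V0 + V) * 10**(-pH) is
    linear in V; extrapolating F to zero gives Ve."""
    V = np.asarray(V_titrant, float)
    F = (V0 + V) * 10.0 ** (-np.asarray(pH, float))
    a, b = np.polyfit(V, F, 1)
    return -b / a  # volume where the fitted line crosses F = 0

# synthetic post-end-point titration data: 50 mL sample, 0.02 M HCl,
# true end point placed at Ve = 2.5 mL
V0, Ve, C_acid = 50.0, 2.5, 0.02
V = np.array([3.0, 3.5, 4.0, 4.5, 5.0])      # mL of acid added past Ve
H = C_acid * (V - Ve) / (V0 + V)              # excess strong-acid concentration
pH = -np.log10(H)
Ve_est = gran_endpoint(V0, V, pH)
alkalinity_meq_L = Ve_est * C_acid / V0 * 1000.0
```

Because the extrapolation uses only points well past the end point, the Gran method avoids the poorly defined inflection that plagues low-alkalinity, low-ionic-strength groundwater, which is why the review favors it.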