• Title/Summary/Keyword: Case-Study

Live Load Distribution in Prestressed Concrete I-Girder Bridges (I형 프리스트레스트 콘크리트 거더교의 활하중 분배)

  • Lee, Hwan-Woo;Kim, Kwang-Yang
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.21 no.4
    • /
    • pp.325-334
    • /
    • 2008
  • The standard prestressed concrete I-girder bridge (PSC I-girder bridge) is one of the most prevalent types for small and medium bridges in Korea. When determining the member forces in a section to assess the safety of the girders in this type of bridge, the general practice is to use simplified practical equations or the live load distribution factors proposed in design standards rather than a precise analysis such as the finite element method. However, the live load distribution factors currently used in Korean design practice are adopted from overseas research results or design standards without modification. It is therefore necessary to develop a live load distribution factor equation suited to Korean design conditions, considering the standardized sections of standard PSC I-girder bridges and the design strength of concrete. In this study, to develop such an equation, parametric and sensitivity analyses were carried out on parameters such as bridge width, span length, girder spacing, and traffic lane width. As a result, the major variables determining the magnitude of the distribution factors were girder spacing, overhang length, and span length for exterior girders; girder spacing, overhang length, span length, and bridge width for interior girders adjacent to the exterior ones; and girder spacing, bridge width, and span length for interior girders. An equation for the live load distribution factors was then developed through multiple linear regression analysis of the parametric analysis results. Practicing engineers who design a bridge with the distribution factor equation developed here can more easily determine design member forces with an appropriate margin of safety. Moreover, in preliminary design, the equation is expected to save considerable time in the repetitive design needed to improve the structural efficiency of PSC I-girder bridges.
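
The abstract describes fitting the distribution factor equation by multiple linear regression on parametric analysis results. The sketch below shows that kind of fit in Python; the predictor names (girder spacing, overhang length, span length) follow the abstract, but the data points and coefficients are invented for illustration and are not the paper's.

```python
# Hypothetical multiple linear regression of a live load distribution factor
# (DF) for exterior girders, in the spirit of the study. Data are synthetic.
import numpy as np

# Columns: girder spacing S (m), overhang length Lo (m), span length L (m)
X = np.array([
    [2.1, 0.8, 25.0],
    [2.4, 1.0, 30.0],
    [2.7, 1.2, 35.0],
    [2.1, 1.2, 40.0],
    [2.4, 0.8, 45.0],
])
y = np.array([0.62, 0.68, 0.75, 0.66, 0.64])   # DF from parametric analysis

# Least-squares fit of DF = b0 + b1*S + b2*Lo + b3*L
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("coefficients (b0, b1, b2, b3):", np.round(coef, 4))

# Evaluate the fitted equation for a new design (S=2.5 m, Lo=1.0 m, L=32 m)
print("predicted DF:", round(float(np.array([1.0, 2.5, 1.0, 32.0]) @ coef), 3))
```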

Statistical Errors in Papers Published in the Journal of the Korean Society for Therapeutic Radiology and Oncology (대한방사선종양학회지 게재 논문의 통계적 오류 현황)

  • Park, Hee-Chul;Choi, Doo-Ho;Ahn, Song-Vogue;Kang, Jin-Oh;Kim, Eun-Seog;Park, Won;Ahn, Seung-Do;Yang, Dae-Sik;Yun, Hyong-Geun;Chung, Eun-Ji;Chie, Eui-Kyu;Pyo, Hong-Ryull;Hong, Se-Mie
    • Radiation Oncology Journal
    • /
    • v.26 no.4
    • /
    • pp.289-294
    • /
    • 2008
  • Purpose: To improve the quality of the statistical analyses in papers published in the Journal of the Korean Society for Therapeutic Radiology and Oncology (JKOSTRO) by evaluating commonly encountered errors. Materials and Methods: Papers published in the JKOSTRO from January 2006 to December 2007 were reviewed for methodological and statistical validity using a modified version of Ahn's checklist. A statistician reviewed the individual papers and evaluated the checklist items for each paper. To avoid potential assessment errors by the statistician, who lacks expertise in the field of radiation oncology, the editorial board of the JKOSTRO also reviewed each checklist for the individual articles. A frequency analysis of the checklist items was performed using SAS software (version 9.0, SAS Institute, NC, USA). Results: A total of 73 papers, including 5 case reports and 68 original articles, were reviewed. Inferential statistics were used in 46 papers. The most commonly adopted statistical methodology was survival analysis (58.7%). Only 19% of papers were free of statistical errors. Errors of omission were encountered in 34 (50.0%) papers, errors of commission in 35 (51.5%) papers, and 21 papers (30.9%) had both errors of omission and commission. Conclusion: A variety of statistical errors were encountered in papers published in the JKOSTRO. The current study suggests that a more thorough review of the statistical analysis is needed for manuscripts submitted to the JKOSTRO.
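
The methods mention a frequency analysis of checklist items performed in SAS 9.0. A rough Python equivalent is sketched below; the review records and category names are invented placeholders, not the study's data.

```python
# Illustrative frequency analysis of error-checklist results per paper,
# analogous to the SAS frequency analysis described above. Data are made up.
from collections import Counter

# One set per reviewed paper, listing the error types flagged for it
reviews = [
    {"omission"}, {"commission"}, {"omission", "commission"},
    set(), {"commission"}, {"omission"}, set(),
]

counts = Counter()
for flags in reviews:
    if not flags:
        counts["error-free"] += 1
    for flag in flags:
        counts[flag] += 1
counts["both"] = sum(1 for f in reviews if {"omission", "commission"} <= f)

n = len(reviews)
for item, c in counts.items():
    print(f"{item}: {c}/{n} ({100.0 * c / n:.1f}%)")
```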

Development of a Simple and Reproducible Method for Removal of Contaminants from Ginseng Protein Samples Prior to Proteomics Analysis (활성탄을 이용한 불순물제거에 의한 효과적인 인삼 조직 단백질체 분석 방법 개선 연구)

  • Gupta, Ravi;Kim, So Wun;Min, Chul Woo;Sung, Gi-Ho;Agrawal, Ganesh Kumar;Rakwal, Randeep;Jo, Ick Hyun;Bang, Kyong Hwan;Kim, Young-Chang;Kim, Kee-Hong;Kim, Sun Tae
    • Journal of Life Science
    • /
    • v.25 no.7
    • /
    • pp.826-832
    • /
    • 2015
  • This study describes the effects of activated charcoal on the removal of salts, detergents, and pigments from protein extracts of ginseng leaves and roots. Incubation of protein extracts with 5% (w/v) activated charcoal (100-400 mesh) for 30 min at 4℃ removed most of the salts and detergents, including NP-40, as observed on SDS-PAGE. In addition, analysis of chlorophyll content showed significant depletion of chlorophyll (~33%) after activated charcoal treatment, suggesting that activated charcoal also removes pigments along with the salts and detergents. 2-DE analysis of activated charcoal-treated protein samples showed better resolution of proteins, further indicating the efficacy of activated charcoal in cleaning up protein samples. In the case of root proteins, although no major differences were observed on SDS-PAGE, 2-DE gels showed better resolution of spots after charcoal treatment. In addition, both hierarchical clustering (HCL) and principal component analysis (PCA) clearly separated the acetone sample from the rest of the samples, while the phenol and AC-phenol samples almost overlapped, suggesting no major differences between them. Overall, these results show that activated charcoal can be used in a simple manner to remove salts, detergents, and pigments from the protein extracts of various plant tissues.
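
The abstract compares samples with hierarchical clustering (HCL) and principal component analysis (PCA). The sketch below shows that kind of comparison on a stand-in spot-intensity matrix; the sample names follow the abstract, but the values are random and only illustrate the workflow.

```python
# Sketch of a PCA / hierarchical clustering comparison of protein samples.
# Rows are samples, columns are hypothetical 2-DE spot intensities; the data
# are random stand-ins, not the paper's measurements.
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(0)
samples = ["acetone", "phenol", "AC-phenol"]
X = rng.normal(size=(3, 50))
X[0] += 2.0  # let the acetone sample deviate, as in the reported result

pca = PCA(n_components=2).fit(X)
for name, (pc1, pc2) in zip(samples, pca.transform(X)):
    print(f"{name}: PC1={pc1:.2f}, PC2={pc2:.2f}")

Z = linkage(X, method="average")                       # hierarchical clustering
print(dendrogram(Z, no_plot=True, labels=samples)["ivl"])  # leaf order
```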

Molecular Characterization and Phylogenetic Analysis of Seasonal Influenza Virus Isolated in Busan during the 2006-2008 Seasons (부산지역에서 유행한 계절인플루엔자바이러스의 유전자 특성 및 계통분석('06-'08 절기))

  • Park, Yon-Koung;Kim, Nam-Ho;Choi, Seung-Hwa;Lee, Mi-Oak;Min, Sang-Kee;Kim, Seong-Joon;Cho, Kyung-Soon;Na, Young-Nan
    • Journal of Life Science
    • /
    • v.20 no.3
    • /
    • pp.365-373
    • /
    • 2010
  • To monitor newly emerged influenza virus variants and to investigate prevalence patterns, our laboratory isolated viruses from sentinel surveillance hospitals. In the present study, we analyzed the influenza A/H1N1, A/H3N2, and B viruses isolated in Busan during the 2006/07 and 2007/08 seasons by sequence analysis of the hemagglutinin (HA1 subunit) and neuraminidase (NA) genes. The isolates studied here were selected by stratified random sampling from a total of 277 isolates; 15 were A/H1N1, 16 were A/H3N2, and 29 were B. Based on the phylogenetic tree of the HA1 gene, the A/H1N1 isolates had 96.7% to 97.7% homology with A/Brisbane/59/2007, the A/H3N2 isolates had 98.4% to 99.7% homology with A/Brisbane/10/2007, and the B isolates had 96.5% to 99.7% homology with B/Florida/4/2006 (Yamagata lineage), all of which are the vaccine strains for the Northern Hemisphere in the 2008-2009 season. For the NA gene, the A/H1N1 isolates had 97.8% to 98.5% homology, the A/H3N2 isolates 98.9% to 99.4%, and the B isolates 98.9% to 100% with the respective 2008-2009 vaccine strains. Characterization of the hemagglutinin gene revealed that amino acids at the receptor-binding site and the N-linked glycosylation sites were highly conserved. These results provide useful information for the control of influenza viruses in Busan and for a better understanding of vaccine strain selection.
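
The homology values quoted above are pairwise sequence identities between the isolates and the vaccine strains. A toy calculation of such a percentage from a pair of aligned sequences is sketched below; the sequences are short placeholders, not the actual HA1 or NA data.

```python
# Toy percent-identity calculation between two aligned nucleotide sequences,
# the kind of comparison behind the homology values quoted in the abstract.
def percent_identity(seq1: str, seq2: str) -> float:
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to the same length")
    # Count matching positions, ignoring alignment gaps in either sequence
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a != "-" and b != "-"]
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

isolate = "ATGCCGTAA-GCTTAGC"   # placeholder isolate fragment
vaccine = "ATGCCGTACAGCTTAGC"   # placeholder vaccine-strain fragment
print(f"identity: {percent_identity(isolate, vaccine):.1f}%")
```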

A Knowledge Management System for Supporting Development of the Next Generation Information Appliances (차세대 정보가전 신제품 개발 지원을 위한 지식관리시스템 개발)

  • Park, Ji-Soo;Baek, Dong-Hyun
    • Information Systems Review
    • /
    • v.6 no.2
    • /
    • pp.137-159
    • /
    • 2004
  • The next generation information appliances are those that can be connected with other appliances through a wired or wireless network so that they can exchange data and be remotely controlled from inside or outside the home. Many electronics companies have invested aggressively in developing new information appliances to take the initiative in the upcoming home networking era. They require systematic methods for developing new information appliances and for sharing the knowledge acquired through those methods. This paper stored the knowledge acquired from developing information appliances and developed a knowledge management system that helps companies use that knowledge to develop their own information appliances. To acquire the knowledge, this paper applied two User-Centered Design methods instead of general knowledge acquisition methods. New product ideas were suggested by analyzing and observing user actions, and the resulting knowledge was stored in two knowledge bases: Knowledge from Analyzing User Actions and Knowledge from Observing User Actions. Seven new product ideas suggested by the User-Centered Design process were made into design mockups, and videos were produced to show the real situations in which they would be used in the home of the future; these were stored in the knowledge base Knowledge from Producing New Emotive Life Videos. Finally, data on the current state of future-home development in Europe and Japan, along with articles from domestic newspapers, were collected and stored in the knowledge base Knowledge from Surveying Technology Developments. This paper developed a web-based knowledge management system that helps companies use the acquired knowledge. Knowledge users can obtain the knowledge required for developing new information appliances and suggest their own product ideas using the system. As a result, this research is not confined to a single case study of product development but can play a role in facilitating the development of the next generation information appliances.

Error Analysis of Image Velocimetry According to the Variation of the Interrogation Area (상관영역 크기 변화에 따른 영상유속계의 오차 분석)

  • Kim, Seojun;Yu, Kwonkyu;Yoon, Byungman
    • Journal of Korea Water Resources Association
    • /
    • v.46 no.8
    • /
    • pp.821-831
    • /
    • 2013
  • Recently, image velocimetry techniques, including particle image velocimetry (PIV) and surface image velocimetry (SIV), have often been used to measure flow velocities in laboratories and rivers. The most difficult point in using image velocimetry is determining the sizes of the interrogation areas and the measurement uncertainties. In particular, these instruments are hard for unskilled users to apply, since standardized measuring techniques and measurement uncertainties have not been well established, and the user's skill and understanding of the instruments can produce a wide gap between velocity measurement results. The present study aims to evaluate the uncertainty of image velocimetry due to changes in the sizes of the interrogation and search areas through error analysis. For this purpose, we generated 12 series of artificial images with known velocity fields and various particle numbers and sizes. The analysis results showed that the accuracy of the velocity measurements was significantly affected by the size of the interrogation area. Generally speaking, the error decreased as the interrogation area became smaller. For the same interrogation area size, larger particles and larger numbers of particles resulted in smaller errors; in particular, the errors were affected more by the number of particles than by their size. As the interrogation area size increased, the difference between the maximum and minimum errors tended to decrease. For interrogation area sizes whose average errors were less than 5%, the difference between the maximum and minimum errors was rather large; in other words, the uncertainty of the velocity measurements was large in those cases. From the viewpoint of particle density, the interrogation area size was small for cases with high particle density. For cases with a large number of particles and a low particle density, however, the minimum interrogation area size became even smaller.
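
The error analysis above centers on cross-correlating interrogation areas between successive frames to estimate displacement. A bare-bones single-window version of that estimate is sketched below on synthetic images; the window size and the imposed shift are arbitrary choices for illustration, not the paper's test conditions.

```python
# Minimal single-window PIV displacement estimate by cross-correlation,
# illustrating the role of the interrogation-area size discussed above.
# Frames are synthetic; the true shift is (dy, dx) = (3, 5) pixels.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(1)
frame1 = rng.random((128, 128))
frame2 = np.roll(frame1, shift=(3, 5), axis=(0, 1))  # known displacement

ia = 32  # interrogation-area size in pixels (the parameter studied above)
win1 = frame1[:ia, :ia] - frame1[:ia, :ia].mean()
win2 = frame2[:ia, :ia] - frame2[:ia, :ia].mean()

# Cross-correlation via FFT convolution with a flipped first window
corr = fftconvolve(win2, win1[::-1, ::-1], mode="full")
peak = np.unravel_index(np.argmax(corr), corr.shape)
dy, dx = peak[0] - (ia - 1), peak[1] - (ia - 1)
print(f"estimated displacement: ({dy}, {dx}) pixels")
```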

Dynamic Traffic Assignment Using Genetic Algorithm (유전자 알고리즘을 이용한 동적통행배정에 관한 연구)

  • Park, Kyung-Chul;Park, Chang-Ho;Chon, Kyung-Soo;Rhee, Sung-Mo
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.8 no.1 s.15
    • /
    • pp.51-63
    • /
    • 2000
  • Dynamic traffic assignment (DTA) has been a topic of substantial research during the past decade. While DTA is gradually maturing, many aspects of it still need improvement, especially regarding its formulation and solution algorithms. Recently, with its promise for ITS (Intelligent Transportation System) and GIS (Geographic Information System) applications, DTA has received increasing attention. This potential also implies higher requirements for DTA modeling, especially regarding solution efficiency for real-time implementation. However, DTA poses many mathematical difficulties in the solution search process due to the complexity of its spatial and temporal variables. Although many solution algorithms have been studied, conventional methods cannot find the solution when the objective function or constraints are not convex. In this paper, a genetic algorithm is applied to find the solution of DTA, and the Merchant-Nemhauser model is used as the DTA model because it has a nonconvex constraint set. To handle the nonconvex constraint set, the GENOCOP III system, a genetic algorithm for constrained optimization, is used in this study. Results for a sample network have been compared with the results of a conventional method.
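
GENOCOP III handles nonconvex constraints with a repair-based scheme; the toy genetic algorithm below only illustrates the general idea of evolving real-valued solutions under a nonconvex constraint, here with a simple penalty instead of repair. It is neither GENOCOP itself nor the Merchant-Nemhauser formulation.

```python
# Toy genetic algorithm with a penalty for constraint violation, sketching the
# flavor of constrained evolutionary search used in the paper. All functions
# and parameters here are illustrative assumptions.
import random

def objective(x):                     # toy objective to minimize
    return (x[0] - 2) ** 2 + (x[1] + 1) ** 2 + 3 * abs(x[0] * x[1])

def violation(x):                     # nonconvex constraint: x0 * x1 >= 0.5
    return max(0.0, 0.5 - x[0] * x[1])

def fitness(x):
    return objective(x) + 100.0 * violation(x)   # penalized fitness

def evolve(pop_size=40, gens=200, bounds=(-5.0, 5.0)):
    rng = random.Random(0)
    pop = [[rng.uniform(*bounds) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            w = rng.random()
            child = [w * ai + (1 - w) * bi for ai, bi in zip(a, b)]  # blend crossover
            if rng.random() < 0.2:                # Gaussian mutation
                i = rng.randrange(2)
                child[i] += rng.gauss(0.0, 0.3)
            children.append(child)
        pop = parents + children
    best = min(pop, key=fitness)
    return best, objective(best), violation(best)

print(evolve())
```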

Experimental Studies on the Properties of Epoxy Resin Mortars (에폭시 수지 모르터의 특성에 관한 실험적 연구)

  • 연규석;강신업
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.26 no.1
    • /
    • pp.52-72
    • /
    • 1984
  • This study was performed to obtain basic data applicable to the use of epoxy resin mortars. The data were based on the properties of epoxy resin mortars at various mixing ratios, compared with those of cement mortar. The resin used in this experiment was an Epi-Bis type epoxy resin, which is extensively used for concrete structures. For the epoxy resin mortars, the mixing ratios of resin to fine aggregate were 1:2, 1:4, 1:6, 1:8, 1:10, 1:12 and 1:14, while the ratio of cement to fine aggregate in the cement mortar was 1:2.5. The results obtained are summarized as follows. 1. At the mixing ratio of 1:6, the highest density was 2.01 g/cm³, lower than the 2.13 g/cm³ of cement mortar. 2. According to the water absorption and water permeability tests, watertightness was very high at the mixing ratios of 1:2, 1:4 and 1:6, but when the mixing ratio was leaner than 1:6, watertightness decreased considerably. This indicates that the optimum mixing ratio of epoxy resin mortar for watertight structures should be richer than 1:6. 3. The hardening shrinkage increased as the mixing ratio became leaner, but the values were remarkably small compared with cement mortar. The influence of drying and wetting was slight at mixing ratios richer than 1:6 but obvious at the lean mixing ratios of 1:8, 1:10, 1:12 and 1:14. It was confirmed that the optimum mixing ratio for concrete structures subject to repeated drying and wetting should be richer than 1:6. 4. The compressive, bending and splitting tensile strengths were very high; even the values at the mixing ratio of 1:14 were higher than those of cement mortar. Epoxy resin mortar showed especially high bending and splitting tensile strength, and the initial strength within 24 hours was also high, showing that epoxy resin is a rapid-hardening material. Multiple regression equations of strength were computed as a function of mixing ratio and curing time. 5. The elastic moduli derived from the compressive stress-strain curves were slightly smaller than those of cement mortar, and the toughness of epoxy resin mortar was larger than that of cement mortar. 6. The impact resistance was high compared with cement mortar at all mixing ratios; in particular, the bending impact strength of the square pillar specimens was higher than the impact resistance of the flat or cylindrical specimens. 7. The Brinell hardness was relatively higher than that of cement mortar, but it gradually decreased as the mixing ratio became leaner, and the Brinell hardness at the mixing ratio of 1:14 was much the same as that of cement mortar. 8. The abrasion rate of epoxy resin mortar at all mixing ratios, after 500 revolutions of the Los Angeles abrasion testing machine, was very low; even at the mixing ratio of 1:14 it was no more than 31.41%, less than the 40% critical abrasion rate of coarse aggregate for cement concrete. Consequently, the abrasion resistance of epoxy resin mortar was superior to that of cement mortar, and the relation between abrasion rate and Brinell hardness was highly significant, following an exponential curve. 9. The highest bond strength of epoxy resin mortar was 12.9 kg/cm² at the mixing ratio of 1:2. Failure of the bonded flat steel specimens occurred in the epoxy resin mortar at the mixing ratios of 1:2 and 1:4, and failure of the bonded cement concrete specimens was found in the combined concrete at the mixing ratios of 1:2, 1:4 and 1:6. It was confirmed that the optimum mixing ratios for bonding to steel plate and to cement concrete should be richer than 1:4 and 1:6, respectively. 10. Variations in color tone due to heating began at about 60°C, and the ultimate change occurred at 120°C. The compressive, bending and splitting tensile strengths increased with temperature up to 80°C but decreased rapidly above 80°C. Accordingly, the temperature resistance of epoxy resin mortar is about 80°C, generally considered lower than that of other concrete materials, but this poses no problem where high-temperature resistance is not required. The multiple regression equations of strength were computed as a function of mixing ratio and heating temperature. 11. Cement mortar was easily attacked by inorganic and organic acids, whereas epoxy resin mortar at the mixing ratio of 1:4 showed great resistance. On the other hand, at mixing ratios leaner than 1:8, epoxy resin mortar had very poor resistance, especially to organic acid. Therefore, for structures requiring chemical resistance, the optimum mixing ratio of epoxy resin mortar should be richer than 1:4.
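
Finding 8 reports a highly significant exponential relation between abrasion rate and Brinell hardness. A small sketch of fitting such a curve by log-linearization is given below; the hardness and abrasion values are invented placeholders, not the paper's measurements.

```python
# Hypothetical exponential fit, abrasion rate ≈ a * exp(b * hardness),
# linearized as ln(rate) = ln(a) + b * hardness. Data points are made up.
import numpy as np

hardness = np.array([12.0, 15.0, 18.0, 22.0, 26.0])   # Brinell hardness
abrasion = np.array([31.4, 24.0, 17.5, 11.2, 7.8])    # abrasion rate (%)

A = np.column_stack([np.ones_like(hardness), hardness])
(ln_a, b), *_ = np.linalg.lstsq(A, np.log(abrasion), rcond=None)
a = np.exp(ln_a)
print(f"abrasion ≈ {a:.2f} * exp({b:.3f} * hardness)")
print("fitted values:", np.round(a * np.exp(b * hardness), 1))
```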

Comparison of Clinical Results and Second-Look Arthroscopy after Anterior Cruciate Ligament Reconstruction Using Hamstring Tendon Autograft, Mixed Graft and Tibialis Tendon Allograft (자가슬괵건, 혼합건 및 동종 경골건을 이용하여 실시한 전방십자인대 재건술후 임상결과 및 이차관절경 검사 비교)

  • Cho, Jin-Ho
    • Journal of Korean Orthopaedic Sports Medicine
    • /
    • v.10 no.1
    • /
    • pp.18-26
    • /
    • 2011
  • Purpose: To compare the clinical results of ACL reconstruction among three groups using hamstring tendon autograft, mixed graft, and tibialis tendon allograft. Materials and Methods: Between August 2003 and August 2008, we analyzed 169 cases of ACL reconstruction with a minimum follow-up of 12 months: 66 cases used hamstring tendon autograft, 42 used mixed graft, and 61 used tibialis tendon allograft. For the clinical evaluation, we assessed the Lysholm score, the Telos stress test, and the IKDC score. Results: The average side-to-side difference in the Telos stress test decreased from 7.5±1.0 mm to 1.6±1.0 mm in the autograft group, from 7.6±1.1 mm to 1.4±1.1 mm in the mixed graft group, and from 7.4±1.3 mm to 2.5±1.3 mm in the allograft group. The average Lysholm knee score improved from 58.6 to 92.3 in the autograft group, from 60.6 to 92.6 in the mixed graft group, and from 55.3 to 91.5 in the allograft group. There was no significant difference among the three groups in clinical results. At second-look arthroscopy, graft tension and synovial coverage were better in the autograft and mixed graft groups than in the allograft group. Conclusion: The hamstring tendon autograft, mixed graft, and tibialis tendon allograft groups all showed satisfactory clinical results, with no significant difference in outcomes among the groups. Both the hamstring tendon autograft and the mixed graft showed good synovial coverage at second-look arthroscopy, so a mixed graft can be considered a good alternative when the harvested hamstring tendon is short or thin.

A Comparative Study on Thermal and Belt Press Dewatering for Waterworks Sludge Reduction (열 탈수와 벨트프레스 탈수장치의 현장적용에 따른 탈수성 비교연구)

  • Lee, Jung-Eun
    • Journal of Korean Society of Environmental Engineers
    • /
    • v.28 no.10
    • /
    • pp.1016-1023
    • /
    • 2006
  • The water content of the dewatered cake produced by belt press dewatering equipment was about 75 wt%, which is rather high for handling, so the equipment has limits from economical and environmental standpoints. Thermal dewatering equipment, built as an alternative to overcome these problems, was installed at the sludge treatment site, and its feasibility was evaluated in comparison with the dewatering performance of the belt press. First, the dewatering properties of the waterworks sludge were analyzed monthly. Sludge from the water shortage season contained a high organic content, which made the cake difficult to dewater, whereas sludge from the rainy season was easy to dewater because of its low organic content. According to the analysis of the water content of the dewatered cake produced by the two systems on the basis of these seasonal dewatering properties, the water content of the cake from thermal dewatering of water-shortage-season sludge was 41.6~48.3 wt%, compared with 71~84 wt% from the belt press. For the rainy season, the water content of the cake from thermal dewatering was 34~37.7 wt% and that from the belt press was 57~70 wt%. The reduction in cake water content by thermal dewatering was thus larger than that by the belt press. The economics of the two systems were evaluated by considering the reduction in the amount of cake to be treated as its water content decreases. When the cost index of thermal dewatering was set to 100, that of the belt press was 121, meaning that thermal dewatering was about 20% more economical than the belt press in terms of construction and operation. In conclusion, the thermal dewatering equipment was evaluated favorably, producing a dewatered cake with low water content while operating at low cost.
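
The handling and cost comparison above follows from simple mass-balance arithmetic: for a fixed amount of dry solids, cake mass scales as 1/(1 − water fraction). The sketch below works through that calculation; the midpoint water contents are chosen here for illustration from the ranges reported in the abstract.

```python
# Back-of-the-envelope check of the handling benefit: for the same dry solids,
# cake mass scales as 1 / (1 - water fraction). Midpoints are illustrative.
def cake_mass_per_dry_ton(water_fraction: float) -> float:
    return 1.0 / (1.0 - water_fraction)

belt = cake_mass_per_dry_ton(0.75)      # ~75 wt% water from the belt press
thermal = cake_mass_per_dry_ton(0.45)   # ~45 wt% water from thermal dewatering
print(f"cake mass, belt press: {belt:.2f} t per t of dry solids")
print(f"cake mass, thermal:    {thermal:.2f} t per t of dry solids")
print(f"cake to handle is reduced by {100 * (1 - thermal / belt):.0f}%")

# Cost indices reported in the abstract: thermal = 100, belt press = 121
print(f"belt press costs {100 * (121 - 100) / 100:.0f}% more than thermal")
```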