• Title/Summary/Keyword: Computer experiments

Search Result 3,946

A STUDY ON THE GALVANIC CORROSION OF TITANIUM USING THE IMMERSION AND ELECTROCHEMICAL METHOD (침적법과 전기화학법을 이용한 티타늄의 갈바닉 부식에 관한 연구)

  • Kay, Kee-Sung;Chung, Chae-Heon;Kang, Dong-Wan;Kim, Byeong-Ok;Hwang, Ho-Gil;Ko, Yeong-Mu
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.33 no.3
    • /
    • pp.584-609
    • /
    • 1995
  • The purpose of this study was to evaluate the difference in the galvanic corrosion behaviour of titanium in contact with gold alloy, silver-palladium alloy, and nickel-chromium alloy using the immersion and electrochemical methods, and to assess the galvanic couples between titanium and the dental alloys for their usefulness as superstructure materials. The immersion method was performed by measuring the amount of metal elements released, using inductively coupled plasma emission spectroscopy (ICPES). Fifteen titanium plates, five gold alloy plates, five silver-palladium plates, five nickel-chromium plates, and twenty acrylic resin plates were fabricated, along with sixty titanium plugs and thirty plugs each of gold alloy, silver-palladium, and nickel-chromium. Each plug of gold alloy, silver-palladium, and nickel-chromium was inserted into a titanium plate and an acrylic resin plate, and a titanium plug was inserted into an acrylic resin plate. The combined specimens of galvanic couples were immersed in 70 ml of artificial saliva solution, and plugs of the four alloy types (titanium, gold, silver-palladium, and nickel-chromium alloy) were also immersed alone in 70 ml of artificial saliva solution. The amount of metal elements released was observed over 21 weeks at 7-week intervals. The electrochemical method was performed using a computer-controlled potentiostat (Autostat 251, Sycopel Scientific Ltd., U.K.). Wax patterns (diameter 11.0 mm, thickness 1.5 mm) of the four dental casting alloys were cast by the centrifugal method and embedded in self-curing acrylic resin to give an exposed surface area of about $1.0cm^2$. Embedded specimens were polished with silicon carbide paper to #2,000 and ultrasonically cleaned. The working electrode was the specimen of the four dental casting alloys, the reference electrode was a saturated calomel electrode (SCE), and the counter electrode was made of platinum plate. 
In the artificial saliva solution, potential scanning was carried out from -700 mV (SCE) to +1,000 mV (SCE) at a scan rate of 75 mV/min. Each polarization curve was recorded automatically on logarithmic graph paper by an X-Y recorder. From the polarization curves of each galvanic couple, the corrosion potential and corrosion rate (i.e., corrosion current density) were compared and the order of corrosion tendency was determined. The following results were obtained: 1. When titanium, gold alloy, silver-palladium alloy, and nickel-chromium alloy were immersed alone in the artificial saliva solution (groups 1, 2, 3, and 4), the total amount of metal elements released from group 4 was about 2 to 3 times greater than from group 3 and about 7.8 times greater than from group 2. In group 1, no released titanium was detected after 8 weeks (p<0.001). 2. For galvanic couples of titanium in contact with gold alloy (groups 5, 6), the total amount of metal elements released was less than that of groups 7, 8, 9, and 10 (p<0.05). 3. For galvanic couples of titanium in contact with silver-palladium alloy (groups 7, 8), the total release of group 7 was about twice that of group 5, and that of group 8 was about 14 times that of group 6 (p<0.05). 4. For galvanic couples of titanium in contact with nickel-chromium alloy (groups 9, 10), the total release of groups 9 and 10 was about 1.8-3.2 times greater than that of groups 7 and 8, and about 4.3-25 times greater than that of groups 5 and 6 (p<0.05). 5. Regarding the effect of the cathode-to-anode area ratio on galvanic corrosion, the total release of group 5 was about 4 times that of group 6, that of group 8 was about twice that of group 7, and that of group 10 was about 1.5 times that of group 9 (p<0.05). 6. 
Regarding the effect of elapsed time over the 21 weeks, at 7-week intervals, the amount of metal elements released decreased markedly for galvanic couples of titanium in contact with gold alloy and silver-palladium alloy, but the total amount of nickel and beryllium released did not decrease markedly for galvanic couples of titanium in contact with nickel-chromium alloy (p<0.05). 7. For galvanic couples of titanium in contact with gold alloy, the galvanic current was lower than for any other galvanic couple. 8. For galvanic couples of titanium in contact with nickel-chromium alloy, the galvanic current was the highest among the galvanic couples.


Simulation of Pension Finance and Its Economic Effects (연금재정(年金財政) 시뮬레이션과 경제적(經濟的) 파급효과(波及效果))

  • Min, Jae-sung;Kim, Yong-ha
    • KDI Journal of Economic Policy
    • /
    • v.13 no.1
    • /
    • pp.115-134
    • /
    • 1991
  • The role of pension plans in the macroeconomy has been a subject of much interest for some years. It has come to be recognized that pension plans may alter basic macroeconomic behavior patterns; the net effects on both savings and labor supply are thus matters for speculation. The aim of the present paper is to provide quantitative results which may be helpful in attaching orders of magnitude to some of the possible effects. We are not concerned with providing empirical evidence relating to actual behavior, but rather with deriving the macroeconomic implications of alternative possibilities. The pension plan interacts with the economy and the population in a number of ways. Demographic variables may affect both the economic burden of a national pension plan and the ability of the economy to sustain that burden. The tax-transfer process associated with the pension plan may have implications for national patterns of saving and consumption. The existence of a pension plan may also have implications for the size of the labor force, inasmuch as labor force participation rates may be affected. Changes in technology and the associated changes in average productivity levels bear directly on the size of the national income, and hence on the pension contribution base. The vehicle for the analysis is a hypothetical but broadly realistic simulation model of an economic-demographic system into which a national pension plan is inserted. All income, expenditure, and related aggregates are in real terms. The economy is basically neoclassical: full employment is assumed, output is generated by a Cobb-Douglas production process, and factors receive their marginal products. The model was designed for use in computer simulation experiments. The simulation results suggest a number of general conclusions. 
These may be summarized as follows: - The introduction of a national pension plan (funded system) tends to increase the rate of economic growth until cost exceeds revenue. - A scheme with full wage indexing is more expensive than one in which pensions are merely price-indexed. - The rate of technical progress is not a critical element in determining the economic burden of the pension scheme. - Raising the rate of benefits increases its economic burden, and raising the age of eligibility may decrease the burden substantially. - The level of fertility is an element in determining the long-run burden: a sustained low fertility rate increases the proportion of the aged in the total population and increases the burden of the pension plan, while high fertility has the inverse effect.
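The structure of such an economic-demographic simulation can be sketched in a few lines. All parameter values below (saving rate, contribution and benefit rates, demographic growth) are hypothetical stand-ins, not the paper's calibration:

```python
# Minimal sketch of a funded national pension plan inside a neoclassical
# growth model with Cobb-Douglas production. Parameters are illustrative.

def simulate(years=50, alpha=0.3, tech_growth=0.01,
             contrib_rate=0.09, benefit_rate=0.4,
             workers=100.0, retirees=20.0, pop_growth=0.01):
    """Return yearly (output, fund_balance) under full employment."""
    capital, fund, tech = 500.0, 0.0, 1.0
    history = []
    for _ in range(years):
        output = tech * capital**alpha * workers**(1 - alpha)  # Cobb-Douglas
        wage_bill = (1 - alpha) * output       # labor receives its marginal product
        revenue = contrib_rate * wage_bill     # pension contributions
        cost = benefit_rate * (wage_bill / workers) * retirees  # wage-indexed benefits
        fund += revenue - cost                 # funded system accumulates reserves
        capital += 0.2 * output                # fixed saving rate, no depreciation
        tech *= 1 + tech_growth
        workers *= 1 + pop_growth
        retirees *= 1 + pop_growth * 2         # aging: retirees grow faster
        history.append((output, fund))
    return history

hist = simulate()
# Early on, revenue exceeds cost and the fund grows; as the retiree share
# rises, cost overtakes revenue and the balance starts to shrink.
```

Running the sketch reproduces the first summarized conclusion in miniature: the fund balance rises while contributions exceed benefit payments, then declines once the aging population pushes cost above revenue.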


Analysis of Quantization Noise in Magnetic Resonance Imaging Systems (자기공명영상 시스템의 양자화잡음 분석)

  • Ahn C.B.
    • Investigative Magnetic Resonance Imaging
    • /
    • v.8 no.1
    • /
    • pp.42-49
    • /
    • 2004
  • Purpose : The quantization noise in magnetic resonance imaging (MRI) systems is analyzed. The signal-to-quantization-noise ratio (SQNR) in the reconstructed image is derived from the level of quantization of the signal in the spatial frequency domain. Based on the derived formula, the SQNRs at various main magnetic fields with different receiver systems are evaluated. The evaluation shows that quantization noise can be a major noise source determining the overall system signal-to-noise ratio (SNR) in high-field MRI systems. A few methods to reduce the quantization noise are suggested. Materials and methods : In Fourier imaging methods, the spin density distribution is encoded by phase- and frequency-encoding gradients in such a way that it becomes a distribution in the spatial frequency domain. Thus the quantization noise in the spatial frequency domain can be expressed in terms of the SQNR in the reconstructed image. The validity of the derived formula is confirmed by experiments and computer simulation. Results : Using the derived formula, the SQNRs at various main magnetic fields with various receiver systems are evaluated. Since the quantization noise is proportional to the signal amplitude, yet cannot be reduced by simple signal averaging, it can be a serious problem in high-field imaging. In many receiver systems employing analog-to-digital converters (ADCs) of 16 bits/sample, the quantization noise can be a major noise source limiting the overall system SNR, especially in high-field imaging. Conclusion : MRI field strengths keep increasing for functional imaging and spectroscopy. In a high-field MRI system, the signal amplitude becomes larger, with a stronger susceptibility effect and wider spectral separation. 
Since the quantization noise is proportional to the signal amplitude, if the ADCs in the receiver system do not have enough conversion bits, the increase in signal amplitude may not be fully utilized for SNR enhancement because the quantization noise increases as well. Evaluation of the SQNR for various systems using the formula shows that the quantization noise can be a major noise source limiting the overall system SNR, especially in three-dimensional imaging at high field. Oversampling and off-center sampling would be alternative solutions to reduce the quantization noise without replacing the receiver system.
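The dependence of SQNR on conversion bits and oversampling can be illustrated with the standard full-scale-sinusoid formula. This is the textbook result, not the paper's exact derivation for the reconstructed image:

```python
import math

def sqnr_db(bits, oversampling=1.0):
    """Ideal SQNR (dB) of a b-bit ADC for a full-scale sinusoid.

    The classical result is 6.02*b + 1.76 dB. Oversampling by a factor M
    spreads the quantization noise over a wider band, gaining
    10*log10(M) dB after filtering back to the signal band.
    """
    return 6.02 * bits + 1.76 + 10 * math.log10(oversampling)

# Each extra conversion bit buys ~6 dB, and quadrupling the sampling rate
# buys ~6 dB as well, which is why oversampling is one way to push
# quantization noise down without replacing the 16-bit receiver.
print(round(sqnr_db(16), 2))      # 16-bit receiver
print(round(sqnr_db(16, 4), 2))   # same ADC, 4x oversampling
```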


Potential Element Retention by Weathered Pulverised Fuel Ash : II. Column Leaching Experiments (풍화 석탄연소 고형폐기물(Pulverised Fuel Ash)의 중금속 제거가능성 : II. 주상용출실험)

  • Lee, Sanghoon
    • Economic and Environmental Geology
    • /
    • v.28 no.3
    • /
    • pp.259-269
    • /
    • 1995
  • Column leaching tests were conducted using fresh and weathered pulverised fuel ash (PFA), some 17 and 40 years old, from two major British power plants, with deionised water and a simulated synthetic industrial leachate. The former was to observe the leaching behaviour of weathered ash; the latter was to see whether secondary products form from water-PFA interaction and whether PFA has an ameliorating effect in removing metals from industrial leachates. Fresh PFA liberates elevated concentrations of surface-enriched inorganics, including Ca, Na, K, B, $Cr_{total}$, Li, Mo, Se and $SO^{2-}_4$, which might indicate their association with the surface of PFA particles. In the column leaching tests using weathered ash and deionised water, elements are not readily leached but are released more slowly, showing relatively constant concentrations. For weathered ash, some readily soluble surface-enriched elements appear to have been liberated in the early stage of leaching, and the liberation of glass-associated elements is thought to be a more important control on element concentrations. The column leachate concentrations exceed various Water Standards for a number of elements, suggesting that leachate from a PFA disposal mound needs dilution to achieve target concentrations. PFA shows an element retention effect for many elements, including B, Fe, Zn, Hg, Ni, Li and Mo, with retaining capacity in the order fresh Drax ash > weathered Drax ash > weathered Meaford ash. Geochemical modelling using the computer program WATEQ4F reveals some solubility-controlling secondary solid products: $CaSO_4{\cdot}2H_2O$ for Ca, $Al(OH)_3$ for Al and $Fe(OH)_3$ for Fe.
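The solubility-control check that WATEQ4F performs rests on the saturation index. A minimal sketch with hypothetical ion activities and an approximate gypsum solubility product, assuming unit water activity:

```python
import math

def saturation_index(iap, ksp):
    """SI = log10(IAP / Ksp): SI ~ 0 means equilibrium with the solid,
    SI > 0 supersaturated (solid may precipitate), SI < 0 undersaturated."""
    return math.log10(iap / ksp)

# Hypothetical activities for gypsum, CaSO4.2H2O (Ksp ~ 10**-4.58 at 25 C,
# water activity taken as 1):
ksp_gypsum = 10**-4.58
iap = (10**-2.2) * (10**-2.4)   # activity of Ca2+ times activity of SO4(2-)
si = saturation_index(iap, ksp_gypsum)
# An SI close to 0 would suggest gypsum as a solubility-controlling phase,
# which is the kind of inference WATEQ4F draws from the leachate analyses.
```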


Improving Bidirectional LSTM-CRF model Of Sequence Tagging by using Ontology knowledge based feature (온톨로지 지식 기반 특성치를 활용한 Bidirectional LSTM-CRF 모델의 시퀀스 태깅 성능 향상에 관한 연구)

  • Jin, Seunghee;Jang, Heewon;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.253-266
    • /
    • 2018
  • This paper proposes a methodology applying sequence tagging to improve the performance of NER (Named Entity Recognition) in a QA system. In order to retrieve the correct answers stored in a database, it is necessary to convert the user's query into a database language such as SQL (Structured Query Language) so that the computer can interpret it; this requires identifying the classes or data names contained in the database. Simply matching the words of the query against the database does not resolve homophones or multi-word phrases, because it ignores the context of the query. If there are multiple search results, all of them are returned, so the query admits many interpretations and the time complexity of the computation becomes large. To overcome this, this study reflects the contextual meaning of the query using a Bidirectional LSTM-CRF. We also address the inability of neural network models to identify untrained words by using an ontology-knowledge-based feature. Experiments were conducted on an ontology knowledge base of the music domain and the performance was evaluated. To evaluate the proposed Bidirectional LSTM-CRF accurately, we converted words included in the training queries into untrained words, testing whether words contained in the database but unseen in training were correctly identified. As a result, the model could recognize entities in context and could recognize untrained words without re-training, and the overall entity recognition performance was confirmed to improve.
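The CRF layer on top of the BiLSTM chooses the globally best tag sequence by Viterbi decoding. A toy sketch with made-up emission and transition scores for a music-domain query; the tag set and all numbers are illustrative, not the paper's:

```python
# Toy Viterbi decoder for a CRF layer: given per-token emission scores
# (which a BiLSTM would produce) and tag-transition scores, find the
# highest-scoring tag sequence.

def viterbi(emissions, transitions, tags):
    """emissions: per-token {tag: score}; transitions: {(prev, cur): score}.
    Returns the highest-scoring tag sequence (the CRF decoding step)."""
    # table[i][t] = (best score of a path ending at token i with tag t, backpointer)
    table = [{t: (emissions[0][t], None) for t in tags}]
    for em in emissions[1:]:
        row = {}
        for cur in tags:
            prev = max(tags, key=lambda p: table[-1][p][0] + transitions[(p, cur)])
            row[cur] = (table[-1][prev][0] + transitions[(prev, cur)] + em[cur], prev)
        table.append(row)
    # Backtrack from the best final tag.
    tag = max(tags, key=lambda t: table[-1][t][0])
    path = [tag]
    for row in reversed(table[1:]):
        tag = row[tag][1]
        path.append(tag)
    return list(reversed(path))

tags = ["O", "B-ARTIST", "I-ARTIST"]
# Emission scores a BiLSTM might assign to a three-token query:
emissions = [
    {"O": 2.0, "B-ARTIST": 0.1, "I-ARTIST": 0.0},
    {"O": 0.5, "B-ARTIST": 1.5, "I-ARTIST": 0.4},
    {"O": 0.8, "B-ARTIST": 0.2, "I-ARTIST": 1.2},
]
transitions = {(p, c): 0.0 for p in tags for c in tags}
transitions[("O", "I-ARTIST")] = -10.0   # an entity cannot start with I-
transitions[("B-ARTIST", "I-ARTIST")] = 1.0
print(viterbi(emissions, transitions, tags))  # prints ['O', 'B-ARTIST', 'I-ARTIST']
```

The transition scores are what let the decoder respect the context: the strong penalty on O followed by I-ARTIST forbids tag sequences a per-token classifier could emit.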

Development of Control Algorithm for Greenhouse Cooling Using Two-fluid Fogging System (이류체 포그 냉방시스템의 제어알고리즘 개발)

  • Nam, Sang-Woon;Kim, Young-Shik;Sung, In-Mo
    • Journal of Bio-Environment Control
    • /
    • v.22 no.2
    • /
    • pp.138-145
    • /
    • 2013
  • In order to develop an efficient control algorithm for the two-fluid fogging system, cooling experiments with many different fogging cycles were conducted in tomato greenhouses. The cooling effect was 1.2 to $4.0^{\circ}C$ and the cooling efficiency was 8.2 to 32.9% on average. With respect to fogging interval, the cooling efficiency was highest for the fogging cycle of 90 seconds, and it tended to increase as the fogging time increased and the stopping time decreased. As the spray rate of fog in the two-fluid fogging system increased, the cooling efficiency tended to improve. However, as the inside air approaches its saturation level, increasing the spray rate of fog does not lead to further evaporation; thus it can be inferred that increasing the spray rate before the inside air reaches saturation could raise the cooling efficiency. As cooling efficiency increased, the saturation deficit of the inside air decreased and the difference between the absolute humidity of inside and outside air increased. The more fog evaporated, the more this humidity difference tended to increase; as a result, the discharge of vapor through ventilation occurs more readily, which in turn increases the evaporation rate and ultimately the cooling efficiency. Regression analysis of the saturation deficit of the inside air showed that the fogging time needed for a saturation-deficit change of $10g{\cdot}kg^{-1}$ was 120 seconds and the stopping time was 60 seconds. However, in order to decrease the amplitude of temperature fluctuation and to increase the cooling efficiency, the fluctuation range of the saturation deficit was set to $5g{\cdot}kg^{-1}$, and we decided that a fogging-stopping time of 60-30 seconds was more appropriate. 
Control types of two-fluid fogging systems were classified as computer control or simple control, and control algorithms for each were derived. If the two-fluid fogging system is controlled by manipulating only the set points of temperature, humidity, and on-off time, we recommend setting the on-off time to 60-30 seconds in time control, the lower limit of air temperature to 30 to $32^{\circ}C$, and the upper limit of relative humidity to 85 to 90%.
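The recommended simple (time-based) control can be sketched as a gating function. The 60-30 s cycle and the threshold values come from the recommendation above, while the function interface itself is a hypothetical sketch:

```python
# Sketch of the simple control logic: time-based 60 s fogging / 30 s
# stopping, gated by a temperature lower limit and a relative-humidity
# upper limit.

FOG_ON_S, FOG_OFF_S = 60, 30      # recommended fogging/stopping times
TEMP_LOWER_C = 30.0               # lower limit of air temperature
RH_UPPER_PCT = 85.0               # upper limit of relative humidity

def fog_should_run(temp_c, rh_pct, elapsed_s):
    """Return True if the two-fluid fogging nozzles should be on."""
    if temp_c < TEMP_LOWER_C:     # cool enough: no cooling needed
        return False
    if rh_pct > RH_UPPER_PCT:     # near saturation: fog would not evaporate
        return False
    # Within each 90 s cycle, fog for the first 60 s and stop for 30 s.
    return elapsed_s % (FOG_ON_S + FOG_OFF_S) < FOG_ON_S

# e.g. at 33 C / 70% RH the nozzles run for the first 60 s of each cycle:
assert fog_should_run(33.0, 70.0, 10) is True
assert fog_should_run(33.0, 70.0, 75) is False   # stopping phase
assert fog_should_run(28.0, 70.0, 10) is False   # below temperature limit
```

The humidity gate encodes the paper's observation that spraying beyond saturation wastes water: once the inside air is near saturation, more fog no longer evaporates, so the controller simply stops.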

Selectively Partial Encryption of Images in Wavelet Domain (웨이블릿 영역에서의 선택적 부분 영상 암호화)

  • Dujit Dey
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.6C
    • /
    • pp.648-658
    • /
    • 2003
  • As the usage of image/video content increases, security problems arise for paid image data or data requiring confidentiality. This paper proposes an image encryption methodology to hide image information; the target data are the quantized coefficients in the wavelet domain. The method encrypts only part of the image data rather than the whole original image, using three types of data selection. First, exploiting the fact that the wavelet transform decomposes the original image into frequency sub-bands, only some of the sub-bands are encrypted, making the resulting image unrecognizable. Second, of the data representing each pixel, only the MSBs are taken for encryption. Finally, the pixels to be encrypted in a specific sub-band are selected randomly using an LFSR (Linear Feedback Shift Register). Part of the encryption key is used as the seed value of the LFSR and for selecting the parallel output bits of the LFSR used for random selection, which increases the strength of the encryption algorithm. The proposed methods were implemented in software and tested on about 500 images; the results showed that encrypting only about 1/1000 of the data of the original image suffices to make the original unrecognizable. Consequently, the proposed methods are efficient image encryption methods that achieve a strong encryption effect with a small amount of encryption. Several encryption schemes according to the choice of sub-bands and the number of LFSR output bits used for pixel selection are also proposed, and a trade-off is shown between execution time and encryption effect, meaning the proposed methods can be selectively used according to the application area. 
Also, because the proposed methods operate in the application layer, they are expected to be a good solution for the end-to-end security problem, which is emerging as one of the important problems in networks with both wired and wireless sections.
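The LFSR-driven random selection can be sketched as follows. The feedback polynomial, bit widths, selection rule, and MSB mask are illustrative assumptions, not the paper's exact parameters:

```python
# Sketch of the random-selection idea: a Galois LFSR seeded from part of
# the encryption key picks which coefficients of a sub-band to encrypt,
# and only the MSBs of the selected values are XOR-masked.

def lfsr_stream(seed, taps=0xB400, width=16):
    """16-bit Galois LFSR; yields one pseudo-random word per step."""
    state = seed & ((1 << width) - 1)
    while True:
        lsb = state & 1
        state >>= 1
        if lsb:
            state ^= taps
        yield state

def encrypt_subband(coeffs, key_seed, select_bits=3, msb_mask=0xF0):
    """XOR the MSBs of roughly 1/2**select_bits of the 8-bit coefficients."""
    out = list(coeffs)
    rng = lfsr_stream(key_seed)
    for i, word in zip(range(len(out)), rng):
        # A few parallel output bits of the LFSR decide the selection.
        if word & ((1 << select_bits) - 1) == 0:
            out[i] ^= (word >> 8) & msb_mask   # mask only the MSBs
    return out

data = list(range(256))            # stand-in for quantized coefficients
enc = encrypt_subband(data, key_seed=0xACE1)
changed = sum(1 for a, b in zip(data, enc) if a != b)
# Only a small fraction of coefficients is touched, and decryption is the
# same operation with the same key, since XOR is its own inverse:
dec = encrypt_subband(enc, key_seed=0xACE1)
assert dec == data
```

Raising `select_bits` shrinks the encrypted fraction (and the execution time) at the cost of a weaker scrambling effect, which is the trade-off the paper reports.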

The Evaluation of Reconstructed Images in 3D OSEM According to Iteration and Subset Number (3D OSEM 재구성 법에서 반복연산(Iteration) 횟수와 부분집합(Subset) 개수 변경에 따른 영상의 질 평가)

  • Kim, Dong-Seok;Kim, Seong-Hwan;Shim, Dong-Oh;Yoo, Hee-Jae
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.15 no.1
    • /
    • pp.17-24
    • /
    • 2011
  • Purpose: In the nuclear medicine field, high-speed image reconstruction algorithms such as OSEM are now widely used as alternatives to filtered back projection, owing to the rapid development and application of digital computers. However, the optimal parameters have not been clearly established. In this study, the change in image quality according to the number of iterations and subsets was analyzed for a Jaszczak phantom experiment and brain SPECT patient data, using a 3D OSEM reconstruction method applying 3D beam modeling. Materials and Methods: Data from five patients who underwent brain SPECT between August and September 2010 at the nuclear medicine department of ASAN Medical Center were analyzed. Phantom images were acquired from a Jaszczak phantom filled with water and 99mTc (500 MBq) on a Siemens dual-head gamma camera (Symbia T2). For both patient and phantom data, each image was reconstructed with the iteration number set to 1, 4, 8, 12, 24 and 30 and the subset number set to 2, 4, 8, 16 and 32. For each reconstructed image, the coefficient of variation (to estimate image noise), image contrast, and FWHM were computed and compared. Results: In both the patient and phantom data, image contrast and spatial resolution tended to increase linearly with the number of iterations and subsets, but the coefficient of variation did not improve with the increase of the two parameters. In the comparison according to scan time, image contrast and FWHM improved linearly with the iteration and subset numbers for images of 10, 20 and 30 seconds per projection, but the coefficient of variation showed no improvement. 
Conclusion: This experiment confirmed that, as in the existing 1D and 2D OSEM reconstruction methods, image contrast in the 3D OSEM reconstruction applying 3D beam modeling improves linearly with the number of iterations and subsets. However, this was a simple phantom experiment together with results from a limited number of patients, and various other variables may exist; it would therefore be premature to generalize from these results, and the 3D OSEM reconstruction method should be evaluated further through additional experiments.


True Orthoimage Generation from LiDAR Intensity Using Deep Learning (딥러닝에 의한 라이다 반사강도로부터 엄밀정사영상 생성)

  • Shin, Young Ha;Hyung, Sung Woong;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.38 no.4
    • /
    • pp.363-373
    • /
    • 2020
  • During the last decades, numerous studies on orthoimage generation have been carried out. Traditional methods require the exterior orientation parameters of aerial images, precise 3D object modeling data, and a DTM (Digital Terrain Model) to detect and recover occlusion areas. Furthermore, automating the complicated process is a challenging task. In this paper, we propose a new concept of true orthoimage generation using DL (Deep Learning). DL is rapidly being adopted in a wide range of fields. In particular, GAN (Generative Adversarial Network) is one of the DL models used for various tasks in image processing and computer vision. The generator tries to produce results similar to real images, while the discriminator judges fake and real images, until the results are satisfactory. Such a mutually adversarial mechanism improves the quality of the results. Experiments were performed with the GAN-based Pix2Pix model using IR (Infrared) orthoimages and intensity from LiDAR data provided by the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) through the ISPRS (International Society for Photogrammetry and Remote Sensing). Two approaches were implemented: (1) one-step training with intensity data and high-resolution orthoimages, and (2) recursive training with intensity data and color-coded low-resolution intensity images for progressive enhancement of the results. The two methods provided similar quality based on FID (Fréchet Inception Distance) measures. However, if the quality of the input data is close to the target image, better results could be obtained by increasing the number of epochs. This paper is an early experimental study of the feasibility of DL-based true orthoimage generation, and further improvement will be necessary.

A Study on a Quantified Structure Simulation Technique for Product Design Based on Augmented Reality (제품 디자인을 위한 증강현실 기반 정량구조 시뮬레이션 기법에 대한 연구)

  • Lee, Woo-Hun
    • Archives of design research
    • /
    • v.18 no.3 s.61
    • /
    • pp.85-94
    • /
    • 2005
  • Most product designers nowadays use a 3D CAD system as an indispensable design tool, and many new products are developed through a concurrent engineering process. However, it is very difficult for novice designers to get a sense of reality from modeling objects shown on a computer screen. This intangibility problem comes from the lack of haptic interaction and contextual information about the real space, because designers tend to do 3D modeling work only in the virtual space of the 3D CAD system. To address this problem, this research investigates the possibility of an interactive quantified structure simulation for product design using AR (augmented reality), which can register a 3D CAD modeling object in real space. We built a quantified structure simulation system based on AR and conducted a series of experiments measuring how accurately humans perceive and adjust the size of virtual objects under varied conditions in the AR environment. The participants adjusted a virtual cube to a reference real cube within 1.3% relative error (5.3% relative StDev), strong evidence that participants can perceive the size of a virtual object very accurately. Furthermore, we found that it is easier to perceive the size of a virtual object when plenty of real reference objects are present than with few reference objects, and when using an LCD panel rather than an HMD. As a case study exploring potential applications, we applied the simulation system to identify preference characteristics for the appearance design of a home-service robot. There were significant variances in participants' preferred characteristics of the robot appearance, presumably due to the lack of typicality of robot images, and several characteristic groups were segmented by cluster analysis. 
On the other hand, it was an interesting finding that participants had significantly different preference characteristics for robots with arms versus armless robots, and that there was a very strong correlation between the height of the robot and its arm length, as in the human body.
