
Dosimetric Analysis of Respiratory-Gated RapidArc with Varying Gating Window Times (호흡연동 래피드아크 치료 시 빔 조사 구간 설정에 따른 선량 변화 분석)

  • Yoon, Mee Sun;Kim, Yong-Hyeob;Jeong, Jae-Uk;Nam, Taek-Keun;Ahn, Sung-Ja;Chung, Woong-Ki;Song, Ju-Young
    • Progress in Medical Physics
    • /
    • v.26 no.2
    • /
    • pp.87-92
    • /
    • 2015
  • Gated RapidArc may produce dosimetric errors due to the stop-and-go motion of the heavy gantry, which can misalign the gantry restart position and reduce the accuracy of factors critical to RapidArc delivery, such as MLC movement and gantry speed. In this study, the effect of stop-and-go motion in gated RapidArc was analyzed while varying the gating window time, which determines the total number of stop-and-go motions. A total of 10 RapidArc plans for the treatment of liver cancer were prepared. The RPM gating system and a moving phantom were used to set up an accurate gating window time. Two delivery quality assurance (DQA) plans were created for each RapidArc plan: a portal dosimetry plan and a MapCHECK2 plan. The respiratory cycle was set to 4 sec, and the DQA plans were delivered under three gating conditions: no gating, a 1-sec gating window, and a 2-sec gating window. The error between the calculated and measured dose was evaluated based on the pass rate calculated using the gamma evaluation method with 3%/3 mm criteria. The average pass rates in the portal dosimetry plans were 98.72±0.82%, 94.91±1.64%, and 98.23±0.97% for no gating, 1-sec gating, and 2-sec gating, respectively. The average pass rates in the MapCHECK2 plans were 97.80±0.91%, 95.38±1.31%, and 97.50±0.96% for no gating, 1-sec gating, and 2-sec gating, respectively. We verified that the dosimetric accuracy of gated RapidArc increases as the gating window time increases, and that efforts should be made to increase the gating window time during the RapidArc treatment process.
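
The 3%/3 mm gamma evaluation used to score these DQA plans combines a dose-difference criterion with a distance-to-agreement criterion. A minimal 1-D sketch of the idea (assuming globally normalized dose and a shared measurement grid; all names are illustrative, not taken from the study's QA software):

```python
import numpy as np

def gamma_pass_rate(ref, meas, positions, dose_tol=0.03, dist_tol=3.0):
    """1-D global gamma evaluation (3%/3 mm by default).

    ref, meas : reference (calculated) and measured dose profiles on the
    same grid; positions : coordinates in mm.  Returns the percentage of
    measured points with gamma <= 1, i.e. the reported pass rate.
    """
    ref_max = ref.max()                  # global normalization dose
    passed = 0
    for x_m, d_m in zip(positions, meas):
        # gamma at a measured point: minimum over all reference points of
        # the combined dose-difference / distance-to-agreement metric
        dose_term = ((ref - d_m) / (dose_tol * ref_max)) ** 2
        dist_term = ((positions - x_m) / dist_tol) ** 2
        if np.sqrt(dose_term + dist_term).min() <= 1.0:
            passed += 1
    return 100.0 * passed / len(meas)

# toy check: a 0.5 mm spatial shift, well within the 3 mm criterion
x = np.linspace(-20.0, 20.0, 201)        # 0.2 mm grid spacing
calc = np.exp(-x**2 / 50.0)
meas = np.exp(-(x - 0.5)**2 / 50.0)
print(gamma_pass_rate(calc, meas, x))
```

Production QA systems use 2-D/3-D dose grids and interpolation between grid points, but the pass-rate statistic reported in the abstract is computed in this spirit.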

A Study on 21st Century Fashion Market in Korea (21세기 한국패션시장에 대한 연구)

  • Kim, Hye-Young
    • The Journal of Natural Sciences
    • /
    • v.10 no.1
    • /
    • pp.209-216
    • /
    • 1998
  • The results of this study, which divides the 21st-century Korean fashion market into the consumer market, the fashion market, and new marketing strategies, are as follows. The 21st-century consumer market shows, first, a fashion-democracy phenomenon: as many people abandon the unconditional following of fashion, consumers choose and create their own fashion through subjective judgment. Second, the pursuit of total fashion: consumers of the future are likely to set their goals not on differentiating small item products, but on weighing various fashion elements based on their individuality and sense of value. Third, a world-quality orientation: with improved living standards, consumers' fashion mindset emphasizes internationally popular materials, quality, design, and brand image. Fourth, with the advent of neo-rationalism, consumers increasingly emphasize wisdom and solidity in goods strategy, pursuing high-quality fashion while demanding reasonable prices. Fifth, a concept orientation: consumers are shifting toward concepts appropriate to their individual life scenes. Looking ahead to the composition of the 21st-century fashion market: first, the sportive casual zone will draw more attention than any other zone, because interest in sports will grow with increasing leisure time and the expansion of time and space in the 21st century, and ecology will become an important issue in the sports sense owing to human beings' natural inclination toward nature. Second, the down-aging phenomenon will accelerate as a major trend. Third, a retro phenomenon, a concept contrary to digital and high-tech, will become another major trend, its remake, antique, and classic concepts joining the ecology trend in the fashion market. New marketing strategies to cope with the changing fashion market are as follows. First, with the trend toward borderless concepts, the borders between apparel categories are becoming vague; for example, companies offer custom-made products to consumers.
Second, as more enterprises take the "gorilla" path, "guerrillas" aiming at niche markets will emerge. The guerrillas fundamentally value individual creative work and pursue close adherence to the scene with high sensitivity. This polarization, however, becomes a mutually complementary relationship, with gorillas adopting guerrilla movements and guerrillas adopting gorilla high-tech. Third, with the development of value retailing, enterprises pursuing mass merchandising, called category killers, are expanding into new product fields and increasing their share of business. Fourth, through outsourcing, the trend of drawing on external functions, while each enterprise concentrates on its own strengths after examining its own work, is gradually growing stronger. Fifth, with the expansion of non-store sales, internet and CD-ROM sales are being added to communication sales such as catalogues. An eminent American think tank expects that 5-5% of total sales of clothes and home goods in 2010 will be made through non-store sales. Accordingly, to overcome these problems, first, international, global-level marketing; second, the improvement of technology; and third, knowledge-creating marketing are needed.


PST Member Behavior Analysis Based on Three-Dimensional Finite Element Analysis According to Load Combination and Thickness of Grouting Layer (하중조합과 충전층 두께에 따른 3차원 유한요소 해석에 의한 PST 부재의 거동 분석)

  • Seo, Hyun-Su;Kim, Jin-Sup;Kwon, Min-Ho
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.22 no.6
    • /
    • pp.53-62
    • /
    • 2018
  • Following the accelerating speed-up of trains and the rising demand for large-volume transport capacity, not only in Korea but also around the world, track structures for trains have been improving consistently. Precast concrete slab track (PST), a concrete track structure, was developed as a system that can fulfill new safety and economic requirements for railroad traffic. The purpose of this study is to provide the information required for the future development and design of the system by analyzing the behavior of each structural member of the PST system. The stress distributions for different combinations of appropriate loads according to the KRL-2012 train load and the KRC code were analyzed by three-dimensional finite element analysis, and results for different thicknesses of the grouting layer are also presented. Among the structural members, the largest stress occurred in the grouting layer, and this stress changed sensitively with the layer thickness and the load combination. Compared with the case of applying only the vertical KRL-2012 load, the stress increased by 3.3 times on the concrete panel and 14.1 times on the HSB under the starting load and the temperature load. When the thickness of the grouting layer increased from 20 mm to 80 mm, the stress generated in the concrete panel decreased by 4%, while the stress in the grouting layer increased by 24%. As for cracking, tension cracking occurred locally in the grouting layer. These results indicate that more attention should be paid to flexural and tensile behavior under horizontal loads than under vertical loads when developing PST systems. In addition, the safety of each structural member must be ensured by maintaining the thickness of the grouting layer at 40 mm or more.

THERMAL ANALYSIS OF THE DUAL CURED RESIN CEMENTS ACCORDING TO CURING CONDITION (중합조건에 따른 dual cured resin cement의 열분석적 연구)

  • Lee, In-Bog;Chung, Kwan-Hee;Um, Chung-Moon
    • Restorative Dentistry and Endodontics
    • /
    • v.24 no.2
    • /
    • pp.265-285
    • /
    • 1999
  • The purposes of this investigation were to observe the reaction kinetics of five commercial dual-cured resin cements (Bistite, Dual, Scotchbond, Duolink, and Duo) when cured under varying thicknesses of porcelain inlays by chemical or light activation, and to evaluate the effect of a porcelain disc on the rate of polymerization of dual-cured resin cement during light exposure, using thermal analysis. Thermogravimetric analysis (TGA) was used to evaluate the weight change as a function of temperature during a thermal program from 25 to 800°C at a rate of 10°C/min and to measure the inorganic filler weight %. Differential scanning calorimetry (DSC) was used to evaluate the heat of cure (ΔH), the maximum rate of heat output, and the peak heat flow time in dual-cured resin cement systems when the polymerization reaction occurred by chemical cure only or by light exposure through 0 mm, 1 mm, 2 mm, and 4 mm thicknesses of porcelain discs. For the 4 mm porcelain disc, the exposure time was varied from 40 s to 60 s to investigate its effect on the polymerization reaction. To investigate how absorption of the polymerizing light by porcelain materials used as inlays and onlays affects the setting of dual-cured resin cements, the change in light intensity attenuated by 1 mm, 2 mm, and 4 mm porcelain discs was measured using a curing radiometer. The results were as follows. 1. The heat of cure of the resin cements was 34~60 J/g, and significant differences were observed between brands (P<0.001). An inverse relationship was present between the heat of reaction and the filler weight %: the heat of cure decreased with increasing filler content (R=-0.967). The heat of reaction by light cure was greater than that by chemical cure in Bistite, Scotchbond, and Duolink (P<0.05), but there was no statistically significant difference in Dual and Duo (P>0.05). 2.
The polymerization rates of chemical cure and light cure of the five commercially available dual-cured resin cements were found to vary greatly by brand. Setting time, based on peak heat flow time, was shortest in Duo during chemical cure and shortest in Dual during light cure. The cure speed by light exposure was 5~20 times faster than by chemical cure in the dual-cured resin cements, and the cements differed markedly in the ratio of light- and chemically-activated catalysts. 3. The peak heat flow time increased by 1.51, 1.87, and 3.24 times when light cure was done through 1 mm, 2 mm, and 4 mm thick porcelain discs, respectively. The exposure times recommended by the manufacturers were insufficient to compensate for the attenuation of light by the 4 mm thick porcelain disc. 4. A strong inverse relationship was observed between peak heat flow and peak time in chemical cure (R=0.951), and a strong positive correlation was observed between peak heat flow and the heat of cure in light cure (R=0.928). No correlation was present between filler weight % or heat of cure and peak time. 5. The thermal decomposition of the resin cements occurred primarily between 300°C and 480°C, with maximum decomposition rates at 335°C and 440°C.


The Risk Assessment of the Fire Occurrence According to Urban Facilities in Jinju-si (진주시 도시시설물별 화재발생 위험도 평가)

  • Bae, Gyu Han;Won, Tae Hong;Yoo, Hwan Hee
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.24 no.1
    • /
    • pp.43-50
    • /
    • 2016
  • Urbanization in Korea has advanced rapidly, and various facilities have consequently been concentrated in urban areas at high speed along with the growing urban population. Accordingly, damage has occurred from a variety of disasters. In particular, among social disasters, fire has caused the most severe damage in urban areas along with traffic accidents. In 2015, 44,432 fires occurred in Korea, killing 253 people and generating property damage of 4,50 billion won. Despite efforts to reduce such damage, the danger of fire remains high. This study therefore collected fire data generated from 2007 to 2014 from the Jinju Fire Department and the National Fire Data System (NFDS), and calculated fire risk by analyzing the clustering of fire cases and facilities in Jinju-si based on the current facilities DB offered by the Ministry of Government Administration and Home Affairs. As a result, fire-occurrence risk was classified into four grades under the standards of the US Society of Fire Protection Engineers (SFPE). Business, entertainment, and automobile facilities were classified as the highest grade, A; detached houses, apartment houses, educational facilities, sales facilities, accommodation, assembly facilities, medical facilities, industrial facilities, and life-service facilities were classified as grade U; other facilities were classified as grade EU; and hazardous-production facilities were classified as grade BEU, the lowest. In addition, when the standard was set by loss of life, the highest-risk facilities were hazardous-production facilities, whereas when the standard was set by property damage, assembly facilities and industrial facilities showed the highest risk.
This study is thus expected to be effectively utilized in establishing fire-reduction measures for facilities distributed in urban space, by calculating risk grades for occurrence frequency, casualties, and property damage through the classification of urban fires according to facility type.
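
A grading scheme of this kind is essentially a frequency/severity risk-matrix lookup. A toy sketch in that spirit (the thresholds and the grade mapping below are illustrative assumptions, not the study's actual criteria):

```python
def fire_risk_grade(fires_per_year, casualties_per_fire):
    """Toy risk-matrix lookup echoing the four SFPE-style grades
    (A worst, then U, EU, BEU).  All thresholds are illustrative."""
    # crude frequency classes
    if fires_per_year >= 10:
        freq = 2          # frequent
    elif fires_per_year >= 1:
        freq = 1          # occasional
    else:
        freq = 0          # rare
    # crude severity classes
    if casualties_per_fire >= 0.1:
        sev = 2           # high severity
    elif casualties_per_fire > 0:
        sev = 1
    else:
        sev = 0
    # higher combined score -> riskier grade
    return ["BEU", "EU", "U", "U", "A"][freq + sev]

print(fire_risk_grade(25, 0.2))   # frequent and severe facility type
print(fire_risk_grade(0.3, 0.0))  # rare and harmless facility type
```

The actual study derives its grades from clustered per-facility statistics on frequency, casualties, and property damage rather than fixed cutoffs.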

Simulation of Detailed Wind Flow over a Locally Heated Mountain Area Using a Computational Fluid Dynamics Model, CFD_NIMR_SNU - a fire case at Mt. Hwawang - (계산유체역학모형 CFD_NIMR_SNU를 이용한 국지적으로 가열된 산악지역의 상세 바람 흐름 모사 - 화왕산 산불 사례 -)

  • Koo, Hae-Jung;Choi, Young-Jean;Kim, Kyu-Rang;Byon, Jae-Young
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.11 no.4
    • /
    • pp.192-205
    • /
    • 2009
  • The unexpected wind over Mt. Hwawang on 9 February 2009 was deadly: many spectators were watching a traditional event of burning dried grasses when the fire went out of control due to the wind. We analyzed the fatal wind with wind-flow simulations over a digitized complex terrain of the mountain, with a localized heating area, using a three-dimensional computational fluid dynamics model, CFD_NIMR_SNU (Computational Fluid Dynamics_National Institute of Meteorological Research_Seoul National University). Three levels of fire intensity were simulated: no fire, and surface temperatures of 300°C and 600°C at the burning site. The surface heat accelerated the vertical wind speed by as much as 0.7 m/s (for 300°C) and 1.1 m/s (for 600°C) at the center of the fire. Turbulent kinetic energy was increased both by the heat itself and by the increased mechanical forcing generated in turn by the thermal convection. The heating, together with the complex terrain and the strong boundary wind, induced the unexpectedly strong, turbulent wind conditions at the mountain. The CFD_NIMR_SNU model provided valuable analysis data for understanding the consequences of the fatal mountain fire. The simulations suggest that the site was calm when the fire was set, owing to the elevated terrain on the windward side. This suppression of the wind was easily reversed once the fire started: the fire caused an updraft of hot air which, combined with the strong boundary wind, produced the strong turbulence that resulted in many casualties. The model can be used, in conjunction with a mesoscale weather model, for turbulence forecasting over small areas affected by surface fire, to help fire prevention in the field.

The Performance Bottleneck of Subsequence Matching in Time-Series Databases: Observation, Solution, and Performance Evaluation (시계열 데이타베이스에서 서브시퀀스 매칭의 성능 병목 : 관찰, 해결 방안, 성능 평가)

  • Kim, Sang-Wook
    • Journal of KIISE:Databases
    • /
    • v.30 no.4
    • /
    • pp.381-396
    • /
    • 2003
  • Subsequence matching is an operation that finds, from time-series databases, subsequences whose changing patterns are similar to a given query sequence. This paper points out the performance bottleneck in subsequence matching and then proposes an effective method that significantly improves the performance of entire subsequence matching by resolving that bottleneck. First, we analyze the disk access and CPU processing times required during the index searching and post-processing steps through preliminary experiments. Based on the results, we show that the post-processing step is the main performance bottleneck in subsequence matching, and we claim that its optimization is a crucial issue overlooked in previous approaches. To resolve the bottleneck, we propose a simple but quite effective method that performs the post-processing step in the optimal way: by rearranging the order in which candidate subsequences are compared with the query sequence, our method completely eliminates the redundant disk accesses and CPU processing incurred in the post-processing step. We formally prove that our method is optimal and does not incur any false dismissals. We show the effectiveness of our method by extensive experiments. The results show that our method achieves a significant speed-up in the post-processing step: 3.91 to 9.42 times when using a data set of real-world stock sequences, and 4.97 to 5.61 times when using data sets of a large volume of synthetic sequences. The results also show that our method reduces the weight of the post-processing step in entire subsequence matching from about 90% to less than 70%, which implies that it successfully resolves the performance bottleneck. As a result, our method provides excellent performance in entire subsequence matching: the experimental results reveal that it is 3.05 to 5.60 times faster than the previous method when using a data set of real-world stock sequences, and 3.68 to 4.21 times faster when using data sets of a large volume of synthetic sequences.
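
The core idea, reordering candidates so that each disk page holding candidate subsequences is read exactly once, can be sketched as follows (PAGE_SIZE and the page layout are illustrative assumptions; the paper's actual method operates on the time-series pages referenced by index-search candidates):

```python
PAGE_SIZE = 256  # subsequence offsets per disk page (illustrative assumption)

def plan_post_processing(candidates):
    """Reorder candidate subsequence offsets so that all candidates lying
    on the same disk page are verified together in one sequential pass,
    eliminating redundant page reads in the post-processing step.
    Returns (ordered candidates, number of page fetches)."""
    by_page = {}
    for offset in candidates:
        by_page.setdefault(offset // PAGE_SIZE, []).append(offset)
    ordered, fetches = [], 0
    for page in sorted(by_page):      # sequential scan over needed pages
        fetches += 1                  # each page is read exactly once
        ordered.extend(sorted(by_page[page]))
    return ordered, fetches

# unordered candidates, as an index search might emit them
cands = [530, 10, 512, 15, 1030, 520, 12]
ordered, fetches = plan_post_processing(cands)
print(ordered, fetches)   # 3 page fetches instead of up to 7
```

Processing candidates in their original, index-emitted order could touch the same page repeatedly; grouping by page bounds the disk cost by the number of distinct pages.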

Design of MAHA Supercomputing System for Human Genome Analysis (대용량 유전체 분석을 위한 고성능 컴퓨팅 시스템 MAHA)

  • Kim, Young Woo;Kim, Hong-Yeon;Bae, Seungjo;Kim, Hag-Young;Woo, Young-Choon;Park, Soo-Jun;Choi, Wan
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.2
    • /
    • pp.81-90
    • /
    • 2013
  • During the past decade, many changes have been attempted and new technologies continue to be developed in the computing area. The brick walls in computing, especially the power wall, have shifted the computing paradigm from computing hardware, including processors and system architecture, to programming environments and application usage. The high-performance computing (HPC) area in particular has experienced dramatic changes and is now considered a key to national competitiveness. In the late 2000s, many leading countries rushed to develop exascale supercomputing systems, and as a result systems of tens of PetaFLOPS are now prevalent. Korea's ICT is well developed and the country is considered one of the world's leaders, but not in supercomputing. In this paper, we describe the architectural design of the MAHA supercomputing system, which aims at a 300 TeraFLOPS system for bioinformatics applications such as human genome analysis and protein-protein docking. The MAHA supercomputing system consists of four major parts: computing hardware, a file system, system software, and bio-applications. It is designed to utilize heterogeneous computing accelerators (co-processors such as GPGPUs and MICs) to obtain better performance/$, performance/area, and performance/power. To provide high-speed data movement and large capacity, the MAHA file system is designed with an asymmetric cluster architecture and consists of a metadata server, data servers, and client file systems on top of SSD and MAID storage servers. The MAHA system software is designed to be user-friendly and easy to use, based on integrated system-management components such as bio-workflow management, integrated cluster management, and heterogeneous resource management. The MAHA supercomputing system was first installed in December 2011, with a theoretical performance of 50 TeraFLOPS and a measured performance of 30.3 TeraFLOPS on 32 computing nodes. The system will be upgraded to 100 TeraFLOPS in January 2013.

Integrated Rotary Genetic Analysis Microsystem for Influenza A Virus Detection

  • Jung, Jae Hwan;Park, Byung Hyun;Choi, Seok Jin;Seo, Tae Seok
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2013.08a
    • /
    • pp.88-89
    • /
    • 2013
  • A variety of influenza A viruses from animal hosts are continuously prevalent throughout the world, causing human epidemics that result in millions of human infections and enormous industrial and economic damage. Early diagnosis of such pathogens is therefore of paramount importance for biomedical examination and public healthcare screening. To address this issue, we propose a fully integrated rotary genetic analysis system, called the Rotary Genetic Analyzer, for high-speed on-site detection of influenza A viruses. The Rotary Genetic Analyzer is made up of four parts: a disposable microchip, a servo motor for precise and high-rate spinning of the chip, thermal blocks for temperature control, and a miniaturized optical fluorescence detector, as shown in Fig. 1. A thermal block made from duralumin is integrated with a film heater at the bottom and a resistance temperature detector (RTD) in the middle. For efficient RT-PCR, three thermal blocks are placed on the rotary stage, the temperature of each block corresponding to one step of the thermal cycle, namely 95°C (denaturation), 58°C (annealing), and 72°C (extension). Rotary RT-PCR was performed to amplify the target gene, monitored by the optical fluorescence detector above the extension block. The disposable microdevice (10 cm diameter) consists of a solid-phase-extraction-based sample pretreatment unit, a bead chamber, and a 4 μL PCR chamber, as shown in Fig. 2. The microchip is fabricated from a patterned polycarbonate (PC) sheet of 1 mm thickness and a PC film of 130 μm thickness, thermally bonded at 138°C using acetone vapour. Silica-treated glass microbeads of 150~212 μm diameter are introduced into the sample pretreatment chambers and held in place by a weir structure to construct the solid-phase extraction system. Fig. 3 shows strobed images of the sequential loading of three samples.
The three samples were loaded into the reservoirs simultaneously (Fig. 3A); the influenza A H3N2 viral RNA sample was then loaded at 5000 RPM for 10 sec (Fig. 3B). Washing buffer followed at 5000 RPM for 5 min (Fig. 3C), and the angular frequency was decreased to 100 RPM for siphon priming of the PCR cocktail into the channel, as shown in Fig. 3D. Finally, the PCR cocktail was loaded into the bead chamber at 2000 RPM for 10 sec, and the speed was then increased to 5000 RPM for 1 min to recover as much of the PCR cocktail containing the RNA template as possible (Fig. 3E). In this system, the wastes from the RNA sample and washing buffer were transported to the waste chamber, which is exactly filled through precise optimization, so that the PCR cocktail could then be transported to the PCR chamber. Fig. 3F shows the final image of the sample pretreatment: the PCR cocktail containing the RNA template is successfully isolated from the waste. To detect the influenza A H3N2 virus, the purified RNA with the PCR cocktail in the PCR chamber was amplified following the RNA capture performed on the proposed microdevice. The fluorescence images at cycles 0 and 40 are shown in Fig. 4A; the fluorescence signal at cycle 40 was drastically increased, confirming detection of the influenza A H3N2 virus. Real-time profiles were successfully obtained using the optical fluorescence detector, as shown in Fig. 4B. Rotary PCR and off-chip PCR were compared with the same amount of influenza A H3N2 virus; the Ct value of the Rotary PCR was smaller than that of the off-chip PCR, without contamination. The whole process of sample pretreatment and RT-PCR could be accomplished in 30 min on the fully integrated Rotary Genetic Analyzer. We have thus demonstrated a fully integrated and portable Rotary Genetic Analyzer for detecting the gene expression of influenza A virus, with 'sample-in-answer-out' capability including sample pretreatment, rotary amplification, and optical detection; target gene amplification was monitored in real time on the integrated system.


A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference
    • /
    • 2003.07a
    • /
    • pp.60-61
    • /
    • 2003
  • A new approach to reducing the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms with a diffraction efficiency of 75.8% and a uniformity of 5.8% are demonstrated both in computer simulation and experimentally. Computer-generated holograms (CGHs) with high diffraction efficiency and design flexibility have recently been widely developed for applications such as optical information processing, optical computing, and optical interconnection. Among the proposed optimization methods, the GA has become popular due to its capability of reaching a nearly global optimum. However, there is a drawback to consider when using the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One major reason the GA's operation may be time-intensive is the expense of computing the cost function, which must Fourier-transform the parameters encoded on the hologram into the fitness value. To remedy this drawback, artificial neural networks (ANNs) have been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications demanding high precision. We therefore attempt a new approach combining the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is a process of iteration, including selection, crossover, and mutation operators [2]. It is worth noting that the evaluation of the cost function, with the aim of selecting better holograms, plays an important role in the implementation of the GA. However, this evaluation process wastes much time Fourier-transforming the encoded parameters on the hologram into the value to be solved; depending on the speed of the computer, this process can last up to ten minutes.
It is more effective if, instead of merely generating random holograms in the initial step, a set of approximately desired holograms is employed. The initial population then needs fewer trial holograms, which reduces the GA's computation time. Accordingly, a hybrid algorithm that uses a trained neural network to initiate the GA's procedure is proposed: the initial population contains fewer random holograms, compensated by approximately desired ones. Figure 1 is the flowchart of the hybrid algorithm in comparison with the classical GA. The procedure of synthesizing a hologram on a computer is divided into two steps. First, the simulation of holograms based on the ANN method [1] is carried out to acquire approximately desired holograms. With a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained artificial neural network lets us attain approximately desired holograms, in fairly good agreement with the theory. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the operation of the hybrid algorithm and the GA is the same except for the modified initial step. Hence, the parameter values verified in Ref. [2], such as the probabilities of crossover and mutation, the tournament size, and the crossover block size, remain unchanged, apart from the reduced population size. A reconstructed image of 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the iteration number is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is also given in Fig. 2.
With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared to the GA method, the hybrid algorithm demonstrates its efficiency. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows pictures of diffracted patterns of the letter "0" from holograms generated using the hybrid algorithm; a diffraction efficiency of 75.8% and a uniformity of 5.8% are measured, so the simulation and experimental results are in fairly good agreement. In this paper, the genetic algorithm and a neural network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared to the GA method while still achieving holograms of high diffraction efficiency and uniformity. This work was supported by grant No. m01-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.
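
The hybrid scheme, seeding the GA's initial population with "approximately desired" individuals instead of purely random ones, can be sketched on a toy bit-string problem (the target pattern, population size, and rates below are illustrative; simple bit-matching stands in for the Fourier-based diffraction cost function):

```python
import random

random.seed(0)
TARGET = [1, 0] * 16            # stand-in for a desired 32-pixel binary phase pattern

def fitness(h):
    """Number of matching bits; stands in for the Fourier-based cost."""
    return sum(a == b for a, b in zip(h, TARGET))

def evolve(pop, generations, p_mut=0.01):
    """Plain GA loop: tournament selection, one-point crossover, mutation."""
    for _ in range(generations):
        nxt = []
        while len(nxt) < len(pop):
            a = max(random.sample(pop, 3), key=fitness)   # tournament of 3
            b = max(random.sample(pop, 3), key=fitness)
            cut = random.randrange(1, len(TARGET))        # one-point crossover
            child = a[:cut] + b[cut:]
            nxt.append([g ^ (random.random() < p_mut) for g in child])
        pop = nxt
    return max(pop, key=fitness)

def near_target():
    """An 'approximately desired' individual, as a trained ANN might supply."""
    h = TARGET[:]
    h[random.randrange(len(h))] ^= 1      # one bit away from the target
    return h

random_pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
seeded_pop = [near_target() for _ in range(20)]

best_seeded = evolve(seeded_pop, generations=20)
best_random = evolve(random_pop, generations=20)
print(fitness(best_seeded), fitness(best_random))
```

Because the seeded population starts only a few bits from the optimum, far fewer generations (and hence far fewer cost-function evaluations) are needed, which mirrors the computation-time reduction reported for the hybrid algorithm.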
