• Title/Summary/Keyword: computing speed


A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference / 2003.07a / pp.60-61 / 2003
  • A new approach to reducing the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms with a diffraction efficiency of 75.8% and a uniformity of 5.8% are proven in computer simulation and demonstrated experimentally. Recently, computer-generated holograms (CGHs) with high diffraction efficiency and design flexibility have been widely developed for applications such as optical information processing, optical computing, and optical interconnection. Among the proposed optimization methods, the GA has become popular due to its capability of reaching a nearly global optimum. However, there exists a drawback to consider when using the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One of the major reasons the GA's operation can be time intensive is the expense of evaluating the cost function, which must Fourier-transform the parameters encoded on the hologram into a fitness value. To remedy this drawback, the artificial neural network (ANN) has been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications requiring high precision. We therefore attempt to find a new approach that combines the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is essentially an iterative process, including selection, crossover, and mutation operators [2]. It is worth noting that the evaluation of the cost function, with the aim of selecting better holograms, plays an important role in the implementation of the GA. However, this evaluation process spends much time Fourier-transforming the parameters encoded on the hologram into the value to be evaluated. Depending on the speed of the computer, this process can last up to ten minutes. It is more effective if, instead of merely generating random holograms in the initial step, a set of approximately desired holograms is employed: the initial population then contains fewer trial holograms, which reduces the GA's computation time. Accordingly, a hybrid algorithm that uses a trained neural network to initialize the GA's procedure is proposed: the initial population contains fewer random holograms and is supplemented by approximately desired ones. Figure 1 is a flowchart of the hybrid algorithm in comparison with the classical GA. The procedure for synthesizing a hologram on a computer is divided into two steps. First, simulation of holograms based on the ANN method [1] is carried out to acquire approximately desired holograms. With a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained artificial neural network enables us to attain approximately desired holograms, which are in fairly good agreement with what we suggested in the theory. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the operation of the hybrid algorithm and that of the GA are the same except for the modified initial step. Hence, the values verified in Ref. [2] for parameters such as the probabilities of crossover and mutation, the tournament size, and the crossover block size remain unchanged, apart from the reduced population size.
A reconstructed image with 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the number of iterations is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is also evaluated, as shown in Fig. 2. With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared to the GA method, the hybrid algorithm demonstrates its efficient performance. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows pictures of the diffracted patterns of the letter "0" from the holograms generated using the hybrid algorithm. A diffraction efficiency of 75.8% and a uniformity of 5.8% are measured. The simulation and experimental results are in fairly good agreement with each other. In this paper, the genetic algorithm and a neural network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared to the GA method while still achieving holograms of high diffraction efficiency and uniformity. This work was supported by grant No. mOl-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.
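
The hybrid initialization step described above lends itself to a short illustration. The sketch below, in Python with NumPy, shows a GA whose initial population mixes ANN-predicted holograms with random ones, and whose fitness Fourier-transforms the binary phase hologram, mirroring the cost evaluation the abstract describes. The sizes, the `ann_seeds` argument standing in for the trained network's outputs, and the uniform crossover/mutation scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fitness(hologram, target):
    """Cost evaluation: Fourier-transform the binary phase hologram and
    compare the reconstructed intensity with the desired pattern."""
    field = np.exp(1j * np.pi * hologram)           # binary phase: 0 or pi
    recon = np.abs(np.fft.fft2(field)) ** 2
    recon /= recon.sum()
    return -np.sum((recon - target) ** 2)           # higher is better

def init_population(pop_size, shape, ann_seeds):
    """Hybrid step: seed the population with ANN-predicted holograms,
    then pad with random ones, instead of starting fully at random."""
    pop = [np.asarray(h) for h in ann_seeds[:pop_size]]
    while len(pop) < pop_size:
        pop.append(np.random.randint(0, 2, shape))
    return pop

def evolve(target, ann_seeds, pop_size=30, iters=2000, pc=0.75, pm=0.001):
    """GA loop with the parameter values quoted in the abstract."""
    shape = target.shape
    pop = init_population(pop_size, shape, ann_seeds)
    for _ in range(iters):
        pop.sort(key=lambda h: fitness(h, target), reverse=True)
        elite = pop[: pop_size // 2]                # selection
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = [elite[i] for i in np.random.randint(len(elite), size=2)]
            mask = np.random.rand(*shape) < pc      # uniform crossover
            child = np.where(mask, a, b)
            flip = np.random.rand(*shape) < pm      # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        pop = elite + children
    return pop[0]
```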


Greedy Precedent Frame Transmission Technique in VOD System (VoD 시스템에서 탐욕적 선행 전송 기법)

  • Lee, Joa-Hyoung;Jung, In-Bum
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.3 / pp.603-612 / 2010
  • Recently, with advances in computing and networking techniques, high-speed internet has become widespread; however, it is still hard to stream media that requires high network bandwidth over the internet. Previous VoD system research on streaming over the internet has mainly proposed techniques that control the QoS (Quality of Service) of the media in proportion to the network status. While this can be a solution for the service provider, a service user who wants constant QoS may not be satisfied with variable QoS. In this paper, we propose the greedy precedent frame transmission technique, GPFT, to guarantee constant QoS. In GPFT, the streaming VoD server prefetches precedent frames and transmits them greedily by increasing the frame transmission rate while the available network bandwidth is high. GPFT then uses the prefetched precedent frames to guarantee the QoS while the available network bandwidth is low. The experimental results show that the proposed GPFT can guarantee constant QoS by prefetching frames adaptively to the network bandwidth, exploiting the characteristics of the video stream.
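
A minimal sketch may make the greedy transmission idea concrete: per playback interval the server computes how many frames the measured bandwidth can carry and sends precedent (future) frames ahead of schedule; when bandwidth drops, playback continues from frames already delivered. The tick-based model and function names are assumptions for illustration, not the paper's protocol.

```python
def gpft_stream(frames, playback_rate, measure_bandwidth, send):
    """Greedy precedent frame transmission, one playback tick at a time."""
    next_to_send = 0
    for tick in range(len(frames)):
        bw = measure_bandwidth()              # available bandwidth right now
        budget = int(bw // playback_rate)     # frames deliverable this tick
        while budget > 0 and next_to_send < len(frames):
            send(frames[next_to_send])        # frames beyond `tick` are
            next_to_send += 1                 # precedent frames, sent early
            budget -= 1
        # When bw is low (budget 0), constant QoS holds as long as
        # next_to_send > tick: this tick's frame was already prefetched.
```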

Performance Analysis of Soft Handoff Methods using Simulation (시뮬레이션을 이용한 소프트 핸드오프 방식의 성능 분석)

  • Han, Kyung-Sook;Kim, Tae-Jung
    • Journal of KIISE: Computing Practices and Letters / v.6 no.4 / pp.411-420 / 2000
  • The performance of soft handoff in CDMA mobile communication systems is determined by several factors, such as handoff-related system parameters (T_ADD, T_DROP, T_COMP, T_TDROP), mobile stations' mobility, service areas, and the capacity of base stations. Due to the importance of handoffs in mobile communications, several methods have been proposed, with computer simulations used to demonstrate their efficiency. Different assumptions about the factors mentioned above often produce different simulation results; therefore, the credibility of a simulation result is directly determined by the objectivity of the assumptions made in the simulation. This paper proposes a new soft handoff method that controls the handoff delay time based on a mobile station's speed, and compares it with the current method of CDMA systems. The simulation results show that the new method is much more efficient for mobile stations that are free in their moving direction and space than for those restricted in their moving direction and space. In addition, the results show that even the same handoff method may produce different simulation results depending on whether the service area is modeled as a two-dimensional or a three-dimensional space. These results indicate the importance of suitable models of user mobility, especially the movement types and the space allowed for mobile stations, which have been neglected in simulation studies of mobile communications.
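
As a toy illustration of the speed-based control described above, the sketch below scales the pilot drop timer (T_TDROP) with the mobile station's estimated speed, so faster stations release weakening pilots sooner. The scaling rule, reference speed, and function names are invented for illustration; the paper's actual control law is not reproduced here.

```python
def speed_scaled_tdrop(base_tdrop_s, speed_mps, ref_speed_mps=15.0):
    """Shorten the drop timer for fast mobiles, lengthen it for slow ones
    (clamped so it never exceeds twice the base value)."""
    factor = ref_speed_mps / max(speed_mps, 0.1)
    return base_tdrop_s * min(2.0, factor)

def should_drop_pilot(strength_db, t_drop_db, weak_since_s, now_s,
                      base_tdrop_s, speed_mps):
    """Drop the pilot only if it has stayed below T_DROP for the whole
    speed-adjusted T_TDROP interval."""
    if strength_db >= t_drop_db:
        return False                      # pilot recovered above T_DROP
    elapsed = now_s - weak_since_s
    return elapsed >= speed_scaled_tdrop(base_tdrop_s, speed_mps)
```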


Performance Analysis of TCAM-based Jumping Window Algorithm for Snort 2.9.0 (Snort 2.9.0 환경을 위한 TCAM 기반 점핑 윈도우 알고리즘의 성능 분석)

  • Lee, Sung-Yun;Ryu, Ki-Yeol
    • Journal of Internet Computing and Services / v.13 no.2 / pp.41-49 / 2012
  • Wireless network support and the extended mobile network environment, together with the exponential growth of smartphone users, allow us to use the network anytime, anywhere. Malicious attacks such as distributed DoS, internet worms, and e-mail viruses over high-speed networks are increasing, and the number of detection patterns is growing dramatically as network traffic rises with this development of internet technology. To detect these patterns in intrusion detection systems, earlier research proposed an efficient algorithm called the jumping window algorithm and used it to analyze approximately 2,000 patterns in Snort 2.1.0, the best-known intrusion detection system. However, in terms of the number of TCAM lookups and TCAM memory efficiency, it is inappropriate to apply that result in the current environment (Snort 2.9.0), which has longer and far more numerous patterns, because the jumping window algorithm is affected by the number of patterns and the pattern length. In this paper, we simulate the number of TCAM lookups and the required TCAM size in the jumping window with approximately 8,100 patterns from the Snort 2.9.0 rules, and then analyze the simulation results. While Snort 2.1.0 requires a 16-byte window and a 9 Mb TCAM for the most effective performance, as proposed in the previous research, for the Snort 2.9.0 environment we suggest a 16-byte window and four cascaded 18 Mb TCAMs.
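
The lookup-count/memory trade-off the abstract analyzes can be approximated with back-of-the-envelope arithmetic: with a w-byte jumping window, a scanner needs roughly ceil(N/w) TCAM lookups per N-byte payload, while each pattern must be stored at every alignment offset within the window, which is what ties TCAM size to the number and length of patterns. The sketch below encodes this rough accounting; the pattern sets are hypothetical, and this is an illustrative approximation, not the paper's simulation.

```python
import math

def lookups_per_payload(payload_len, window):
    """The window jumps `window` bytes per TCAM access."""
    return math.ceil(payload_len / window)

def approx_tcam_entries(pattern_lengths, window):
    """Rough entry count: long patterns split into window-sized fragments,
    each replicated once per alignment offset within the window."""
    total = 0
    for plen in pattern_lengths:
        fragments = math.ceil(plen / window)
        total += fragments * window
    return total

# Toy comparison in the spirit of the abstract: a larger, longer pattern
# set (Snort 2.9.0-scale) inflates the required entries for a fixed window.
old_set = [12] * 2000        # hypothetical 2,000 short patterns
new_set = [24] * 8100        # hypothetical 8,100 longer patterns
print(approx_tcam_entries(old_set, 16), approx_tcam_entries(new_set, 16))
```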

Intelligent Railway Detection Algorithm Fusing Image Processing and Deep Learning for the Prevention of Unusual Events (철도 궤도의 이상상황 예방을 위한 영상처리와 딥러닝을 융합한 지능형 철도 레일 탐지 알고리즘)

  • Jung, Ju-ho;Kim, Da-hyeon;Kim, Chul-su;Oh, Ryum-duck;Ahn, Jun-ho
    • Journal of Internet Computing and Services / v.21 no.4 / pp.109-116 / 2020
  • With the advent of high-speed railways, railways have become one of the most frequently used means of transportation at home and abroad. In environmental terms, their carbon dioxide emissions are lower and their energy efficiency is higher than those of other means of transportation. As interest in railways increases, railway safety is one of the important concerns. Among safety issues, visual abnormalities occur when various obstacles such as animals and people suddenly appear in front of the train on the track. To prevent such accidents, detecting the rail track is one of the basic required capabilities. Images can be collected through cameras installed along railways, and railway rails can be detected either by a traditional method or by a deep learning algorithm. The traditional method has difficulty detecting rails accurately due to the various noise around them, whereas a deep learning algorithm can detect them accurately; the proposed approach combines the two algorithms to detect the exact rail. The accuracy of the proposed algorithm's railway rail detection is evaluated on the collected data.
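
One plausible shape for such a fusion, assuming OpenCV for the classical stage, is sketched below: Hough lines from edge detection supply rail candidates, and a learned per-pixel rail mask filters out noise-induced false lines. The `model.predict` call is a hypothetical placeholder, since the abstract does not specify the network; the thresholds are illustrative.

```python
import cv2
import numpy as np

def classical_rail_candidates(bgr):
    """Traditional pipeline: denoise, edge-detect, extract long lines."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blur, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=120, maxLineGap=10)
    return [] if lines is None else [l[0] for l in lines]

def fused_rails(bgr, model):
    """Keep only classical candidates that the learned model also scores
    as rail, suppressing false lines caused by trackside noise."""
    mask = model.predict(bgr)     # hypothetical per-pixel rail probability
    keep = []
    for x1, y1, x2, y2 in classical_rail_candidates(bgr):
        mx, my = (x1 + x2) // 2, (y1 + y2) // 2
        if mask[my, mx] > 0.5:
            keep.append((x1, y1, x2, y2))
    return keep
```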

A Study on Procurement Audit Integration Real Time Monitoring System Using Process Mining Under Big Data Environment (빅 데이터 환경하에서 프로세스 마이닝을 이용한 구매 감사 통합 실시간 모니터링 시스템에 대한 연구)

  • Yoo, Young-Seok;Park, Han-Gyu;Back, Seung-Hoon;Hong, Sung-Chan
    • Journal of Internet Computing and Services / v.18 no.3 / pp.71-83 / 2017
  • In recent years, various research activities have actively progressed toward applying the great strengths of process mining to the auditing work of business organizations. On the other hand, there is insufficient research on the systematic and efficient analysis, using process mining, of the massive data generated in big data environments, or on proactive risk-management monitoring from the audit side, which is one of the important management activities of a corporate organization. In this study, we implement a Hadoop-based integrated real-time monitoring system for internal audit in order to detect abnormal symptoms and prevent accidents in advance. Through the integrated real-time monitoring system for procurement audit, we aim to strengthen the delivery management of ordered purchasing materials, reduce purchasing costs, manage competing suppliers, prevent fraud, comply with regulations, and adhere to the internal accounting control system. As a result, we can provide immediately actionable information through enhanced integrated real-time purchase-audit monitoring, analyzing data efficiently with process mining on a Hadoop-based system. From an integrated viewpoint, it becomes possible to manage business status; by processing a large amount of work at high speed, faster than continuous monitoring, the system improves the quality of the purchase audit and supports innovation of the purchasing process.
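
As a minimal illustration of what such monitoring can check, the sketch below performs a toy conformance test: each purchase case's event trace is compared with an expected procurement order, and deviating cases are flagged as abnormal symptoms. The activity names and the simple subsequence rule are assumptions; real process mining on Hadoop involves discovered process models and distributed log processing.

```python
EXPECTED_FLOW = ["order", "approval", "delivery", "invoice", "payment"]

def conforms(trace, expected=EXPECTED_FLOW):
    """True if the expected activities occur in order within the trace
    (extra events are tolerated; skips and reorderings are not)."""
    events = iter(trace)
    return all(any(ev == step for ev in events) for step in expected)

def monitor(event_stream):
    """Consume (case_id, activity) events; yield cases that complete
    payment without having followed the expected procurement flow."""
    cases = {}
    for case_id, activity in event_stream:
        cases.setdefault(case_id, []).append(activity)
        if activity == "payment" and not conforms(cases[case_id]):
            yield case_id             # abnormal symptom for the auditor
```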

Single Trace Analysis against HyMES by Exploitation of Joint Distributions of Leakages (HyMES에 대한 결합 확률 분포 기반 단일 파형 분석)

  • Park, ByeongGyu;Kim, Suhri;Kim, Hanbit;Jin, Sunghyun;Kim, HeeSeok;Hong, Seokhie
    • Journal of the Korea Institute of Information Security & Cryptology / v.28 no.5 / pp.1099-1112 / 2018
  • The field of post-quantum cryptography (PQC) is an active area of research as cryptographers look for public-key cryptosystems that can resist quantum adversaries. Among the categories in PQC, code-based cryptosystems provide high security along with efficiency. Recent works on code-based cryptosystems focus on side-channel-resistant implementations, since previous works have indicated possible side-channel vulnerabilities in existing algorithms. In this paper, we recover the secret key of HyMES (Hybrid McEliece Scheme) using a single power consumption trace. HyMES is a variant of the McEliece cryptosystem that provides smaller keys and faster encryption and decryption. During decryption, the algorithm computes the parity-check matrix, which is required when computing the syndrome. We analyze HyMES using the fact that the joint distributions of the nonlinear functions used in this process depend on the secret key. To the best of our knowledge, we are the first to propose side-channel analysis based on joint distributions of leakages for a public-key cryptosystem.
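
The core statistical idea, that joint distributions of intermediate values depend on the key, can be caricatured in a few lines. The sketch below models the leakage of two nonlinear intermediates as Hamming weights and ranks key guesses by how well the modeled joint histogram matches the observed one; `f1` and `f2` are hypothetical stand-ins for steps of the parity-check-matrix computation, and nothing here reproduces the paper's actual distinguisher.

```python
from collections import Counter

def hamming_weight(x):
    return bin(x).count("1")

def joint_hist(pairs):
    """Empirical joint distribution of (leak1, leak2) pairs."""
    counts = Counter(pairs)
    n = sum(counts.values())
    return {k: v / n for k, v in counts.items()}

def score_guess(observed_pairs, key_guess, f1, f2, inputs):
    """Higher score = modeled joint distribution closer to the observed one
    (negative L1 distance between the two histograms)."""
    modeled = joint_hist([(hamming_weight(f1(x, key_guess)),
                           hamming_weight(f2(x, key_guess))) for x in inputs])
    observed = joint_hist(observed_pairs)
    support = set(modeled) | set(observed)
    return -sum(abs(modeled.get(k, 0.0) - observed.get(k, 0.0))
                for k in support)

def recover_key(observed_pairs, candidates, f1, f2, inputs):
    return max(candidates,
               key=lambda k: score_guess(observed_pairs, k, f1, f2, inputs))
```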

FAST : A Log Buffer Scheme with Fully Associative Sector Translation for Efficient FTL in Flash Memory (FAST :플래시 메모리 FTL을 위한 완전연관섹터변환에 기반한 로그 버퍼 기법)

  • Park Dong-Joo;Choi Won-Kyung;Lee Sang-Won
    • The KIPS Transactions: Part A / v.12A no.3 s.93 / pp.205-214 / 2005
  • Flash memory is widely used as storage in personal information devices, ubiquitous computing environments, mobile phones, consumer electronics, and so on. This is because flash memory has the characteristics of low power consumption, non-volatile storage, high performance, physical stability, and portability. However, unlike hard disks, it has the weak point that overwriting an already-written block of flash memory is impossible. To make an overwrite possible, an erase operation on the written block must be performed before the overwrite, which greatly lowers the performance of flash memory. To solve this problem, the flash memory controller maintains a system software module called the flash translation layer (FTL). Of the many proposed FTL schemes, the log block buffer scheme is the best known so far. This scheme uses a small number of log blocks of flash memory as a write buffer, which reduces the number of erase operations caused by overwrites, leading to good performance. However, this scheme suffers from low page usability of the log blocks. In this paper, we propose an enhanced log block buffer scheme, FAST (Fully Associative Sector Translation), which improves the page usability of each log block by fully associating the sectors written by overwrites with the entire set of log blocks. We also show that our FAST scheme outperforms the log block buffer scheme.
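
The fully associative property the abstract names can be shown in a few lines: an overwritten sector may land in any log block with a free page, tracked by one global sector-level map, rather than being tied to a dedicated log block per data block. The data structures below are simplified illustrations, not the paper's FTL; merge and erase handling is stubbed out.

```python
class FastLogBufferSketch:
    """Fully associative sector translation over a small set of log blocks."""

    def __init__(self, n_log_blocks, pages_per_block):
        self.pages_per_block = pages_per_block
        self.blocks = [[] for _ in range(n_log_blocks)]  # pages written so far
        self.sector_map = {}     # logical sector -> (log block, page index)

    def overwrite(self, lsn, data):
        """Append the sector to any log block with a free page."""
        for b, pages in enumerate(self.blocks):
            if len(pages) < self.pages_per_block:
                self.sector_map[lsn] = (b, len(pages))   # newest copy wins
                pages.append((lsn, data))
                return
        raise RuntimeError("all log blocks full: merge and erase required")

    def read(self, lsn):
        """Serve the sector from the log buffer if a newer copy exists there."""
        if lsn in self.sector_map:
            b, p = self.sector_map[lsn]
            return self.blocks[b][p][1]
        return None              # fall back to the data block (not modeled)
```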

An Approach of Scalable SHIF Ontology Reasoning using Spark Framework (Spark 프레임워크를 적용한 대용량 SHIF 온톨로지 추론 기법)

  • Kim, Je-Min;Park, Young-Tack
    • Journal of KIISE / v.42 no.10 / pp.1195-1206 / 2015
  • For the management of knowledge systems, systems that automatically infer and manage scalable knowledge are required. Most of these systems use ontologies in order to exchange knowledge between machines and to infer new knowledge. Therefore, approaches are needed that infer new knowledge over scalable ontologies. In this paper, we propose an approach for performing rule-based reasoning over scalable SHIF ontologies in the Spark framework, which works similarly to MapReduce over distributed memory on a cluster. To perform efficient reasoning in distributed memory, we focus on three areas. First, we define a data structure for splitting the triples of a scalable ontology into small sets according to each reasoning rule and for loading these triple sets into distributed memory. Second, we define a rule execution order and iteration conditions based on the dependencies and correlations among the SHIF rules. Finally, we explain the operations adapted to execute the rules; these operations are based on reasoning algorithms. To evaluate the suggested methods, we perform an experiment against WebPie, a representative cluster-based ontology reasoner, using the LUBM benchmark, a standard data set for evaluating ontology inference and search speed. The proposed approach improves throughput by 28,400% (157k/sec) over WebPie (553/sec) on LUBM.
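
As a minimal flavor of rule execution on Spark, the PySpark sketch below iterates a single SHIF/RDFS-style rule, transitivity of subClassOf, to a fixpoint using RDD joins in distributed memory. The paper's full rule set, execution order, and data structures are not reproduced, and the triples here are toy data.

```python
from pyspark import SparkContext

sc = SparkContext(appName="subclass-closure-sketch")

# Toy (subclass, superclass) pairs standing in for ontology triples.
sub = sc.parallelize([("A", "B"), ("B", "C"), ("C", "D")])

closure = sub.cache()
while True:
    # From x <= y and y <= z, derive x <= z via a join keyed on y.
    derived = (closure.map(lambda p: (p[1], p[0]))     # (y, x)
                      .join(sub)                       # (y, (x, z))
                      .map(lambda kv: (kv[1][0], kv[1][1])))
    bigger = closure.union(derived).distinct().cache()
    if bigger.count() == closure.count():              # fixpoint reached
        break
    closure = bigger

print(sorted(closure.collect()))
```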

Accelerated Learning of Latent Topic Models by Incremental EM Algorithm (점진적 EM 알고리즘에 의한 잠재토픽모델의 학습 속도 향상)

  • Chang, Jeong-Ho;Lee, Jong-Woo;Eom, Jae-Hong
    • Journal of KIISE: Software and Applications / v.34 no.12 / pp.1045-1055 / 2007
  • Latent topic models are statistical models that automatically capture salient patterns or correlations among the features underlying a data collection in a probabilistic way. They are gaining increased popularity as an effective tool for automatic semantic feature extraction from text corpora, for multimedia data analysis including image data, and for bioinformatics. Among the important issues for effectively applying latent topic models to massive data sets is efficient learning of the model. This paper proposes an accelerated learning technique for the PLSA model, one of the popular latent topic models, using an incremental EM algorithm instead of the conventional EM algorithm. The incremental EM algorithm is characterized by a series of partial E-steps performed on corresponding subsets of the entire data collection, unlike the conventional EM algorithm, where one batch E-step is done for the whole data set. By replacing a single batch E-M step with a series of partial E-steps and M-steps, the inference result for the previous data subset is directly reflected in the next inference step, which enhances the learning speed over the entire data set. The algorithm is also advantageous in that it is guaranteed to converge to a local maximum solution and can easily be implemented with only slight modification of the existing algorithm based on conventional EM. We present the basic application of the incremental EM algorithm to the learning of PLSA and empirically evaluate the acceleration performance with several possible data partitioning methods for practical application. The experimental results on a real-world news data set show that the proposed approach achieves a meaningful enhancement of the convergence rate in learning the latent topic model. Additionally, we present an interesting result that supports a possible synergistic effect of combining the incremental EM algorithm with parallel computing.
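
The structural difference from batch EM can be expressed compactly. In the sketch below, the learner keeps per-block sufficient statistics; each partial E-step replaces one block's old contribution and is immediately followed by an M-step, so later blocks see parameters already updated from earlier ones. The generic `e_step`/`m_step` interface is an assumption for illustration, not the paper's PLSA code.

```python
def incremental_em(blocks, e_step, m_step, theta0, n_sweeps=10):
    """Incremental EM: partial E-steps per data block, M-step after each.

    e_step(block, theta) -> sufficient statistics (must support + and -)
    m_step(total_stats)  -> updated model parameters
    """
    stats = [e_step(b, theta0) for b in blocks]   # initial full E-pass
    total = stats[0]
    for s in stats[1:]:
        total = total + s
    theta = m_step(total)
    for _ in range(n_sweeps):
        for i, block in enumerate(blocks):
            total = total - stats[i]              # retract old contribution
            stats[i] = e_step(block, theta)       # partial E-step on one block
            total = total + stats[i]
            theta = m_step(total)                 # immediate M-step
    return theta
```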