• Title/Summary/Keyword: computer based estimation

Search Results: 1,366

Deep Learning-Based Lighting Estimation for Indoor and Outdoor (딥러닝기반 실내와 실외 환경에서의 광원 추출)

  • Lee, Jiwon; Seo, Kwanggyoon; Lee, Hanui; Yoo, Jung Eun; Noh, Junyong
    • Journal of the Korea Computer Graphics Society / v.27 no.3 / pp.31-42 / 2021
  • We propose a deep learning-based method that can estimate appropriate lighting for both indoor and outdoor images. The method consists of two networks: a Crop-to-PanoLDR network and an LDR-to-HDR network. The Crop-to-PanoLDR network predicts a low dynamic range (LDR) environment map from a single, partially observed, normal field-of-view image, and the LDR-to-HDR network transforms the predicted LDR image into a high dynamic range (HDR) environment map that includes high-intensity light information. The HDR environment map generated through this process is applied when rendering virtual objects into the given image. The direction of the estimated light, along with the ambient light illuminating the virtual object, is examined to verify the effectiveness of the proposed method. For this, the results of our method are compared with those of methods that consider either indoor or outdoor images only. In addition, the effect of the loss function, which plays the role of classifying images as indoor or outdoor, was tested and verified. Finally, a user test was conducted to compare the quality of the environment maps created in this study with those created by existing research.
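
The abstract does not give the network internals, so the following PyTorch sketch only mirrors the two-stage structure it describes; the layer sizes, activations, and output resolution are placeholder assumptions, not the paper's architecture.

```python
# Minimal sketch of the two-stage pipeline: crop -> LDR panorama -> HDR map.
import torch
import torch.nn as nn

class CropToPanoLDR(nn.Module):
    """Stand-in: predicts an LDR environment map from a cropped FoV image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 3, 3, padding=1), nn.Sigmoid(),  # LDR values in [0, 1]
        )
    def forward(self, crop):
        return self.net(crop)

class LDRToHDR(nn.Module):
    """Stand-in: expands the LDR panorama so high-intensity lights are recoverable."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Softplus(),  # HDR radiance >= 0
        )
    def forward(self, ldr_pano):
        return self.net(ldr_pano)

crop = torch.rand(1, 3, 128, 128)              # a partial field-of-view input
hdr_env = LDRToHDR()(CropToPanoLDR()(crop))    # HDR map for rendering virtual objects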

Deep Learning-Based Prediction of the Quality of Multiple Concurrent Beams in mmWave Band (밀리미터파 대역 딥러닝 기반 다중빔 전송링크 성능 예측기법)

  • Choi, Jun-Hyeok; Kim, Mun-Suk
    • Journal of Internet Computing and Services / v.23 no.3 / pp.13-20 / 2022
  • IEEE 802.11ay Wi-Fi is a next-generation wireless technology that operates in the mmWave band. It supports MU-MIMO (Multiple User Multiple Input Multiple Output) transmission, in which an AP (Access Point) can transmit multiple data streams simultaneously to multiple STAs (Stations). To this end, the AP should perform MU-MIMO beamforming training with the STAs. For efficient MU-MIMO beamforming training, it is important for the AP to estimate the signal strength measured at each STA when multiple beams are used simultaneously. Therefore, in this paper, we propose a deep learning-based link quality estimation scheme. Our proposed scheme estimates the signal strength with high accuracy by utilizing a deep learning model pre-trained for a given indoor or outdoor propagation scenario. Specifically, to estimate the signal strength of multiple concurrent beams, our scheme uses the signal strengths of the respective single beams, which can be obtained without additional signaling overhead, as the input of the deep learning model. For performance evaluation, we utilized the Q-D (Quasi-Deterministic) Channel Realization open-source software, together with extensive channel measurement campaigns conducted with NIST (National Institute of Standards and Technology), to implement the millimeter wave (mmWave) channel. Our simulation results demonstrate that our proposed scheme outperforms comparison schemes in terms of the accuracy of signal strength estimation.
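
As a rough illustration of the input/output relationship described above, the sketch below trains a small regressor mapping per-beam signal strengths to the concurrent-beam signal strength. The plain-MLP architecture, beam count, and layer sizes are assumptions, not the paper's model.

```python
# Sketch: regress concurrent-beam RSS from the individually measured per-beam RSS.
import torch
import torch.nn as nn

N_BEAMS = 4  # number of concurrent downlink beams (assumed)

model = nn.Sequential(
    nn.Linear(N_BEAMS, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),                      # predicted concurrent-beam RSS
)

single_beam_rss = torch.randn(32, N_BEAMS)  # batch of per-beam measurements (placeholder)
concurrent_rss = torch.randn(32, 1)         # ground truth from channel simulation (placeholder)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
opt.zero_grad()
loss = nn.functional.mse_loss(model(single_beam_rss), concurrent_rss)
loss.backward()
opt.step()
```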

Uncertainty Calculation Algorithm for the Estimation of the Radiochronometry of Nuclear Material (핵물질 연대측정을 위한 불확도 추정 알고리즘 연구)

  • JaeChan Park; TaeHoon Jeon; JungHo Song; MinSu Ju; JinYoung Chung; KiNam Kwon; WooChul Choi; JaeHak Cheong
    • Journal of Radiation Industry / v.17 no.4 / pp.345-357 / 2023
  • Nuclear forensics is understood internationally as a mandatory component of nuclear material control and non-proliferation verification. Radiochronometry for nuclear forensics uses the decay-series characteristics of nuclear materials and the Bateman equation to estimate when nuclear materials were purified and produced. Radiochronometry values carry measurement uncertainty due to the uncertainty factors in the estimation process, and these uncertainties should be calculated using appropriate evaluation methods that are representative of the accuracy and reliability of the result. The IAEA, US, and EU have conducted research on radiochronometry and measurement uncertainty; however, analytical uncertainty calculation with the Bateman equation is limited by underestimation of the decay-constant contribution and by the impossibility of estimating ages across more than one generation, which highlights the need for computational simulation approaches such as the Monte Carlo method. In this study, we analyzed mathematical models and the LHS (Latin Hypercube Sampling) method to enhance the reliability of radiochronometry and to develop an uncertainty algorithm for nuclear material radiochronometry based on the Bateman equation. The LHS method, which can obtain effective statistical results with a small number of samples, was applied to Monte Carlo algorithms for uncertainty calculation by computer simulation, implemented in the MATLAB computational software. The uncertainty calculation model using mathematical models showed characteristics determined by the relationship between sensitivity coefficients and radioactive equilibrium, while the computational random-sampling approach showed characteristics dependent on the sampling method, the number of sampling iterations, and the probability distributions of the uncertainty factors. For validation, we compared models from international organizations, the mathematical models, and the Monte Carlo method; the developed algorithm performed calculations at a level of accuracy equivalent to those of overseas institutions and mathematical-model-based methods. To enhance usability, future research and comparative validation should incorporate more complex decay chains and non-homogeneous conditions. The results of this study can serve as foundational technology in the nuclear forensics field, providing tools for the identification of signature nuclides and aiding the research, development, comparison, and validation of related technologies.
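
A minimal sketch of LHS-based Monte Carlo uncertainty propagation for a two-member decay chain (parent -> daughter) follows. The 234U/230Th nuclide pair, the measured ratio, and its 1% standard uncertainty are illustrative assumptions, and Python with SciPy's qmc module stands in for the paper's MATLAB implementation.

```python
# Propagate a measured daughter/parent ratio through the Bateman age equation
# using Latin Hypercube Sampling instead of plain random sampling.
import numpy as np
from scipy.stats import qmc, norm

LN2 = np.log(2.0)
lam_p = LN2 / 2.455e5      # 234U decay constant, 1/years (assumed pair)
lam_d = LN2 / 7.54e4       # 230Th decay constant, 1/years

def bateman_age(ratio, lp, ld):
    """Invert N_d/N_p = lp/(ld-lp) * (1 - exp(-(ld-lp)*t)) for the age t."""
    return -np.log(1.0 - ratio * (ld - lp) / lp) / (ld - lp)

ratio_mean, ratio_sd = 2.0e-4, 2.0e-6      # measured atom ratio, 1% uncertainty (assumed)

sampler = qmc.LatinHypercube(d=1, seed=0)
u = sampler.random(n=10_000)                       # stratified uniforms in (0, 1)
ratios = norm(ratio_mean, ratio_sd).ppf(u[:, 0])   # map onto the input distribution

ages = bateman_age(ratios, lam_p, lam_d)
print(f"age = {ages.mean():.1f} y, u(age) = {ages.std(ddof=1):.1f} y")
```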

3D Shape Reconstruction using the Focus Estimator Value from Multi-Focus Cell Images (다초점 세포 영상으로부터 추정된 초점 값을 이용한 3차원 형태 복원)

  • Choi, Yea-Jun; Lee, Dong-Woo; Kim, Myoung-Hee; Choi, Soo-Mi
    • Journal of the Korea Computer Graphics Society / v.23 no.4 / pp.31-40 / 2017
  • As 3D cell culture has recently become possible, the 3D shape and volume of cells can now be observed. In general, 3D information about a cell must be acquired with a specialized microscope such as a confocal or electron microscope. However, a confocal microscope is more expensive than a conventional microscope and takes longer to capture images. Therefore, there is a need for a method that can reconstruct the 3D shape of cells using a common microscope. In this paper, we propose a method for reconstructing cells in 3D using focus estimator values computed from multi-focus fluorescence images. First, 3D-cultured cells are captured with an optical microscope while changing the focus. The approximate position of the cells is then designated as the ROI (Region Of Interest) using the circular Hough transform. The MSBF (Modified Sliding Band Filter) is applied to the obtained ROI to extract the outlines of the cell clusters, and focus estimator values are computed based on the extracted outlines. Using the computed focus estimator values and the numerical aperture (NA) of the microscope, we extract the outline of the cell cluster considering depth and reconstruct the cells in 3D based on the extracted outline. The reconstruction results are examined by comparison with the combined in-focus portions of the cell images.
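
The sketch below shows only the generic depth-from-focus core: compute a per-pixel focus measure for each slice of a through-focus stack and take the focal position that maximizes it as the depth. The Laplacian-energy focus measure and window size are assumptions; the paper's Hough-based ROI selection, MSBF outline extraction, and NA correction are not reproduced.

```python
# Shape-from-focus sketch over a multi-focus image stack.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_measure(img, window=9):
    """Local energy of the Laplacian: high where the image is in focus."""
    return uniform_filter(laplace(img.astype(float)) ** 2, size=window)

def depth_from_focus(stack, z_positions):
    """stack: (Z, H, W) images captured at focal positions z_positions."""
    measures = np.stack([focus_measure(s) for s in stack])  # (Z, H, W)
    best = np.argmax(measures, axis=0)                      # (H, W) slice index
    return np.asarray(z_positions)[best]                    # per-pixel depth map

stack = np.random.rand(12, 256, 256)                        # 12 focal slices (placeholder)
depth = depth_from_focus(stack, np.linspace(0.0, 11.0, 12)) # assumed focal steps
```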

An Efficient Vehicle Image Compensation Algorithm based on Histogram Equalization (히스토그램 균등화 기반의 효율적인 차량용 영상 보정 알고리즘)

  • Hong, Sung-Il; Lin, Chi-Ho
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.3 / pp.2192-2200 / 2015
  • In this paper, we propose an efficient histogram-equalization-based image compensation algorithm for vehicles. The proposed algorithm eliminates vehicle image shake using motion estimation and motion compensation. To enhance the image, it divides the frame into areas of constant size and calculates the histogram of pixel values for each sub-image, and it further improves the result by adjusting the gradient. The proposed algorithm was evaluated by implementing it as an IP and comparing performance, processing time, and image quality, confirming the enhancement obtained by removing vehicle camera shake. The proposed algorithm demonstrated its effectiveness compared with existing vehicle image stabilization because it removes shake from vehicle images in real time without using memory. It also reduced computation time by performing the calculation through block matching, and it produced more natural restored images with lower noise.
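
The sub-image histogram step described above resembles contrast-limited adaptive histogram equalization (CLAHE), which equalizes fixed-size tiles independently. The sketch below uses OpenCV's CLAHE as a stand-in for the paper's method; the tile size, clip limit, and input filename are assumptions, and the motion-estimation/compensation stage is omitted.

```python
# Block-wise histogram equalization of a vehicle camera frame via CLAHE.
import cv2

frame = cv2.imread("dashcam_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input frame
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))    # assumed tiling parameters
enhanced = clahe.apply(frame)
cv2.imwrite("dashcam_frame_enhanced.png", enhanced)
```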

New Sequential Clustering Combination for Rule Generation System (규칙 생성 시스템을 위한 새로운 연속 클러스터링 조합)

  • Kim, Sung Suk; Choi, Ho Jin
    • Journal of Internet Computing and Services / v.13 no.5 / pp.1-8 / 2012
  • In this paper, we propose a new numerical-data-driven clustering combination for a rule generation mechanism. In large and complicated spaces, a single clustering method can achieve only limited performance. To overcome this limitation, hybrid combined methods divide the task into simpler cluster estimations. The fundamental structure of the proposed method combines mountain clustering with a modified Chen clustering to extract detailed cluster information from complicated data distributions in non-parametric space. It provides automatic rule generation through advanced density-based operations, so that intelligent systems including neural networks and fuzzy inference systems can be generated from the clustering results. The results of the mechanism can also serve as information for decision support systems to infer useful knowledge, and the approach can be extended to healthcare and medical decision support systems that assist experts and specialists. We show the usefulness of the proposed method through simulations and their results.
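
A minimal sketch of the first stage, mountain clustering, follows: build a density ("mountain") function over the data, take its peak as a cluster center, subtract that peak's influence, and repeat. The kernel widths and center count are assumptions, and the modified Chen clustering refinement stage is not shown.

```python
# Mountain clustering (data-point variant): peak picking on a kernel density.
import numpy as np

def mountain_clustering(X, n_centers=3, sigma=1.0, beta=1.5):
    centers = []
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    m = np.exp(-d2 / (2 * sigma**2)).sum(axis=1)          # mountain value at each point
    for _ in range(n_centers):
        k = np.argmax(m)
        centers.append(X[k])
        d2c = ((X - X[k]) ** 2).sum(-1)
        m = m - m[k] * np.exp(-d2c / (2 * beta**2))       # suppress the found peak
    return np.array(centers)

X = np.random.randn(200, 2)     # toy data: two separated groups
X[:100] += 4.0
print(mountain_clustering(X, n_centers=2))
```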

Identification of Fuzzy Inference System Based on Information Granulation

  • Huang, Wei; Ding, Lixin; Oh, Sung-Kwun; Jeong, Chang-Won; Joo, Su-Chong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.4 no.4 / pp.575-594 / 2010
  • In this study, we propose a space search algorithm (SSA) and then introduce a hybrid optimization of fuzzy inference systems based on SSA and information granulation (IG). In comparison with "conventional" evolutionary algorithms (such as PSO), SSA not only leads to better search performance in finding the global optimum but is also more computationally efficient when dealing with the optimization of fuzzy models. In the hybrid optimization of the fuzzy inference system, SSA is exploited to carry out the parametric optimization of the fuzzy model as well as to realize its structural optimization. IG, realized with the aid of C-Means clustering, helps determine the initial values of the apex parameters of the membership functions of the fuzzy model. The overall hybrid identification of fuzzy inference systems comes in the form of two optimization mechanisms: structure identification (such as the number of input variables to be used, a specific subset of input variables, the number of membership functions, and the polynomial type) and parameter identification (viz. the apexes of the membership functions). The structure identification is developed by SSA and C-Means, while the parameter estimation is realized via SSA and a standard least squares method. The performance of the proposed model was evaluated using four representative numerical examples: a non-linear function, the gas furnace data, NOx emission process data, and the Mackey-Glass time series. A comparative study of SSA and PSO demonstrates that SSA leads to improved performance both in terms of the quality of the model and the computing time required. The proposed model is also contrasted with the quality of some "conventional" fuzzy models already encountered in the literature.
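
To illustrate the IG step, the sketch below runs a small fuzzy C-Means whose resulting prototypes would serve as initial apexes of the membership functions; the fuzzifier, cluster count, and iteration budget are assumptions.

```python
# Fuzzy C-Means: prototypes become initial membership-function apexes.
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))            # memberships, shape (N, c)
    for _ in range(n_iter):
        W = U ** m
        V = (W.T @ X) / W.sum(axis=0)[:, None]            # prototypes, shape (c, d)
        d = np.linalg.norm(X[:, None, :] - V[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))                     # standard FCM update
        U = inv / inv.sum(axis=1, keepdims=True)
    return V, U

X = np.random.rand(150, 1)             # one input variable (placeholder data)
apexes, _ = fuzzy_c_means(X, c=3)      # initial apexes for three membership functions
print(np.sort(apexes.ravel()))
```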

The Study for ENHPP Software Reliability Growth Model based on Burr Coverage Function (Burr 커버리지 함수에 기초한 ENHPP소프트웨어 신뢰성장모형에 관한 연구)

  • Kim, Hee-Cheul
    • Journal of the Korea Society of Computer and Information / v.12 no.4 / pp.33-42 / 2007
  • Accurate prediction of software release times and estimation of the reliability and availability of a software product require quantification of a critical element of the software testing process: test coverage. The resulting model is called the enhanced non-homogeneous Poisson process (ENHPP). In this paper, the exponential coverage and S-shaped models are reviewed, and the Burr coverage model is proposed as an efficient application for software reliability. To estimate the parameters, the maximum likelihood estimator and the bisection method were employed, and model selection was based on the SSE statistic and the Kolmogorov distance for the sake of an efficient model. From the analysis of mission time using NTDS data, this comparative study shows the excellent performance of the Burr coverage model over the exponential coverage and S-shaped models. An analysis of failure data comparing the proposed coverage model with existing models (using arithmetic and Laplace trend tests and bias tests) is also presented.
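
A sketch of the ENHPP ingredients follows: the mean value function m(t) = a·c(t) with a Burr-type coverage function c(t) = 1 − (1 + t^β)^(−k), and the NHPP log-likelihood that parameter estimation maximizes. The exact Burr parameterization, the placeholder failure data, and the Nelder-Mead optimizer (in place of the paper's bisection method) are assumptions.

```python
# ENHPP with Burr-type coverage: fit (a, beta, k) by maximum likelihood.
import numpy as np
from scipy.optimize import minimize

def mean_value(t, a, beta, k):
    return a * (1.0 - (1.0 + t**beta) ** (-k))            # m(t) = a * c(t)

def intensity(t, a, beta, k):
    # lambda(t) = dm/dt for the Burr coverage function
    return a * k * beta * t ** (beta - 1.0) * (1.0 + t**beta) ** (-k - 1.0)

def neg_log_lik(params, times, T):
    a, beta, k = params
    if min(a, beta, k) <= 0:
        return np.inf
    return -(np.log(intensity(times, a, beta, k)).sum() - mean_value(T, a, beta, k))

times = np.sort(np.random.uniform(0.1, 90.0, size=26))    # placeholder failure times
res = minimize(neg_log_lik, x0=[30.0, 1.0, 1.0], args=(times, 100.0),
               method="Nelder-Mead")
print(res.x)                                              # fitted (a, beta, k)
```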

Super-Pixels Generation based on Fuzzy Similarity (퍼지 유사성 기반 슈퍼-픽셀 생성)

  • Kim, Yong-Gil; Moon, Kyung-Il
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.17 no.2 / pp.147-157 / 2017
  • In recent years, super-pixels have become very popular in computer vision applications. A super-pixel algorithm groups pixels into perceptually meaningful regions to reduce the rigidity of the pixel grid. In particular, super-pixels are useful for depth estimation, skeletonization, body labeling, and feature localization. However, it is not easy to generate a good super-pixel partition for these tasks. In particular, super-pixels often fail to satisfy meaningful Gestalt properties such as non-summativity, continuation, closure, and perceptual constancy. In this paper, we suggest an advanced algorithm that combines simple linear iterative clustering with fuzzy clustering concepts. The simple linear iterative clustering technique offers better adherence to image boundaries, speed, and memory efficiency than conventional methods, but it does not give the super-pixel shapes good compactness and regularity in the context of Gestalt aspects. Fuzzy similarity measures provide a reasonable graph in view of bounded region size and few neighbors. Thus, more compact and regular super-pixels are obtained, from which locally relevant features can be extracted. Simulations show that fuzzy-similarity-based super-pixel construction represents natural features in the manner in which humans decompose images.
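
The sketch below pairs off-the-shelf SLIC super-pixels with a simple Gaussian fuzzy similarity on region mean colors; the similarity form and its bandwidth are assumptions standing in for the paper's fuzzy-similarity construction.

```python
# SLIC super-pixels plus a fuzzy similarity score between regions.
import numpy as np
from skimage import data, segmentation

img = data.astronaut()
labels = segmentation.slic(img, n_segments=200, compactness=10, start_label=0)

mean_colors = np.array([img[labels == l].mean(axis=0)
                        for l in range(labels.max() + 1)])

def fuzzy_similarity(c1, c2, bandwidth=25.0):
    """Gaussian membership in [0, 1]; 1 means the regions look identical."""
    return np.exp(-np.linalg.norm(c1 - c2) ** 2 / (2 * bandwidth**2))

sim = fuzzy_similarity(mean_colors[0], mean_colors[1])
print(f"similarity of super-pixels 0 and 1: {sim:.3f}")
```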

Spectrum Based Detector in Non-white Noise Environment (비백색 잡음 환경에 적합한 스펙트럼 기반 탐지기)

  • Yu, Seog-Kun; Joo, Eon-Kyeong
    • Journal of the Institute of Electronics Engineers of Korea TC / v.46 no.10 / pp.8-13 / 2009
  • The MF (matched filter) is the optimum signal detector that maximizes the ratio of the output instantaneous signal power to the average noise power in a white noise environment. However, it cannot provide optimum detection performance if the background noise is not white. Therefore, a whitening process preceding the matched filter is needed in the conventional detector, resulting in a PWMF (pre-whitening matched filter). Its performance is mainly affected by the accuracy of the estimated non-white noise model used in the whitening procedure, and estimating a more accurate model to improve performance increases the computational complexity. Therefore, this paper proposes a spectrum-based detector that shows better performance than the PWMF under a similar complexity condition, or lower complexity under a similar performance condition. Its performance and complexity are analyzed and compared with those of the conventional PWMF.
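
For context, the sketch below implements the conventional PWMF baseline the proposed detector is compared against: estimate the colored-noise spectrum, whiten both the received signal and the template by it, then correlate. Frequency-domain whitening via a Welch PSD estimate and the AR(1) noise model are assumptions; the paper's spectrum-based detector itself is not specified in the abstract.

```python
# Pre-whitening matched filter sketch for a colored-noise environment.
import numpy as np
from scipy.signal import welch, correlate

fs, n = 1000.0, 4096
rng = np.random.default_rng(0)

# Colored noise: white noise through a leaky integrator (AR(1) model, assumed).
w = rng.standard_normal(n)
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = 0.9 * noise[i - 1] + w[i]

template = np.sin(2 * np.pi * 50.0 * np.arange(256) / fs)
rx = noise.copy()
rx[1000:1256] += 3.0 * template                  # embedded target signal

f, psd = welch(noise, fs=fs, nperseg=512)        # noise-only spectrum estimate

def whiten(x):
    X = np.fft.rfft(x)
    # interpolate the PSD estimate onto the FFT bins before weighting
    p = np.interp(np.fft.rfftfreq(len(x), 1 / fs), f, psd)
    return np.fft.irfft(X / np.sqrt(p), n=len(x))

stat = correlate(whiten(rx), whiten(np.pad(template, (0, n - 256))), mode="same")
print("detection statistic peaks near sample:", np.argmax(np.abs(stat)))
```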