• Title/Summary/Keyword: matrix geometric mean


A Control System for Avoiding Collisions between Autonomous Warfare Vehicles and Infantry (군용 무인차량과 보병의 충돌방지를 위한 제어시스템)

  • Nam, Sea-Hyeon;Chung, You-Chung
    • Journal of the Institute of Electronics Engineers of Korea TC / v.48 no.3 / pp.74-82 / 2011
  • This paper describes a control system for locating autonomous warfare vehicles and infantry in real time and for avoiding collisions between them. The control system uses low-cost RSSI (Received Signal Strength Indication) to position the wireless devices. Mean filtering is applied to the RSS matrix to improve positioning performance in a multi-path propagation environment. A fuzzy rule is proposed to recover and replace broken packets occurring in the wireless communication. Gradient and geometric triangulation algorithms are proposed to trace the real-time locations of the wireless devices from the distances between them. The locations estimated by the geometric triangulation algorithm are compared with GPS results and with those of the gradient algorithm.
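The abstract mentions RSSI-based distance estimation and geometric triangulation without giving formulas. As a minimal, hypothetical sketch (not the paper's gradient or triangulation algorithm), the Python snippet below converts mean-filtered RSSI to distance with an assumed log-distance path-loss model and solves a linearized least-squares trilateration; the path-loss parameters and anchor coordinates are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch only: RSSI -> distance via a log-distance path-loss
# model, then a linearized least-squares trilateration.  The parameters
# (rssi_at_1m, path_loss_exp) and anchor positions are assumptions, not
# values from the paper.

def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.7):
    """Invert RSSI(d) = RSSI(1 m) - 10 * n * log10(d)."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(anchors, distances):
    """Least-squares position estimate from three or more anchors."""
    (x0, y0), d0 = anchors[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], distances[1:]):
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return sol  # estimated (x, y)

anchors = [(0.0, 0.0), (30.0, 0.0), (0.0, 30.0)]
rssi = [-62.0, -70.0, -66.0]              # mean-filtered RSSI samples (dBm)
dists = [rssi_to_distance(r) for r in rssi]
print(trilaterate(anchors, dists))
```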

A Pilot Establishment of the Job-Exposure Matrix of Lead Using the Standard Process Code of Nationwide Exposure Databases in Korea

  • Ju-Hyun Park;Sangjun Choi;Dong-Hee Koh;Dae Sung Lim;Hwan-Cheol Kim;Sang-Gil Lee;Jihye Lee;Ji Seon Lim;Yeji Sung;Kyoung Yoon Ko;Donguk Park
    • Safety and Health at Work / v.13 no.4 / pp.493-499 / 2022
  • Background: The purpose of this study is to construct a job-exposure matrix for lead that accounts for industry and work processes within industries, using a nationwide exposure database. Methods: We used the work environment measurement data (WEMD) of lead monitored nationwide from 2015 to 2016. Industrial hygienists standardized the work process codes in the database into 37 standard processes and extracted key index words for each process. The 37 standardized process codes were assigned to each measurement through an automated keyword search based on the degree of agreement between the measurement information and the standard process index. Summary statistics, including the arithmetic mean, geometric mean, and 95th percentile level (X95), were calculated by industry, process, and industry-process. Using the statistical parameters of contrast and precision, we compared the similarity of exposure groups defined by industry, process, and industry-process. Results: The exposure intensity of lead was estimated for 583 exposure groups formed by combining 128 industries and 35 processes. The X95 value for the "casting" process of the "manufacture of basic precious and non-ferrous metals" industry was 53.29 ㎍/m³, exceeding the occupational exposure limit of 50 ㎍/m³. Regardless of the minimum-sample-size restriction on exposure groups, higher contrast was observed when exposure groups were defined by industry-process than by industry or process alone. Conclusion: We evaluated the exposure intensities of lead by combination of industry and process. The results will help provide more accurate exposure information for lead-related epidemiological studies.
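For readers unfamiliar with the summary statistics named above, the sketch below computes the arithmetic mean, geometric mean, and an X95 estimate for one hypothetical exposure group; the lognormal shortcut X95 = GM · GSD^1.645 is an assumption and may differ from the authors' exact procedure, and the sample values are made up.

```python
import numpy as np

# Illustrative exposure-group summary statistics (hypothetical values).
lead_ug_m3 = np.array([12.0, 8.5, 30.1, 5.2, 18.9, 44.7, 9.8])  # ㎍/m³

am = lead_ug_m3.mean()                          # arithmetic mean
gm = np.exp(np.log(lead_ug_m3).mean())          # geometric mean
gsd = np.exp(np.log(lead_ug_m3).std(ddof=1))    # geometric standard deviation
x95 = gm * gsd ** 1.645                         # 95th percentile, lognormal assumption

print(f"AM={am:.2f}, GM={gm:.2f}, GSD={gsd:.2f}, X95={x95:.2f} ug/m3")
```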

Design and Performance Evaluation of an Assemble-To-Order System (주문- 조립시스템의 설계 및 성능평가)

  • 박찬우;이효성
    • Journal of the Korean Operations Research and Management Science Society / v.27 no.4 / pp.41-65 / 2002
  • We study a multi-component production/inventory system in which individual components are made to meet various demand types. We assume that demands arrive according to a Poisson process and that each demand requests a particular kit of different components with a fixed probability. Each component is produced by a flow line with several stations, where the processing times at each station follow a two-stage Coxian distribution. The production of each component is controlled by an independent base-stock policy with blocking. We assume that the time needed to assemble final products follows a general distribution and that the capacity of the assembly facility is sufficiently large. The objective of this study is to obtain key performance measures such as the distribution of the number of orders for each final product and the mean time to fulfill a customer order. The basic principle of the proposed approximation method is to decompose the original system into a set of subsystems, each associated with a flow line. Each subsystem is analyzed in isolation using Marie's method. An iterative procedure is then used to determine the unknown parameters of each subsystem. Numerical results show that the accuracy of the approximation method is acceptable.
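As a small illustration of the two-stage Coxian service-time model named in the abstract (not the decomposition or Marie's method itself), the sketch below samples Coxian-2 processing times; the rates and branching probability are made up.

```python
import random

# Two-stage Coxian sample: an exponential(mu1) stage, then with probability p
# an additional exponential(mu2) stage.  Parameters are illustrative.
def coxian2_sample(mu1=1.0, mu2=2.0, p=0.4):
    t = random.expovariate(mu1)
    if random.random() < p:
        t += random.expovariate(mu2)
    return t

samples = [coxian2_sample() for _ in range(100_000)]
print(sum(samples) / len(samples))   # compare with the analytic mean 1/mu1 + p/mu2 = 1.2
```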

A Study on the Evaluation & Selection of Multimedia Authoring Tools using the AHP (AHP 기법을 이용한 멀티미디어 저작도구 평가 및 선정에 관한 연구)

  • Sim Sang-Chun;Kim Yong-Kyeom
    • Korean Management Science Review / v.21 no.2 / pp.191-213 / 2004
  • This study addresses the evaluation criteria for multimedia authoring tools (MATs), the way in which a decision-maker can exclude an outlier matrix from the group using the concept of the compatibility index, and how the AHP can be applied to the selection of MATs in a group decision-making environment. The AHP technique allows an evaluator to quantify the relative importance of the elements at each level of the hierarchy and to calculate composite relative weights for each product (i.e., each MAT). This decision process allows the MATs to be prioritized based on the AHP. The results indicate that the technical ability of the MATs was the most significant factor affecting the decision, followed by managerial efficiency and vendor support. To the experts, multimedia data support was less important than the development interface. The results also indicate that product A (0.510) was the first choice, followed by product C (0.286) and product B (0.204). Assessing the composite relative weights revealed that the expert group's members were statistically consistent in their rankings of the decision variables (evaluation criteria) when selecting MATs. Therefore, we believe that the expert group's members achieved sufficient agreement to permit the use of the geometric mean to average the group's preferences without obscuring differences in individual opinions.
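The closing remark about using the geometric mean to average group preferences corresponds to the standard geometric-mean steps of group AHP. The sketch below, using two hypothetical expert matrices rather than the study's survey data, aggregates pairwise comparison matrices element-wise by geometric mean and derives priority weights from normalized row geometric means.

```python
import numpy as np

def row_geometric_mean_weights(A):
    """Priority vector from a pairwise comparison matrix (row geometric means)."""
    g = np.prod(A, axis=1) ** (1.0 / A.shape[1])
    return g / g.sum()

def aggregate_group(matrices):
    """Element-wise geometric mean of individual judgment matrices."""
    return np.exp(np.log(np.stack(matrices)).mean(axis=0))

# Hypothetical 3-criterion judgments from two experts.
expert1 = np.array([[1, 3, 5],
                    [1/3, 1, 2],
                    [1/5, 1/2, 1]])
expert2 = np.array([[1, 2, 4],
                    [1/2, 1, 3],
                    [1/4, 1/3, 1]])

group = aggregate_group([expert1, expert2])
print(row_geometric_mean_weights(group))   # composite relative weights
```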

An accurate substructural synthesis approach to random responses

  • Ying, Z.G.;Zhu, W.Q.;Ye, S.Q.;Ni, Y.Q.
    • Structural Engineering and Mechanics / v.39 no.1 / pp.47-75 / 2011
  • An accurate substructural synthesis method, including random response synthesis, frequency-response function synthesis, and mid-order mode synthesis, is developed based on rigorous substructure description, dynamic condensation, and coupling. An entire structure can first be divided into several substructures according to their different functions and geometric and dynamic characteristics. Substructural displacements are expressed exactly by retained mid-order fixed-interface normal modes and residual constraint modes. Substructural interfacial degrees of freedom are eliminated by enforcing interfacial displacement compatibility and force equilibrium between adjacent substructures. The substructural modal vibration equations are then coupled to form an exactly condensed synthesized structure equation, from which structural mid-order modes are calculated accurately. Furthermore, the substructural frequency-response function equations are coupled to yield an exactly condensed synthesized structure vibration equation in the frequency domain, from which the generalized structural frequency-response functions are obtained. Substructural frequency-response functions are calculated separately using the generalized frequency-response functions and can be assembled into an entire-structure frequency-response function matrix. Substructural power spectral density functions are expressed in terms of the exactly synthesized substructural frequency-response functions, and substructural random responses such as correlation functions and mean-square responses can be calculated separately. The accuracy and capability of the proposed substructure synthesis method are verified by numerical examples.
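The random-response synthesis described above ultimately rests on the frequency-response-function relation S_xx(ω) = H(ω) S_ff(ω) H(ω)^H. The sketch below evaluates that chain for a small 2-DOF system without any substructure condensation; the mass, damping, stiffness, and forcing values are illustrative assumptions, not the paper's examples.

```python
import numpy as np

# FRF route to random responses for a 2-DOF system (illustrative values):
# H(w) = (K - w^2 M + i w C)^(-1),  S_xx = H S_ff H^H,
# mean-square responses = frequency integrals of diag(S_xx).
M = np.diag([1.0, 1.0])
K = np.array([[200.0, -100.0],
              [-100.0, 100.0]])
C = 0.02 * K                              # stiffness-proportional damping
S_ff = np.diag([1.0, 0.0])                # white-noise force on DOF 1 only

omega = np.linspace(0.1, 40.0, 4000)
S_xx_diag = np.zeros((omega.size, 2))

for k, w in enumerate(omega):
    H = np.linalg.inv(K - w**2 * M + 1j * w * C)
    S_xx = H @ S_ff @ H.conj().T
    S_xx_diag[k] = np.real(np.diag(S_xx))

mean_square = np.trapz(S_xx_diag, omega, axis=0)   # up to the chosen PSD convention
print(mean_square)
```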

Experimental Study on the Sediment Sorting Processes of the Bed Surface by Geomorphic Changes in the Alluvial Channels with Mixed Grain Size (실내실험에 의한 혼합사로 구성된 하상 표층에서 지형변동에 따른 유사의 분급 특성 분석)

  • Jang, Chang-Lae
    • Journal of Korea Water Resources Association / v.47 no.12 / pp.1213-1225 / 2014
  • The development of bars and the sediment sorting processes in braided channels with mixed grain sizes are investigated experimentally in this study. Sediment discharge in the steep-slope channels fluctuates strongly, whereas in the mild-slope channels it shows relatively periodic cycles. The characteristics and amplitudes of the dominant bars are examined by double Fourier analysis. The dimensionless sediment particle size decreases as the longitudinal bed elevation increases, and increases as the longitudinal bed elevation decreases. As the ratio of the dimensionless critical tractive force in the surface layer to that in the subsurface layer increases, the geometric mean size of the surface sediments and the dimensionless sediment particle size decrease. This indicates that a coarse surface matrix is formed through selective sediment sorting under the dimensionless tractive force.
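The surface-layer geometric mean grain size referred to above is conventionally computed from the size-fraction distribution. The sketch below shows that computation for a hypothetical mixture; the size classes and fractions are not the experimental data.

```python
import numpy as np

# Geometric mean grain size and geometric standard deviation of a mixture:
# D_g = exp(sum f_i * ln D_i).  Values are illustrative, not measured.
d_mm = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # class sizes (mm)
frac = np.array([0.10, 0.25, 0.30, 0.25, 0.10])   # surface-layer fractions (sum to 1)

ln_d = np.log(d_mm)
d_g = np.exp(np.sum(frac * ln_d))                                    # geometric mean size
sigma_g = np.exp(np.sqrt(np.sum(frac * (ln_d - np.log(d_g))**2)))    # geometric std. dev.

print(f"D_g = {d_g:.2f} mm, sigma_g = {sigma_g:.2f}")
```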

Bio-Equivalence Analysis using Linear Mixed Model (선형혼합모형을 활용한 생물학적 동등성 분석)

  • An, Hyungmi;Lee, Youngjo;Yu, Kyung-Sang
    • The Korean Journal of Applied Statistics / v.28 no.2 / pp.289-294 / 2015
  • Linear mixed models are commonly used in clinical pharmaceutical studies to analyze repeated measures such as the crossover data of bioequivalence studies. In these models, random effects describe the correlation between repeated outcomes, and the variance-covariance matrix explains within-subject variability. Bioequivalence analysis verifies whether the 90% confidence interval for the geometric mean ratio of Cmax and AUC between the reference drug and the test drug falls within the bioequivalence margin [0.8, 1.25]; it is performed using linear mixed models with period, sequence, and treatment as fixed effects and subject nested within sequence as a random effect. A levofloxacin study is used as an example of real data analysis.
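As a simplified illustration of the bioequivalence criterion described above, the sketch below computes a 90% confidence interval for the geometric mean ratio from paired log-differences; it omits the period and sequence fixed effects of the full linear mixed model, and the AUC values are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical paired AUC values (reference vs. test) for 8 subjects.
auc_ref  = np.array([18.2, 22.5, 19.8, 25.1, 21.0, 23.3, 20.4, 24.0])
auc_test = np.array([19.0, 21.8, 20.5, 24.3, 22.1, 22.9, 21.0, 23.5])

d = np.log(auc_test) - np.log(auc_ref)   # within-subject log differences
n = d.size
se = d.std(ddof=1) / np.sqrt(n)
t90 = stats.t.ppf(0.95, df=n - 1)        # critical value for a two-sided 90% CI

gmr = np.exp(d.mean())
ci = np.exp([d.mean() - t90 * se, d.mean() + t90 * se])
print(f"GMR={gmr:.3f}, 90% CI=({ci[0]:.3f}, {ci[1]:.3f}); BE if within [0.80, 1.25]")
```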

Evaluation of MR-SENSE Reconstruction by Filtering Effect and Spatial Resolution of the Sensitivity Map for the Simulation-Based Linear Coil Array (선형적 위상배열 코일구조의 시뮬레이션을 통한 민감도지도의 공간 해상도 및 필터링 변화에 따른 MR-SENSE 영상재구성 평가)

  • Lee, D.H.;Hong, C.P.;Han, B.S.;Kim, H.J.;Suh, J.J.;Kim, S.H.;Lee, C.H.;Lee, M.W.
    • Journal of Biomedical Engineering Research / v.32 no.3 / pp.245-250 / 2011
  • Parallel imaging techniques can provide several advantages for a multitude of MRI applications. In the SENSE technique in particular, sensitivity maps are always required in order to determine the reconstruction matrix; therefore, a number of different approaches using sensitivity information from the coils have been proposed to improve image quality. Moreover, many filtering methods, such as the adaptive matched filter and nonlinear diffusion techniques, have been proposed to suppress background noise and improve image quality. In this study, we performed SENSE reconstructions in computer simulations to determine the most suitable method, examining the filtering effect and the changing order of the polynomial fit applied at varying spatial resolutions of the sensitivity map. The image was obtained on a 0.32 T MRI system (Magfinder II, Genpia, Korea) using a spin-echo pulse sequence (TR/TE = 500/20 ms, FOV = 300 mm, matrix = 128 × 128, thickness = 8 mm). For the simulation, the obtained image was multiplied by four linear-array coil sensitivities modeled as 2D Gaussian distributions, and complex white Gaussian noise was added. Image processing was separated into two methods, polynomial fitting and filtering, according to the spatial resolution of the sensitivity map, and each coil image was subsampled at reduction factors (r-factors) of 2 and 4. The results were compared in terms of the mean geometry factor (g-factor) and artifact power (AP) at r-factors of 2 and 4. Across the tested spatial resolutions of the sensitivity map and r-factors, the polynomial fit methods gave better results than the general filtering methods. Although this work is limited to a computer simulation with a linear coil array rather than an experiment, the method may be useful for determining an optimal sensitivity map for a linear coil array.
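The SENSE unfolding and g-factor mentioned above follow the standard SENSE formulation. The sketch below unfolds one set of aliased pixels for a four-coil array at r = 2 and evaluates the geometry factor, assuming identity noise correlation; the sensitivity values are illustrative, not the study's simulated Gaussian coil profiles.

```python
import numpy as np

def sense_unfold(S, y):
    """Least-squares SENSE unfolding: x_hat = (S^H S)^(-1) S^H y."""
    SHS = S.conj().T @ S
    return np.linalg.solve(SHS, S.conj().T @ y)

def g_factor(S):
    """g_i = sqrt([(S^H S)^(-1)]_ii * [S^H S]_ii), unit noise correlation."""
    SHS = S.conj().T @ S
    return np.sqrt(np.real(np.diag(np.linalg.inv(SHS))) * np.real(np.diag(SHS)))

S = np.array([[0.9, 0.2],      # 4 coils x 2 aliased pixel positions (illustrative)
              [0.7, 0.4],
              [0.3, 0.8],
              [0.1, 1.0]], dtype=complex)
true_x = np.array([1.0, 0.5])
y = S @ true_x                 # folded coil signals (noise-free)

print(sense_unfold(S, y))      # recovers true_x
print(g_factor(S))             # g >= 1 at each unaliased position
```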

Performance Analysis of a Packet Voice Multiplexer Using the Overload Control Strategy by Bit Dropping (Bit-dropping에 의한 Overload Control 방식을 채용한 Packet Voice Multiplexer의 성능 분석에 관한 연구)

  • 우준석;은종관
    • The Journal of Korean Institute of Communications and Information Sciences / v.18 no.1 / pp.110-122 / 1993
  • When voice is transmitted through a packet switching network, an overload control is needed, that is, a control for congestion that lasts for short periods and occurs in local extents. In this thesis, we analyze the performance of a statistical packet voice multiplexer using an overload control strategy based on bit dropping. We assume that the voice is coded according to (4,2) embedded ADPCM and that voice packets are generated and transmitted according to the procedures in CCITT Recommendation G.764. For the performance analysis, the superposed packet arrival process at the multiplexer must be modeled as exactly as possible. It is well known that the interarrival times of the packets are highly correlated, and for this reason the MMPP is better suited for the modeling from the viewpoint of accuracy. Hence the packet arrival process is modeled as an MMPP, and the matrix geometric method is used for the performance analysis. The performance analysis is similar to that of the MMPP/G/1 queueing system, but the overload control makes the service time distribution G dependent on the system status, i.e., the queue length in the multiplexer. Through the performance analysis we derive the probability generating function of the queue length, and from it the mean and standard deviation of the queue length and of the waiting time. The numerical results are verified through simulation, and the results show that the values embedded at departure times and those at arbitrary times are almost the same. The results also show that bit dropping reduces the mean and the variation of the queue length and of the waiting time.
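The matrix geometric method cited above is illustrated below for a small quasi-birth-death (QBD) queue, an MMPP/M/1 example rather than the paper's MMPP/G/1 multiplexer with bit-dropping-dependent service; all rates are illustrative assumptions.

```python
import numpy as np

# Matrix-geometric sketch for an MMPP/M/1 QBD queue (illustrative rates).
Lam = np.diag([0.5, 1.5])                  # MMPP arrival rates per phase
Q = np.array([[-1.0, 1.0],
              [ 1.0, -1.0]])               # phase-transition generator
mu = 2.0                                   # service rate

A0 = Lam                                   # level up (arrival)
A2 = mu * np.eye(2)                        # level down (departure)
A1 = Q - Lam - mu * np.eye(2)              # within-level transitions
B1 = Q - Lam                               # level-0 block (no departures)

# Neuts' rate matrix R solves A0 + R A1 + R^2 A2 = 0 (fixed-point iteration).
R = np.zeros((2, 2))
for _ in range(1000):
    R = -(A0 + R @ R @ A2) @ np.linalg.inv(A1)

# Boundary vector: x0 (B1 + R A2) = 0, normalized by x0 (I - R)^(-1) 1 = 1.
ones = np.ones(2)
c = np.linalg.inv(np.eye(2) - R) @ ones
A_sys = np.vstack([(B1 + R @ A2).T, c])
x0, *_ = np.linalg.lstsq(A_sys, np.array([0.0, 0.0, 1.0]), rcond=None)

# Matrix-geometric stationary distribution x_k = x0 R^k; mean number in system.
mean_L = x0 @ R @ np.linalg.matrix_power(np.linalg.inv(np.eye(2) - R), 2) @ ones
print(f"mean number in system = {mean_L:.3f}")
```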


Volumetric accuracy of cone-beam computed tomography

  • Park, Cheol-Woo;Kim, Jin-ho;Seo, Yu-Kyeong;Lee, Sae-Rom;Kang, Ju-Hee;Oh, Song-Hee;Kim, Gyu-Tae;Choi, Yong-Suk;Hwang, Eui-Hwan
    • Imaging Science in Dentistry / v.47 no.3 / pp.165-174 / 2017
  • Purpose: This study was performed to investigate the influence of object shape and distance from the center of the image on the volumetric accuracy of cone-beam computed tomography (CBCT) scans, according to different parameters of tube voltage and current. Materials and Methods: Four geometric objects (cylinder, cube, pyramid, and hexagon) with predefined dimensions were fabricated. The objects consisted of Teflon-perfluoroalkoxy embedded in a hydrocolloid matrix (Dupli-Coe-Loid TM; GC America Inc., Alsip, IL, USA), encased in an acrylic resin cylinder assembly. An Alphard Vega Dental CT system (Asahi Roentgen Ind. Co., Ltd, Kyoto, Japan) was used to acquire CBCT images. OnDemand 3D (CyberMed Inc., Seoul, Korea) software was used for object segmentation and image analysis. The accuracy was expressed by the volume error (VE). The VE was calculated under 3 different exposure settings. The measured volumes of the objects were compared to the true volumes for statistical analysis. Results: The mean VE ranged from -4.47% to 2.35%. There was no significant relationship between an object's shape and the VE. A significant correlation was found between the distance from the object to the center of the image and the VE. Tube voltage affected the volume measurements and the VE, but tube current did not. Conclusion: The evaluated CBCT device provided satisfactory volume measurements. To assess volume measurements, it might be sufficient to use serial scans with a high resolution but a low dose. This information may provide useful guidance for assessing volume measurements.
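The volume error (VE) metric above is commonly defined as the signed percentage deviation of the measured volume from the true volume; the one-line sketch below assumes that definition, which the abstract does not spell out, with hypothetical values.

```python
# Assumed definition: VE (%) = (measured volume - true volume) / true volume * 100
def volume_error(measured_mm3: float, true_mm3: float) -> float:
    return (measured_mm3 - true_mm3) / true_mm3 * 100.0

print(volume_error(measured_mm3=982.4, true_mm3=1000.0))   # -> -1.76 %
```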