• Title/Summary/Keyword: Minimum Variance Method


A Watermarking Method Based on the Informed Coding and Embedding Using Trellis Code and Entropy Masking (Trellis 부호 및 엔트로피 마스킹을 이용한 정보부호화 기반 워터마킹)

  • Lee, Jeong-Hwan
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.13 no.12
    • /
    • pp.2677-2684
    • /
    • 2009
  • In this paper, we study a watermarking method based on informed coding and embedding by means of trellis coding and entropy masking. An image is divided into non-overlapping $8{\times}8$ blocks and the discrete cosine transform (DCT) is applied to each block. The 16 medium-frequency AC coefficients of each block are then extracted and compared with Gaussian random vectors having zero mean and unit variance. Through this processing, the embedding vector minimizing a linear combination of linear correlation and Watson distance is obtained by the Viterbi algorithm at each stage of the trellis coding. To account for image characteristics, we apply different weights to the linear correlation and the Watson distance using entropy masking. To evaluate the performance of the proposed method, the average bit error rate of the watermark message is calculated over several different images. The experiments show that the proposed method improves the average bit error rate.
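The block-DCT front end described in this abstract can be sketched as follows; the specific medium-frequency index set `MID_FREQ` is our assumption for illustration, not necessarily the paper's exact choice.

```python
import numpy as np
from scipy.fft import dctn

# Sketch: split an image into non-overlapping 8x8 blocks, apply a 2-D DCT,
# and extract 16 medium-frequency AC coefficients per block.
# The index set below (diagonals 3..6 of the coefficient grid) is an assumption.
MID_FREQ = [(i, j) for i in range(8) for j in range(8) if 3 <= i + j <= 6][:16]

def extract_midband(image):
    h, w = image.shape
    vectors = []
    for r in range(0, h - h % 8, 8):
        for c in range(0, w - w % 8, 8):
            block = dctn(image[r:r+8, c:c+8], norm='ortho')
            vectors.append([block[i, j] for (i, j) in MID_FREQ])
    return np.array(vectors)

img = np.random.default_rng(0).random((64, 64))
vecs = extract_midband(img)
print(vecs.shape)  # (64, 16): one 16-dim vector per 8x8 block
```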

Calculating the collapse margin ratio of RC frames using soft computing models

  • Sadeghpour, Ali;Ozay, Giray
    • Structural Engineering and Mechanics
    • /
    • v.83 no.3
    • /
    • pp.327-340
    • /
    • 2022
  • The Collapse Margin Ratio (CMR) is a notable index used for the seismic assessment of structures. As proposed by FEMA P695, obtaining the CMR requires a set of analyses including Nonlinear Static Analysis (NSA), Incremental Dynamic Analysis (IDA), and Fragility Analysis, which are typically time-consuming and computationally expensive. To address this issue and provide a quick and efficient way to estimate the CMR, the Artificial Neural Network (ANN), Response Surface Method (RSM), and Adaptive Neuro-Fuzzy Inference System (ANFIS) are introduced in the current research. Using the NSA results, an attempt was made to find a fast and efficient approach to derive the CMR. To this end, 5016 IDA analyses based on the FEMA P695 methodology were carried out on 114 Reinforced Concrete (RC) frames with 1 to 12 stories. Five parameters were used as the independent inputs of the systems, and the CMR was regarded as the output. A double-hidden-layer neural network with the Levenberg-Marquardt training algorithm was adopted. In the RSM approach, a quadratic system incorporating 20 parameters was implemented, and Analysis of Variance (ANOVA) was employed to discuss the results of the developed model; the essential parameters and interactions were extracted, and the input parameters were sorted by importance. An ANFIS using the Takagi-Sugeno fuzzy system was also employed. Finally, all methods were compared and the effective parameters and associated relationships were extracted. The ANFIS provided the best efficiency and highest accuracy with the minimum errors, while the ANN method was more effective than the RSM, with a higher regression coefficient and lower statistical errors.
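Levenberg-Marquardt training of a small network can be sketched with SciPy's nonlinear least-squares solver. This is only an illustration of the training principle on synthetic data with a single, reduced hidden layer; the network size, data, and names below are our assumptions, not the paper's model.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
X = rng.random((50, 2))            # two hypothetical input parameters
y = np.sin(X[:, 0]) + X[:, 1]**2   # synthetic target standing in for the CMR

def net(params, X):
    # A single hidden layer with 3 tanh units (a reduced stand-in).
    W1 = params[:6].reshape(2, 3)
    b1 = params[6:9]
    w2 = params[9:12]
    b2 = params[12]
    return np.tanh(X @ W1 + b1) @ w2 + b2

def residuals(params):
    return net(params, X) - y

# method='lm' is SciPy's Levenberg-Marquardt implementation (MINPACK).
fit = least_squares(residuals, x0=rng.normal(size=13), method='lm')
print(np.mean(fit.fun**2))  # mean squared residual after training
```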

Multiple Targets Detection by using CLEAN Algorithm in Matched Field Processing (정합장처리에서 CLEAN알고리즘을 이용한 다중 표적 탐지)

  • Lim Tae-Gyun;Lee Sang-Hak;Cha Young-Wook
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.10 no.9
    • /
    • pp.1545-1550
    • /
    • 2006
  • In this paper, we propose a method for applying the CLEAN algorithm to a minimum variance distortionless response (MVDR) processor to estimate the locations of multiple targets distributed in the ocean. The CLEAN algorithm is easy to implement in a linear processor, yet not in a nonlinear one. In the proposed method, the cross-spectral density matrix (CSDM) of a Dirty map is separated into the CSDM of a Clean beam and the CSDM of the Residual, and then an individual ambiguity surface (AMS) is generated. As such, the CLEAN algorithm can be applied to an MVDR, a nonlinear processor. To solve the ill-conditioned problem related to the matrix inversion in an MVDR when using the CLEAN algorithm, singular value decomposition (SVD) is carried out and the reciprocals of small eigenvalues are replaced with zero. Experimental results show that the proposed method improves the performance of an MVDR.
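The SVD step described here can be sketched in a few lines (our illustration of the general technique, not the authors' code): invert the CSDM while replacing the reciprocals of its small eigenvalues with zero.

```python
import numpy as np

def svd_regularized_inverse(R, keep):
    # keep: number of dominant singular values whose reciprocals are retained
    U, s, Vh = np.linalg.svd(R)
    s_inv = np.zeros_like(s)
    s_inv[:keep] = 1.0 / s[:keep]   # reciprocals of small values -> 0
    return Vh.conj().T @ np.diag(s_inv) @ U.conj().T

def mvdr_ambiguity(R_inv, w):
    # MVDR ambiguity surface value for a replica (steering) vector w
    return 1.0 / np.real(w.conj().T @ R_inv @ w)

# Toy CSDM built from two plane-wave snapshots on an 8-element array
n = 8
A = np.exp(1j * np.outer(np.arange(n), [0.3, 1.1]))
R = A @ A.conj().T + 1e-9 * np.eye(n)   # nearly rank-2: ill-conditioned
R_inv = svd_regularized_inverse(R, keep=2)
print(np.linalg.matrix_rank(R_inv))     # 2: only the dominant subspace is inverted
```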

Region-of-Interest Detection using the Energy from Vocal Fold Image (성대 영상에서 에너지를 이용한 관심 영역 추출)

  • Kim, Eom-Jun;Sung, Mee-Young
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.8
    • /
    • pp.804-814
    • /
    • 2000
  • In this paper, we propose an effective method to detect the regions of interest in a videostrobokymography system, a medical image processing system for automatically extracting diagnosis parameters from the irregular vibratory movements of the vocal fold. We detect the regions of interest in three steps. In the first step, we remove the noise in the input image and find the minimum energy value in each frame. In the second step, we compute the edge using the average value of each line. In the third step, the regions of interest are extracted by a merge algorithm that uses the variance of luminance as the feature. We tested this method on the vocal fold images of nineteen patients, and the regions of interest were detected in most of them. The proposed method is efficient enough to extract the regions of interest in vocal fold images at a frame rate of 40 frames/second and a resolution of $200{\times}280$ pixels.
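As a loose sketch of the variance-of-luminance idea (our assumption about the details, not the authors' exact merge algorithm), the columns of a frame where luminance varies strongly can be taken as the vibrating region of interest:

```python
import numpy as np

def roi_columns(frame, thresh_ratio=0.1):
    var = frame.var(axis=0)                 # luminance variance per column
    mask = var > thresh_ratio * var.max()   # high variance = vibrating region
    cols = np.flatnonzero(mask)
    return cols.min(), cols.max()           # bounding interval of the ROI

# Synthetic frame: static background with a vibrating band in columns 25-34
frame = np.full((40, 60), 0.2)
frame[:, 25:35] += np.random.default_rng(2).random((40, 10))
print(roi_columns(frame))  # (25, 34)
```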


Factor Analysis for Exploratory Research in the Distribution Science Field (유통과학분야에서 탐색적 연구를 위한 요인분석)

  • Yim, Myung-Seong
    • Journal of Distribution Science
    • /
    • v.13 no.9
    • /
    • pp.103-112
    • /
    • 2015
  • Purpose - This paper aims to provide a step-by-step approach to factor analytic procedures, such as principal component analysis (PCA) and exploratory factor analysis (EFA), and to offer a guideline for factor analysis. Some authors have argued that the results of PCA and EFA are substantially similar, and assert that PCA is the more appropriate technique because it produces easily interpreted results that are likely to be the basis of better decisions. For these reasons, many researchers have used PCA instead of EFA. However, these techniques are clearly different: PCA should be used for data reduction, whereas EFA is tailored to identify an underlying factor structure, a set of latent factors that cause the measured variables to covary. A guideline and procedures for factor analysis are therefore needed, because to date these two techniques have been indiscriminately misused. Research design, data, and methodology - This research conducted a literature review. We summarized the meaningful and consistent arguments and drew up guidelines and suggested procedures for rigorous EFA. Results - PCA can be used instead of common factor analysis when all measured variables have high communality; otherwise, common factor analysis is recommended for EFA. First, researchers should evaluate the sample size and check for sampling adequacy before conducting factor analysis. If these conditions are not satisfied, the next steps cannot be followed. The sample size must be at least 100, with communality above 0.5 and a subject-to-item ratio of at least 5:1, with a minimum of five items in the EFA. Next, Bartlett's sphericity test and the Kaiser-Meyer-Olkin (KMO) measure should be assessed for sampling adequacy: the chi-square value for Bartlett's test should be significant, and a KMO of more than 0.8 is recommended. The next step is to conduct the factor analysis itself, which is composed of three stages. The first stage determines an extraction technique; generally, maximum likelihood (ML) or principal axis factoring (PAF) will give researchers the best results, and the selection between the two hinges on data normality: ML requires normally distributed data, while PAF does not. The second stage determines the number of factors to retain in the EFA; the best way is to apply three methods together - eigenvalues greater than 1.0, the scree plot test, and the variance extracted. The last stage is to select one of two rotation methods, orthogonal or oblique. If the research suggests that the factors are correlated with each other, the oblique method should be selected, because it assumes the factors are correlated; otherwise, the orthogonal method can be used. Conclusions - Recommendations are offered for the best factor analytic practice in empirical research.
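Two of the checks described above - Bartlett's sphericity test and the eigenvalues-greater-than-1.0 retention rule - can be sketched with NumPy/SciPy on synthetic single-factor data (the data and names here are our own illustration):

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(data):
    # Chi-square test that the correlation matrix is an identity matrix
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    stat = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return stat, chi2.sf(stat, df)

# Synthetic data with one strong common factor across six variables
rng = np.random.default_rng(3)
factor = rng.normal(size=(200, 1))
data = factor @ rng.normal(size=(1, 6)) + 0.5 * rng.normal(size=(200, 6))

stat, pval = bartlett_sphericity(data)
eigvals = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))
print(pval < 0.05, (eigvals > 1.0).sum())  # sphericity rejected; factors to retain
```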

The effect of screw tightening techniques on the detorque value in internal connection implant superstructure (내부연결 임플란트 상부구조물에서 나사조임술식이 풀림토크값에 미치는 영향)

  • Choi, Jung-Han
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.48 no.4
    • /
    • pp.243-250
    • /
    • 2010
  • Purpose: This study evaluated the effect of different screw tightening sequences and methods on detorque values in a well-fitting implant superstructure. Materials and methods: A fully edentulous mandibular master model and a metal framework directly connected to four parallel implants (Astra Tech) with a passive fit to each other were fabricated. Six stone casts were made with a splinted impression technique to represent a 'well-fitting' situation with the metal framework. Detorque values were measured twice after screw tightening to 20 Ncm. Detorque values and minimum detorque values for three screw tightening sequences (1-2-3-4, 2-4-3-1, and 2-3-1-4) and for two tightening methods (two-step and one-step) were analyzed using multi-way and two-way analysis of variance, respectively, at a .05 level of significance. Results: The mean detorque values for the screw tightening sequences ranged from 12.8 Ncm (2-4-3-1) to 13.1 Ncm (2-3-1-4), and those for the tightening methods were 13.1 Ncm (two-step) and 11.8 Ncm (one-step). The mean minimum detorque values were 11.1 Ncm (1-2-3-4) and 11.2 Ncm (2-4-3-1 and 2-3-1-4) for the sequences, and 11.2 Ncm (two-step) and 9.9 Ncm (one-step) for the methods. No statistically significant differences among the three screw tightening sequences were found for detorque values or minimum detorque values, but statistically significant differences between the two tightening methods were found for both values: the two-step method showed a higher detorque value (P = .0003) and a higher minimum detorque value (P = .0035) than the one-step method. Conclusion: Within the limitations of this study, the screw tightening sequence was not a critical factor for the detorque values in a well-fitting implant superstructure made by the splinted impression technique, but the two-step screw tightening method showed greater detorque values than the one-step method.
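As a reduced illustration of the statistical comparison (with invented readings chosen only to mirror the reported two-step vs. one-step means, not the study's data, and a one-way ANOVA standing in for the study's two-way analysis):

```python
from scipy.stats import f_oneway

# Hypothetical detorque readings (Ncm) for the two tightening methods.
# With two groups, one-way ANOVA is equivalent to a two-sample t-test (F = t^2).
two_step = [13.4, 12.9, 13.3, 12.8, 13.2, 13.0]
one_step = [11.9, 11.6, 12.1, 11.7, 11.8, 11.9]

F, p = f_oneway(two_step, one_step)
print(p < 0.05)  # the method effect would be judged significant
```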

A Study on the Lineament Analysis Along Southwestern Boundary of Okcheon Zone Using the Remote Sensing and DEM Data (원격탐사자료와 수치표고모형을 이용한 옥천대 남서경계부의 선구조 분석 연구)

  • Kim, Won Kyun;Lee, Youn Soo;Won, Joong-Sun;Min, Kyung Duck;Lee, Younghoon
    • Economic and Environmental Geology
    • /
    • v.30 no.5
    • /
    • pp.459-467
    • /
    • 1997
  • In order to examine the primary trends and characteristics of geological lineaments along the southwestern boundary of the Okcheon zone, we analyzed geological lineament trends over six selected sub-areas using Landsat-5 TM images and a digital elevation model. The trends of the lineaments were determined by a minimum variance method, and the resulting geological lineament map was obtained through a generalized Hough transform. We corrected look-direction biases, which reduce the interpretability of remotely sensed images. An approach of histogram modification was also adopted to extract drainage patterns, specifically in alluvial plains. The lineament extraction method adopted in this study is very effective for analyzing geological lineaments and helps estimate geological trends associated with various tectonic events. In the six sub-areas, the general trends of the lineaments are characterized by NW, NNW, NS-NNE, and NE directions. NW trends in the Cretaceous volcanic rocks and Jurassic granite areas may represent tension joints developed by the rejuvenated end-of-Early-Cretaceous left-lateral strike-slip motion along the Honam Shear Zone, while NE and NS-NNE trends correspond to fault directions parallel to that shear zone. NE and NW trends in the granitic gneiss are parallel to the direction of schistosity, and NS-NNE and NE trends are interpreted as lineations produced by the compressive force of right-lateral strike-slip faulting from the late Triassic to the Jurassic. In the foliated granite, NE and NNE trends coincide with the directions of the ductile foliation and the Honam Shear Zone, and NW-NNW trends may be interpreted as the direction of another compressional foliation (Triassic to Early Jurassic) or as end-of-Early-Cretaceous tensional joints. We interpret the NS-NNE lineations as related to the rejuvenated Chugaryung Fault System.
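A minimum-variance trend estimate for axial data such as lineament azimuths can be sketched as follows (our illustration of the general technique, not the paper's exact implementation): doubling the angles maps 180-degree-periodic trends onto the full circle, and the mean of the doubled angles is the trend about which the angular variance is minimal.

```python
import numpy as np

def axial_mean_deg(azimuths_deg):
    # Double the angles so that 10 deg and 190 deg count as the same trend,
    # take the circular mean, then halve back to the 0-180 deg axial range.
    doubled = np.deg2rad(2.0 * np.asarray(azimuths_deg))
    mean2 = np.arctan2(np.sin(doubled).sum(), np.cos(doubled).sum())
    return (np.rad2deg(mean2) / 2.0) % 180.0

trends = [40, 42, 38, 41, 39, 43]   # a tight NE-trending set (degrees)
print(round(axial_mean_deg(trends), 1))  # 40.5
```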


Bioequivalence of Kerora Intramuscular Injections to Tarasyn Intramuscular Injections (Ketorolac Tromethamine 30 mg) (타라신 근주(케토롤락트로메타민 30 mg)에 대한 케로라 근주의 생물학적 동등성)

  • Chung, Youn-Bok;Lee, Jun-Seup;Han, Kun
    • Journal of Pharmaceutical Investigation
    • /
    • v.29 no.1
    • /
    • pp.67-72
    • /
    • 1999
  • A bioequivalence study of $Kerora^{\circledR}$ intramuscular injections (Dongkwang Pharmaceutical Co., Korea) versus $Tarasyn^{\circledR}$ intramuscular injections (Roche Co., Korea), formulations of ketorolac tromethamine (KTR), was conducted. Sixteen healthy Korean male subjects received each formulation at a dose of 30 mg as KTR in a $2{\times}2$ crossover study, with a one-week washout period between the doses. Plasma concentrations of KTR were monitored by an HPLC method. AUC was calculated by the linear trapezoidal method, and $C_{max}$ and $T_{max}$ were compiled from the plasma concentration-time data. Analysis of variance (ANOVA) revealed no differences in AUC, $C_{max}$, or $T_{max}$ between the formulations. The differences between the formulations in these parameters were all far less than 20% (i.e., 3.65, 2.59, and 4.35% for AUC, $C_{max}$, and $T_{max}$, respectively). Minimum detectable differences (%) at ${\alpha}=0.1$ and $1-{\beta}=0.8$ were 12.87, 13.44, and 20.62% for AUC, $C_{max}$, and $T_{max}$, respectively. The 90% confidence intervals for these parameters were also within 20%. These results satisfy the bioequivalence criteria of the Korea Food and Drug Administration (KFDA) guidelines (No. 1998-86), indicating that the two formulations of KTR are bioequivalent.
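The linear trapezoidal AUC used in this and similar abstracts is simple to sketch (the concentration-time values below are invented for illustration, not the study's measurements):

```python
import numpy as np

t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])   # sampling times (h)
c = np.array([0.0, 2.1, 3.0, 2.2, 1.1, 0.3])   # plasma concentration (ug/mL)

# Linear trapezoidal rule: sum of trapezoid areas between adjacent samples
auc = np.sum((c[1:] + c[:-1]) / 2.0 * np.diff(t))
print(round(auc, 3))  # 10.5
```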


Bioequivalence Evaluation of Senafen Tablet and Airtal Tablet Containing Aceclofenac 100 mg (아세클로페낙(100mg) 제제인 세나펜 정과 에어할 정의 생물학적동등성 평가)

  • 박은우;송우헌;차영주;최영욱
    • Biomolecules & Therapeutics
    • /
    • v.6 no.4
    • /
    • pp.423-428
    • /
    • 1998
  • Aceclofenac is an orally effective non-steroidal anti-inflammatory agent of the phenylacetic acid class. A bioequivalence study of two aceclofenac preparations, the test drug ($Senafen^{\circledR}$: Daewon Pharmaceutical Company) and the reference drug ($Airtal^{\circledR}$: Daewoong Pharmaceutical Company), was conducted according to the guidelines of the Korea Food and Drug Administration (KFDA). Sixteen healthy male volunteers, $24{\pm}4$ years old and $63.9{\pm}6.9$ kg of body weight on average, were divided randomly into two groups and administered the drug orally at a dose of 100 mg as aceclofenac in a $2{\times}2$ crossover study. Plasma concentrations of aceclofenac were monitored by an HPLC method for 12 hr after administration. $AUC_{0-12hr}$ (area under the plasma concentration-time curve from 0 to 12 hr) was calculated by the linear trapezoidal method. $C_{max}$ (maximum plasma drug concentration) and $T_{max}$ (time to reach $C_{max}$) were compiled directly from the plasma concentration-time data. Student's t-test indicated no significant differences between the formulations in these parameters. Analysis of variance (ANOVA) revealed no differences in $AUC_{0-12hr}$, $C_{max}$, or $T_{max}$ between the formulations. The apparent differences between the formulations were far less than 20% (e.g., 0.25, 0.01, and 7.32 for $AUC_{0-12hr}$, $C_{max}$, and $T_{max}$, respectively). Minimum detectable differences (%) between the formulations at ${\alpha}=0.05$ and $1-{\beta}=0.8$ were less than 20% (e.g., 14.65, 12.47, and 15.46 for $AUC_{0-12hr}$, $C_{max}$, and $T_{max}$, respectively). The 90% confidence intervals for these parameters were also within ${\pm}20\%$ (e.g., -10.19~10.68, -8.87~8.89, and -3.69~18.33 for $AUC_{0-12hr}$, $C_{max}$, and $T_{max}$, respectively). These results satisfy the bioequivalence criteria of the KFDA guidelines, indicating that the two formulations of aceclofenac are bioequivalent.


Keyword Spotting on Hangul Document Images Using Character Feature Models (문자 별 특징 모델을 이용한 한글 문서 영상에서 키워드 검색)

  • Park, Sang-Cheol;Kim, Soo-Hyung;Choi, Deok-Jai
    • The KIPS Transactions:PartB
    • /
    • v.12B no.5 s.101
    • /
    • pp.521-526
    • /
    • 2005
  • In this paper, we propose a keyword spotting system as an alternative to a search system for poor-quality Korean document images and compare the proposed system with an OCR-based document retrieval system. The system is composed of character segmentation, feature extraction for the query keyword, and word-to-word matching. In the character segmentation step, we propose an effective method to remove the connectivity between adjacent characters and a character segmentation method that minimizes the variance of character widths. In the query creation step, the feature vector for the query is constructed by combining character models by typeface. In the matching step, word-to-word matching is applied based on character-to-character matching. We demonstrate that the proposed keyword spotting system is more efficient than the OCR-based one for searching a keyword in Korean document images, especially when the quality of the documents is quite poor and the point size is small.
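The width-variance criterion in the segmentation step can be sketched as a small search (our reconstruction of the idea, not the authors' code): among candidate cut positions, keep the set that makes the character widths most uniform.

```python
import itertools
import numpy as np

def min_variance_cuts(candidates, total_width, n_chars):
    # Try every ordered subset of candidate cuts that yields n_chars segments
    # and keep the one whose segment widths have the smallest variance.
    best, best_var = None, float('inf')
    for cuts in itertools.combinations(candidates, n_chars - 1):
        bounds = [0, *cuts, total_width]
        widths = np.diff(bounds)
        v = np.var(widths)
        if v < best_var:
            best, best_var = cuts, v
    return best

# Hypothetical word image 90 px wide, 3 characters, candidate cuts from a
# projection profile
print(min_variance_cuts([25, 31, 58, 62], 90, 3))  # (31, 62)
```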