• Title/Summary/Keyword: MD algorithm

Search results: 190

Reliability-based Design Optimization using Multiplicative Decomposition Method (곱분해기법을 이용한 신뢰성 기반 최적설계)

  • Kim, Tae-Kyun;Lee, Tae-Hee
    • Journal of the Computational Structural Engineering Institute of Korea / v.22 no.4 / pp.299-306 / 2009
  • Design optimization is a method for finding the optimum point that minimizes the objective function while satisfying the design constraints. Conventional optimization does not consider the uncertainty originating from modeling or the manufacturing process, so the optimum point often lies on the boundaries of the constraints. Reliability-based design optimization (RBDO) combines an optimization technique with a reliability analysis that calculates the reliability of the system. Reliability analysis methods can be classified into simulation methods, fast probability integration methods, and moment-based reliability methods. In the most commonly used MPP (most probable point)-based reliability analysis, which is a fast probability integration method, cost and numerical error can increase when many MPPs exist, because the constraints must be transformed into the standard normal distribution space. In this paper, the multiplicative decomposition method is used as the reliability analysis for RBDO, and sensitivity analysis is performed so that a gradient-based optimization algorithm can be applied. To illustrate the whole RBDO process, mathematical and engineering examples are presented.
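
A minimal sketch of the general RBDO idea described above, assuming a toy objective and limit state and estimating the reliability constraint by crude Monte Carlo rather than by the paper's multiplicative decomposition method:

```python
# Reliability-constrained design optimization sketch (illustration only).
import numpy as np
from scipy.optimize import minimize

SIGMA = 0.1              # assumed standard deviation of each design variable
TARGET_RELIABILITY = 0.99

def objective(d):
    # assumed toy objective: minimize d1 + d2
    return d[0] + d[1]

def limit_state(x):
    # assumed limit state: g(x) >= 0 means "safe"
    return x[0] ** 2 * x[1] / 20.0 - 1.0

def reliability(d, n=20000):
    # crude Monte Carlo estimate of P[g(X) >= 0] with X ~ N(d, SIGMA^2 I)
    rng = np.random.default_rng(0)          # fixed seed: common random numbers
    samples = rng.normal(loc=d, scale=SIGMA, size=(n, len(d)))
    return np.mean(limit_state(samples.T) >= 0)

constraint = {"type": "ineq", "fun": lambda d: reliability(d) - TARGET_RELIABILITY}
result = minimize(objective, x0=np.array([3.0, 3.0]), method="COBYLA",
                  constraints=[constraint])
print(result.x, reliability(result.x))
```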

Design of Computer Access Devices for Severely Motor-disability Using Bio-potentials (생체전위를 이용한 중증 운동장애자들을 위한 컴퓨터 접근제어장치 설계)

  • Jung, Sung-Jae;Kim, Myung-Dong;Park, Chan-Won;Kim, Il-Hwan
    • The Transactions of the Korean Institute of Electrical Engineers D / v.55 no.11 / pp.502-510 / 2006
  • In this paper, we describe the implementation of a computer access device for people with severe motor disabilities. Many people with severe motor disabilities need augmentative communication technology. Those who are totally paralyzed, or 'locked-in', cannot use conventional augmentative technologies, all of which require some measure of muscle control. The forehead is often the last site to suffer degradation in cases of severe disability and degenerative disease. For example, in ALS (Amyotrophic Lateral Sclerosis) and MD (Muscular Dystrophy), the ocular motor neurons and ocular muscles are usually spared, permitting at least gross eye movements, but not precise eye pointing. We use brain and body forehead bio-potentials in a novel way to generate multiple signals for computer control inputs. A bio-amplifier within the device separates the forehead signal into three frequency channels. The lowest channel is responsive to bio-potentials resulting from eye motion, and the second channel is band-passed between 0.5 and 45 Hz, falling within the accepted electroencephalographic (EEG) range; a digital processing station subdivides this region into eleven component frequency bands using an FFT algorithm. The third channel is defined as an electromyographic (EMG) signal; it responds to contractions of facial muscles and is well suited to discrete on/off switch closures and keyboard commands. These signals are transmitted to a PC that analyzes them in the time and frequency domains and discriminates the user's intentions. The software graphically displays the user's bio-potential signals in real time, so users can see their own bio-potentials and learn to control their physiological signals little by little over some training sessions. As a result, we confirmed the performance and usability of the developed system with users' experimental bio-potentials.
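
A minimal sketch of the band-splitting step described above: the 0.5-45 Hz EEG channel is isolated with a band-pass filter and subdivided into eleven sub-bands via an FFT. The sampling rate, band edges, filter order, and synthetic signal are illustrative assumptions, not the paper's specification:

```python
# Split a forehead bio-potential signal into an EEG channel and FFT sub-bands.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256                                  # assumed sampling rate in Hz
t = np.arange(0, 4, 1 / FS)
signal = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)  # fake EEG-like data

def bandpass(x, lo, hi, fs=FS, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

# Channel 2: 0.5-45 Hz (EEG range), subdivided into eleven sub-bands via FFT
eeg = bandpass(signal, 0.5, 45.0)
spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(eeg.size, d=1 / FS)

edges = np.linspace(0.5, 45.0, 12)        # eleven sub-bands, per the abstract
band_power = [spectrum[(freqs >= lo) & (freqs < hi)].sum()
              for lo, hi in zip(edges[:-1], edges[1:])]
print([round(p, 1) for p in band_power])
```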

A Dynamic Transmission Rate Allocation Algorithm for Multiplexing Delay-sensitive VBR-coded Streams (VBR로 부호화된 지연 민감 서비스의 다중화를 위한 동적인 전송률 할당 알고리즘)

  • 김진수;유국열;이문노
    • The Journal of Korean Institute of Communications and Information Sciences / v.28 no.7B / pp.628-637 / 2003
  • This paper describes a novel multiplexing scheme for delay-sensitive multiple VBR-coded bit streams in live multimedia services offered over high-speed networks. The primary goal of multiplexing in this paper is to keep the delay limits of each bit stream and to enhance network resource utilization when the streams are multiplexed and transmitted over the network. To this end, the paper presents a dynamic control scheme that does not violate any delay constraint of any bit stream. The scheme is based on the assumption that the recent behavior of each bit stream is highly correlated with its near-term future behavior. This property is used to make the transmission rate as flat as possible through both temporal averaging on a stream-by-stream basis and spatial averaging across the streams when multiple VBR-coded bit streams are multiplexed. The effectiveness of the scheme is evaluated by several simulations using an MPEG-coded video trace of Star_wars, and it is shown that the proposed scheme can effectively reduce the peak rate and the coefficient of variation of the multiplexed transmission rate.
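
A minimal sketch of the averaging idea behind the scheme, assuming a sliding-window temporal average per stream and a proportional split of an assumed link capacity across streams (the spatial averaging); the window length, capacity, and synthetic VBR traces are illustrative, not the paper's parameters:

```python
# Rate allocation from temporal (per-stream) and spatial (cross-stream) averaging.
import numpy as np

WINDOW = 8            # frames of recent history used to predict near-term demand
CAPACITY = 10_000     # assumed link capacity in kbit/s

rng = np.random.default_rng(1)
traces = rng.gamma(shape=2.0, scale=500.0, size=(3, 200))   # three fake VBR traces

def allocate(traces, t):
    """Allocate capacity at frame t from each stream's recent average rate."""
    start = max(0, t - WINDOW)
    recent = traces[:, start:t + 1].mean(axis=1)    # temporal averaging per stream
    return CAPACITY * recent / recent.sum()         # spatial averaging across streams

alloc = np.array([allocate(traces, t) for t in range(traces.shape[1])]).T
print("raw stream-0 CoV      :", traces[0].std() / traces[0].mean())
print("allocated stream-0 CoV:", alloc[0].std() / alloc[0].mean())
```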

An Effective Control Method for Improving Integrity of Mobile Phone Forensics (모바일 포렌식의 무결성 보장을 위한 효과적인 통제방법)

  • Kim, Dong-Guk;Jang, Seong-Yong;Lee, Won-Young;Kim, Yong-Ho;Park, Chang-Hyun
    • Journal of the Korea Institute of Information Security & Cryptology / v.19 no.5 / pp.151-166 / 2009
  • To prove the integrity of digital evidence during the investigation procedure, data verified with the MD5 (Message Digest 5) hash-function algorithm has to be discarded if its integrity is damaged during the investigation. Even though restoring deleted areas is essential for securing evidence about the main phase of a case, it has been difficult to secure decisive evidence because restored data is regarded as damaged whenever the overall hash value differs from the original value. From this viewpoint, this paper proposes a novel model for the mobile forensic procedure, named E-Finder (Evidence Finder), to solve the existing problem. The E-Finder has 5 main phases and 15 procedures. We compared E-Finder with the methodologies of NIST (National Institute of Standards and Technology) and the Tata Elxsi Security Group. This paper thus contributes to the development and standardization of an investigation methodology for mobile forensics.
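
A minimal sketch of the hash-based integrity check that the procedure relies on: an MD5 digest recorded at acquisition time is re-computed later, and any mismatch marks the evidence as altered. The file name and dummy evidence image are illustrative assumptions:

```python
# MD5 integrity check for an evidence image (illustration only).
import hashlib

with open("evidence_image.bin", "wb") as f:        # stand-in evidence image
    f.write(b"\x00" * 1024)

def md5_of_file(path, chunk_size=1 << 20):
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

acquired_hash = md5_of_file("evidence_image.bin")  # stored at acquisition time
# ... later, before the evidence is presented ...
if md5_of_file("evidence_image.bin") != acquired_hash:
    print("Integrity check failed: evidence must be treated as altered.")
else:
    print("Integrity verified.")
```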

Analysis of Success Factors of OTT Original Contents Through BigData, Netflix's 'Squid Game Season 2' Proposal (빅데이터를 통한 OTT 오리지널 콘텐츠의 성공요인 분석, 넷플릭스의 '오징어게임 시즌2' 제언)

  • Ahn, Sunghun;Jung, JaeWoo;Oh, Sejong
    • Journal of Korea Society of Digital Industry and Information Management / v.18 no.1 / pp.55-64 / 2022
  • This study analyzes the success factors of OTT original content through big data and suggests scenario, casting, fun, and emotionally moving elements for producing the next work. In addition, we offer suggestions for the success of 'Squid Game Season 2'. The success factors of 'Squid Game' identified through big data are as follows. First, it is a simple psychological experiment game. Second, it uses a retro strategy. Third, it has modern visual beauty and color. Fourth, it has simple aesthetics. Fifth, it is on the OTT platform Netflix. Sixth, it benefits from Netflix's video recommendation algorithm. Seventh, it induced binge-watching. Lastly, it resonated widely because it was related to thinking about 'death' and 'money' in a pandemic situation. The suggestions for 'Squid Game Season 2' are as follows. First, a fusion of famous traditional games from each country. Second, an AI-based planning, production, and sales strategy for MD (merchandise) products. Third, casting based on artificial intelligence and big data. Fourth, a secondary copyright and copyright sales strategy. A limitation of this study is that it was analyzed only through external data; data inside the Netflix platform was not utilized. If AI and big data are used not only in the OTT field but also by entertainment and film companies, it will be possible to discover better business models and generate stable profits.

Error Resilient Video Coding Techniques Using Multiple Description Scheme (다중 표현을 이용한 에러에 강인한 동영상 부호화 방법)

  • 김일구;조남익
    • Journal of Broadcast Engineering / v.9 no.1 / pp.17-31 / 2004
  • This paper proposes an algorithm for the robust transmission of video in error-prone environments using multiple description coding (MDC) with an optimal split of DCT coefficients and a rate-distortion (RD) optimization framework. In MDC, a source signal is split into several coded streams, called descriptions, and each description is transmitted to the decoder through a different channel. Structured correlations are introduced between the descriptions at the encoder, and the decoder exploits this correlation to reconstruct the original signal even if some descriptions are missing. It has been shown that MDC is more resilient than single description coding (SDC) under severe packet loss rate (PLR) conditions. However, the excessive redundancy in MDC, i.e., the correlation between the descriptions, degrades the RD performance under low PLR conditions. To overcome this problem, we propose a hybrid MDC method that switches between SDC and MDC according to the channel condition: SDC is used for coding efficiency under low PLR conditions, and MDC is used for error resilience under high PLR conditions. To control the SDC/MDC switching optimally, an RD optimization framework is used. The Lagrangian optimization technique minimizes the RD-based cost function D + λR, where R is the actually coded bit rate and D is the estimated distortion. A recursive optimal per-pixel estimation technique is adopted to estimate the decoder distortion accurately. Experimental results show that the proposed optimal split of DCT coefficients and SD/MD switching algorithm is more effective than conventional MD algorithms under low PLR conditions as well as under high PLR conditions.
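
A minimal sketch of Lagrangian SDC/MDC mode switching, assuming a toy packet-loss distortion model: the cost J = D + λR is evaluated for both modes and the cheaper mode is chosen. The rates, distortions, and loss model are illustrative, not the paper's per-pixel estimator:

```python
# Lagrangian SDC/MDC mode decision per coding unit (illustration only).

LAMBDA = 0.85   # assumed Lagrange multiplier tying rate to distortion

def expected_distortion(d_full, d_partial, loss_rate, is_mdc):
    """Very rough expected decoder distortion under packet loss."""
    if is_mdc:
        # one description lost: degraded but usable reconstruction
        return (1 - loss_rate) * d_full + loss_rate * d_partial
    # SDC: a lost packet means the unit is lost entirely (concealment distortion)
    return (1 - loss_rate) * d_full + loss_rate * (d_partial * 4)

def choose_mode(rate_sdc, rate_mdc, d_full, d_partial, loss_rate):
    j_sdc = expected_distortion(d_full, d_partial, loss_rate, False) + LAMBDA * rate_sdc
    j_mdc = expected_distortion(d_full, d_partial, loss_rate, True) + LAMBDA * rate_mdc
    return "SDC" if j_sdc <= j_mdc else "MDC"

for plr in (0.01, 0.05, 0.20):
    print(plr, choose_mode(rate_sdc=100, rate_mdc=130, d_full=10, d_partial=40, loss_rate=plr))
```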

Steel Plate Faults Diagnosis with S-MTS (S-MTS를 이용한 강판의 표면 결함 진단)

  • Kim, Joon-Young;Cha, Jae-Min;Shin, Junguk;Yeom, Choongsub
    • Journal of Intelligence and Information Systems / v.23 no.1 / pp.47-67 / 2017
  • Steel plate faults are one of the important factors affecting the quality and price of steel plates. So far, many steelmakers have generally used a visual inspection method based on an inspector's intuition or experience: the inspector checks the steel plate faults by looking at the surface of the steel plates. However, the accuracy of this method is critically low, and it can cause judgment errors above 30%. Therefore, an accurate steel plate fault diagnosis system has been continuously required in the industry. To meet this need, this study proposes a new steel plate fault diagnosis system using Simultaneous MTS (S-MTS), an advanced Mahalanobis Taguchi System (MTS) algorithm, to classify various surface defects of steel plates. MTS has generally been used to solve binary classification problems in various fields, but it has not been used for multi-class classification due to its low accuracy, because only one Mahalanobis space is established in MTS. In contrast, S-MTS is suitable for multi-class classification: it establishes an individual Mahalanobis space for each class, and 'simultaneous' implies comparing the Mahalanobis distances at the same time. The proposed steel plate fault diagnosis system was developed in four main stages. In the first stage, after various reference groups and related variables are defined, data on steel plate faults is collected and used to establish the individual Mahalanobis space for each reference group and to construct the full measurement scale. In the second stage, the Mahalanobis distances of the test groups are calculated based on the established Mahalanobis spaces of the reference groups, and the appropriateness of the spaces is verified by examining the separability of the Mahalanobis distances. In the third stage, orthogonal arrays and dynamic-type Signal-to-Noise (SN) ratios are applied for variable optimization, and an overall SN ratio gain is derived from the SN ratios and SN ratio gains. If the derived overall SN ratio gain is negative, the variable should be removed; a variable with a positive gain may be considered worth keeping. Finally, in the fourth stage, the measurement scale composed of the selected useful variables is reconstructed, and an experimental test is implemented to verify the multi-class classification ability and obtain the classification accuracy. If the accuracy is acceptable, the diagnosis system can be used for future applications. This study also compared the accuracy of the proposed steel plate fault diagnosis system with that of other popular classification algorithms, including Decision Tree, Multilayer Perceptron Neural Network (MLPNN), Logistic Regression (LR), Support Vector Machine (SVM), Tree Bagger Random Forest, Grid Search (GS), Genetic Algorithm (GA), and Particle Swarm Optimization (PSO). The steel plate faults dataset used in the study is taken from the University of California at Irvine (UCI) machine learning repository. As a result, the proposed steel plate fault diagnosis system based on S-MTS shows a classification accuracy of 90.79%, which is 6-27% higher than MLPNN, LR, GS, GA, and PSO. Given that the accuracy of commercial systems is only about 75-80%, the proposed system has enough classification performance to be applied in the industry. In addition, the proposed system can reduce the number of measurement sensors installed in the field because of the variable optimization process. These results show that the proposed system not only performs well on steel plate fault diagnosis but also reduces operation and maintenance costs. In future work, the system will be applied in the field to validate its actual effectiveness, and we plan to improve the accuracy based on the results.
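
A minimal sketch of the per-class Mahalanobis space idea behind S-MTS: one Mahalanobis space is built per reference group, and a test sample is assigned to the class with the smallest Mahalanobis distance. The synthetic two-variable data and the omission of the orthogonal-array/SN-ratio variable selection are simplifications:

```python
# Per-class Mahalanobis distance classification (simplified S-MTS idea).
import numpy as np

rng = np.random.default_rng(2)
classes = {
    "fault_A": rng.normal([0, 0], 1.0, size=(100, 2)),   # synthetic reference group A
    "fault_B": rng.normal([5, 5], 1.0, size=(100, 2)),   # synthetic reference group B
}

spaces = {}
for name, X in classes.items():
    mean = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
    spaces[name] = (mean, cov_inv)

def mahalanobis_sq(x, mean, cov_inv):
    d = x - mean
    return float(d @ cov_inv @ d)

def classify(x):
    dists = {name: mahalanobis_sq(x, m, ci) for name, (m, ci) in spaces.items()}
    return min(dists, key=dists.get), dists

print(classify(np.array([4.5, 5.2])))
```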

Recognition method using stereo images-based 3D information for improvement of face recognition (얼굴인식의 향상을 위한 스테레오 영상기반의 3차원 정보를 이용한 인식)

  • Park Chang-Han;Paik Joon-Ki
    • Journal of the Institute of Electronics Engineers of Korea CI / v.43 no.3 s.309 / pp.30-38 / 2006
  • In this paper, we improve the recognition rate, which drops with distance, by using distance and depth information extracted in 3D from stereo face images. A monocular face image suffers a drop in recognition rate because of uncertain information such as object distance, size, movement, rotation, and depth, and it also fails frequently when image information such as rotation, illumination, and pose change is not acquired for recognition. We aim to solve these problems. The proposed method consists of an eye detection algorithm, face pose analysis, and principal component analysis (PCA). We also convert from RGB to YCbCr color space for fast face detection in a limited region. We create a multi-layered relative intensity map in the face candidate region and decide whether it is a face based on facial geometry. The depth information of distance, eyes, and mouth is acquired from the stereo face images. The proposed method detects faces across scale, movement, and rotation by using distance and depth. We train PCA on the detected left face and the estimated direction difference. Simulation results showed a face recognition rate of 95.83% at 100 cm in the frontal view and 98.3% with pose change. Therefore, the proposed method can achieve a high recognition rate with appropriate scaling and pose change according to distance.
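
A minimal sketch of the PCA (eigenface-style) matching step mentioned above: gallery face vectors are projected onto principal components and a probe is assigned to the nearest projection. The random stand-in "faces" and dimensions are illustrative assumptions:

```python
# PCA projection and nearest-neighbor matching for face recognition (sketch).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
gallery = rng.random((20, 32 * 32))      # 20 fake face images, flattened
labels = np.arange(20)

pca = PCA(n_components=10).fit(gallery)
gallery_proj = pca.transform(gallery)

def recognize(face_vector):
    probe = pca.transform(face_vector.reshape(1, -1))
    dists = np.linalg.norm(gallery_proj - probe, axis=1)
    return labels[np.argmin(dists)]

probe = gallery[7] + 0.05 * rng.standard_normal(32 * 32)   # a noisy view of face 7
print(recognize(probe))
```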

Evaluation of Size for Crack around Rivet Hole Using Lamb Wave and Neural Network (초음파 판파와 신경회로망 기법을 적용한 리뱃홀 부위의 균열 크기 평가)

  • Choi, Sang-Woo;Lee, Joon-Hyun
    • Journal of the Korean Society for Nondestructive Testing / v.21 no.4 / pp.398-405 / 2001
  • The rivet joint has a typical structural feature that can be an initiation site for fatigue cracks due to the combination of local stress concentration around the rivet hole and moisture trapping. From the viewpoint of structural assurance, it is crucial to evaluate the size of cracks around rivet holes by appropriate nondestructive evaluation techniques. The Lamb wave, one of the guided waves, offers an efficient tool for the nondestructive inspection of plates. Neural networks, considered the most suitable approach for pattern recognition, have been used by researchers in the NDE field to classify different flaw types and flaw sizes. In this study, crack size evaluation around the rivet hole using a neural network based on the back-propagation algorithm has been carried out by extracting features from the ultrasonic Lamb wave for an Al2024-T3 aircraft skin panel. Special attention was paid to reducing the coupling effect between the transducer and the specimen by extracting features related to the time and frequency component data in the ultrasonic waveform. It was clearly demonstrated that features extracted from the time- and frequency-domain data of the Lamb wave signal were very useful for determining, through the neural network, the size of cracks initiated from the rivet hole.
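
A minimal sketch of the back-propagation regression step: a small neural network maps features extracted from Lamb wave signals to a crack size. The three feature names and the synthetic training data are assumptions, not the paper's measurements:

```python
# Back-propagation network regressing crack size from Lamb wave features (sketch).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
# assumed features: [peak amplitude, time of flight, dominant frequency]
features = rng.random((60, 3))
crack_size = 2.0 * features[:, 0] + 0.5 * features[:, 2] + 0.05 * rng.standard_normal(60)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(features, crack_size)

new_signal_features = np.array([[0.4, 0.7, 0.2]])
print("estimated crack size:", model.predict(new_signal_features)[0])
```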


Noise-robust electrocardiogram R-peak detection with adaptive filter and variable threshold (적응형 필터와 가변 임계값을 적용하여 잡음에 강인한 심전도 R-피크 검출)

  • Rahman, MD Saifur;Choi, Chul-Hyung;Kim, Si-Kyung;Park, In-Deok;Kim, Young-Pil
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.12 / pp.126-134 / 2017
  • There have been numerous studies on extracting the R-peak from electrocardiogram (ECG) signals. However, most of the detection methods are complicated to implement in a real-time portable electrocardiograph device and have the disadvantage of requiring a large amount of calculations. R-peak detection requires pre-processing and post-processing related to baseline drift and the removal of noise from the commercial power supply for ECG data. An adaptive filter technique is widely used for R-peak detection, but the R-peak value cannot be detected when the input is lower than a threshold value. Moreover, there is a problem in detecting the P-peak and T-peak values due to the derivation of an erroneous threshold value as a result of noise. We propose a robust R-peak detection algorithm with low complexity and simple computation to solve these problems. The proposed scheme removes the baseline drift in ECG signals using an adaptive filter to solve the problems involved in threshold extraction. We also propose a technique to extract the appropriate threshold value automatically using the minimum and maximum values of the filtered ECG signal. To detect the R-peak from the ECG signal, we propose a threshold neighborhood search technique. Through experiments, we confirmed the improvement of the R-peak detection accuracy of the proposed method and achieved a detection speed that is suitable for a mobile system by reducing the amount of calculation. The experimental results show that the heart rate detection accuracy and sensitivity were very high (about 100%).
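
A minimal sketch of the overall flow described above: baseline drift is removed with a filter, a threshold is derived from the filtered signal's minimum and maximum values, and R-peaks are searched above that threshold. The high-pass filter (standing in for the adaptive filter), threshold ratio, and synthetic ECG are assumptions, not the paper's exact method:

```python
# Baseline removal, min/max-derived threshold, and R-peak search (illustration only).
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 360                                          # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / FS)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 63 + 0.2 * np.sin(2 * np.pi * 0.3 * t)  # crude ECG + drift

# 1) remove baseline drift with a high-pass filter (stand-in for the adaptive filter)
b, a = butter(2, 0.5 / (FS / 2), btype="highpass")
filtered = filtfilt(b, a, ecg)

# 2) derive a threshold from the filtered signal's minimum and maximum values
threshold = filtered.min() + 0.6 * (filtered.max() - filtered.min())

# 3) search for peaks above the threshold, at least 0.3 s apart
peaks, _ = find_peaks(filtered, height=threshold, distance=int(0.3 * FS))
print("detected R-peaks:", len(peaks), "-> heart rate ~", len(peaks) * 6, "bpm")
```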