• Title/Summary/Keyword: 신호 최적화 (signal optimization)

Search results: 947

Study on Effect of Variance of Physiological Responses in Color Foot Reflexology Using Color Light (컬러광을 활용한 발반사요법이 인체 생리적 반응 변화에 미치는 영향에 관한 연구)

  • Jin, Hye-Ryeon; Yu, Mi; Park, Kyung-Jun; Kim, Nam-Gyun; Chung, Sung-Whan; Kim, Dong-Wook
    • Science of Emotion and Sensibility, v.13 no.1, pp.187-196, 2010
  • Recently, people have been suffering from stress-related fatigue and psychological disorders. Most people depend on medicine for pain relief; many also treat pain through alternative medicine or replacement therapy. However, drug therapy has many side effects, including increased stress after the therapy. In comparison, alternative therapies such as massage and foot reflexology are less damaging to the body and can be provided without physical or psychological discomfort. In this regard, the author had previously co-developed color foot reflexology, which combines the merits of color therapy and foot reflexology and has been shown to have beneficial effects without undue pain. This study investigates the effects of color foot reflexology on the physiological response of the body by comparing the body's response to the stimulus with its response to a placebo. Healthy adult subjects were selected for the experiment, which was conducted under optimal experimental conditions and design. The results indicated that, under stimulation, parasympathetic activity as measured by heart rate variability (HRV) increased, and that blood pressure, pulse, body temperature, and peripheral blood flow were markedly activated. In contrast, the placebo produced minimal changes or irregular outcomes. The results provide strong evidence for the beneficial effects of the color foot reflexology instrument on the autonomic nervous system and on the physiological response of the body. Future research is warranted to verify these results by examining patients suffering from diseases and disorders arising from irregular physiological function.


Timing Driven Analytic Placement for FPGAs (타이밍 구동 FPGA 분석적 배치)

  • Kim, Kyosun
    • Journal of the Institute of Electronics and Information Engineers, v.54 no.7, pp.21-28, 2017
  • Practical models of FPGA architectures, which include performance- and density-enhancing components such as carry chains, wide-function multiplexers, and memory/multiplier blocks, are being applied to academic FPGA placement tools that used to rely on simple imaginary models. Techniques such as pre-packing and multi-layer density analysis were previously proposed to remedy issues related to such practical models, and wire length is effectively minimized during initial analytic placement. However, timing rather than wire length should be optimized, and while most previous work takes timing constraints into account, the timing-driven techniques are mostly applied not to the initial analytic placement but to subsequent steps such as placement legalization and iterative improvement. This paper incorporates timing-driven techniques into an existing analytic placer that implements pre-packing and multi-layer density analysis: the placement is checked against timing constraints given in the standard SDC format, and the detected violations are minimized. First, a static timing analyzer is used to check the timing of the wire-length-minimized placement results. To minimize the detected violations, a function that minimizes the largest arrival time at endpoints is added to the objective function of the analytic placer; since each clock has a different period, this function is evaluated per clock before being added. Because this function can unnecessarily tighten paths that do not violate their constraints, a second function that calculates and minimizes the largest negative slack at endpoints is also proposed and compared (both terms are sketched below). Since the existing non-timing-driven legalization is used before the timing analysis, any improvement in timing is entirely due to the functions added to the objective. Experiments on twelve industrial examples show that the minimum-arrival-time function improves the worst negative slack by 15% on average, while the minimum-worst-negative-slack function improves the negative slacks by an additional 6% on average.
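
As an illustration only (not the authors' code), the two timing terms can be written as differentiable log-sum-exp surrogates so that an analytic placer can add them to its wire-length objective. The function names, the smoothing factor alpha, and the toy arrival times below are assumptions.

```python
import numpy as np

def smooth_max(values, alpha=10.0):
    """Differentiable log-sum-exp surrogate of max(values); larger alpha
    tracks the true maximum more closely (numerically stabilized)."""
    x = alpha * np.asarray(values, dtype=float)
    m = x.max()
    return (m + np.log(np.exp(x - m).sum())) / alpha

def max_arrival_term(arrivals, alpha=10.0):
    """First proposed term: minimize the largest endpoint arrival time."""
    return smooth_max(arrivals, alpha)

def wns_term(arrivals, period, alpha=10.0):
    """Second proposed term: penalize only endpoints whose arrival time
    exceeds the clock period (the negative-slack endpoints)."""
    violations = np.maximum(np.asarray(arrivals, dtype=float) - period, 0.0)
    return smooth_max(violations, alpha)

# Evaluated per clock domain (each clock has its own period) and summed
# into the placer's wire-length objective with some weight.
clocks = {"clk_fast": ([3.1, 4.9, 5.4], 5.0),   # one endpoint violates
          "clk_slow": ([7.2, 8.8], 10.0)}       # no violations
penalty = sum(wns_term(arr, T) for arr, T in clocks.values())
print(f"timing penalty: {penalty:.3f}")
```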

Advanced Hybrid EER Transmitter for WCDMA Application Using Efficiency Optimized Power Amplifier and Modified Bias Modulator (효율이 특화된 전력 증폭기와 개선된 바이어스 모듈레이터로 구성되는 진보된 WCDMA용 하이브리드 포락선 제거 및 복원 전력 송신기)

  • Kim, Il-Du; Woo, Young-Yun; Hong, Sung-Chul; Kim, Jang-Heon; Moon, Jung-Hwan; Jun, Myoung-Su; Kim, Jung-Joon; Kim, Bum-Man
    • The Journal of Korean Institute of Electromagnetic Engineering and Science, v.18 no.8, pp.880-886, 2007
  • We propose a new "hybrid" envelope elimination and restoration (EER) transmitter architecture using an efficiency-optimized power amplifier (PA) and a modified bias modulator. The efficiency of the PA at the average drain voltage is very important for the overall transmitter efficiency because the PA operates mostly in the average power region of the modulation signal; accordingly, the PA efficiency has been optimized in that region. In addition, the bias modulator is combined with an emitter follower to minimize the memory effect. A saturated class F⁻¹ amplifier is built using a 5-W PEP LDMOSFET for a forward-link single-carrier wideband code-division multiple-access (WCDMA) signal at 1 GHz. For the combined experiment, the bias modulator was built with an efficiency of 64.16% and a peak output voltage of 31.8 V. The transmitter with the proposed PA and bias modulator achieved an efficiency of 44.19%, an improvement of 8.11% (a rough efficiency budget is sketched below). Moreover, the output power is enhanced to 32.33 dBm due to the class F⁻¹ operation, and the PAE is 38.28% with ACLRs of -35.9 dBc at a 5-MHz offset. These results show that the proposed architecture is a very good candidate for a linear and efficient high-power transmitter.
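
As a hedged back-of-the-envelope check (not from the paper), the overall efficiency of an EER transmitter is commonly approximated as the product of the bias-modulator efficiency and the PA drain efficiency; under that assumption, the reported figures imply a PA efficiency near 69%.

```python
# Illustrative only: assumes eta_overall ≈ eta_modulator * eta_pa.
eta_modulator = 0.6416                  # reported bias-modulator efficiency
eta_overall = 0.4419                    # reported overall transmitter efficiency
eta_pa = eta_overall / eta_modulator    # implied PA efficiency under the assumption
print(f"implied PA efficiency ≈ {eta_pa:.1%}")   # ≈ 68.9%
```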

The Flow-rate Measurements in a Multi-phase Flow Pipeline by Using a Clamp-on Sealed Radioisotope Cross Correlation Flowmeter (투과 감마선 계측신호의 Cross correlation 기법 적용에 의한 다중상 유체의 유량측정)

  • Kim, Jin-Seop; Kim, Jong-Bum; Kim, Jae-Ho; Lee, Na-Young; Jung, Sung-Hee
    • Journal of Radiation Protection and Research, v.33 no.1, pp.13-20, 2008
  • Flow rate measurements in a multi-phase flow pipeline were evaluated quantitatively by means of clamp-on sealed radioisotope gauges and a cross-correlation signal processing technique. Flow rates were calculated from the transit time between two sealed gamma sources, determined with a cross-correlation function after FFT filtering, and then corrected for the vapor fraction in the pipeline, which was measured by the γ-ray attenuation method (a minimal transit-time sketch follows). The pipeline model was manufactured from acrylic resin (ID 8 cm, L = 3.5 m, t = 10 mm), and multi-phase flow patterns were realized by injecting compressed N₂ gas. Two sealed gamma sources of ¹³⁷Cs (E = 0.662 MeV, Γ factor = 0.326 R·h⁻¹·m²·Ci⁻¹) with activities of 20 mCi and 17 mCi, and 2″×2″ NaI(Tl) scintillation counters (Eberline SP-3), were used in this study. Under the given conditions (distance between the two sources: 4D, where D is the inner diameter; N/S ratio: 0.12~0.15; sampling time Δt: 4 ms), the measured flow rates showed a maximum relative error of 1.7% compared to the real ones after vapor content corrections (6.1%~9.2%). A subsequent experiment showed that the closer the two sealed sources are, the more precise the measured flow rates. Provided additional studies on the selection of radioisotopes, their activity, and the optimization of the experimental geometry are carried out, radioisotope-based flow rate measurement is anticipated to become an important and economical tool for monitoring and maintaining multi-phase facilities in the petrochemical and refining industries.
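
A minimal sketch of the transit-time estimation on assumed synthetic detector signals (the data, lag, and noise level are illustrative; the vapor-fraction correction described above is omitted):

```python
import numpy as np

dt = 0.004                    # sampling interval [s] (4 ms, as in the paper)
spacing = 4 * 0.08            # source separation [m]: 4D with D = 8 cm
area = np.pi * 0.04 ** 2      # pipe cross-section [m^2] for an 8 cm ID

t = np.arange(0.0, 20.0, dt)
rng = np.random.default_rng(0)
upstream = rng.normal(size=t.size)                  # density fluctuations at source 1
true_lag = 25                                       # 25 samples * 4 ms = 0.1 s transit
downstream = np.roll(upstream, true_lag) + 0.3 * rng.normal(size=t.size)

# Cross-correlate the two detector signals; the lag of the peak is the transit time.
corr = np.correlate(downstream - downstream.mean(),
                    upstream - upstream.mean(), mode="full")
lags = np.arange(-t.size + 1, t.size)
transit = lags[np.argmax(corr)] * dt                # [s]
velocity = spacing / transit                        # [m/s]
flow = velocity * area                              # [m^3/s], before vapor correction
print(f"transit {transit:.3f} s, velocity {velocity:.2f} m/s, flow {flow*1000:.1f} L/s")
```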

Performance Evaluation of Reconstruction Algorithms for DMIDR (DMIDR 장치의 재구성 알고리즘 별 성능 평가)

  • Kwak, In-Suk; Lee, Hyuk; Moon, Seung-Cheol
    • The Korean Journal of Nuclear Medicine Technology, v.23 no.2, pp.29-37, 2019
  • Purpose: DMIDR (Discovery Molecular Imaging Digital Ready, General Electric Healthcare, USA) is a PET/CT scanner designed to support the PSF (point spread function), TOF (time of flight), and Q.Clear algorithms. In particular, Q.Clear is a reconstruction algorithm that can overcome the limitations of OSEM (ordered subset expectation maximization) and reduce image noise at the voxel level. The aim of this paper is to evaluate the performance of the reconstruction algorithms and optimize the algorithm combination for accurate SUV (standardized uptake value) measurement and lesion detectability. Materials and Methods: A PET phantom was filled with ¹⁸F-FDG at hot-to-background radioactivity concentration ratios of 2:1, 4:1, and 8:1, and scanned using the NEMA protocols. Scan data were reconstructed using combinations of (1) VPFX (VUE Point FX, TOF), (2) VPHD-S (VUE Point HD + PSF), (3) VPFX-S (TOF + PSF), (4) QCHD-S-400 (VUE Point HD + Q.Clear (β-strength 400) + PSF), (5) QCFX-S-400 (TOF + Q.Clear (β-strength 400) + PSF), (6) QCHD-S-50 (VUE Point HD + Q.Clear (β-strength 50) + PSF), and (7) QCFX-S-50 (TOF + Q.Clear (β-strength 50) + PSF). CR (contrast recovery) and BV (background variability) were compared, along with the SNR (signal-to-noise ratio) and RC (recovery coefficient) of counts and SUV (the CR and BV metrics are sketched below). Results: VPFX-S showed the highest CR for the 10 and 13 mm spheres, and QCFX-S-50 showed the highest value for spheres of 17 mm and larger. In the comparison of BV and SNR, QCFX-S-400 and QCHD-S-400 showed good results. Measured SUV was proportional to the H/B ratio, while the RC for SUV was inversely proportional to the H/B ratio, with QCFX-S-50 showing the highest value; Q.Clear reconstruction with a β-strength of 400 showed lower values. Conclusion: With a higher β-strength, Q.Clear showed better image quality by reducing noise. Conversely, with a lower β-strength, sharpness increased and the PVE (partial volume effect) decreased, so SUV can be measured with a higher RC than under conventional reconstruction conditions. An appropriate choice among these reconstruction algorithms can improve accuracy and lesion detectability; for this reason, the algorithm parameters should be optimized according to the clinical purpose.
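
A hedged sketch of the standard NEMA-style CR and BV definitions used in such comparisons (variable names and example values are assumptions; the paper's exact ROI protocol may differ):

```python
import numpy as np

def contrast_recovery_hot(c_hot, c_bkg, true_ratio):
    """Percent contrast for a hot sphere: measured contrast over true contrast."""
    return (c_hot / c_bkg - 1.0) / (true_ratio - 1.0) * 100.0

def background_variability(bkg_roi_means):
    """SD of the background ROI means over their average, in percent."""
    b = np.asarray(bkg_roi_means, dtype=float)
    return b.std(ddof=1) / b.mean() * 100.0

# Example: a 4:1 phantom where a 17 mm sphere recovers most of its contrast.
print(contrast_recovery_hot(c_hot=3.4, c_bkg=1.0, true_ratio=4.0))   # 80.0
print(background_variability([0.98, 1.02, 1.01, 0.99, 1.00]))        # ≈ 1.6
```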

Analysis of Skin Color Pigments from Camera RGB Signal Using Skin Pigment Absorption Spectrum (피부색소 흡수 스펙트럼을 이용한 카메라 RGB 신호의 피부색 성분 분석)

  • Kim, Jeong Yeop
    • KIPS Transactions on Software and Data Engineering, v.11 no.1, pp.41-50, 2022
  • In this paper, a method to directly calculate the major elements of skin color, such as melanin and hemoglobin, from the RGB signal of a camera is proposed. Conventionally, the main elements of skin color are obtained by measuring spectral reflectance with dedicated equipment and recombining the values at selected wavelengths of the measured light; the values calculated this way include the melanin index and erythema index, and they require special equipment such as a spectral reflectance measuring device or a multi-spectral camera. A direct calculation method for such components from a general digital camera is difficult to find, and a method of indirectly calculating the concentrations of melanin and hemoglobin using independent component analysis has been proposed. This method targets a region of an RGB image, extracts characteristic vectors of melanin and hemoglobin, and calculates the concentrations in a manner similar to principal component analysis (a sketch of this ICA baseline follows the abstract). Its disadvantages are that per-pixel calculation is difficult because a group of pixels in a region is used as the input, and that the extracted feature vectors are obtained by an optimization method and therefore tend to differ on every run. The final output is an image representing the melanin and hemoglobin components, produced by converting back to the RGB coordinate system without using the feature vectors themselves. To improve on these disadvantages, the proposed method calculates the component values of melanin and hemoglobin in a feature space rather than in the RGB coordinate system, estimates the spectral reflectance corresponding to the skin color from a general digital camera, and from that reflectance calculates the detailed components constituting skin pigments, such as melanin, oxidized hemoglobin, deoxidized hemoglobin, and carotenoid. The proposed method does not require special equipment such as a spectral reflectance measuring device or a multi-spectral camera, and unlike the existing method, direct per-pixel calculation is possible and the same result is obtained on repeated executions. The standard deviation of the estimated melanin and hemoglobin densities with the proposed method was 15% of that of the conventional method, i.e., about six times more stable.
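
A minimal sketch of the ICA baseline described above (not the paper's proposed method), assuming two independent non-Gaussian pigment densities mixed by assumed absorbance vectors; all data and vectors are synthetic:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n = 5000
melanin = rng.gamma(2.0, 0.15, n)        # synthetic per-pixel pigment densities
hemoglobin = rng.gamma(2.0, 0.10, n)

# Assumed absorbance "color" vectors of the two pigments in RGB density space,
# where density = -log(reflectance) per channel.
A = np.array([[0.7, 0.3],
              [0.5, 0.6],
              [0.2, 0.8]])
density = np.column_stack([melanin, hemoglobin]) @ A.T   # (n, 3) observations

ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(density)     # recovered per-pixel concentrations
mixing = ica.mixing_                     # recovered pigment vectors, shape (3, 2)
print(sources.shape, mixing.shape)
```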

Development of deep learning network based low-quality image enhancement techniques for improving foreign object detection performance (이물 객체 탐지 성능 개선을 위한 딥러닝 네트워크 기반 저품질 영상 개선 기법 개발)

  • Ki-Yeol Eom; Byeong-Seok Min
    • Journal of Internet Computing and Services, v.25 no.1, pp.99-107, 2024
  • Along with economic growth and industrial development, there is increasing demand for the production of various electronic components and devices such as semiconductors, SMT components, and electric battery products. However, these products may contain foreign substances introduced during manufacturing, such as iron, aluminum, or plastic, which can cause serious problems or product malfunction, including fires in electric vehicles. To address this, it is necessary to determine whether foreign materials are present inside the product, and many tests have been performed with non-destructive testing methods such as ultrasound or X-ray. Nevertheless, there are technical challenges and limitations in acquiring X-ray images and determining the presence of foreign materials: small or low-density foreign materials may not be visible even with X-ray equipment, and noise further hinders detection. Moreover, to meet manufacturing speed requirements, the X-ray acquisition time must be reduced, which can result in a very low signal-to-noise ratio (SNR) and lower detection accuracy. Therefore, in this paper, we propose a five-step approach to overcome the limitations of low-quality images that make foreign substances hard to detect. First, the global contrast of the X-ray image is increased through histogram stretching. Second, a local contrast enhancement technique is applied to strengthen the high-frequency signal and local contrast. Third, unsharp masking is applied to sharpen edges, making objects more visible. Fourth, a Residual Dense Block (RDB) super-resolution network is used for noise reduction and image enhancement. Last, the YOLOv5 algorithm is trained and employed to detect foreign objects (the first three steps are sketched below). Using the proposed method, experimental results show an improvement of more than 10% in performance metrics such as precision compared to the original low-quality images.
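
A hedged OpenCV sketch of the first three enhancement steps; the file name, CLAHE parameters, blur sigma, and unsharp weights are illustrative assumptions, not the authors' settings:

```python
import cv2

img = cv2.imread("xray.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input path
assert img is not None, "replace 'xray.png' with a real image path"

# 1) Histogram stretching: map the intensity range onto the full 0-255 span.
stretched = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX)

# 2) Local contrast enhancement (CLAHE is one common choice).
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
local = clahe.apply(stretched)

# 3) Unsharp masking: add back the difference from a Gaussian-blurred copy.
blurred = cv2.GaussianBlur(local, (0, 0), sigmaX=3)
sharp = cv2.addWeighted(local, 1.5, blurred, -0.5, 0)

cv2.imwrite("xray_enhanced.png", sharp)
```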

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung; Kim, Kyoung-Jae; Han, In-Goo
    • Journal of Intelligence and Information Systems, v.16 no.3, pp.77-97, 2010
  • Market timing is an investment strategy used to obtain excess return from the financial market. In general, market timing means determining when to buy and sell so as to earn excess return from trading. In many market timing systems, trading rules have been used as the engine that generates trade signals. Some researchers have instead proposed rough set analysis as a proper tool for market timing because, through its control function, it does not generate a trade signal when the market pattern is uncertain. Numeric data must be discretized for rough set analysis because rough sets only accept categorical data. Discretization searches for proper "cuts" in the numeric data that determine intervals, and all values within an interval are transformed into the same value. In general, there are four discretization methods for rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes the number of intervals, examines the histogram of each variable, and determines cuts so that approximately the same number of samples fall into each interval (an equal-frequency example is sketched below). Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, gathered through literature review or interviews. Minimum entropy scaling recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization first scales the data naïvely into categorical values, then finds optimized discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there is little research on how the various discretization methods affect trading performance. In this study, we compare stock market timing models that use rough set analysis with the various discretization methods. The research data are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market, and is a market-value-weighted index of 200 stocks selected by liquidity and industry status across manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days, and popular technical indicators are used as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning-based discretization, but expert's knowledge-based discretization is the most profitable for the validation sample and produced robust performance on both samples. We also compared rough set analysis with a decision tree, using C4.5 for the comparison: rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
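
A minimal sketch of equal frequency scaling using pandas.qcut on an assumed technical indicator series (the data, bin count, and labels are illustrative):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
rsi = pd.Series(rng.uniform(0, 100, size=660))   # e.g., 660 trading days of an indicator

# Four equal-frequency bins; the returned boundaries are the discretization "cuts".
categories, cuts = pd.qcut(rsi, q=4, retbins=True,
                           labels=["low", "mid-low", "mid-high", "high"])
print(cuts)                       # interval boundaries
print(categories.value_counts())  # ~165 samples per bin
```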

Clinical Experience with 3.0 T MR for Cardiac Imaging in Patients: Comparison to 1.5 T using Individually Optimized Imaging Protocols (장비 별 최적화된 영상 프로토콜을 이용한 환자에서의 3.0T 심장 자기공명영상의 임상경험: 1.5 T 자기공명영상과의 비교)

  • Ko, Jeong Min; Jung, Jung Im; Lee, Bae Young
    • Investigative Magnetic Resonance Imaging, v.17 no.2, pp.83-90, 2013
  • Purpose: To report our clinical experience with cardiac 3.0 T MRI in patients compared with 1.5 T, using individually optimized imaging protocols. Materials and Methods: We retrospectively reviewed 30 consecutive patients who underwent 1.5 T cardiac MRI and 20 consecutive patients who underwent 3.0 T cardiac MRI within 10 months. The comparison was performed by measuring the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR), and by grading the image quality of each sequence on a 5-point scale with regard to the presence of artifacts (the SNR/CNR definitions are sketched below). Results: In morphologic and viability studies, 3.0 T increased the baseline SNRs and CNRs (T1: SNR 29%, p < 0.001, CNR 37%, p < 0.001; T2-SPAIR: SNR 13%, p = 0.068, CNR 18%, p = 0.059; viability imaging: SNR 45%, p = 0.017, CNR 37%, p = 0.135) without significant impairment of image quality (T1: 3.8±0.9 vs. 3.9±0.7, p = 0.438; T2-SPAIR: 3.8±0.9 vs. 3.9±0.5, p = 0.744; viability imaging: 4.5±0.8 vs. 4.7±0.6, p = 0.254). Although the image quality of 3.0 T functional cine images was slightly lower than that of 1.5 T images (3.6±0.7 vs. 4.2±0.6, p < 0.001), the mean SNR and CNR at 3.0 T were significantly improved (SNR 143% increase, CNR 108% increase, p < 0.001). With our imaging protocol for 3.0 T perfusion imaging, there was an insignificant decrease in SNR (11% decrease, p = 0.172) and CNR (7% decrease, p = 0.638), but the overall image quality was significantly improved (4.6±0.5 vs. 4.0±0.8, p = 0.006). Conclusion: In our experience, 3.0 T MRI is feasible for the routine assessment of cardiac imaging.
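
A hedged sketch of common ROI-based SNR/CNR definitions of the kind used for such comparisons (the paper's exact ROI placement may differ; all values below are illustrative):

```python
import numpy as np

def snr(signal_roi, noise_roi):
    """Mean signal over the standard deviation of background noise."""
    return np.mean(signal_roi) / np.std(noise_roi)

def cnr(roi_a, roi_b, noise_roi):
    """Contrast between two tissues over the noise standard deviation."""
    return abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(noise_roi)

myocardium = np.array([410.0, 395.0, 402.0])   # hypothetical ROI means
blood_pool = np.array([820.0, 805.0, 812.0])
background = np.array([12.0, 9.0, 11.0, 10.0])
print(f"SNR = {snr(myocardium, background):.1f}, "
      f"CNR = {cnr(blood_pool, myocardium, background):.1f}")
```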

A 1280-RGB × 800-Dot Driver based on 1:12 MUX for 16M-Color LTPS TFT-LCD Displays (16M-Color LTPS TFT-LCD 디스플레이 응용을 위한 1:12 MUX 기반의 1280-RGB × 800-Dot 드라이버)

  • Kim, Cha-Dong; Han, Jae-Yeol; Kim, Yong-Woo; Song, Nam-Jin; Ha, Min-Woo; Lee, Seung-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SD, v.46 no.1, pp.98-106, 2009
  • This work proposes a 1280-RGB × 800-dot, 70.78 mW, 0.13 um CMOS LCD driver IC (LDI) for high-performance 16M-color low temperature poly-silicon (LTPS) thin film transistor liquid crystal display (TFT-LCD) systems, such as ultra-mobile PCs (UMPC) and mobile applications that simultaneously require high resolution, low power, and small size at high speed. The proposed LDI optimizes power consumption and chip area at high resolution based on a resistor-string architecture. The single column driver employs a 1:12 MUX architecture that drives 12 channels simultaneously to minimize chip area (a rough timing budget for this ratio is sketched below). The implemented class-AB amplifier achieves rail-to-rail operation with high gain and low power while minimizing the effect of offset and output deviations for high definition. A supply- and temperature-insensitive current reference is implemented on chip with a small number of MOS transistors. A slew-enhancement technique applicable to next-generation source drivers, not implemented on this prototype chip, is proposed to reduce power consumption further. The prototype LDI, implemented in a 0.13 um CMOS technology, demonstrates measured settling times of the source driver amplifiers within 1.016 us and 1.072 us during high-to-low and low-to-high transitions, respectively. The output voltage of the source drivers shows a maximum deviation of 11 mV. The LDI, with an active die area of 12,203 um × 1,500 um, consumes 70.78 mW at 1.5 V/5.5 V.
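
A rough timing-budget check for the 1:12 MUX ratio; the 60 Hz frame rate is an assumption, not stated in the paper:

```python
# Illustrative arithmetic only: with a 1:12 MUX, each amplifier must settle
# 12 outputs within one line time.
frame_rate = 60                          # Hz (assumed refresh rate)
lines = 800
mux_ratio = 12

line_time = 1 / (frame_rate * lines)     # ≈ 20.8 us per line
slot = line_time / mux_ratio             # ≈ 1.74 us per output
print(f"line time ≈ {line_time*1e6:.1f} us, per-output budget ≈ {slot*1e6:.2f} us")
# The measured settling times (1.016 us / 1.072 us) fit within this budget.
```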