• Title/Abstract/Keywords: sampling theorem

Search results: 58

Modeling of the friction in the tool-workpiece system in diamond burnishing process

  • Maximov, J.T.;Anchev, A.P.;Duncheva, G.V.
    • Coupled systems mechanics
    • /
    • Vol. 4, No. 4
    • /
    • pp.279-295
    • /
    • 2015
  • The article presents a theoretical-experimental approach developed for modeling the coefficient of sliding friction in the dynamic tool-workpiece system in slide diamond burnishing of low-alloy unhardened steels. The experimental setup, implemented on a conventional lathe, includes a specially designed device whose body is a straight cantilever beam. The beam is loaded simultaneously in bending (by the transverse sliding friction force) and in compression (by the longitudinal burnishing force), which gives rise to geometric nonlinearity. A method based on separating the variables (time and metric) before establishing the differential equation of motion has been applied to the dynamic modeling of the beam's elastic curve. The longitudinal (burnishing) and transverse (sliding friction) forces are related by Coulomb's law of sliding friction. On this basis, an analytical relationship between the beam deflection and the sought friction coefficient has been obtained. The beam deflection is measured with strain gauges connected in a full-bridge circuit. A flexible adhesive is selected that allows dynamic measurements through the constructed measuring system. The signal, proportional to the beam deflection, is fed to the analog input of a USB DAQ board, from which it enters a purpose-built virtual instrument developed in LabVIEW. The basic capability of the virtual instrument is to record and visualize the measured deflection in real time. The signal sampling frequency is chosen in accordance with the Nyquist-Shannon sampling theorem. To obtain a regression model of the friction coefficient in terms of the diamond burnishing process parameters, an experimental design with 55 experimental points is synthesized. A regression analysis and an analysis of variance have been carried out. The influence of the factors on the friction coefficient is established from sections of the hyper-surface of the friction-coefficient model with hyper-planes.
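
The two quantitative relations invoked above, Coulomb's law (mu = F_t / F_n) and the Nyquist-Shannon criterion (f_s >= 2 f_max), can be sketched as follows; the numerical values and the safety margin are illustrative assumptions, not values from the paper:

```python
# Hypothetical sketch: choosing a DAQ sampling rate per the Nyquist-Shannon
# criterion, and recovering a friction coefficient via Coulomb's law.
# All numbers are illustrative, not taken from the article.

def nyquist_rate(f_max_hz: float, margin: float = 2.5) -> float:
    """Sampling rate with headroom; Nyquist requires margin >= 2."""
    if margin < 2.0:
        raise ValueError("need at least 2 samples per period of f_max")
    return margin * f_max_hz

def friction_coefficient(transverse_force_n: float,
                         burnishing_force_n: float) -> float:
    """Coulomb sliding friction: mu = F_t / F_n."""
    return transverse_force_n / burnishing_force_n

fs = nyquist_rate(500.0)                 # assume 500 Hz beam dynamics
mu = friction_coefficient(58.0, 300.0)   # assumed force readings, in N
print(fs, round(mu, 3))
```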

A Gaussian Approach in Stabilizing Outputs of Electrical Control Systems

  • 바스넷버룬;방준호;유인호;김태형
    • The Transactions of the Korean Institute of Electrical Engineers
    • /
    • Vol. 67, No. 11
    • /
    • pp.1562-1569
    • /
    • 2018
  • Sensor readings always exhibit a degree of randomness and fuzziness due to their intrinsic properties, other electronic devices in the circuitry, wiring, and a rapidly changing environment. In an electrical control system, such readings cause instability and other undesired events, especially when the signal hovers around the threshold. This paper proposes a Gaussian-based statistical approach to stabilizing the output by sampling the sensor data and automatically tuning the threshold to a range of multiple standard deviations. It takes advantage of the central limit theorem and its properties, assuming that a large number of sensor data samples will converge to a Gaussian distribution. Experimental results demonstrate the effectiveness of the proposed algorithm in completely stabilizing the outputs compared with known filtering algorithms such as exponential smoothing and the Kalman filter.
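
The thresholding idea can be sketched as follows; the calibration values, the band width k, and the hold-last-state rule are illustrative assumptions, not the authors' implementation:

```python
# Illustrative sketch (not the paper's code): a noisy binary decision is
# stabilized by widening the threshold into a band of mean +/- k standard
# deviations of sampled readings, per the central limit theorem.
from statistics import mean, stdev

def gaussian_band(samples, k=3.0):
    """Return (low, high) thresholds spanning k standard deviations."""
    m, s = mean(samples), stdev(samples)
    return m - k * s, m + k * s

def stable_output(reading, band, previous):
    """Flip the output only when the reading leaves the band entirely."""
    low, high = band
    if reading > high:
        return 1
    if reading < low:
        return 0
    return previous          # inside the band: hold the last state

calibration = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7]
band = gaussian_band(calibration)
state = 0
for r in [10.0, 10.2, 9.9, 12.5, 12.4]:   # hovers, then clearly rises
    state = stable_output(r, band, state)
print(state)
```

A reading that merely hovers near the old single threshold no longer toggles the output; only a clear excursion beyond the band does.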

Preconditions for High-Speed Confocal Image Acquisition with DMD Scanning

  • Shim, S.B.;Lee, K.J.;Lee, J.H.;Hwang, Y.H.;Han, S.O.;Pak, J.H.;Choi, S.E.;Milster, Tom D.;Kim, J.S.
    • Optical Society of Korea: Conference Proceedings
    • /
    • Proceedings of the Optical Society of Korea 2006 Summer Meeting
    • /
    • pp.39-40
    • /
    • 2006
  • Digital image projection and several of its variants are the classical applications of Digital Micromirror Devices (DMD); however, further applications in the field of optical metrology are also available. Operated with certain patterns, a DMD can function, for instance, as an array of pinholes that may substitute for the galvanometer mirror or the stage-scanning system presently used for two-dimensional scanning in confocal microscopes. The various process parameters that influence the measurement result (e.g., pinhole size, lateral scanning pitch, and the number of pinholes used simultaneously) should be configured precisely for individual measurements by operating the DMD appropriately. This paper presents suitable conditions for the diffraction-limited analysis of the DMD-optics-CCD chain to achieve the best performance. The sampling theorem, which governs image acquisition by the scanning system, is also simulated with OPTISCAN, a simulator based on diffraction theory.
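
As a rough illustration of how the sampling theorem constrains the lateral scanning pitch, the sketch below combines the Abbe resolution limit with the Nyquist criterion; the wavelength, the numerical aperture, and the applicability of these textbook formulas to the DMD system are assumptions, not values from the paper:

```python
# Rough sketch (assumed textbook formulas, not from the paper): the lateral
# scanning pitch of a pinhole-scanning confocal system should satisfy the
# sampling theorem with respect to the diffraction-limited spot.

def abbe_resolution_um(wavelength_um: float, na: float) -> float:
    """Diffraction-limited lateral resolution, d = lambda / (2 NA)."""
    return wavelength_um / (2.0 * na)

def max_scan_pitch_um(wavelength_um: float, na: float) -> float:
    """Nyquist: at least two samples per resolvable distance."""
    return abbe_resolution_um(wavelength_um, na) / 2.0

d = abbe_resolution_um(0.532, 0.9)     # 532 nm, NA 0.9 (illustrative)
pitch = max_scan_pitch_um(0.532, 0.9)
print(round(d, 4), round(pitch, 4))
```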


Reducing Power Consumption of Wireless Capsule Endoscopy Utilizing Compressive Sensing Under Channel Constraint

  • Saputra, Oka Danil;Murti, Fahri Wisnu;Irfan, Mohammad;Putri, Nadea Nabilla;Shin, Soo Young
    • Journal of information and communication convergence engineering
    • /
    • Vol. 16, No. 2
    • /
    • pp.130-134
    • /
    • 2018
  • Wireless capsule endoscopy (WCE) is a recent technology for detecting cancer cells in the human digestive system. WCE sends the captured information from inside the body to a sensor on the skin surface through a wireless medium. In WCE, the design of low-power devices is a challenging topic. According to the Shannon-Nyquist sampling theorem, the sampling rate should be at least twice the highest signal frequency to reconstruct the signal precisely, and the number of samples is proportional to the power consumption in wireless communication. This paper proposes compressive sensing as a method to reduce power consumption in WCE by means of a trade-off between the number of samples and reconstruction accuracy. The proposed scheme is validated under channel constraints, expressed as a realistic human-body path loss. The results show that the proposed scheme achieves a significant reduction in WCE power consumption and a faster computation time with low signal reconstruction error.
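
A minimal toy example of the samples-versus-accuracy trade-off, assuming orthogonal matching pursuit as the reconstruction algorithm (the abstract does not specify one): a 3-sparse signal of length 64 is recovered from only 32 random measurements.

```python
# Toy compressive-sensing sketch (illustrative, not the paper's pipeline):
# recover a k-sparse signal from m << n random linear measurements.
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: find a k-sparse x with y ~= A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 64, 32, 3                       # 32 samples instead of 64
x_true = np.zeros(n)
x_true[[5, 17, 40]] = [1.5, -2.0, 0.7]    # sparse "signal of interest"
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_hat = omp(A, A @ x_true, k)
print(round(float(np.linalg.norm(x_hat - x_true)), 6))
```

Halving the sample count here maps directly onto the power argument in the abstract: fewer samples transmitted means less radio energy spent, at the cost of a reconstruction step on the receiver side.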

Further Study on the Risk Model with a Continuous-Type Investment

  • 최승경;이의용
    • The Korean Journal of Applied Statistics
    • /
    • Vol. 31, No. 6
    • /
    • pp.751-759
    • /
    • 2018
  • Cho et al. (Communications for Statistical Applications and Methods, 23, 423-432, 2016) introduced a risk model for an insurance product in which investment is made continuously once the surplus reaches an adequate level, and studied the stationary distribution function of the surplus process. In this paper, we extend that work by assuming that an additional instantaneous investment is made when the surplus, beyond the adequate level, reaches another sufficient level. We obtain the stationary distribution function of the surplus process explicitly and treat the case of exponentially distributed claim sizes as an example.
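
A heavily simplified Monte Carlo sketch of a surplus process of this general kind (the premium rate, claim-size distribution, upper level, and restart-after-ruin rule below are all illustrative assumptions, not the paper's model):

```python
# Toy simulation (illustrative assumptions only): premiums accrue at a
# constant rate, claims form a compound Poisson process with exponential
# sizes, and any surplus above an upper level V is invested immediately.
import random

def simulate_surplus(c=1.0, lam=0.5, mean_claim=1.2, V=10.0,
                     horizon=1000.0, u0=5.0, seed=42):
    random.seed(seed)
    t, u, invested = 0.0, u0, 0.0
    while t < horizon:
        dt = random.expovariate(lam)        # time to the next claim
        t += dt
        u += c * dt                         # premium income between claims
        if u > V:                           # excess is invested at once
            invested += u - V
            u = V
        u -= random.expovariate(1.0 / mean_claim)  # claim payment
        if u < 0:                           # ruin: restart (toy rule)
            u = u0
    return u, invested

final_u, total_invested = simulate_surplus()
print(round(final_u, 2), round(total_invested, 2))
```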

Carbonation depth prediction of concrete bridges based on long short-term memory

  • Youn Sang Cho;Man Sung Kang;Hyun Jun Jung;Yun-Kyu An
    • Smart Structures and Systems
    • /
    • Vol. 33, No. 5
    • /
    • pp.325-332
    • /
    • 2024
  • This study proposes a novel long short-term memory (LSTM)-based approach for predicting carbonation depth, with the aim of enhancing the durability evaluation of concrete structures. Conventional carbonation depth prediction relies on statistical methodologies using carbonation influencing factors and in-situ carbonation depth data. However, applying in-situ data to predictive modeling is challenging due to the lack of time-series data. To address this limitation, an LSTM-based carbonation depth prediction technique is proposed. First, training data are generated through random sampling from the distribution of carbonation velocity coefficients, which are calculated from in-situ carbonation depth data. Subsequently, Bayes' theorem is applied to tailor the training data to each target bridge, depending on its surrounding environmental conditions. Ultimately, the LSTM model predicts the time-dependent carbonation depth for the target bridge. To examine the feasibility of this technique, a carbonation depth dataset from 3,960 in-situ bridges was used for training, and untrained time-series data from the Miho River bridge in the Republic of Korea were used for experimental validation. The results of the experimental validation demonstrate a significant reduction in prediction error, from 8.19% to 1.75%, compared with the conventional statistical method. Furthermore, the LSTM prediction can be enhanced by sequentially updating the model with actual time-series measurement data.
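
The data-generation step can be sketched as follows, assuming the common square-root carbonation law d = k·√t and hypothetical distribution parameters for the velocity coefficient k (the paper fits this distribution from in-situ data):

```python
# Illustrative sketch of the training-data generation described above.
# The square-root law d = k * sqrt(t) and the Gaussian parameters for k
# are assumptions for illustration, not the paper's fitted values.
import math
import random

def sample_series(k_mean, k_std, years, n_series, seed=7):
    """Sample velocity coefficients k and expand them into depth series."""
    random.seed(seed)
    series = []
    for _ in range(n_series):
        k = max(0.0, random.gauss(k_mean, k_std))   # mm / sqrt(year)
        series.append([k * math.sqrt(t) for t in range(1, years + 1)])
    return series

data = sample_series(k_mean=3.0, k_std=0.5, years=50, n_series=100)
print(len(data), len(data[0]), round(data[0][24], 2))
```

Each generated series is a synthetic time history of carbonation depth, which is the kind of sequence an LSTM can be trained on despite the absence of measured time series.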

An Optimal Management Policy for the Surplus Process with Investments

  • 임세진;최승경;이의용
    • The Korean Journal of Applied Statistics
    • /
    • Vol. 29, No. 7
    • /
    • pp.1165-1172
    • /
    • 2016
  • The surplus of an insurance product increases through premium income and decreases when customers file claims. When the surplus becomes sufficiently large, the insurer generates profit by reinvesting part of it. In this study, we introduce an existing surplus model that describes the surplus level in terms of premium income and claims, incorporate the concepts of reinvestment and operating cost into the model, derive the long-run average cost per unit time, and find the reinvestment level and target surplus that minimize it.
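
A toy grid search illustrating the kind of optimization described, with an invented cost function (a fixed operating cost, a holding cost growing with the target surplus V, and a ruin-risk term shrinking with the buffer V − r); none of these terms come from the paper:

```python
# Toy optimization sketch (invented cost model, not the paper's formulas):
# search for the reinvestment level r and target surplus V minimizing a
# long-run average cost per unit time.

def average_cost(r, V, operating=2.0, holding_rate=0.05, ruin_penalty=50.0):
    """Fixed operating cost, opportunity cost of capital held up to V,
    and a ruin-risk term that falls as the buffer V - r grows."""
    return operating + holding_rate * V + ruin_penalty / (1.0 + V - r)

best = min(
    ((average_cost(r, V), r, V)
     for r in range(1, 11) for V in range(5, 41) if r < V),
    key=lambda t: t[0],
)
print(best)
```

The interior minimum reflects the trade-off in the abstract: holding more surplus is safer but costs opportunity income, so an optimal pair (r, V) exists between the extremes.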

Design of a SQUID Sensor Array Measuring the Tangential Field Components in Magnetocardiogram

  • 김기웅;이용호;권혁찬;김진목;김인선;박용기;이규원
    • Progress in Superconductivity
    • /
    • Vol. 6, No. 1
    • /
    • pp.56-63
    • /
    • 2004
  • We consider design factors for a SQUID sensor array to construct a 52-channel magnetocardiogram (MCG) system that can measure the tangential components of the cardiac magnetic field. Nowadays, full-size multichannel MCG systems, which cover the whole signal area of the heart, are developed to improve clinical analysis with high accuracy and to provide patients with comfort during measurement. To design a full-size MCG system, we have to make a compromise between cost and performance. The cost involves the number of sensors, the number of electronics channels, the size of the cooling dewar, and the consumption of refrigerants for maintenance. The performance is the capability of covering the whole heart volume at once and of localizing current sources with a small error. In this study, we design a cost-effective arrangement of sensors for MCG by considering an adequate sensor interval and a confidence region of tolerable localization error that covers the heart. To fit the detector array on the cylindrical dewar economically, we removed the detectors located at the corners of the square array. Through simulations using the confidence-region method, we verified that our design of the detector array was sufficient to obtain complete information from the heart at once. The simulations also suggested that tangential-component MCG measurement can localize deeper current dipoles than normal-component measurement with the same confidence volume; therefore, we conclude that measuring the tangential component is more suitable for an MCG system than measuring the normal component.
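
The corner trimming is consistent with a simple hypothetical layout: cutting three sites from each corner of an 8 × 8 grid leaves 64 − 12 = 52 positions. The sketch below is a geometric illustration only, not the published array:

```python
# Hypothetical layout sketch (not the published design): trim the corner
# site and its two edge neighbours at each corner of an n x n grid so the
# square array fits a cylindrical dewar; 8x8 minus 4*3 sites = 52 channels.

def dewar_fit_array(n=8):
    """Grid positions remaining after cutting 3 sites per corner."""
    cut = set()
    for ci in (0, n - 1):
        for cj in (0, n - 1):
            di = 1 if ci == 0 else -1     # step toward the grid interior
            dj = 1 if cj == 0 else -1
            cut |= {(ci, cj), (ci + di, cj), (ci, cj + dj)}
    return [(i, j) for i in range(n) for j in range(n) if (i, j) not in cut]

positions = dewar_fit_array()
print(len(positions))
```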
