• Title/Summary/Keyword: Discrete information

Password-Based Authentication Protocol for Remote Access using Public Key Cryptography (공개키 암호 기법을 이용한 패스워드 기반의 원거리 사용자 인증 프로토콜)

  • 최은정;김찬오;송주석
    • Journal of KIISE:Information Networking
    • /
    • v.30 no.1
    • /
    • pp.75-81
    • /
    • 2003
  • User authentication, together with confidentiality and integrity over untrusted networks, is an important part of security for systems that allow remote access. Using a human-memorable password for remote user authentication is not easy due to the low entropy of the password, which is constrained by the user's memory. This paper presents a new password authentication and key agreement protocol suitable for authenticating users and exchanging keys over an insecure channel. The new protocol resists dictionary attacks and offers perfect forward secrecy, which means that revealing the password to an attacker does not help him obtain the session keys of past sessions. Additionally, user passwords are stored in a form that is not plaintext-equivalent to the password itself, so an attacker who captures the password database cannot use it directly to compromise security and gain immediate access to the server. The protocol does not resort to a PKI or a trusted third party such as a key server or arbitrator, so no keys or certificates need to be stored on the user's computer. A further desirable property is a short setup time, achieved by keeping the number of protocol flows and the computation time low. This is very useful in applications where secure password authentication is required, such as web-based home banking, SSL, SET, IPSEC, telnet, ftp, and mobile-user scenarios.
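The abstract lists the protocol's properties (a stored verifier that is not plaintext-equivalent, dictionary-attack resistance, forward secrecy from ephemeral exchanges) without giving its message flows. The sketch below is only a minimal illustration of that general verifier-based idea; the toy group parameters, hash choice, and simplified exchange are assumptions and do not reproduce the authors' protocol.

```python
"""Minimal sketch of a verifier-based password-authenticated key exchange.

Illustrative assumptions, not the paper's protocol: a toy prime-order group,
SHA-256 hashing, and a simplified exchange with no explicit confirmation step.
"""
import hashlib
import secrets

P = 0xFFFFFFFB   # small prime modulus -- a toy value, far too small for real use
G = 5            # illustrative generator

def H(*parts: bytes) -> int:
    return int.from_bytes(hashlib.sha256(b"|".join(parts)).digest(), "big")

# Enrollment: the server stores a salted verifier, never the password itself.
def enroll(password: str):
    salt = secrets.token_bytes(16)
    x = H(salt, password.encode()) % P      # password-derived exponent
    return salt, pow(G, x, P)               # verifier g^x, not plaintext-equivalent

# One simplified run: both sides derive the same session key.
def run_session(password: str, salt: bytes, verifier: int):
    a = secrets.randbelow(P)                # client ephemeral -> forward secrecy
    A = pow(G, a, P)
    b = secrets.randbelow(P)                # server ephemeral
    B = pow(G, b, P)
    x = H(salt, password.encode()) % P
    k_client = H(str(pow(B, a + x, P)).encode())                       # g^(ab+bx)
    k_server = H(str((pow(A, b, P) * pow(verifier, b, P)) % P).encode())
    return k_client, k_server

salt, v = enroll("correct horse battery staple")
kc, ks = run_session("correct horse battery staple", salt, v)
assert kc == ks
```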

Image Compression Using DCT Map FSVQ and Single-side Distribution Huffman Tree (DCT 맵 FSVQ와 단방향 분포 허프만 트리를 이용한 영상 압축)

  • Cho, Seong-Hwan
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.10
    • /
    • pp.2615-2628
    • /
    • 1997
  • In this paper, a new codebook design algorithm is proposed. It uses a DCT map based on the two-dimensional discrete cosine transform (2-D DCT) and a finite state vector quantizer (FSVQ) when the vector quantizer is designed for image transmission. The map is made by dividing the input image according to edge quantity; guided by the map, the significant features of the training image are then extracted using the 2-D DCT. A master codebook for the FSVQ is generated by partitioning the training set with a binary tree. The state codebook is constructed from the master codebook, and the index of the input image is then searched in the state codebook rather than the master codebook. Because index coding is an important part of high-speed digital transmission, fixed-length codes are converted to variable-length codes according to an entropy coding rule, and Huffman coding assigns transmission codes to the codebook codes. This paper proposes a single-side growing Huffman tree to speed up the Huffman code generation process. Compared with the pairwise nearest neighbor (PNN) and classified VQ (CVQ) algorithms on the Einstein and Bridge images, the new algorithm shows better picture quality, with gains of 2.04 dB and 2.48 dB over PNN and 1.75 dB and 0.99 dB over CVQ, respectively.
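For context on the DCT-map step, the sketch below computes a standard 8x8 2-D DCT of one image block and keeps a small low-frequency corner of the coefficients as its "significant features". The block size and the number of retained coefficients are illustrative assumptions, and the single-side growing Huffman tree itself is not reproduced here.

```python
import numpy as np

def dct_matrix(n: int = 8) -> np.ndarray:
    """Orthonormal DCT-II basis matrix C, so that Y = C @ X @ C.T."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

def block_features(block: np.ndarray, keep: int = 4) -> np.ndarray:
    """2-D DCT of one block; a small low-frequency corner serves as the feature."""
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T
    return coeffs[:keep, :keep].ravel()

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)
print(block_features(block, keep=2))   # DC term plus the lowest AC terms
```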

A Study of Carry Over Contamination in Chematology (이월오염에 대한 연구)

  • Chang, Sang-Wu;Kim, Nam-Yong;Lyu, Jae-Gi;Jung, Dong-Jin;Kim, Gi-You;Park, Yong-Won;Chu, Kyung-Bok
    • Korean Journal of Clinical Laboratory Science
    • /
    • v.37 no.3
    • /
    • pp.178-184
    • /
    • 2005
  • Carry over contamination has been reduced in some systems by flushing the internal and external surfaces of the sample probe with copious amounts of diluent. Carry over between specimens should be kept as small as possible. A built-in, continuous-flow wash reservoir, which allows the simultaneous washing of the interior and exterior of the syringe needles, addresses this issue. In addition, residual contamination can further be prevented through the use of efficient needle rinsing procedures. In discrete systems with disposable reaction vessels and measuring cuvettes, any carry over is entirely caused by the pipetting system. In analyzers with reusable cuvettes or flow cells, carry over may arise at every point through which high-concentration samples pass sequentially. Therefore, disposable sample probe tips can eliminate both the contamination of one sample by another inside the probe and the carry over of one specimen into the specimen in the cup. The results of the carry over experiment on 21 items in chematology, namely total protein (TP), albumin (ALB), total bilirubin (TB), alkaline phosphatase (ALP), aspartate aminotransferase (AST), alanine aminotransferase (ALT), gamma glutamyl transferase (GGT), creatine kinase (CK), lactic dehydrogenase (LD), creatinine (CRE), blood urea nitrogen (BUN), uric acid (UA), total cholesterol (TC), triglyceride (TG), glucose (GLU), amylase (AMY), calcium (CA), inorganic phosphorus (IP), sodium (Na), potassium (K), and chloride (CL), were as follows. Process performance was very good, with carry over of less than 1% in all tests; the percentage for ALB, TP, TB, ALP, CRE, UA, TC, GLU, AMY, IP, K, Na, and CL was 0%, implying no carry over. The other tests were ALT (-0.08%), GGT (-0.09%), CK (0.08%), LD (0.06%), BUN (0.12%), TG (-0.06%), and CA (0.89%).
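The abstract reports carry over as a percentage per analyte but does not restate the formula used. A commonly used convention, assumed here for illustration only, measures a high-concentration sample in triplicate (H1..H3) followed by a low-concentration sample in triplicate (L1..L3) and computes carry over as (L1 - L3) / (H3 - L3) x 100.

```python
def carryover_percent(high: list[float], low: list[float]) -> float:
    """Carry over (%) from triplicate high (H1..H3) then triplicate low (L1..L3)
    measurements, using the common (L1 - L3) / (H3 - L3) * 100 definition.
    This definition is an assumption for illustration, not quoted from the paper."""
    h1, h2, h3 = high
    l1, l2, l3 = low
    return (l1 - l3) / (h3 - l3) * 100.0

# Example: a small positive carry over shows up as L1 sitting slightly above L3.
print(round(carryover_percent([540.0, 538.0, 541.0], [12.4, 12.1, 12.0]), 2))
```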

Evaluation of the Effectiveness of Surveillance on Improving the Detection of Healthcare Associated Infections (의료관련감염에서 감시 개선을 위한 평가)

  • Park, Chang-Eun
    • Korean Journal of Clinical Laboratory Science
    • /
    • v.51 no.1
    • /
    • pp.15-25
    • /
    • 2019
  • The development of reliable and objective definitions as well as automated processes for the detection of healthcare-associated infections (HAIs) is crucial; however, the transformation to an automated surveillance system remains a challenge. Early outbreak identification usually requires clinicians who can recognize abnormal events as well as ongoing disease surveillance to determine the baseline rate of cases. The system screens the laboratory information system (LIS) data daily to detect candidates for healthcare-associated bloodstream infection (HABSI) according to well-defined detection rules. The system detects candidates while preserving professional autonomy by requiring further confirmation. In addition, web-based HABSI surveillance and classification systems use discrete data elements obtained from the LIS, and the LIS-provided data correlate strongly with the conventional infection-control personnel surveillance system. The system was timely, acceptable, useful, and sensitive according to the prevention guidelines. The surveillance system is useful because it can help healthcare professionals better understand when and where the transmission of a wide range of potential pathogens may be occurring in a hospital. A national plan is needed to strengthen the main structures in HAI prevention, namely the Healthcare Associated Infection Prevention and Control Committee (HAIPCC), the sterilization service (SS), microbiology laboratories, and hand hygiene resources, considering their impact on HAI prevention.
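The detection rules themselves are not listed in the abstract, so the sketch below only illustrates the shape of a daily LIS screening step: a hypothetical rule flags positive blood cultures with a recognized pathogen collected at least 48 hours after admission as HABSI candidates for manual confirmation. The rule, organism list, and data fields are assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class BloodCulture:
    patient_id: str
    admitted: datetime
    collected: datetime
    organism: str | None   # None means no growth

# Hypothetical rule set for illustration only; the paper's detection rules are
# referred to in the abstract but not listed there.
COMMON_PATHOGENS = {"S. aureus", "E. coli", "K. pneumoniae", "Enterococcus faecalis"}

def habsi_candidates(cultures: list[BloodCulture]) -> list[BloodCulture]:
    """Flag cultures for infection-control review (not a final classification)."""
    window = timedelta(hours=48)
    return [
        c for c in cultures
        if c.organism in COMMON_PATHOGENS and (c.collected - c.admitted) >= window
    ]
```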

A Fast Processor Architecture and 2-D Data Scheduling Method to Implement the Lifting Scheme 2-D Discrete Wavelet Transform (리프팅 스킴의 2차원 이산 웨이브릿 변환 하드웨어 구현을 위한 고속 프로세서 구조 및 2차원 데이터 스케줄링 방법)

  • Kim Jong Woog;Chong Jong Wha
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.42 no.4 s.334
    • /
    • pp.19-28
    • /
    • 2005
  • In this paper, we propose a parallel fast 2-D discrete wavelet transform hardware architecture based on the lifting scheme. The proposed architecture improves the 2-D processing speed and reduces the internal memory buffer size. Previous lifting-scheme-based parallel 2-D wavelet transform architectures consisted of row-direction and column-direction modules, each a pair of prediction and update filter modules. In the 2-D wavelet transform, column-direction processing uses the row-direction results, which are generated in row-direction order rather than column-direction order, so most hardware architectures need an internal buffer memory. The proposed architecture focuses on reducing the internal memory buffer size and the total calculation time. To reduce the total calculation time, we propose a 4-way data flow scheduling and a memory-based parallel hardware architecture. The 4-way data flow scheduling increases the row-direction parallelism and reduces the initial latency before the row-direction calculation starts. In this architecture, the internal buffer memory is not used to store the results of the row-direction calculation; instead, it holds intermediate values of the column-direction calculation. This is very effective for column-direction processing, because the input data for the column direction are not generated in column-direction order. The proposed architecture was implemented in VHDL on an Altera Stratix device. The implementation results show that the overall calculation time is reduced from $N^2/2+\alpha$ to $N^2/4+\beta$ and the internal buffer memory size is reduced by around $50\%$ compared with previous works.
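The abstract describes the prediction/update structure of the lifting scheme but not a specific wavelet filter. As background, the sketch below performs one level of the widely used (5,3) lifting steps on a 1-D signal, which is the kind of predict/update pair the row- and column-direction modules apply; the filter choice and the symmetric boundary handling are assumptions.

```python
import numpy as np

def lifting_53_forward(x: np.ndarray):
    """One level of the (5,3) lifting wavelet transform of a 1-D signal.

    Predict step: detail d[n] = x[2n+1] - 0.5*(x[2n] + x[2n+2])
    Update  step: approx s[n] = x[2n]   + 0.25*(d[n-1] + d[n])
    Symmetric extension is used at the boundaries (an assumption).
    """
    even = x[0::2].astype(float)
    odd = x[1::2].astype(float)
    even_right = np.append(even[1:], even[-1])      # x[2n+2] with symmetric edge
    d = odd - 0.5 * (even + even_right)             # predict (high-pass branch)
    d_left = np.insert(d[:-1], 0, d[0])             # d[n-1] with symmetric edge
    s = even + 0.25 * (d_left + d)                  # update (low-pass branch)
    return s, d

s, d = lifting_53_forward(np.arange(16))
print(s, d)   # a linear ramp gives near-zero details away from the boundary
```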

Postprocessing of Inter-Frame Coded Images Based on Convex Projection and Regularization (POCS와 정규화를 기반으로한 프레임간 압출 영사의 후처리)

  • Kim, Seong-Jin;Jeong, Si-Chang;Hwang, In-Gyeong;Baek, Jun-Gi
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.3
    • /
    • pp.58-65
    • /
    • 2002
  • In order to reduce blocking artifacts in inter-frame coded images, we propose a new image restoration algorithm that directly processes differential images before reconstruction. We note that the blocking artifact in inter-frame coded images is caused by both the 8$\times$8 DCT and 16$\times$16 macroblock-based motion compensation, while that of intra-coded images is caused by the 8$\times$8 DCT only. Based on this observation, we propose a new degradation model for differential images and the corresponding restoration algorithm, which utilizes additional constraints and convex sets for discontinuity inside blocks. The proposed restoration algorithm is a modified version of standard regularization that incorporates spatially adaptive lowpass filtering with consideration of edge directions by utilizing a part of the DCT coefficients. Most video coding standards adopt a hybrid structure of block-based motion compensation and the block discrete cosine transform (BDCT). For this reason, blocking artifacts occur both on block boundaries and in block interiors. For more complete removal of both kinds of blocking artifacts, the restored differential image must satisfy two constraints, namely directional discontinuities on block boundaries and in block interiors. Those constraints have been used for defining convex sets for restoring differential images.
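The restoration alternates a regularized (spatially adaptive lowpass) step with projections onto convex constraint sets. The sketch below shows only the bare structure of such an iteration on a differential image, with a plain 3x3 mean filter standing in for the adaptive lowpass step and a simple amplitude bound standing in for the paper's directional-discontinuity sets; both stand-ins are simplifying assumptions.

```python
import numpy as np

def smooth3x3(img: np.ndarray) -> np.ndarray:
    """Plain 3x3 mean filter as a stand-in for the adaptive lowpass step."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy: 1 + dy + img.shape[0],
                          1 + dx: 1 + dx + img.shape[1]]
    return out / 9.0

def pocs_restore(diff_img, lo, hi, n_iter=10, lam=0.5):
    """Alternate a regularizing smoothing step with a projection onto the convex
    set {lo <= pixel <= hi}; the paper's sets instead encode directional
    discontinuities on block boundaries and interiors."""
    f = diff_img.astype(float)
    for _ in range(n_iter):
        f = (1 - lam) * f + lam * smooth3x3(f)    # regularization step
        f = np.clip(f, lo, hi)                    # projection onto the constraint set
    return f

rng = np.random.default_rng(1)
noisy = rng.normal(0, 5, size=(16, 16))
restored = pocs_restore(noisy, lo=-10, hi=10)
```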

High-Order Temporal Moving Average Filter Using Actively-Weighted Charge Sampling (능동-가중치 전하 샘플링을 이용한 고차 시간상 이동평균 필터)

  • Shin, Soo-Hwan;Cho, Yong-Ho;Jo, Sung-Hun;Yoo, Hyung-Joun
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.49 no.2
    • /
    • pp.47-55
    • /
    • 2012
  • A discrete-time (DT) filter with a high-order temporal moving average (TMA) using actively-weighted charge sampling is proposed in this paper. To obtain different weights for the sampled charge, a variable-transconductance OTA is used ahead of the charge sampler, and the charge ratio can be effectively weighted by switching the control transistors in the OTA. As a result, high-order TMA operation becomes possible through actively-weighted charge sampling. In addition, the transconductance generated by the OTA is relatively accurate and stable because it is set by the size ratio of the control transistors. The high-order TMA filter has a small size, increased voltage gain, and low parasitic effects due to the small number of switches and sampling capacitors. A TMA-$2^2$ filter is implemented in the TSMC $0.18-{\mu}m$ CMOS process. The simulated voltage gain is about 16.7 dB, and the P1dB and IIP3 are -32.5 dBm and -23.7 dBm, respectively. The DC current consumption is about 9.7 mA.
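In discrete time, weighted charge sampling amounts to an FIR moving-average filter whose tap values are the charge weights. The sketch below evaluates the magnitude response of such a filter, using the triangular weights [1, 2, 1] obtained by cascading two 2-tap averages as one plausible reading of a second-order TMA; that reading of TMA-$2^2$, and the tap values, are assumptions rather than the paper's definition.

```python
import numpy as np

def fir_response(taps, n_points=512):
    """Normalized magnitude response (dB) of an FIR filter with the given taps."""
    taps = np.asarray(taps, dtype=float)
    w = np.linspace(0, np.pi, n_points)                    # normalized frequency
    z = np.exp(-1j * np.outer(w, np.arange(len(taps))))
    H = z @ taps
    return w, 20 * np.log10(np.abs(H) / np.abs(H).max() + 1e-12)

# Cascading two 2-tap averages [1, 1]*[1, 1] gives triangular weights [1, 2, 1];
# the notches are deeper than those of the plain 2-tap average.
w, h_db = fir_response([1, 2, 1])
print(h_db[::128])
```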

Adaptive Block Watermarking Based on JPEG2000 DWT (JPEG2000 DWT에 기반한 적응형 블록 워터마킹 구현)

  • Lim, Se-Yoon;Choi, Jun-Rim
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.44 no.11
    • /
    • pp.101-108
    • /
    • 2007
  • In this paper, we propose and verify an adaptive block watermarking algorithm based on the JPEG2000 DWT, which determines the watermarking of the original image by two scaling factors in order to overcome image degradation and blocking problems at the edges. The adaptive block watermarking algorithm uses two scaling factors: one is calculated as the ratio of the current block average to the next block average, and the other as the ratio of the total LL subband average to each block average. The adaptive block watermark signals are obtained from the original image itself, and the watermark strength is automatically controlled by the image characteristics. Unlike conventional methods that use an identical watermark intensity, the proposed method uses an adaptive watermark whose intensity is controlled block by block. Thus, the adaptive block watermark improves the visual quality of images by 4$\sim$14 dB, and it is robust against attacks such as filtering, JPEG2000 compression, resizing, and cropping. We also implemented the algorithm as an ASIC using Hynix 0.25 ${\mu}m$ CMOS technology to integrate it into a JPEG2000 codec chip.
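The two scaling factors are described only as ratios of block averages. The sketch below computes both ratios for an LL subband partitioned into blocks, as a way of making the description concrete; the block size, the raster order used for "the next block", and how the two factors are combined with the watermark are assumptions.

```python
import numpy as np

def block_scaling_factors(ll_subband: np.ndarray, block: int = 8):
    """Per-block scaling factors for an adaptive block watermark.

    s1[i] = mean(block i) / mean(block i+1)   (next block in raster order, assumed)
    s2[i] = mean(whole LL subband) / mean(block i)
    The exact way s1 and s2 modulate the watermark is not specified in the
    abstract; this only reproduces the two ratios it describes.
    """
    h, w = ll_subband.shape
    means = np.array([
        ll_subband[r:r + block, c:c + block].mean()
        for r in range(0, h, block)
        for c in range(0, w, block)
    ])
    next_means = np.roll(means, -1)            # wrap-around for the last block
    s1 = means / next_means
    s2 = ll_subband.mean() / means
    return s1, s2

rng = np.random.default_rng(2)
ll = rng.uniform(10, 200, size=(32, 32))
s1, s2 = block_scaling_factors(ll, block=8)
```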

Influence of lossy ground on impulse propagation in time domain for impulse ground penetrating radar (초광대역 임펄스 지반탐사레이더에서 지면의 영향에 따른 임펄스 전파 특성 연구)

  • Kim, Kwan-Ho;Park, Young-Jin;Yoon, Young-Joong
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.44 no.11
    • /
    • pp.42-47
    • /
    • 2007
  • In this paper, the influence of lossy ground, and of the gap variation between the lossy ground and a UWB antenna, on impulse propagation in the time domain for an impulse ground penetrating radar (GPR) is numerically and experimentally investigated. For this study, a novel planar UWB fat dipole antenna is developed. First, the influence of the lossy ground and of the gap variation between the lossy ground and the UWB antenna is simulated. For verification, a test field of sand and wet clay soil is built, and the transmission behavior is investigated at the test field using the developed dipole antenna. With the aid of the inverse discrete Fourier transform (IDFT), the time-domain impulse response is obtained from the transmission coefficient measured and simulated in the frequency domain. Measurement and simulation show that both the frequency of the maximum transmission coefficient and the transmission coefficient itself increase with a higher dielectric constant and a larger gap distance. In the time domain, it is shown that for a higher dielectric constant the amplitude of the received signal is higher and the reflected signals are significantly modified. It is also found that variation of the gap between the antenna and the ground surface changes the timing of the peak value.
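The time-domain responses are obtained by applying an IDFT to the transmission coefficient measured in the frequency domain. The sketch below shows that conversion for a synthetic S21 trace, with a window applied before the inverse transform; the windowing, the frequency grid, and the pure-delay test signal are illustrative assumptions.

```python
import numpy as np

def s21_to_impulse(freq_hz: np.ndarray, s21: np.ndarray):
    """Time-domain response of a measured transmission coefficient via IDFT.

    freq_hz : uniformly spaced frequency points (one-sided, starting near DC)
    s21     : complex transmission coefficient at those frequencies
    """
    window = np.hanning(len(s21))                    # taper to reduce ringing
    spectrum = s21 * window
    h_t = np.fft.irfft(spectrum)                     # enforce a real time signal
    df = freq_hz[1] - freq_hz[0]
    t = np.arange(len(h_t)) / (df * len(h_t))        # time axis of the IDFT output
    return t, h_t

# Synthetic example: a flat channel with a pure delay shows a single peak.
f = np.linspace(0, 3e9, 301)
delay = 2e-9
t, h = s21_to_impulse(f, np.exp(-2j * np.pi * f * delay))
print(t[np.argmax(np.abs(h))])                       # close to the 2 ns delay
```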

Lane-wise Travel Speed Characteristics Analysis in Uninterrupted Flow Considering Lane-wise Speed Reversal (차로속도역전현상을 고려한 연속류 도로의 차로별 주행 속도 특성 분석)

  • Yang, Inchul;Jeon, Woo Hoon;Ki, Sung hwan;Yoon, Jungeun
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.15 no.6
    • /
    • pp.116-126
    • /
    • 2016
  • In this study, lane-wise traffic flow characteristics on uninterrupted flow were analysed using the new notion of "lane-wise travel speed reversal (LTSR)", defined as the phenomenon in which the travel speed in the median lane is lower than in the other lanes. A mathematical formulation was also proposed to calculate the strength of LTSR. The experimental road site is the Seoul Outer Ring Expressway (Jayuro IC to Jangsoo IC), and travel trajectories for each of the four lanes were collected on weekdays (Monday through Friday) during the morning peak. Comparing lane-wise travel speeds over the entire test road section, no LTSR was observed, meaning that the travel speed in the median lane was the fastest, followed by the 2nd, 3rd, and 4th lanes in that order. However, a microscopic analysis using data based on discrete 100-meter road sections shows that LTSR occurs many times. In particular, the strength of LTSR is higher in congested areas and in freeway merge and diverge segments. These results are expected to serve as fundamental data when establishing lane-by-lane traffic operation strategies and developing lane-wise traffic information collection and dissemination technology.
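The paper's mathematical formulation for LTSR strength is not reproduced in the abstract, so the sketch below uses a simple stand-in measure (the relative speed deficit of the median lane against the fastest other lane, per 100-m section) purely to illustrate how section-level detection would work; the metric itself is an assumption.

```python
import numpy as np

def ltsr_strength(section_speeds: np.ndarray) -> np.ndarray:
    """Stand-in LTSR strength per road section (not the paper's formulation).

    section_speeds : array of shape (n_sections, n_lanes), lane 0 = median lane.
    Returns 0 where the median lane is fastest; otherwise the relative deficit
    of the median lane against the fastest of the other lanes.
    """
    median = section_speeds[:, 0]
    fastest_other = section_speeds[:, 1:].max(axis=1)
    deficit = (fastest_other - median) / fastest_other
    return np.clip(deficit, 0.0, None)

# 100-m sections x 4 lanes (km/h); the middle section shows a speed reversal.
speeds = np.array([
    [95.0, 90.0, 85.0, 80.0],
    [62.0, 70.0, 66.0, 60.0],
    [88.0, 84.0, 80.0, 78.0],
])
print(ltsr_strength(speeds))   # nonzero only for the reversed section
```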