• Title/Summary/Keyword: Information input algorithm


On Adaptive LDPC Coded MIMO-OFDM with MQAM on Fading Channels (페이딩 채널에서 적응 LDPC 부호화 MIMO-OFDM의 성능 분석)

  • Kim, Jin-Woo;Joh, Kyung-Hyun;Ra, Keuk-Hwan
    • 전자공학회논문지 IE / v.43 no.2 / pp.80-86 / 2006
  • This paper considers wireless transmission based on LDPC coding and adaptive spatial-subcarrier coded modulation with MQAM for orthogonal frequency division multiplexing (OFDM), exploiting instantaneous channel state information and employing multiple antennas at both the transmitter and the receiver. Adaptive coded modulation is a promising approach for bandwidth-efficient transmission over time-varying, narrowband wireless channels. On power-limited Additive White Gaussian Noise (AWGN) channels, low-density parity-check (LDPC) codes are a class of error control codes with impressive error-correcting performance, under some conditions even better than that of turbo codes. The paper demonstrates OFDM with LDPC coding and adaptive modulation applied to a Multiple-Input Multiple-Output (MIMO) system. An optimization algorithm is used to obtain a bit and power allocation for each subcarrier, assuming instantaneous channel knowledge. The experimental results show the potential of the proposed system.
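The per-subcarrier bit and power allocation described above can be illustrated with a greedy (Hughes-Hartogs-style) loading sketch; this is a generic textbook algorithm under an assumed unit-noise MQAM power-cost model, not the paper's exact optimizer:

```python
def greedy_bit_loading(channel_gains, total_bits, noise=1.0):
    """Greedy bit loading: repeatedly grant the next bit to the subcarrier
    where it costs the least additional transmit power. For MQAM, going from
    b to b+1 bits costs power proportional to (2**(b+1) - 2**b) * noise / g**2,
    where g is the subcarrier's channel gain (assumed cost model)."""
    n = len(channel_gains)
    bits = [0] * n
    power = [0.0] * n
    for _ in range(total_bits):
        costs = [(2 ** (bits[i] + 1) - 2 ** bits[i]) * noise / channel_gains[i] ** 2
                 for i in range(n)]
        k = min(range(n), key=costs.__getitem__)  # cheapest subcarrier for one more bit
        power[k] += costs[k]
        bits[k] += 1
    return bits, power
```

Strong subcarriers end up carrying denser constellations, which is exactly the behavior an adaptive MQAM scheme relies on.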

Driving Pattern Recognition System Using Smartphone sensor stream (스마트폰 센서스트림을 이용한 운전 패턴 인식 시스템)

  • Song, Chung-Won;Nam, Kwang-Woo;Lee, Chang-Woo
    • Journal of Korea Society of Industrial Information Systems / v.17 no.3 / pp.35-42 / 2012
  • A database of driving patterns can be utilized in various systems, such as automatic driving and driver safety systems, and can help monitor driving style. We therefore propose a driving pattern recognition system in which the sensor streams from a smartphone are recorded and used to recognize driving events. In this paper we focus on driving pattern recognition, an essential preliminary step of driving style recognition. We divide the input sensor streams into 7 driving patterns: Left-turn (L), U-turn (U), Right-turn (R), Rapid-Braking (RB), Quick-Start (QS), Rapid-Acceleration (RA), and Speed-Bump (SB). To classify driving patterns, a preprocessing step for data smoothing is followed by an event detection step; finally, the detected events are classified by the DTW (Dynamic Time Warping) algorithm. To assist drivers, we provide the classified pattern together with the corresponding video stream, which is recorded along with its sensor stream. The proposed system will play an essential role in safe-driving and driving-monitoring systems.
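The DTW classification step can be sketched in a few lines of pure Python; the template streams and the nearest-template (1-NN) decision rule below are illustrative assumptions, not the paper's exact pipeline:

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance between two
    1-D sensor streams, using absolute difference as the local cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of insertion, deletion, and match moves
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def classify(event, templates):
    """Nearest-template (1-NN) classification over labeled pattern templates."""
    return min(templates, key=lambda label: dtw_distance(event, templates[label]))
```

Because DTW warps the time axis, an event performed slightly faster or slower than its template still matches with low cost, which is why it suits variable-speed driving maneuvers.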

Implementation of Efficient Pile-up Pulse Processing Algorithm Based on Trapezoidal Filter (사다리꼴 필터를 이용한 효율적인 중첩펄스 처리 알고리즘 구현)

  • Piao, Zheyan;Chung, Jin-Gyun
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.8 / pp.162-167 / 2013
  • X-ray or γ-ray spectroscopy systems are widely used for analyzing material characteristics. Pile-up pulses are often encountered, for several reasons, in XRF systems; thus, it is necessary to reject or recover pile-up pulses to accurately analyze the material under test. In this paper, a pile-up pulse rejection and recovery method is presented for XRF systems using trapezoidal pulse shaping of the input signals. Since the proposed method is based on the trapezoidal pulse shaping widely used in XRF systems, only two counters and a few registers are needed to implement the additional pile-up rejection and recovery function. Consequently, the proposed system is much simpler than conventional pulse reconstruction systems. It is shown that the proposed method detects and rejects pile-up pulses exactly, and that pile-up pulses can be recovered if certain conditions are satisfied.
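The trapezoidal shaping the method builds on can be sketched with the standard recursive double-difference accumulator; the step-like input model and the duration-based pile-up test are illustrative assumptions, not the paper's exact counter logic:

```python
def trapezoidal_shape(v, k, m):
    """Recursive trapezoidal shaper for step-like pulses: accumulating the
    double difference v[n] - v[n-k] - v[n-k-m] + v[n-2k-m] turns a unit step
    into a trapezoid with rise time k, flat top m, and peak height k."""
    def at(i):
        return v[i] if i >= 0 else 0  # assume zero baseline before the record
    out, acc = [], 0
    for n in range(len(v)):
        acc += at(n) - at(n - k) - at(n - k - m) + at(n - 2 * k - m)
        out.append(acc)
    return out

def is_pileup(shaped, k, m, thresh):
    """Duration test: one isolated pulse stays above threshold for at most
    about k + m samples; a longer run indicates overlapping pulses."""
    run = longest = 0
    for s in shaped:
        run = run + 1 if s > thresh else 0
        longest = max(longest, run)
    return longest > k + m
```

For decaying (RC-type) detector pulses a second accumulator with pole-zero correction is needed; the step model keeps the sketch short.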

A Design and Implementation Vessel USN Middleware of Server-Side Method based on Context Aware (Server-Side 방식의 상황 인식 기반 선박 USN 미들웨어 구현 및 설계)

  • Song, Byoung-Ho;Song, Iick-Ho;Kim, Jong-Hwa;Lee, Seong-Ro
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.2 / pp.116-124 / 2011
  • In this paper, we implemented a vessel USN middleware using a server-side method that considers the characteristics of the ocean environment. We designed a multiple-query processing module to efficiently process multidimensional sensor stream data, and proposed an optimized query plan using an Mjoin query and a hash table. We also proposed a method for vessel context awareness and management that reflects ocean characteristics; risk contexts are determined with an SVM algorithm in the context-awareness management module. As a result, with 5,000 input data sets we obtained about 87.5% average accuracy for fire cases and about 85.1% average accuracy for vessel risk cases, and we implemented a vessel USN monitoring system.

Fingerprint Image Quality Assessment for On-line Fingerprint Recognition (온라인 지문 인식 시스템을 위한 지문 품질 측정)

  • Lee, Sang-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.2 / pp.77-85 / 2010
  • Fingerprint image quality checking is one of the most important issues in on-line fingerprint recognition because recognition performance is largely affected by the quality of the fingerprint images. Previous fingerprint quality-checking methods have typically considered only the local quality of the fingerprint. However, it is also necessary to estimate the global quality of a fingerprint to judge whether it can be used in an on-line recognition system. Therefore, in this paper, we propose both local and global methods to calculate fingerprint quality. The local quality-checking algorithm considers both the condition of the input fingerprints and orientation estimation errors: the 2D gradients of the fingerprint images are first separated into two sets of 1D gradients, and the shapes of the PDFs (Probability Density Functions) of these gradients are measured to determine fingerprint quality. The global quality-checking method uses a neural network to estimate the global fingerprint quality from the local quality values. We also analyze matching performance using the FVC2002 database. Experimental results show that the proposed quality-checking method yields better matching performance than the NFIQ (NIST Fingerprint Image Quality) method.
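As a concrete example of judging quality from 2-D image gradients, the block-wise orientation coherence below is a standard local-quality proxy; it is related to, but not necessarily identical with, the gradient-PDF-shape measure the paper proposes:

```python
import math

def block_coherence(block):
    """Orientation coherence of one image block computed from its 2-D
    gradients: close to 1.0 for a strongly oriented ridge pattern,
    near 0 for isotropic noise or a flat region."""
    h, w = len(block), len(block[0])
    gxx = gyy = gxy = 0.0
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = (block[i][j + 1] - block[i][j - 1]) / 2.0  # central differences
            gy = (block[i + 1][j] - block[i - 1][j]) / 2.0
            gxx += gx * gx
            gyy += gy * gy
            gxy += gx * gy
    denom = gxx + gyy
    if denom == 0:
        return 0.0  # flat block carries no orientation information
    return math.sqrt((gxx - gyy) ** 2 + 4 * gxy ** 2) / denom
```

Averaging such block scores over the image, or feeding them to a small neural network as the abstract describes, yields a global quality estimate.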

Design and Implementation of Conversion System Between ISO/IEC 10646 and Multi-Byte Code Set (ISO/IEC 10646과 멀티바이트 코드 세트간의 변환시스템의 설계 및 구현)

  • Kim, Chul
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.11 no.4 / pp.319-324 / 2018
  • In this paper, we designed and implemented a code conversion method between ISO/IEC 10646 and multi-byte code sets. The Universal Multiple-Octet Coded Character Set (UCS) provides codes for more than 65,000 characters, a huge increase over ASCII's capacity of 128 characters. It is applicable to the representation, transmission, interchange, processing, storage, input, and presentation of the written forms of languages throughout the world. It is therefore important to guide customers on code conversion methods while their systems are migrated to an environment in which the UCS is used alongside current code systems, i.e., ASCII PC code and EBCDIC host code. A code conversion utility, including the mapping table between the UCS and the IBM new host code, is presented to explain the code conversion algorithm and its implementation in the system. The programs executed successfully in real system environments and can thus support customers during migration between the UCS and the current IBM code systems.
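The decode-to-UCS-then-encode pivot behind such conversion can be sketched with Python's codec machinery; here cp949 and cp500 merely stand in for a multi-byte PC code and an EBCDIC host code, and the paper's actual IBM mapping tables are not reproduced:

```python
def convert(data: bytes, src: str, dst: str) -> bytes:
    """Convert between two byte encodings by pivoting through the UCS:
    decode the source multi-byte code to Unicode code points, then
    encode those code points into the target code."""
    return data.decode(src).encode(dst)
```

The same pattern, driven by explicit mapping tables instead of built-in codecs, covers the vendor-specific host codes the paper discusses.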

Efficient Rate Control by Lagrange Multiplier Using Adaptive Mode Selection in Video Coding (비디오 코딩시 Lagrange 승수를 조정하여 적응 모드 선택에 따른 비트율의 제어)

  • Ryu, Chul;Kim, Seung P.
    • The Journal of Korean Institute of Communications and Information Sciences / v.25 no.1B / pp.77-88 / 2000
  • This paper presents an approach to rate control by adaptively selecting macroblock modes in video coding. The problem of rate control has been investigated by many authors, with the quantizer level adjusted based on buffer fullness. The proposed approach differs from previous ones [4] in that it finds the optimal decision curve rather than a fixed set of modes. The proposed algorithm extends the coding decision options for rate control to motion/no-motion compensation as well as inter/intra decisions. Instead of a fixed motion/no-motion compensation or inter/intra decision curve, one can utilize an adaptive decision curve based on the characteristics of the input frames so that the PSNR at a given bit rate is maximized. The proposed approach therefore provides better rate control than a simple quantizer-feedback approach in terms of visual quality. The curve is obtained using a simulated annealing optimization technique. The algorithm is implemented, and simulations are compared with other approaches within an H.261 video codec.
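The Lagrangian view of rate control can be illustrated with a toy mode-decision loop: each macroblock picks the mode minimizing J = D + λR, and λ is bisected until the total rate meets the budget. The (distortion, rate) candidates below are invented numbers, and the generic multiplier adjustment shown here stands in for the paper's simulated-annealing decision-curve search:

```python
def choose_modes(blocks, lam):
    """Per-block Lagrangian decision: pick the (distortion, rate) candidate
    minimizing J = D + lam * R, and report the resulting total rate."""
    picks = [min(cands, key=lambda m: m[0] + lam * m[1]) for cands in blocks]
    total_rate = sum(r for _, r in picks)
    return picks, total_rate

def rate_control(blocks, target_rate, lo=0.0, hi=1000.0, iters=40):
    """Bisect lambda until the selected modes fit the rate budget: a larger
    lambda penalizes rate more and drives the total bit count down."""
    for _ in range(iters):
        lam = (lo + hi) / 2
        _, rate = choose_modes(blocks, lam)
        if rate > target_rate:
            lo = lam  # need a heavier rate penalty
        else:
            hi = lam  # feasible; try cheaper quality trade-off
    return choose_modes(blocks, hi)
```

Raising λ makes the selected modes coarser and the bit count fall monotonically, which is what makes the bisection converge.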


Experiment and Implementation of a Machine-Learning Based k-Value Prediction Scheme in a k-Anonymity Algorithm (k-익명화 알고리즘에서 기계학습 기반의 k값 예측 기법 실험 및 구현)

  • Muh, Kumbayoni Lalu;Jang, Sung-Bong
    • KIPS Transactions on Computer and Communication Systems / v.9 no.1 / pp.9-16 / 2020
  • The k-anonymity scheme has been widely used to protect private information when big data are distributed to a third party for research purposes. When the scheme is applied, determining an optimal k value is a difficult problem because many factors must be considered. Currently, the determination is done almost manually by human experts using their intuition, which degrades anonymization performance and costs much time and effort. To overcome this problem, a simple machine-learning-based approach has been proposed. This paper describes implementations and experiments to realize that idea. In this work, a deep neural network (DNN) is implemented using TensorFlow libraries and is trained and tested on an input dataset. The experimental results show that the trend of training errors follows a typical DNN pattern, but the validation errors exhibit a pattern different from that of a typical training process. The advantage of the proposed approach is that it can reduce the time and cost for experts to determine the k value, because the determination can be done semi-automatically.

Flood Damage Assessment According to the Scenarios Coupled with GIS Data (GIS 자료와 연계한 시나리오별 홍수피해액 분석)

  • Lee, Geun-Sang;Park, Jin-Hyeg
    • Journal of Korean Society for Geospatial Information Science / v.19 no.4 / pp.71-80 / 2011
  • Simple and improved methods for the assessment of flood damage were used in previous studies, and the Multi-Dimensional Flood Damage Assessment (MD-FDA) has been applied in Korea since 2004. This study evaluated flood damage downstream of a dam using the MD-FDA method based on GIS data. First, flood water levels from the FLDWAV (Flood Wave routing) model were input into the cross-section layer based on the enforced drainage algorithm, and water-depth grid data were created through spatial calculation with DEM data. The asset values of buildings and agricultural land in each local government were evaluated using the building layer from the digital map and the agricultural-land map from the land-cover map. Itemized flood damage was then calculated from the unit price by building type, the evaluated value of household goods by urban type, the unit cost per crop, and the tangible and inventory assets of companies, combined with the building, agricultural-land, and flooding-depth layers. The flood damage analysis showed that damage for a 200-year rainfall frequency was 1.19, 1.30, and 1.96 times that for the 100-year, 50-year, and 10-year frequencies, respectively.

A VLSI Array Processor Architecture for High-Speed Processing of Full Search Block Matching Algorithm (완전탐색 블럭정합 알고리즘의 고속 처리를 위한 VLSI 어레이 프로세서의 구조)

  • 이수진;우종호
    • The Journal of Korean Institute of Communications and Information Sciences / v.27 no.4A / pp.364-370
    • 2002
  • In this paper, we propose a VLSI array architecture for high speed processing of FBMA. First of all, the sequential FBMA is transformed into a single assignment code by using the index space expansion, and then the dependance graph is obtained from it. The two dimensional VLSI array is derived by projecting the dependance graph along the optimal direction. Since the candidate blocks in the search range are overlapped with columns as well as rows, the processing elements of the VLSI array are designed to reuse the overlapped data. As the results, the number of data inputs is reduced so that the processing performance is improved. The proposed VLSI array has (N$^2$+1)${\times}$(2p+1) processing elements and (N+2p) input ports where N is the block size and p is the maximum search range. The computation time of the rat reference block is (N$^2$+2(p+1)N+6p), and the block pipeline period is (3N+4p-1).