Title/Summary/Keyword: IT-fusion

Transflective liquid crystal display with single cell gap and simple structure

  • Kim, Mi-Young; Lim, Young-Jin; Jeong, Eun; Chin, Mi-Hyung; Kim, Jin-Ho; Srivastava, Anoop Kumar; Lee, Seung-Hee
    • Proceedings of the Korean Information Display Society Conference (한국정보디스플레이학회 학술대회논문집) / 2008.10a / pp.340-343 / 2008
  • This work reports the simple fabrication of a single-cell-gap transflective liquid crystal display (LCD) using a wire grid polarizer. A nano-sized wire grid polarizer was patterned on the common electrode itself in the reflective part of the FFS (fringe field switching) mode, while the common electrode in the transmissive part was left unpatterned. However, this structure did not show a single gamma curve, so the device was further improved by also patterning the common electrode in the transmissive part. As a result, the V-T curve of the proposed structure shows a single gamma curve. Such a device structure is free of an in-cell retarder, compensation film, and reflector; furthermore, it is very thin and easy to fabricate.

Real-Time Visible-Infrared Image Fusion using Multi-Guided Filter

  • Jeong, Woojin; Han, Bok Gyu; Yang, Hyeon Seok; Moon, Young Shik
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.6 / pp.3092-3107 / 2019
  • Visible-infrared image fusion is the process of synthesizing an infrared image and a visible image into a single fused image that combines the complementary advantages of both. An infrared image can capture a target object in dark or foggy environments, but its utility is hindered by the blurry appearance of objects. A visible image, on the other hand, shows an object clearly under normal lighting conditions but is not ideal in dark or foggy environments. In this paper, we propose a multi-guided filter and a real-time image fusion method based on it. The proposed multi-guided filter is a modification of the guided filter that accepts multiple guidance images, and the resulting fusion method is much faster than conventional image fusion methods. In an experiment, we compare the proposed method with conventional methods in quantitative and qualitative terms as well as fusion speed and flickering artifacts. The proposed method synthesizes 57.93 frames per second for an image size of 320×270, confirming that it is capable of real-time processing; in addition, it synthesizes flicker-free video.
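
The basic idea of letting the locally sharper source dominate each pixel can be sketched in a few lines (a minimal numpy illustration using local-variance weights over an assumed 3×3 window; it is not the authors' multi-guided filter):

```python
import numpy as np

def local_variance(img, k=3):
    """Local variance via a sliding window (a simple per-pixel saliency measure)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    win = np.lib.stride_tricks.sliding_window_view(p, (k, k))
    return win.var(axis=(-1, -2))

def fuse(visible, infrared, eps=1e-6):
    """Weight each pixel by local variance so the more detailed source dominates."""
    wv = local_variance(visible)
    wi = local_variance(infrared)
    w = wv / (wv + wi + eps)
    return w * visible + (1.0 - w) * infrared
```

Since the fused value is a convex combination of the two inputs at every pixel, it always stays within their range; a guidance-based filter refines exactly this kind of weighting.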

Development of Multi-sensor Image Fusion Software (InFusion) for Value-added Applications (고부가 활용을 위한 이종영상 융합 소프트웨어(InFusion) 개발)

  • Choi, Myung-jin; Chung, Inhyup; Ko, Hyeong Ghun; Jang, Sumin
    • Journal of Satellite, Information and Communications / v.12 no.3 / pp.15-21 / 2017
  • Following the successful launch of KOMPSAT-3 in May 2012, KOMPSAT-5 in August 2013 and KOMPSAT-3A in March 2015 were also launched successfully, enabling the integrated operation of optical, radar, and thermal infrared sensors in Korea and establishing a foundation for exploiting the characteristics of each sensor. To overcome the limits on application range and accuracy of a single sensor, multi-sensor image fusion techniques have been developed that take advantage of multiple sensors and complement each other. In this paper, we introduce the development of software (InFusion) for multi-sensor image fusion and value-added product generation using the KOMPSAT series. First, we describe the characteristics of each sensor and the necessity of fusion software development, and then we describe the entire development process. The work aims to increase the data utilization of the KOMPSAT series and to demonstrate the competitiveness of domestic software through the creation of high value-added products.

An Analysis on the Mechanism and Algorithm of ET·IT-Based Future City Space (환경기술과 정보기술 기반의 미래도시 공간 메커니즘과 알고리즘 분석)

  • Han, Ju-Hyung; Lee, Sang-Ho
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.3 / pp.296-305 / 2017
  • This study aims to characterize new urban space through an analysis of the mechanism structure and algorithm linking IT and ET. The results are as follows. First, the development trends of ET·IT are classified into four types, "Eco-Friendly Development", "Energy Production Technology Development", "Energy Saving Technology Development", and "Wide Area IT Network Development", all of which are found to be constantly evolving. Second, Sang-Am DMC developed through the environmentally protective and eco-friendly aspects of ET from the Korean War to 1978, and wide-area IT networks developed rapidly from 1990 to 2000; by 2010, however, urban spaces had begun to develop through the fusion of environment and information technologies. This fusion is referred to as "Individual Development" in the past, "Semi-fusion Development" in the present, and "Total Fusion Development" in the future. Third, the mechanism structure of DMC has evolved through processes of creation, extinction, and fusion: the creation process supplements the insufficiencies of existing systems, the extinction process is the compactification of the fusion process, and the fusion process is the standard for creation and extinction. Finally, innovative new urban and architectural spaces will be forged by the mechanism symbolization patterns of IT·ET.

X-Ray Image Enhancement Using a Boundary Division Wiener Filter and Wavelet-Based Image Fusion Approach

  • Khan, Sajid Ullah; Chai, Wang Yin; See, Chai Soo; Khan, Amjad
    • Journal of Information Processing Systems / v.12 no.1 / pp.35-45 / 2016
  • To resolve the problems of Poisson/impulse noise, blurriness, and sharpness in degraded X-ray images, a novel and efficient enhancement algorithm based on X-ray image fusion using a discrete wavelet transform is proposed in this paper. The proposed algorithm consists of two basic steps. First, it applies a boundary-division technique to detect pixels corrupted by Poisson and impulse noise and then uses a Wiener filter to restore those pixels. Second, it applies a sharpening technique to the same degraded X-ray image. This yields two source X-ray images, each preserving its own enhancement effect. The details and approximations of these source X-ray images are fused via different fusion rules in the wavelet domain. The experimental results show that the proposed algorithm successfully combines the merits of the Wiener filter and sharpening, achieving significant proficiency in enhancing degraded X-ray images while suppressing Poisson noise and blurriness and preserving edge details.
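
The two-source wavelet fusion step can be illustrated with a one-level Haar transform: average the approximation bands, keep the larger-magnitude detail coefficients (a simplified numpy sketch; the paper's actual wavelet, decomposition depth, and fusion rules may differ):

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform (image sides must be even)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2] = ll + lh
    a[:, 1::2] = ll - lh
    d = np.empty_like(a)
    d[:, 0::2] = hl + hh
    d[:, 1::2] = hl - hh
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2, :] = a + d
    out[1::2, :] = a - d
    return out

def fuse_wavelet(src1, src2):
    """Average the approximations, keep the larger-magnitude detail coefficient."""
    c1, c2 = haar2d(src1), haar2d(src2)
    ll = (c1[0] + c2[0]) / 2.0
    details = [np.where(np.abs(a) >= np.abs(b), a, b)
               for a, b in zip(c1[1:], c2[1:])]
    return ihaar2d(ll, *details)
```

The max-absolute rule on the detail bands is what lets one source contribute its restored smooth regions while the other contributes its sharpened edges.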

Attitude Estimation for the Biped Robot with Vision and Gyro Sensor Fusion (비전 센서와 자이로 센서의 융합을 통한 보행 로봇의 자세 추정)

  • Park, Jin-Seong; Park, Young-Jin; Park, Youn-Sik; Hong, Deok-Hwa
    • Journal of Institute of Control, Robotics and Systems / v.17 no.6 / pp.546-551 / 2011
  • A tilt sensor is required to control the attitude of a biped robot walking on uneven terrain. A vision sensor, normally used for recognizing humans or detecting obstacles, can also serve as a tilt sensor by comparing the current image with a reference image. However, a vision sensor alone has technological limitations for biped robot control, such as a low sampling frequency and estimation time delay. To verify these limitations, an experimental inverted-pendulum setup representing the pitch motion of a walking or running robot is used, and it is shown that a vision sensor alone cannot control the inverted pendulum, mainly because of the time delay. In this paper, to overcome these limitations, a Kalman filter for multi-rate sensor fusion is applied together with a low-cost gyro sensor; this resolves the limitations of the vision sensor and also eliminates the drift of the gyro sensor. Experiments on inverted-pendulum control show that the tilt-estimation performance of the fused sensors is improved enough to control the attitude of the inverted pendulum.
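
The multi-rate fusion can be sketched as a small Kalman filter that integrates the gyro every step and corrects with the slower vision angle while estimating the gyro bias (a minimal numpy illustration; the rates and noise parameters here are assumptions, not the authors' values):

```python
import numpy as np

def fuse_tilt(gyro_rates, vision_angles, dt=0.01, vision_every=10):
    """Kalman filter: fast gyro integration, slow vision correction.

    State x = [tilt angle, gyro bias]; the vision sensor measures the angle
    directly but only once every `vision_every` gyro samples.
    """
    x = np.zeros(2)
    P = np.eye(2)
    F = np.array([[1.0, -dt], [0.0, 1.0]])   # angle += (rate - bias) * dt
    B = np.array([dt, 0.0])
    Q = np.diag([1e-5, 1e-7])
    H = np.array([[1.0, 0.0]])
    R = np.array([[1e-3]])
    est = []
    for k, w in enumerate(gyro_rates):
        x = F @ x + B * w                    # predict with the gyro rate
        P = F @ P @ F.T + Q
        if k % vision_every == 0:            # a slow vision measurement arrives
            z = vision_angles[k // vision_every]
            y = z - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + (K @ y).ravel()
            P = (np.eye(2) - K @ H) @ P
        est.append(x[0])
    return np.array(est)
```

Between vision updates the estimate follows the gyro at full rate, while each vision fix pulls the angle back and slowly learns the bias, which is the drift-cancellation effect described in the abstract.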

A Study of a 17×17 LED Dot Matrix for Offering Various Contents (다양한 콘텐츠 제공을 위한 17×17 LED 도트 매트릭스 제작 및 연구)

  • Bae, Ye-Jeong; Kwon, Jong-Man; Jeong, Sun-Ho; Park, Goo-Man; Cha, Jae-Sang
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2016.11a / pp.197-198 / 2016
  • Research exploiting the many advantages of RGB LED lighting has been active recently; beyond simple illumination, such lighting also serves as a design element for conveying information and staging spaces. Using LEDs whose RGB mixing can express a variety of colors, we built an LED dot matrix in which each LED is controlled individually so that various colors and shapes can be displayed, and we use it to present diverse contents. For shape-based contents the LEDs are controlled individually, while for other contents the desired LEDs are grouped and controlled together. The LED dot matrix produced in this paper can convey a large amount of information, and its provision of diverse contents opens the way to wider commercialization and improved efficiency.
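
As one illustration of individually addressing and grouping LEDs on such a matrix, a strip-driven 17×17 panel with serpentine wiring could be mapped as follows (a hypothetical sketch; the paper does not specify the wiring scheme or control API, so both are assumptions):

```python
SIZE = 17  # 17x17 matrix driven as one serpentine LED strip (assumed wiring)

def led_index(row, col):
    """Map (row, col) to a strip position: even rows run left-to-right,
    odd rows run right-to-left."""
    if row % 2 == 0:
        return row * SIZE + col
    return row * SIZE + (SIZE - 1 - col)

def set_group(frame, cells, color):
    """Drive a group of LEDs with one color; frame is a flat list of RGB tuples."""
    for row, col in cells:
        frame[led_index(row, col)] = color
    return frame
```

Individual control is then `frame[led_index(r, c)] = color`, while grouped control passes a list of cells to `set_group`, mirroring the two control modes described in the abstract.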

  • PDF

Transfer Learning-Based Feature Fusion Model for Classification of Maneuver Weapon Systems

  • Jinyong Hwang; You-Rak Choi; Tae-Jin Park; Ji-Hoon Bae
    • Journal of Information Processing Systems / v.19 no.5 / pp.673-687 / 2023
  • Convolutional neural network-based deep learning is the technology most commonly used for image identification, but it requires large-scale data for training, so its application in fields where data acquisition is limited, such as the military, can be challenging. In particular, the identification of ground weapon systems is a very important mission that demands high identification accuracy, and various studies have therefore sought high performance from small-scale data. Among them, the ensemble method, which achieves excellent performance by averaging the predictions of pre-trained models, is the most representative; however, finding the optimal combination of ensemble models requires considerable time and effort, the prediction results obtained with an ensemble face a performance ceiling, and it is difficult to obtain an ensemble effect from models with imbalanced classification accuracies. In this paper, we propose a transfer learning-based feature fusion technique for heterogeneous models that extracts and fuses the features of pre-trained heterogeneous models and then fine-tunes the hyperparameters of the fully connected layer to improve classification accuracy. The experimental results indicate that the limitations of existing ensemble methods can be overcome by improving classification accuracy through feature fusion between heterogeneous models based on transfer learning.
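
The feature-fusion idea, extracting features from frozen heterogeneous backbones, concatenating them, and classifying with a trainable fully connected head, can be sketched with stand-in numpy projections (in practice the backbones would be pre-trained CNNs; all sizes and weights here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def extract(model_w, x):
    """Stand-in for a frozen pre-trained backbone: fixed projection + ReLU."""
    return np.maximum(model_w @ x, 0.0)

def fused_logits(head_w, feats_a, feats_b):
    """Concatenate heterogeneous features, then apply the trainable FC head."""
    fused = np.concatenate([feats_a, feats_b])
    return head_w @ fused

# Two frozen "backbones" with different feature sizes, one trainable head.
wa = rng.standard_normal((8, 16))    # model A: 16-dim input -> 8-dim features
wb = rng.standard_normal((4, 16))    # model B: 16-dim input -> 4-dim features
head = rng.standard_normal((3, 12))  # classifier over the 8 + 4 fused features
```

Unlike an ensemble, which averages each model's class predictions, only `head` is trained here, so the classifier can weight the two feature spaces jointly rather than combining already-made decisions.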

Pose Estimation of Ground Test Bed using Ceiling Landmark and Optical Flow Based on Single Camera/IMU Fusion (천정부착 랜드마크와 광류를 이용한 단일 카메라/관성 센서 융합 기반의 인공위성 지상시험장치의 위치 및 자세 추정)

  • Shin, Ok-Shik; Park, Chan-Gook
    • Journal of Institute of Control, Robotics and Systems / v.18 no.1 / pp.54-61 / 2012
  • In this paper, a pose estimation method for a satellite GTB (Ground Test Bed) using an integrated vision/MEMS IMU (Inertial Measurement Unit) system is presented. The GTB, used for verifying a satellite system on the ground, is similar to a mobile robot that has thrusters and a reaction wheel as actuators and floats on the floor on compressed air. An EKF (Extended Kalman Filter) fuses the MEMS IMU with a vision system consisting of a single camera and infrared LEDs that serve as ceiling landmarks. A fusion filter generally uses the positions of feature points in the image as measurements; however, this approach can cause position errors due to the bias of the MEMS IMU whenever no camera image is available, if the bias has not been properly estimated by the filter. Therefore, a fusion method is proposed that uses both the positions of the feature points and the camera velocity determined from the optical flow of the feature points. Experiments verify that the proposed method is more robust to the IMU bias than the method that uses only the positions of the feature points.
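
Using both the landmark positions and the optical-flow velocity as measurements can be sketched as a Kalman filter with a two-row measurement model (a simplified linear numpy illustration with assumed noise values, not the paper's EKF):

```python
import numpy as np

def kf_pos_vel(z_seq, dt=0.1):
    """Kalman filter sketch for one axis: state [position, velocity].

    Each measurement z = [position from ceiling landmarks,
                          velocity from optical flow of feature points].
    """
    x = np.zeros(2)
    P = np.eye(2) * 10.0
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
    Q = np.diag([1e-4, 1e-3])
    H = np.eye(2)                            # both states measured directly
    R = np.diag([0.05, 0.02])
    for z in z_seq:
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        y = z - H @ x                        # update with position + velocity
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    return x
```

Because velocity is observed directly, the filter's velocity (and, in the full EKF, the IMU bias) stays constrained even through gaps in the position measurements, which is the robustness the abstract reports.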