• Title/Summary/Keyword: Vision sensing


Compact Optical Systems for Space Applications

  • Biryuchinskiy, Sergey;Churayeu, Siarhei;Jeong, Yeuncheol
    • Journal of Space Technology and Applications / v.1 no.1 / pp.104-120 / 2021
  • Several optical schemes of lenses for spacecraft developed by the authors are considered. The main optical characteristics of telescope lenses of various architectures are compared. We propose compact mirror, lens-mirror, and lens systems with the maximum attainable angular resolution and other parameters. Examples of calculating the optical systems of lenses used for various tasks, both in astronomy and in remote sensing of the Earth and other planets, are given. An example of an onboard computer system is discussed, and practical recommendations on the development and use of telescope lenses are given.
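
The specific optical prescriptions are not reproduced in this listing; as a general point of reference (a textbook convention, not a value taken from the paper), the diffraction-limited angular resolution of a circular aperture of diameter D at wavelength λ is commonly estimated by the Rayleigh criterion:

```latex
% Rayleigh criterion for the diffraction-limited angular resolution (radians)
% of a circular aperture; a standard reference formula, not a figure from the paper.
\theta_{\min} \approx 1.22\, \frac{\lambda}{D}
```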

Implementation of the SLAM System Using a Single Vision and Distance Sensors (단일 영상과 거리센서를 이용한 SLAM시스템 구현)

  • Yoo, Sung-Goo;Chong, Kil-To
    • Journal of the Institute of Electronics Engineers of Korea SC / v.45 no.6 / pp.149-156 / 2008
  • SLAM (Simultaneous Localization and Mapping) finds a robot's global position and builds a map from sensing data while an unmanned robot navigates an unknown environment. Two kinds of systems have been developed: one uses distance-measurement sensors such as ultrasonic and laser sensors, and the other uses a stereo vision system. Distance-measurement SLAM has low computing time and low cost, but its precision can suffer from measurement error or sensor non-linearity. In contrast, a stereo vision system can accurately measure 3D space, but it requires a high-end system for the complex calculations and is expensive. In this paper, we implement a SLAM system using a single camera image and a PSD sensor. The robot detects obstacles with the front PSD sensor and then perceives the size and features of the obstacles by image processing. Probabilistic SLAM was implemented using the sensor and image data, and the performance of the system was verified by real experiments.
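
The abstract describes fusing PSD range readings with image features in a probabilistic SLAM framework but does not give the filter equations. Below is a minimal, hypothetical 1-D Kalman-style sketch of fusing a PSD range measurement with an odometry prediction; the motion model, landmark position, and noise values are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's implementation): a 1-D extended-Kalman-style
# fusion of a noisy PSD range reading with a predicted robot position.
import numpy as np

def predict(x, P, u, q):
    """Motion update: move by odometry u with process-noise variance q."""
    return x + u, P + q

def update(x, P, z, landmark, r):
    """Measurement update: z is a PSD range to a landmark at a known position."""
    z_pred = abs(landmark - x)               # predicted range
    H = -1.0 if landmark >= x else 1.0       # Jacobian of |landmark - x| w.r.t. x
    S = H * P * H + r                        # innovation covariance
    K = P * H / S                            # Kalman gain
    x = x + K * (z - z_pred)
    P = (1 - K * H) * P
    return x, P

x, P = 0.0, 1.0                              # initial position estimate and variance
x, P = predict(x, P, u=0.5, q=0.01)          # robot commanded to move 0.5 m
x, P = update(x, P, z=1.4, landmark=2.0, r=0.05)  # PSD measures 1.4 m to the landmark
print(f"position {x:.3f} m, variance {P:.4f}")
```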

Thermal imaging and computer vision technologies for the enhancement of pig husbandry: a review

  • Md Nasim Reza;Md Razob Ali;Samsuzzaman;Md Shaha Nur Kabir;Md Rejaul Karim;Shahriar Ahmed;Hyunjin Kyoung;Gookhwan Kim;Sun-Ok Chung
    • Journal of Animal Science and Technology / v.66 no.1 / pp.31-56 / 2024
  • Pig farming, a vital industry, necessitates proactive measures for early disease detection and crush symptom monitoring to ensure optimum pig health and safety. This review explores advanced thermal sensing technologies and computer vision-based thermal imaging techniques employed for pig disease and piglet crush symptom monitoring on pig farms. Infrared thermography (IRT) is a non-invasive and efficient technology for measuring pig body temperature, providing advantages such as non-destructive, long-distance, and high-sensitivity measurements. Unlike traditional methods, IRT offers a quick and labor-saving approach to acquiring physiological data impacted by environmental temperature, crucial for understanding pig body physiology and metabolism. IRT aids in early disease detection, respiratory health monitoring, and evaluating vaccination effectiveness. Challenges include body surface emissivity variations affecting measurement accuracy. Thermal imaging and deep learning algorithms are used for pig behavior recognition, with the dorsal plane effective for stress detection. Remote health monitoring through thermal imaging, deep learning, and wearable devices facilitates non-invasive assessment of pig health, minimizing medication use. Integration of advanced sensors, thermal imaging, and deep learning shows potential for disease detection and improvement in pig farming, but challenges and ethical considerations must be addressed for successful implementation. This review summarizes the state-of-the-art technologies used in the pig farming industry, including computer vision algorithms such as object detection, image segmentation, and deep learning techniques. It also discusses the benefits and limitations of IRT technology, providing an overview of the current research field. This study provides valuable insights for researchers and farmers regarding IRT application in pig production, highlighting notable approaches and the latest research findings in this field.
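
The review notes that body-surface emissivity variations affect IRT measurement accuracy. As an illustration only (a simplified textbook correction, not a method from the review), the sketch below adjusts an apparent radiometric reading for emissivity via the Stefan-Boltzmann relation; real IRT cameras additionally account for atmospheric transmission and camera calibration.

```python
# Illustrative sketch only (not from the review): simplified emissivity correction
# of a radiometric surface-temperature reading using the Stefan-Boltzmann relation.
def corrected_temperature_k(t_measured_k, t_reflected_k, emissivity):
    """Estimate the true surface temperature (K) from an apparent IRT reading."""
    radiance = t_measured_k**4 - (1.0 - emissivity) * t_reflected_k**4
    return (radiance / emissivity) ** 0.25

# Example: apparent reading 311.0 K, reflected ambient 293.0 K, assumed emissivity 0.95
print(corrected_temperature_k(311.0, 293.0, 0.95))
```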

Development of a Seabed Mapping System using SeaBeam2000 Multibeam Echo Sounder Data (SeaBeam2000 다중빔 음향측심기를 이용한 해저면 맵핑시스템 개발)

  • 박요섭;김학일;이용국;석봉출
    • Korean Journal of Remote Sensing / v.11 no.3 / pp.129-145 / 1995
  • SeaBeam2000, a multibeam echo sounder, is a new-generation seabed mapping system in which a single swath covers an angular range of -60° to 60° from the vertical with 121 beams. It provides high-density, high-quality bathymetric data along with sidescan acoustic data. The purpose of this research is to develop a system for processing multibeam underwater acoustic and bathymetric data using digital signal processing techniques. Recently obtained multibeam echo sounder data covering a survey area in the East Sea of Korea (37°00′N to 37°30′N and 129°40′E to 130°30′E) were processed using the developed system and reproduced both in raster image format and in three-dimensionally visualized form.
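
The paper's processing chain is not reproduced here; for orientation, the sketch below shows the basic single-beam geometry relating two-way travel time and beam launch angle to depth and across-track distance, assuming a constant sound speed and a straight-ray approximation (real multibeam processing applies ray bending from a sound-velocity profile).

```python
# Simple illustrative geometry (not the paper's algorithm): convert a beam's
# two-way travel time and launch angle into depth and across-track distance.
import math

def beam_to_depth(two_way_time_s, beam_angle_deg, sound_speed_ms=1500.0):
    slant_range = sound_speed_ms * two_way_time_s / 2.0        # one-way slant range
    depth = slant_range * math.cos(math.radians(beam_angle_deg))
    across_track = slant_range * math.sin(math.radians(beam_angle_deg))
    return depth, across_track

# Example: 2.0 s two-way travel time on the outermost (60 deg) beam
print(beam_to_depth(2.0, 60.0))   # -> (750.0, ~1299.0) metres
```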

Remote Sensing Image Classification for Land Cover Mapping in Developing Countries: A Novel Deep Learning Approach

  • Lynda, Nzurumike Obianuju;Nnanna, Nwojo Agwu;Boukar, Moussa Mahamat
    • International Journal of Computer Science & Network Security / v.22 no.2 / pp.214-222 / 2022
  • Convolutional Neural Networks (CNNs) are a category of deep learning networks that have proven very effective in computer vision tasks such as image classification. However, they have seen little use for remote sensing image classification in developing countries, mainly due to the scarcity of training data. Recently, transfer learning has successfully been used to develop state-of-the-art models for remote sensing (RS) image classification using training and testing data from well-known RS data repositories. However, the ability of such models to classify RS test data from a different dataset has not been sufficiently investigated. In this paper, we propose a deep CNN model that can classify RS test data from a dataset different from the training dataset. To achieve this, we first re-trained a ResNet-50 model on EuroSAT, a large-scale RS dataset, to develop a base model, and then integrated augmentation and ensemble learning to improve its generalization ability. We further experimented with this model's ability to classify a novel dataset (Nig_Images). The final classification results show that our model achieves 96% and 80% accuracy on the EuroSAT and Nig_Images test data, respectively. Adequate knowledge and use of this framework is expected to encourage research into, and the use of, deep CNNs for land cover mapping where training data are scarce, as is often the case in developing countries.
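
The paper's exact implementation details are not given in this listing; the following is a minimal, hypothetical PyTorch sketch of the kind of setup the abstract describes (a pretrained ResNet-50 backbone with a new classification head and simple augmentation). All hyperparameters, transforms, and the training loop placement are illustrative assumptions, not the authors' configuration.

```python
# Hypothetical transfer-learning sketch for EuroSAT-style land-cover classification.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 10  # EuroSAT defines 10 land-cover classes

train_transform = transforms.Compose([      # simple augmentation, values illustrative
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(15),
    transforms.ToTensor(),
])

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)   # new classification head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
# ... a standard training loop over an EuroSAT DataLoader would go here; an
# ensemble can then be formed by averaging the softmax outputs of several runs.
```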

A Study on a Visual Sensor System for Weld Seam Tracking in Robotic GMA Welding (GMA 용접로봇용 용접선 시각 추적 시스템에 관한 연구)

  • 김재웅;김동호
    • Proceedings of the Korean Society of Precision Engineering Conference / 2000.11a / pp.643-646 / 2000
  • In this study, we constructed a preview-sensing visual sensor system for real-time weld seam tracking in GMA welding. The sensor part consists of a CCD camera, a band-pass filter, a diode laser system with a cylindrical lens, and a vision board for interframe processing. We used a commercial robot system that includes a GMA welding machine. To extract the weld seam, we applied interframe processing on the vision board, which removes the noise caused by spatter and fume in the image. Since interframe processing produced a clean image, the weld seam could be extracted with simple methods such as a first derivative computed by the central difference method. We also applied a moving average to the successive weld seam position data to reduce fluctuation. In experiments, the developed robot system with the visual sensor was able to track the most common weld seams, such as fillet joints, V-grooves, and lap joints, whose seams include both planar and height-direction variation.
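
The abstract names two simple operations, a central-difference derivative and a moving average, without showing them. Below is a minimal illustrative sketch of both (synthetic data, not the authors' code): locating a bright laser-stripe edge in one image column by the steepest intensity rise, then smoothing successive seam positions.

```python
# Illustrative sketch (not the authors' implementation).
import numpy as np

def stripe_position(column_intensity):
    """Row index of the steepest intensity rise in one image column (central difference)."""
    grad = (column_intensity[2:] - column_intensity[:-2]) / 2.0
    return int(np.argmax(grad)) + 1

def moving_average(positions, window=5):
    """Smooth a sequence of seam positions to reduce fluctuation."""
    kernel = np.ones(window) / window
    return np.convolve(positions, kernel, mode="valid")

# Example with synthetic data: an intensity step near row 40 plus flat background
column = np.concatenate([np.full(40, 10.0), np.full(60, 200.0)])
print(stripe_position(column))                                   # prints 39 (edge near row 40)
print(moving_average(np.array([40, 42, 39, 41, 40, 43, 38]), 3)) # smoothed seam positions
```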


A Study on Automatic Seam Tracking and Weaving Width Control for Pipe Welding with Narrow Groove (협개선 배관 용접을 위한 용접선 추적 및 위빙 폭 자동 제어에 관한 연구)

  • Moon, Hyeong-Soon;Lee, Seok-Hyoung;Kim, Jong-Jun;Kim, Jong-Cheol
    • Special Issue of the Society of Naval Architects of Korea / 2013.12a / pp.73-80 / 2013
  • From a broad point of view, seam tracking has been one of the main issues in welding automation. Several attempts at seam tracking with a fixed weaving width have been successful. As a solution for seam tracking with a varying groove width, visual sensors such as CCD cameras have been adopted. Although vision sensing techniques can achieve high accuracy, their weak point is that a well-prepared sensing environment is required to obtain high-quality visual measurements, which are easily affected by the significant noise present in industrial settings. This paper proposes an alternative seam tracking algorithm for narrow grooves. In this study, a special arc-voltage measurement device is developed to enhance the reliability of the measured welding signals. Based on the developed arc sensor algorithm, an automatic weld-width tracking algorithm is also proposed, which is able to predict the weld position more accurately. The usefulness of the automatic weld-width tracking algorithm was verified by applying it to gas tungsten arc welding (GTAW).
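
The paper's arc-sensor algorithm is not reproduced in this listing. As a heavily simplified illustration of the general through-the-arc idea (not the authors' method), the sketch below compares arc-signal averages sampled near the left and right weaving extremes to estimate a lateral seam offset; the sign convention, gain, and sample values are assumptions.

```python
# Illustrative through-the-arc weaving comparison (not the paper's algorithm).
import numpy as np

def lateral_offset(left_samples, right_samples, gain=1.0):
    """Imbalance between the weaving extremes; sign convention and gain are illustrative."""
    imbalance = np.mean(right_samples) - np.mean(left_samples)
    return gain * imbalance

# Example: arc-voltage samples (V) collected near each weaving extreme
left = np.array([14.2, 14.1, 14.3])
right = np.array([13.6, 13.7, 13.5])
print(lateral_offset(left, right))   # negative here -> correct the torch toward the left
```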


Fish-eye camera calibration and artificial landmarks detection for the self-charging of a mobile robot (이동로봇의 자동충전을 위한 어안렌즈 카메라의 보정 및 인공표지의 검출)

  • Kwon, Oh-Sang
    • Journal of Sensor Science and Technology / v.14 no.4 / pp.278-285 / 2005
  • This paper describes techniques of camera calibration and artificial landmark detection for the automatic charging of a mobile robot equipped with a fish-eye camera facing its direction of travel for movement or surveillance purposes. To distinguish the charging station from the surrounding environment, three landmarks fitted with infrared LEDs were installed at the station. When the robot reaches a certain point, a signal is sent to activate the LEDs, which allows the robot to easily detect the landmarks with its vision camera. To eliminate the effect of outside light interference during this process, a difference image is generated by comparing the two images taken with the LEDs on and off, respectively. A fish-eye lens was used for the robot's vision camera, but the wide-angle lens causes significant image distortion. The radial lens distortion was corrected after a linear perspective projection transformation based on the pin-hole model. In the experiment, the designed system showed a sensing accuracy of ±10 mm in position and ±1° in orientation at a distance of 550 mm.
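
The difference-image step lends itself to a short sketch. The following is a minimal, hypothetical illustration (synthetic frames, not the paper's code) of suppressing ambient light by differencing frames taken with the infrared LEDs on and off, then locating the LED blob by its centroid; the threshold is an assumed value.

```python
# Minimal sketch (not the paper's implementation) of LED landmark detection by frame differencing.
import numpy as np

def led_difference_mask(frame_on, frame_off, threshold=60):
    """Boolean mask of pixels that brighten when the LEDs are switched on."""
    diff = frame_on.astype(np.int16) - frame_off.astype(np.int16)
    return diff > threshold

def landmark_centroid(mask):
    """Centroid (row, col) of the detected LED blob, or None if nothing is detected."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None
    return rows.mean(), cols.mean()

# Example with synthetic 8-bit frames: one bright LED spot centred at (50, 80)
off = np.full((100, 160), 30, dtype=np.uint8)
on = off.copy()
on[48:53, 78:83] = 220
print(landmark_centroid(led_difference_mask(on, off)))   # ~ (50.0, 80.0)
```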

The Market Orientation from Dual Perspectives: Customers and Managers Perceptions in Tunisian Banks

  • Najjar, Faouzi;Missaoui, Yosra
    • International Journal of Computer Science & Network Security / v.21 no.11 / pp.31-42 / 2021
  • Several studies have been conducted on market orientation over the last three decades. However, the majority of previous research has focused exclusively on an internal vision that conceives of market orientation from an organizational perspective, treating it strictly as a culture or behavior perceived by the company's staff (managers and employees). This study aims to emphasize the importance of analyzing market orientation from a dual perspective by investigating the perceptions of customers and managers simultaneously. It examines the perceptual gap, or perceptual congruence, of market orientation between customers and managers. A survey was conducted with Tunisian bank managers and business-to-business customers to measure their perceptions of market orientation. The results reveal the level of managers' market orientation in Tunisian banks compared with customers' perceptions. The perceptual gap of market orientation between managers and customers, termed congruence, is highlighted and categorized. This study helps fill the gap arising from one-sided evaluations of market orientation and offers a dyadic vision of market orientation that supports managers in continuously learning about markets and sensing customers' needs and expectations. The market orientation level of the two groups is evaluated to provide managerial recommendations.

Retrieval Spectral Albedo using red and NIR band of SPOT/VGT

  • Lee, Chang Suk;Seo, Min Ji;Han, Kyung-Soo
    • Korean Journal of Remote Sensing / v.30 no.3 / pp.367-373 / 2014
  • Albedo is one of the critical parameters for understanding global climate change and the energy/water balance. In this study, we used red and NIR reflectance from the Satellite Pour l'Observation de la Terre (SPOT)/Vegetation (VGT) S1 product. The product is preprocessed for users: it is atmospherically corrected with the Simplified Method for Atmospheric Correction (SMAC) by Vision on Technology (VITO) before broadband albedo is calculated. Roujean's Bidirectional Reflectance Distribution Function (BRDF) model, a semi-empirical method, is used for BRDF angular integration and inversion. Each kernel of Roujean's model was integrated over the angular components (viewing zenith, solar zenith, and relative azimuth angles). The black-sky hemispherical function is obtained by integrating over the viewing angles, whereas the white-sky hemispherical coefficient is obtained by additionally integrating over the incident angles. The estimated spectral albedos for the red (0.61–0.68 μm, B2) and near-infrared (0.79–0.89 μm, B3) bands show good agreement with MODIS albedo products.
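
The kernel equations themselves are not given in the abstract. As a general reference (the standard kernel-driven formulation, not reproduced from the paper), the Roujean model expresses surface reflectance as a linear combination of an isotropic term and two kernels, with black- and white-sky albedos obtained by integrating the kernels over the viewing and illumination hemispheres:

```latex
% Standard Roujean kernel-driven BRDF model; f_1, f_2 are the geometric and
% volume-scattering kernels, k_0, k_1, k_2 are retrieved by inversion.
R(\theta_s, \theta_v, \phi) = k_0 + k_1\, f_1(\theta_s, \theta_v, \phi) + k_2\, f_2(\theta_s, \theta_v, \phi)

% Black-sky (directional-hemispherical) albedo: kernels integrated over the
% viewing hemisphere at a fixed solar zenith angle \theta_s.
\alpha_{bs}(\theta_s) = k_0 + k_1\, h_1(\theta_s) + k_2\, h_2(\theta_s)

% White-sky (bi-hemispherical) albedo: h_i integrated once more over the
% illumination hemisphere.
\alpha_{ws} = k_0 + k_1\, H_1 + k_2\, H_2
```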