• Title/Summary/Keyword: RED ONE Camera

Influence of the Welding Speeds and Changing the Tool Pin Profiles on the Friction Stir Welded AA5083-O Joints

  • El-Sayed, M.M.;Shash, A.Y.;Abd Rabou, M.
    • Journal of Welding and Joining
    • /
    • v.35 no.3
    • /
    • pp.44-51
    • /
    • 2017
  • In the present study, AA 5083-O plates were joined by the friction stir welding (FSW) technique. A universal milling machine was used to perform the welding of the work-pieces, which were held in position by a vice. The joints were friction stir welded with two tools of different pin profiles (a cylindrical threaded pin and a tapered smooth pin) at rotational speeds of 400 rpm and 630 rpm and welding speeds of 100 mm/min and 160 mm/min. During FSW of each joint, the temperature was measured with an infrared thermal imaging camera. The welded joints were inspected visually as well as by their macro- and microstructural evolution. Furthermore, the hardness and tensile strength of the joints were measured to study the effect of the FSW parameters on the mechanical properties. The results show that, for the same tool pin profile, increasing the rotational speed increases the peak temperature, while increasing the welding speed decreases it. Defect-free welds were obtained at the lower rotational speed with the threaded tool profile. Moreover, the threaded pin profile gives superior mechanical properties at the lower rotational speed.

A Study on Multi-modal Near-IR Face and Iris Recognition on Mobile Phones (휴대폰 환경에서의 근적외선 얼굴 및 홍채 다중 인식 연구)

  • Park, Kang-Ryoung;Han, Song-Yi;Kang, Byung-Jun;Park, So-Young
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.45 no.2
    • /
    • pp.1-9
    • /
    • 2008
  • As the security requirements of mobile phones have increased, there has been extensive research on authentication using a single biometric feature (e.g., an iris, a fingerprint, or a face image). Due to the limitations of uni-modal biometrics, we propose a method that combines face and iris images in order to improve accuracy in mobile environments. This paper presents four advantages and contributions over previous research. First, in order to capture both face and iris images quickly and simultaneously, we use the conventional built-in megapixel camera of a mobile phone, revised to capture NIR (Near-InfraRed) face and iris images. Second, in order to increase the authentication accuracy of face and iris, we propose a score-level fusion method based on an SVM (Support Vector Machine). Third, to reduce the classification complexity of the SVM and the intra-class variation of the face and iris data, we normalize the input face and iris data, respectively. For the face, an NIR illuminator and an NIR-pass filter on the camera are used to reduce the illumination variance caused by environmental visible lighting, and the saturated facial region caused by the NIR illuminator is normalized by a low-complexity logarithmic algorithm suitable for a mobile phone. For the iris, transformation of the image into polar coordinates and iris-code shifting are used to obtain robust identification accuracy irrespective of the image-capturing conditions. Fourth, to increase the processing speed on the mobile phone, we use integer-based face and iris authentication algorithms. Experiments were conducted with face and iris images captured by the megapixel camera of a mobile phone. The results showed that the authentication accuracy of the SVM-based fusion was better than that of uni-modal (face or iris) authentication and of the SUM, MAX, MIN, and weighted-SUM rules.
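
Below is a minimal sketch of score-level fusion with an SVM, illustrating the idea described in the abstract rather than the authors' actual implementation. It assumes that per-sample matching scores from a face matcher and an iris matcher are already available; the score distributions, class labels, and all variable names are hypothetical.

```python
# Minimal sketch of SVM-based score-level fusion (illustrative only).
# Assumes precomputed matching scores from a face matcher and an iris matcher.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical scores: genuine pairs tend to score high, impostors low.
genuine = rng.normal(loc=[0.8, 0.7], scale=0.1, size=(500, 2))
impostor = rng.normal(loc=[0.4, 0.3], scale=0.1, size=(500, 2))
scores = np.vstack([genuine, impostor])                 # columns: [face_score, iris_score]
labels = np.hstack([np.ones(500, dtype=int),
                    np.zeros(500, dtype=int)])          # 1 = genuine, 0 = impostor

X_train, X_test, y_train, y_test = train_test_split(
    scores, labels, test_size=0.3, random_state=0)

# RBF-kernel SVM operating on the 2-D score vector (the fusion step).
fusion_svm = SVC(kernel="rbf", C=1.0, gamma="scale")
fusion_svm.fit(X_train, y_train)
print("SVM fusion accuracy:", accuracy_score(y_test, fusion_svm.predict(X_test)))

# Simple rule-based fusions (SUM / MAX / MIN) for comparison.
for name, fused in [("SUM", X_test.sum(axis=1)),
                    ("MAX", X_test.max(axis=1)),
                    ("MIN", X_test.min(axis=1))]:
    pred = (fused > np.median(fused)).astype(int)
    print(name, "rule accuracy:", accuracy_score(y_test, pred))
```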

Color Sensing Technology using Arduino and Color Sensor (아두이노와 컬러센서를 이용한 색상 감지 기술)

  • Dusub Song;Hojun Yeom;Sangsoo Park
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.3
    • /
    • pp.13-17
    • /
    • 2024
  • A color sensor is an optical sensor used to photograph objects, including the human body, and reproduce them on a monitor. A color sensor quantifies the red, green, and blue light coming from an object and expresses it as digital numbers, and the state of the object can be judged by comparing these values or their ratios. In this study, standard colors displayed on a monitor were measured using a color sensor, and the magnitudes of the red, green, and blue components (the RGB values) were compared with the values specified by the computer. When measured with the TCS34725 color sensor, even when the light generated by the computer consisted of only one or two of red, green, and blue, the sensor detected all three components. Additionally, when the colors of two monitors set to the same RGB values were measured with the color sensor, different RGB values were obtained. These results can be attributed to the imperfection of the color filters used to render colors on the monitor and to the imperfect optical characteristics of the photodiodes used in the color sensor. Therefore, when photographing an object and judging its condition from its color, the same type of camera or smartphone must be used.
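
As a rough illustration of the comparison described above, the sketch below normalizes an RGB reading and compares its channel ratios with the RGB value requested on the monitor. It is plain Python with hypothetical readings, not tied to the Arduino code or sensor driver used in the study.

```python
# Illustrative comparison of a sensor RGB reading with the commanded monitor RGB.
# The readings below are hypothetical; a TCS34725-class sensor typically reports
# non-zero values on all three channels even for a nominally "pure" color.

def normalize(rgb):
    """Scale an (R, G, B) triple so its components sum to 1."""
    total = sum(rgb)
    return tuple(c / total for c in rgb) if total else (0.0, 0.0, 0.0)

commanded = (255, 0, 0)          # pure red requested on the monitor
measured = (182, 31, 24)         # hypothetical raw sensor reading

cmd_ratio = normalize(commanded)
meas_ratio = normalize(measured)

# Per-channel difference of the normalized ratios.
diff = [abs(c - m) for c, m in zip(cmd_ratio, meas_ratio)]
print("commanded ratios:", [round(v, 3) for v in cmd_ratio])
print("measured ratios: ", [round(v, 3) for v in meas_ratio])
print("abs difference:  ", [round(v, 3) for v in diff])

# Even for a pure commanded color, the measured green/blue ratios are non-zero,
# consistent with imperfect monitor filters and photodiode response.
```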

Automatic Wood Species Identification of Korean Softwood Based on Convolutional Neural Networks

  • Kwon, Ohkyung;Lee, Hyung Gu;Lee, Mi-Rim;Jang, Sujin;Yang, Sang-Yun;Park, Se-Yeong;Choi, In-Gyu;Yeo, Hwanmyeong
    • Journal of the Korean Wood Science and Technology
    • /
    • v.45 no.6
    • /
    • pp.797-808
    • /
    • 2017
  • Automatic wood species identification systems enable fast and accurate identification of wood species outside of specialized laboratories staffed with well-trained experts in wood species identification. Conventional automatic wood species identification systems consist of two major parts: a feature extractor and a classifier. Feature extractors require hand-engineering to obtain optimal features that quantify the content of an image. A Convolutional Neural Network (CNN), one of the deep learning methods, trained for wood species can extract intrinsic feature representations and classify them correctly, and it usually outperforms classifiers built on top of hand-tuned extracted features. We developed an automatic wood species identification system utilizing CNN models such as LeNet, MiniVGGNet, and their variants. A smartphone camera was used to obtain macroscopic images of rough-sawn surfaces from cross sections of wood. Five Korean softwood species (cedar, cypress, Korean pine, Korean red pine, and larch) were classified by the CNN models. The most accurate and most stable CNN model was LeNet3, which adds two layers to the original LeNet architecture. The species identification accuracy of the LeNet3 architecture for the five Korean softwood species was 99.3%. The results show that the automatic wood species identification system is fast and accurate, and small enough to be deployed on a mobile device such as a smartphone.
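
The abstract does not give the exact LeNet3 layer configuration, so the sketch below is only a generic LeNet-style CNN for five-class classification, written in Keras and assuming 64 x 64 RGB input patches; the layer sizes are illustrative, not the paper's architecture.

```python
# Minimal LeNet-style CNN sketch for 5-class wood image classification.
# The exact LeNet3 configuration is not given in the abstract; this is a
# generic small CNN for illustration, assuming 64x64 RGB image patches.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_SPECIES = 5  # cedar, cypress, Korean pine, Korean red pine, larch

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(20, (5, 5), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(50, (5, 5), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(100, (3, 3), activation="relu", padding="same"),  # extra conv block
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(500, activation="relu"),
    layers.Dense(NUM_SPECIES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# Training would then use labeled macroscopic cross-section images, e.g.:
# model.fit(train_images, train_labels, validation_split=0.2, epochs=30)
```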

A Study on the Implement of Image Recognition the Road Traffic Safety Information Board using Nearest Neighborhood Decision Making Algorithm (최근접 이웃 결정방법 알고리즘을 이용한 도로교통안전표지판 영상인식의 구현)

  • Jung Jin-Yong;Kim Dong-Hyun;Lee So-Haeng
    • Management & Information Systems Review
    • /
    • v.4
    • /
    • pp.257-284
    • /
    • 2000
  • As the number of drivers who own cars increases, comprehensive studies on automobiles for traffic safety have become important problems. A visual recognition system for remote-controlled driving is part of the sensor processor of an unmanned autonomous vehicle system. When a driver drives on an unknown highway or an ordinary road, such a system builds a model from the successively input road traffic information. The proposed recognition system automatically recognizes and distinguishes a road traffic safety information board as one type of road traffic information. The overall process of the proposed recognition system is as follows. Photographs of road traffic safety information boards were taken with a digital camera, and the bitmap image files were normalized to a size of 200 × 200 with Photoshop 5.0. The original true-color data comprise about sixteen million colors; the images were converted to 256 colors because true color requires large storage and long computation time. Erosion and dilation operations were applied 30 times to remove unnecessary image content, and the target region was extracted with a region-splitting technique, a kind of segmentation. The boards were divided into three groups (attention, prohibition, and introduction information boards) by RYB (Red, Yellow, Blue) color segmentation. The influence of board size, orientation, rounding, position, and the brightness of light and darkness was minimized using eigenvectors and eigenvalues, and the sampled feature values were used to build a learning codebook database. The proposed recognition system first identifies the group of a board within the learning codebook database, and then recognizes the board by comparing it with the members of the same group using nearest-neighbor decision making.
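
The sketch below illustrates the general codebook-style approach described above: project images onto leading eigenvectors and classify with a nearest-neighbor rule. It uses synthetic data and scikit-learn's PCA and KNeighborsClassifier in place of the study's own implementation.

```python
# Sketch of eigen-feature extraction plus nearest-neighbor matching,
# in the spirit of the codebook approach described above (synthetic data only).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

# Hypothetical 200x200 grayscale sign images flattened to vectors,
# with three groups (attention / prohibition / introduction boards).
n_per_group, dim = 30, 200 * 200
images = np.vstack([rng.normal(loc=g, scale=1.0, size=(n_per_group, dim))
                    for g in range(3)])
groups = np.repeat([0, 1, 2], n_per_group)

# Eigenvector/eigenvalue step: project onto the leading principal components.
pca = PCA(n_components=20)
features = pca.fit_transform(images)   # the "codebook" feature vectors

# Nearest-neighbor decision making within the learned feature space.
classifier = KNeighborsClassifier(n_neighbors=1)
classifier.fit(features, groups)

query = pca.transform(images[:1])      # a query board, here reusing a sample
print("predicted group:", classifier.predict(query)[0])
```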

HIGH REDSHIFT QUASAR SURVEY WITH IMS

  • JEON, YISEUL;IM, MYUNGSHIN
    • Publications of The Korean Astronomical Society
    • /
    • v.30 no.2
    • /
    • pp.405-407
    • /
    • 2015
  • We describe a survey of quasars in the early universe, beyond z ~ 5, which is one of the main science goals of the Infrared Medium-deep Survey (IMS) conducted by the Center for the Exploration of the Origin of the Universe (CEOU). We use multi-wavelength archival data from SDSS, CFHTLS, UKIDSS, WISE, and SWIRE, which provide deep images over wide areas suitable for searching for high-redshift quasars. In addition, we carried out a J-band imaging survey at the United Kingdom InfraRed Telescope with a depth of ~23 AB mag over a survey area of ~120 deg^2, which makes IMS a suitable survey for finding faint, high-redshift quasars at z ~ 7. For the quasar candidates at z ~ 5.5, we are conducting observations with the Camera for QUasars in EArly uNiverse (CQUEAN) on the 2.1 m telescope at McDonald Observatory, which has a custom-designed filter set installed to enhance the efficiency of selecting robust quasar candidates in this redshift range. We use various color-color diagrams suited to the specific redshift ranges, which reduce contaminating sources such as M/L/T dwarfs, low-redshift galaxies, and instrumental defects. The high-redshift quasars we are confirming can provide clues to the growth of supermassive black holes since z ~ 7. By expanding the quasar sample at 5 < z < 7, the final stage of hydrogen reionization in the intergalactic medium (IGM) can also be better understood. Moreover, we can place useful constraints on the quasar luminosity function to study the contribution of quasars to IGM reionization.
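
The color-color selection can be sketched as simple magnitude cuts on catalog photometry. The band combination and numeric thresholds below are placeholders for illustration only, not the actual IMS/CQUEAN selection criteria.

```python
# Illustrative color-color selection of high-redshift quasar candidates.
# The numeric cuts below are placeholders, not the survey's actual criteria.
import numpy as np

# Hypothetical catalog: magnitudes in the i, z, and J bands.
catalog = np.array([
    # i_mag, z_mag, J_mag
    [23.8, 21.9, 21.6],
    [22.1, 21.8, 21.5],
    [24.5, 22.0, 21.9],
])

i_z = catalog[:, 0] - catalog[:, 1]   # i - z color
z_J = catalog[:, 1] - catalog[:, 2]   # z - J color

# A strong i-band dropout with a blue z - J color is a typical signature of
# a z ~ 6 quasar and helps reject M/L/T dwarfs (placeholder thresholds).
candidates = (i_z > 1.5) & (z_J < 0.8)
print("candidate indices:", np.where(candidates)[0])
```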

Experience of a Disaster Medical Assistant Team activation in the fire disaster at Jecheon sports complex building: limitation and importance of rescue (제천 스포츠복합건물 화재 재난에서의 권역재난의료지원팀 활동 경험 고찰: 한계점과 구조의 중요성)

  • Jung, Seung Gyo;Kim, Yoon Seop;Kim, Oh Hyun;Lee, Kang Hyun;Kim, Kwan-Lae;Jung, Woo Jin
    • Journal of The Korean Society of Emergency Medicine
    • /
    • v.29 no.6
    • /
    • pp.585-594
    • /
    • 2018
  • Objective: This study was designed to report on the progress of the fire at the Jecheon sports complex and to assess the adequacy of the Disaster Medical Assistant Team (DMAT)'s activities in response to the fire disaster. Methods: We conducted a retrospective review based on camera recordings and medical records documented at the disaster site. We cooperated with firefighters, police officers, local hospital medical staff, and public health personnel in Jecheon in order to classify patients at the disaster site and to track the patients' progress. Results: At 15:53, the first request for emergency rescue reached the 119 general emergency call center, and a request for DMAT activation came at 16:28. The DMAT arrived at the site at 17:04 and remained active until 00:43 the following day. The total number of casualties was 60, including 27 minimal (Green) patients, 29 expectant (Black) patients, three delayed (Yellow) patients, and one immediate (Red) patient. Thirty-two patients received on-site care from the DMAT. Two patients were transferred from a local hospital to Wonju Severance Christian Hospital for hyperbaric oxygen therapy. Conclusion: Twenty-nine victims were found in the sports complex building, and there were 31 mildly to moderately injured patients in this fire disaster. The main cause of death was thought to be smoke suffocation. Although the DMAT was activated relatively quickly, it could not operate effectively because of the late rescue and the difficulty of fire suppression.

Coastal Shallow-Water Bathymetry Survey through a Drone and Optical Remote Sensors (드론과 광학원격탐사 기법을 이용한 천해 수심측량)

  • Oh, Chan Young;Ahn, Kyungmo;Park, Jaeseong;Park, Sung Woo
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.29 no.3
    • /
    • pp.162-168
    • /
    • 2017
  • A shallow-water bathymetry survey was conducted using high-definition color images obtained with a drone at an altitude of 100 m above sea level. Shallow-water bathymetry data are among the most important input data for research on beach erosion problems. Accurate bathymetry data within the closure depth are especially critical, because most of the interesting phenomena occur in the surf zone. However, it is extremely difficult to obtain accurate bathymetry data there due to wave-induced currents and breaking waves. Therefore, an optical remote sensing technique using a small drone is considered an attractive alternative. This paper presents the potential of image processing algorithms using multi-variable linear regression applied to the red, green, blue, and grey band images for estimating shallow-water depth with a drone-mounted HD camera. Optical remote sensing analysis conducted at Wolpo beach showed promising results: estimated water depths within 5 m showed a correlation coefficient of 0.99 and a maximum error of 0.2 m compared with water depths surveyed manually as well as with a ship-board echo-sounder.
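
A minimal sketch of the multi-variable linear regression step is shown below, fitting depth against red, green, blue, and grey pixel values. The data are synthetic; the paper's actual band preprocessing and calibration are not reproduced.

```python
# Sketch of depth estimation by multi-variable linear regression on
# red, green, blue, and grey pixel values (synthetic data for illustration).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)

# Hypothetical training set: per-pixel band values and surveyed depths.
n = 500
red, green, blue = rng.uniform(0, 1, (3, n))
grey = (red + green + blue) / 3.0
depth = 4.0 * blue - 2.5 * red + 1.0 * grey + rng.normal(0, 0.1, n)  # synthetic "truth"

X = np.column_stack([red, green, blue, grey])
model = LinearRegression().fit(X, depth)

print("coefficients:", np.round(model.coef_, 2))
print("R^2 on training pixels:", round(model.score(X, depth), 3))

# New image pixels would then be mapped to depth with model.predict(...)
```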

An Input/Output Technology for 3-Dimensional Moving Image Processing (3차원 동영상 정보처리용 영상 입출력 기술)

  • Son, Jung-Young;Chun, You-Seek
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.35S no.8
    • /
    • pp.1-11
    • /
    • 1998
  • One of the desired features for the realization of high-quality information and telecommunication services in the future is "the sensation of reality". This will be achieved only with visual communication based on 3-dimensional (3-D) moving images. The main difficulties in realizing 3-D moving image communication are that no data transmission technology has been developed for the huge amount of data involved in 3-D images, and no technologies have been established for recording and displaying 3-D images in real time. The currently known stereoscopic imaging technologies can present only depth, not motion parallax, so they are not effective in creating the sensation of reality without the viewer wearing glasses. More effective 3-D imaging technologies for achieving the sensation of reality are those based on multiview 3-D images, which provide image changes of the object as the eyes move in different directions. In this paper, a multiview 3-D imaging system composed of 8 CCD cameras in a single case, an RGB (Red, Green, Blue) beam projector, and a holographic screen is introduced. In this system, the 8 view images are recorded by the 8 CCD cameras and transmitted to the beam projector in sequence by a signal converter. This signal converter converts each camera signal into 3 color signals (RGB), combines each color signal from the 8 cameras into a serial signal train by multiplexing, and drives the corresponding color channel of the beam projector at a 480 Hz frame rate. The beam projector projects images onto the holographic screen through an LCD shutter. The LCD shutter consists of 8 LCD strips. The image of each LCD strip, created by the holographic screen, forms a sub-viewing zone. Since the ON period and sequence of the LCD strips are synchronized with those of the camera image sampling and the beam projector image projection, the multiview 3-D moving images are viewed in the viewing zone.
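
The time-multiplexing performed by the signal converter can be illustrated with a short scheduling sketch: eight views interleaved into a 480 Hz output sequence. The assumed per-camera rate of 480/8 = 60 frames per second and the frame labels are illustrative; the abstract specifies only the 480 Hz output rate.

```python
# Sketch of time-multiplexing 8 camera views into a single 480 Hz frame
# sequence (8 views x 60 Hz per view assumed), in the spirit of the signal
# converter described above; frames here are just labels, not real video data.
VIEWS = 8
PER_VIEW_HZ = 60
OUTPUT_HZ = VIEWS * PER_VIEW_HZ   # 480 Hz

def multiplex(num_output_frames):
    """Yield (output_slot, view_index, view_frame_index) for each projected frame."""
    for slot in range(num_output_frames):
        view = slot % VIEWS            # which camera / LCD strip is active
        view_frame = slot // VIEWS     # which frame of that camera is shown
        yield slot, view, view_frame

for slot, view, view_frame in multiplex(16):
    print(f"output frame {slot:2d} @ {OUTPUT_HZ} Hz -> camera {view}, frame {view_frame}")
```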

Development of Street Crossing Assistive Embedded System for the Visually-Impaired Using Machine Learning Algorithm (머신러닝을 이용한 시각장애인 도로 횡단 보조 임베디드 시스템 개발)

  • Oh, SeonTaek;Jeong, Kidong;Kim, Homin;Kim, Young-Keun
    • Journal of the HCI Society of Korea
    • /
    • v.14 no.2
    • /
    • pp.41-47
    • /
    • 2019
  • In this study, a smart assistive device is designed to recognize pedestrian signals and to provide audio instructions so that visually impaired people can cross streets safely. Walking alone is one of the biggest challenges for the visually impaired and degrades their quality of life. The proposed device has a camera attached to a pair of glasses; it detects traffic lights and recognizes pedestrian signals in real time using a machine learning algorithm on a GPU board, and provides audio instructions to the user. For portability, the device is designed to be compact and light while providing sufficient battery life. The embedded processor of the device is wired to the small camera attached to the glasses. In addition, a bone-conduction speaker is installed on the inner side of one temple of the glasses, so audio instructions can be given without blocking external sounds, for safety reasons. The performance of the proposed device was validated through experiments: it showed 87.0% recall and 100% precision for detecting the pedestrian green light, and 94.4% recall and 97.1% precision for detecting the pedestrian red light.
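
For reference, the precision and recall figures quoted above follow from simple counts of true/false positives and missed detections; the sketch below shows the computation with hypothetical counts chosen only to be consistent with the reported rates.

```python
# Sketch of the precision/recall computation used to report detection results;
# the counts below are hypothetical examples, not the study's actual data.
def precision_recall(tp, fp, fn):
    """Return (precision, recall) from true/false positive and false negative counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Example: 87 correctly detected green signals, 0 false alarms, 13 misses
# -> 100% precision, 87.0% recall.
p, r = precision_recall(tp=87, fp=0, fn=13)
print(f"green light: precision {p:.1%}, recall {r:.1%}")

p, r = precision_recall(tp=34, fp=1, fn=2)   # hypothetical red-light counts
print(f"red light:   precision {p:.1%}, recall {r:.1%}")
```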