• Title/Summary/Keyword: Conventional combine


Combining 2D CNN and Bidirectional LSTM to Consider Spatio-Temporal Features in Crop Classification (작물 분류에서 시공간 특징을 고려하기 위한 2D CNN과 양방향 LSTM의 결합)

  • Kwak, Geun-Ho;Park, Min-Gyu;Park, Chan-Won;Lee, Kyung-Do;Na, Sang-Il;Ahn, Ho-Yong;Park, No-Wook
    • Korean Journal of Remote Sensing / v.35 no.5_1 / pp.681-692 / 2019
  • In this paper, a hybrid deep learning model, called 2D convolution with bidirectional long short-term memory (2DCBLSTM), is presented that can effectively combine spatial and temporal features for crop classification. In the proposed model, 2D convolution operators are first applied to extract spatial features of crops, and the extracted spatial features are then used as inputs to a bidirectional LSTM model that can effectively process temporal features. To evaluate the classification performance of the proposed model, a case study of crop classification was carried out using multi-temporal unmanned aerial vehicle images acquired in Anbandegi, Korea. For comparison, we applied conventional deep learning models, including a two-dimensional convolutional neural network (CNN) using spatial features, an LSTM using temporal features, and a three-dimensional CNN using spatio-temporal features. An analysis of the impact of hyper-parameters on classification performance showed that using both spatial and temporal features greatly reduced crop misclassification, and the proposed hybrid model achieved the best classification accuracy compared with the conventional deep learning models that consider either spatial or temporal features alone. Therefore, the proposed model is expected to be effectively applicable to crop classification owing to its ability to consider the spatio-temporal features of crops.
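A minimal sketch of the 2D-convolution-plus-bidirectional-LSTM idea described above, assuming a patch-based input of shape (time, bands, height, width); the layer sizes, band count, and class count are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    """Shared 2D CNN per acquisition date, followed by a bidirectional LSTM."""
    def __init__(self, n_bands=4, n_classes=5, hidden=64):
        super().__init__()
        # Spatial feature extractor applied to every date in the time series
        self.cnn = nn.Sequential(
            nn.Conv2d(n_bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),               # -> (N, 64, 1, 1)
        )
        # Temporal model over the sequence of per-date spatial features
        self.lstm = nn.LSTM(64, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                          # x: (batch, time, bands, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1))          # (b*t, 64, 1, 1)
        feats = feats.view(b, t, -1)               # (b, t, 64)
        out, _ = self.lstm(feats)                  # (b, t, 2*hidden)
        return self.fc(out[:, -1])                 # class scores per patch

# Example: 8 patches, 6 acquisition dates, 4 bands, 16x16 pixels
logits = CNNBiLSTM()(torch.randn(8, 6, 4, 16, 16))
```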

A Study on the Improvement of Skin Loss Area in Skin Color Extraction for Face Detection (얼굴 검출을 위한 피부색 추출 과정에서 피부색 손실 영역 개선에 관한 연구)

  • Kim, Dong In;Lee, Gang Seong;Han, Kun Hee;Lee, Sang Hun
    • Journal of the Korea Convergence Society / v.10 no.5 / pp.1-8 / 2019
  • In this paper, we propose an improved facial skin color extraction method to address the problem that parts of the face are lost during skin color extraction because of shadow or illumination, so that skin color cannot be extracted there. In the conventional HSV method, when the facial surface is brightly illuminated, the skin color component is lost during extraction and a loss area appears on the face. To solve this problem, after extracting the skin color we identify, among the lost pixels, those whose H channel value lies within the skin color range of the HSV color space, and combine their coordinates with the coordinates of the original image to minimize the loss area. In the face detection step, faces were detected from the extracted skin color image using the LBP cascade classifier, which represents texture feature information. Experimental results show that the proposed method improves the detection rate and accuracy by 5.8% and 9.6%, respectively, compared with face detection using conventional RGB and HSV skin color extraction and the LBP cascade classifier.
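The recovery step can be pictured with a short OpenCV sketch: pixels rejected by the full HSV skin threshold are added back if their hue alone falls inside the skin range, and the merged skin-color image is passed to an LBP cascade detector. The threshold values, file names, and cascade path below are assumptions, not the paper's settings:

```python
import cv2
import numpy as np

img = cv2.imread("face.jpg")                               # input image (assumed path)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Conventional HSV skin mask; brightly lit skin tends to fall outside it
skin = cv2.inRange(hsv, (0, 30, 60), (25, 180, 255))

# Recover lost pixels whose H (hue) value is still in the assumed skin range
hue_ok = np.where(hsv[:, :, 0] <= 25, 255, 0).astype(np.uint8)
merged = cv2.bitwise_or(skin, hue_ok)

# Keep the original image pixels inside the merged mask
skin_img = cv2.bitwise_and(img, img, mask=merged)

# Face detection with an LBP cascade on the skin-color image
# (lbpcascade_frontalface.xml ships with the OpenCV source data; path assumed)
cascade = cv2.CascadeClassifier("lbpcascade_frontalface.xml")
faces = cascade.detectMultiScale(cv2.cvtColor(skin_img, cv2.COLOR_BGR2GRAY),
                                 scaleFactor=1.1, minNeighbors=4)
print(faces)
```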

Effect of Mechanical Working System on Labor-Saving in Wheat Cultivation (밀 기계화 작업체계에 의한 노력 절감 효과)

  • Kim, Hag-Sin;Kim, Young-Jin;Kim, Kyeong-Hoon;Lee, Kwang-Won;Shin, Sang-Hyun;Cheong, Young-Keun;Park, Ki-Hoon
    • KOREAN JOURNAL OF CROP SCIENCE / v.57 no.4 / pp.331-336 / 2012
  • This study was carried out to evaluate a wheat cultivation system that reduces costs and mechanizes wheat production. A field study was conducted for two years (2009 to 2010) at the National Institute of Crop Science, Iksan, Korea. Two working systems were compared. Working system I used a multi-function machine attached to a tractor with a spreader (seeding, fertilization, seed coverage, and weed control), and working system II used a multi-function machine attached to a tractor that performs seeding, fertilization, and seed coverage simultaneously at the seeding stage. The operation time from sowing to harvesting was 118 hours/ha for conventional planting. Working system I, combining the multi-function machine and a combine harvester with a tractor, required 26 hours/ha, and working system II required 18 hours/ha; the corresponding labor savings were 78% and 85%, respectively. Wheat growth and yield under working systems I and II were lower than under conventional planting, so the appropriate seeding rate for the multi-function machine needs further study. Considering mechanization cost, the break-even area was 3.7 ha for working system I and 4.2 ha for working system II. Farm income was enhanced by working system I (778,110 won/ha) and working system II (849,930 won/ha). The results show that applying a multi-function machine lowers the cost of wheat production.
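The reported savings follow directly from the operation times given above; a quick arithmetic check using only the abstract's figures:

```python
# Labour-saving check from the abstract's figures (hours per hectare)
conventional = 118    # sowing to harvest, conventional planting
system_1 = 26         # working system I: multi-function machine + combine
system_2 = 18         # working system II: simultaneous seeding machine

for name, hours in [("working system I", system_1), ("working system II", system_2)]:
    saving = (conventional - hours) / conventional * 100
    print(f"{name}: {hours} h/ha, about {saving:.0f}% less labour")   # 78% and 85%
```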

Lip Contour Detection by Multi-Threshold (다중 문턱치를 이용한 입술 윤곽 검출 방법)

  • Kim, Jeong Yeop
    • KIPS Transactions on Software and Data Engineering / v.9 no.12 / pp.431-438 / 2020
  • In this paper, a method to extract the lip contour using multiple thresholds is proposed. Spyridonos et al. proposed a method to extract the lip contour as follows. The first step is to obtain the Q image by transforming RGB into YIQ. The second step is to find the lip corner points by change-point detection and to split the Q image into upper and lower parts at the corner points. Candidate lip contours are obtained by applying a threshold to the Q image. For each candidate contour, a feature variance is calculated, and the contour with the maximum variance is adopted as the final contour. The feature variance 'D' is based on the absolute differences near the contour points. The conventional method has three problems. The first concerns the lip corner points: the variance calculation depends on many skin pixels, which decreases accuracy and affects the split of the Q image. Second, there is no analysis of color systems other than YIQ; YIQ is a good choice, but other color systems such as HSV, CIELUV, and YCrCb should also be considered. The final problem concerns the selection of the optimal contour: the selection uses the maximum of the average feature variance over the pixels near the contour points, and this criterion shrinks the extracted contour compared with the ground-truth contour. To solve the first problem, the proposed method excludes some of the skin pixels, giving a 30% performance increase. For the second problem, the HSV, CIELUV, and YCrCb coordinate systems were tested, and no dependency of the conventional method on the color system was found. For the final problem, the maximum of the total sum of the feature variance is adopted instead of the maximum of the average feature variance, giving a 46% performance increase. Combining all of these solutions, the proposed method is about twice as accurate and stable as the conventional method.
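The core of the multi-threshold approach can be sketched as follows: convert RGB to YIQ, threshold the Q channel at several levels to obtain candidate contours, and score each candidate by the total (not average) edge response along its boundary. The threshold levels and the gradient-based score are illustrative stand-ins for the paper's feature variance 'D', whose exact definition is not given in the abstract:

```python
import cv2
import numpy as np

bgr = cv2.imread("mouth_region.jpg").astype(np.float32) / 255.0   # assumed input crop
b, g, r = cv2.split(bgr)
q = 0.211 * r - 0.523 * g + 0.312 * b                # Q channel of YIQ

# Edge-strength image used as a stand-in for the feature variance 'D'
grad = cv2.magnitude(cv2.Sobel(q, cv2.CV_32F, 1, 0),
                     cv2.Sobel(q, cv2.CV_32F, 0, 1))

best_score, best_contour = -1.0, None
for t in np.linspace(q.min(), q.max(), 20)[1:-1]:    # multiple thresholds on Q
    mask = (q > t).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    for c in contours:
        pts = c.reshape(-1, 2)
        score = grad[pts[:, 1], pts[:, 0]].sum()     # total sum, not the mean
        if score > best_score:
            best_score, best_contour = score, c

print("best contour length:", 0 if best_contour is None else len(best_contour))
```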

Using the fusion of spatial and temporal features for malicious video classification (공간과 시간적 특징 융합 기반 유해 비디오 분류에 관한 연구)

  • Jeon, Jae-Hyun;Kim, Se-Min;Han, Seung-Wan;Ro, Yong-Man
    • The KIPS Transactions:PartB / v.18B no.6 / pp.365-374 / 2011
  • Recently, malicious video classification and filtering techniques have become of practical interest, as users can easily access malicious multimedia content through the Internet, IPTV, online social networks, etc. Considerable research effort has been devoted to developing malicious video classification and filtering systems. However, malicious video classification and filtering are still not mature in terms of reliable performance. In particular, most conventional approaches have been limited to using only spatial features (such as the ratio of skin regions and bags of visual words) for malicious image classification, which has restricted the achievable classification and filtering performance. To overcome this limitation, we propose a new malicious video classification framework that exploits both spatial and temporal features readily extracted from a sequence of video frames. In particular, we develop effective temporal features based on motion periodicity and temporal correlation. In addition, to find the best way to combine the spatial and temporal features, representative data fusion approaches are applied to the proposed framework. To demonstrate the effectiveness of our method, we collected 200 sexual intercourse videos and 200 non-sexual intercourse videos. Experimental results show that the proposed method increases the classification accuracy for sexual intercourse videos by 3.75 percentage points (from 92.25% to 96%). Furthermore, among the fusion schemes, the feature-level fusion of spatial and temporal features achieved the best classification accuracy.
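Feature-level fusion, which the experiments found best, amounts to concatenating the per-video spatial and temporal descriptors and training one classifier on the joint vector. A minimal sketch with placeholder features (the paper's actual descriptors and classifier settings are not reproduced here):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_videos = 400                                    # e.g. 200 positive + 200 negative
spatial = rng.normal(size=(n_videos, 64))         # placeholder spatial descriptors
temporal = rng.normal(size=(n_videos, 32))        # placeholder temporal descriptors
labels = np.repeat([0, 1], n_videos // 2)

fused = np.hstack([spatial, temporal])            # feature-level fusion: one joint vector
clf = SVC(kernel="rbf", C=1.0)
print("cv accuracy:", cross_val_score(clf, fused, labels, cv=5).mean())
```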

Green Manuring Effect of Pure and Mixed Barley-Hairy Vetch on Rice Production (보리-헤어리베치 단파 및 혼파가 벼 수량에 미치는 영향)

  • Kim, Tae-Young;Kim, Song-Yeob;Alam, Faridul;Lee, Yong-Bok
    • Korean Journal of Environmental Agriculture / v.32 no.4 / pp.268-272 / 2013
  • BACKGROUND: Mixtures of legumes and non-legumes can be an efficient way to combine the benefits of the single species in cover crop practice. However, there is a lack of information on how the species proportion affects N accumulation and how this influences nitrogen use in subsequent rice production. METHODS AND RESULTS: In this study, barley and hairy vetch were selected as green manures. Pure stands and mixtures with different seeding ratios were tested for green manure N accumulation and the following rice cultivation. Total aboveground biomass and N accumulation of the mixtures were higher than those of pure barley and pure hairy vetch. Among the mixtures, the highest aboveground biomass (8.07 Mg/ha) and N accumulation (131 kg/ha) were observed in B75H25 (barley 75% + hairy vetch 25%). The N accumulation of the mixtures ranged from 99 kg/ha to 131 kg/ha, much higher than the recommended amount (90 kg/ha) for rice. All mixtures (barley 75% + hairy vetch 25%, barley 50% + hairy vetch 50%, barley 25% + hairy vetch 75%) produced 7-8% more rice yield than the conventional cultivation (NPK). The rice yield of the barley monocrop was 4% less than that of NPK. CONCLUSION(S): Adopting mixtures of barley and hairy vetch could be an efficient strategy for rice production as an alternative to nitrogen fertilizer.

Development of Mobile Cloud Computing Client UI/UX based on Open Source SPICE (오픈소스 SPICE 기반의 모바일 클라우드 컴퓨팅 클라이언트 UI/UX 개발)

  • Jo, Seungwan;Oh, Hoon;Shim, Kyusung;Shim, Kyuhyun;Lee, Jongmyung;An, Beongku
    • Journal of the Institute of Electronics and Information Engineers / v.53 no.8 / pp.85-92 / 2016
  • Mobile cloud computing (MCC) is not just an extension of cloud concepts into mobile environments; it is a service technology that lets all mobile devices, including smartphones, use desired services through cloud technology without the constraints of time and space. A lot of work on mobile cloud computing is currently being carried out, whereas user interfaces have received comparatively little attention. The main features and contributions of this paper are as follows. First, we develop a UI that considers UX and differs from the conventional interfaces supported by SPICE. Second, we combine the two-button interface into a one-button interface when the keyboard is used in mobile cloud computing clients. Third, we develop a mouse interface suitable for mobile cloud computing clients. Fourth, we solve the problem of selecting buttons, files, and folders located at the corners of the screen in mobile cloud computing clients. Finally, we remap the mouse scroll function from the volume buttons to a touch-screen scroll interface. The performance evaluation shows that users can provide input easily with the enlarged, fixed mouse interface. Since shortcut keys are provided instead of complex keyboard button sequences, input that previously took 3-6 steps is reduced to a single step, which simply supports complex key and mouse input for users.

Far Distance Face Detection from The Interest Areas Expansion based on User Eye-tracking Information (시선 응시 점 기반의 관심영역 확장을 통한 원 거리 얼굴 검출)

  • Park, Heesun;Hong, Jangpyo;Kim, Sangyeol;Jang, Young-Min;Kim, Cheol-Su;Lee, Minho
    • Journal of the Institute of Electronics and Information Engineers / v.49 no.9 / pp.113-127 / 2012
  • Face detection methods based on image processing have been proposed in many different forms. The most widely used method is the Adaboost approach proposed by Viola and Jones, which uses Haar-like features for image learning; its detection performance depends on the learned images. It performs well for face images within a certain distance range, but if the subject is far from the camera, the face image becomes so small that it may not be detected with the pre-learned Haar-like features. In this paper, we propose a far-distance face detection method that combines the Viola-Jones Adaboost detector with a saliency map and the user's attention information. The saliency map is used to select candidate face regions in the input image, and faces are then detected among the candidate regions using the Adaboost detector with Haar-like features learned in advance. The user's eye-tracking information is used to select the regions of interest. When a subject is so far from the camera that the face is difficult to detect, we expand the small region around the eye-gaze point using linear interpolation and reuse it as the input image, which increases face detection performance. We confirmed that the proposed model gives better results than the conventional Adaboost in terms of both detection performance and computational time.
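The gaze-driven expansion step can be sketched as cropping a window around the gaze point, enlarging it with linear interpolation, and running the detector again on the enlarged crop. The gaze coordinates, window size, scale factor, and the stock Haar cascade below are illustrative assumptions standing in for the trained Adaboost detector:

```python
import cv2

frame = cv2.imread("scene.jpg")                   # assumed input frame
cascade = cv2.CascadeClassifier(cv2.data.haarcascades +
                                "haarcascade_frontalface_default.xml")

gx, gy, half = 640, 360, 80                       # assumed gaze point and window half-size
x0, y0 = max(gx - half, 0), max(gy - half, 0)
roi = frame[y0:gy + half, x0:gx + half]

# Linear interpolation enlarges the small, distant face before re-detection
roi_big = cv2.resize(roi, None, fx=4, fy=4, interpolation=cv2.INTER_LINEAR)
faces = cascade.detectMultiScale(cv2.cvtColor(roi_big, cv2.COLOR_BGR2GRAY), 1.1, 4)

# Map detections back to original frame coordinates (undo the 4x scaling and crop offset)
for (x, y, w, h) in faces:
    print(x0 + x // 4, y0 + y // 4, w // 4, h // 4)
```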

Synthesis and Application of Bluish-Green BaSi2O2N2:Eu2+ Phosphor for White LEDs (백색 LED용 청록색 BaSi2O2N2:Eu2+ 형광체의 합성 및 응용)

  • Jee, Soon-Duk;Choi, Kang-Sik;Choi, Kyoung-Jae;Kim, Chang-Hae
    • Korean Journal of Materials Research / v.21 no.5 / pp.250-254 / 2011
  • We have synthesized bluish-green, highly efficient $BaSi_2O_2N_2:Eu^{2+}$ and $(Ba,Sr)Si_2O_2N_2:Eu^{2+}$ phosphors through a conventional solid-state reaction method using metal carbonates, $Si_3N_4$, and $Eu_2O_3$ as raw materials. The X-ray diffraction (XRD) patterns of these phosphors revealed that a $BaSi_2O_2N_2$ single phase was obtained. The excitation and emission spectra showed typical broadband excitation and emission resulting from the 5d to 4f transition of $Eu^{2+}$. These phosphors absorb blue light at around 450 nm and emit bluish-green luminescence with a peak wavelength at around 495 nm. In an Eu concentration-quenching experiment, the relative PL intensity was reduced dramatically for Eu = 0.033. A small substitution of Sr in place of Ba increased the relative emission intensity of the phosphor. We prepared several white LEDs by combining $BaSi_2O_2N_2:Eu^{2+}$, YAG:$Ce^{3+}$, and silicone resin with a blue InGaN-based LED. For the LED converted with YAG:$Ce^{3+}$ only, the color rendering index was 73.4 and the efficiency was 127 lm/W. In contrast, in the LED converted with both YAG:$Ce^{3+}$ and $BaSi_2O_2N_2:Eu^{2+}$, two distinct emission bands from the InGaN chip (450 nm) and the two phosphors (475-750 nm) are observed and combine to give a spectrum that appears white to the naked eye. The color rendering index ranged from 79.7 to 81.2 and the efficiency from 117 to 128 lm/W. The increased color rendering index values indicate that the two-phosphor-converted LEDs have improved bluish-green emission compared to the YAG:Ce-converted LED. As such, the $BaSi_2O_2N_2:Eu^{2+}$ phosphor is applicable to high-color-rendering white LEDs for solid-state lighting.

The New Calculation Model of Film Thickness to Evaluate Asphalt Mixtures (아스팔트혼합물을 평가하기 위한 유효아스팔트 함량의 새로운 계산 모델)

  • Kim, Sung-Ho;Kim, Boo-Il
    • International Journal of Highway Engineering / v.9 no.1 s.31 / pp.57-67 / 2007
  • Many recent studies have discussed film thickness as a good substitute for, or supplement to, VMA and other volumetric criteria in the mix design procedure. Some researchers have not only proposed specific recommended film thickness values but also introduced new calculation procedures and concepts. Each model (the index model and the virtual model) has its own advantages and disadvantages in terms of its ability to account for the volumetric properties of the mixture. In this paper, a modified virtual model is proposed to combine the advantages of both models. However, the question of how to determine appropriate particle shape factors for different aggregate sources and sizes cannot be disregarded. To evaluate the different calculation methods, mixtures with two aggregate sources and eight gradations were designed based on the dominant aggregate size range (DASR) porosity concept. The Superpave indirect tensile test (IDT) and the asphalt pavement analyzer (APA) test were used to describe the performance of the mixtures. Test results indicated that the virtual model, which is identical to the modified virtual model for the 1:1 sphere case, is better than the conventional standard model for defining the film thickness range that yields better asphalt mixture performance.
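For orientation only: the abstract does not reproduce the models' formulas, but the conventional index-model film thickness that the paper compares against is commonly written as $TF = \frac{V_{be}}{SA \cdot W_{s}} \times 1000$, where $TF$ is the average film thickness in micrometers, $V_{be}$ the effective binder volume in liters, $SA$ the aggregate surface area per unit mass (m²/kg) obtained from gradation surface-area factors, and $W_{s}$ the aggregate mass in kilograms. This is standard background, not the paper's modified virtual model.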
