• Title/Summary/Keyword: Idle-time


Analysis of Production Cost of Walnut Tree Cultivation in Major Cultivating Regions (호두나무 주요 재배지역의 생산비 분석)

  • Kim, Jae-Sung; Lee, Uk
    • Journal of Korean Society of Forest Science / v.99 no.4 / pp.611-617 / 2010
  • This study aims to analyze the production cost of walnut tree cultivation; the survey targeted 163 walnut-growing households in the major producing regions. The results are as follows. Domestic walnut-growing households cultivated an average of about 0.7 ha, and the average planting density was 204 trees/ha, considerably denser than the standard of 100 trees/ha. The most widely grown cultivar in each region was Sangchon in Chungbuk (65.7%), Kwangduk in Chungnam (68.6%), Sangchon in Jeonbuk (98.0%), and Daeboo in Gyeongbuk (61.2%). The production cost of walnut cultivation can be divided into management costs (4,436 thousand won/ha), covering manufacturing cost (292 thousand won/ha), intermediate material cost (3,682 thousand won/ha), rent (103 thousand won/ha), hired labor cost (653 thousand won/ha), and so on, and self-supplied expenses, covering own labor cost (5,834 thousand won/ha), land service cost (490 thousand won/ha), fixed capital cost (834 thousand won/ha), and circulating capital cost (234 thousand won/ha). Total production cost was 11,820 thousand won/ha; income was 11,586 thousand won/ha (income rate 72.3%) and net income was 4,196 thousand won/ha (net income rate 26.2%), a relatively high level of income. The harvested walnuts were marketed through Nonghyup (39.8%), wholesalers (20.8%), and dealers (19.8%), and direct sales to consumers have recently increased to 18.9%. Because walnut orchards are mostly established on idle land, the cultivated area per household is small, and the sideline character of the enterprise limits local production policies and industry promotion strategies. Therefore, if a basic database restricted to full-time cultivating households is established, competitiveness against imported walnuts could be strengthened and a new, improved distribution system could be introduced. Such a database can also be expected to contribute greatly to increasing household income.
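The cost and income figures above fit together if gross revenue is taken as production cost plus net income; that gross-revenue value is an assumption of this sketch, not a number stated in the abstract. A minimal Python check:

    # Illustrative consistency check of the reported figures (thousand won per ha).
    # Assumption: gross revenue = production cost + net income; the abstract does
    # not state gross revenue explicitly.
    management_cost = 4_436                     # manufacturing, materials, rent, hired labor, etc.
    self_supplied   = 5_834 + 490 + 834 + 234   # own labor, land, fixed and circulating capital
    production_cost = 11_820                    # as reported; roughly management_cost + self_supplied
    net_income      = 4_196                     # as reported

    gross_revenue = production_cost + net_income        # assumed: 16,016
    income        = gross_revenue - management_cost     # ~11,580 vs. 11,586 reported

    print(f"income rate     = {income / gross_revenue:.1%}")      # ~72.3%
    print(f"net income rate = {net_income / gross_revenue:.1%}")  # ~26.2%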

A Study on the Consideration of the Locations of Gyeongju Oksan Gugok and Landscape Interpretation - Focusing on the Arbor of Lee, Jung-Eom's "Oksan Gugok" - (경주 옥산구곡(玉山九曲)의 위치비정과 경관해석 연구 - 이정엄의 「옥산구곡가」를 중심으로 -)

  • Peng, Hong-Xu; Kang, Tai-Ho
    • Journal of the Korean Institute of Traditional Landscape Architecture / v.36 no.3 / pp.26-36 / 2018
  • This study examines the locational setting and landscape characteristics of Oksan Gugok through a literature review, field survey, and digital analysis. Oksan Gugok is a nine-bend scenic course set along Oksan Creek (玉山川), also called Jagye Creek (紫溪川, 紫玉山), which flows in front of the Oksan Memorial Hall dedicated to Yi Eon-jeok (李彦迪). We first ascertained the location and configuration of Oksan Gugok, and then confirmed the precise location of each bend using digital topographic maps provided by Google Earth Pro and the Geographic Information Center, together with coordinates measured with a Trimble Juno SB GPS receiver. Based on the literature review and field investigation, the results are as follows. First, Yi Eon-jeok did not himself designate Oksan Gugok, nor did he compose an "Oksan Gugok-ga" (song). Lee Ya-sun (李野淳), a ninth-generation descendant of Toegye Yi Hwang, visited the site in the spring of 1823 (the 23rd year of King Sunjo), about 270 years after Yi Eon-jeok's time; at his proposal, Lee Jung-eom (李鼎儼), Lee Jung-gi (李鼎基), and Lee Jung-byeong (李鼎秉) designated the nine bends and composed the Oksan Gugok-ga. The travel essay written by Lee Jung-gi proved to be crucial material for describing the circumstances under which Oksan Gugok was established. Second, in most cases a gugok landscape was managed by Confucian scholars rather than by ordinary people; the creation of the Oksan Gugok-ga can be regarded as an expression of pride that the descendants of Yi Eon-jeok and Yi Hwang, and later generations of Confucian scholars, had inherited the Neo-Confucian tradition. Third, Lee Jung-eom's "Oksan Donghaenggi" contains a detailed account of the visit and of the process of composing the songs. Fourth, the locations of the first to ninth bends were re-examined. In particular, the eighth and ninth bends lie at irregular intervals: the eighth bend was identified at 36°01′08.60″N, 129°09′31.20″E, and the ninth bend, the rock Sainam below the point where the eastern and western valleys converge, was estimated at 36°01′19.79″N, 129°09′30.26″E. Fifth, the landscape elements presented in Lee Jung-eom's Oksan Gugok-ga were classified into formal, semantic, and climatic elements; they convey the ideal of mountains and water, the feeling of being at leisure in nature, and a sense of freedom. Sixth, an examination of the types and frequencies of the landscape elements showed that "water" and "mountain" were the dominant factors emphasizing the curved stream setting in Lee Jung-eom's verses. The gugok songs therefore reflect both the Taoist idea of immortals (神仙思想) and the idea of seclusion (隱居思想), and the harmony between landscape and thought conveys a view of nature in which the self becomes one with its surroundings.

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk; Kim, Taeyeon; Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.1-25 / 2020
  • In this paper, we propose an application system architecture that provides accurate, fast, and efficient automatic gasometer reading. The system captures a gasometer image with a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount using selective optical character recognition based on deep learning. In general, an image contains many kinds of characters, and conventional optical character recognition extracts all of them. Some applications, however, need to ignore characters that are not of interest and focus only on specific types. For example, an automatic gasometer reading system only needs to extract the device ID and the gas usage amount from meter images in order to bill users; strings such as the device type, manufacturer, manufacturing date, and specification are not valuable to the application. The application therefore has to analyze only the regions of interest and the specific character types needed to extract valuable information. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, analyzing only the regions of interest for character extraction. We built three neural networks for the application system: the first is a convolutional neural network that detects the regions of interest containing the gas usage amount and device ID strings; the second is a convolutional network that transforms each region of interest into a sequence of spatial feature vectors; and the third is a bidirectional long short-term memory (LSTM) network that converts these sequential features into character strings (a sketch of this recognition pipeline follows the abstract). In this research, the strings of interest are the device ID, consisting of 12 Arabic-numeral characters, and the gas usage amount, consisting of 4 to 5 Arabic-numeral characters. All system components are implemented on the Amazon Web Services (AWS) cloud with Intel Xeon E5-2686 v4 CPUs and NVIDIA Tesla V100 GPUs. The architecture adopts a master-slave processing structure for efficient, fast parallel processing, coping with about 700,000 requests per day. A mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes each reading request into an input queue with a FIFO (First In, First Out) structure. The slave process, which consists of the three deep neural networks and runs on the NVIDIA GPU, continuously polls the input queue for recognition requests. When a request arrives, the slave process converts the image into the device ID string, the gas usage amount string, and the position information of these strings, returns the results to an output queue, and switches back to idle mode to poll the input queue (a sketch of this queue-based worker loop also follows the abstract). The master process retrieves the final information from the output queue and delivers it to the mobile device. A total of 27,120 gasometer images were used for training, validation, and testing of the three deep neural networks: 22,985 images for training and validation, and 4,135 images for testing. For each training epoch, the 22,985 images were randomly split into training and validation sets at an 8:2 ratio. The 4,135 test images were categorized into five types (normal, noise, reflex, scale, and slant): normal data are clean images; noise means images with noise; reflex means images with light reflection in the gasometer region; scale means images with small objects due to long-distance capture; and slant means images that are not horizontally level. The final character-string recognition accuracies for the device ID and the gas usage amount on normal data are 0.960 and 0.864, respectively.
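The recognition stage referenced in the abstract (CNN features fed to a bidirectional LSTM) follows the standard CRNN pattern. The PyTorch sketch below only illustrates that pattern: the layer sizes, input resolution, and class set (ten digits plus a CTC blank) are assumptions and do not reproduce the paper's actual networks, and the region-of-interest detector is omitted.

    # Minimal CRNN sketch (illustrative; sizes and names are assumptions, not the
    # paper's architecture). It maps a cropped region-of-interest image to per-step
    # digit scores: CNN feature map -> column-wise feature sequence -> BiLSTM.
    import torch
    import torch.nn as nn

    class CRNN(nn.Module):
        def __init__(self, num_classes=11):  # 10 digits + 1 CTC blank (assumed)
            super().__init__()
            # CNN: turns the grayscale ROI crop into a feature map whose width acts as time steps.
            self.cnn = nn.Sequential(
                nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            )
            # Bidirectional LSTM: reads the feature columns forward and backward.
            self.rnn = nn.LSTM(input_size=128 * 8, hidden_size=256,
                               bidirectional=True, batch_first=True)
            self.fc = nn.Linear(2 * 256, num_classes)

        def forward(self, x):                    # x: (batch, 1, 32, W) grayscale crop
            f = self.cnn(x)                      # (batch, 128, 8, W/4)
            f = f.permute(0, 3, 1, 2)            # one feature column per horizontal step
            f = f.flatten(2)                     # (batch, W/4, 128*8) spatial feature sequence
            seq, _ = self.rnn(f)                 # (batch, W/4, 512)
            return self.fc(seq)                  # per-step class scores (decoded e.g. with CTC)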
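The master-slave queueing behaviour can likewise be sketched as a polling worker: the slave repeatedly reads from a FIFO input queue, runs recognition, and pushes results to an output queue. The queue objects and the recognize() helper below are hypothetical placeholders, not the authors' implementation.

    # Hypothetical sketch of the slave worker loop described in the abstract.
    import queue

    input_queue = queue.Queue()    # FIFO: reading requests pushed by the master process
    output_queue = queue.Queue()   # results read back by the master process

    def recognize(image):
        """Placeholder for ROI detection + CRNN recognition on the GPU."""
        return {"device_id": "000000000000", "usage": "0000", "positions": []}

    def slave_loop():
        while True:
            try:
                request = input_queue.get(timeout=1.0)   # poll for a reading request
            except queue.Empty:
                continue                                  # idle mode: keep polling
            result = recognize(request["image"])          # device ID, usage amount, positions
            output_queue.put({"request_id": request["request_id"], **result})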