• Title/Summary/Keyword: low-power systems

Search Results: 2,389

Simulation of Drying Grain with Solar-Heated Air (태양에너지를 이용한 곡물건조시스템의 시뮬레이션에 관한 연구)

  • 금동혁;김용운
    • Journal of Biosystems Engineering
    • /
    • v.4 no.2
    • /
    • pp.65-83
    • /
    • 1979
  • Low-temperature drying systems have been extensively used for drying cereal grains such as shelled corn and wheat. Since the 1973 energy crisis, many studies have been conducted on applying solar energy as supplemental heat to natural-air drying systems. However, little research on rough rice drying has been done in this area, and very little in Korea. In designing a solar drying system, quality loss, airflow requirements, the temperature rise of the drying air, fan power, and energy requirements should be thoroughly studied. The factors affecting solar drying systems are airflow rate, initial moisture content, the amount of heat added to the drying air, the fan operation method, and the weather conditions. The major objectives of this study were to analyze the effects of these performance factors and to determine design parameters such as airflow requirements, optimum bed depth, optimum temperature rise of the drying air, fan operation method, and collector size. Three-hourly observations based on four years of weather data for the Chuncheon area were used to simulate rough rice drying. The results can be summarized as follows: 1. Statistical analysis indicated that the experimental and predicted values of the temperature rise of the air passing through the collector agreed well. 2. Equilibrium moisture content was affected little by airflow rate and mainly by the amount of heat added to the drying air; it ranged from 12.2 to 13.2 percent wet basis for continuous fan operation and from 10.4 to 11.7 percent wet basis for intermittent fan operation, over an average drying-air temperature rise of 1.6 to 5.9 degrees Celsius. 3. Average moisture content when the top layer was dried to 15 percent wet basis ranged from 13.1 to 13.9 percent wet basis for continuous fan operation and from 11.9 to 13.4 percent wet basis for intermittent fan operation, over an average temperature rise of 1.6 to 5.9 degrees Celsius and initial moisture contents of 18 to 24 percent wet basis. The results indicated that grain was overdried with intermittent fan operation at every temperature rise; therefore, continuous fan operation is usually more effective with respect to overdrying. 4. For continuous fan operation, the average temperature rise of the drying air should be limited to 2.2 to 3.3 degrees Celsius, considering a safe storage moisture level of 13.5 to 14 percent wet basis. 5. Required drying time decreased by 40 to 50 percent each time the airflow rate was doubled and by approximately 3.9 to 4.3 percent for each one degree Celsius of average temperature rise, regardless of the fan operation method; thus the average temperature rise of the drying air had little effect on required drying time. 6. Required drying time increased by approximately 18 to 30 percent for each 2 percent increase in initial moisture content, regardless of the fan operation method, over the range of 18 to 24 percent moisture. 7. Intermittent fan operation showed about a 36 to 42 percent decrease in required drying time compared with continuous fan operation. 8. Dry matter loss decreased by 34 to 46 percent each time the airflow rate was doubled and by approximately 2 to 3 percent for each one degree Celsius of average temperature rise, regardless of the fan operation method; thus the average temperature rise had little effect on dry matter loss. 9. Dry matter loss increased by approximately 50 to 78 percent for each 2 percent increase in initial moisture content, over the range of 18 to 24 percent moisture. 10. Intermittent fan operation showed about a 40 to 50 percent increase in dry matter loss compared with continuous fan operation, and the rate of increase was higher at high levels of initial moisture and average temperature rise. 11. Year-to-year weather conditions had little effect on required drying time and dry matter loss. 12. Equations for estimating the time required to dry the top layer to 16 and 15 percent wet basis and the dry matter loss were derived as functions of the performance factors by the least-squares method. 13. Minimum airflow rates based on 0.5 percent dry matter loss were estimated; minimum airflow rates for intermittent fan operation were approximately 1.5 to 1.8 times those for continuous fan operation, with only small year-to-year differences. 14. Required fan horsepower and energy for intermittent fan operation were 3.7 and 1.5 times, respectively, those for continuous fan operation. 15. Continuous fan operation may be more effective than intermittent fan operation considering overdrying, fan horsepower requirements, and energy use. 16. A method for estimating the required collection area of a flat-plate solar collector from the average temperature rise and airflow rate was presented (see the sketch after this entry).

  • PDF
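As a rough illustration of finding 16 above, the required collection area of a flat-plate collector can be estimated from an energy balance on the drying air. This is a minimal sketch under assumed air properties, collector efficiency, and insolation, not the regression-based procedure derived in the paper.

```python
# Illustrative collector-area estimate from airflow rate and desired
# temperature rise of the drying air (simple energy balance, not the
# paper's derived equations). All numeric inputs are assumed placeholders.

RHO_AIR = 1.2      # kg/m^3, approximate air density
CP_AIR = 1006.0    # J/(kg K), specific heat of air at constant pressure

def collector_area(airflow_m3_per_s, delta_t_c, insolation_w_per_m2, efficiency):
    """Area (m^2) needed so the collector delivers the heat that raises
    the drying air by delta_t_c at the given airflow rate."""
    heat_demand_w = RHO_AIR * airflow_m3_per_s * CP_AIR * delta_t_c
    return heat_demand_w / (efficiency * insolation_w_per_m2)

if __name__ == "__main__":
    # Example: 1.0 m^3/s of air, 3 C average rise, 500 W/m^2 average daytime
    # insolation, 40 % collector efficiency (all placeholder values).
    print(f"required area: {collector_area(1.0, 3.0, 500.0, 0.40):.1f} m^2")
```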

Roles of Perceived Use Control consisting of Perceived Ease of Use and Perceived Controllability in IT acceptance (정보기술 수용에서 사용용이성과 통제가능성을 하위 차원으로 하는 지각된 사용통제의 역할)

  • Lee, Woong-Kyu
    • Asia pacific journal of information systems
    • /
    • v.18 no.2
    • /
    • pp.1-14
    • /
    • 2008
  • According to the technology acceptance model (TAM), one of the most important research models for explaining IT users' behavior, the intention to use IT is determined by its usefulness and ease of use. However, while TAM has been considered a very good model for predicting intention, it does not explain the performance of using IT. Many people are not confident of performing well with IT until they can control it at will, even if they think it useful and easy to use. In other words, in addition to usefulness and ease of use as in TAM, controllability should also be a factor determining the acceptance of IT. In particular, there is a very close relationship between controllability and ease of use, each of which captures one side of control over the performance of using IT, the construct known in social psychology as perceived behavioral control (PBC). The objective of this study is to identify the relationship between ease of use and controllability and to analyze the effects of these two beliefs on performance and intention in using IT. For this purpose, we review the issues related to PBC in information systems research as well as in social psychology. Based on this review, we suggest a research model that includes the relationship between control and performance in using IT, and test its validity empirically. Since PBC was introduced in the theory of planned behavior (TPB) as a variable explaining volitional control over actions, there has been confusion about its concept despite its important role in predicting many kinds of actions. Some studies define PBC as self-efficacy, the actor's perception of the difficulty or ease of an action, while others define it as controllability. However, this confusion does not imply a conceptual contradiction but rather reflects the two-faceted nature of PBC, since the performance of an action is related to both self-efficacy and controllability. In other words, the two concepts are distinct yet correlated, and PBC should therefore be considered a composite concept consisting of self-efficacy and controllability. Use of IT has also been an important area of prediction by PBC. Most such studies either compare the predictive power of TAM and TPB or modify TAM by including PBC as another belief alongside usefulness and ease of use. Interestingly, unlike other applications in social psychology, such confusion about the concept of PBC is hard to find in studies of IT use; in most of them, controllability is adopted as PBC, since the concept of self-efficacy is already explicitly included in ease of use. Based on these discussions, we suggest perceived use control (PUC), defined as the perception of control over the performance of using IT and composed of controllability and ease of use as sub-concepts. We propose a research model of IT acceptance that includes the relationships of PUC with attitude and with the performance of using IT. For the empirical test of the research model, questionnaires were administered to two user groups: the first consists of freshmen taking a basic course on Microsoft Excel, and the second of senior students taking a course on analyzing management information with Excel. Most measurements are adapted from instruments validated in other studies, while performance is the actual midterm score in each class. As a result, the four hypotheses related to PUC are statistically supported at a very low significance level. The main contribution of this study is the suggestion of PUC through a theoretical review of PBC. Specifically, a hierarchical model of PUC is derived from rigorous studies of the relationship between self-efficacy and controllability from the PBC perspective in social psychology. The relationship between PUC and performance is another main contribution.

Verifying the Classification Accuracy for Korea's Standardized Classification System of Research F&E by using LDA(Linear Discriminant Analysis) (선형판별분석(LDA)기법을 적용한 국가연구시설장비 표준분류체계의 분류 정확도 검증)

  • Joung, Seokin;Sawng, Yeongwha;Jeong, Euhduck
    • Management & Information Systems Review
    • /
    • v.39 no.1
    • /
    • pp.35-57
    • /
    • 2020
  • Recently, research F&E (facilities and equipment) have become very important as tools and means to lead the development of science and technology. The government has continuously expanded investment budgets for R&D and research F&E, and the need for efficient operation and systematic management of research F&E built up nationwide has increased. In December 2010, the government developed and completed a standardized classification system for national research F&E. However, the accuracy and reliability of the classification information are questionable because the information is collected by having the user (researcher) directly select and register a classification code in NTIS. Therefore, in this study we applied linear discriminant analysis (LDA) and analysis of variance (ANOVA) to measure the classification accuracy of the standardized classification system (8 major classes, 54 sub-classes, 410 small classes) for national research facilities and equipment, which was established in 2010 and revised in 2015. For the analysis, we collected and used the data (50,271 cases) cumulatively registered in NTIS (National Science and Technology Information Service) over the past 10 years. This is the first scientific verification of the standardized classification system for national research facilities and equipment, which was originally built on information from similar classification systems and a small number of expert reviews from inside and outside the country. As a result of this study, the discriminant accuracy of the major classes, organized hierarchically with sub-classes and small classes, was 92.2 percent, which is very high. However, in post hoc verification through analysis of variance, the discriminating power of two of the eight major classes was rather low. It is expected that the standardized classification system of national research facilities and equipment will be improved through this study.
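The classification-accuracy check described above can be sketched with an off-the-shelf LDA classifier. The feature matrix and the eight major-class labels below are synthetic placeholders, since the abstract does not specify which record attributes were used.

```python
# Minimal sketch of measuring how well records can be discriminated into
# major classes with LDA, in the spirit of the abstract above. X (numeric
# descriptors of each equipment record) and y (major-class labels) are
# placeholders, not the paper's NTIS data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))        # hypothetical record features
y = rng.integers(0, 8, size=1000)      # hypothetical labels for 8 major classes

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=5, scoring="accuracy")
print(f"mean discriminant accuracy: {acc.mean():.3f}")
```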

Design and Implementation of a Scalable Real-Time Sensor Node Platform (확장성 및 실시간성을 고려한 실시간 센서 노드 플랫폼의 설계 및 구현)

  • Jung, Kyung-Hoon;Kim, Byoung-Hoon;Lee, Dong-Geon;Kim, Chang-Soo;Tak, Sung-Woo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.32 no.8B
    • /
    • pp.509-520
    • /
    • 2007
  • In this paper, we propose a real-time sensor node platform that guarantees the real-time scheduling of periodic and aperiodic tasks through a multitask-based software decomposition technique. Since existing sensor-network operating systems in the literature are not capable of real-time scheduling of periodic and aperiodic tasks, preemption by high-priority aperiodic tasks can block periodic tasks, so periodic tasks are likely to miss their deadlines. This paper presents a comprehensive evaluation of how to structure periodic and aperiodic task decomposition in a real-time sensor-networking platform so as to guarantee the deadlines of all periodic tasks while providing aperiodic tasks with good average response time. A case study based on real system experiments illustrates the application and efficiency of the multitask-based dynamic component execution environment in a sensor node equipped with a low-power 8-bit microcontroller, an IEEE 802.15.4-compliant 2.4 GHz RF transceiver, and several sensors. It shows that our periodic and aperiodic task decomposition technique yields efficient performance in terms of three key metrics: the deadline miss ratio of periodic tasks, the average response time of aperiodic tasks, and the processor utilization of periodic and aperiodic tasks.
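The abstract does not give the schedulability analysis itself. As a rough illustration of the kind of check a platform mixing periodic and aperiodic tasks must pass, the sketch below applies the classic rate-monotonic utilization bound to a set of made-up periodic tasks; it is a generic test, not the paper's decomposition technique.

```python
# Rate-monotonic utilization-bound test (Liu & Layland) for periodic tasks.
# Task parameters below are illustrative examples only.

def rm_schedulable(tasks):
    """tasks: list of (worst_case_exec_time, period) in the same time unit."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)   # sufficient (not necessary) condition
    return utilization, bound, utilization <= bound

if __name__ == "__main__":
    # e.g. sensor sampling, radio duty cycle, housekeeping (times in ms)
    example = [(2, 10), (5, 40), (10, 100)]
    u, b, ok = rm_schedulable(example)
    print(f"U = {u:.3f}, RM bound = {b:.3f}, schedulable: {ok}")
```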

Design of a High-Resolution Integrating Sigma-Delta ADC for Battery Capacity Measurement (배터리 용량측정을 위한 고해상도 Integrating Sigma-Delta ADC 설계)

  • Park, Chul-Kyu;Jang, Ki-Chang;Woo, Sun-Sik;Choi, Joong-Ho
    • Journal of IKEEE
    • /
    • v.16 no.1
    • /
    • pp.28-33
    • /
    • 2012
  • Recently, as mobile devices have proliferated and come to require a variety of multimedia functions, battery life has decreased, and methods for extending battery life have been proposed. To implement these methods, the status of the battery must be known exactly, which requires a high-resolution analog-to-digital converter (ADC). The existing integrating sigma-delta ADC cannot make the reset-time conversion cycle a function of the resolution; for this reason, not all digital values corresponding to the full number of bits can be expressed. To compensate for this drawback, this paper proposes that, by using an up-down counter, all digital values corresponding to the number of bits can be expressed without an additional reset-time conversion cycle that is a function of the resolution. Simulation results show that the proposed circuit achieves improved SNDR compared with conventional converters. The circuit was also designed for the low power consumption required by battery management systems and was fabricated in a 0.35 um process.
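For orientation, the sketch below is a generic behavioral model of a first-order sigma-delta modulator whose 1-bit stream is accumulated by a counter, the basic principle behind integrating sigma-delta ADCs for battery monitoring. It is not the up-down-counter circuit proposed in the paper, and the voltages and cycle count are placeholders.

```python
# Behavioral model of a first-order sigma-delta modulator plus counter.
# Over n_cycles, the fraction of 1-bits approximates vin / vref.

def sigma_delta_counts(vin, vref, n_cycles):
    """Return how many of n_cycles the 1-bit quantizer output is high,
    for 0 <= vin <= vref."""
    integrator = 0.0
    counts = 0
    for _ in range(n_cycles):
        bit = integrator >= 0.0            # 1-bit quantizer
        feedback = vref if bit else 0.0
        integrator += vin - feedback       # integrate the difference
        counts += 1 if bit else 0
    return counts

if __name__ == "__main__":
    n = 2 ** 12                            # 12-bit-style conversion window
    c = sigma_delta_counts(vin=1.23, vref=3.0, n_cycles=n)
    print(f"estimated input ~ {3.0 * c / n:.3f} V")
```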

Prediction of the Following BCI Performance by Means of Spectral EEG Characteristics in the Prior Resting State (뇌신호 주파수 특성을 이용한 CNN 기반 BCI 성능 예측)

  • Kang, Jae-Hwan;Kim, Sung-Hee;Youn, Joosang;Kim, Junsuk
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.9 no.11
    • /
    • pp.265-272
    • /
    • 2020
  • In brain-computer interface (BCI) research, one of the major problems is how to deal with the so-called BCI-illiterate group, people who cannot control the BCI system. To approach this problem efficiently, we investigated spectral EEG characteristics of the preceding resting state in association with BCI performance in the subsequent BCI task. First, spectral powers of EEG signals in the resting state were extracted under both eyes-open and eyes-closed conditions. Second, a convolutional neural network (CNN)-based binary classifier discriminated binary motor-imagery intention in the BCI task. Both linear correlation and binary prediction methods confirmed that the spectral EEG characteristics of the preceding resting state were highly related to BCI performance in the subsequent BCI task. Linear regression analysis demonstrated that the ratio of resting-state spectral power below 13 Hz to that above 13 Hz, in the eyes-open but not the eyes-closed condition, was significantly correlated with the quantified BCI performance metric (r = 0.544). A binary classifier based on linear regression with L1 regularization was able to discriminate the high-performance group from the low-performance group in the subsequent BCI task using the spectral EEG features of the preceding resting state (AUC = 0.817). These results strongly support using the spectral EEG characteristics of the frontal regions during the eyes-open resting state as a good predictor of subsequent BCI task performance.
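The resting-state feature described above, the ratio of EEG spectral power below 13 Hz to power above 13 Hz, can be estimated with Welch's method. The sketch below uses a synthetic one-channel signal and an assumed sampling rate, and leaves out channel selection and the CNN classifier.

```python
# Estimate the low/high spectral power ratio (split at 13 Hz) of a resting
# EEG segment using Welch's PSD. The signal is synthetic; the sampling rate
# and band limits are assumed values.
import numpy as np
from scipy.signal import welch

def low_high_power_ratio(eeg, fs, split_hz=13.0, fmax=40.0):
    f, pxx = welch(eeg, fs=fs, nperseg=int(2 * fs))
    band = (f > 0) & (f <= fmax)
    low = pxx[band & (f <= split_hz)].sum()
    high = pxx[band & (f > split_hz)].sum()
    return low / high

if __name__ == "__main__":
    fs = 250.0                                  # Hz, assumed sampling rate
    t = np.arange(0, 60, 1.0 / fs)              # one minute of "resting" data
    rng = np.random.default_rng(1)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)  # alpha-like + noise
    print(f"low/high power ratio: {low_high_power_ratio(eeg, fs):.2f}")
```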

Digital Logic Extraction from QCA Designs (QCA 설계에서 디지털 논리 자동 추출)

  • Oh, Youn-Bo;Kim, Kyo-Sun
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.46 no.1
    • /
    • pp.107-116
    • /
    • 2009
  • Quantum-dot cellular automata (QCA) is one of the most promising next-generation nanoelectronic devices expected to inherit the throne of CMOS, the dominant implementation technology for large-scale low-power digital systems. In the late 1990s, the basic operations of the QCA cell were demonstrated in a hardware implementation, and design tools and simulators were developed. Nevertheless, QCA design technology is not yet ready for ultra-large-scale designs. This paper proposes a new approach that enables QCA designs to inherit the verification methodologies and tools of CMOS designs as well. First, a set of disciplinary rules is proposed that strictly restricts the cell arrangement to predefined structures so that deterministic digital behavior is guaranteed. After the gate and interconnect structures of the QCA design are identified, signal integrity requirements are checked, including input path balancing of majority gates and prevention of noise amplification. The digital logic is then extracted and stored in the OpenAccess common engineering database, which provides a connection to a large pool of CMOS design verification tools. To validate the proposed approach, we designed a 2-bit adder, a bit-serial adder, and an ALU bit-slice. For each design, the digital logic was extracted, translated into a Verilog netlist, and then simulated using commercial software.
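The logic that such an extraction step recovers from a QCA layout is built from majority gates and inverters. The sketch below shows one common majority-gate full-adder composition and verifies it exhaustively; it illustrates the target logic model only, not the paper's extraction algorithm.

```python
# Majority-gate logic, the basic QCA primitive, composed into a full adder
# and checked exhaustively against ordinary binary addition.

def maj(a, b, c):
    """Three-input majority gate: 1 if at least two inputs are 1."""
    return (a & b) | (b & c) | (a & c)

def full_adder(a, b, cin):
    cout = maj(a, b, cin)
    s = maj(1 - cout, cin, maj(a, b, 1 - cin))   # one known majority-gate form
    return s, cout

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            for cin in (0, 1):
                s, cout = full_adder(a, b, cin)
                assert 2 * cout + s == a + b + cin
    print("majority-gate full adder verified for all 8 input combinations")
```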

Quantitative Evaluation of Criticality According to the Major Influence of Applied with Burnup Credit on Dual-purpose Metal Cask (국내 금속겸용용기의 연소도 이득효과 적용 시 주요영향인자에 따른 정량적 핵임계 평가)

  • Dho, Ho-seog;Kim, Tae-man;Cho, Chun-Hyung
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT)
    • /
    • v.13 no.2
    • /
    • pp.141-154
    • /
    • 2015
  • In general, conventional criticality analyses for spent fuel transport/storage systems have been performed based on the fresh-fuel assumption because of the potential uncertainties in the number density calculations of actinide nuclides and fission products in spent fuel. However, these evaluation methods cause financial losses due to an excessive criticality margin. To overcome this disadvantage, many studies have recently been conducted to design and commercialize transportation and storage casks that apply burnup credit (BUC). This study assessed criticality safety with respect to reactor operating parameters, axial burn-up profiles, and misload accident conditions, the factors most likely to affect criticality safety when BUC is applied to the dual-purpose cask under development at the Korea Radioactive Waste Agency (KORAD). As a result, it was found that criticality varied substantially with specific power and depended on conditions of low enrichment and high burn-up. Considering the end effect in the high burn-up case produced a distinctly positive result. In particular, the increase in the maximum effective multiplication factor due to misloading was 0.18467, confirming that misload is a factor that must be taken into account when applying BUC. The results of this study may therefore be used as references in developing technologies for applying BUC to domestic cask models and in establishing operational procedures to prevent misload accidents during spent fuel loading.

A New Flash Memory Package Structure with Intelligent Buffer System and Performance Evaluation (버퍼 시스템을 내장한 새로운 플래쉬 메모리 패키지 구조 및 성능 평가)

  • Lee Jung-Hoon;Kim Shin-Dug
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.32 no.2
    • /
    • pp.75-84
    • /
    • 2005
  • The goal of this research is to design a high-performance NAND-type flash memory package with a smart buffer cache that better exploits spatial and temporal locality. The proposed buffer structure in the NAND flash memory package, called a smart buffer cache, consists of three parts: a fully associative victim buffer with a small block size, a fully associative spatial buffer with a large block size, and a dynamic fetching unit. This new NAND-type flash memory package achieves dramatically higher performance and lower power consumption than conventional NAND-type flash memory. Our results show that the NAND flash memory package with a smart buffer cache reduces the miss ratio by around 70% and the average memory access time by around 67% compared with the conventional NAND flash memory configuration. Also, for a given buffer space (e.g., 3 KB), the package module with the smart buffer achieves a lower average miss ratio and average memory access time than package modules with a conventional direct-mapped buffer with eight times as much space (e.g., 32 KB) or a fully associative configuration with twice as much space (e.g., 8 KB).
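As a back-of-the-envelope view of why the reported miss-ratio reduction translates into a shorter average memory access time, the sketch below applies the standard AMAT relation with assumed buffer and flash latencies; the numbers are placeholders, not measurements from the paper.

```python
# AMAT = hit time + miss ratio * miss penalty, with assumed latencies for an
# on-package buffer hit and a NAND flash page read.

def amat(hit_time_us, miss_ratio, miss_penalty_us):
    """Average memory access time in microseconds."""
    return hit_time_us + miss_ratio * miss_penalty_us

if __name__ == "__main__":
    hit, penalty = 0.05, 25.0        # assumed buffer hit time and flash read (us)
    base_miss = 0.20                 # assumed baseline miss ratio
    smart_miss = base_miss * 0.30    # ~70 % miss-ratio reduction, as in the abstract
    print(f"baseline AMAT:     {amat(hit, base_miss, penalty):.2f} us")
    print(f"smart-buffer AMAT: {amat(hit, smart_miss, penalty):.2f} us")
```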