• Title/Summary/Keyword: Storage Units


Effects of Feeding Earthworm Meal on the Egg Quality and Performance of Laying Hens (지렁이 분말의 급여가 계란의 품질 및 산란계의 생산성에 미치는 영향)

  • Son J.H.
    • Korean Journal of Poultry Science / v.33 no.1 / pp.41-47 / 2006
  • A study was conducted to investigate the effect of supplementing earthworm meal (EWM) on the egg quality and performance of laying hens. A total of 360 laying hens at 55 weeks of age were fed experimental diets containing 0 (control), 0.3, or 0.6% EWM for 5 weeks. Eggs were collected and weighed every day; egg production and feed conversion were recorded weekly, while egg quality was measured in the last week of the experimental period. When fed either 0.3 or 0.6% EWM, egg production and daily egg mass tended to increase, but did not differ between the two treatments. Feed intake and feed conversion ratio did not differ among the three groups. Egg shell thickness, breaking strength, shell color, and yolk color tended to improve in both the 0.3 and 0.6% EWM groups compared with the control. Haugh units (HU) showed no difference among treatments at day 14 after lay, but were higher in the EWM treatments than in the control over the storage period. As, Cd, Cr, Hg, and Pb were detected at 4.41, 1.23, 1.18, 0.00, and 3.39 ppm in EWM, respectively, but were not detected in the control diet. These results suggest that supplementing 0.3% earthworm meal in the diet of 55-week-old laying hens improves laying performance and egg quality.
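
The Haugh unit used above has a standard closed-form definition, HU = 100·log₁₀(h − 1.7·w^0.37 + 7.6), where h is the albumen height in mm and w the egg weight in grams. A minimal sketch of the computation, with illustrative measurements (not values from the study):

```python
import math

def haugh_unit(albumen_height_mm: float, egg_weight_g: float) -> float:
    # Standard Haugh unit formula: HU = 100 * log10(h - 1.7 * w**0.37 + 7.6)
    return 100 * math.log10(albumen_height_mm
                            - 1.7 * egg_weight_g ** 0.37 + 7.6)

# Illustrative measurement: a 60 g egg with 7.0 mm albumen height
print(round(haugh_unit(7.0, 60.0), 1))  # → 83.7
```

Fresher eggs have taller albumen and thus higher HU, which is why HU falls over the storage period tracked in the study.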

On-line Handwriting Chinese Character Recognition for PDA Using a Unit Reconstruction Method (유닛 재구성 방법을 이용한 PDA용 온라인 필기체 한자 인식)

  • Chin, Won;Kim, Ki-Doo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.1 / pp.97-107 / 2002
  • In this paper, we propose an on-line handwritten Chinese character recognition system for mobile personal digital assistants (PDAs). We focus on developing an algorithm with high recognition performance under the constraint that a PDA offers small memory storage and less computational power than a PC. We therefore use an index matching method, which has a computational advantage for fast recognition, and propose a unit reconstruction method that minimizes the memory needed to store the character models and accommodates per-writer variation in stroke order and stroke count. We build a standard model of 1,800 characters from a set of pre-defined units. After preprocessing and feature extraction, input data are scored by similarity against candidate characters selected on the basis of stroke counts and region features. We consider the 1,800 Chinese characters taught in Korean middle and high schools, using character sets from five writers in printed style, irrespective of stroke order and stroke count. Experimentally, we obtained an average recognition time of 0.16 seconds per character and a recognition rate of 94.3% on a MIPS R4000 CPU in a PDA.
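
The coarse-to-fine flow the abstract describes (prune candidates by stroke count, then rank the survivors by feature similarity) can be sketched as below. The model records, feature vectors, and tolerance are hypothetical stand-ins, not the paper's actual unit representation:

```python
def recognize(input_stroke_count, input_features, models, stroke_tol=1):
    # 1. Coarse filter: keep models whose stroke count is near the input's.
    candidates = [m for m in models
                  if abs(m["strokes"] - input_stroke_count) <= stroke_tol]
    if not candidates:
        return None
    # 2. Fine ranking: squared-distance similarity over region features.
    def dist(m):
        return sum((a - b) ** 2 for a, b in zip(m["features"], input_features))
    return min(candidates, key=dist)["char"]

# Toy model set (hypothetical feature vectors):
models = [
    {"char": "山", "strokes": 3, "features": [0.2, 0.9]},
    {"char": "中", "strokes": 4, "features": [0.5, 0.5]},
    {"char": "國", "strokes": 11, "features": [0.8, 0.3]},
]
print(recognize(4, [0.5, 0.6], models))  # → 中
```

The stroke-count filter is what makes the search cheap enough for a PDA: the expensive feature comparison only runs on the few surviving candidates.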

A Study on the Efficiency of Container Ports in the Bay of Bengal Area (벵갈만 지역의 컨테이너항만 효율성 분석에 관한 연구)

  • Htet Htet, Kyaw Nyunt;Kim, Hyun Deok
    • Journal of Korea Port Economic Association / v.36 no.1 / pp.41-58 / 2020
  • This study investigates the technical efficiency of major container ports in the Bay of Bengal area and how certain factors influence the efficiency of container ports and terminals. The research covers the four main container ports in the area: Colombo Port in Sri Lanka, Chennai Port in India, Chittagong Port in Bangladesh, and Yangon Port in Myanmar. Three input variables (quay length, storage area, and the number of cranes) and two output variables (throughput in twenty-foot equivalent units and vessel calls) were chosen for the analysis. The paper evaluates the efficiency scores for these variables and suggests implications for improving the core competitiveness of the four selected ports. The findings indicate that Colombo Port is the most technically efficient, followed by Chennai Port, Yangon Port, and Chittagong Port. However, the slack and radial movement calculations show that the inputs and outputs of the four ports need to be adjusted to become efficient and to reduce wasted resources. The results validate the adaptability of the improved data envelopment analysis algorithm in port efficiency analysis. The findings provide an overview of the efficiencies of the selected container ports and can inform port management decisions by policymakers, terminal operators, and carriers.
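
As a toy illustration of the DEA idea (not the multi-input, multi-output model the paper actually runs): in the single-input, single-output special case, the CCR efficiency score reduces to each unit's output/input ratio divided by the best ratio in the sample. The port names and figures below are hypothetical:

```python
def ccr_efficiency(inputs, outputs):
    # Single-input/single-output CCR: each unit's output/input ratio,
    # normalized by the best performer's ratio (the efficient frontier).
    best = max(y / x for x, y in zip(inputs, outputs))
    return [round((y / x) / best, 3) for x, y in zip(inputs, outputs)]

# Hypothetical data: quay length (m) as the input, throughput (kTEU) as output
ports = ["Port A", "Port B", "Port C"]
quay = [1200, 1800, 900]
teu = [2400, 2700, 900]
print(dict(zip(ports, ccr_efficiency(quay, teu))))
# → {'Port A': 1.0, 'Port B': 0.75, 'Port C': 0.5}
```

With several inputs and outputs, as in the study, the weights can no longer be fixed and each score becomes a small linear program, which is what DEA software solves per port.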

Design and Implementation of B-Tree on Flash Memory (플래시 메모리 상에서 B-트리 설계 및 구현)

  • Nam, Jung-Hyun;Park, Dong-Joo
    • Journal of KIISE:Databases / v.34 no.2 / pp.109-118 / 2007
  • Recently, flash memory has been used to store data in mobile computing devices such as PDAs, smart cards, mobile phones, and MP3 players. These devices need index structures such as the B-tree to efficiently support operations like insertion, deletion, and search. The BFTL (B-tree Flash Translation Layer) technique was the first introduced for implementing the B-tree on flash memory. Flash memory has the characteristics that a write operation is more costly than a read operation and that overwriting in place is impossible, so the BFTL method focuses on minimizing the number of write operations incurred in building the B-tree. However, we show in this paper that there is considerable room for improving the I/O cost of building the B-tree with this method, and that it is impractical because it greatly increases SRAM usage. We propose BOF (B-tree On Flash memory), an approach for implementing the B-tree on flash memory efficiently. Its core idea is to store index units belonging to the same B-tree node in the same sector on flash memory when the buffer used to build the B-tree is replaced. We show that our BOF technique outperforms BFTL and other techniques.
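
A hedged sketch of the grouping idea the abstract describes: index units (key/pointer entries) are buffered in RAM, and on eviction all units belonging to the same B-tree node are flushed together into one flash sector, so a node can later be rebuilt without scattering reads across sectors. The class, sector size, and data layout are illustrative, not the paper's implementation:

```python
from collections import defaultdict

SECTOR_CAPACITY = 4  # index units per flash sector (illustrative)

class BOFBuffer:
    def __init__(self):
        self.buffer = defaultdict(list)  # node_id -> pending index units
        self.flash = []                  # flash image: (node_id, unit list) sectors

    def add_unit(self, node_id, key, child):
        # Buffer an index unit in RAM instead of writing it immediately.
        self.buffer[node_id].append((key, child))

    def flush(self):
        # On buffer replacement, group units by node so that each flash
        # sector holds units of exactly one B-tree node.
        for node_id, units in self.buffer.items():
            for i in range(0, len(units), SECTOR_CAPACITY):
                self.flash.append((node_id, units[i:i + SECTOR_CAPACITY]))
        self.buffer.clear()

    def read_node(self, node_id):
        # Rebuild a node by reading only the sectors dedicated to it.
        return [u for nid, sector in self.flash for u in sector
                if nid == node_id]
```

Contrast with BFTL, where units of one logical node end up interleaved across many sectors and must be chased through an in-SRAM mapping table at read time.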

Analysis of Post-LOCA pH for Korea Nuclear Units (국내 원자력발전소의 LOCA사고에 따른 pH 분석)

  • Hyung Won Lee;Yung Hee Kang;Jae Hee Kim
    • Nuclear Engineering and Technology / v.15 no.3 / pp.179-187 / 1983
  • The pH of containment spray and sump water following a LOCA for KNU 5&6 and KNU 1 was calculated to check whether the pH design criteria for the containment spray system established by the USNRC were met. The pH calculations were made for two cases: maximum pH and minimum pH. For KNU 5&6, the results showed that the long-term sump pH values calculated for both the maximum and minimum pH cases met the requirement of at least 8.5, while the spray pH for the maximum case slightly exceeded the design range (8.5 to 11.0). For KNU 1, the long-term sump pH requirement was also met; however, the spray pH for the maximum pH case greatly exceeded the current requirement. (No pH requirement for containment spray water had been established when KNU 1 was designed.) To find design parameters for the containment spray system expected to meet the spray pH requirement, several calculations were made by changing the input parameters to "LOCAPH". Finally, it was shown that the boric acid concentration in the RWST (refueling water storage tank), the primary source of containment spray water during the injection mode, should be maintained in the range of 2,750 ppm to 2,850 ppm, or the flow rate of NaOH added to the spray water kept between 10 gpm and 24 gpm.

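
The chemistry behind the abstract's two knobs (RWST boric acid concentration and NaOH addition rate) can be illustrated with a first-order equilibrium calculation. This is a hedged sketch, not the paper's LOCAPH code: it ignores activity corrections, temperature dependence, and polyborates, and solves only the monoprotic charge balance:

```python
KW = 1e-14          # water ion product at 25 °C
KA_BORIC = 5.8e-10  # boric acid dissociation constant (pKa ≈ 9.24)

def mix_ph(c_boric: float, c_naoh: float) -> float:
    """pH of a boric acid / NaOH mixture (concentrations in mol/L),
    from the charge balance [Na+] + [H+] = [OH-] + [B(OH)4-]."""
    def residual(ph):
        h = 10.0 ** -ph
        oh = KW / h
        borate = KA_BORIC * c_boric / (KA_BORIC + h)
        return c_naoh + h - oh - borate  # positive when the guess is too acidic
    lo, hi = 0.0, 14.0
    for _ in range(100):          # bisection on the monotone residual
        mid = (lo + hi) / 2.0
        if residual(mid) > 0:
            lo = mid              # true pH lies above the guess
        else:
            hi = mid
    return (lo + hi) / 2.0

# ~2800 ppm boron as boric acid (2.8 g/L / 10.81 g/mol ≈ 0.259 mol/L), no NaOH
print(round(mix_ph(0.259, 0.0), 2))  # → 4.91
```

Borated spray water alone is mildly acidic; raising the NaOH flow pushes the mixture toward the alkaline 8.5-11.0 design band, which is why the two parameters trade off in the study's calculations.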

Current Status and Characterization of CANDU Spent Fuel for Geological Disposal System Design (심지층 처분시스템 설계를 위한 중수로 사용후핵연료 현황 및 선원항 분석)

  • Cho, Dong-Keun;Lee, Seung-Woo;Cha, Jeong-Hun;Choi, Jong-Won;Lee, Yang;Choi, Heui-Joo
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT) / v.6 no.2 / pp.155-162 / 2008
  • The inventories to be disposed of, the reference burnup, and the source terms of CANDU spent fuel were evaluated for geological disposal system design. The historical and projected inventory by 2040 is expected to be 14,600 MtU, assuming a 30-year lifetime for unit 1 and 40-year lifetimes for the other units at the Wolsong site. A statistical analysis of the discharge burnup of the spent fuel generated by 2007 gave an average of 6,987 MWD/MtU with a standard deviation of 1,167. From this result, the reference burnup was set at 8,100 MWD/MtU, which covers 84% of the spent fuel in total. Source terms, such as nuclide concentrations for long-term safety analysis, decay heat for thermo-mechanical analysis, and radiation intensity and spectrum, were characterized using ORIGEN-ARP, with conservatism in the decay heat maintained up to several thousand years. The results of this study will be useful for the design of storage and disposal facilities.

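
The 84% coverage figure can be sanity-checked with a normal approximation using the reported mean and standard deviation (the paper presumably uses the empirical burnup distribution, so the numbers differ slightly):

```python
import math

def normal_cdf(x: float, mu: float, sigma: float) -> float:
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# Fraction of fuel at or below the 8,100 MWD/MtU reference burnup,
# assuming burnup ~ Normal(6987, 1167) as reported in the abstract
p = normal_cdf(8100, 6987, 1167)
print(round(p, 2))  # → 0.83, close to the reported 84%
```

The small gap between 83% and 84% is consistent with the empirical distribution not being exactly normal.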

An IoT based Green Home Architecture for Green Score Calculation towards Smart Sustainable Cities

  • Kumaran, K. Manikanda;Chinnadurai, M.;Manikandan, S.;Murugan, S. Palani;Elakiya, E.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.7 / pp.2377-2398 / 2021
  • In the modern world, consumption of natural resources (renewable and non-renewable) is increasing drastically due to people's sophisticated lifestyles. Over-consumption of non-renewable resources causes pollution, which leads to global warming. Consequently, government agencies have been taking initiatives to control the over-consumption of non-renewable natural resources and to encourage the production of renewable energy. In this regard, we introduce an IoT-powered integrated framework called green home architecture (GHA) for calculating a green score based on household usage of natural resources. The green score is a credit rating of a family on a 10-point scale, calculated once a month from energy utilization, renewable energy production, and pollution caused; it can be improved by reducing energy consumption, generating renewable energy, and preventing pollution. The main objective of GHA is to monitor day-to-day resource usage and compute the green score using the proposed algorithm, which awards positive credits for economical consumption of resources and production of renewable energy, and negative credits for pollution caused. We also recommend a green-score-based tax system that grants tax exemptions according to the score. This direct-benefit model encourages citizens to consume fewer natural resources and prevent pollution; rather than simply paying subsidies, the proposed system monitors the subsidy scheme periodically and rewards proper operation with tax exemptions. GHA also monitors household appliances, vehicles, windmills, electricity meters, water re-treatment plants, and pollution levels, reading consumption and production in appropriate units through suitable sensors. These values are stored on a mass storage platform such as the cloud for the green score calculation and are also employed for billing by government agencies. This integrated platform can replace manual billing and directly benefits the government.
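
The credit structure described above (positive credits for renewable production and economical use, negative credits for pollution, on a 10-point scale) could be sketched as follows. All weights, the baseline, and the pollution index are hypothetical, not the paper's green score algorithm:

```python
def green_score(kwh_used, kwh_renewable, pollution_index, baseline_kwh=300.0):
    """Monthly green score on a 0-10 scale (illustrative weights only).
    pollution_index is assumed normalized to [0, 1]."""
    score = 5.0                                              # neutral start
    score += 2.5 * min(kwh_renewable / baseline_kwh, 1.0)    # production credit
    score += 2.5 * max(1.0 - kwh_used / baseline_kwh, -1.0)  # frugality credit
    score -= 2.0 * pollution_index                           # pollution penalty
    return max(0.0, min(10.0, score))

# A household producing its whole baseline renewably, polluting nothing:
print(green_score(kwh_used=0, kwh_renewable=300, pollution_index=0))  # → 10.0
```

A tax-exemption rule as proposed in the abstract would then be a simple threshold over this monthly score.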

Effect of Cryoprotectants on Quality Properties of Chicken Breast Surimi (냉동변성방지제의 종류가 닭가슴살 수리미의 품질 특성에 미치는 영향)

  • Jin, S.K.;Kim, I.S.;Kim, S.J.;Jeong, K.J.;Lee, J.R.;Choi, Y.J.
    • Journal of Animal Science and Technology / v.49 no.6 / pp.847-856 / 2007
  • This study determined the effect of cryoprotectants (sugar, sorbitol, polyphosphate) on the quality of chicken breast surimi manufactured by pH adjustment (pH 11.0) during frozen storage. The final surimi was divided into experimental units to which the following treatments were randomly assigned: C (Alaska pollack surimi, washed twice, 4% sugar + 5% sorbitol + 0.3% polyphosphate); T1 (chicken breast surimi, 0.3% polyphosphate); T2 (chicken breast surimi, 5% sorbitol + 0.3% polyphosphate); and T3 (chicken breast surimi, 4% sugar + 5% sorbitol + 0.3% polyphosphate). All amino acid contents of the control were higher than those of the treatments, with T2 highest among the treatments. The shear force of all treatments was higher than that of the control, but breaking force, deformation, and gel strength were lower. The TBARS (thiobarbituric acid reactive substances) and VBN (volatile basic nitrogen) values of all treatments were lower than those of the control, and the TBARS values increased with storage period. In sensory evaluation, the control scored higher in appearance, meat color, and overall acceptability, but lower in aroma, juiciness, and tenderness, than the treatments.

Microbiological Status and Guideline for Raw Chicken distributed in Korea (국내 유통 닭고기의 미생물 수준과 위생관리기준 적합성)

  • Kim, Hye-Jin;Kim, Dongwook;Song, Sung Ok;Goh, Yong-Gyun;Jang, Aera
    • Korean Journal of Poultry Science / v.43 no.4 / pp.235-242 / 2016
  • This study investigated the microbiological sanitation status of raw chicken meat distributed in Korea and changes in chicken breast quality during storage. The sanitation analysis drew on the results of microbiological monitoring over a 5-year period (2010~2014) by the Korean Food and Drug Administration. In addition, the microbiological status of raw chicken meat in meat packing centers and shops in Seoul/Gyeonggi, Kangwon, and Chungcheong Provinces was investigated from July to August 2015. The total bacterial counts of chicken meat in the packaging centers and meat shops of these provinces were below the level specified in the Korean Meat Microbiological Guideline (1×10⁷ colony forming units [CFU]/g) and were consistent with the 5-year monitoring results. To relate quality change to microbiological level, the pH and the microbiological and sensory quality characteristics of chicken breast samples were determined during storage at 4±2°C. On day 4, the total bacterial count of the chicken breast was 6.76 log CFU/g, close to the official 1×10⁷ CFU/g standard; the pH was 5.96; and the overall acceptability was reduced significantly (p<0.05). In particular, the aroma score was below 5, indicating that the consumer panel perceived the meat negatively even though it contained a lower microbial level than that specified in the Korean guideline. These results suggest that the current Korean microbiological guideline for raw chicken meat may require a stricter level of up to 1×10⁶ CFU/g to satisfy both meat safety standards and organoleptic quality for consumers.
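
The comparison of the day-4 count against the guideline is just a log-scale conversion; a quick check using the abstract's figures:

```python
# 6.76 log CFU/g converted back to an absolute count and compared
# with the 1e7 CFU/g guideline limit
count = 10 ** 6.76
print(f"{count:.2e}")  # → 5.75e+06, still below the 1e7 CFU/g guideline
assert count < 1e7
```

This is why the study argues for a 1×10⁶ CFU/g limit: sensory quality was already rejected by the panel while the count was still under the current guideline.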

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computer environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. 
Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions so that it can continue operating after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data; moreover, their strict schemas make it difficult to expand nodes when the stored data must be distributed across various nodes as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB was chosen because its flexible schema makes it easy to process unstructured log data, it facilitates flexible node expansion when the amount of data is rapidly increasing, and it provides an Auto-Sharding function that automatically expands storage. The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. 
When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to log type and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and served in real time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are processed in parallel-distributed fashion by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, measuring log insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a MongoDB log-insert performance evaluation over various chunk sizes.
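
The log collector's routing logic described above (real-time log types go to the relational sink, everything else to the document sink) can be sketched as follows. The type names are hypothetical, and plain lists stand in for the MySQL and MongoDB modules so the sketch stays self-contained:

```python
# Hypothetical log types that require real-time analysis (MySQL path
# in the paper); all other types go to the bulk path (MongoDB).
REALTIME_TYPES = {"auth_failure", "transaction_error"}

def route_logs(records, realtime_sink, bulk_sink):
    """Classify each log record by type and dispatch it to a sink."""
    for rec in records:
        if rec.get("type") in REALTIME_TYPES:
            realtime_sink.append(rec)   # real-time path (MySQL module)
        else:
            bulk_sink.append(rec)       # bulk/analysis path (MongoDB module)

realtime, bulk = [], []
route_logs([{"type": "auth_failure", "msg": "bad pin"},
            {"type": "page_view", "msg": "/balance"}], realtime, bulk)
print(len(realtime), len(bulk))  # → 1 1
```

In the real system the bulk sink would be a sharded MongoDB collection (with the chunk size tuned as in the paper's evaluation) and the Hadoop module would consume it for batch analysis.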