• Title/Summary/Keyword: Storing Method

Evaluation of Cooling Energy Consumption Varying Economizer Control and Heat Generation Rates from IT Equipment in Data Center

  • Ahmin JANG;San JIN;Minho KIM;Hyoungchul KANG;Sung Lok DO
    • International conference on construction engineering and project management / 2024.07a / pp.1010-1017 / 2024
  • A data center stores and manages Internet data. It is mainly composed of IT equipment, cooling systems, and other components. The IT equipment stores and processes Internet data and generates heat during use. If this heat is not removed it can cause malfunctions, so cooling systems are used to remove it. Cooling systems account for more than 40% of total energy consumption, and reducing cooling energy can reduce the overall energy consumption of the data center. Therefore, it is necessary to analyze cooling energy consumption according to changes in heat generation by the IT equipment. This study analyzed the impact of changes in IT equipment heat generation on cooling energy consumption. Additionally, three different economizer control methods were applied in order to select the optimal one. To achieve this, a data center model with economizer systems was developed using data measured from IT equipment and cooling systems. As the operation rates of the IT equipment increased from minimum to maximum, the annual energy consumption for each case increased by approximately 11.7%. The economizer analysis showed that the energy savings were greatest when dry-bulb temperature control was applied, but that control did not satisfy the operating environment required by the IT equipment. Therefore, it was determined that enthalpy-based economizer control is required to meet the operating environment of the IT equipment.
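The contrast between the two control strategies can be illustrated with a small sketch. This is not the paper's simulation model: the enable threshold, the humidity ratios, and the moist-air enthalpy approximation below are illustrative assumptions.

```python
# Minimal sketch (not the paper's model): deciding whether an air-side economizer
# may use outdoor air, under a dry-bulb rule vs. an enthalpy rule.

def moist_air_enthalpy(t_db_c: float, humidity_ratio: float) -> float:
    """Approximate enthalpy of moist air in kJ/kg dry air."""
    return 1.006 * t_db_c + humidity_ratio * (2501.0 + 1.86 * t_db_c)

def economizer_on_dry_bulb(t_out: float, t_return: float, t_limit: float = 18.0) -> bool:
    # Dry-bulb control: outdoor air must be cooler than return air and below a limit.
    return t_out < t_return and t_out < t_limit

def economizer_on_enthalpy(t_out, w_out, t_return, w_return) -> bool:
    # Enthalpy control: outdoor-air enthalpy must be lower than return-air enthalpy,
    # which also guards the humidity range required by the IT equipment.
    return moist_air_enthalpy(t_out, w_out) < moist_air_enthalpy(t_return, w_return)

if __name__ == "__main__":
    # A mild but humid day: dry-bulb control admits outdoor air, enthalpy control rejects it.
    print(economizer_on_dry_bulb(16.0, 24.0))                   # True
    print(economizer_on_enthalpy(16.0, 0.013, 24.0, 0.0093))    # False
```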

Comparative study of volumetric change in water-stored and dry-stored complete denture base (공기중과 수중에서 보관한 총의치 의치상의 체적변화에 대한 비교연구)

  • Kim, Jinseon;Lee, Younghoo;Hong, Seoung-Jin;Paek, Janghyun;Noh, Kwantae;Pae, Ahran;Kim, Hyeong-Seob;Kwon, Kung-Rock
    • The Journal of Korean Academy of Prosthodontics / v.59 no.1 / pp.18-26 / 2021
  • Purpose: Patients are generally instructed to store their dentures in water when removed from the mouth. However, few studies have reported the advantage, in terms of volumetric change, of underwater storage over dry storage. To serve as a reference for defining the proper denture storage method, this study evaluated the volumetric change and dimensional deformation under underwater and dry storage. Materials and methods: Definitive casts were scanned with a model scanner, and denture bases were designed with computer-aided design (CAD) software. Twelve denture bases (6 upper, 6 lower) were printed with a 3D printer. The printed denture bases were invested and flasked with a heat-curing method. The 6 upper and 6 lower dentures were divided into groups A and B, each containing 3 upper and 3 lower dentures. Group A was stored dry at room temperature and group B was stored underwater. Group B was scanned every 24 hours for 28 days and the scanned data were saved as stereolithography (SLA) files. These files were analyzed to measure the volumetric change over one month, and the Kruskal-Wallis test was used for statistical analysis. A best-fit algorithm was used for superimposition, and a 3-dimensional color-coded map was used to observe the pattern of change of the impression surface. Results: No significant difference in volumetric change was found between the storage methods. In the dry-stored denture bases, significant changes were found in the palate of the upper jaw and the posterior lingual border of the lower jaw in the direction away from the underlying tissue, and in the maxillary tuberosity of the upper jaw and the retromolar pad area of the lower jaw in the direction toward the underlying tissue. Conclusion: Storing a denture underwater produces less volumetric change of the impression surface than storing it in dry air.
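As a minimal sketch of the statistical comparison described above (not the study's data), assuming per-specimen volumetric changes have already been extracted from the superimposed scans, the Kruskal-Wallis test could be applied as follows; the numbers are hypothetical.

```python
# Hypothetical per-specimen volumetric changes (mm^3) for the two storage groups.
from scipy.stats import kruskal

dry_stored_change = [4.1, 3.8, 4.5]       # group A (dry storage), made-up values
water_stored_change = [3.9, 4.2, 3.7]     # group B (water storage), made-up values

statistic, p_value = kruskal(dry_stored_change, water_stored_change)
print(f"H = {statistic:.3f}, p = {p_value:.3f}")  # p >= 0.05 -> no significant difference
```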

Inventory Management in Construction Operations Involving on-site Fabrication of Raw Materials (원자재 조립.가공과정을 갖는 건설공사 프로세스의 적정 재고관리 방안에 관한 연구)

  • Im, Keon-Soon;Han, Seung-Heon;Jung, Do-Young;Ryu, Chung-Kyu;Choi, Seok-Jin
    • Korean Journal of Construction Engineering and Management / v.9 no.1 / pp.187-198 / 2008
  • There are usually plenty of material inventories on a construction site. Larger inventories can absorb unexpected demand and may also offer an economic advantage by avoiding a probable escalation of raw material costs. On the other hand, they increase the cost of storing redundant stock and decrease construction productivity. Therefore, a proper method of deciding the optimal level of material inventory, while considering dynamic variations of resources under uncertainty, is crucial for the economic efficiency of construction projects. This research presents a stochastic modelling method for construction operations, particularly targeting work processes involving on-site fabrication of raw materials, such as the iron rebar process (delivery, cutting and assembly, and placement). To develop the model, we apply the concept of factory physics to depict the overall components of the system. Then, an optimal inventory management model is devised to support purchase decisions, so that users can decide in a timely manner how much to order and when to buy raw materials. The optimal time lag, which minimizes the storage time for pre-assembled materials, is also obtained. To verify the method, a real case is used to elicit an optimal amount of inventory and time lag. It is found that the average value as well as the variability of the inventory level decreased significantly, minimizing the economic costs of inventory management under uncertain project conditions.
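A minimal sketch of the kind of trade-off such a model addresses, assuming a simple reorder-point policy with hypothetical demand, lead time, and policy parameters rather than the paper's stochastic model:

```python
# Simulate on-site rebar stock under random daily demand to show how order quantity
# and reorder timing shape the average inventory level, its variability, and shortages.
import random
import statistics

def simulate(reorder_point: float, order_qty: float, days: int = 200, lead_time: int = 3):
    random.seed(42)
    on_hand = 40.0                       # tons on site (hypothetical starting stock)
    pipeline = []                        # outstanding orders as (arrival_day, quantity)
    levels, shortage_days = [], 0
    for day in range(days):
        # Receive any order scheduled to arrive today.
        on_hand += sum(q for (d, q) in pipeline if d == day)
        pipeline = [(d, q) for (d, q) in pipeline if d != day]
        # Consume stochastic daily demand from cutting, assembly, and placement.
        demand = max(0.0, random.gauss(8.0, 3.0))
        if demand > on_hand:
            shortage_days += 1
            on_hand = 0.0
        else:
            on_hand -= demand
        # Reorder when the inventory position (on hand + on order) hits the reorder point.
        position = on_hand + sum(q for (_, q) in pipeline)
        if position <= reorder_point:
            pipeline.append((day + lead_time, order_qty))
        levels.append(on_hand)
    return statistics.mean(levels), statistics.pstdev(levels), shortage_days

print(simulate(reorder_point=30.0, order_qty=50.0))    # leaner stock, more shortage risk
print(simulate(reorder_point=60.0, order_qty=120.0))   # more stock, higher holding level
```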

Measurement of Flash Point for Binary Mixtures of 2-Butanol, 2,2,4-Trimethylpentane, Methylcyclohexane, and Toluene at 101.3 kPa (2-Butanol, 2,2,4-Trimethylpentane, Methylcyclohexane 그리고 Toluene 이성분 혼합계에 대한 101.3 kPa에서의 인화점 측정)

  • Hwang, In Chan;In, Se Jin
    • Clean Technology / v.26 no.3 / pp.161-167 / 2020
  • For the design of prevention and mitigation measures in process industries involving flammable substances, reliable safety data are required. An important property used to estimate the risk of fire and explosion for a flammable liquid is the flash point. Flammability is an important factor to consider when developing safe methods for storing and handling solids and liquids. In this study, flash point data were measured for the binary systems {2-butanol + 2,2,4-trimethylpentane}, {2-butanol + methylcyclohexane} and {2-butanol + toluene} at 101.3 kPa. Experiments were performed according to the standard test method (ASTM D 3278) using a Stanhope-Seta closed-cup flash point tester. Minimum flash point behavior was observed in the binary systems, as has been observed in many hydrocarbon and alcohol mixtures. The measured flash points were compared with values predicted via the following activity coefficient (GE) models: Wilson, Non-Random Two-Liquid (NRTL), and UNIversal QUAsiChemical (UNIQUAC). The predictions were adequate only for data determined by the closed-cup test method and may not be appropriate for data obtained from the open-cup test method because of its deviation from vapor-liquid equilibrium. The predicted results of this work can be used to design safe petrochemical processes, such as identifying safe storage conditions for non-ideal solutions containing flammable components.
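The prediction idea can be sketched as Le Chatelier's rule combined with activity coefficients, solved for the mixture flash point by bisection. The Antoine constants and pure-component flash points below are placeholder values, and the activity coefficients are left at 1 (ideal solution) instead of the Wilson, NRTL, or UNIQUAC values used in the paper.

```python
# Flash-point prediction sketch: find the temperature at which the Le Chatelier sum equals 1.

def p_sat(antoine, t_c):
    """Antoine equation, log10(P/kPa) = A - B / (t_c + C), with placeholder constants."""
    a, b, c = antoine
    return 10 ** (a - b / (t_c + c))

def le_chatelier_sum(x1, t_c, comp1, comp2, gamma1=1.0, gamma2=1.0):
    # Each term is the fraction of the lower flammable limit contributed by a component.
    term1 = x1 * gamma1 * p_sat(comp1["antoine"], t_c) / p_sat(comp1["antoine"], comp1["t_flash"])
    term2 = (1 - x1) * gamma2 * p_sat(comp2["antoine"], t_c) / p_sat(comp2["antoine"], comp2["t_flash"])
    return term1 + term2

def mixture_flash_point(x1, comp1, comp2, lo=-50.0, hi=150.0):
    # Bisection on temperature: the sum increases with temperature and crosses 1 at the flash point.
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if le_chatelier_sum(x1, mid, comp1, comp2) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical components; the Antoine constants and flash points are NOT real data.
alcohol = {"antoine": (6.35, 1300.0, 200.0), "t_flash": 24.0}
hydrocarbon = {"antoine": (5.93, 1250.0, 220.0), "t_flash": -5.0}
print(mixture_flash_point(0.3, alcohol, hydrocarbon))
```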

Sharing of DLNA Media Contents among Inter-homes based on DHCP or Private IP using Homeserver (동적 사설 IP 기반의 다중 홈간 DLNA 미디어 컨텐츠 공유)

  • Oh, Yeon-Joo;Lee, Hoon-Ki;Kim, Jung-Tae;Paik, Eui-Hyun
    • The KIPS Transactions: Part C / v.13C no.6 s.109 / pp.709-716 / 2006
  • With the increase of AV media devices and contents in the digital home, DLNA has come to play an important role as the interoperability standard between them. Since the guideline only addresses interoperability among networked devices, media players, and media contents inside a single home network, it provides no retrieval and transmission method for sharing multimedia contents located across several homes via the Internet. Additionally, the guideline has device-detection and notification messages transmitted using IP multicast; because the current Internet environment cannot guarantee consistent IP multicast service, DLNA devices in other digital homes cannot be retrieved or controlled remotely over the Internet. Therefore, in this paper we analyze these limitations and propose the IHM (Inter-Home Media) proxy system and its operating mechanism, which provide a way of sharing A/V media contents distributed over multiple DLNA-based homes on dynamic or private IP networks. Our method removes the restriction on user location by sharing distributed media contents, and it also reduces the cost of storing media contents from the viewpoint of individual residents.
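A minimal data-structure sketch of the registry idea behind such a proxy, assuming homes on dynamic or private IPs re-register their current address and content index with a reachable proxy; this is illustrative only, not the IHM protocol itself.

```python
# Home servers register their current (possibly changing) address and content list;
# players resolve remote content across homes through the registry.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class HomeRegistration:
    home_id: str
    current_address: str                         # re-registered whenever the DHCP lease changes
    content_index: List[str] = field(default_factory=list)

class InterHomeRegistry:
    def __init__(self) -> None:
        self._homes: Dict[str, HomeRegistration] = {}

    def register(self, reg: HomeRegistration) -> None:
        # Called by a home server on start-up and whenever its public address changes.
        self._homes[reg.home_id] = reg

    def search(self, keyword: str) -> List[tuple]:
        # A player asks where a title can be fetched from, across all registered homes.
        hits = []
        for reg in self._homes.values():
            for title in reg.content_index:
                if keyword.lower() in title.lower():
                    hits.append((reg.home_id, reg.current_address, title))
        return hits

registry = InterHomeRegistry()
registry.register(HomeRegistration("home-A", "203.0.113.10:8080", ["Holiday_2006.mp4"]))
registry.register(HomeRegistration("home-B", "198.51.100.7:8080", ["Concert.ts", "Holiday_2005.mp4"]))
print(registry.search("holiday"))    # content found in both homes, with their current addresses
```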

Iris Feature Extraction using Independent Component Analysis (독립 성분 분석 방법을 이용한 홍채 특징 추출)

  • 노승인;배광혁;박강령;김재희
    • Journal of the Institute of Electronics Engineers of Korea SP / v.40 no.6 / pp.20-30 / 2003
  • In the conventional method based on quadrature 2D Gabor wavelets for extracting iris features, iris recognition is performed with a 256-byte iris code computed by applying the Gabor wavelets to a given area of the iris. However, there is code redundancy because the iris code is generated by basis functions that do not consider the characteristics of the iris texture, so the size of the iris code is unnecessarily large. In this paper, we propose a new feature extraction algorithm based on ICA (Independent Component Analysis) for a compact iris code. We apply ICA to generate optimal basis functions that represent iris signals efficiently, and in practice the coefficients of the ICA expansions are used as feature vectors. The iris feature vectors are then encoded into an iris code for storing and comparing an individual's iris patterns. Additionally, we introduce two methods to enhance the recognition performance of the ICA: the first reorganizes the ICA bases, and the second uses a different set of ICA bases. Experimental results show that the proposed method has an EER (Equal Error Rate) similar to that of the conventional Gabor-wavelet method, while the iris code of the proposed method is four times smaller than that of the Gabor wavelets.
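A minimal sketch of the ICA-based feature idea, assuming normalized iris-texture patches are available; random data stands in for real iris images, and the patch and component sizes are illustrative assumptions, not the paper's configuration.

```python
# Learn ICA basis functions from texture patches, project a patch onto them,
# and binarize the coefficients into a compact code compared by Hamming distance.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
patches = rng.normal(size=(2000, 64))        # training patches (e.g. 8x8, flattened)

ica = FastICA(n_components=32, random_state=0)
ica.fit(patches)                             # learns the ICA basis functions

def iris_code(patch: np.ndarray) -> np.ndarray:
    """Project a patch onto the ICA bases and keep only the coefficient signs."""
    coeffs = ica.transform(patch.reshape(1, -1))[0]
    return (coeffs > 0).astype(np.uint8)     # 32-bit code per patch

def hamming_distance(code_a: np.ndarray, code_b: np.ndarray) -> float:
    return float(np.mean(code_a != code_b))  # matching score used for comparison

probe, gallery = rng.normal(size=64), rng.normal(size=64)
print(hamming_distance(iris_code(probe), iris_code(gallery)))
```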

Reference Frame Memory Compression Using Selective Processing Unit Merging Method (선택적 수행블록 병합을 이용한 참조 영상 메모리 압축 기법)

  • Hong, Soon-Gi;Choe, Yoon-Sik;Kim, Yong-Goo
    • Journal of Broadcast Engineering / v.16 no.2 / pp.339-349 / 2011
  • IBDI (Internal Bit Depth Increase) can significantly improve the coding efficiency of high-definition video compression by increasing the bit depth (precision) of internal arithmetic operations. However, the scheme also increases the internal memory required for storing decoded reference frames, which can be significant for higher-definition video content. Reference frame memory compression has therefore been proposed to reduce this internal memory requirement. The compression is performed on 4x4 blocks, called processing units, compressing the decoded image using the correlation of nearby pixel values. This method successfully reduces the reference frame memory while preserving the coding efficiency of IBDI. However, additional information for each processing unit also has to be stored in internal memory, and the amount of this side information can limit the effectiveness of the memory compression scheme. To relax this limitation of the previous scheme, we propose a selective merging-based reference frame memory compression algorithm that dramatically reduces the amount of additional information. Simulation results show that the proposed algorithm incurs much smaller overhead than the previous algorithm while keeping the coding efficiency of IBDI.
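A minimal sketch of the overhead trade-off discussed above, assuming a simple base-plus-quantized-residual representation for each processing unit; the bit widths and the merging of two vertically adjacent units are illustrative assumptions, not the paper's algorithm.

```python
# Each 4x4 unit stores a base value (side information) plus fixed-precision residuals;
# merging two neighboring units shares one base at the cost of coarser residuals.
import numpy as np

def compress_unit(block: np.ndarray, residual_bits: int = 6):
    """Represent a block as (base, quantized residuals, quantization step)."""
    base = int(block.min())
    span = max(int(block.max()) - base, 1)
    step = max(1, -(-span // ((1 << residual_bits) - 1)))   # ceil division
    residuals = ((block - base) // step).astype(np.uint16)
    return base, residuals, step

def decompress_unit(base: int, residuals: np.ndarray, step: int) -> np.ndarray:
    return base + residuals.astype(np.int32) * step

frame = np.random.default_rng(1).integers(0, 1024, size=(8, 4))  # 10-bit samples (IBDI)
top, bottom = frame[:4, :], frame[4:, :]

# Separate units carry two bases of overhead; the merged unit carries one shared base.
separate = [compress_unit(top), compress_unit(bottom)]
merged = compress_unit(frame)

err_separate = max(np.abs(decompress_unit(*u) - b).max() for u, b in zip(separate, (top, bottom)))
err_merged = np.abs(decompress_unit(*merged) - frame).max()
print(err_separate, err_merged)   # merging halves the side information but may raise the error
```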

A research on the medical theory of Choo-Joo(鄒澍) -- (추주(鄒澍)의 의학사상(醫學思想)에 대한 연구(硏究) [약리설(藥理設)을 중심(中心)으로])

  • Lim, Jin Seok;Park, Chan Kuk
    • Journal of Korean Medical classics / v.9 / pp.381-429 / 1996
  • Choo-Joo (鄒澍; 1790-1844) was a medical scholar who lived in the late period of the Chung (Qing) dynasty and wrote "Bon-Kyung-So-Jeung (本經疏證)", "Bon-Kyung-Sok-So (本經續疏)", and "Bon-Kyung-Seo-So-Yo (本經序疏要)". In these books he annotated the chief effects of the herbal medicines (本草) presented in "Shin-Nong-Bon-Cho-Kyung (神農本草經)" and "Myoung-Eui-Byul-Lok (名醫別錄)", defining the medical action of 315 herb items with the theories of various scholars. The scholars whom Choo-Joo quoted belonged to the school of Chinese classical studies; they regarded "Hwang-Je-Nae-Kyung (黃帝內經)", "Shin-Nong-Bon-Cho-Kyung (神農本草經)" and "Sang-Han-Lon (傷寒論)" as canons of great importance and lived during the Myoung and Chung (Ming and Qing) dynasties. Choo-Joo's distinctive character belongs to similar academic traditions. He seems to have been influenced mainly by Lee-Si-Jin (李時珍)'s "Bon-Cho-Gang-Mok (本草綱目)", Mok-Jung-Soon (繆仲淳)'s "Sin-Nong-Bon-Cho-Kyung-So (神農本草經疏)", You-Yak-Guem (劉若金)'s "Bon-Cho-Sul (本草述)" and Yang-Si-Tae (楊時泰)'s "Bon-Cho-Sul-Gu-Won (本草述鉤元)". His contribution lies in two major areas. First, Choo-Joo made a significant contribution to the bibliographical study of the Chinese classics (考證學). He analyzed the medical theory of herbal medicine, combining "Nae-Kyung (內經)", "Sang-Han-Lon (傷寒論)" and the theories of various scholars in order to investigate the chief effects presented in "Shin-Nong-Bon-Cho-Kyung (神農本草經)"; thereby the practical application of the medical theories and terms represented in the classics was clarified. Second, Choo-Joo made a great accomplishment in pharmacology. The core of his theory is grasping the effect of a medicine through its distinctive features beyond its general ones. He set up standards for identifying distinctive features, namely form, color, energy and taste, place of production, and temperament, and on the basis of these standards he investigated distinctive features in various fields and related the distinctions to the Uem-Yang-O-Haeng theory (陰陽五行說). According to this method of classification, form (形) stands for the resultant shape of an herb's function, color (色) represents the active direction of the herb, energy and taste (氣味) imply the ultimate active function of the herb, and the place of production (産地) and the period of occurrence (發生時期) symbolize symptoms. When he applied this method to seek out effectiveness, he regarded the field that revealed the most representative feature as most important and, by combining the remaining distinctions with one another, determined the medicinal value more accurately. This method of observation resolved the contradictions that arose from applying the same fixed standard equally to all medicinal herbs. The most important idea presented by Choo-Joo is to derive and verify the distinctive features of herbs within the Uem-Yang-O-Haeng theory, concluded as generating (生), growing (長), transforming (化), collecting (收) and storing (藏). As a result of these studies, he clarified the action of medicines more concretely, made the Uem-Yang-O-Haeng theory (陰陽五行說) more concrete and practical to apply, and contributed to establishing a standard for grasping the effects of medicines.

IRFP-tree: Intersection Rule Based FP-tree (IRFP-tree(Intersection Rule Based FP-tree): 메모리 효율성을 향상시키기 위해 교집합 규칙 기반의 패러다임을 적용한 FP-tree)

  • Lee, Jung-Hun
    • KIPS Transactions on Software and Data Engineering / v.5 no.3 / pp.155-164 / 2016
  • For frequent pattern analysis of large databases, tree-based frequent pattern algorithms that can compensate for the disadvantages of the Apriori method have been studied extensively. In a frequent pattern tree, the number of nodes determines memory allocation and also affects memory resource consumption and the processing speed of the growth phase. Therefore, reducing the number of nodes in the tree is very important in frequent pattern mining. However, the absolute criterion used to order transaction items when constructing a frequent pattern tree lowers the compression ratio of the tree nodes, and yet most frequency-based tree construction methods adopt such an absolute criterion. The FP-tree is the typical frequent pattern tree, an extended prefix-tree structure for storing compressed, crucial information about frequent patterns: to construct it, the frequent items of every transaction are sorted according to an absolute criterion, frequency-descending order, as sketched below. The CanTree likewise needs an absolute criterion, canonical order, to construct its tree. In this paper, we propose a novel frequent pattern tree construction method that does not use an absolute criterion: the IRFP-tree (Intersection Rule based FP-tree). The IRFP-tree is built under the new paradigm of the intersection rule, without any absolute ordering criterion. It increases the compression ratio of the tree nodes and reduces the tree construction time, and it has the additional advantage of supporting incremental mining. The reported test results demonstrate the applicability and effectiveness of the proposed approach.
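For contrast with the intersection rule, here is a minimal sketch of the baseline FP-tree construction under the absolute criterion (frequency-descending order); the intersection rule itself is not reproduced here.

```python
# Build an FP-tree: sort each transaction by descending item frequency (the absolute
# criterion), then insert it as a prefix path so common prefixes share nodes.
from collections import Counter

class Node:
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count, self.children = 0, {}

def build_fp_tree(transactions, min_support=2):
    counts = Counter(item for tx in transactions for item in set(tx))
    frequent = {i for i, c in counts.items() if c >= min_support}
    root = Node(None, None)
    for tx in transactions:
        # Absolute criterion: keep only frequent items, ordered by descending frequency.
        ordered = sorted((i for i in set(tx) if i in frequent),
                         key=lambda i: (-counts[i], i))
        node = root
        for item in ordered:                     # insert the transaction as a prefix path
            child = node.children.get(item)
            if child is None:
                child = Node(item, node)
                node.children[item] = child
            child.count += 1
            node = child
    return root

def count_nodes(node):
    return 1 + sum(count_nodes(c) for c in node.children.values())

txs = [["a", "b", "c"], ["b", "c", "d"], ["a", "b", "d"], ["b", "c"]]
print(count_nodes(build_fp_tree(txs)) - 1)   # item nodes used; fewer nodes = better compression
```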

Discovering Association Rules using Item Clustering on Frequent Pattern Network (빈발 패턴 네트워크에서 아이템 클러스터링을 통한 연관규칙 발견)

  • Oh, Kyeong-Jin;Jung, Jin-Guk;Ha, In-Ay;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems / v.14 no.1 / pp.1-17 / 2008
  • Data mining is defined as the process of discovering meaningful and useful patterns in large volumes of data. In particular, finding association rules between items in a database of customer transactions has become an important task. Since the Apriori algorithm, several data structures and algorithms have been proposed for storing meaningful information compressed from an original database in order to find frequent itemsets. Although existing methods find all association rules, a great deal of additional processing is needed to analyze them because there are too many rules. In this paper, we propose a new data structure called the Frequent Pattern Network (FPN), which represents items as vertices and 2-itemsets as edges of the network. To utilize the FPN, we construct it using item frequencies, and we then use a clustering method to group the vertices of the network into clusters so that intra-cluster similarity is maximized and inter-cluster similarity is minimized. Association rules are generated based on the clusters. Our experiments examined the accuracy of clustering items on the network using confidence, correlation, and edge-weight similarity measures, and we generated association rules from the clusters and compared our method with the traditional one. The results show that confidence similarity had a stronger influence than the others on the frequent pattern network, and that the FPN is flexible with respect to the minimum support value.
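A minimal sketch of the network construction, assuming a small set of example transactions: items become vertices, frequent 2-itemsets become edges, and each edge carries a confidence-style weight. The clustering step applied on top of the network is not reproduced here.

```python
# Build a frequent pattern network from transactions and weight each edge by
# the stronger of the two directed confidences between the pair of items.
from collections import Counter
from itertools import combinations

transactions = [["milk", "bread"], ["milk", "bread", "butter"],
                ["bread", "butter"], ["milk", "butter"], ["milk", "bread"]]

item_count = Counter(i for tx in transactions for i in set(tx))
pair_count = Counter(frozenset(p) for tx in transactions
                     for p in combinations(sorted(set(tx)), 2))

min_support = 2
vertices = {i for i, c in item_count.items() if c >= min_support}
edges = {}
for pair, c in pair_count.items():
    a, b = sorted(pair)
    if c >= min_support and a in vertices and b in vertices:
        edges[(a, b)] = max(c / item_count[a], c / item_count[b])

for (a, b), w in sorted(edges.items()):
    print(f"{a} -- {b}: weight = {w:.2f}")
```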
