• Title/Summary/Keyword: Table component

A Scheduling Algorithm using The Priority of Broker for Improving The Performance of Semantic Web-based Visual Media Retrieval Framework (분산시각 미디어 검색 프레임워크의 성능향상을 위한 브로커 서버 우선순위를 이용한 라운드 로빈 스케줄링 기법)

  • Shim, Jun-Yong;Won, Jae-Hoon;Kim, Se-Chang;Kim, Jung-Sun
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.1
    • /
    • pp.22-32
    • /
    • 2008
  • HERMES, a semantic web-based visual media retrieval framework, was proposed to overcome the weaknesses of earlier image retrieval systems built on a single ontology and on distributed image databases with simple structures; it guarantees the autonomy of the various image providers and supports semantic-based retrieval. However, the framework did not address the degradation of performance and scalability that occurs when many users connect to the broker server simultaneously. In this paper, multiple broker servers are installed so that a comparable level of service can be supplied without performance degradation when numerous users connect at the same time; a monitoring system measures the execution time of each internal broker component, stores the measurements, and ranks the brokers from the stored data. Incoming user queries from the user interface are then distributed across several servers with reference to this broker ranking table, yielding a load-balancing scheme with improved performance and reliability. Experiments show that the proposed scheduling technique is faster than existing ones.
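
As an illustration of the kind of ranking-aware dispatching the abstract describes, the following is a minimal Python sketch of a round-robin scheduler weighted by measured broker performance; the class name, timing values, and weighting rule are hypothetical and are not taken from HERMES itself.

```python
# Minimal sketch: ranking-aware round-robin dispatch (illustrative only).
from itertools import cycle

class BrokerRankingTable:
    """Holds measured component execution times per broker (lower is better)."""
    def __init__(self, measurements):
        self.measurements = measurements  # {broker_name: avg execution time in ms}

    def weights(self):
        # Faster brokers receive proportionally more slots in the dispatch cycle.
        fastest = min(self.measurements.values())
        return {b: max(1, round(fastest / t * 4)) for b, t in self.measurements.items()}

def build_schedule(table):
    """Expand the weights into a repeating dispatch order (faster brokers appear more often)."""
    order = []
    for broker, w in sorted(table.weights().items(), key=lambda kv: -kv[1]):
        order.extend([broker] * w)
    return cycle(order)

if __name__ == "__main__":
    table = BrokerRankingTable({"broker-A": 50.0, "broker-B": 100.0, "broker-C": 200.0})
    schedule = build_schedule(table)
    for _ in range(8):  # dispatch eight incoming queries
        print("dispatch query to", next(schedule))
```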

Establishment of Total Sugar Reference Value for Koreans (한국인 총당류 섭취기준 설정)

  • Cho, Sung-Hee;Chung, Chin-Eun;Kim, Sun-Hee;Chung, Hye-Kyung
    • Journal of Nutrition and Health
    • /
    • v.40 no.sup
    • /
    • pp.3-8
    • /
    • 2007
  • Sugars are a ubiquitous component of our food supply and are consumed as a naturally occurring component of many foods and as additions to foods during processing, preparation, or at the table. Most fruits and dairy products are high in sugars, and thus naturally occurring sugars are consumed as part of a healthy diet. Some countries have developed recommended daily intake figures (daily values, DVs, or guideline daily amounts, GDAs) for nutrients, and some, but not most, have developed DV/GDA values for total sugars. The Dietary Reference Intakes for Koreans established by the Korean Nutrition Society in 2005 did not include reference values for total sugar or added sugar. The committee on Dietary Reference Intakes for sugar was constituted in 2006 and discussed whether to set a reference value for added sugars or for total sugar. Although added sugars are not chemically or physiologically different from naturally occurring sugars, many foods and beverages that are major sources of added sugars have lower micronutrient densities than foods and beverages that are major sources of naturally occurring sugars. However, it was very difficult to calculate the dietary intake of added sugar for Koreans, because there was insufficient information about the amount of sugar added during the processing or preparation of Korean foods. Currently, Korean and US food labels contain information on total sugars per serving but do not distinguish between sugars naturally present in food and added sugars. Therefore, the committee decided to set a reference value for total sugar for Koreans. According to the recommended diet pattern for Koreans suggested by the Korean Nutrition Society, the estimated sugar intake from sugar-containing foods based on 2,000 kilocalories is 67 g, or 13% of total energy. Based on the data available on the risk of obesity, hypertension, hyperlipidemia, insulin resistance, and metabolic syndrome from analysis of the Korean NHANES, there was insufficient evidence to set a UL for total sugar, but serum LDL cholesterol and obesity tended to increase when more than 20-25% of energy came from total sugar consumed with a high-carbohydrate diet. Therefore, the committee set the Acceptable Macronutrient Distribution Range for total sugar at 10-20% of total energy intake.
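
As a quick check on the 67 g figure cited above (a straightforward calculation, assuming the standard energy value of 4 kcal per gram of sugar):

$$ 67\ \mathrm{g} \times 4\ \mathrm{kcal/g} = 268\ \mathrm{kcal}, \qquad \frac{268\ \mathrm{kcal}}{2000\ \mathrm{kcal}} \approx 13\%\ \text{of total energy}. $$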

Statistical Analysis of Quality Control Data of Blood Components (혈액성분제제 품질관리 자료의 통계학적인 비교)

  • Kim, Chongahm;Seo, Dong Hee;Kwon, So Yong;Oh, Yuong Chul;Lim, Chae Seung;Jang, Choong Hoon;Kim, Soonduck
    • Korean Journal of Clinical Laboratory Science
    • /
    • v.36 no.1
    • /
    • pp.19-26
    • /
    • 2004
  • With the increasing domestic use of blood components, quality control of blood components is necessary to ensure good products. The purpose of this study was to provide an index for producing good products by comparing the accuracy and validity of the distributions of quality control data. The mean, standard deviation, 95% confidence interval, and degree of normality of the data were calculated with a univariate procedure, and the monthly means of each blood center for each item were compared by analysis of variance (ANOVA). When the mean values differed, Duncan's multiple range test was performed to confirm the difference. Finally, the accuracy and validity of the quality data were assessed with a contingency table test. The quality data of the five blood centers followed a normal distribution and fell within the acceptable range. For each blood center, the monthly means of hematocrit (Hct), platelet count (PLT), and pH were not significantly different, except for Hct at center C, PLT at centers B and D, and pH at center A. The quality data for each item were graded into six levels. The comparative analysis showed that the monthly mean Hct of centers C and E was significantly higher than that of centers D, B, and A, and the monthly mean PLT of center A and pH of center C were significantly higher than those of the other centers. In terms of the accuracy and validity of the quality control data, center C for Hct, center A for PLT, and center C for pH were better than the others. Overall, blood center C was the most satisfactory and stable in blood component quality control. If the quality control method used at center C were adopted by the other blood centers, their level of blood component preparation would improve.
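
The comparison pipeline described above can be sketched in a few lines of Python on synthetic monthly data; Tukey's HSD is used here in place of Duncan's multiple range test, which has no standard SciPy/statsmodels implementation, and the center means are hypothetical.

```python
# Minimal sketch: one-way ANOVA across blood centers plus a post-hoc comparison.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
centers = {"A": 41.0, "B": 42.5, "C": 44.0, "D": 42.0, "E": 43.5}   # hypothetical Hct means
data = {c: rng.normal(mu, 1.0, size=12) for c, mu in centers.items()}  # 12 monthly values each

# One-way ANOVA across the five blood centers
f_stat, p_value = f_oneway(*data.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post-hoc pairwise comparison when the ANOVA is significant
if p_value < 0.05:
    values = np.concatenate(list(data.values()))
    groups = np.repeat(list(data.keys()), 12)
    print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```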

Classification and discrimination of excel radial charts using the statistical shape analysis (통계적 형상분석을 이용한 엑셀 방사형 차트의 분류와 판별)

  • Seungeon Lee;Jun Hong Kim;Yeonseok Choi;Yong-Seok Choi
    • The Korean Journal of Applied Statistics
    • /
    • v.37 no.1
    • /
    • pp.73-86
    • /
    • 2024
  • A radial chart in Excel is a very useful graphical method for conveying information about numerical data. However, it is not easy to discriminate or classify many individuals with it. In this case, after representing each individual of a radial chart as a shape, shape analysis can be applied. For a radial chart, as many landmarks are formed as there are variables representing the characteristics of the object, so we consider the shape obtained by connecting them with lines. If the shape becomes complicated because of a large number of variables, it is difficult to grasp even when visualized as a radial chart. Principal component analysis (PCA) is therefore performed on the variables to create a visually effective shape. Classification tables and classification rates are examined by applying traditional discriminant analysis, support vector machines (SVM), and artificial neural networks (ANN), before and after PCA. In addition, the difference in discrimination between generalized Procrustes analysis (GPA) coordinates and Bookstein coordinates is compared. Bookstein coordinates, which are obtained by standardizing the position, rotation, and scale of the shape with respect to baseline landmarks, show a higher classification rate than GPA coordinates.
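
A minimal sketch of the PCA-then-classify step on a hypothetical feature matrix (rows are individuals, columns are radial-chart variables); the shape-coordinate step (GPA or Bookstein registration) is omitted here.

```python
# Minimal sketch: PCA followed by SVM classification with cross-validated accuracy.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 8))          # 60 individuals, 8 radial-chart variables (toy data)
y = rng.integers(0, 2, size=60)       # two hypothetical classes

clf = make_pipeline(StandardScaler(), PCA(n_components=3), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print("classification rate (5-fold CV):", scores.mean())
```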

MRQUTER : A Parallel Qualitative Temporal Reasoner Using MapReduce Framework (MRQUTER: MapReduce 프레임워크를 이용한 병렬 정성 시간 추론기)

  • Kim, Jonghoon;Kim, Incheol
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.5 no.5
    • /
    • pp.231-242
    • /
    • 2016
  • In order to keep up with rapid changes in Web information, it is necessary to extend current Web technologies to represent both the valid time and location of each fact and piece of knowledge, and to reason about their relationships. Until recently, most research on qualitative temporal reasoning was conducted at laboratory scale, dealing with small knowledge bases. In this paper, we propose the design and implementation of a parallel qualitative temporal reasoner, MRQUTER, which can reason over Web-scale large knowledge bases. This parallel temporal reasoner was built on a Hadoop cluster using the MapReduce parallel programming framework. It decomposes the entire qualitative temporal reasoning process into several MapReduce jobs, such as the encoding and decoding job, the inverse and equal reasoning job, the transitive reasoning job, and the refining job, and applies optimization techniques to each component reasoning job, which is implemented as a pair of Map and Reduce functions. In experiments using large benchmark temporal knowledge bases, MRQUTER shows high reasoning performance and scalability.
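
To illustrate how one such job can be phrased as a Map/Reduce pair, the following plain-Python sketch performs inverse reasoning over temporal relations with an in-memory shuffle; the relation names and data are illustrative and do not reproduce MRQUTER's actual Hadoop implementation or its encoded representation.

```python
# Minimal sketch: one MapReduce-style job (inverse reasoning over temporal relations).
from collections import defaultdict

INVERSE = {"before": "after", "after": "before",
           "meets": "metBy", "metBy": "meets", "equal": "equal"}

def map_inverse(triple):
    """Emit the original relation and its inverse, keyed by the subject interval."""
    s, rel, o = triple
    yield s, (rel, o)
    yield o, (INVERSE[rel], s)

def reduce_dedup(key, values):
    """Collapse duplicate (relation, interval) pairs produced for one key."""
    return key, sorted(set(values))

if __name__ == "__main__":
    triples = [("i1", "before", "i2"), ("i2", "meets", "i3")]
    shuffled = defaultdict(list)          # stands in for the MapReduce shuffle phase
    for t in triples:
        for k, v in map_inverse(t):
            shuffled[k].append(v)
    for k, vs in shuffled.items():
        print(reduce_dedup(k, vs))
```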

An Experimental Study of In-Mold Coating of Automotive Armrests (자동차 암레스트의 인몰드코팅에 관한 실험적 연구)

  • Park, Jong Rak;Lee, Ho Sang
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.39 no.7
    • /
    • pp.687-692
    • /
    • 2015
  • A mold design for in-mold coating was developed to achieve simultaneous coating and injection molding of an automotive armrest. The developed mold includes one core and two cavities, consisting of a substrate cavity and a coating cavity. The core was attached to a movable plate, and the two cavities were mounted on a plate sliding within a stationary plate. In a two-step process, the part was first injection molded and then, with the aid of a sliding table, transferred to the coating cavity. The materials used were PC/ABS for the substrate and a two-component polyurethane for the coating. Experiments were conducted at varying flow rates to investigate the mixing characteristics. As the flow rate increased, the mixing improved. In addition, the number of bubbles appearing on the substrate surface decreased as the weight of the injected coating material increased.

Improvements to the Terrestrial Hydrologic Scheme in a Soil-Vegetation-Atmosphere Transfer Model (토양-식생-대기 이송모형내의 육지수문모의 개선)

  • Choi, Hyun-Il;Jee, Hong-Kee;Kim, Eung-Seok
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2009.05a
    • /
    • pp.529-534
    • /
    • 2009
  • Climate models, both global and regional, have increased in sophistication and are being run at increasingly higher resolutions. The Land Surface Models (LSMs) coupled to these climate models have evolved from simple bucket models to sophisticated Soil-Vegetation-Atmosphere Transfer (SVAT) schemes needed to support complex linkages and processes. However, some underpinnings of the terrestrial hydrologic parameterizations that are crucial to the prediction of surface water and energy fluxes cause model errors that often manifest as non-linear drifts in the dynamic response of land surface processes. This calls for improved parameterizations of key processes in the terrestrial hydrologic scheme in order to improve model predictability of surface water and energy fluxes. The Common Land Model (CLM), one of the state-of-the-art LSMs, is the land component of the Community Climate System Model (CCSM). However, CLM also has energy and water biases resulting from deficiencies in some parameterizations related to hydrological processes. This research presents the implementation of a selected set of parameterizations and their effects on runoff prediction. The modifications consist of new parameterizations for soil hydraulic conductivity, water table depth, frozen soil, soil water availability, and topographically controlled baseflow. The results from a set of offline simulations are compared with observed data to assess the performance of the new model. It is expected that the advanced terrestrial hydrologic scheme coupled to the current CLM can improve model predictability and thus better predict runoff, which has a large impact on the surface water and energy balance that is crucial to climate variability and change studies.
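
For context, one widely used form of topographically controlled baseflow (a SIMTOP-style parameterization, shown only to illustrate the kind of scheme the abstract refers to, not necessarily the exact formulation adopted in this work) ties subsurface drainage to water table depth:

$$ q_{\mathrm{drai}} = q_{\mathrm{drai,max}}\, e^{-f z_{\nabla}}, $$

where $q_{\mathrm{drai,max}}$ is the maximum subsurface drainage rate, $f$ is a decay factor, and $z_{\nabla}$ is the water table depth.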

18FDG Synthesis and Supply: a Journey from Existing Centralized to Future Decentralized Models

  • uz Zaman, Maseeh;Fatima, Nosheen;Sajjad, Zafar;Zaman, Unaiza;Tahseen, Rabia;Zaman, Areeba
    • Asian Pacific Journal of Cancer Prevention
    • /
    • v.15 no.23
    • /
    • pp.10057-10059
    • /
    • 2015
  • Positron emission tomography (PET), as the functional component of current hybrid imaging (such as PET/CT or PET/MRI), seems set to dominate the horizon of medical imaging in the coming decades. $^{18}$Fluorodeoxyglucose ($^{18}$FDG) is the most commonly used probe in oncology, and also in cardiology and neurology, around the globe. However, the major capital cost and exorbitant running expenditure of low- to medium-energy cyclotrons (about 20 MeV) and radiochemistry units are the main reasons for the small number of cyclotrons despite the mushrooming growth of PET scanners. This fact, together with the relatively long half-life of $^{18}$F (110 minutes), has paved the way for a centralized model in which $^{18}$FDG is produced by commercial PET radiopharmacies and the finished product (a multi-dose vial with tungsten shielding) is dispensed to customers that have only PET scanners. This indeed reduces cost, but it depends on the timely arrival of daily shipments, as a delay for any reason results in cancellation or rescheduling of PET procedures. In recent years, industry and academia have taken a step forward by producing low-energy, table-top cyclotrons with compact and automated radiochemistry units (lab-on-chip). This decentralized strategy enables users to produce on-demand doses of PET probes themselves, at reasonably low cost, using an automated and user-friendly technology. This technological development would provide a real impetus to the availability of a complete PET-based molecular imaging setup at an affordable cost in developing countries.
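
As a simple illustration of why shipment delays matter, standard radioactive decay with the 110-minute half-life cited above gives

$$ A(t) = A_0 \left(\tfrac{1}{2}\right)^{t/T_{1/2}}, \qquad A(120\ \mathrm{min}) = A_0 \left(\tfrac{1}{2}\right)^{120/110} \approx 0.47\, A_0, $$

so a two-hour delay roughly halves the usable activity of a shipped dose.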

A Database Design for Remote Maintenance of Navigation and Communication Equipments in a Vessel (선박 항해통신장비 원격유지보수를 위한 데이터베이스 설계)

  • Kim, Ju-young;Ok, Kyeong-suk;Kim, Ju-won;Cho, Ik-soon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.21 no.11
    • /
    • pp.2052-2060
    • /
    • 2017
  • A SOLAS ship must carry at least 83 different types of equipment based on the SFI group codes, each of which consists of several to dozens of components. During ship operation it is necessary to ensure the normal operation of such equipment, and remote maintenance is in high demand for immediate repair in the event of an equipment fault. This study proposes a suitable classification system and derives a database structure for the remote maintenance of navigation and communication equipment. As a result of this study, the classification system of equipment is layered into equipment type, model, and component, and the main tables in the database consist of FMEA, service history, case data collected through Q&A, and preventive maintenance. A database was constructed for 140 navigation and communication equipment models and 750 components. To evaluate the practical effects, service engineers assessed its usefulness using a cloud app.
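
A minimal SQLite sketch of the layered classification (equipment type, model, component) and the main tables named in the abstract; all table and column names are hypothetical, inferred from the abstract rather than taken from the paper.

```python
# Minimal sketch: hypothetical schema for the remote-maintenance database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE equipment_type  (type_id INTEGER PRIMARY KEY, sfi_group TEXT, name TEXT);
CREATE TABLE equipment_model (model_id INTEGER PRIMARY KEY, type_id INTEGER REFERENCES equipment_type, maker TEXT, name TEXT);
CREATE TABLE component       (component_id INTEGER PRIMARY KEY, model_id INTEGER REFERENCES equipment_model, name TEXT);

CREATE TABLE fmea                   (id INTEGER PRIMARY KEY, component_id INTEGER REFERENCES component, failure_mode TEXT, effect TEXT, action TEXT);
CREATE TABLE service_history        (id INTEGER PRIMARY KEY, component_id INTEGER REFERENCES component, reported_at TEXT, symptom TEXT, repair TEXT);
CREATE TABLE qa_case                (id INTEGER PRIMARY KEY, component_id INTEGER REFERENCES component, question TEXT, answer TEXT);
CREATE TABLE preventive_maintenance (id INTEGER PRIMARY KEY, component_id INTEGER REFERENCES component, interval_days INTEGER, task TEXT);
""")
print(conn.execute("SELECT name FROM sqlite_master WHERE type='table'").fetchall())
```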

Segmentation and Contents Classification of Document Images Using Local Entropy and Texture-based PCA Algorithm (지역적 엔트로피와 텍스처의 주성분 분석을 이용한 문서영상의 분할 및 구성요소 분류)

  • Kim, Bo-Ram;Oh, Jun-Taek;Kim, Wook-Hyun
    • The KIPS Transactions:PartB
    • /
    • v.16B no.5
    • /
    • pp.377-384
    • /
    • 2009
  • This paper proposes a new algorithm for classifying the various contents of document images, such as text, figures, graphs, and tables: contents are classified using texture-based PCA, and document images are segmented using a local entropy-based histogram. Local entropy and its histogram make the binarization of document images not only robust to various transformations and noise, but also simple and less time-consuming. The texture-based PCA algorithm applied to each segmented region exploits the fact that the different contents of a document image carry different texture information. As a result, no pre-defined structural information needs to be established, and fast and efficient classification is achieved. The results demonstrate that the proposed method segments and classifies various images better than previous methods and is also more efficient.
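
A minimal scikit-image sketch of the local-entropy step described above; the input file name is hypothetical, and the subsequent texture-based PCA per region is only indicated in comments.

```python
# Minimal sketch: local-entropy map of a document page, then a simple threshold.
from skimage import io, img_as_ubyte
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.filters import threshold_otsu

page = img_as_ubyte(io.imread("document_page.png", as_gray=True))  # hypothetical input file
ent = entropy(page, disk(9))            # local entropy in a radius-9 neighborhood

# Binarize on the entropy map: high-entropy regions correspond to printed content.
mask = ent > threshold_otsu(ent)
print("content pixels:", int(mask.sum()), "of", mask.size)

# Each connected region in `mask` would then be described by texture features
# and projected with PCA before being classified as text, figure, graph, or table.
```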