• Title/Summary/Keyword: Quality Output


A Study on the Component-based GIS Development Methodology using UML (UML을 활용한 컴포넌트 기반의 GIS 개발방법론에 관한 연구)

  • Park, Tae-Og;Kim, Kye-Hyun
    • Journal of Korea Spatial Information System Society / v.3 no.2 s.6 / pp.21-43 / 2001
  • The environment for developing information systems, including GIS, has changed drastically in recent years with respect to software complexity and diversity, distributed processing, and network computing. This has shifted the software development paradigm toward Component Based Development (CBD) grounded in object-oriented technology. To support this movement, the OGC has released abstract and implementation standards that enable access to services for heterogeneous geographic information processing. Developing GIS applications for municipal governments based on component technology has also become a common trend in Korea. It is therefore imperative to adopt component technology in light of these movements, yet little related research has been done. This research proposes a component-based GIS development methodology, ATOM (Advanced Technology Of Methodology), and verifies its applicability through a case study. ATOM can be used both to develop components themselves and to build enterprise GIS, supporting the whole software development life cycle based on conventional reusable components. ATOM defines a stepwise development process comprising the activities and work units of each phase. It also provides inputs and outputs, standardized items and specifications for documentation, and detailed instructions for easy understanding of the methodology. The major characteristic of ATOM is that it is a component-based development methodology that considers the numerous features of the GIS domain in order to generate components with a simple function, minimal size, and maximum reusability. The case study validating the applicability of ATOM showed it to be an efficient tool for generating components, providing relatively systematic and detailed guidelines for component development. ATOM should therefore improve the quality and productivity of GIS application development and eventually contribute to the automatic production of GIS software, our final goal.


Performance Measurement of Diagnostic X Ray System (진단용 X선 발생장치의 성능 측정)

  • You, Ingyu;Lim, Cheonghwan;Lee, Sangho;Lee, Mankoo
    • Journal of the Korean Society of Radiology / v.6 no.6 / pp.447-454 / 2012
  • To examine the performance of diagnostic X-ray systems, we tested linearity, reproducibility, and the Half Value Layer (HVL). Linearity was examined with four irradiations under a given condition; we recorded the radiation level and calculated the mR/mAs. The measured linearity coefficient should not exceed 0.1; a value above 0.1 indicates degraded linearity. Reproducibility was analyzed over ten irradiations at 80 kVp, 200 mA, 20 mAs and at 120 kVp, 300 mA, 8 mAs. The measured values were entered into the coefficient-of-variation (CV) equation; reproducibility is acceptable if the CV is lower than 0.05. The HVL was measured with three irradiations without a filter, after which additional filters of 0, 1, 2, and 4 mm thickness were inserted until the measured value fell below half of the unfiltered value. We tested the linearity, reproducibility, and HVL of five diagnostic X-ray generators in this facility. The linearity of generators No. 1 and No. 5 did not satisfy the radiation safety standard around 300~400 mA and 100~200 mA, respectively. The HVL of generator No. 1 was unsatisfactory at 80 kVp. Outputs were higher for the three-phase equipment than for the single-phase equipment. Based on these results, the old generators need maintenance and component replacement, which would contribute to more exact diagnoses by increasing image quality and decreasing patients' radiation exposure.
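
The two acceptance checks described above can be sketched as follows; this is a minimal sketch that assumes the linearity coefficient is the spread of the mR/mAs readings relative to their sum and that reproducibility is the coefficient of variation (the 0.1 and 0.05 thresholds come from the abstract; function names are illustrative):

```python
import statistics

def linearity_coefficient(outputs_mR_per_mAs):
    # Spread of mR/mAs readings relative to their sum;
    # the abstract's criterion is that this must not exceed 0.1.
    hi, lo = max(outputs_mR_per_mAs), min(outputs_mR_per_mAs)
    return (hi - lo) / (hi + lo)

def reproducibility_cv(outputs):
    # Coefficient of variation over repeated exposures;
    # reproducibility is acceptable below 0.05.
    return statistics.stdev(outputs) / statistics.mean(outputs)
```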

A Study on the Ecological Indices for the Assessment of the Function and Maturity of Artificial Reefs (인공어초의 기능도와 성숙도 평가를 위한 생태학적 지수에 대한 연구)

  • Yoo, Jae-Won;Hong, Hyun-Pyo;Hwang, Jae-Youn;Lee, Min-Soo;Lee, Yong-Woo;Lee, Chae-Sung;Hwang, Sun-Do
    • The Sea: JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY / v.19 no.1 / pp.8-34 / 2014
  • We reviewed foreign evaluation systems based on macrobenthic and macroalgal communities and developed a system composed of ecological indices to evaluate the functionality (FI, Functional Index; estimation of stability and productivity) and maturity (MI, Maturity Index; comparison with biological parameters of natural reefs) of artificial reefs by comparison with the status of adjacent natural reefs in Korean coastal waters. The evaluation system was applied to natural and artificial reefs/reef-planned areas established in the five marine ranching areas (Bangnyeong-Daechung, Yeonpyung, Taean, Seocheon, and Buan) on the west coast of Korea. On average, the FI ranged between 31.6% (Bangnyeong-Daechung) and 72.5% (Buan), and the MI between 53.1% (Seocheon) and 76.9% (Taean). Evaluation of the artificial reefs by the two indices showed the most appropriate status in Taean. The FI values of adjacent artificial and natural reefs showed a significant linear relationship ($r^2=0.83$, p=0.01), indicating that the local status of the biological community may be critical in determining the functionality of artificial reefs. We suggest an integrative but preliminary evaluation system for artificial reefs in this study. Its output may serve as a tool for environment/resource managers and policy makers responsible for the effective use of funds and for decision making. Given its importance, accuracy should be enhanced and improved by (1) continuous validation of the evaluation system and rescaling of the indicator criteria, (2) vigorous use of observation and experience through application and data accumulation, and (3) development and testing of new indicators.

Predicting the Performance of Recommender Systems through Social Network Analysis and Artificial Neural Network (사회연결망분석과 인공신경망을 이용한 추천시스템 성능 예측)

  • Cho, Yoon-Ho;Kim, In-Hwan
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.159-172 / 2010
  • The recommender system is one possible solution for assisting customers in finding the items they would like to purchase. To date, a variety of recommendation techniques have been developed. One of the most successful is Collaborative Filtering (CF), which has been used in a number of applications such as recommending web pages, movies, music, articles, and products. CF identifies customers whose tastes are similar to those of a given customer and recommends items those customers have liked in the past. Numerous CF algorithms have been developed to increase the performance of recommender systems; broadly, there are memory-based CF algorithms, model-based CF algorithms, and hybrid CF algorithms that combine CF with content-based techniques or other recommender systems. While many researchers have focused on improving CF performance, the theoretical justification of CF algorithms is lacking: much remains unknown about how CF works. Furthermore, the relative performances of CF algorithms are known to be domain- and data-dependent. It is very time-consuming and expensive to implement and launch a CF recommender system, and a system unsuited to the given domain provides poor-quality recommendations that easily annoy customers. Predicting the performance of CF algorithms in advance is therefore practically important. In this study, we propose an efficient approach to predicting the performance of CF, applying Social Network Analysis (SNA) and Artificial Neural Networks (ANN) to develop our prediction model. CF can be modeled as a social network in which customers are nodes and purchase relationships between customers are links. SNA facilitates exploration of the topological properties of the network structure that are implicit in the data used for CF recommendations.
An ANN model is developed through an analysis of network topology: network density, inclusiveness, clustering coefficient, network centralization, and Krackhardt's efficiency. Network density, expressed as a proportion of the maximum possible number of links, captures the density of the whole network, while the clustering coefficient captures the degree to which the network contains localized pockets of dense connectivity. Inclusiveness refers to the number of nodes included within the various connected parts of the social network. Centralization reflects the extent to which connections are concentrated in a small number of nodes rather than distributed equally among all nodes. Krackhardt's efficiency characterizes how dense the social network is beyond what is barely needed to keep the group even indirectly connected. We use these social network measures as the input variables of the ANN model, and the recommendation accuracy measured by the F1-measure as the output variable. To evaluate the effectiveness of the ANN model, we used sales transaction data from H department store, one of the well-known department stores in Korea. A total of 396 experimental samples were gathered; 40%, 40%, and 20% of them were used for training, testing, and validation, respectively. Five-fold cross-validation was also conducted to enhance the reliability of the experiments. The input variable measurement process consists of three steps: analysis of customer similarities, construction of a social network, and analysis of social network patterns. We used Net Miner 3 and UCINET 6.0 for SNA, and Clementine 11.1 for ANN modeling. The experiments showed that the ANN model has 92.61% estimated accuracy and 0.0049 RMSE. Thus, our prediction model can help decide whether CF will be useful for a given application with certain data characteristics.
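
Four of the five network measures named above can be sketched over a plain adjacency-set representation of the customer network; this is a minimal sketch (Krackhardt's efficiency is omitted for brevity, and function and variable names are illustrative, not from the paper):

```python
def sna_features(adj):
    # adj: dict mapping node -> set of neighbour nodes (undirected graph).
    n = len(adj)
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    n_links = sum(deg.values()) // 2

    # Density: links as a proportion of the maximum possible number of links.
    density = n_links / (n * (n - 1) / 2)

    # Inclusiveness: share of nodes connected to at least one other node.
    inclusiveness = sum(1 for d in deg.values() if d > 0) / n

    # Local clustering: fraction of a node's neighbour pairs that are linked.
    def local_cc(v):
        nbrs = list(adj[v])
        k = len(nbrs)
        if k < 2:
            return 0.0
        links = sum(1 for i in range(k) for j in range(i + 1, k)
                    if nbrs[j] in adj[nbrs[i]])
        return 2 * links / (k * (k - 1))

    clustering = sum(local_cc(v) for v in adj) / n

    # Degree centralization: concentration of links on a few nodes
    # (1.0 for a star network, 0.0 when all degrees are equal).
    max_deg = max(deg.values())
    centralization = (sum(max_deg - d for d in deg.values())
                      / ((n - 1) * (n - 2)) if n > 2 else 0.0)

    return {"density": density, "inclusiveness": inclusiveness,
            "clustering": clustering, "centralization": centralization}
```

A star-shaped purchase network, for example, is maximally centralized with zero clustering, while a fully connected clique has density and clustering of 1 and centralization of 0.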

Building the Process for Reducing Whole Body Bone Scan Errors and its Effect (전신 뼈 스캔의 오류 감소를 위한 프로세스 구축과 적용 효과)

  • Kim, Dong Seok;Park, Jang Won;Choi, Jae Min;Shim, Dong Oh;Kim, Ho Seong;Lee, Yeong Hee
    • The Korean Journal of Nuclear Medicine Technology / v.21 no.1 / pp.76-82 / 2017
  • Purpose: The whole-body bone scan is one of the most frequently performed examinations in nuclear medicine. Both anterior and posterior views are acquired simultaneously, but occasionally a lesion is difficult to distinguish from these two views alone. In such cases, accurate localization of the lesion through SPECT/CT or additional static images is important. Various improvement activities have therefore been carried out to enhance the competence of technologists. In this study, we investigate the effect of technologist training and a standardized work process on bone scan error reduction. Materials and Methods: Several systems were introduced in sequence for the new process: first, education and testing with physicians; second, classification of patients expected to need further scanning, with a pre-filtration system that allows technologists to check in advance; and finally, a communication system called NMQA. From January 2014 to December 2016, we examined whole-body bone scan patients who visited the Department of Nuclear Medicine, Asan Medical Center, Seoul, Korea. Results: We investigated errors based on the bone scan NMQA messages sent from January 2014 to December 2016, calculating the number of examinations for which an NMQA was transmitted as a percentage of all bone scans during the survey period. The annual counts were 141 cases in 2014, 88 in 2015, and 86 in 2016; the NMQA rate decreased from 0.88% in 2014 to 0.53% in 2015 and 0.45% in 2016. Conclusion: The incidence of NMQA has decreased since the new process was applied in 2014. However, data must be accumulated continuously, since the current data are insufficient to confirm the usefulness statistically. This study confirmed the necessity of standardized work and education for improving the quality of bone scan images, and continued research and attention will be needed to keep the process up to date.


Effects of streambed geomorphology on nitrous oxide flux are influenced by carbon availability (하상 미지형에 따른 N2O 발생량 변화 효과에 대한 탄소 가용성의 영향)

  • Ko, Jongmin;Kim, Youngsun;Ji, Un;Kang, Hojeong
    • Journal of Korea Water Resources Association / v.52 no.11 / pp.917-929 / 2019
  • Denitrification in streams is of great importance because it is essential for the amelioration of water quality and the accurate estimation of $N_2O$ budgets. Denitrification is a major biological source or sink of $N_2O$, an important greenhouse gas; it is a multi-step respiratory process that converts nitrate ($NO_3{^-}$) to gaseous forms of nitrogen ($N_2$ or $N_2O$). In aquatic ecosystems, complex interactions of flooding conditions, substrate supply, and hydrodynamic and biogeochemical properties modulate the extent of the multi-step reactions behind the $N_2O$ flux. Although water flow in the streambed and residence time affect the reaction output, the effects of the complex interaction of hydrodynamic, geomorphological, and biogeochemical controls on the magnitude of denitrification in streams remain elusive. In this work, we built a two-dimensional water flow channel and measured the $N_2O$ flux from channel sediment with different bed geomorphology using static closed chambers. Two independent experiments were conducted with an identical flume and geomorphology but with sediments differing in dissolved organic carbon (DOC). The experimental flume was a circulation channel through which the effluent flows back, with a size of $37m{\times}1.2m{\times}1m$. Five days before the experiment began, urea fertilizer (46% N) was added to the sediment at a rate of $0.5kg\;N/m^2$. A sand dune (1 m long and 0.15 m high) was built at the middle of the channel to simulate variation in microtopography. In the high-DOC experiment, the $N_2O$ flux increased in the direction of flow, with the highest flux ($14.6{\pm}8.40{\mu}g\;N_2O-N/m^2\;hr$) measured on the slope on the back side of the sand dune, followed by decreases downstream. In contrast, the low-DOC sediment did not show such geomorphological variation. We found that even though topographic variation influenced the $N_2O$ flux and chemical properties, this effect is highly constrained by carbon availability.
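
The flux from a static closed chamber is conventionally derived from the headspace accumulation rate scaled by chamber geometry; the abstract does not give the formula, so the sketch below uses the standard form under that assumption (names and units are illustrative):

```python
def chamber_flux(dc_dt, volume_m3, area_m2):
    # Standard static-chamber flux: rate of change of headspace
    # concentration (e.g. ug N2O-N per m^3 per hr) times chamber
    # volume, divided by the sediment area the chamber covers,
    # giving a flux in ug N2O-N per m^2 per hr.
    return dc_dt * volume_m3 / area_m2
```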

Calculation of Soil Moisture and Evapotranspiration for KLDAS(Korea Land Data Assimilation System) using Hydrometeorological Data Set (수문기상 데이터 세트를 이용한 KLDAS(Korea Land Data Assimilation System)의 토양수분·증발산량 산출)

  • PARK, Gwang-Ha;LEE, Kyung-Tae;KYE, Chang-Woo;YU, Wan-Sik;HWANG, Eui-Ho;KANG, Do-Hyuk
    • Journal of the Korean Association of Geographic Information Studies / v.24 no.4 / pp.65-81 / 2021
  • In this study, soil moisture and evapotranspiration were calculated throughout South Korea using the Korea Land Data Assimilation System (KLDAS) of the Korea-Land Surface Information System (K-LIS), built on the basis of the Land Information System (LIS). The hydrometeorological data sets used to drive K-LIS and build KLDAS are MERRA-2 (Modern-Era Retrospective analysis for Research and Applications, version 2), GDAS (Global Data Assimilation System), and ASOS (Automated Synoptic Observing System) data. Since ASOS is point-based observation, it was converted into grid data with a spatial resolution of 0.125° for application to KLDAS (ASOS-S, ASOS-Spatial). Comparing the hydrometeorological data sets applied to KLDAS against ground-based observations, the mean R² values for ASOS-S, MERRA-2, and GDAS were 0.994, 0.967, and 0.975 for temperature; 0.995, 0.940, and 0.942 for pressure; 0.993, 0.895, and 0.915 for humidity; and 0.897, 0.682, and 0.695 for rainfall, respectively. For the hydrologic outputs, the mean R² for soil moisture was 0.493 (ASOS-S), 0.56 (MERRA-2), and 0.488 (GDAS), and for evapotranspiration 0.473 (ASOS-S), 0.43 (MERRA-2), and 0.615 (GDAS). MERRA-2 and GDAS are quality-controlled data sets built from multiple satellite and ground observations, whereas ASOS-S is grid data built from observations at 103 points; we therefore conclude that accuracy is lowered by the error arising from the distances between observation points. If more ASOS observations are secured and applied in the future, the gridding error is expected to decrease and the accuracy to increase.
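
The R² comparisons above can be reproduced with the squared Pearson correlation, one common convention for this statistic in hydrology; the paper does not state which definition it uses, so this convention is an assumption:

```python
import math

def r_squared(obs, sim):
    # Squared Pearson correlation between an observed and a
    # simulated series of equal length.
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    cov = sum((a - mo) * (b - ms) for a, b in zip(obs, sim))
    so = math.sqrt(sum((a - mo) ** 2 for a in obs))
    ss = math.sqrt(sum((b - ms) ** 2 for b in sim))
    return (cov / (so * ss)) ** 2
```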

Space Design Effect on Marketing - Concentrating on B to B Transaction - (공간 디자인이 마케팅에 미치는 영향 - 전문전시회에서 B to B 거래중심으로 -)

  • Kim, Young Soo;Jeong, Dong Bin;Kim, Kyong Hoon
    • Korea Science and Art Forum / v.20 / pp.147-158 / 2015
  • This study approached the industrial exhibition space, a medium of marketing communication, from the positions of both enterprises and consumers through the output of space design, focusing on B2B transactions at specialized exhibitions. It inquired into what factors should be considered along with space design by interpreting the exhibition participation goals and space design of enterprises that supply capital goods, components, related technologies, materials, and the like. The study aimed to derive the direct and indirect effects of space design on marketing by analyzing the correlation between space design and participating enterprises' marketing. Despite the marketing effect of exhibitions demonstrated by preceding research, participation expenses place a considerable burden on enterprises. In particular, booth design, which accounts for the largest proportion of participation expenses, was found to have insufficient influence on visitors, owing to its declining importance among the diverse factors that shape a visitor's decision to visit a booth. Regardless of the participating enterprises' business category, the standard of the exhibits ranked as the most important consideration in visiting a booth; even by business category, the standard of booth design rarely influenced booth visits. Booth design had a positive influence on preference for a participating enterprise, but its influence on product purchase, business talks and contacts, or price was found to be extremely low. It is difficult to judge the marketing success or failure of an exhibition by the form and standard of booth design. Rather, this study infers that weight should be placed on the qualitative excellence of an exhibition: participation by enterprises with excellent technologies, exhibits of a higher standard, and high-quality visitors with purchasing power. It suggests that optimally adjusting the proportion of space design in participation expenses is a more effective way to increase an exhibition's marketing effectiveness. The limitation of this study, which analyzed only exhibition visitors, should be supplemented through follow-up research on the participating enterprises.

Evaluation of Cryptosporidium Disinfection by Ozone and Ultraviolet Irradiation Using Viability and Infectivity Assays (크립토스포리디움의 활성/감염성 판별법을 이용한 오존 및 자외선 소독능 평가)

  • Park Sang-Jung;Cho Min;Yoon Je-Yong;Jun Yong-Sung;Rim Yeon-Taek;Jin Ing-Nyol;Chung Hyen-Mi
    • Journal of Life Science / v.16 no.3 s.76 / pp.534-539 / 2006
  • In the ozone disinfection unit process, using a piston-type batch reactor with continuous ozone analysis by a flow injection analysis (FIA) system, the CT values for 1-log inactivation of Cryptosporidium parvum by the DAPI/PI and excystation viability assays were $1.8{\sim}2.2\;mg/L{\cdot}min$ at $25^{\circ}C$ and $9.1mg/L{\cdot}min$ at $5^{\circ}C$, respectively. At the low temperature, the ozone requirement rises $4{\sim}5$ times to achieve the same level of disinfection as at room temperature. In a 40 L pilot plant with continuous flow and a constant 5-minute retention time, disinfection effects were evaluated using the excystation, DAPI/PI, and cell infection methods simultaneously. At a CT value of about $8mg/L{\cdot}min$, about 0.2-log inactivation of Cryptosporidium was estimated by the DAPI/PI and excystation assays and 1.2-log by the cell infectivity assay. The difference between the DAPI/PI and excystation assays in evaluating CT values was not significant in either the piston or the pilot reactor experiments. However, in the pilot study there was a significant difference between the viability assays, based on intact cell wall structure and function, and the infectivity assay, based on the development of oocysts into sporozoites and merozoites; the developmental stages should be more sensitive to ozone oxidation than the cell wall intactness of oocysts. The difference in CT values estimated by the viability assays between the two studies may partly stem from underestimation of the residual ozone concentration due to manual monitoring in the pilot study, or from the differences in reactor scale (50 mL vs 40 L) and type (batch vs continuous). In the UV irradiation process, the adequate It value (UV dose) to disinfect 1 and 2 log of Cryptosporidium was 25 $mWs/cm^2$ and 50 $mWs/cm^2$, respectively, at $25^{\circ}C$ by DAPI/PI. At $5^{\circ}C$, 40 $mWs/cm^2$ was required for 1-log and 80 $mWs/cm^2$ for 2-log disinfection. The roughly 60% increase in the required It value to compensate for the $20^{\circ}C$ drop in temperature is thought to be due to the low-voltage, low-output lamp emitting weaker UV rays at lower temperatures.
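
Both dose metrics above reduce to an exposure (concentration or intensity) multiplied by contact time, paired with a log-scale reduction in viable organisms; a minimal sketch of the two calculations (function names are illustrative):

```python
import math

def ct_value(residual_mg_per_L, contact_min):
    # Ozone exposure: residual concentration (mg/L) x contact time (min),
    # giving CT in mg/L*min.
    return residual_mg_per_L * contact_min

def log_inactivation(n0, n):
    # Log10 reduction in viable (or infective) oocyst count:
    # 1-log = 90% inactivation, 2-log = 99%.
    return math.log10(n0 / n)
```

The UV It value is the analogous product of irradiance (mW/cm²) and exposure time (s), giving a dose in mWs/cm².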

Methods for Integration of Documents using Hierarchical Structure based on the Formal Concept Analysis (FCA 기반 계층적 구조를 이용한 문서 통합 기법)

  • Kim, Tae-Hwan;Jeon, Ho-Cheol;Choi, Joong-Min
    • Journal of Intelligence and Information Systems / v.17 no.3 / pp.63-77 / 2011
  • The World Wide Web is a very large distributed digital information space. From its origins in 1991, the web has grown to encompass diverse information resources such as personal home pages, online digital libraries, and virtual museums. Some estimates suggest that the web currently includes over 500 billion pages in the deep web. The ability to search and retrieve information from the web efficiently and effectively is an enabling technology for realizing its full potential. With powerful workstations and parallel processing technology, efficiency is not a bottleneck; in fact, some existing search tools sift through gigabyte-size precompiled web indexes in a fraction of a second. But retrieval effectiveness is a different matter. Current search tools retrieve too many documents, of which only a small fraction are relevant to the user query, and the most relevant documents do not necessarily appear at the top of the query output order. Nor can current search tools retrieve, from the gigantic volume of documents, the documents related to a retrieved document. The most important problem for many current search systems is to increase the quality of search: to provide related documents, and to keep the number of unrelated documents in the results as low as possible. For this problem, CiteSeer proposed Autonomous Citation Indexing (ACI) of articles on the World Wide Web. A citation index indexes the links between articles that researchers make when they cite other articles. Citation indexes are very useful for a number of purposes, including literature search and analysis of the academic literature. In this scheme, the references contained in academic articles give credit to previous work and provide a link between the "citing" and "cited" articles; a citation index indexes the citations an article makes, linking the article with the cited works.
Citation indexes were originally designed mainly for information retrieval. The citation links allow navigating the literature in unique ways: papers can be located independent of language and of the words in the title, keywords, or document, and a citation index allows navigation backward in time (the list of cited articles) and forward in time (which subsequent articles cite the current article?). However, CiteSeer cannot index links that researchers do not make, because it indexes only the links created when researchers cite other articles; for the same reason, it also does not scale easily. These problems motivate the design of a more effective search system. This paper presents a method that extracts the subject and predicate of each sentence in a document. Each document is converted into a tabular form in which the extracted predicates are checked against possible subjects and objects. From this table we build a hierarchical graph of the document and then integrate the graphs of the documents. The graph of the entire document set yields the area of each document relative to the integrated documents, and relations among the documents are marked by comparing these areas. We also propose a method for the structural integration of documents that retrieves documents from the graph, making it easier for users to find information. We compared the performance of the proposed approaches with the Lucene search engine using ranking formulas. As a result, the F-measure is about 60%, roughly 15% better than the baseline.
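
The F-measure used for this comparison is the weighted harmonic mean of precision and recall; a minimal sketch (the abstract does not specify a beta, so the balanced F1 is assumed as the default):

```python
def f_measure(precision, recall, beta=1.0):
    # Weighted harmonic mean of precision and recall; beta=1 gives F1,
    # beta>1 weights recall more heavily, beta<1 weights precision.
    if precision + recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
```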