• Title/Summary/Keyword: Model Repository


Assessment Guidelines for Decision Making of Implementation Strategy in Web Services Development Process (웹서비스 개발 프로세스에서 구현전략 결정을 위한 평가 지침)

  • Kim Yu-Kyung;Yun Hong-Ran;Park Jae-Nyun
    • Journal of KIISE: Software and Applications / v.33 no.5 / pp.460-469 / 2006
  • Various research and development efforts aim to integrate heterogeneous distributed systems for adoption in enterprise environments. However, when web service technologies are applied, existing software development methodologies are difficult to adopt directly because of the peculiar architecture of web services, which involves a service provider, a service requester, and a service repository. M4WSD (Method for Web Services Development) is a web service development process model that provides procedures and guidelines for developing web services based on a Use Case model elicited from the business domain during requirements analysis. In this paper, we focus on how to determine the key realization decisions for each service. The assessment guidelines help structure the problem of determining an implementation strategy.
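
The abstract does not reproduce the assessment guidelines themselves; as a minimal illustration of how an implementation-strategy decision can be structured, the Python sketch below scores candidate strategies against weighted criteria. All criterion names, weights, and ratings are hypothetical, not taken from M4WSD.

```python
# Hypothetical weighted-scoring sketch for choosing a web service
# implementation strategy; criteria and weights are illustrative only,
# not the actual M4WSD assessment guidelines.

CRITERIA_WEIGHTS = {              # assumed criteria, weights sum to 1.0
    "reuse_of_legacy_code": 0.40,
    "interoperability_need": 0.35,
    "development_cost": 0.25,
}

def score_strategy(ratings: dict[str, float]) -> float:
    """Weighted sum of per-criterion ratings (each rated 0..5)."""
    return sum(CRITERIA_WEIGHTS[c] * r for c, r in ratings.items())

strategies = {
    "wrap_existing_component": {"reuse_of_legacy_code": 5,
                                "interoperability_need": 3,
                                "development_cost": 4},
    "develop_from_scratch":    {"reuse_of_legacy_code": 1,
                                "interoperability_need": 5,
                                "development_cost": 2},
}

best = max(strategies, key=lambda s: score_strategy(strategies[s]))
print(best)  # -> wrap_existing_component (the highest-scoring candidate)
```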

Marine Environment Monitoring and Analysis System Model (해양환경 모니터링 및 분석 시스템의 모델)

  • Park, Sun;Kim, Chul Won;Lee, Seong Ro
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.10 / pp.2113-2120 / 2012
  • Research on the automatic monitoring and analysis of the marine environment in Korea is still insufficient. Marine monitoring technology has recently been studied actively, since the sea is a rich repository of natural resources attracting worldwide attention. In particular, marine environment data should be collected continuously in order to understand and analyze the marine environment; however, monitoring coverage is still limited in many areas. Predicting marine disasters by automatically collecting marine environment data and analyzing the collected data can help minimize the damage from marine pollution such as oil spills, as well as the fisheries damage caused by red tide blooms and other marine environmental upsets. In this paper, we propose a marine environment monitoring and analysis system model. The proposed system automatically collects marine environment information to monitor the marine environment intelligently, and it predicts marine disasters by analyzing the collected ocean data.

A Knowledge-based Model for Semantic Oriented Contextual Advertising

  • Maree, Mohammed;Hodrob, Rami;Belkhatir, Mohammed;Alhashmi, Saadat M.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.5 / pp.2122-2140 / 2020
  • Proper and precise embedding of commercial ads within webpages requires ad hoc analysis and understanding of their content. When this step is implemented successfully, both publishers and advertisers gain mutual benefits: revenues increase on the one hand, and user experience improves on the other. In this research work, we propose a novel multi-level context-based ad serving approach through which ads are served on generic publisher websites based on their contextual relevance. In the proposed approach, knowledge encoded in domain-specific and generic semantic repositories is exploited to analyze and segment webpages into sets of contextually relevant segments. Semantically enhanced indexes are also constructed to index ads based on the textual descriptions provided by advertisers. A modified cosine similarity matching algorithm is employed to embed each ad from the ads repository into one or more contextually relevant segments. To validate our proposal, we implemented a prototype ad serving system with two datasets consisting of (11429 ads and 93 documents) and (11000 documents and 15 ads), respectively. To demonstrate the effectiveness of the proposed techniques, we experimentally tested the proposed method and compared the results against five baseline metrics used in the context of ad serving systems, as well as against other state-of-the-art models. The findings demonstrate that exploiting the proposed semantically enhanced context-based ad serving model improves the accuracy of conventional ad matching techniques.
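
The paper's modified cosine measure and semantically enhanced indexes are not reproduced in the abstract; the sketch below shows the plain term-frequency cosine matching that such a system builds on. All names and the threshold are chosen for illustration.

```python
# Minimal sketch of cosine-similarity ad-to-segment matching, assuming
# plain term-frequency vectors; the paper's *modified* cosine measure
# over semantically-enhanced indexes is not reproduced here.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def match_ad(ad_text: str, segments: list[str], threshold: float = 0.1):
    """Return the indexes of contextually relevant segments for one ad."""
    ad_vec = Counter(ad_text.lower().split())
    seg_vecs = [Counter(s.lower().split()) for s in segments]
    return [i for i, sv in enumerate(seg_vecs)
            if cosine(ad_vec, sv) >= threshold]

segments = ["used cars and auto loans", "weather forecast for the week"]
print(match_ad("low rate auto loans", segments))  # -> [0]
```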

A Study on the Improvement of University Institutional Repositories (dCollection) based on its Current State (대학 기관 리포지토리의 운영 현황 분석 및 개선 방안에 관한 연구 - dCollection을 중심으로 -)

  • Kim, Hyun-Hee;Joung, Kyoung-Hee;Kim, Yong-Ho
    • Journal of the Korean Society for Information Management / v.23 no.4 s.62 / pp.17-39 / 2006
  • Building institutional repositories is known as one of the most powerful methods for realizing the open access movement. Since 2003, the Korea Education and Research Information Service (KERIS) has organized institutional repositories into a consortium called 'dCollection (Digital Collection),' composed of 62 universities. The purpose of this study is to investigate the current state of 40 member universities of dCollection using an evaluation model comprising 4 categories and 39 indicators, and, based on the survey outcomes, to pinpoint the procedural and performance weak points of the dCollection systems in order to find customized solutions focused on improving usage and self-archiving rates.

An Application Method Study on the Electronic Records Management Systems based on Cloud Computing (클라우드 컴퓨팅 기반의 전자기록관리시스템 구축방안에 관한 연구)

  • Lim, Ji-Hoon;Kim, Eun-Chong;Bang, Ki-Young;Lee, Yu-Jin;Kim, Yong
    • Journal of Korean Society of Archives and Records Management / v.14 no.3 / pp.153-179 / 2014
  • After archive-related legislation was amended in 2006, Electronic Records Management Systems (ERMS) were introduced into public institutions, and most institutions constructed digital repositories in their records centers. These systems have several weaknesses: their introduction and maintenance waste cost and manpower, the extensibility of a repository is limited, and securing interoperability is difficult. This study proposes a cloud computing-based model to solve the problems of the previous systems and identifies the effects expected from the proposed model: low cost, high efficiency, extensibility, and interoperability in embracing various systems. On this basis, the appropriateness of introducing cloud computing into ERMS is analyzed.

A Study on the Mid-Long Term Direction for Development of Software Cost Estimation Guidelines (소프트웨어 사업대가기준 중장기 발전 방향에 관한 연구)

  • Kim, Woo-Je;Kwon, Moon-Ju
    • The Journal of Society for e-Business Studies / v.15 no.1 / pp.139-155 / 2010
  • The purpose of this paper is to develop a framework for software cost estimation guidelines and to derive a mid- to long-term direction for their development. First, all steps in the software life cycle are examined from the viewpoint of cost estimation, and the current software cost estimation guidelines and models are reviewed and analyzed. Second, as a mid- to long-term direction, a plan is presented to separate the unit cost per function point from the standard procedure in the current guidelines in order to strengthen the market's self-regulating function. Third, the construction of a cost repository, a standard procedure for the guidelines, the development of various software cost estimation models, and a system for cost estimation experts are presented as prerequisites for the future framework of the guidelines. Finally, a roadmap for establishing the future model is proposed.
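
As a worked illustration of the function-point-based pricing that such guidelines standardize, the sketch below multiplies a counted function-point total by a unit price and an adjustment factor. The unit price and adjustment value are hypothetical, not official guideline figures.

```python
# Hypothetical function-point cost estimate; the unit price and the
# adjustment factor are illustrative, not official guideline values.

UNIT_PRICE_PER_FP = 500_000   # assumed KRW per function point

def estimate_cost(function_points: int, adjustment: float = 1.0) -> int:
    """cost = FP count x unit price x project adjustment factor."""
    return round(function_points * UNIT_PRICE_PER_FP * adjustment)

# e.g., 320 counted FP with a 1.2 complexity adjustment
print(f"{estimate_cost(320, 1.2):,} KRW")  # -> 192,000,000 KRW
```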

Empirical model to estimate the thermal conductivity of granite with various water contents (다양한 함수비를 가진 화강암의 열전도도 추정을 위한 실험적 모델)

  • Cho, Won-Jin;Kwon, Sang-Ki;Lee, Jae-Owan
    • Journal of Nuclear Fuel Cycle and Waste Technology (JNFCWT) / v.8 no.2 / pp.135-142 / 2010
  • To obtain input data for the design and long-term performance assessment of a high-level waste repository, the thermal conductivities of several granite specimens, taken from rock cores from a declined borehole, were measured. The measurements were made under different water-content conditions to investigate the effect of water content on thermal conductivity. A simple empirical correlation was proposed to predict the thermal conductivity of granite as a function of effective porosity and water content, both of which can be measured with relative ease, while neglecting the possible effects of mineralogy, structure, and anisotropy. The correlation predicted the thermal conductivity of granite with effective porosity below 2.7% from the KURT site with an estimated error below 10%.
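
The abstract does not reproduce the fitted correlation itself; a generic empirical form of this kind, with placeholder coefficients to be fitted by regression on the measured data, might read:

```latex
% Generic form only; a, b, c are placeholders, not the paper's fitted
% values, which the abstract does not give.
\[
  \lambda = a - b\,\phi_{\mathrm{eff}} + c\,w
\]
% lambda: thermal conductivity (W/m.K), phi_eff: effective porosity (%),
% w: water content (%)
```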

Predicting Bug Severity by utilizing Topic Model and Bug Report Meta-Field (토픽 모델과 버그 리포트 메타 필드를 이용한 버그 심각도 예측 방법)

  • Yang, Geunseok;Lee, Byungjeong
    • KIISE Transactions on Computing Practices / v.21 no.9 / pp.616-621 / 2015
  • Recently developed software systems have many components, and their complexity is thus increasing. Last year, about 375 bug reports per day were submitted to the software repositories of the Eclipse and Mozilla open source projects. With so many bug reports submitted, the time and effort demanded of developers have increased unnecessarily. Since bug severity is determined manually by quality assurance personnel, the project manager, or other developers in the general bug-fixing process, the decision is biased toward them, and they may also make mistakes because of the large number of reports. In this study, we therefore propose an approach for predicting bug severity. First, we find topics similar to a new bug report and reduce the candidate reports within each topic by using the bug report's meta fields. Next, we train a Multinomial Naive Bayes classifier on the reduced set of reports. Finally, we predict the severity of the new bug report. We compare our approach with other prediction algorithms using bug reports from open source projects; the results show that our approach predicts bug severity better than the other algorithms.
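
As a rough sketch of the classification step, assuming scikit-learn and toy training data (the topic-model filtering and meta-field reduction described above are omitted):

```python
# Minimal sketch of the Multinomial Naive Bayes severity classifier,
# assuming scikit-learn; the paper's topic-model candidate reduction is
# omitted, and the training data here is toy data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

reports = ["crash on startup null pointer",
           "typo in settings dialog label",
           "data loss when saving project",
           "minor alignment issue in toolbar"]
severities = ["critical", "trivial", "critical", "trivial"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(reports)        # bag-of-words features
clf = MultinomialNB().fit(X, severities)

new_report = ["application crash with data loss"]
print(clf.predict(vectorizer.transform(new_report)))  # -> ['critical']
```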

Review of Erosion and Piping in Compacted Bentonite Buffers Considering Buffer-Rock Interactions and Deduction of Influencing Factors (완충재-근계암반 상호작용을 고려한 압축 벤토나이트 완충재 침식 및 파이핑 연구 현황 및 주요 영향인자 도출)

  • Hong, Chang-Ho;Kim, Ji-Won;Kim, Jin-Seop;Lee, Changsoo
    • Tunnel and Underground Space / v.32 no.1 / pp.30-58 / 2022
  • The deep geological repository for high-level radioactive waste disposal is a multi-barrier system composed of engineered barriers and a natural barrier. The long-term integrity of the deep geological repository is affected by the coupled interactions between the individual barrier components. Erosion and piping phenomena in the compacted bentonite buffer due to buffer-rock interactions result in the removal of bentonite particles via groundwater flow and can negatively impact the integrity and performance of the buffer. Rapid groundwater inflow at the early stages of disposal can lead to piping in the bentonite buffer due to the buildup of pore water pressure, while the physicochemical processes between the bentonite buffer and groundwater lead to bentonite swelling and gelation, resulting in bentonite erosion from the buffer surface. Hence, evaluating the occurrence of erosion and piping and their effects on the integrity of the bentonite buffer is crucial in determining the long-term integrity of the deep geological repository. Previous studies on bentonite erosion and piping failed to consider the complex coupled thermo-hydro-mechanical-chemical behavior of bentonite-groundwater interactions and lacked a comprehensive model covering the complex phenomena observed in experimental tests. In this technical note, previous studies on the mechanisms, lab-scale experiments, and numerical modeling of bentonite buffer erosion and piping are introduced, and the challenges expected in future investigations of bentonite buffer erosion and piping are summarized.

Semantic Process Retrieval with Similarity Algorithms (유사도 알고리즘을 활용한 시맨틱 프로세스 검색방안)

  • Lee, Hong-Joo;Klein, Mark
    • Asia Pacific Journal of Information Systems / v.18 no.1 / pp.79-96 / 2008
  • One of the roles of Semantic Web services is to execute dynamic intra-organizational services, including the integration and interoperation of business processes. Since different organizations design their processes differently, retrieving similar semantic business processes is necessary to support inter-organizational collaboration. Most approaches for finding services that have certain features and support certain business processes have relied on some type of logical reasoning and exact matching. This paper presents our approach of using imprecise matching to expand the results from an exact matching engine when querying the OWL (Web Ontology Language) version of the MIT Process Handbook. The MIT Process Handbook is an electronic repository of best-practice business processes, intended to help people (1) redesign organizational processes, (2) invent new processes, and (3) share ideas about organizational practices. To use the MIT Process Handbook for process retrieval experiments, we had to export it into an OWL-based format: we model the Process Handbook meta-model in OWL and export the processes in the Handbook as instances of that meta-model. We then need a sizable number of queries and their corresponding correct answers in the Process Handbook. Many previous studies devised artificial datasets composed of randomly generated numbers without real meaning and used subjective ratings for correct answers and similarity values between processes. To generate a semantics-preserving test data set, we instead create 20 variants of each target process that are syntactically different but semantically equivalent, using mutation operators; these variants represent the correct answers for the target process. We devise diverse similarity algorithms based on the values of process attributes and the structures of business processes. We use simple text-retrieval similarity measures such as TF-IDF and Levenshtein edit distance, and we utilize a tree edit distance measure because semantic processes appear to have a graph structure. We also design similarity algorithms that consider the similarity of process structure, such as part processes, goals, and exceptions; since we can identify relationships between a semantic process and its subcomponents, this information can be utilized for calculating similarities between processes. Dice's coefficient and the Jaccard similarity measure are utilized to calculate the portion of overlap between processes in diverse ways. We perform retrieval experiments to compare the performance of the devised similarity algorithms, measuring retrieval performance in terms of precision, recall, and F-measure, the harmonic mean of precision and recall. The tree edit distance shows the poorest performance on all measures, while TF-IDF and the method combining TF-IDF with Levenshtein edit distance, both focused on the similarity of process names and descriptions, perform better than the other devised methods. In addition, we calculate a rank correlation coefficient, Kendall's tau-b, between the number of process mutations and the ranking of similarity values among the mutation sets. In this experiment, similarity measures based on process structure, such as Dice's, Jaccard, and their derivatives, show greater coefficients than measures based on the values of process attributes.
However, the Lev-TFIDF-JaccardAll measure, which considers process structure and attribute values together, shows reasonably better performance across these two experiments. For retrieving semantic processes, it therefore appears better to consider diverse aspects of process similarity, such as process structure and the values of process attributes. In summary, we generate semantic process data and a retrieval-experiment dataset from the MIT Process Handbook repository, suggest imprecise query algorithms that expand the results from an exact matching engine such as SPARQL, and compare the retrieval performance of the similarity algorithms. As limitations and future work, we need to perform experiments with other datasets from other domains, and, since diverse measures yield many similarity values, we may find better ways to identify relevant processes by applying these values simultaneously.
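
As a simplified illustration of the kinds of measures the paper combines, the sketch below computes a normalized Levenshtein similarity over process names and a Jaccard overlap over subprocess sets. The TF-IDF and tree edit distance components, and the actual Lev-TFIDF-JaccardAll weighting, are not reproduced; the weights and toy processes here are illustrative.

```python
# Simplified sketch of text- and structure-based process similarity;
# the combination weights below are illustrative, not the paper's
# Lev-TFIDF-JaccardAll weighting.

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def jaccard(a: set, b: set) -> float:
    """Overlap of two subprocess sets: |A & B| / |A | B|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def name_similarity(n1: str, n2: str) -> float:
    """1 - normalized edit distance between process names."""
    return 1 - levenshtein(n1, n2) / max(len(n1), len(n2), 1)

def process_similarity(p1: dict, p2: dict,
                       w_name: float = 0.5, w_parts: float = 0.5) -> float:
    return (w_name * name_similarity(p1["name"], p2["name"])
            + w_parts * jaccard(set(p1["subprocesses"]),
                                set(p2["subprocesses"])))

p1 = {"name": "hire employee", "subprocesses": {"post job", "interview"}}
p2 = {"name": "hire employees", "subprocesses": {"interview", "make offer"}}
print(round(process_similarity(p1, p2), 3))  # -> 0.631
```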