• Title/Summary/Keyword: Query process

Classification and Retrieval of Object - Oriented Reuse Components with HACM (HACM을 사용한 객체지향 재사용 부품의 분류와 검색)

  • Bae, Je-Min;Kim, Sang-Geun;Lee, Kyung-Whan
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.7
    • /
    • pp.1733-1748
    • /
    • 1997
  • In this paper, we propose a classification scheme and a retrieval mechanism that can be applied to many application domains in order to construct a software reuse library. The classification scheme, which is the core of accessibility for reuse, is defined as a hierarchical structure built from agglomerative clusters. An agglomerative cluster is a group of reuse components related by functional relationships, and these relationships are measured with HACM, a representation method for software components that computes the similarities among the classes of a particular domain. The clustering information is added to the library structure, which determines the functionality and accuracy of the retrieval system, and the system stores the classification results such as index information with weights, the similarity matrix, and the hierarchical structure. Users can therefore retrieve software components with natural-language queries. This work focuses on the findability of software components in the reuse library. As a result, part of the construction process of the reuse library was automated, and we can construct an object-oriented reuse library with extendibility and explicit relationships among the reuse components. Our process is also visualized through the browse hierarchy of the retrieval environment, and the retrieval system is integrated into the reuse system CARS 2.1.
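The following is a minimal, hypothetical sketch (in Python, not the paper's HACM or CARS 2.1 implementation) of the agglomerative clustering step the abstract describes: components are grouped by cutting a hierarchy built from a pairwise functional-similarity matrix.

```python
# A minimal sketch: hierarchical agglomerative clustering over a pairwise
# similarity matrix between reuse components, as the abstract describes.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical pairwise functional similarities (1.0 = identical) among 4 components.
similarity = np.array([
    [1.0, 0.8, 0.2, 0.1],
    [0.8, 1.0, 0.3, 0.2],
    [0.2, 0.3, 1.0, 0.7],
    [0.1, 0.2, 0.7, 1.0],
])

# Convert similarity to distance and condense it for SciPy's linkage routine.
distance = 1.0 - similarity
np.fill_diagonal(distance, 0.0)
condensed = squareform(distance, checks=False)

# Average-linkage agglomerative clustering yields the browse hierarchy;
# cutting it into 2 clusters gives the agglomerative groups.
hierarchy = linkage(condensed, method="average")
labels = fcluster(hierarchy, t=2, criterion="maxclust")
print(labels)  # e.g. [1 1 2 2]
```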

An Efficient ROLAP Cube Generation Scheme (효율적인 ROLAP 큐브 생성 방법)

  • Kim, Myung;Song, Ji-Sook
    • Journal of KIISE:Databases
    • /
    • v.29 no.2
    • /
    • pp.99-109
    • /
    • 2002
  • ROLAP (Relational Online Analytical Processing) is a process and methodology for multidimensional data analysis that is essential for extracting desired data and deriving value-added information from an enterprise data warehouse. In order to speed up query processing, most ROLAP systems pre-compute summary tables. This process is called 'cube generation', and it mostly involves intensive table-sorting stages. (1) showed that it is much faster to generate ROLAP summary tables indirectly using a MOLAP (multidimensional OLAP) cube generation algorithm. In this paper, we present such an indirect ROLAP cube generation algorithm that is fast and scalable. High memory utilization is achieved by slicing the input fact table along one or more dimensions before generating summary tables, and high speed is achieved by producing summary tables from their smallest parents. We show the efficiency of our algorithm through experiments.
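Below is a minimal sketch, on assumed toy data, of the "smallest parent" idea the abstract relies on: coarser summary tables are aggregated from an already-computed finer summary rather than from another scan of the fact table. It is an illustration, not the paper's algorithm.

```python
# A minimal sketch of the "smallest parent" idea behind cube generation:
# each coarser summary table is aggregated from an already-computed finer
# one instead of rescanning the fact table.
import pandas as pd

fact = pd.DataFrame({
    "store":   ["s1", "s1", "s2", "s2", "s2"],
    "product": ["p1", "p2", "p1", "p1", "p2"],
    "month":   ["jan", "jan", "jan", "feb", "feb"],
    "sales":   [10, 20, 5, 7, 3],
})

# Finest summary table: one scan of the fact table.
by_store_product_month = (
    fact.groupby(["store", "product", "month"], as_index=False)["sales"].sum()
)

# Coarser summaries are produced from their smallest parent, which is
# typically far smaller than the fact table itself.
by_store_product = (
    by_store_product_month.groupby(["store", "product"], as_index=False)["sales"].sum()
)
by_store = by_store_product.groupby(["store"], as_index=False)["sales"].sum()

print(by_store)
```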

A Data Mining System for Supporting of Business Intelligence in e-Business (e-Business에서의 BI지원 데이타마이닝 시스템)

  • Lee, Jun-Wook;Baek, Ok-Hyun;Ryu, Keun-Ho
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.8 no.5
    • /
    • pp.489-500
    • /
    • 2002
  • As interest in business intelligence (BI) grows, data mining is increasingly used as its core technique. To support business intelligence in an e-business environment, an integrated data mining system that includes various mining operations should integrate flexibly with a database system and must also provide an easy and efficient interface for implementing marketing processes in various business applications. In this paper, we implement the EC-DaMiner system to support business intelligence in the e-business area. The implemented system can be integrated with a conventional database system through a standard interface. Business applications can use the MQL mining query language to discover rules, the mining results are modeled in a marketing database, and EC-DaMiner makes the implementation of business marketing processes easier.

Factors Clustering Approach to Parametric Cost Estimates And OLAP Driver

  • JaeHo, Cho;BoSik, Son;JaeYoul, Chun
    • International conference on construction engineering and project management
    • /
    • 2009.05a
    • /
    • pp.707-716
    • /
    • 2009
  • The role of the cost modeller is to facilitate the design process by systematic application of cost factors so as to maintain a sensible and economic relationship between cost, quantity, utility, and appearance, which thus helps in achieving the client's requirements within an agreed budget. There has been a good deal of research on cost estimates in the early design stage, focused on improving accuracy or identifying impact factors. Related research to date confirms that cost estimates are undertaken progressively throughout the design stage and make use of the information that is available at each phase. In addition, cost estimates in the early design stage must analyze information under various preconditions before the design is further developed, because a design can be modified and changed at any point depending on the client's requirements. Parametric cost-estimating models have been adopted to support decision making in this changeable early-design environment. These models use similar instances or patterns from historical cases composed of project information, geographic and design features, and data relevant to quantity or cost. OLAP techniques analyze subject data from multi-dimensional points of view and support querying, analysis, and comparison of the required information through diverse queries, and OLAP's data structure matches the multi-view analysis framework well. Accordingly, this study implements a multi-dimensional information system for case-based quantity data related to design information using OLAP technology, and then analyzes the impact factors on quantity by design criteria or parameters with the same meaning. On the basis of the factors examined above, the study generates rules on quantity measures and produces resemblance classes using data-mining clustering. This knowledge base consists of classified data organized as group patterns, which provide an appropriate foundation for the parametric cost-estimating method.

A Study on the Effects of Search Language on Web Searching Behavior: Focused on the Differences of Web Searching Pattern (검색 언어가 웹 정보검색행위에 미치는 영향에 관한 연구 - 웹 정보검색행위의 양상 차이를 중심으로 -)

  • Byun, Jeayeon
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.52 no.3
    • /
    • pp.289-334
    • /
    • 2018
  • Even though information in many languages other than English is quickly increasing, English still plays the role of the lingua franca and accounts for the largest proportion of content on the web. It is therefore necessary to investigate the key features of, and differences between, information searching behavior using the mother tongue as the search language and information searching behavior using English as the search language for users who are not native speakers of English, so that they can acquire more diverse and abundant information. This study conducted a web-searching experiment using the concurrent think-aloud method to examine the information searching behavior and cognitive processes of twenty-four undergraduate students at a private university in South Korea during Korean-language and English-language searches. Based on the qualitative data, frequency analysis was applied to web search patterns by search language. As a result, information searching behavior in Korean searches was active, assertive, and independent, whereas in English searches it was passive, submissive, and dependent. In Korean searches, the main features were query formulation by extracting and combining terms from various sources such as the user, the task, and the system; search-range adjustment at diverse levels; smooth filtering when selecting items from search engine results pages; exploration and comparison of many items; and browsing of the overall contents of web pages. In English searches, by contrast, the main features were query formulation with terms extracted mainly from the task; search-range adjustment at a limited level; item selection relying on the relevance between items such as categories or links; repeated exploration of the same items; browsing of only partial contents of web pages; and frequent use of language support tools such as dictionaries or translators.

An Efficient Bitmap Indexing Method for Multimedia Data Reflecting the Characteristics of MPEG-7 Visual Descriptors (MPEG-7 시각 정보 기술자의 특성을 반영한 효율적인 멀티미디어 데이타 비트맵 인덱싱 방법)

  • Jeong Jinguk;Nang Jongho
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.32 no.1
    • /
    • pp.9-20
    • /
    • 2005
  • Recently, the MPEG-7 standard, a multimedia content description standard, has been widely used in content-based image/video retrieval systems. However, since the descriptors standardized in MPEG-7 are usually high-dimensional and therefore suffer from the 'curse of dimensionality', previously proposed indexing methods (for example, multidimensional indexing methods, dimensionality reduction methods, and filtering methods) cannot effectively index a multimedia database represented in MPEG-7. This paper proposes an efficient multimedia data indexing mechanism reflecting the characteristics of MPEG-7 visual descriptors. In the proposed mechanism, a descriptor is transformed into a histogram of some attributes. By representing the value of each bin as a binary number, the histogram itself, which is the visual descriptor of an object in the multimedia database, can be represented as a bit string. The bit strings of all objects in the database are collected to form an index file, a bitmap index. By XORing them with the descriptor of a query object, candidate solutions for a similarity search can be computed easily, and these candidates are then checked against the query object to compute the similarity precisely with an exact metric such as the L1-norm. This indexing and searching mechanism is efficient because the filtering step is performed with simple bit operations and reduces the search space dramatically. In experiments with more than 100,000 real images, the proposed indexing and searching mechanism was about 15 times faster than sequential searching with more than 90% accuracy.
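Below is a minimal, hypothetical sketch of the filter-and-refine search the abstract describes: each histogram bin is quantized to a few bits, candidates are filtered by XORing bit strings and counting differing bits, and the survivors are re-ranked with the exact L1-norm. The quantization level and function names are illustrative assumptions, not the paper's encoding.

```python
# A minimal sketch of the described filter-and-refine search: quantize each
# histogram bin to a few bits, compare bit strings with XOR + popcount as a
# cheap filter, then verify surviving candidates with the exact L1-norm.
import numpy as np

BITS_PER_BIN = 4  # hypothetical quantization level

def to_bitstring(histogram):
    """Quantize each bin to BITS_PER_BIN bits and pack them into one integer."""
    levels = (1 << BITS_PER_BIN) - 1
    code = 0
    for value in histogram:
        bucket = int(round(value * levels))  # bin values assumed normalized to [0, 1]
        code = (code << BITS_PER_BIN) | bucket
    return code

def filtered_search(query, database, threshold_bits, top_k=5):
    q_code = to_bitstring(query)
    candidates = []
    for obj_id, hist in database.items():
        # Filtering step: XOR the bit strings and count differing bits.
        if bin(q_code ^ to_bitstring(hist)).count("1") <= threshold_bits:
            candidates.append(obj_id)
    # Refinement step: exact L1-norm only on the surviving candidates.
    ranked = sorted(
        candidates,
        key=lambda obj_id: float(np.abs(np.asarray(query) - np.asarray(database[obj_id])).sum()),
    )
    return ranked[:top_k]

database = {"img1": [0.9, 0.1, 0.0], "img2": [0.2, 0.5, 0.3], "img3": [0.8, 0.2, 0.0]}
print(filtered_search([0.85, 0.15, 0.0], database, threshold_bits=4))
```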

Design and Algorithm Implementation of a Distributed Information Retrieval System using Sequential Transferring Method(STM) (순차적 전달방식(STM)을 이용한 분산정보검색시스템의 설계 및 알고리즘 구현)

  • Yoon, Hee-Byung;Kim, Yong-Han;Kim, Hwa-Soo
    • The KIPS Transactions:PartB
    • /
    • v.11B no.5
    • /
    • pp.603-610
    • /
    • 2004
  • Distributed information retrieval systems that are centrally controlled by a mediator or meta search engine suffer from heavy traffic congestion and increased cost, due to the complicated algorithms required for central control and the installation of additional hardware. To solve this problem, an approach is needed in which each retrieval system has independent retrieval functionality and can cooperate with the others without dependency. In this paper, we review related work on distributed information retrieval systems, and then design the framework of, and implement the algorithm for, a distributed information retrieval system using the sequential transferring method (STM), in which multiple information retrieval systems operate without central control. To this end, we first present a web partition policy that divides and manages the web logically, and we present a sequential query processing scheme illustrated with numbered information retrieval systems. We also present a three-layered framework, together with the functions and modules of each layer suitable for an information retrieval system. Finally, for an effective implementation of the STM algorithm, we analyze its module structure, describe it in pseudocode, and show through a demonstration of the sequential query transfer process between servers that the proposed STM algorithm works smoothly.
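The following is a minimal sketch, with hypothetical class and server names, of the sequential transferring idea: a query and its accumulated results are handed from one retrieval server to the next without a central mediator. It is not the paper's three-layered framework or its pseudocode.

```python
# A minimal sketch of sequential query transfer: each server searches its
# own logical web partition, appends its hits, and forwards the query to
# the next server; there is no central mediator.
from dataclasses import dataclass

@dataclass
class RetrievalServer:
    name: str
    documents: dict                              # doc_id -> text (the server's partition)
    next_server: "RetrievalServer | None" = None

    def search_local(self, query):
        return [doc_id for doc_id, text in self.documents.items() if query in text]

    def handle(self, query, accumulated=None):
        accumulated = (accumulated or []) + self.search_local(query)
        # Sequential transfer: pass the query and partial results onward.
        if self.next_server is not None:
            return self.next_server.handle(query, accumulated)
        return accumulated

s3 = RetrievalServer("s3", {"d5": "query process tutorial"})
s2 = RetrievalServer("s2", {"d3": "distributed retrieval", "d4": "query routing"}, next_server=s3)
s1 = RetrievalServer("s1", {"d1": "sequential transfer", "d2": "query process"}, next_server=s2)

print(s1.handle("query"))  # ['d2', 'd4', 'd5']
```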

Automatic Generator for Component-Based Web Database Applications (컴포넌트 기반 웹 데이터베이스 응용의 자동 생성기)

  • Eum, Doo-Hun;Ko, Min-Jeung;Kang, I-Zzy
    • The KIPS Transactions:PartD
    • /
    • v.11D no.2
    • /
    • pp.371-380
    • /
    • 2004
  • E-commerce is in wide use with the rapid advance of internet technology, and the main component of an e-commerce application is a web-based database application. Currently, developing web applications takes a great deal of time because developers must write code manually or semi-automatically for the user interface forms and query processing of an application, so an increase in the productivity of web-based database applications has been demanded. In this paper, we introduce a software tool, called WebSiteGen2, that automatically generates the forms used as user interfaces and the EJB/JSP components that process the queries issued through those forms, for an application that needs a new database or uses an existing one. WebSiteGen2 thus increases the productivity, reusability, expandability, and portability of an application by automatically generating a three-tier application based on component technology. Moreover, each user interface form generated by WebSiteGen2 provides information on an entity of interest as well as on all the entities directly or indirectly related to it. In this paper, we explain the functionality and implementation of WebSiteGen2 and then show its merits by comparing it with other commercial web application generators.

Applying an Aggregate Function AVG to OLAP Cubes (OLAP 큐브에서의 집계함수 AVG의 적용)

  • Lee, Seung-Hyun;Lee, Duck-Sung;Choi, In-Soo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.1
    • /
    • pp.217-228
    • /
    • 2009
  • Data analysis applications typically aggregate data across many dimensions, looking for unusual patterns in the data. Even though such analyses are usually possible with standard structured query language (SQL) queries, the queries can become very complex, and a complex query may result in many scans of the base table, leading to poor performance. Because online analytical processing (OLAP) queries are usually complex, it is desirable to define a new operator for aggregation, called the data cube or simply the cube. The data cube supports OLAP tasks such as aggregation and sub-totals. Many aggregate functions can be used to construct a data cube, and these functions can be classified into three categories: distributive, algebraic, and holistic. It has been thought that distributive functions such as SUM, COUNT, MAX, and MIN can be used to construct a data cube, and that an algebraic function such as AVG can also be used if it is replaced by an intermediate function: even though AVG is not distributive, the intermediate function (SUM, COUNT) is distributive, and AVG can then be computed from (SUM, COUNT). In this paper, however, it is found that the intermediate function (SUM, COUNT) cannot be applied to OLAP cubes, and consequently the function leads to erroneous conclusions and decisions. The objective of this study is to identify the problems in applying the aggregate function AVG to OLAP cubes and to design a process for solving these problems.
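A small numeric illustration of why AVG is not distributive, which is what motivates carrying the (SUM, COUNT) pair as an intermediate; the paper's contribution is to show that even this standard decomposition runs into problems when applied to OLAP cube cells. The numbers below are arbitrary.

```python
# A minimal illustration of why AVG is not distributive: averaging the
# averages of two partitions differs from the true average, which is why
# the (SUM, COUNT) pair is commonly carried as an intermediate instead.
partition_a = [10, 20, 30]        # 3 rows
partition_b = [100]               # 1 row

avg_of_avgs = (sum(partition_a) / len(partition_a) + sum(partition_b) / len(partition_b)) / 2
# (20 + 100) / 2 = 60.0 -- the wrong roll-up

# Carrying (SUM, COUNT) and dividing only at the end:
total_sum = sum(partition_a) + sum(partition_b)     # 160
total_count = len(partition_a) + len(partition_b)   # 4
true_avg = total_sum / total_count                  # 40.0

print(avg_of_avgs, true_avg)
```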

Incremental Ensemble Learning for The Combination of Multiple Models of Locally Weighted Regression Using Genetic Algorithm (유전 알고리즘을 이용한 국소가중회귀의 다중모델 결합을 위한 점진적 앙상블 학습)

  • Kim, Sang Hun;Chung, Byung Hee;Lee, Gun Ho
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.7 no.9
    • /
    • pp.351-360
    • /
    • 2018
  • The LWR (locally weighted regression) model, traditionally a lazy-learning model, obtains a prediction for a given input variable, the query point, from a regression equation over a short interval learned by giving higher weights to data closer to the query point. We study an incremental ensemble learning approach for LWR, a form of lazy, memory-based learning. The proposed incremental ensemble learning method sequentially generates and integrates LWR models over time, using a genetic algorithm to obtain a solution at a specific query point. A weakness of existing LWR approaches is that multiple LWR models can be generated depending on the indicator function and the selected data samples, and the quality of the predictions can vary with the chosen model; however, no research has addressed the selection or combination of multiple LWR models. In this study, after generating an initial LWR model according to an indicator function and a sample data set, we iterate an evolutionary learning process to obtain a proper indicator function and assess the LWR models on other sample data sets in order to overcome data-set bias. We adopt an eager learning strategy to generate and store LWR models incrementally as data are generated for all sections. To obtain a prediction at a specific point in time, an LWR model is generated from newly generated data within a predetermined interval and then combined with the existing LWR models of that section using a genetic algorithm. The proposed method shows better results than selecting among multiple LWR models with a simple averaging method. The results of this study are compared with predictions obtained using multiple regression analysis on real data such as hourly traffic volume in a specific area and hourly sales at a highway rest area.
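Below is a minimal sketch of the base learner the abstract builds on: plain locally weighted regression at a single query point with Gaussian kernel weights. The incremental, genetic-algorithm-combined ensemble described in the paper is not reproduced here; the data and bandwidth are illustrative assumptions.

```python
# A minimal sketch of locally weighted regression at one query point:
# training samples closer to the query receive higher weight, and a
# weighted linear fit is solved just for that query.
import numpy as np

def lwr_predict(x_train, y_train, query, bandwidth=1.0):
    """Predict y at `query` with a Gaussian-weighted linear fit."""
    # Kernel weights: points closer to the query point get higher weight.
    weights = np.exp(-((x_train - query) ** 2) / (2 * bandwidth ** 2))
    # Design matrix with an intercept column.
    X = np.column_stack([np.ones_like(x_train), x_train])
    W = np.diag(weights)
    # Weighted least squares: beta = (X^T W X)^{-1} X^T W y
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_train)
    return beta[0] + beta[1] * query

x_train = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y_train = np.array([0.1, 0.9, 4.2, 9.1, 15.8])    # roughly quadratic data
print(lwr_predict(x_train, y_train, query=2.5))   # local linear estimate near x = 2.5
```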