• Title/Summary/Keyword: Database model


The Relations between Financial Constraints and Dividend Smoothing of Innovative Small and Medium Sized Enterprises (혁신형 중소기업의 재무적 제약과 배당스무딩간의 관계)

  • Shin, Min-Shik;Kim, Soo-Eun
    • Korean small business review
    • /
    • v.31 no.4
    • /
    • pp.67-93
    • /
    • 2009
• The purpose of this paper is to explore the relations between financial constraints and dividend smoothing of innovative small and medium-sized enterprises (SMEs) listed on the Korea Securities Market and the Kosdaq Market of the Korea Exchange. Innovative SMEs are defined as firms with a high level of R&D intensity, measured by the ratio of R&D investment to total sales, following Chauvin and Hirschey (1993). R&D investment plays an important role as the innovative driver that can increase the future growth opportunities and profitability of firms. Therefore, R&D investment has a large, positive, and consistent influence on the market value of the firm. From this point of view, we expect that innovative SMEs can adjust dividend payments faster than noninnovative SMEs, on the grounds of their future growth opportunities and profitability. We also expect that financially unconstrained firms can adjust dividend payments faster than financially constrained firms, on the grounds of their ability to finance investment funds through market accessibility. Aivazian et al. (2006) assert that financially unconstrained firms with high accessibility to the capital market can adjust dividend payments faster than financially constrained firms. We collect the sample firms from among the total SMEs listed on the Korea Securities Market and the Kosdaq Market of the Korea Exchange during the period from January 1999 to December 2007, using the KIS Value Library database. The total number of firm-year observations over the entire period is 5,544; the number of firm-year observations of the dividend firms is 2,919, and that of the non-dividend firms is 2,625. About 53% (2,919) of these 5,544 observations involve firms that make a dividend payment. The dividend firms are divided into two groups according to R&D intensity: innovative SMEs with R&D intensity above the median and noninnovative SMEs with R&D intensity below the median. The number of firm-year observations of the innovative SMEs is 1,506, and that of the noninnovative SMEs is 1,413. Furthermore, the innovative SMEs are divided into two groups according to the level of financial constraints: financially unconstrained firms and financially constrained firms. The number of firm-year observations of the former is 894, and that of the latter is 612. Although all available firm-year observations of the dividend firms are collected, firms in financial industries such as banks, securities companies, insurance companies, and other financial services companies are deleted, because their capital structure and business style differ widely from those of general manufacturing firms. Stock repurchases are included in dividend payments because Grullon and Michaely (2002) examined the substitution hypothesis between dividends and stock repurchases. Our data set is an unbalanced panel, since there is no requirement that firm-year observations be available for each firm throughout the entire period from January 1999 to December 2007 in the KIS Value Library database. We first estimate the classic Lintner (1956) dividend adjustment model, where the decision to smooth dividends or to adopt a residual dividend policy depends on financial constraints measured by market accessibility.
The Lintner model indicates that firms maintain a stable, long-run target payout ratio and partially adjust the gap between the current payout ratio and the target payout ratio each year. In the Lintner model, the dependent variable is the current dividend per share (DPSt), and the independent variables are the past dividend per share (DPSt-1) and the current earnings per share (EPSt). We hypothesize that firms partially adjust the gap between the current dividend per share (DPSt) and the target payout ratio (Ω) each year, when the past dividend per share (DPSt-1) deviates from the target payout ratio (Ω). Second, we estimate an expansion model that extends the Lintner model by including the determinants suggested by the major theories of dividends, namely, residual dividend theory, dividend signaling theory, agency theory, catering theory, and transactions cost theory. In the expansion model, the dependent variable is the current dividend per share (DPSt); the explanatory variables are the past dividend per share (DPSt-1) and the current earnings per share (EPSt); and the control variables are the current capital expenditure ratio (CEAt), the current leverage ratio (LEVt), the current operating return on assets (ROAt), the current business risk (RISKt), the current trading volume turnover ratio (TURNt), and the current dividend premium (DPREMt). Among these control variables, CEAt, LEVt, and ROAt are the determinants suggested by the residual dividend theory and the agency theory; ROAt and RISKt are the determinants suggested by the dividend signaling theory; TURNt is the determinant suggested by the transactions cost theory; and DPREMt is the determinant suggested by the catering theory. Third, we estimate the Lintner model and the expansion model using the panel data of the financially unconstrained firms and the financially constrained firms, divided into two groups according to the level of financial constraints. We expect that the financially unconstrained firms can adjust dividend payments faster than the financially constrained firms, because the former can finance investment funds more easily through market accessibility. We analyze descriptive statistics such as the mean, standard deviation, and median to delete outliers from the panel data; conduct a one-way analysis of variance to check for industry-specific effects; and conduct difference tests of firm characteristic variables between innovative and noninnovative SMEs, as well as between financially unconstrained and financially constrained firms. We also conduct correlation analysis and variance inflation factor analysis to detect any multicollinearity among the independent variables. Both the correlation coefficients and the variance inflation factors are low enough that multicollinearity among the independent variables can be ignored. Furthermore, we estimate both the Lintner model and the expansion model using panel regression analysis. We first test whether time-specific and firm-specific effects are present in our panel data through the Lagrange multiplier test proposed by Breusch and Pagan (1980), and then conduct the Hausman test to show that the fixed effects model fits our panel data better than the random effects model. The main results of this study can be summarized as follows.
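To make the estimated specification concrete, the Lintner partial adjustment model described above can be written as follows (notation follows the abstract; the panel error term is an assumption added for completeness):

```latex
% Lintner (1956) partial adjustment model: each year, firm i closes a
% fraction c of the gap between its target dividend (target payout ratio
% Omega times current earnings) and last year's dividend.
\begin{align}
  DPS_{i,t} - DPS_{i,t-1} &= \alpha + c\,\bigl(\Omega\,EPS_{i,t} - DPS_{i,t-1}\bigr) + \varepsilon_{i,t}\\
  \Longrightarrow\quad DPS_{i,t} &= \alpha + \beta_1\,DPS_{i,t-1} + \beta_2\,EPS_{i,t} + \varepsilon_{i,t}
\end{align}
% The speed of adjustment is recovered as c = 1 - beta_1 (a larger c means
% faster dividend adjustment), and the implied target payout ratio is
% Omega = beta_2 / (1 - beta_1).
```

In this form, comparing the estimated c across subsamples is how the dividend adjustment speeds of innovative versus noninnovative, and financially unconstrained versus constrained, SMEs can be compared.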
The determinants suggested by the major theories of dividends, namely, residual dividend theory, dividend signaling theory, agency theory, catering theory, and transactions cost theory, significantly explain the dividend policy of the innovative SMEs. The Lintner model indicates that firms maintain a stable, long-run target payout ratio and partially adjust the gap between the current payout ratio and the target payout ratio each year. Among the core variables of the Lintner model, the past dividend per share has a larger effect on dividend smoothing than the current earnings per share. These results suggest that the innovative SMEs maintain a stable, long-run dividend policy that sustains the past dividend per share level in the absence of special corporate reasons. The main results show that the dividend adjustment speed of the innovative SMEs is faster than that of the noninnovative SMEs. This means that innovative SMEs with a high level of R&D intensity can adjust dividend payments faster than noninnovative SMEs, on the grounds of their future growth opportunities and profitability. The other main results show that the dividend adjustment speed of the financially unconstrained SMEs is faster than that of the financially constrained SMEs. This means that financially unconstrained firms with high accessibility to the capital market can adjust dividend payments faster than financially constrained firms, on the grounds of their ability to finance investment funds through market accessibility. Furthermore, additional results show that the dividend adjustment speed of the innovative SMEs classified by the Small and Medium Business Administration is faster than that of the unclassified SMEs. These firms are linked with various financial policies and services such as credit guarantee services, policy funds for SMEs, venture investment funds, insurance programs, and so on. In conclusion, the past dividend per share and the current earnings per share suggested by the Lintner model mainly explain the dividend adjustment speed of the innovative SMEs, and financial constraints explain it partially. Therefore, if managers properly understand the relations between financial constraints and dividend smoothing of innovative SMEs, they can maintain a stable, long-run dividend policy for innovative SMEs through dividend smoothing. These are encouraging results for the Korean government, that is, the Small and Medium Business Administration, which has implemented many policies committed to innovative SMEs. This paper may have a few limitations because it may be only an early study of the relations between financial constraints and dividend smoothing of innovative SMEs. Specifically, this paper may not adequately capture all of the subtle features of the innovative SMEs and the financially unconstrained SMEs. Therefore, we think it is necessary to expand the sample firms and control variables, and to use more elaborate analysis methods, in future studies.

A Study on Public Interest-based Technology Valuation Models in Water Resources Field (수자원 분야 공익형 기술가치평가 시스템에 대한 연구)

  • Ryu, Seung-Mi;Sung, Tae-Eung
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.3
    • /
    • pp.177-198
    • /
    • 2018
• Recently, as the character of water resources has changed to that of "public property", it has become necessary to acquire and utilize a framework for measuring water resources as economic property and managing their performance. To date, the evaluation of water technology has been carried out through feasibility studies or technology assessments based on net present value (NPV) or benefit-to-cost (B/C) ratios; however, it has not yet been systematized in terms of valuation models that objectively assess the economic value of a technology-based business so that research outcomes can be diffused and fed back. Therefore, K-water (a government-supported public company in Korea) has felt the necessity to establish a technology valuation framework suitable for the technical characteristics of the water resources field it manages, and to verify it with an exemplary case applied to a technology. The K-water valuation technology applied in this study, as a public interest good, can be used as a tool to measure and manage the value and achievements contributed to society. Therefore, by calculating the value that the subject technology contributes to society as a whole as a public resource, we can use it as basis information for publicizing the beneficial effects of the technology or the necessity of cost input, and thereby secure the legitimacy of large-scale R&D expenditure given the characteristics of public technology. Hence, K-water, a public corporation in Korea that deals with the public good of water resources, will be able to establish a commercialization strategy for business operation and prepare a basis for calculating the performance of its R&D expenditure. In this study, K-water developed a web-based technology valuation model for public-interest water resources, based on a technology evaluation system suited to the characteristics of technologies in the water resources field. In particular, by utilizing the evaluation methodology of the National Institute of Advanced Industrial Science and Technology (AIST) in Japan to match expense items to expense accounts based on the related benefit items, we propose the so-called "K-water proprietary model", which combines the cost-benefit approach with free cash flow (FCF), and ultimately build a pipeline into the K-water research performance management system and verify it with a practical case of a technology related to desalination. We analyze the embedded design logic and evaluation process of the web-based valuation system, which reflects the characteristics of water resources technology, along with the reference information and database (D/B)-associated logic of each model for calculating public-interest-based and profit-based technology values in the integrated technology management system. We review the hybrid evaluation module, which quantifies the qualitative evaluation indices reflecting the unique characteristics of water resources, and the visualized user interface (UI) of the actual web-based evaluation, both of which are appended, for calculating business value based on financial data, to the existing web-based technology valuation systems in other fields. K-water's technology valuation model distinguishes between public-interest and profit-oriented water technologies. First, the evaluation modules of the profit-oriented technology valuation model are designed around the profitability of the technology.
For example, the technology inventory that K-water holds includes a number of profit-oriented technologies such as water treatment membranes. On the other hand, the public-interest technology valuation model is designed to evaluate public-interest-oriented technologies such as dams, reflecting the characteristics of public benefits and costs. In order to examine the appropriateness of the cost-benefit-based public utility valuation model (i.e., the K-water-specific technology valuation model) presented in this study, we applied it to a practical case, carrying out a benefit-to-cost analysis of a water resource technology with a 20-year lifetime. In future work, we will further verify the K-water public-utility-based valuation model for each business model, reflecting various characteristics of the business environment.
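As a rough illustration of the cost-benefit arithmetic underlying such a model, the sketch below discounts hypothetical annual benefit and cost streams over a 20-year lifetime and reports the NPV and B/C ratio. All figures, and the flat-stream assumption, are illustrative placeholders, not values from the paper.

```python
# Minimal cost-benefit sketch: NPV and B/C ratio over a 20-year lifetime.
# The cash flows and discount rate below are hypothetical placeholders.

def present_value(flows, rate):
    """Discount a list of yearly flows (years 1..n) to present value."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows, start=1))

years = 20
rate = 0.045                      # assumed social discount rate
benefits = [120.0] * years        # hypothetical annual public benefits
costs = [70.0] * years            # hypothetical annual costs (O&M etc.)
initial_investment = 400.0        # hypothetical year-0 R&D/CAPEX outlay

pv_benefits = present_value(benefits, rate)
pv_costs = initial_investment + present_value(costs, rate)

npv = pv_benefits - pv_costs
bc_ratio = pv_benefits / pv_costs  # the project passes if B/C > 1

print(f"NPV = {npv:.1f}, B/C = {bc_ratio:.2f}")
```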

A Performance Comparison of the Mobile Agent Model with the Client-Server Model under Security Conditions (보안 서비스를 고려한 이동 에이전트 모델과 클라이언트-서버 모델의 성능 비교)

  • Han, Seung-Wan;Jeong, Ki-Moon;Park, Seung-Bae;Lim, Hyeong-Seok
    • Journal of KIISE:Information Networking
    • /
    • v.29 no.3
    • /
    • pp.286-298
    • /
    • 2002
• The Remote Procedure Call (RPC) has traditionally been used for inter-process communication (IPC) among processes in distributed computing environments. As distributed applications have become more and more complicated, the Mobile Agent paradigm for IPC has emerged. Because there are several paradigms for IPC, studies evaluating and comparing the performance of each paradigm have recently appeared. However, the performance models used in previous research did not correctly reflect real distributed computing environments, because they did not consider the elements required to provide security services. Since real distributed environments are open, they are very vulnerable to a variety of attacks. In order to execute applications securely in a distributed computing environment, security services that protect applications and information against these attacks must be considered. In this paper, we evaluate and compare the performance of the Remote Procedure Call with that of the Mobile Agent among IPC paradigms. We examine the security services needed to execute applications securely, and propose new performance models that take those services into account. We design performance models describing an information retrieval system with N database services, using Petri nets. We compare the performance of the two paradigms by assigning numerical values to the parameters and measuring the execution times of the two paradigms. The comparison of the two performance models with security services for secure communication shows that the execution time of the Remote Procedure Call performance model increases sharply because of the many communications between hosts under heavyweight cryptographic mechanisms, whereas the execution time of the Mobile Agent model increases only gradually, because the Mobile Agent paradigm can reduce the quantity of communication between hosts.
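The qualitative result can be illustrated with a much simpler closed-form cost model than the paper's Petri nets: if each secure RPC round pays an encryption-plus-network cost per message, while a mobile agent pays a one-time secured migration per host and then queries locally, RPC time grows with the number of interactions. The sketch below is a back-of-the-envelope stand-in with invented parameters, not the paper's model.

```python
# Toy cost comparison of secure RPC vs. a mobile agent querying N databases.
# All cost parameters are invented for illustration; the paper instead uses
# Petri-net performance models with measured parameter values.

def rpc_time(n_db, queries_per_db, crypto, net):
    # Every request and reply crosses the network and is encrypted/decrypted.
    messages = 2 * n_db * queries_per_db
    return messages * (crypto + net)

def agent_time(n_db, queries_per_db, crypto, net, local):
    # The agent migrates once per host (secured), then queries locally.
    migrations = n_db + 1                      # hops including the return trip
    return migrations * (crypto + net) + n_db * queries_per_db * local

CRYPTO, NET, LOCAL = 5.0, 2.0, 0.5             # hypothetical unit costs
for n in (1, 5, 10, 20):
    print(f"N={n:2d}: RPC={rpc_time(n, 10, CRYPTO, NET):7.1f} "
          f"agent={agent_time(n, 10, CRYPTO, NET, LOCAL):7.1f}")
```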

Landslide Susceptibility Mapping Using Deep Neural Network and Convolutional Neural Network (Deep Neural Network와 Convolutional Neural Network 모델을 이용한 산사태 취약성 매핑)

  • Gong, Sung-Hyun;Baek, Won-Kyung;Jung, Hyung-Sup
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_2
    • /
    • pp.1723-1735
    • /
    • 2022
• Landslides are one of the most prevalent natural disasters, threatening both people and property. Landslides can also cause damage at the national level, so effective prediction and prevention are essential. Research on producing landslide susceptibility maps with high accuracy is being conducted steadily, and various models have been applied to landslide susceptibility analysis. Pixel-based machine learning models such as frequency ratio models, logistic regression models, ensemble models, and artificial neural networks have mainly been applied. Recent studies have shown that the kernel-based convolutional neural network (CNN) technique is effective and that the spatial characteristics of the input data have a significant effect on the accuracy of landslide susceptibility mapping. For this reason, the purpose of this study is to analyze landslide susceptibility using a pixel-based deep neural network model and a patch-based convolutional neural network model. The study area was set in Gangwon-do, including Inje, Gangneung, and Pyeongchang, where landslides have occurred frequently and caused damage. The landslide-related factors used were slope, curvature, stream power index (SPI), topographic wetness index (TWI), topographic position index (TPI), timber diameter, timber age, lithology, land use, soil depth, soil parent material, lineament density, fault density, normalized difference vegetation index (NDVI), and normalized difference water index (NDWI). The landslide-related factors were built into a spatial database through data preprocessing, and landslide susceptibility maps were predicted using deep neural network (DNN) and CNN models. The models and landslide susceptibility maps were verified through average precision (AP) and root mean square error (RMSE); as a result of the verification, the patch-based CNN model showed 3.4% better performance than the pixel-based DNN model. The results of this study can be used to predict landslides and are expected to serve as a scientific basis for establishing land use policies and landslide management policies.
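To make the pixel-versus-patch distinction concrete, here is a minimal sketch of the two model shapes in Keras: the DNN consumes a single pixel's 15 factor values, while the CNN consumes a small patch around the pixel, so spatial context enters through the convolutions. The layer widths and the 9x9 patch size are assumptions; the abstract does not specify the architectures.

```python
# Sketch of pixel-based DNN vs. patch-based CNN inputs for susceptibility
# mapping. 15 = number of landslide-related factors; the 9x9 patch size and
# all layer widths are illustrative assumptions, not the paper's settings.
import tensorflow as tf
from tensorflow.keras import layers, models

n_factors = 15

# Pixel-based DNN: one vector of factor values per pixel.
dnn = models.Sequential([
    layers.Input(shape=(n_factors,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # landslide susceptibility score
])

# Patch-based CNN: a small neighborhood around each pixel, so the kernels
# can exploit the spatial characteristics of the input factors.
cnn = models.Sequential([
    layers.Input(shape=(9, 9, n_factors)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.Conv2D(64, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),
])

for m in (dnn, cnn):
    m.compile(optimizer="adam", loss="binary_crossentropy")
```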

A Study on Web-based Technology Valuation System (웹기반 지능형 기술가치평가 시스템에 관한 연구)

  • Sung, Tae-Eung;Jun, Seung-Pyo;Kim, Sang-Gook;Park, Hyun-Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.23-46
    • /
    • 2017
• Although there have been cases of evaluating the value of specific companies or projects since the early 2000s, centered on developed countries in North America and Europe, systems and methodologies for estimating the economic value of individual technologies or patents have been put into practice only gradually. Of course, several online systems exist that qualitatively evaluate a technology's grade or the patent rating of the technology to be evaluated, such as 'KTRS' of KIBO and 'SMART 3.1' of the Korea Invention Promotion Association. Recently, however, a web-based technology valuation system, referred to as the 'STAR-Value system', which calculates quantitative values of a subject technology for various purposes such as business feasibility analysis, investment attraction, and tax/litigation, has been officially opened and is spreading. In this study, we introduce the types of methodologies and evaluation models, the reference information supporting these theories, and how the associated databases are utilized, focusing on the various modules and frameworks embedded in the STAR-Value system. In particular, there are six valuation methods, including the discounted cash flow (DCF) method, a representative method based on the income approach that values anticipated future economic income at present, and the relief-from-royalty method, which calculates the present value of royalties, taking the contribution of the subject technology to the business value created as the royalty rate. We look at how the models and related supporting information (technology life, corporate (business) financial information, discount rate, industrial technology factors, etc.) can be used and linked in an intelligent manner. Based on classifications of the technology to be evaluated, such as the International Patent Classification (IPC) or the Korea Standard Industry Classification (KSIC), the STAR-Value system automatically returns metadata such as technology cycle time (TCT), sales growth rates and profitability data of similar companies or industry sectors, the weighted average cost of capital (WACC), and indices of industrial technology factors, and applies adjustment factors to them, so that the result of the technology value calculation has high reliability and objectivity. Furthermore, if the information on the potential market size of the target technology and the market share of the commercializing entity is drawn from data-driven sources, or if the estimated value ranges of similar technologies by industry sector are provided from evaluation cases already completed and accumulated in the database, the STAR-Value system is anticipated to present highly accurate value ranges in real time by intelligently linking its various support modules. Beyond the explanation of the various valuation models and their primary variables as presented in this paper, the STAR-Value system is intended to be utilized more systematically and in a data-driven way by supporting an optimal model selection guideline module, an intelligent technology value range reasoning module, a similar-company-selection-based market share prediction module, and so on.
In addition, this research on the development and intelligence of the web-based STAR-Value system is significant in that it widely disseminates a web-based system that can be used to validate the theoretical feasibility of the technology valuation field and apply it in practice, and the system is expected to be utilized in various fields of technology commercialization.
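As a concrete illustration of one of the income-approach methods named above, the sketch below implements a bare-bones relief-from-royalty calculation: projected sales times an assumed royalty rate, after tax, discounted at the WACC over the technology's remaining life. All numbers are placeholders, not inputs or outputs of the STAR-Value system.

```python
# Bare-bones relief-from-royalty valuation sketch. Every input below is a
# hypothetical placeholder; the STAR-Value system derives such inputs
# (technology life, WACC, royalty rate, ...) from reference databases.

def relief_from_royalty(sales, royalty_rate, tax_rate, wacc):
    """PV of the after-tax royalties the owner is 'relieved' from paying."""
    value = 0.0
    for t, s in enumerate(sales, start=1):
        royalty_after_tax = s * royalty_rate * (1 - tax_rate)
        value += royalty_after_tax / (1 + wacc) ** t
    return value

projected_sales = [1000, 1100, 1210, 1331, 1464]  # assumed 5-year tech life
tech_value = relief_from_royalty(
    projected_sales,
    royalty_rate=0.03,   # assumed technology-contribution royalty rate
    tax_rate=0.22,
    wacc=0.10,
)
print(f"Technology value ~= {tech_value:.0f}")
```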

A Store Recommendation Procedure in Ubiquitous Market for User Privacy (U-마켓에서의 사용자 정보보호를 위한 매장 추천방법)

  • Kim, Jae-Kyeong;Chae, Kyung-Hee;Gu, Ja-Chul
    • Asia pacific journal of information systems
    • /
    • v.18 no.3
    • /
    • pp.123-145
    • /
    • 2008
• Recently, as information and communication technology develops, discussion of the ubiquitous environment is occurring from diverse perspectives. A ubiquitous environment is an environment that can transfer data through networks regardless of physical space, virtual space, time, or location. In order to realize the ubiquitous environment, Pervasive Sensing technology, which enables the recognition of users' data without a border between physical and virtual space, is required. In addition, the latest, diverse technologies are necessary, such as Context-Awareness technology, which constructs the context around the user by sharing the data accessed through Pervasive Sensing technology, and linkage technology, which prevents information loss across wired and wireless networks and databases. In particular, Pervasive Sensing technology is taken as an essential technology that enables user-oriented services by recognizing users' needs even before users ask. The technologies mentioned above give the ubiquitous environment many characteristics, such as ubiquity, abundance of data, mutuality, high information density, individualization, and customization. Among them, information density indicates the accessible amount and quality of information, which is stored in bulk with ensured quality through Pervasive Sensing technology. Using this, companies can provide personalized content (or information) to a target customer. Above all, there are an increasing number of studies of recommender systems that provide what customers need even when customers do not explicitly express their needs. Recommender systems are well known for their positive effect of enlarging selling opportunities and reducing customers' search costs, since they find and provide information in advance according to customers' traits and preferences in a commerce environment. Recommender systems have proved their usability through several methodologies and experiments conducted in many different fields since the mid-1990s. Most research on recommender systems to date has taken products or information in internet or mobile contexts as its object, but there is not enough research on recommending an adequate store to customers in a ubiquitous environment. It is possible to track customers' behaviors in a ubiquitous environment in the same way as in an online market space, even when customers are purchasing in an offline marketplace. Unlike the existing internet space, in a ubiquitous environment there is increasing interest in stores that provide information according to the traffic lines of customers. In other words, the same product can be purchased in several different stores, and the preferred store can differ among customers according to personal preferences such as the traffic line between stores, location, atmosphere, quality, and price. Krulwich (1997) developed Lifestyle Finder, which recommends a product and a store by using demographic information and purchasing information generated in internet commerce. Also, Fano (1998) created Shopper's Eye, an information-providing system.
In Shopper's Eye, information about the store closest to the customer's present location is shown when the customer sends a to-buy list. Sadeh (2003) developed MyCampus, which recommends appropriate information and stores in accordance with the schedule saved on a customer's mobile device. Moreover, Keegan and O'Hare (2004) came up with EasiShop, which provides suitable store information, including price, after-sales service, and accessibility, after analyzing the to-buy list and the current location of customers. However, Krulwich (1997) does not reflect the characteristics of physical space, being based on the online commerce context; Keegan and O'Hare (2004) only provide information about stores related to a product; and Fano (1998) does not fully consider the relationship between store preference and the store itself. The most recent of these, Sadeh (2003), experimented on a campus with a recommender system that reflects situation and preference information in addition to the characteristics of physical space. Yet there is a potential problem, since these approaches are based on customers' location and preference information, which is connected to the invasion of privacy. The primary point of controversy is the invasion of privacy and personal information in a ubiquitous environment, according to research by Al-Muhtadi (2002), Beresford and Stajano (2003), and Ren (2006). Additionally, individuals want to remain anonymous to protect their personal information, as mentioned in Srivastava (2000). Therefore, in this paper, we suggest a methodology to recommend stores in a U-market on the basis of the ubiquitous environment without using personal information, in order to protect personal information and privacy. The main idea behind our suggested methodology is based on the Feature Matrices model (FM model; Shahabi and Banaei-Kashani, 2003), which uses clusters of customers' similar transaction data, similar to collaborative filtering. However, unlike collaborative filtering, this methodology overcomes the problems of personal information and privacy, since it is not aware of exactly who the customers are. The methodology is compared with a single-trait model (vector model) based on visitor logs, to examine the actual improvement of the recommendation when context information is used. It is not easy to find real U-market data, so we experimented with factual data from a real department store, augmented with context information. The recommendation procedure for the U-market proposed in this paper is divided into four major phases. The first phase collects and preprocesses data for the analysis of customers' shopping patterns. The traits of shopping patterns are expressed as N-dimensional feature matrices. In the second phase, similar shopping patterns are grouped into clusters, and the representative pattern of each cluster is derived. The distance between shopping patterns is calculated by the Projected Pure Euclidean Distance (Shahabi and Banaei-Kashani, 2003). The third phase finds a representative pattern that is similar to the target customer, while the shopping information of the customer is traced and saved dynamically. Fourth, the next store is recommended based on the physical distance between the stores of the representative pattern and the present location of the target customer. In this research, we evaluated the accuracy of the recommendation method on factual data derived from a department store.
Because of the technological difficulty of real-time tracking, we extracted purchase-related information and added context information to each transaction. As a result, recommendation based on the FM model, which applies purchasing and context information, is more stable and accurate than that of the vector model. Additionally, we found that recommendation results become more precise as more shopping information accumulates. Realistically, because of the limitations of realizing a ubiquitous environment, we were not able to reflect all the different kinds of context, but more explicit analysis is expected to be attainable in the future once a practical system is implemented.
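A minimal sketch of the four-phase procedure follows, with plain k-means and ordinary Euclidean distance standing in for the paper's Feature Matrices clustering and Projected Pure Euclidean Distance; the pattern vectors and store coordinates are invented for illustration.

```python
# Sketch of the four-phase store recommendation flow. k-means and plain
# Euclidean distance are simplifications standing in for the FM model's
# clustering and Projected Pure Euclidean Distance; all data is invented.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Phase 1: anonymous shopping-pattern vectors (e.g., visit counts per store).
patterns = rng.poisson(2.0, size=(200, 8)).astype(float)  # 200 visits, 8 stores

# Phase 2: cluster patterns; cluster centers act as representative patterns.
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(patterns)

# Phase 3: match the target customer's pattern to the nearest representative
# pattern -- no identity is needed, only the pattern itself.
target = rng.poisson(2.0, size=8).astype(float)
rep = km.cluster_centers_[km.predict(target[None, :])[0]]

# Phase 4: among stores favored by the representative pattern, recommend
# the one physically closest to the customer's current location.
store_xy = rng.uniform(0, 100, size=(8, 2))    # invented store coordinates
customer_xy = np.array([50.0, 50.0])
candidates = np.argsort(rep)[::-1][:3]         # top-3 stores in the pattern
dists = np.linalg.norm(store_xy[candidates] - customer_xy, axis=1)
print("recommended store:", candidates[np.argmin(dists)])
```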

Natural Language Processing Model for Data Visualization Interaction in Chatbot Environment (챗봇 환경에서 데이터 시각화 인터랙션을 위한 자연어처리 모델)

  • Oh, Sang Heon;Hur, Su Jin;Kim, Sung-Hee
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.9 no.11
    • /
    • pp.281-290
    • /
    • 2020
• With the spread of smartphones, services that use personalized data are increasing. In particular, healthcare-related services deal with a variety of data, and data visualization techniques are used to show these data effectively. As data visualization techniques are used, interaction within visualizations is naturally emphasized as well. In the PC environment, since interaction with a data visualization is performed with a mouse, various kinds of data filtering are provided. On the other hand, for interaction in a mobile environment, the screen size is small and it is difficult to recognize whether interaction is possible, so only the limited visualizations provided by an app can be offered through button touches. In order to overcome this limitation of interaction in the mobile environment, we aim to enable data visualization interaction through conversations with a chatbot, so that users can examine their individual data through various visualizations. To do this, it is necessary to convert the user's natural language question into a database query and retrieve the result data with the converted query from the database in which the data is stored periodically. There are many ongoing studies on converting natural language into queries, but research on converting user questions into queries in the context of visualization has not yet been done. Therefore, in this paper, we focus on query generation in a situation where the data visualization technique has been determined in advance. The supported interactions are filtering on x-axis values and comparison between two groups. The test scenario used step count data; filtering on an x-axis period was shown as a bar graph, and a comparison between two groups was shown as a line graph. In order to develop a natural language processing model that can receive the requested information through visualization, about 15,800 training examples were collected through a survey of 1,000 people. As a result of algorithm development and performance evaluation, we obtained about 89% accuracy for the classification model and 99% accuracy for the query generation model.
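The pipeline the abstract describes can be illustrated with a toy two-stage flow: classify the utterance into one of the two supported interactions, then fill a fixed SQL template for the predetermined visualization. The keyword rules, the `steps` table, and its columns below are invented stand-ins for the paper's trained classification and query generation models.

```python
# Toy two-stage flow: (1) classify the chatbot utterance into one of the two
# supported interactions, (2) fill an SQL template for the predetermined
# visualization. Keyword rules, the `steps` table, and its columns are all
# invented stand-ins for the paper's trained models.
import re

def classify(utterance: str) -> str:
    if re.search(r"compare|versus|vs\.?|two groups", utterance, re.I):
        return "compare_groups"        # rendered as a line graph
    return "filter_period"             # rendered as a bar graph

def to_sql(utterance: str) -> str:
    intent = classify(utterance)
    if intent == "filter_period":
        m = re.search(r"(\d{4}-\d{2}-\d{2}) to (\d{4}-\d{2}-\d{2})", utterance)
        start, end = m.groups() if m else ("2020-01-01", "2020-01-31")
        return (f"SELECT date, step_count FROM steps "
                f"WHERE date BETWEEN '{start}' AND '{end}';")
    return ("SELECT group_id, date, AVG(step_count) FROM steps "
            "GROUP BY group_id, date;")

print(to_sql("show my steps from 2020-03-01 to 2020-03-07"))
print(to_sql("compare the two groups' steps"))
```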

Predicting the Potential Habitat and Future Distribution of Brachydiplax chalybea flavovittata Ris, 1911 (Odonata: Libellulidae) (기후변화에 따른 남색이마잠자리 잠재적 서식지 및 미래 분포예측)

  • Soon Jik Kwon;Yung Chul Jun;Hyeok Yeong Kwon;In Chul Hwang;Chang Su Lee;Tae Geun Kim
    • Journal of Wetlands Research
    • /
    • v.25 no.4
    • /
    • pp.335-344
    • /
    • 2023
• Brachydiplax chalybea flavovittata, a climate-sensitive biological indicator species, was first observed and recorded on Jeju Island, Korea, in 2010. Overwintering was recently confirmed in the Yeongsan River area. This study aimed to predict the potential distribution patterns of the larvae of B. chalybea flavovittata and to understand its ecological characteristics as well as population changes under global climate change. Data were collected both from the Global Biodiversity Information Facility (GBIF) and through field surveys from May 2019 to May 2023. For the distribution model, we used variables selected from among the 19 bioclimatic variables downloaded from the WorldClim database. The MaxEnt model was adopted for the prediction of the potential and future distribution of B. chalybea flavovittata. Larval distribution ranged within a region delimited in northern latitude from Jeju-si, Jeju Special Self-Governing Province (33.318096°) to Yeoju-si, Gyeonggi-do (37.366734°), and in eastern longitude from Jindo-gun, Jeollanam-do (126.054925°) to Yangsan-si, Gyeongsangnam-do (129.016472°). Based on the Ramsar wetland classification system, M-type wetlands (permanent rivers, streams, and creeks) were the most common habitat, followed by Tp-type (permanent freshwater marshes and pools; 45.8%) and F-type (estuarine waters; 4.2%). The MaxEnt model showed that the potential distribution with high inhabiting probability included Ulsan and Daegu Metropolitan City in addition to the currently discovered habitats. Applying the future scenarios of the Intergovernmental Panel on Climate Change (IPCC), it was predicted that the possible distribution area would expand in the 2050s and 2090s, covering the southern and western coastal regions, the southern Daegu metropolitan area, and the eastern coastal regions in the near future. This study suggests that B. chalybea flavovittata can be used as an effective indicator species for climate change, with monitoring of its distribution range. Our findings will also help to provide basic information for the conservation and management of co-existing native species.
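For readers unfamiliar with the workflow, a presence-background species distribution model of the MaxEnt family can be sketched as below, with logistic regression on presence points versus random background points standing in for the actual MaxEnt software, and random arrays standing in for the WorldClim bioclimatic values at occurrence and background locations.

```python
# Sketch of a presence-background species distribution workflow. Logistic
# regression is a crude stand-in for the MaxEnt software actually used, and
# random arrays stand in for WorldClim bioclimatic variables sampled at
# occurrence records and at random background cells.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_vars = 19                                  # WorldClim bioclim variables

presence = rng.normal(loc=0.5, size=(60, n_vars))       # env. at records
background = rng.normal(loc=0.0, size=(1000, n_vars))   # random background

X = np.vstack([presence, background])
y = np.r_[np.ones(len(presence)), np.zeros(len(background))]

sdm = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
sdm.fit(X, y)

# "Habitat suitability" for new cells = predicted probability of presence;
# re-running with future-scenario climate layers gives the future projection.
new_cells = rng.normal(size=(5, n_vars))
print(sdm.predict_proba(new_cells)[:, 1].round(3))
```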

Analysis and Evaluation of Frequent Pattern Mining Technique based on Landmark Window (랜드마크 윈도우 기반의 빈발 패턴 마이닝 기법의 분석 및 성능평가)

  • Pyun, Gwangbum;Yun, Unil
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.101-107
    • /
    • 2014
• With the development of online services, recent forms of databases have changed from static database structures to dynamic stream database structures. Previous data mining techniques have been used as tools for decision making, such as the establishment of marketing strategies and DNA analysis. However, the capability to analyze real-time data more quickly is necessary in areas of recent interest such as sensor networks, robotics, and artificial intelligence. Landmark window-based frequent pattern mining, one of the stream mining approaches, performs mining operations with respect to parts of databases or each of their transactions, instead of all the data. In this paper, we analyze and evaluate the techniques of the well-known landmark window-based frequent pattern mining algorithms, Lossy counting and hMiner. When Lossy counting mines frequent patterns from a set of new transactions, it performs union operations between the previous and current mining results. hMiner, a state-of-the-art algorithm based on the landmark window model, conducts mining operations whenever a new transaction occurs. Since hMiner extracts frequent patterns as soon as a new transaction is entered, we can obtain the latest mining results reflecting real-time information. For this reason, such algorithms are also called online mining approaches. We evaluate and compare the performance of the earlier algorithm, Lossy counting, and the latest one, hMiner. As the criteria of our performance analysis, we first consider the algorithms' total runtime and average processing time per transaction. In addition, to compare the efficiency of their storage structures, their maximum memory usage is also evaluated. Lastly, we show how stably the two algorithms conduct their mining work on databases featuring gradually increasing numbers of items. With respect to mining time and transaction processing, hMiner is faster than Lossy counting. Since hMiner stores candidate frequent patterns in a hash structure, it can access candidate frequent patterns directly, whereas Lossy counting stores them in a lattice and therefore has to search multiple nodes in order to access the candidate frequent patterns. On the other hand, hMiner performs worse than Lossy counting in terms of maximum memory usage. hMiner must keep all of the information for candidate frequent patterns in order to store them in hash buckets, while Lossy counting stores them with reduced information by using the lattice method. Since the storage of Lossy counting can share items concurrently included in multiple patterns, its memory usage is more efficient than that of hMiner. However, hMiner shows better efficiency than Lossy counting in the scalability evaluation, for the following reasons: if the number of items increases, shared items decrease, which weakens Lossy counting's memory efficiency; furthermore, as the number of transactions grows, its pruning effect worsens. From the experimental results, we conclude that landmark window-based frequent pattern mining algorithms are suitable for real-time systems, although they require a significant amount of memory. Hence, their data structures need to be made more efficient in order to also utilize them in resource-constrained environments such as wireless sensor networks (WSNs).
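For reference, the classic Lossy counting algorithm (Manku and Motwani) in its single-item form looks like the sketch below. The paper applies the same idea to patterns over a landmark window, so this illustrates the pruning mechanism, not the evaluated implementation.

```python
# Single-item Lossy counting (Manku & Motwani). Counts are pruned at each
# bucket boundary, so memory stays bounded while any item with true
# frequency >= epsilon * N is guaranteed to survive. The paper applies the
# same mechanism to frequent patterns rather than single items.
import math

def lossy_counting(stream, epsilon=0.01):
    width = math.ceil(1 / epsilon)      # bucket width
    counts = {}                         # item -> [count, max undercount]
    bucket = 1
    for n, item in enumerate(stream, start=1):
        if item in counts:
            counts[item][0] += 1
        else:
            counts[item] = [1, bucket - 1]
        if n % width == 0:              # bucket boundary: prune rare items
            counts = {i: cd for i, cd in counts.items()
                      if cd[0] + cd[1] > bucket}
            bucket += 1
    return counts

summary = lossy_counting("abracadabra" * 500, epsilon=0.05)
print({i: c for i, (c, _) in summary.items()})
```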

Adaptive Lock Escalation in Database Management Systems (데이타베이스 관리 시스템에서의 적응형 로크 상승)

  • Chang, Ji-Woong;Lee, Young-Koo;Whang, Kyu-Young;Yang, Jae-Heon
    • Journal of KIISE:Databases
    • /
    • v.28 no.4
    • /
    • pp.742-757
    • /
    • 2001
• Since database management systems (DBMSs) have limited lock resources, transactions requesting locks beyond the limit must be aborted. In the worst case, if such transactions are aborted repeatedly, the DBMS can become paralyzed, i.e., transactions execute but cannot commit. Lock escalation is considered a solution to this problem. However, existing lock escalation methods do not provide a complete solution. In this paper, we propose a new lock escalation method, adaptive lock escalation, that solves most of these problems. First, we propose a general model for lock escalation and present the concept of the unescalatable lock, which is the major cause of transaction aborts. Second, we propose the notions of semi lock escalation, lock blocking, and selective relief as mechanisms to control the number of unescalatable locks. We then propose the adaptive lock escalation method using these notions. Adaptive lock escalation reduces needless aborts and guarantees that the DBMS is not paralyzed under excessive lock requests. It also allows graceful degradation of performance under those circumstances. Third, through extensive simulation, we show that adaptive lock escalation outperforms existing lock escalation methods. The results show that, compared to the existing methods, adaptive lock escalation reduces the number of aborts and the average response time, and increases the throughput to a great extent. In particular, it is shown that the number of concurrent transactions can be increased more than 16- to 256-fold. The contribution of this paper is significant in that it formally analyzes the role of lock escalation in lock resource management and identifies the detailed underlying mechanisms. Existing lock escalation methods rely on users or system administrators to handle the problems of excessive lock requests. In contrast, adaptive lock escalation relieves users of this responsibility by providing graceful degradation and preventing system paralysis through automatic control of unescalatable locks. Thus, adaptive lock escalation can contribute to developing the self-tuning DBMSs that draw much attention these days.
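To ground the terminology, the sketch below shows the baseline mechanism the paper builds on: a lock table of bounded size in which a transaction's row locks are escalated to a single table lock once they exceed a threshold, freeing lock resources. This is plain lock escalation for illustration only; the paper's adaptive method adds semi lock escalation, lock blocking, and selective relief on top of this mechanism.

```python
# Baseline lock escalation sketch: when a transaction holds more than
# `threshold` row locks on a table, they are replaced by one table lock,
# freeing entries in the bounded lock table. The paper's adaptive method
# extends this with semi lock escalation, lock blocking, and selective relief.
from collections import defaultdict

class LockManager:
    def __init__(self, capacity=1000, threshold=100):
        self.capacity = capacity            # total lock-table entries
        self.threshold = threshold          # row locks before escalation
        self.row_locks = defaultdict(set)   # (txn, table) -> set of row ids
        self.table_locks = set()            # (txn, table)

    def _used(self):
        return sum(len(r) for r in self.row_locks.values()) + len(self.table_locks)

    def lock_row(self, txn, table, row):
        if (txn, table) in self.table_locks:
            return True                     # already covered by a table lock
        if self._used() >= self.capacity:
            raise MemoryError("lock table full: abort transaction")
        self.row_locks[(txn, table)].add(row)
        if len(self.row_locks[(txn, table)]) > self.threshold:
            # Escalate: one table lock replaces many row locks.
            del self.row_locks[(txn, table)]
            self.table_locks.add((txn, table))
        return True

lm = LockManager(capacity=50, threshold=10)
for r in range(30):                         # escalates after 10 row locks
    lm.lock_row(txn=1, table="orders", row=r)
print(len(lm.table_locks), sum(map(len, lm.row_locks.values())))  # -> 1 0
```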
