• Title/Summary/Keyword: Web-based Database


Developing an Endangered Species Habitat Management System based on Location Information (위치정보 기반 멸종위기종 서식지 관리시스템 개발)

  • Kim, Sun-Jib;Kim, Sang-hyup
    • Journal of Internet of Things and Convergence / v.6 no.3 / pp.67-73 / 2020
  • Research on endangered amphibians in Korea from the 1980s to the early 2000s focused mainly on the life cycle and distribution of species. Although a relatively diverse range of studies has been conducted on these species, studies on habitat prediction, analysis, change, and management remain insufficient. The web site that provides biota information based on location data in Korea is operated by the National Bio Resource Center under the Ministry of Environment, but it carries no information on endangered species, and even general species information is very scant. For this research, we built a database of location records for the narrow-mouth frog, an endangered species, by combining literature and field surveys, and established a system using new, open platform technologies that non-IT personnel can access easily. The system separates administrator and user functions, and an authentication procedure based on user membership prevents indiscriminate sharing of the information. The system displays the distance between the user's current location and recorded narrow-mouth frog locations and, considering the species' ecological characteristics, marks a 500 m radius to indicate its habitat range. The system is expected to support the legal process of revising existing protected areas and designating new ones, and practical mitigation measures can be derived when it is used in natural environment reviews of development plans. In addition, the deployed system can be applied to a wide variety of other endangered species simply by changing the data entered.
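
A minimal sketch of the kind of proximity check this abstract describes: the great-circle distance from a user's current position to a recorded narrow-mouth frog location, flagged against the 500 m habitat radius. This is not the authors' implementation; the function names and coordinates are illustrative.

```python
# Hypothetical sketch of the 500 m habitat-radius check described in the abstract.
# Coordinates and function names are illustrative, not taken from the paper.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres
HABITAT_RADIUS_M = 500      # radius used for the narrow-mouth frog habitat range

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def within_habitat(user_lat, user_lon, site_lat, site_lon):
    """Return (distance_m, inside_500m_flag) for a recorded occurrence point."""
    d = haversine_m(user_lat, user_lon, site_lat, site_lon)
    return d, d <= HABITAT_RADIUS_M

# Example: distance from an arbitrary current location to an arbitrary record.
print(within_habitat(36.3504, 127.3845, 36.3531, 127.3890))
```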

A Scheduling Algorithm using The Priority of Broker for Improving The Performance of Semantic Web-based Visual Media Retrieval Framework (분산시각 미디어 검색 프레임워크의 성능향상을 위한 브로커 서버 우선순위를 이용한 라운드 로빈 스케줄링 기법)

  • Shim, Jun-Yong;Won, Jae-Hoon;Kim, Se-Chang;Kim, Jung-Sun
    • Journal of KIISE: Software and Applications / v.35 no.1 / pp.22-32 / 2008
  • HERMES was proposed to overcome the weaknesses of existing ontology-based image retrieval systems and of distributed image retrieval built on simply structured databases, guaranteeing the autonomy of individual image providers and supporting semantics-based retrieval. However, that framework did not address the degradation in performance and scalability that occurs when many users connect to the broker server simultaneously. In this paper, to provide a consistent level of service without performance degradation when numerous users connect at the same time, multiple broker servers are installed, the processing time of each internal broker component is measured through a monitoring system and stored, and a ranking is computed from the stored data. Queries entered through the user interface are then distributed across several servers with reference to this broker ranking table, yielding a load-balancing scheme that improves performance and reliability. Experiments show that the proposed scheduling technique is faster than existing techniques.
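
The abstract does not give the scheduling code, so the following is only a hedged sketch of a ranking-table-driven, priority-weighted round-robin dispatcher: brokers with faster measured component times appear more often in the rotation. Class and broker names are invented for illustration.

```python
# Hypothetical sketch of priority-aware round-robin dispatch over broker servers,
# in the spirit of the ranking-table scheme the abstract describes.
import itertools

class BrokerRankingTable:
    """Keeps brokers ordered by measured average processing time (fastest first)."""
    def __init__(self):
        self.avg_time_ms = {}  # broker name -> latest measured average time

    def update(self, broker, measured_ms):
        self.avg_time_ms[broker] = measured_ms

    def ranked(self):
        return sorted(self.avg_time_ms, key=self.avg_time_ms.get)

class PriorityRoundRobin:
    """Cycles through brokers, visiting faster brokers more often (weight by rank)."""
    def __init__(self, table, max_weight=3):
        weighted = []
        for i, broker in enumerate(table.ranked()):
            weighted.extend([broker] * max(1, max_weight - i))
        self._cycle = itertools.cycle(weighted)

    def next_broker(self):
        return next(self._cycle)

table = BrokerRankingTable()
for name, ms in [("broker-a", 40), ("broker-b", 95), ("broker-c", 60)]:
    table.update(name, ms)

scheduler = PriorityRoundRobin(table)
print([scheduler.next_broker() for _ in range(6)])
```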

The Development of DB-type Teaching and Learning Material for Geography Instruction Using a Method of ICT (ICT 활용 지리수업을 위한 DB형 교수-학습 자료 개발)

  • 최원회;조남강;장길수;박종승;최규학;신기진;백종렬;현경숙;신홍철
    • Journal of the Korean Geographical Society / v.38 no.2 / pp.275-291 / 2003
  • It was essential to develop DB-type teaching and learning material for geography instruction using ICT, and such material was considered an alternative for solving the problems of web-based geography instruction. Accordingly, in this study a geography image DB program was developed, and based on this program a CD-ROM called GEO-DB was produced, functioning as an electronic dictionary of geography images for geography teaching and learning. GEO-DB comprises 3,060 geography images collected by teachers and learners and was designed to be simple for both to use. In particular, a portfolio function was included, aimed at supporting teachers' instructional design and developing learners' self-directed learning ability. Teachers and learners who used GEO-DB judged that, because of its ease of use, speed of reference, and unlimited extensibility, it could broaden the possibilities of ICT-based instruction and contribute to developing geography learning ability and changing attitudes toward learning geography.

Development of Android Smartphone App for Corner Point Feature Extraction using Remote Sensing Image (위성영상정보 기반 코너 포인트 객체 추출 안드로이드 스마트폰 앱 개발)

  • Kang, Sang-Goo;Lee, Ki-Won
    • Korean Journal of Remote Sensing / v.27 no.1 / pp.33-41 / 2011
  • In information and communication technology, the shift from the web to smartphone apps, driven by user demand and the developer environment, is apparent worldwide, and the geospatial domain needs appropriate technological responses to this trend. However, most smartphone apps offer only map and location-recognition services, and uses of geospatial content remain limited or at the prototype stage. In this study, an app is developed that extracts corner point features from geospatial imagery and links them to a database system. Corner extraction is based on the Harris algorithm, and all processing modules in the database server, application server, and client interface composing the app are designed and implemented with open-source software. Extracted corner points go through an LOD (Level of Detail) process to optimize their display on the screen, and an additional function superimposes the geospatial imagery on the digital map of the same area. This app is expected to be useful for automatic establishment of POIs (Points of Interest) and for point-based land change detection.
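
Since corner extraction is based on the Harris algorithm, a short Python/OpenCV sketch of Harris corner detection follows for orientation; the actual app is an Android client, so this stands in only for the algorithmic step, and the file name and thresholds are illustrative.

```python
# Minimal Python/OpenCV sketch of Harris corner extraction, the algorithm the app
# is reported to use; illustrative only, not the Android implementation.
import cv2
import numpy as np

def harris_corners(image_path, block_size=2, ksize=3, k=0.04, quality=0.01):
    """Return (row, col) coordinates where the Harris response exceeds a threshold."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    response = cv2.cornerHarris(np.float32(gray), block_size, ksize, k)
    threshold = quality * response.max()
    rows, cols = np.where(response > threshold)
    return list(zip(rows.tolist(), cols.tolist()))

# Example usage with an arbitrary satellite image tile on disk:
# corners = harris_corners("scene_tile.png")
# print(len(corners), "candidate corner points")
```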

Design and Implementation of High-dimensional Index Structure for the support of Concurrency Control (필터링에 기반한 고차원 색인구조의 동시성 제어기법의 설계 및 구현)

  • Lee, Yong-Ju;Chang, Jae-Woo;Kim, Hang-Young;Kim, Myung-Joon
    • The KIPS Transactions: Part D / v.10D no.1 / pp.1-12 / 2003
  • Many indexing schemes have recently been proposed for multimedia data such as images and video. However, recent database applications, for example data mining and multimedia databases, must support multi-user environments, and for indexing schemes to be useful in such environments a concurrency control algorithm is required. We therefore propose a concurrency control algorithm that can be applied to CBF (the cell-based filtering method), which uses cell signatures to alleviate the curse of dimensionality. In addition, we extend the SHORE storage system from the University of Wisconsin to handle high-dimensional data. The extended SHORE storage system provides conventional storage-manager functions, guarantees the integrity of high-dimensional data, and scales to large sets of feature vectors without requiring large main memory. Finally, we implement a web-based image retrieval system on top of the extended SHORE storage system; its key features are platform-independent access to high-dimensional data and efficient content-based queries. Lastly, we evaluate the average response time of point, range, and k-nearest-neighbor queries as the number of threads varies.
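
As a rough illustration of concurrency control over a signature-based filter, the sketch below guards a cell-signature map with a coarse lock; the paper's CBF-specific algorithm is finer-grained, so treat this only as a conceptual stand-in with invented names.

```python
# Hypothetical sketch of lock-based concurrency control around a signature-filter
# index; illustrates the general idea, not the paper's CBF-specific algorithm.
import threading

class ConcurrentSignatureIndex:
    """A cell-signature map guarded by a coarse lock (a deliberate simplification)."""
    def __init__(self):
        self._signatures = {}          # cell id -> signature bits
        self._lock = threading.RLock() # the paper refines locking below this granularity

    def insert(self, cell_id, signature):
        with self._lock:               # writers take the lock exclusively
            self._signatures[cell_id] = signature

    def filter_candidates(self, query_signature):
        with self._lock:               # readers also serialize here (simplification)
            return [cid for cid, sig in self._signatures.items()
                    if (sig & query_signature) == query_signature]

index = ConcurrentSignatureIndex()
index.insert("cell-17", 0b1011)
print(index.filter_candidates(0b0011))  # -> ['cell-17']
```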

Analysis and Service Quality Evaluation on NDSL Website (NDSL 웹사이트 분석 및 서비스 품질평가)

  • Lee, Ju-Hyun;Lee, Eung-Bong;Kim, Hwan-Min
    • Journal of Information Management / v.37 no.4 / pp.69-91 / 2006
  • The purpose of this study is to improve the effectiveness and quality of the NDSL web service by analyzing its problems and suggesting solutions, using an expert service quality evaluation from the users' point of view and a website quality evaluation with measurement tools applied to the whole NDSL site. For the website analysis, this study examined the completeness of the NDSL site, looked into problems that users can notice intuitively while using it, and evaluated searchability and usability, centering on the service quality items used for database quality. The analysis showed no major problems in general use, but closer inspection revealed several issues with loading rates, website completeness, responsiveness to users, protection of private information, metadata completeness, and website accessibility. The service quality evaluation likewise did not show fully satisfactory results for search methods, printing of search results, marked lists, and full-text-related items within searchability and usability; compared with other information organizations, however, the site shows a similar level of quality.

35-Year Research History of Cytotoxicity and Cancer: a Quantitative and Qualitative Analysis

  • Farghadani, Reyhaneh;Haerian, Batoul Sadat;Ebrahim, Nader Ale;Muniandy, Sekaran
    • Asian Pacific Journal of Cancer Prevention / v.17 no.7 / pp.3139-3145 / 2016
  • Cancer is the leading cause of morbidity and mortality worldwide, characterized by irregular cell growth. Cytotoxicity, or killing tumor cells that divide rapidly, is the basic function of chemotherapeutic drugs; however, these agents can also damage normal dividing cells, leading to adverse effects in the body. In view of the great advances in cancer therapy reported each year, we quantitatively and qualitatively evaluated the papers published between 1981 and December 2015, with a closer look at the highly cited papers (HCPs), for a better understanding of the literature related to cytotoxicity in cancer therapy. Online documents in the Web of Science (WOS) database were analyzed by publication year, number of citations, research area, source, language, document type, country, organization-enhanced field, and funding agency. A total of 3,473 publications relevant to the target keywords were found in the WOS database over the 35 years, and 86% of them (n=2,993) were published between 2000 and 2015. These papers had been cited 54,330 times, excluding self-citations, from 1981 to 2015. Of the 3,473 publications, 17 (3,557 citations) were the most frequently cited ones between 2005 and 2015. The topmost HCP, with 825 (23.2%) citations, was about generating a comprehensive preclinical database (CCLE). One third of the remaining HCPs focused on drug discovery through improving conventional therapeutic agents such as metformin and ginseng. Another 33% of the HCPs concerned engineered nanoparticles (NPs) such as polyamidoamine (PAMAM) dendritic polymers, PTX/SPIO-loaded PLGAs, and cell-derived NPs intended to increase drug effectiveness and decrease drug toxicity in cancer therapy. The remaining HCPs reported novel factors such as miR-205, Nrf2, and p27, suggesting their interference with the development of cancer in targeted cancer therapy. In conclusion, this analysis of 35 years of publications and HCPs on cytotoxicity in cancer provides opportunities for a better understanding of the extent of the topics published and may help future research in this area.

Text-mining Techniques for Metabolic Pathway Reconstruction (대사경로 재구축을 위한 텍스트 마이닝 기법)

  • Kwon, Hyuk-Ryul;Na, Jong-Hwa;Yoo, Jae-Soo;Cho, Wan-Sup
    • Journal of Korea Society of Industrial Information Systems / v.12 no.4 / pp.138-147 / 2007
  • A metabolic pathway is a series of chemical reactions occurring within a cell and can be used for drug development and for understanding life phenomena. Many biologists are trying to extract metabolic pathway information from the huge body of literature for their studies of metabolic circuit regulation. We propose a keyword- and pattern-based text-mining technique. The proposed technique uses a web robot to collect large numbers of papers and stores them in a local database. We use the Gene Ontology to increase the compound recognition rate and the NCBI tokenizer library to recognize useful information without breaking compound names apart. Furthermore, we obtain useful sentence patterns representing metabolic pathways from papers and the KEGG database. We extracted 66 patterns from 20,000 documents on glycosphingolipid species from KEGG, a representative metabolic database, and verified our system on nineteen compounds in the glycosphingolipid class. The results show a recall of 95.1%, a precision of 96.3%, and a processing time of 15 seconds. The proposed text-mining system is expected to be used for metabolic pathway reconstruction.
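
A hedged sketch of keyword/pattern-based relation extraction in the spirit of this abstract is shown below; the actual patterns, tokenizer, and ontology lookups in the paper differ, and the two regular expressions here are invented examples.

```python
# Illustrative sketch of keyword/pattern-based relation extraction for metabolic
# pathways; the patterns are invented examples, not the paper's 66 KEGG-derived ones.
import re

PATTERNS = [
    re.compile(r"(?P<substrate>[\w\-]+) is converted (?:in)?to (?P<product>[\w\-]+) by (?P<enzyme>[\w\-]+)", re.I),
    re.compile(r"(?P<enzyme>[\w\-]+) catalyzes the conversion of (?P<substrate>[\w\-]+) to (?P<product>[\w\-]+)", re.I),
]

def extract_reactions(sentences):
    """Return (substrate, enzyme, product) triples matched by the sentence patterns."""
    triples = []
    for sentence in sentences:
        for pattern in PATTERNS:
            m = pattern.search(sentence)
            if m:
                triples.append((m.group("substrate"), m.group("enzyme"), m.group("product")))
    return triples

sample = ["Glucose is converted to glucose-6-phosphate by hexokinase."]
print(extract_reactions(sample))  # -> [('Glucose', 'hexokinase', 'glucose-6-phosphate')]
```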


Bioinformatic Analysis of NLS (Nuclear Localization Signals)-containing Proteins from Mollusks (생물정보학을 이용한 연체동물의 NLS (Nuclear Localization Signals) 포함 단백질의 분석)

  • Lee, Yong-Seok;Kang, Se-Won;Jo, Yong-Hun;Gwak, Heui-Chul;Chae, Sung-Hwa;Choi, Sang-Haeng;Ahn, In-Young;Park, Hong-Seog;Han, Yeon-Soo;Kho, Weon-Gyu
    • The Korean Journal of Malacology / v.22 no.2 / pp.109-113 / 2006
  • The subcellular localization of proteins containing nuclear localization signals (NLS) has been well studied in many organisms, from invertebrates to vertebrates. However, no systematic analysis of NLS-containing proteins from mollusks has been reported. Here, we describe in silico screening for NLS-containing proteins using a mollusk database containing 22,138 amino acid sequences. To screen for putative proteins with NLS motifs, we used PredictNLS together with a Perl script. As a result, we found 266 proteins containing NLS sequences, about 1.2% of the entire protein set. KOG (eukaryotic orthologous groups) analysis could not predict the precise functions of the NLS-containing proteins, but it showed that they fall into several categories such as chromatin structure and dynamics, translation, ribosomal structure and biogenesis, and signal transduction mechanisms. In addition, we analyzed these sequences by molluskan class: few were found from the species that are the main subjects of phylogenetic studies, whereas cephalopods had the highest number of NLS-containing proteins. We therefore constructed a mollusk NLS database and added this information to the existing mollusk database through a web interface. Taken together, this information will be very useful for those who are, or will be, studying NLS-containing proteins from mollusks.
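
For orientation, the sketch below scans protein sequences for a classic monopartite NLS-like motif with a simple regular expression; the paper used PredictNLS plus a Perl script, so the pattern, record names, and sequences here are illustrative only.

```python
# Toy sketch of NLS-motif screening over protein sequences; the paper used PredictNLS
# plus a Perl script, so this regex (a classic K(K/R)X(K/R) basic cluster) is
# only illustrative.
import re

MONOPARTITE_NLS = re.compile(r"K[KR].[KR]")  # SV40-like monopartite basic cluster

def screen_proteins(fasta_records):
    """fasta_records: {protein_id: sequence}; returns ids with an NLS-like hit."""
    hits = {}
    for pid, seq in fasta_records.items():
        m = MONOPARTITE_NLS.search(seq.upper())
        if m:
            hits[pid] = (m.start(), m.group())
    return hits

records = {"prot_001": "MSTKKRKLEDAG", "prot_002": "MAAQLLVGS"}
print(screen_proteins(records))  # -> {'prot_001': (3, 'KKRK')}
```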


Development of Sentiment Analysis Model for the hot topic detection of online stock forums (온라인 주식 포럼의 핫토픽 탐지를 위한 감성분석 모형의 개발)

  • Hong, Taeho;Lee, Taewon;Li, Jingjing
    • Journal of Intelligence and Information Systems / v.22 no.1 / pp.187-204 / 2016
  • Document classification based on emotional polarity has become a welcome emerging task owing to the great explosion of data on the Web. In the big-data age, there are too many information sources to consult when making decisions. For example, when considering travel to a city, a person may search reviews through a search engine such as Google or social networking services (SNSs) such as blogs, Twitter, and Facebook; the emotional polarity of positive and negative reviews helps a user decide whether or not to make the trip. Sentiment analysis of customer reviews has become an important research topic as data mining technology is widely adopted for text mining of the Web. Sentiment analysis classifies documents with machine learning techniques such as decision trees, neural networks, and support vector machines (SVMs), and it is used to determine the attitude, position, and sensibility of people who write articles on various topics published on the Web. Regardless of the polarity of customer reviews, emotional reviews are very helpful material for analyzing customers' opinions, and sentiment analysis helps us understand what customers really want almost instantly through automated text mining. It applies text-mining techniques to text on the Web to extract its subjective information and to determine the attitude or position of the person who wrote the article and expressed an opinion about a particular topic. In this study, we developed a model that selects hot topics from user posts on China's online stock forum by using the k-means algorithm and a self-organizing map (SOM). In addition, we developed a detection model to predict hot topics by using machine learning techniques such as the logit model, the decision tree, and the SVM. We employed sentiment analysis to develop our model for the selection and detection of hot topics from China's online stock forum; the sentiment analysis computes a sentiment value for a document by matching and classifying its terms against a polarity sentiment dictionary (positive or negative). The online stock forum is an attractive site because of its information about stock investment: users post numerous texts about stock movements, analyzing the market in light of government policy announcements, market reports, reports from economic research institutes, and even rumors. We divided the forum's topics into 21 categories for sentiment analysis, and 144 topics were selected among these categories. The posts were crawled to build a positive and negative text database, and we ultimately obtained 21,141 posts on 88 topics after preprocessing text from March 2013 to February 2015. An interest index was defined to select the hot topics, and the k-means algorithm and SOM produced equivalent results on these data. We developed decision tree models to detect hot topics with three algorithms, CHAID, CART, and C4.5; the results of CHAID were subpar compared with the others. We also employed an SVM to detect hot topics from the negative data; the SVM models were trained with the radial basis function (RBF) kernel, tuned by grid search. Detecting hot topics with sentiment analysis gives investors the latest trends and hot topics in the stock forum so that they no longer need to search the vast amount of information on the Web. Our proposed model is also helpful for rapidly gauging customers' signals or attitudes toward government policy and firms' products and services.
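
As an illustration of the RBF-kernel SVM grid search mentioned in the abstract, the sketch below tunes C and gamma with scikit-learn on synthetic stand-in features; the real model used forum-derived sentiment and interest features, which are not reproduced here.

```python
# Illustrative scikit-learn sketch of an RBF-kernel SVM tuned by grid search;
# features, labels, and the parameter grid are synthetic stand-ins, not the paper's.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                  # stand-in sentiment/interest features per topic
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # 1 = hot topic, 0 = not (synthetic labels)

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5)
search.fit(X, y)

print("best params:", search.best_params_)
print("cross-validated accuracy:", round(search.best_score_, 3))
```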