Research on endangered amphibians in Korea from the 1980s to the early 2000s focused mainly on the life cycles and distribution of species. Although a relatively diverse range of studies has since been conducted on these species, work on habitat prediction, analysis, change, and management remains insufficient. The web service that provides biota information with location data in Korea is operated by the National Bio Resource Center under the Ministry of Environment, but it contains no information on endangered species, and even its information on common species is very scant. For this research, we built a database of location records for the Narrow-mouth frog, an endangered species, by combining literature review with field surveys, and established a system using new, open-platform technologies that non-IT personnel can use easily. The system is divided into administrator and user functions, and user registration and authentication prevent indiscriminate sharing of the information. The system shows authorized users the distance between their current location and recorded Narrow-mouth frog locations, and, considering the ecological characteristics of the species, marks a 500 m radius around each record to indicate its habitat range. The system is expected to support the legal process of modifying existing protected areas and designating new ones, and practical mitigation measures could be derived by applying it to natural-environment reviews of development plans. In addition, the deployed system has the advantage that it can be applied to a wide variety of endangered species simply by changing the data entered.
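The distance display and the 500 m habitat radius described above amount to a great-circle distance check between the user's position and each record. A minimal sketch, assuming WGS84 coordinates and using the standard haversine formula (the function names and sample coordinates are illustrative, not from the actual system):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def within_habitat(user, record, radius_m=500.0):
    """True if the user's position falls inside the marked habitat radius."""
    return haversine_m(*user, *record) <= radius_m
```

A point roughly 0.001 degrees of latitude away (about 111 m) would fall inside the 500 m radius, while one 0.01 degrees away (about 1.1 km) would not.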
To overcome the weaknesses of image retrieval systems based on existing ontologies and on simply structured distributed image databases, HERMES was proposed to guarantee the autonomy of diverse image providers and to support semantics-based image retrieval. However, that framework did not address the degradation in performance and scalability that occurs when many users connect to the broker server simultaneously. In this paper, to provide a consistent level of service without performance degradation when numerous users connect at the same time, multiple broker servers are installed; the execution time of each internal broker component is measured through a monitoring system, stored, and used to rank the brokers in a Broker Ranking Table. Queries entered through the user interface are then dispersed across several servers with reference to this table, and a load-balancing system that improves performance reliability is proposed. Experiments show that the proposed scheduling technique is faster than existing techniques.
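The ranking-table idea above can be sketched as follows: record each broker's measured component times, rank brokers by average time, and dispatch each new query to the best-ranked one. The class and method names are assumptions for illustration, not the paper's actual implementation:

```python
class BrokerRankingTable:
    """Ranks broker servers by their measured average execution time."""

    def __init__(self):
        self.times = {}  # broker id -> list of measured execution times (seconds)

    def record(self, broker, elapsed):
        """Store one monitored execution time for a broker component."""
        self.times.setdefault(broker, []).append(elapsed)

    def ranking(self):
        """Brokers ordered fastest average first."""
        return sorted(self.times, key=lambda b: sum(self.times[b]) / len(self.times[b]))

    def dispatch(self, query):
        """Send the query to the currently best-ranked broker."""
        best = self.ranking()[0]
        return best, query

table = BrokerRankingTable()
table.record("broker-1", 0.42)
table.record("broker-2", 0.15)
table.record("broker-1", 0.38)
```

With these measurements, `broker-2` (average 0.15 s) ranks ahead of `broker-1` (average 0.40 s), so the next query is dispatched to it.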
It was essential to develop DB-type teaching and learning materials for geography instruction using ICT methods. DB-type materials were considered an alternative for solving the problems of web-based geography instruction. Accordingly, in this study, a geography image DB program was developed, and on its basis a CD-ROM called GEO-DB, which functions as an electronic dictionary of geography images for geography teaching and learning, was produced. The GEO-DB comprises 3,060 geography images collected by teachers and learners and was designed to be simple for both groups to use. In particular, a portfolio function was included in the GEO-DB, focused on supporting teachers' instructional design and developing learners' self-directed learning ability. Teachers and learners who used the GEO-DB assessed that, because of its ease of use, speed of reference, and unlimited extensibility, it could broaden the use of ICT methods and contribute to the development of geography learning ability and to changes in attitudes toward learning geography.
In information and communication technology, a worldwide shift from the internet web to smartphone apps, driven by user demand and the developer environment, is apparent, and the geospatial domain needs appropriate technological responses to this trend. However, most smartphone apps in this area offer map and location-recognition services, and the use of geospatial content remains limited or at the prototype stage. In this study, an app for extracting corner-point features from geospatial imagery and linking them to a database system is developed. Corner extraction is based on the Harris algorithm, and all processing modules in the database server, application server, and client interface composing the app are designed and implemented with open-source software. A level-of-detail (LOD) process is applied to the extracted corner points to optimize their rendering on the display panel. As an additional useful function, geospatial imagery can be superimposed on the digital map of the same area. This app is expected to be useful for the automatic establishment of points of interest (POI) and for point-based land-change detection.
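The Harris algorithm named above scores each pixel by the local gradient structure: high responses appear where the image changes in two directions at once. A minimal NumPy sketch under simplifying assumptions (central-difference gradients, a plain 3x3 box window instead of a Gaussian; production apps typically use a library routine such as OpenCV's `cornerHarris`):

```python
import numpy as np

def harris_response(img, k=0.04):
    """Minimal Harris corner response for a 2-D grayscale array."""
    Iy, Ix = np.gradient(img.astype(float))       # image gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy     # products of gradients

    def box3(a):
        # 3x3 box sum via shifted copies of an edge-padded array.
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box3(Ixx), box3(Iyy), box3(Ixy)
    det = Sxx * Syy - Sxy * Sxy                   # det of structure tensor
    trace = Sxx + Syy
    return det - k * trace ** 2                   # high values indicate corners

# A white square on a black background: its corners respond strongly,
# while flat regions give zero and straight edges give negative values.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
```

The response at the square's corner pixel (5, 5) is strongly positive, while the flat interior at (10, 10) is exactly zero.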
Recently, many indexing schemes have been proposed for multimedia data such as images and video. However, recent database applications, for example data mining and multimedia databases, must support multi-user environments, and for indexing schemes to be useful there, a concurrency control algorithm is required. We therefore propose a concurrency control algorithm for the cell-based filtering (CBF) method, which uses cell signatures to alleviate the curse of dimensionality. In addition, we extend the SHORE storage system from the University of Wisconsin to handle high-dimensional data. The extended SHORE storage system provides conventional storage-manager functions, guarantees the integrity of high-dimensional data, and accommodates large numbers of feature vectors without requiring large amounts of main memory. Finally, we implement a web-based image retrieval system on top of the extended SHORE storage system; its key features are platform-independent access to high-dimensional data and efficient content-based queries. Lastly, we evaluate the average response time of point, range, and k-nearest-neighbor queries as the number of threads varies.
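The multi-threaded evaluation described above can be sketched as a timing harness: many threads issue queries against a shared index under a concurrency control primitive, and the average response time is reported. The in-memory dictionary and single lock here are illustrative stand-ins for the CBF index and its concurrency control algorithm, not the paper's implementation:

```python
import threading
import time
import statistics

lock = threading.Lock()  # stand-in for the index's concurrency control
index = {i: (i * 0.1, i * 0.2) for i in range(1000)}  # toy feature "index"
latencies = []

def point_query(key):
    """One point query; its wall-clock latency is recorded."""
    start = time.perf_counter()
    with lock:                       # serialize access to the shared index
        _ = index.get(key)
    elapsed = time.perf_counter() - start
    with lock:
        latencies.append(elapsed)

threads = [threading.Thread(target=point_query, args=(k,)) for k in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
avg = statistics.mean(latencies)     # average response time over 50 threads
```

Varying the number of threads and repeating the measurement gives the kind of response-time curve the evaluation reports.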
The purpose of this study is to improve the effectiveness and quality of a web service by analyzing its problems and suggesting solutions, based on an expert service-quality evaluation from the users' point of view and a tool-based quality evaluation of the entire NDSL website. For the website analysis, this study examined the completeness of the NDSL site, identified problems that users can perceive intuitively while using it, and evaluated searchability and usability for web-based service quality, focusing on the service quality of database-quality items. The analysis found no major problems in ordinary use, but closer inspection revealed several issues with loading rates, website completeness, responsiveness to users, protection of private information, metadata completeness, and website accessibility. The evaluation of website service quality did not yield fully satisfactory results for search methods, the printing of search results, the mark list, and full-text-related items in the areas of searchability and usability. However, compared with the results of other information organizations, the site shows a similar level of quality.
Cancer, characterized by irregular cell growth, is a leading cause of morbidity and mortality worldwide. The basic function of chemotherapeutic drugs is cytotoxicity, the killing of rapidly dividing tumor cells; however, these agents can also damage normal dividing cells, leading to adverse effects in the body. In view of the great advances in cancer therapy reported each year, we quantitatively and qualitatively evaluated the papers published between 1981 and December 2015, with a closer look at the highly cited papers (HCPs), for a better understanding of the literature on cytotoxicity in cancer therapy. Online documents in the Web of Science (WOS) database were analyzed by publication year, number of citations, research area, source, language, document type, country, organization (enhanced), and funding agency. A total of 3,473 publications relevant to the target keywords were found in the WOS database over 35 years, and 86% of them (n=2,993) were published between 2000 and 2015. These papers had been cited 54,330 times, excluding self-citations, from 1981 to 2015. Of the 3,473 publications, 17 (3,557 citations) were the most frequently cited between 2005 and 2015. The topmost HCP, with 825 (23.2%) citations, concerned the generation of a comprehensive preclinical database (CCLE). One third of the remaining HCPs focused on drug discovery through improving conventional therapeutic agents such as metformin and ginseng. Another 33% of the HCPs concerned engineered nanoparticles (NPs), such as polyamidoamine (PAMAM) dendritic polymers, PTX/SPIO-loaded PLGAs, and cell-derived NPs, intended to increase drug effectiveness and decrease drug toxicity in cancer therapy. The remaining HCPs reported novel factors such as miR-205, Nrf2, and p27, suggesting their involvement in cancer development and their relevance to targeted cancer therapy.
In conclusion, this analysis of 35 years of publications and HCPs on cytotoxicity in cancer provides an opportunity to better understand the range of topics published and may help guide future research in this area.
Journal of Korea Society of Industrial Information Systems, v.12, no.4, pp.138-147, 2007
A metabolic pathway is a series of chemical reactions occurring within a cell, and pathway information can be used for drug development and for understanding life phenomena. Many biologists try to extract metabolic-pathway information from a huge body of literature for their studies of metabolic-circuit regulation. We propose a text-mining technique based on keywords and patterns. The proposed technique uses a web robot to collect papers in bulk and stores them in a local database. We use the Gene Ontology to increase the compound-recognition rate and the NCBI tokenizer library to recognize useful information without breaking compound names apart. Furthermore, we derive useful sentence patterns representing metabolic pathways from papers and from the KEGG database. We extracted 66 patterns from 20,000 documents on glycosphingolipid species in KEGG, a representative metabolic database, and verified our system on nineteen compounds among the glycosphingolipid species. The results show a recall of 95.1%, a precision of 96.3%, and a processing time of 15 seconds. The proposed text-mining system is expected to be useful for metabolic-pathway reconstruction.
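The recall and precision figures reported above follow the standard information-extraction definitions: precision is the fraction of extracted items that are correct, and recall is the fraction of true items that were extracted. A minimal illustration with made-up counts (not the paper's data):

```python
def precision_recall(tp, fp, fn):
    """Standard extraction metrics from true/false positive and false negative counts."""
    precision = tp / (tp + fp)  # correct extractions / all extractions
    recall = tp / (tp + fn)     # correct extractions / all true items
    return precision, recall

# Hypothetical counts: 96 correct extractions, 4 spurious, 5 missed.
p, r = precision_recall(tp=96, fp=4, fn=5)
```

With these counts, precision is 96/100 = 0.96 and recall is 96/101, roughly 0.95, in the same range as the figures the paper reports.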
Subcellular localization of proteins containing nuclear localization signals (NLS) has been well studied in many organisms, from invertebrates to vertebrates. However, no systematic analysis of NLS-containing proteins from mollusks has been reported. Here, we describe an in silico screen for NLS-containing proteins using a mollusk database containing 22,138 amino acid sequences. To screen for putative proteins with NLS motifs, we used both PredictNLS and a Perl script. As a result, we found 266 proteins containing NLS sequences, about 1.2% of all the proteins. On the basis of KOG (eukaryotic orthologous groups) analysis, we could not predict the precise functions of the NLS-containing proteins, but we found that they belong to several functional categories, such as chromatin structure and dynamics; translation, ribosomal structure, and biogenesis; and signal transduction mechanisms. In addition, we analyzed these sequences by mollusk class. We found few from the species that are the main subjects of phylogenetic studies; in contrast, cephalopods had the highest number of NLS-containing proteins. We therefore constructed a mollusk NLS database and added this information to the mollusk database through a web interface. Taken together, this information will be very useful for those who study, or will study, NLS-containing proteins in mollusks.
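A script-based NLS screen of the kind described above amounts to matching motif patterns against each protein sequence. A toy version using a deliberately simplified pattern, a run of four or more basic residues (K/R), which is only a crude stand-in for the curated motif sets real predictors such as PredictNLS use; the sequences are invented for illustration:

```python
import re

# Simplified monopartite NLS pattern: a run of 4+ basic residues (lysine K
# or arginine R). Real predictors use curated, experimentally derived motifs.
NLS_PATTERN = re.compile(r"[KR]{4,}")

def has_putative_nls(seq):
    """True if the protein sequence contains the simplified basic cluster."""
    return bool(NLS_PATTERN.search(seq))

proteins = {
    "sv40_like": "MAPKKKRKVEDP",  # contains the KKKRK cluster -> flagged
    "no_signal": "MSTAGLLQDE",    # no basic cluster -> not flagged
}
hits = [name for name, seq in proteins.items() if has_putative_nls(seq)]
```

Running such a screen over all 22,138 sequences and counting the hits gives the kind of proportion (about 1.2%) the study reports, though the exact motif set used determines the result.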
Document classification based on emotional polarity has become a welcome emerging task owing to the great explosion of data on the Web. In the big-data age, there are too many information sources to consult when making decisions. For example, when considering travel to a city, a person may search for reviews through a search engine such as Google or through social networking services (SNSs) such as blogs, Twitter, and Facebook. The emotional polarity of positive and negative reviews helps a user decide whether or not to make the trip. Sentiment analysis of customer reviews has become an important research topic as data-mining technology is widely adopted for text mining of the Web. Sentiment analysis has been used to classify documents through machine-learning techniques such as decision trees, neural networks, and support vector machines (SVMs), and it is used to determine the attitudes, positions, and sensibilities of people who write articles about various topics published on the Web. Regardless of their polarity, emotional reviews are very helpful material for analyzing customers' opinions. Sentiment analysis helps with understanding what customers really want, instantly, through automated text-mining techniques: it applies text mining to text on the Web to extract the subjective information it contains and to determine the attitude or position of the person who wrote an article expressing an opinion about a particular topic. In this study, we developed a model that selects hot topics from user posts on China's online stock forum by using the k-means algorithm and the self-organizing map (SOM). In addition, we developed a detection model that predicts hot topics by using machine-learning techniques such as logit, the decision tree, and the SVM.
We employed sentiment analysis to develop our model for selecting and detecting hot topics on China's online stock forum. The sentiment analysis computes a sentiment value for each document by contrasting and classifying its words against a polarity sentiment dictionary (positive or negative). The online stock forum is an attractive site because of its information about stock investment: users post numerous texts about stock movements, analyzing the market in light of government policy announcements, market reports, reports from economic research institutes, and even rumors. We divided the forum's topics into 21 categories for the sentiment analysis, and 144 topics were selected among these categories. The posts were crawled to build a database of positive and negative text; after preprocessing the text from March 2013 to February 2015, we ultimately obtained 21,141 posts on 88 topics. An interest index was defined to select the hot topics, and the k-means algorithm and the SOM produced equivalent results on these data. We developed decision-tree models to detect hot topics with three algorithms, CHAID, CART, and C4.5; the results of CHAID were subpar compared with the others. We also employed the SVM to detect hot topics from the negative data, training the SVM models with the radial basis function (RBF) kernel tuned by grid search. Detecting hot topics with sentiment analysis provides investors with the latest trends and hot topics in the stock forum, so that they no longer need to search the vast amounts of information on the Web. Our proposed model is also helpful for rapidly gauging customers' signals or attitudes toward government policy and toward firms' products and services.
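The dictionary-based sentiment value described above can be sketched as a signed count over a polarity lexicon. The tiny dictionary and posts below are illustrative only, not the study's actual lexicon or data:

```python
# Toy polarity dictionary (the study used a full sentiment lexicon).
POSITIVE = {"rise", "gain", "bullish", "profit"}
NEGATIVE = {"fall", "loss", "bearish", "risk"}

def sentiment_value(text):
    """Positive-word count minus negative-word count for one document."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

posts = [
    "bullish report predicts profit gain",
    "risk of loss as shares fall",
]
scores = [sentiment_value(p) for p in posts]
```

The first post scores +3 and the second -3; aggregating such per-document values over a topic's posts is one simple way to feed an interest index of the kind the study defines.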