• Title/Summary/Keyword: datasets


Application of Geo-Segment Anything Model (SAM) Scheme to Water Body Segmentation: An Experiment Study Using CAS500-1 Images (수체 추출을 위한 Geo-SAM 기법의 응용: 국토위성영상 적용 실험)

  • Hayoung Lee;Kwangseob Kim;Kiwon Lee
    • Korean Journal of Remote Sensing / v.40 no.4 / pp.343-350 / 2024
  • Since the release of Meta's Segment Anything Model (SAM), a large-scale vision transformer-based model with rapid image segmentation capabilities, several studies have applied this technology in various fields. In this study, we investigated the applicability of SAM to water body detection and extraction using the QGIS Geo-SAM plugin, which enables SAM to be used with satellite imagery. The experimental data consisted of Compact Advanced Satellite 500 (CAS500)-1 images. The results obtained by applying SAM to these data were compared with manually digitized water objects, OpenStreetMap (OSM) data, and water body data from the National Geographic Information Institute (NGII)-based hydrological digital map. The mean Intersection over Union (mIoU) calculated between all features extracted by SAM and these three comparison datasets was 0.7490, 0.5905, and 0.4921, respectively. For features that appeared or were extracted in all datasets, the results were 0.9189, 0.8779, and 0.7715, respectively. Based on an analysis of the spatial consistency between the SAM results and the comparison data, SAM showed limitations in detecting small-scale or poorly defined streams but provided meaningful segmentation results for water body classification.
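
A minimal sketch of how the per-feature IoU and the mean IoU reported above can be computed from binary masks; it uses only NumPy, and the toy mask arrays are hypothetical, not the study's data:

```python
import numpy as np

def iou(pred_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    """Intersection over Union between two binary masks."""
    pred = pred_mask.astype(bool)
    ref = ref_mask.astype(bool)
    union = np.logical_or(pred, ref).sum()
    if union == 0:
        return 0.0
    return float(np.logical_and(pred, ref).sum() / union)

def mean_iou(pairs) -> float:
    """Mean IoU over a list of (predicted, reference) mask pairs."""
    return float(np.mean([iou(p, r) for p, r in pairs]))

# Toy 2x2 example (hypothetical): intersection = 1 pixel, union = 2 pixels.
sam_mask = np.array([[1, 1], [0, 0]])
osm_mask = np.array([[1, 0], [0, 0]])
print(iou(sam_mask, osm_mask))  # 0.5
```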

Geospatial Data Pipeline to Study the Health Effects of Environments -Limitations and Solutions- (환경의 건강 영향 연구를 위한 공간지리정보 데이터 파이프라인 -자료활용의 제한점과 극복방안-)

  • Won Kyung Kim;Goeun Jung;Dongook Son;Sun-Young Kim
    • Journal of the Korean Association of Geographic Information Studies / v.27 no.3 / pp.60-75 / 2024
  • Research on the health effects of environmental factors must account for multiple interacting factors, including environmental, socio-demographic, economic, and traffic aspects. Despite the dramatic increase in data availability and the technological advances in data storage and processing, there are still significant challenges in constructing databases that link these contributing factors and in taking an integrated approach to environmental health research. This study emphasizes the necessity of establishing a geospatial data pipeline for analyzing the impact of environmental factors on health, and highlights the difficulties and solutions related to constructing and using a geospatial database. Key challenges include diverse data sources and formats, different spatio-temporal data structures, and coordinate system inconsistencies over time within the same geospatial data. To address these issues, a data pipeline with pre-processing and post-processing steps was constructed, resulting in refined datasets that could be used for calculating geographic variables. In addition, an AWS-based relational database and shared platform were established to provide an efficient environment for data storage and analysis. Guidelines for each step of the process, including data management and analysis, were developed so that future researchers can use the data pipeline effectively.
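
The coordinate-system harmonization that such a pre-processing step typically performs can be sketched with GeoPandas; the file names, layers, and target CRS below are hypothetical assumptions for illustration, not the datasets used in the study:

```python
import geopandas as gpd

# Hypothetical input layers collected from different sources and years.
paths = ["air_quality_zones.shp", "road_network.shp", "census_blocks.shp"]
target_crs = "EPSG:5179"  # a common projected CRS for Korea; an assumption here

layers = []
for path in paths:
    gdf = gpd.read_file(path)
    if gdf.crs is None:
        # A missing CRS must be assigned from source metadata before reprojection.
        raise ValueError(f"{path} has no CRS defined")
    layers.append(gdf.to_crs(target_crs))

# After harmonization, spatial joins and geographic variables can be computed consistently.
joined = gpd.sjoin(layers[2], layers[0], how="left", predicate="intersects")
```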

Sex-biased differences in the correlation between epithelial-to-mesenchymal transition-associated genes in cancer cell lines

  • Sun Young Kim;Seungeun Lee;Eunhye Lee;Hyesol Lim;Ji Yoon Shin;Joohee Jung;Sang Geon Kim;Aree Moon
    • Oncology Letters / v.18 no.6 / pp.6852-6868 / 2019
  • There is a wide disparity in the incidence, malignancy and mortality of different types of cancer between the sexes. The sex-specificity of cancer seems to be dependent on the type of cancer. Cancer incidence and mortality have been demonstrated as sex-specific in a number of different types of cancer, such as liver cancer, whereas sex-specificity is not noticeable in certain other types of cancer, including colon and lung cancer. The present study aimed to elucidate the molecular basis for sex-biased gene expression in cancer. The mRNA expression of epithelial-to-mesenchymal transition-associated genes, including E-cadherin (also termed CDH1), vimentin (VIM), discoidin domain receptor 1 (DDR1) and zinc finger E-box binding homeobox 1 (ZEB1), was investigated in female- and male-derived cancer cell lines by reverse transcription (RT)-PCR and Broad-Novartis Cancer Cell Line Encyclopedia (CCLE) database analysis. A negative correlation was observed between DDR1 and ZEB1 only in the female-derived cancer cell lines via RT-PCR analysis. A negative correlation between the DDR1 index (defined as the logarithmic value of DDR1 divided by ZEB1, based on the mRNA data from the RT-PCR analysis) and an invasive phenotype was observed in cancer cell lines in a sex-specific manner. Analysis of the CCLE database demonstrated that DDR1 and ZEB1, which are already known to be sex-biased, were negatively correlated in female-derived liver cancer cell lines, but not in male-derived liver cancer cell lines. In contrast, cell lines of colon and lung cancer did not reveal any sex-dependent difference in the correlation between DDR1 and ZEB1. Kaplan-Meier survival curves using transcriptomic datasets such as the Gene Expression Omnibus, European Genome-phenome Archive and The Cancer Genome Atlas databases suggested a sex-biased difference in the correlation between DDR1 expression pattern and overall survival in patients with liver cancer. The results of the present study indicate that sex factors may affect the regulation of gene expression, contributing to the sex-biased progression of different types of cancer, particularly liver cancer. Overall, these findings suggest that analyses of the correlation between DDR1 and ZEB1 may prove useful when investigating sex-biased cancers.
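
A small sketch of the DDR1 index defined above (the logarithm of DDR1 expression divided by ZEB1 expression) and of a sex-stratified correlation, using pandas; the expression values and column names are hypothetical, not data from the paper:

```python
import numpy as np
import pandas as pd

# Hypothetical expression table: one row per cell line.
df = pd.DataFrame({
    "sex":  ["F", "F", "F", "M", "M", "M"],
    "DDR1": [8.2, 5.1, 9.4, 6.0, 6.5, 5.8],
    "ZEB1": [1.1, 3.9, 0.8, 2.2, 2.0, 2.5],
})

# DDR1 index = log(DDR1 / ZEB1), following the definition in the abstract.
df["DDR1_index"] = np.log(df["DDR1"] / df["ZEB1"])

# Sex-stratified Pearson correlation between DDR1 and ZEB1 expression.
for sex, group in df.groupby("sex"):
    r = group["DDR1"].corr(group["ZEB1"])
    print(sex, round(r, 3))
```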

Improving minority prediction performance of support vector machine for imbalanced text data via feature selection and SMOTE (단어선택과 SMOTE 알고리즘을 이용한 불균형 텍스트 데이터의 소수 범주 예측성능 향상 기법)

  • Jongchan Kim;Seong Jun Chang;Won Son
    • The Korean Journal of Applied Statistics / v.37 no.4 / pp.395-410 / 2024
  • Text data usually consists of a wide variety of unique words; even in standard text data, it is common to find tens of thousands of different words. In text data analysis, each unique word is usually treated as a variable, so text data can be regarded as a dataset with a large number of variables. On the other hand, in text data classification we often encounter class label imbalance problems, and in cases of substantial imbalance the performance of conventional classification models can be severely degraded. To improve the classification performance of support vector machines (SVM) for imbalanced data, algorithms such as the Synthetic Minority Over-sampling Technique (SMOTE) can be used. The SMOTE algorithm synthetically generates new observations for the minority class based on the k-Nearest Neighbors (kNN) algorithm. However, in datasets with a large number of variables, such as text data, errors may accumulate, which can impact the performance of the kNN algorithm. In this study, we propose a method for enhancing prediction performance for the minority class of imbalanced text data. Our approach employs variable selection to generate new synthetic observations in a reduced space, thereby improving the overall classification performance of SVM.
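
A minimal sketch of the kind of pipeline described above: select informative terms first, then apply SMOTE in the reduced space before fitting an SVM. It uses scikit-learn and imbalanced-learn; the chi-squared selection criterion and the parameter values are illustrative stand-ins, since the paper's exact word-selection method is not reproduced here:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import LinearSVC
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline

# docs_train / labels_train: raw documents and imbalanced labels (hypothetical inputs).
pipeline = Pipeline(steps=[
    ("tfidf", TfidfVectorizer()),
    ("select", SelectKBest(chi2, k=1000)),              # shrink tens of thousands of terms
    ("smote", SMOTE(k_neighbors=5, random_state=0)),    # oversample minority in the reduced space
    ("svm", LinearSVC()),
])
# pipeline.fit(docs_train, labels_train)
# predictions = pipeline.predict(docs_test)
```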

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.205-225 / 2018
  • A Convolutional Neural Network (ConvNet) is a powerful class of deep neural network that can analyze and learn hierarchies of visual features. The first such network (the Neocognitron) was introduced in the 1980s, but at that time neural networks were not broadly used in industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, and that breakthrough revived interest in neural networks. The success of Convolutional Neural Networks rests on two main factors: the emergence of advanced hardware (GPUs) for sufficient parallel computation, and the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, gathering a large-scale dataset to train a ConvNet is difficult and requires a lot of effort; moreover, even with a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be addressed with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning approaches: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (e.g., trained on ImageNet) computes feed-forward activations of the image and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are then fine-tuned with backpropagation. In this paper, we focus only on using multiple ConvNet layers as a fixed feature extractor. However, applying features of high dimensional complexity extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Overall, our primary pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and the activation features from the three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple-ConvNet-layer representation, since it carries more information about the image; when the three fully connected layer features are concatenated, the resulting image representation has 9,192 (4,096+4,096+1,000) dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in a third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning can be improved.
To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple-ConvNet-layer representations against single-ConvNet-layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrate the importance of feature selection for the multiple-ConvNet-layer representation. Moreover, our proposed approach achieved 75.6% accuracy compared to the 73.9% achieved by the FC7 layer on the Caltech-256 dataset, 73.1% compared to the 69.2% achieved by the FC8 layer on the VOC07 dataset, and 52.2% compared to the 48.7% achieved by the FC7 layer on the SUN397 dataset. We also showed that our approach achieves superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on the Caltech-256, VOC07, and SUN397 datasets, respectively, compared to existing work.
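
A sketch of the fixed-feature-extractor step described above, using torchvision's pretrained AlexNet to collect activations from the three fully connected layers, concatenating them into a 9,192-dimensional vector, and reducing it with PCA. The hook-based extraction and the 512-component PCA are assumptions for illustration, not the authors' exact code:

```python
import torch
import torchvision.models as models
from sklearn.decomposition import PCA

# Pretrained AlexNet (recent torchvision API); evaluation mode for feature extraction.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

activations = {}
def hook(name):
    def _hook(module, inputs, output):
        activations[name] = output.detach()
    return _hook

# FC6 and FC7 are classifier indices 1 and 4; FC8 is the final layer at index 6.
model.classifier[1].register_forward_hook(hook("fc6"))
model.classifier[4].register_forward_hook(hook("fc7"))
model.classifier[6].register_forward_hook(hook("fc8"))

def extract(batch: torch.Tensor) -> torch.Tensor:
    """Return the concatenated 4096 + 4096 + 1000 = 9192-dim representation."""
    with torch.no_grad():
        model(batch)
    return torch.cat([activations["fc6"], activations["fc7"], activations["fc8"]], dim=1)

# features: an (N, 9192) matrix built by calling extract() over the dataset (hypothetical).
# pca = PCA(n_components=512).fit(features.numpy())
# reduced = pca.transform(features.numpy())   # salient features fed to the classifier
```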

A Comparative Study on the Effective Deep Learning for Fingerprint Recognition with Scar and Wrinkle (상처와 주름이 있는 지문 판별에 효율적인 심층 학습 비교연구)

  • Kim, JunSeob;Rim, BeanBonyka;Sung, Nak-Jun;Hong, Min
    • Journal of Internet Computing and Services / v.21 no.4 / pp.17-23 / 2020
  • Biometric information, which measures human physical characteristics, has attracted great attention as a highly reliable security technology because it cannot be stolen or lost. Among biometric traits, fingerprints are mainly used in fields such as identity verification and identification. If a fingerprint image has a problem such as a wound, wrinkle, or moisture that makes authentication difficult, a fingerprint expert can identify the problem directly in a preprocessing step and apply an image processing algorithm appropriate to that problem. By implementing artificial intelligence software that distinguishes fingerprint images containing cuts and wrinkles, it becomes easy to check whether such defects are present, and by selecting an appropriate algorithm the fingerprint image can easily be improved. In this study, we built a fingerprint database of 17,080 images by acquiring all fingerprints of 1,010 students from the Royal University of Cambodia, 600 fingerprints from the Sokoto open dataset, and the fingerprints of 98 Korean students. Criteria were established to determine whether the images in the database contained injuries or wrinkles, and the data were validated by experts. The training and test datasets consisted of the Cambodian and Sokoto data, split at a ratio of 8:2, and the data of the 98 Korean students were used as the validation set. Using the constructed dataset, five CNN-based architectures (a classic CNN, AlexNet, VGG-16, ResNet50, and YOLOv3) were implemented, and a study was conducted to find the model that performed best. Among the five architectures, ResNet50 showed the best performance with an accuracy of 81.51%.
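
A minimal sketch of adapting a pretrained ResNet50 to a two-class task like the one above (damaged vs. clean fingerprint). The layer replacement and optimizer settings are the standard transfer-learning recipe rather than the authors' exact configuration, and data loading is omitted:

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Pretrained ResNet50 with its final layer replaced for 2 classes (damaged / clean).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step over a batch of fingerprint images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```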

Performance Analysis of Top-K High Utility Pattern Mining Methods (상위 K 하이 유틸리티 패턴 마이닝 기법 성능분석)

  • Ryang, Heungmo;Yun, Unil;Kim, Chulhong
    • Journal of Internet Computing and Services / v.16 no.6 / pp.89-95 / 2015
  • Traditional frequent pattern mining discovers patterns whose frequency is no smaller than a user-defined minimum threshold in a database. In this framework, a threshold that is too low may extract an enormous number of patterns, which makes result analysis difficult, while a threshold that is too high may generate no valid patterns. Setting an appropriate threshold is not an easy task, since it requires prior knowledge of the domain. Therefore, a pattern mining approach that does not depend on domain knowledge became necessary, because the framework cannot predict and control mining results precisely according to a given threshold. Top-k frequent pattern mining was proposed to solve this problem; it mines the top-k most important patterns without any threshold setting. Through this method, users can find patterns ranging from the highest frequency to the k-th highest frequency regardless of the database. In this paper, we provide background on both frequent and top-k pattern mining. Although top-k frequent pattern mining extracts the top-k significant patterns without a threshold, it can consider neither item quantities in transactions nor the relative importance of items in the database, which is why the method cannot meet the requirements of many real-world applications: in such applications, patterns with low frequency can be meaningful, and vice versa. High utility pattern mining was proposed to reflect the characteristics of non-binary databases, but it requires a minimum threshold. Recently, top-k high utility pattern mining has been developed, through which users can mine the desired number of high utility patterns without prior knowledge. In this paper, we analyze two algorithms related to top-k high utility pattern mining in detail. We also conduct various experiments with these algorithms on real datasets and, based on the performance analysis of the experimental results, study points of improvement and directions for the development of top-k high utility pattern mining.
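
A brute-force sketch of the utility computation underlying top-k high utility pattern mining: each transaction records item quantities, each item has an external (per-unit) utility, and a pattern's utility is summed over the transactions that contain it. This exhaustive enumeration is only illustrative; the algorithms analyzed in the paper use pruning strategies to avoid it. The toy data below are hypothetical:

```python
from itertools import combinations
import heapq

# Quantities per transaction and per-item (external) utilities -- toy data.
transactions = [{"a": 2, "b": 1}, {"a": 1, "c": 3}, {"b": 2, "c": 1}]
unit_utility = {"a": 5, "b": 3, "c": 1}

def pattern_utility(pattern, transactions, unit_utility):
    """Sum of quantity * unit utility over transactions containing the whole pattern."""
    total = 0
    for t in transactions:
        if all(item in t for item in pattern):
            total += sum(t[item] * unit_utility[item] for item in pattern)
    return total

def top_k_high_utility(transactions, unit_utility, k):
    """Enumerate every itemset and keep the k with the highest utility."""
    items = sorted({i for t in transactions for i in t})
    scored = []
    for size in range(1, len(items) + 1):
        for pattern in combinations(items, size):
            scored.append((pattern_utility(pattern, transactions, unit_utility), pattern))
    return heapq.nlargest(k, scored)

print(top_k_high_utility(transactions, unit_utility, k=3))
# [(15, ('a',)), (13, ('a', 'b')), (9, ('b',))]
```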

Improving the Accuracy of Document Classification by Learning Heterogeneity (이질성 학습을 통한 문서 분류의 정확성 향상 기법)

  • Wong, William Xiu Shun;Hyun, Yoonjin;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.3 / pp.21-44 / 2018
  • In recent years, the rapid development of Internet technology and the popularization of smart devices have produced massive amounts of text data, distributed through various media platforms such as the World Wide Web, Internet news feeds, microblogs, and social media. However, this enormous amount of easily obtained information lacks organization, a problem that has drawn the interest of many researchers. Managing this information requires methods capable of classifying relevant content, and hence text classification was introduced. Text classification is a challenging task in modern data analysis in which a text document must be assigned to one or more predefined categories or classes. Different techniques are available in this field, such as K-Nearest Neighbor, the Naïve Bayes algorithm, Support Vector Machines, Decision Trees, and Artificial Neural Networks. However, when dealing with huge amounts of text data, model performance and accuracy become a challenge: the performance of a text classification model can vary depending on the type of words used in the corpus and the type of features created for classification. Most previous attempts have proposed a new algorithm or modified an existing one, and this line of research has arguably reached its limits for further improvement. In this study, rather than proposing or modifying an algorithm, we focus on modifying the way the data are used. It is widely known that classifier performance is influenced by the quality of the training data upon which the classifier is built. Real-world datasets often contain noise, and this noisy data can affect the decisions made by classifiers built from it. In this study, we consider that data from different domains, i.e., heterogeneous data, may have noise-like characteristics that can be utilized in the classification process. Machine learning algorithms are typically applied under the assumption that the characteristics of the training data and the target data are the same or very similar. However, in the case of unstructured data such as text, features are determined by the vocabulary of the documents, and if the viewpoints of the training data and target data differ, the features may also differ between the two. In this study, we attempt to improve classification accuracy by strengthening the robustness of the document classifier through artificially injecting noise into the process of constructing it. Because data coming from various sources are likely to be formatted differently, traditional machine learning algorithms struggle with them: they were not developed to recognize different types of data representation at the same time and combine them in a single generalization. Therefore, to utilize heterogeneous data in the learning process of the document classifier, we apply semi-supervised learning. However, unlabeled data may degrade the performance of the document classifier.
We therefore further propose a method called the Rule Selection-Based Ensemble Semi-Supervised Learning Algorithm (RSESLA) to select only the documents that contribute to improving the classifier's accuracy. RSESLA creates multiple views by manipulating the features using different types of classification models and different types of heterogeneous data, and the most confident classification rules are selected and applied for the final decision making. In this paper, three different types of real-world data sources were used: news, Twitter, and blogs.
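
The general semi-supervised mechanism sketched above can be illustrated with a simple self-training loop that only accepts pseudo-labels above a confidence threshold. This is a generic sketch assuming dense feature matrices, not the RSESLA algorithm itself, whose multi-view rule selection is more involved:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(clf, X_labeled, y_labeled, X_unlabeled, threshold=0.9, rounds=5):
    """Iteratively add confidently pseudo-labeled documents to the training set."""
    X_l, y_l, X_u = X_labeled, y_labeled, X_unlabeled
    for _ in range(rounds):
        if len(X_u) == 0:
            break
        clf.fit(X_l, y_l)
        proba = clf.predict_proba(X_u)
        confident = proba.max(axis=1) >= threshold        # keep only high-confidence documents
        if not confident.any():
            break
        pseudo_labels = clf.classes_[proba[confident].argmax(axis=1)]
        X_l = np.vstack([X_l, X_u[confident]])
        y_l = np.concatenate([y_l, pseudo_labels])
        X_u = X_u[~confident]
    return clf.fit(X_l, y_l)

# Usage with hypothetical feature matrices:
# clf = self_train(LogisticRegression(max_iter=1000), X_lab, y_lab, X_unlab)
```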

Validation of Extreme Rainfall Estimation in an Urban Area derived from Satellite Data : A Case Study on the Heavy Rainfall Event in July, 2011 (위성 자료를 이용한 도시지역 극치강우 모니터링: 2011년 7월 집중호우를 중심으로)

  • Yoon, Sun-Kwon;Park, Kyung-Won;Kim, Jong Pil;Jung, Il-Won
    • Journal of Korea Water Resources Association / v.47 no.4 / pp.371-384 / 2014
  • This study developed a new algorithm for extreme rainfall extraction based on Communication, Ocean and Meteorological Satellite (COMS) and Tropical Rainfall Measuring Mission (TRMM) satellite image data and evaluated its applicability for the heavy rainfall event of July 2011 in Seoul, South Korea. A power-series-regression-based Z-R relationship was employed to account for the empirical relationships between TRMM/PR, TRMM/VIRS, COMS, and Automatic Weather System (AWS) data at each elevation. The estimated Z-R relationship ($Z=303R^{0.72}$) agreed well with the AWS observations (correlation coefficient = 0.57). The 10-minute rainfall intensities estimated from the COMS satellite using this Z-R relationship were generally underestimated, while for small rainfall events the relationship tended to overestimate rainfall intensities. Nevertheless, the overall patterns of the estimated rainfall were very comparable with the observed data. The correlation coefficient and the Root Mean Square Error (RMSE) between the 10-minute rainfall series from COMS and AWS were 0.517 and 3.146, respectively. In addition, the averaged error values of the spatial correlation matrix ranged from -0.530 to -0.228, indicating negative correlation. To reduce the error in extreme rainfall estimation using satellite datasets, more extreme factors need to be taken into account and the algorithm improved through further study. This study showed the potential utility of multi-geostationary satellite data for building sub-daily rainfall records and establishing real-time flood alert systems in ungauged watersheds.
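
The Z-R relationship above, $Z=303R^{0.72}$, can be inverted to estimate a rain rate R from radar reflectivity; a small sketch in which the coefficient and exponent come from the abstract and the dBZ conversion is the standard definition (the input values are hypothetical):

```python
import numpy as np

A, B = 303.0, 0.72  # Z = A * R**B, as estimated in the study

def rain_rate_from_z(z_linear: np.ndarray) -> np.ndarray:
    """Invert Z = A * R**B to get rain rate R (mm/h) from linear reflectivity Z."""
    return (z_linear / A) ** (1.0 / B)

def rain_rate_from_dbz(dbz: np.ndarray) -> np.ndarray:
    """Convert reflectivity in dBZ (10*log10(Z)) to rain rate."""
    return rain_rate_from_z(10.0 ** (dbz / 10.0))

print(rain_rate_from_dbz(np.array([30.0, 40.0, 50.0])))  # increasing rain rates
```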

A Quantitative Analysis of Severity Classification and Burn Severity for Large Forest Fire Areas using the Normalized Burn Ratio of Landsat Imagery (Landsat 영상으로부터 정규탄화지수 추출과 산불피해지역 및 피해강도의 정량적 분석)

  • Won, Myoung-Soo;Koo, Kyo-Sang;Lee, Myung-Bo
    • Journal of the Korean Association of Geographic Information Studies / v.10 no.3 / pp.80-92 / 2007
  • Forest fire is the dominant large-scale disturbance mechanism in the Korean temperate forest, and it strongly influences forest structure and function. Burn severity, defined as the degree to which an ecosystem has changed as a result of fire, incorporates both short- and long-term post-fire effects on the local and regional environment, and vegetation rehabilitation may vary according to burn severity after a fire. Understanding burn severity and the process of vegetation rehabilitation in areas damaged by large fires requires considerable manpower and budget, whereas analyzing burn severity from satellite imagery can rapidly and remotely provide more objective results over large burned areas. Spaceborne and airborne sensors have been used to map burned areas, assess the characteristics of active fires, and characterize post-fire ecological effects. To classify the fire-damaged areas and analyze the burn severity of the Samcheok fire of 2000, the Cheongyang fire of 2002, and the Yangyang fire of 2005, we utilized the Normalized Burn Ratio (NBR) technique. The NBR is differenced between pre- and post-fire datasets to determine the extent and degree of change caused by burning. In this paper we used pre- and post-fire Landsat TM and ETM+ imagery to compute the NBR and evaluate large-scale patterns of burn severity at 30 m spatial resolution. 65% of the Samcheok fire area, 91% of the Cheongyang fire area, and 65% of the Yangyang fire area corresponded to a burn severity class of 'High' or above. Therefore, the use of the remotely sensed differenced Normalized Burn Ratio (${\Delta}NBR$) with RS and GIS allows burn severity to be quantified spatially by mapping the damaged area and burn severity across large fire areas.
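
A minimal NumPy sketch of the NBR and differenced NBR (${\Delta}NBR$) computation used above, with the standard band definition NBR = (NIR - SWIR) / (NIR + SWIR); the band arrays and the severity threshold are hypothetical and only illustrate the idea:

```python
import numpy as np

def nbr(nir: np.ndarray, swir: np.ndarray) -> np.ndarray:
    """Normalized Burn Ratio from near-infrared and shortwave-infrared reflectance."""
    return (nir - swir) / (nir + swir + 1e-10)  # small epsilon avoids division by zero

def delta_nbr(nir_pre, swir_pre, nir_post, swir_post):
    """Pre-fire NBR minus post-fire NBR; higher values indicate higher burn severity."""
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)

# Example: fraction of pixels above an illustrative dNBR threshold for high severity.
# dnbr = delta_nbr(nir_pre, swir_pre, nir_post, swir_post)
# high_severity_fraction = np.mean(dnbr > 0.66)
```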
