• Title/Summary/Keyword: weighted similarity


Graph-based High-level Motion Segmentation using Normalized Cuts (Normalized Cuts을 이용한 그래프 기반의 하이레벨 모션 분할)

  • Yun, Sung-Ju;Park, An-Jin;Jung, Kee-Chul
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.11
    • /
    • pp.671-680
    • /
    • 2008
  • Motion capture devices have been utilized in producing several kinds of content, such as movies and video games. However, since motion capture devices are expensive and inconvenient to use, motions segmented from captured data are recycled and synthesized for use in other content, but these motions have generally been segmented manually by content producers. Automatic motion segmentation has therefore received a great deal of attention recently. Previous approaches are divided into on-line and off-line methods: on-line approaches segment motions based on similarities between neighboring frames, while off-line approaches segment motions by capturing global characteristics in feature space. In this paper, we propose a graph-based high-level motion segmentation method. Since high-level motions consist of frames repeated within a temporal distance, we consider not only similarities between neighboring frames but also similarities among all frames within the temporal distance. This is achieved by constructing a graph in which each vertex represents a frame and the edges between frames are weighted by their similarity. The normalized cuts algorithm is then used to partition the constructed graph into several sub-graphs by globally finding minimum cuts. In the experiments, the proposed method showed better performance than a PCA-based method (on-line) and a GMM-based method (off-line), as it globally segments motions from a graph constructed from similarities between neighboring frames as well as similarities among all frames within the temporal distance.
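
A minimal sketch of the graph construction and normalized-cuts partitioning described in this abstract, not the authors' implementation: it assumes a per-frame feature matrix, a Gaussian similarity between frames, a temporal window outside of which edges are dropped, and uses scikit-learn's spectral clustering, the standard relaxation of the normalized cut. All parameter values are illustrative.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def segment_motion(frames, window=30, sigma=1.0, n_segments=4):
    """Partition motion frames with a normalized-cuts style graph cut.

    frames: (n_frames, n_features) array of per-frame pose features.
    window: only frames within this temporal distance are connected.
    """
    n = len(frames)
    # Pairwise Gaussian similarity between frame feature vectors.
    diff = frames[:, None, :] - frames[None, :, :]
    affinity = np.exp(-np.sum(diff ** 2, axis=-1) / (2 * sigma ** 2))
    # Keep edges only between frames within the temporal window.
    idx = np.arange(n)
    affinity[np.abs(idx[:, None] - idx[None, :]) > window] = 0.0
    # Spectral clustering is the usual relaxation of the normalized cut.
    labels = SpectralClustering(n_clusters=n_segments,
                                affinity="precomputed",
                                assign_labels="discretize",
                                random_state=0).fit_predict(affinity)
    return labels  # segment label per frame
```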

Developing a Korean Standard Brain Atlas on the basis of Statistical and Probabilistic Approach and Visualization tool for Functional image analysis (확률 및 통계적 개념에 근거한 한국인 표준 뇌 지도 작성 및 기능 영상 분석을 위한 가시화 방법에 관한 연구)

  • Koo, B.B.;Lee, J.M.;Kim, J.S.;Lee, J.S.;Kim, I.Y.;Kim, J.J.;Lee, D.S.;Kwon, J.S.;Kim, S.I.
    • The Korean Journal of Nuclear Medicine
    • /
    • v.37 no.3
    • /
    • pp.162-170
    • /
    • 2003
  • Probabilistic anatomical maps are used to localize functional neuro-images and to characterize morphological variability. A quantitative indicator is very important for determining the anatomical position of an activated region, because functional image data have low resolution and no inherent anatomical information. Although the previously developed MNI probabilistic anatomical map was sufficient to localize such data, it was not suitable for Korean brains because of the morphological differences between Occidental and Oriental brains. In this study, we developed a probabilistic anatomical map for normal Korean brains. Seventy-five normal brains were imaged as T1-weighted spoiled gradient echo magnetic resonance images on a 1.5-T GE SIGNA scanner. A standard brain was then selected from the group by a clinician searching for a brain with average properties in the Talairach coordinate system. On the standard brain, an anatomist delineated 89 regions of interest (ROIs) parcellating cortical and subcortical areas. The parcellated ROIs of the standard were warped and overlapped onto each brain by maximizing intensity similarity, and every brain was automatically labeled with the registered ROIs. Each same-labeled region was linearly normalized to the standard brain, and the occurrence of each region was counted. Finally, 89 probabilistic ROI volumes were generated. This paper presents a probabilistic anatomical map for localizing functional and structural analyses of normal Korean brains. In the future, we will develop group-specific probabilistic anatomical maps for OCD and schizophrenia.
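
As an illustration of the final counting step only, the sketch below assumes each subject's labeled volume has already been registered to the standard space and stored as an integer label array; dividing per-voxel occurrence counts by the number of subjects yields probabilistic ROI volumes. Function and variable names are hypothetical, not the authors' code.

```python
import numpy as np

def build_probabilistic_rois(label_volumes, n_rois=89):
    """Convert registered, labeled brains into probabilistic ROI maps.

    label_volumes: list of integer arrays (one per subject) where voxel
    values 1..n_rois identify the ROI and 0 is background.
    """
    n_subjects = len(label_volumes)
    shape = label_volumes[0].shape
    prob_maps = np.zeros((n_rois,) + shape, dtype=np.float32)
    for vol in label_volumes:
        for roi in range(1, n_rois + 1):
            prob_maps[roi - 1] += (vol == roi)   # count occurrences per voxel
    return prob_maps / n_subjects                # occurrence probability
```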

A News Video Mining based on Multi-modal Approach and Text Mining (멀티모달 방법론과 텍스트 마이닝 기반의 뉴스 비디오 마이닝)

  • Lee, Han-Sung;Im, Young-Hee;Yu, Jae-Hak;Oh, Seung-Geun;Park, Dai-Hee
    • Journal of KIISE:Databases
    • /
    • v.37 no.3
    • /
    • pp.127-136
    • /
    • 2010
  • With the rapid growth of information and computer communication technologies, the number of digital documents, including multimedia data, has recently exploded. In particular, news video databases and news video mining have become the subject of extensive research aimed at developing effective and efficient tools for the manipulation and analysis of news videos, because of their information richness. However, much of this research focuses on browsing, retrieval, and summarization of news videos; to date, relatively little work has been done to discover and analyze the plentiful latent semantic knowledge in news videos. In this paper, we propose a news video mining system based on a multi-modal approach and text mining, which uses the visual-textual information of news video clips and their scripts. The proposed system automatically constructs a taxonomy of news video stories with a hierarchical clustering algorithm, one of the text mining methods. It then analyzes the topics of news video stories from multiple perspectives by means of a time-cluster trend graph, a weighted cluster growth index, and network analysis. To demonstrate the validity of our approach, we analyzed news videos on "The Second Summit of South and North Korea in 2007".
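
The taxonomy-building step could look roughly like the sketch below: hierarchical (agglomerative) clustering over TF-IDF vectors of the story scripts. This is an assumed, simplified pipeline rather than the authors' exact method; the feature size, number of clusters, and cosine linkage are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

def cluster_news_stories(scripts, n_clusters=10):
    """Group news stories by hierarchically clustering their scripts."""
    tfidf = TfidfVectorizer(max_features=5000).fit_transform(scripts)
    # Note: scikit-learn < 1.2 names this parameter `affinity` instead of `metric`.
    model = AgglomerativeClustering(n_clusters=n_clusters,
                                    metric="cosine",
                                    linkage="average")
    return model.fit_predict(tfidf.toarray())  # cluster label per story
```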

The Role of Double Inversion Recovery Imaging in Acute Ischemic Stroke

  • Choi, Na Young;Park, Soonchan;Lee, Chung Min;Ryu, Chang-Woo;Jahng, Geon-Ho
    • Investigative Magnetic Resonance Imaging
    • /
    • v.23 no.3
    • /
    • pp.210-219
    • /
    • 2019
  • Purpose: The purpose of this study was to investigate whether double inversion recovery (DIR) imaging can play a role in the evaluation of brain ischemia, compared with diffusion-weighted imaging (DWI) and fluid-attenuated inversion recovery (FLAIR) imaging. Materials and Methods: Sixty-seven patients within 48 hours of onset underwent MRI scans with FLAIR, DWI with b-values of 0 (B0) and 1000 s/mm2, and DIR sequences. Patients were categorized into four groups: within three hours, three to six hours, six to 24 hours, and 24 to 48 hours after onset. The lesion-to-normal ratio (LNR) was calculated and compared among sequences within each group by the Friedman test, and among groups for each sequence by the Kruskal-Wallis test. In the qualitative assessment, signal intensity changes on DIR, B0, and FLAIR were graded on a 3-point scale based on their similarity to DWI, as was the image quality of each sequence. Scores for lesion detectability were compared by McNemar's test. Results: LNR values from DWI were higher than those from DIR, but the difference was not statistically significant in any group (P > 0.05). LNR values of DIR were significantly higher than those of FLAIR within 24 hours of onset (P < 0.05). LNR values differed significantly between scans obtained before and after six hours from onset for DIR (P = 0.016), B0 (P = 0.008), and FLAIR (P = 0.018), but not for DWI (P = 0.051). The qualitative analysis demonstrated that the detectability of DIR was higher than that of FLAIR within 4.5 hours and six hours of onset (P < 0.05). In addition, the DWI quality score was lower than that of DIR, particularly for infratentorial lesions. Conclusion: DIR provides higher detectability of hyperacute brain ischemia than B0 and FLAIR and, unlike DWI, does not suffer from susceptibility artifacts. Thus, DIR may serve as an alternative in the evaluation of the FLAIR-DWI mismatch.
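
For clarity on the quantitative comparison, the sketch below shows one plausible way to compute the lesion-to-normal ratio and apply the two nonparametric tests named in the abstract, assuming lesion and contralateral normal ROI intensities have already been extracted. The ROI arrays and per-group LNR lists are hypothetical.

```python
import numpy as np
from scipy.stats import friedmanchisquare, kruskal

def lesion_to_normal_ratio(lesion_roi, normal_roi):
    """LNR = mean lesion signal / mean contralateral normal signal."""
    return np.mean(lesion_roi) / np.mean(normal_roi)

# Compare LNR across sequences within one onset-time group (Friedman test),
# with one LNR value per patient and per sequence:
#   stat, p = friedmanchisquare(lnr_dwi, lnr_dir, lnr_b0, lnr_flair)
# Compare one sequence's LNR across onset-time groups (Kruskal-Wallis test):
#   stat, p = kruskal(lnr_group1, lnr_group2, lnr_group3, lnr_group4)
```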

Managing the Reverse Extrapolation Model of Radar Threats Based Upon an Incremental Machine Learning Technique (점진적 기계학습 기반의 레이더 위협체 역추정 모델 생성 및 갱신)

  • Kim, Chulpyo;Noh, Sanguk
    • The Journal of Korean Institute of Next Generation Computing
    • /
    • v.13 no.4
    • /
    • pp.29-39
    • /
    • 2017
  • Various electronic warfare situations drive the need to develop an integrated electronic warfare simulator that can perform electronic warfare modeling and simulation of radar threats. In this paper, we analyze the components of a simulation system that reversely models radar threats emitting electromagnetic signals, based on the parameters of the electronic information, and we propose a method to incrementally maintain the reverse extrapolation model of RF threats. In the experiments, we evaluate the effectiveness of the incremental model update and also assess methods for integrating the reverse extrapolation models. The individual models of RF threats are constructed using a decision tree, a naive Bayesian classifier, an artificial neural network, and clustering algorithms based on Euclidean distance and cosine similarity measures, respectively. Experimental results show that the accuracy of the reverse extrapolation models improves as the size of the threat sample increases. In addition, we use voting, weighted voting, and the Dempster-Shafer algorithm to integrate the results of the five different models of RF threats. As a result, the final decision of reverse extrapolation obtained through the Dempster-Shafer algorithm shows the best accuracy.
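
To illustrate how several per-threat classifiers can be combined, the sketch below implements plain and weighted majority voting over class predictions; the weights (for example, each model's validation accuracy) and threat labels are placeholders, and the Dempster-Shafer combination used in the paper is not reproduced here.

```python
from collections import Counter

def majority_vote(predictions):
    """predictions: list of class labels, one from each model."""
    return Counter(predictions).most_common(1)[0][0]

def weighted_vote(predictions, weights):
    """Weight each model's vote, e.g. by its validation accuracy."""
    scores = {}
    for label, w in zip(predictions, weights):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)

# Example: five models vote on the identity of an RF threat (labels hypothetical).
# weighted_vote(["T1", "T1", "T2", "T1", "T3"], [0.9, 0.8, 0.7, 0.85, 0.6])
```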

Alice Springs Orogeny (ASO) Footprints Tracing in Fresh Rocks in Arunta Region, Central Australia, Using Uranium/Lead (U-Pb) Geochronology

  • Kouame Yao;Mohammed O. Idrees;Abdul-Lateef Balogun;Mohamed Barakat A. Gibril
    • Economic and Environmental Geology
    • /
    • v.56 no.6
    • /
    • pp.817-830
    • /
    • 2023
  • This study investigates the age of the surficial rocks in the Arunta region using uranium-lead (U-Pb) geochronological dating. Rock samples were collected at four locations, Cattle-Water Pass (CP 1610), Gough Dam (GD 1622 and GD 1610), and London-Eye (LE 1601), within the Strangways Metamorphic Complex and crushed using the SelFrag method. Subsequently, the zircon grains were imaged using cathodoluminescence (CL) analysis, and the U-Pb isotope ratios and chrono-stratigraphy were measured. The imaged zircons revealed an anomalously heterogeneous crystal structure. The ellipses of the samples at locations GD1601, CP1610, and GD1622 fall below the intercept, indicating that the ages produced discordant patterns, whereas LE1601 intersects the Concordia curve at two points, implying the occurrence of an event of significant impact. For the rock sample at CP1610, the estimated mean age is 1742.2 ± 9.2 Ma with a mean squared weighted deviation (MSWD) of 0.49 and a probability of equivalence of 0.90; for GD1622 it is 1748 ± 15 Ma with MSWD = 1.02 and a probability of equivalence of 0.40; and for LE1601 it is 1784.4 ± 9.1 Ma with MSWD = 1.09 and a probability of equivalence of 0.37. For the samples at GD1601, however, two age groups with different means occurred: 1) below the global mean (1792.2 ± 32 Ma), estimated at 1738.2 ± 14 Ma with MSWD = 0.109 and a probability of equivalence of 0.95, and 2) above it, with a mean of 1838.22 ± 14 Ma, MSWD = 1.6, and a probability of equivalence of 0.95. Analysis of the zircon grains has shown a discrepancy, with ages in the range of 1700 Ma to 1800 Ma compared to the ASO, which is dated to between 440 and 300 Ma. Moreover, the apparent similarity in age between core and rim means that the mineral crystallized relatively quickly, without significant interruptions or effects on the isotopic system. This may constrain the timing and extent of geological events that might have affected the mineral, such as metamorphism or hydrothermal alteration.
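
The weighted mean ages and MSWD values quoted above follow the standard error-weighted formulas; the sketch below assumes each zircon analysis comes with an age and a 1-sigma uncertainty, and is a generic calculation rather than the specific geochronology software used in the study.

```python
import numpy as np

def weighted_mean_mswd(ages, sigmas):
    """Error-weighted mean age and MSWD for a set of U-Pb analyses.

    ages, sigmas: arrays of ages (Ma) and their 1-sigma uncertainties.
    MSWD near 1 indicates scatter consistent with analytical errors alone.
    """
    ages = np.asarray(ages, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    mean = np.sum(w * ages) / np.sum(w)          # error-weighted mean
    mean_err = np.sqrt(1.0 / np.sum(w))          # uncertainty of the mean
    mswd = np.sum(w * (ages - mean) ** 2) / (len(ages) - 1)
    return mean, mean_err, mswd
```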

A Groundwater Potential Map for the Nakdonggang River Basin (낙동강권역의 지하수 산출 유망도 평가)

  • Soonyoung Yu;Jaehoon Jung;Jize Piao;Hee Sun Moon;Heejun Suk;Yongcheol Kim;Dong-Chan Koh;Kyung-Seok Ko;Hyoung-Chan Kim;Sang-Ho Moon;Jehyun Shin;Byoung Ohan Shim;Hanna Choi;Kyoochul Ha
    • Journal of Soil and Groundwater Environment
    • /
    • v.28 no.6
    • /
    • pp.71-89
    • /
    • 2023
  • A groundwater potential map (GPM) was built for the Nakdonggang River Basin based on ten variables: hydrogeologic unit, fault-line density, depth to groundwater, distance to surface water, lineament density, slope, stream drainage density, soil drainage, land cover, and annual rainfall. To integrate the thematic layers into the GPM, the criteria were first weighted using the Analytic Hierarchy Process (AHP) and then overlaid using the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) model. Finally, groundwater potential was categorized into five classes (very high (VH), high (H), moderate (M), low (L), very low (VL)) and verified by examining the specific capacity of individual wells in each class. The wells in the area categorized as VH showed the highest median specific capacity (5.2 m3/day/m), while wells with specific capacity < 1.39 m3/day/m were distributed in the areas categorized as L or VL. The accuracy of the GPM generated in this work appears acceptable, although the specific capacity data were not sufficient to verify the GPM in such a large watershed. To create GPMs for determining high-yield well locations, the resolution and reliability of the thematic maps should be improved. Criterion values for groundwater potential should also be established when machine learning or statistical models are used in the GPM evaluation process.
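
The AHP-weighted TOPSIS overlay can be illustrated with the generic sketch below, where each row of the criterion matrix would correspond to a grid cell of the thematic layers; the AHP weights and the benefit/cost flags per criterion are assumptions, not values from the study.

```python
import numpy as np

def topsis_scores(matrix, weights, benefit):
    """Rank alternatives (e.g., grid cells) by closeness to the ideal solution.

    matrix:  (n_alternatives, n_criteria) criterion values.
    weights: AHP-derived criterion weights (summing to 1).
    benefit: boolean array, True where larger criterion values are better.
    """
    m = np.asarray(matrix, dtype=float)
    weights = np.asarray(weights, dtype=float)
    benefit = np.asarray(benefit, dtype=bool)
    norm = m / np.sqrt((m ** 2).sum(axis=0))       # vector normalization
    v = norm * weights                              # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.sqrt(((v - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((v - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)                  # higher = better potential
```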

Research on the Development of Distance Metrics for the Clustering of Vessel Trajectories in Korean Coastal Waters (국내 연안 해역 선박 항적 군집화를 위한 항적 간 거리 척도 개발 연구)

  • Seungju Lee;Wonhee Lee;Ji Hong Min;Deuk Jae Cho;Hyunwoo Park
    • Journal of Navigation and Port Research
    • /
    • v.47 no.6
    • /
    • pp.367-375
    • /
    • 2023
  • This study developed a new distance metric for vessel trajectories, applicable to marine traffic control services in Korean coastal waters. The proposed metric is a weighted sum of the traditional Hausdorff distance, which measures the similarity between spatiotemporal data, and the differences in average Speed Over Ground (SOG) and in the variance of Course Over Ground (COG) between two trajectories. To validate the effectiveness of the new metric, a comparative analysis was conducted using actual Automatic Identification System (AIS) trajectory data in conjunction with an agglomerative clustering algorithm. Data visualizations confirmed that the trajectory clustering results obtained with the new metric reflect geographical distances and the distribution of vessel behavioral characteristics more accurately than conventional metrics such as the Hausdorff distance and the Dynamic Time Warping distance. Quantitatively, based on the Davies-Bouldin index, the clustering results were superior or comparable, and the metric demonstrated exceptional efficiency in distance computation.
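
A distance of the kind described, combining the Hausdorff distance with weighted differences in mean SOG and COG variance, could be sketched as follows. SciPy's directed Hausdorff distance is used for the positional term; the weight values and the trajectory data structure are assumptions, not the paper's specification.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def trajectory_distance(a, b, w_pos=1.0, w_sog=0.1, w_cog=0.1):
    """Weighted distance between two vessel trajectories.

    a, b: dicts with 'xy' (n, 2) position arrays and 1-D 'sog' and 'cog' arrays.
    The positional term is the symmetric Hausdorff distance; the other terms
    penalize differences in mean SOG and in COG variance (circular statistics
    are ignored here for brevity).
    """
    d_pos = max(directed_hausdorff(a["xy"], b["xy"])[0],
                directed_hausdorff(b["xy"], a["xy"])[0])
    d_sog = abs(np.mean(a["sog"]) - np.mean(b["sog"]))
    d_cog = abs(np.var(a["cog"]) - np.var(b["cog"]))
    return w_pos * d_pos + w_sog * d_sog + w_cog * d_cog
```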

Resolving the 'Gray sheep' Problem Using Social Network Analysis (SNA) in Collaborative Filtering (CF) Recommender Systems (소셜 네트워크 분석 기법을 활용한 협업필터링의 특이취향 사용자(Gray Sheep) 문제 해결)

  • Kim, Minsung;Im, Il
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.2
    • /
    • pp.137-148
    • /
    • 2014
  • Recommender systems have become one of the most important technologies in e-commerce these days. For many consumers, the ultimate reason to shop online is to reduce the effort of information search and purchase, and recommender systems are a key technology for serving these needs. Many past studies on recommender systems have been devoted to developing and improving recommendation algorithms, and collaborative filtering (CF) is known to be the most successful one. Despite its success, however, CF has several shortcomings, such as the cold-start, sparsity, and gray sheep problems. In order to generate recommendations, ordinary CF algorithms require evaluations or preference information directly from users; for new users who do not have any evaluations or preference information, CF therefore cannot come up with recommendations (cold-start problem). As the numbers of products and customers increase, the scale of the data increases exponentially and most of the data cells are empty, and this sparse dataset makes computation for recommendation extremely hard (sparsity problem). Since CF is based on the assumption that there are groups of users sharing common preferences or tastes, CF becomes inaccurate if there are many users with rare and unique tastes (gray sheep problem). This study proposes a new algorithm that utilizes Social Network Analysis (SNA) techniques to resolve the gray sheep problem. We utilize 'degree centrality' in SNA to identify users with unique preferences (gray sheep). Degree centrality in SNA refers to the number of direct links to and from a node. In a network of users connected through common preferences or tastes, those with unique tastes have fewer links to other users (nodes) and are isolated from other users; therefore, gray sheep can be identified by calculating the degree centrality of each node. We divide the dataset into two parts, gray sheep and others, based on the degree centrality of the users, and different similarity measures and recommendation methods are then applied to these two datasets. The detailed algorithm is as follows. Step 1: Convert the initial data, which is a two-mode network (user to item), into a one-mode network (user to user). Step 2: Calculate the degree centrality of each node and separate the nodes whose degree centrality values fall below a pre-set threshold; the threshold value is determined by simulations such that the accuracy of CF for the remaining dataset is maximized. Step 3: An ordinary CF algorithm is applied to the remaining dataset. Step 4: Since the separated dataset consists of users with unique tastes, an ordinary CF algorithm cannot generate recommendations for them, so a 'popular item' method is used to generate recommendations for these users. The F measures of the two datasets are weighted by the numbers of nodes and summed to serve as the final performance metric. In order to test the performance improvement achieved by this new algorithm, an empirical study was conducted using a publicly available dataset, the MovieLens data by the GroupLens research team, comprising 100,000 evaluations by 943 users on 1,682 movies. The proposed algorithm was compared with an ordinary CF algorithm utilizing 'Best-N-neighbors' and the 'Cosine' similarity method. The empirical results show that the F measure was improved by about 11% on average when the proposed algorithm was used. Past studies to improve CF performance typically used additional information other than users' evaluations, such as demographic data, and some studies applied SNA techniques as a new similarity metric. This study is novel in that it uses SNA to separate the dataset, showing that the performance of CF can be improved, without any additional information, when SNA techniques are used as proposed. This study has several theoretical and practical implications. It empirically shows that the characteristics of a dataset can affect the performance of CF recommender systems, which helps researchers understand the factors affecting CF performance, and it opens a door for future studies applying SNA to CF to analyze dataset characteristics. In practice, this study provides guidelines for improving the performance of CF recommender systems with a simple modification.
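
The gray-sheep separation step (Steps 1 and 2 above) can be illustrated with a short sketch: a user-user network is built from co-rated items (the one-mode projection of the two-mode user-item network), degree centrality is computed, and users below a threshold are routed to a popular-item recommender. The data structure and threshold are illustrative, and this is not the authors' code.

```python
import networkx as nx

def split_gray_sheep(user_items, threshold):
    """Separate gray-sheep users by degree centrality.

    user_items: dict mapping user id -> set of rated item ids.
    Two users are linked when they share at least one rated item
    (one-mode projection of the user-item network).
    """
    g = nx.Graph()
    g.add_nodes_from(user_items)
    users = list(user_items)
    for i, u in enumerate(users):
        for v in users[i + 1:]:
            if user_items[u] & user_items[v]:
                g.add_edge(u, v)
    centrality = nx.degree_centrality(g)
    gray = {u for u, c in centrality.items() if c < threshold}
    return gray, set(users) - gray   # gray sheep vs. ordinary CF users
```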

