• Title/Summary/Keyword: Time-based Clustering

Depth Map Pre-processing using Gaussian Mixture Model and Mean Shift Filter (혼합 가우시안 모델과 민쉬프트 필터를 이용한 깊이 맵 부호화 전처리 기법)

  • Park, Sung-Hee;Yoo, Ji-Sang
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.5
    • /
    • pp.1155-1163
    • /
    • 2011
  • In this paper, we propose a new pre-processing algorithm for depth maps to improve coding efficiency. The 3DV/FTV group in MPEG is currently working on the 3DVC (3D video coding) standard, but a compression method for depth map images has not yet been finalized. In the proposed algorithm, after dividing the histogram distribution of a given depth map with an EM clustering method based on a GMM, we classify the depth map into several layered images. We then apply a different mean shift filter to each classified image according to whether it contains background or foreground. In other words, we try to maximize coding efficiency while preserving the boundary of each object and averaging the region inside the boundary. Experiments performed on many test images show that the proposed algorithm achieves a bit-rate reduction of 19%~20% while also reducing computation time.
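
As a rough illustration of the pipeline described above, the sketch below (a minimal, hypothetical rendering, not the authors' implementation) fits a GMM to the depth-value distribution via EM, splits the depth map into layers, and runs a mean shift filter over each layer. scikit-learn and OpenCV stand in for the paper's components, and the per-layer filter parameters that the paper varies by foreground/background are collapsed into fixed values here.

```python
import numpy as np
import cv2
from sklearn.mixture import GaussianMixture

def layer_and_filter(depth, n_layers=4, sp=10, sr=20):
    """depth: 8-bit single-channel depth map (H x W uint8 array)."""
    # EM-based GMM clustering of the depth-value distribution.
    samples = depth.reshape(-1, 1).astype(np.float64)
    gmm = GaussianMixture(n_components=n_layers, random_state=0).fit(samples)
    labels = gmm.predict(samples).reshape(depth.shape)

    out = np.zeros_like(depth)
    for k in range(n_layers):
        mask = labels == k
        # Mean shift filtering smooths the interior of each layer while
        # keeping object boundaries; OpenCV expects an 8-bit 3-channel image.
        layer = np.where(mask, depth, 0).astype(np.uint8)
        bgr = cv2.cvtColor(layer, cv2.COLOR_GRAY2BGR)
        filtered = cv2.pyrMeanShiftFiltering(bgr, sp, sr)
        out[mask] = cv2.cvtColor(filtered, cv2.COLOR_BGR2GRAY)[mask]
    return out
```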

SPEC: Space Efficient Cubes for Data Warehouses (SPEC : 데이타 웨어하우스를 위한 저장 공간 효율적인 큐브)

  • Chun Seok-Ju;Lee Seok-Lyong;Kang Heum-Geun;Chung Chin-Wan
    • Journal of KIISE:Databases
    • /
    • v.32 no.1
    • /
    • pp.1-11
    • /
    • 2005
  • An aggregation query computes aggregate information over a data cube within a query range specified by a user. Existing methods based on the prefix-sum approach use an additional cube, called the prefix-sum cube (PC), to store the cumulative sums of data, causing a high space overhead. This space overhead not only leads to extra costs for storage devices, but also causes additional propagation of updates and longer access times on physical devices. In this paper, we propose a new prefix-sum cube called 'SPEC' which drastically reduces the space of the PC in a large data warehouse. SPEC decreases the update propagation caused by the dependency between values in cells of the PC. We develop an effective algorithm which finds dense sub-cubes in a large data cube. We perform extensive experiments with respect to various dimensions of the data cube and query sizes, and examine the effectiveness and performance of our proposed method. Experimental results show that SPEC significantly reduces the space of the PC while maintaining reasonable query performance.
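
For readers unfamiliar with the prefix-sum cube (PC) that SPEC compresses, the sketch below shows the baseline idea on a toy 2-D cube: once cumulative sums are precomputed, any range sum is answered with at most four cell lookups by inclusion-exclusion. This illustrates only the classic PC; SPEC's contribution is to store such sums over dense sub-cubes only, cutting the space overhead.

```python
import numpy as np

cube = np.random.randint(0, 100, size=(8, 8))  # toy 2-D data cube
pc = cube.cumsum(axis=0).cumsum(axis=1)        # prefix-sum cube (PC)

def range_sum(pc, r1, c1, r2, c2):
    """Sum of cube[r1..r2, c1..c2] (inclusive) via inclusion-exclusion."""
    total = pc[r2, c2]
    if r1 > 0:
        total -= pc[r1 - 1, c2]
    if c1 > 0:
        total -= pc[r2, c1 - 1]
    if r1 > 0 and c1 > 0:
        total += pc[r1 - 1, c1 - 1]
    return total

assert range_sum(pc, 2, 3, 5, 6) == cube[2:6, 3:7].sum()
```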

A Method for the Classification of Water Pollutants using Machine Learning Model with Swimming Activities Videos of Caenorhabditis elegans (예쁜꼬마선충의 수영 행동 영상과 기계학습 모델을 이용한 수질 오염 물질 구분 방법)

  • Kang, Seung-Ho;Jeong, In-Seon;Lim, Hyeong-Seok
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.7
    • /
    • pp.903-909
    • /
    • 2021
  • Caenorhabditis elegans, whose genome has been completely sequenced, is a representative species used in various research fields such as gene function analysis and animal behavior research. Meanwhile, there have been many studies on bio-monitoring systems that determine whether water is contaminated by using the swimming activities of nematodes. In this paper, we show the possibility of using the swimming activities of C. elegans to develop a machine learning based bio-monitoring system that identifies chemicals causing water pollution. To characterize the swimming activity of a nematode, BLS entropy is computed for the nematode in each frame. The BLS entropy profile, an assembly of these entropies, is then classified into several patterns using clustering algorithms, and these patterns are used to construct data sets. We recorded images of the swimming behavior of nematodes in arenas to which formaldehyde, benzene, and toluene were each added at a concentration of 0.1 ppm, and evaluated the performance of the developed hidden Markov model (HMM).
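
The pattern-building step might look like the following minimal sketch, which assumes each recording has already been reduced to a fixed-length BLS entropy profile (one entropy value per frame) and uses k-means as a stand-in for the paper's clustering algorithms; the resulting pattern labels are what would feed the HMM classifier.

```python
import numpy as np
from sklearn.cluster import KMeans

def profile_patterns(entropy_profiles, n_patterns=5):
    """entropy_profiles: (n_recordings, n_frames) array of BLS entropies."""
    km = KMeans(n_clusters=n_patterns, n_init=10, random_state=0)
    labels = km.fit_predict(entropy_profiles)  # one pattern id per recording
    return labels, km.cluster_centers_
```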

Hyaluronic acid reduces inflammation and crevicular fluid IL-1β concentrations in peri-implantitis: a randomized controlled clinical trial

  • Sanchez-Fernandez, Elena;Magan-Fernandez, Antonio;O'Valle, Francisco;Bravo, Manuel;Mesa, Francisco
    • Journal of Periodontal and Implant Science
    • /
    • v.51 no.1
    • /
    • pp.63-74
    • /
    • 2021
  • Purpose: This study investigated the effects of hyaluronic acid (HA) on peri-implant clinical variables and crevicular concentrations of the proinflammatory biomarkers interleukin (IL)-1β and tumor necrosis factor (TNF)-α in patients with peri-implantitis. Methods: A randomized controlled trial was conducted in peri-implantitis patients. Patients were randomized to receive a 0.8% HA gel (test group), an excipient-based gel (control group 1), or no gel (control group 2). Clinical periodontal variables and marginal bone loss after 0, 45, and 90 days of treatment were assessed. IL-1β and TNF-α levels in crevicular fluid were measured by enzyme-linked immunosorbent assays at baseline and after 45 days of treatment. Clustering analysis was performed, considering the possibility of multiple implants in a single patient. Results: Sixty-one patients with 100 dental implants were assigned to the test group, control group 1, or control group 2. Probing pocket depth (PPD) was significantly lower in the test group than in both control groups at 45 days (control 1: 95% CI, -1.66, -0.40 mm; control 2: 95% CI, -1.07, -0.01 mm) and 90 days (control 1: 95% CI, -1.72, -0.54 mm; control 2: 95% CI, -1.13, -0.15 mm). There was a trend towards less bleeding on probing in the test group than in control group 2 at 90 days (P=0.07). Implants with a PPD ≥5 mm showed higher levels of IL-1β in control group 2 at 45 days than in the test group (P=0.04). Conclusions: This study demonstrates for the first time that the topical application of an HA gel in the peri-implant pocket and around implants with peri-implantitis may reduce inflammation and crevicular fluid IL-1β levels.

Technology Development Strategy of Piggyback Transportation System Using Topic Modeling Based on LDA Algorithm

  • Jun, Sung-Chan;Han, Seong-Ho;Kim, Sang-Baek
    • Journal of the Korea Society of Computer and Information
    • /
    • v.25 no.12
    • /
    • pp.261-270
    • /
    • 2020
  • In this study, we identify promising technologies for the Piggyback transportation system by analyzing relevant patent information. To do so, we first build a patent database by extracting relevant technology keywords from pioneering research papers on the Piggyback flatcar system. We then employ text mining to identify frequently occurring words in the patent database, and using these words, we apply the LDA (Latent Dirichlet Allocation) algorithm to identify "topics" corresponding to "key" technologies for the Piggyback system. Finally, we employ the ARIMA model to forecast the trends of these "key" technologies and identify the promising ones. The results show that a data-driven integrated management system, an operation planning system, and special cargo (especially fluid and gas) handling/storage technologies are the "key" promising technologies for the future of the Piggyback system, and that data reception/analysis techniques must be developed in order to improve system performance. The proposed procedure and analysis method provide useful insights for developing the R&D strategy and technology roadmap for the Piggyback system.
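
The topic-extraction step can be sketched as below, assuming the patent documents are available as plain-text strings; scikit-learn's LatentDirichletAllocation stands in for the paper's LDA setup, and the topic count and vectorizer settings are illustrative.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def extract_topics(patent_texts, n_topics=10, n_top_words=8):
    # Bag-of-words term counts, dropping very rare and very common terms.
    vec = CountVectorizer(max_df=0.95, min_df=2, stop_words="english")
    dtm = vec.fit_transform(patent_texts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(dtm)
    words = vec.get_feature_names_out()
    # Summarize each topic by its highest-weight terms.
    return [[words[i] for i in topic.argsort()[-n_top_words:][::-1]]
            for topic in lda.components_]
```

Each topic's yearly share of the patent corpus would then be the time series handed to the ARIMA forecasting stage.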

Evaluation of Water Quality Characteristics in the Nakdong River using Statistical Analysis (통계분석을 이용한 낙동강유역의 수질변화 특성 조사)

  • Choi, Kil Yong;Im, Toe Hyo;Lee, Jae Woon;Cheon, Se Uk
    • Journal of Korea Water Resources Association
    • /
    • v.45 no.11
    • /
    • pp.1157-1168
    • /
    • 2012
  • In this study, we assess changes in water quality trends over time based on regular monitoring measurements in order to identify and analyze the causes of those trends. The current state of water pollution in the Nakdong River was analyzed, focusing on the significant changes in water quality that occurred between 2006 and 2010. Based on monthly average data, we examined trends in the Nakdong River watershed for water temperature, Biological Oxygen Demand (BOD), Chemical Oxygen Demand (COD), Total Nitrogen (TN), and Total Phosphorus (TP). Moreover, we investigated the seasonal variation of water quality at sites within the Nakdong River basin through further analyses such as correlation analysis, regression analysis, hierarchical clustering, and time series analysis in SPSS. The geology and topography of the watershed, together with conditions such as climate, vegetation, soil, and rainfall, make the basin non-homogeneous, and our study suggests that these variables could cause eutrophication problems in the river. Securing additional river flow during the rainy season is one possible way to mitigate this problem; water management also requires systematic flow measurement, and the numerous construction projects in the basin need to be continuously observed. In addition, some tributaries do not flow in a natural state at certain times because river water is withdrawn for agricultural and industrial use. Therefore, ongoing research, together with a well-configured observation network, is needed.
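
The hierarchical clustering of monitoring sites might be reproduced along these lines (a sketch under assumed inputs, not the paper's SPSS workflow): a matrix of per-site means for temperature, BOD, COD, TN, and TP, standardized and grouped with Ward linkage.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

def cluster_sites(site_matrix, n_clusters=4):
    """site_matrix: (n_sites, n_variables) array of water-quality means."""
    z = zscore(site_matrix, axis=0)      # put variables on a common scale
    tree = linkage(z, method="ward")     # agglomerative (Ward) clustering
    return fcluster(tree, t=n_clusters, criterion="maxclust")
```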

Design and Implementation of Unified Index for Moving Objects Databases (이동체 데이타베이스를 위한 통합 색인의 설계 및 구현)

  • Park Jae-Kwan;An Kyung-Hwan;Jung Ji-Won;Hong Bong-Hee
    • Journal of KIISE:Databases
    • /
    • v.33 no.3
    • /
    • pp.271-281
    • /
    • 2006
  • Recently, the need for Location-Based Services (LBS) has increased due to the development and widespread use of mobile devices (e.g., PDAs, cellular phones, laptop computers, GPS, and RFID). The core technology of LBS is a moving-objects database that stores and manages the positions of moving objects. To search for information quickly, the database needs an index that supports both real-time position tracking and large numbers of updates. As a result, the index requires a structure operating in main memory for real-time processing, together with a technique for migrating parts of the index between main memory and disk storage to manage large volumes of data. To satisfy these requirements, this paper suggests a unified index scheme spanning main memory and disk, along with migration policies for moving parts of the index from memory to disk when memory space runs low. A migration policy determines a group of nodes, called the migration subtree, and migrates the group as a unit to reduce disk I/O. This method takes advantage of bulk operations and dynamic clustering. The unified index is created by applying various migration policies, whose performance we measure and compare through experimental evaluation.
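
The migration idea can be sketched as follows; all names and the cold-subtree heuristic are illustrative assumptions, not the paper's actual policies. The point is that a whole subtree is chosen as the migration unit so its nodes can be flushed to disk in one bulk, clustered write.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    children: list = field(default_factory=list)  # child Nodes (empty = leaf)
    access_count: int = 0                         # hotness statistic
    on_disk: bool = False

def pick_migration_subtree(node, best=None):
    """Pick the coldest in-memory node as the migration subtree root."""
    if not node.on_disk:
        if best is None or node.access_count < best.access_count:
            best = node
        for child in node.children:
            best = pick_migration_subtree(child, best)
    return best

def migrate(subtree):
    # Mark the whole subtree disk-resident in one pass; a real system would
    # serialize these nodes together so they land on adjacent disk pages.
    subtree.on_disk = True
    for child in subtree.children:
        migrate(child)
```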

Automatic Tumor Segmentation Method using Symmetry Analysis and Level Set Algorithm in MR Brain Image (대칭성 분석과 레벨셋을 이용한 자기공명 뇌영상의 자동 종양 영역 분할 방법)

  • Kim, Bo-Ram;Park, Keun-Hye;Kim, Wook-Hyun
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.12 no.4
    • /
    • pp.267-273
    • /
    • 2011
  • In this paper, we propose a method to detect brain tumor regions in MR images. Our method is composed of three parts: tumor slice detection, tumor region detection, and tumor boundary detection. In the tumor slice detection step, slices containing tumor regions are identified using symmetry analysis of the 3D brain volume. The tumor region detection step segments the tumor region within each slice identified as a tumor slice; the tumor region is finally detected using spatial features and symmetry analysis based on the cluster information. The slice and region detection steps are robust to noise and require little computational time because they use prior knowledge of brain tumors and cluster-based symmetry analysis. We then use the level set method with the fast marching algorithm to detect the tumor boundary. The boundary is propagated to the remaining slices using initial seeds derived from the previous or following slice until the tumor region vanishes, which keeps the computational cost low because not every procedure is performed on every slice.
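
The symmetry test for flagging tumor slices might look like the sketch below, assuming the brain is roughly centered so the mid-sagittal plane coincides with the image's vertical midline; the histogram distance and threshold are illustrative stand-ins for the paper's cluster-based symmetry analysis.

```python
import numpy as np

def asymmetry_score(slice_img, bins=64):
    """Total-variation distance between left- and right-half histograms."""
    h, w = slice_img.shape
    left, right = slice_img[:, : w // 2], slice_img[:, w // 2 :]
    hl, _ = np.histogram(left, bins=bins, range=(0, 256))
    hr, _ = np.histogram(right, bins=bins, range=(0, 256))
    hl = hl / hl.sum()
    hr = hr / hr.sum()
    return 0.5 * np.abs(hl - hr).sum()  # 0 = symmetric, 1 = disjoint

def tumor_slices(volume, threshold=0.15):
    """Return indices of slices whose asymmetry exceeds the threshold."""
    return [i for i, s in enumerate(volume) if asymmetry_score(s) > threshold]
```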

A Dynamic Server Power Mode Control for Saving Energy in a Server Cluster Environment (서버 클러스터 환경에서 에너지 절약을 위한 동적 서버 전원 모드 제어)

  • Kim, Ho-Yeon;Ham, Chi-Hwan;Kwak, Hu-Keun;Kwon, Hui-Ung;Kim, Young-Jong;Chung, Kyu-Sik
    • The KIPS Transactions:PartC
    • /
    • v.19C no.2
    • /
    • pp.135-144
    • /
    • 2012
  • All the servers in a traditional server cluster environment are kept on. If the request load reaches its maximum, the cluster delivers its maximum possible performance; otherwise, only a portion of that performance is exploited, so the efficiency of server power consumption is low. We can improve this efficiency by controlling the power mode of servers according to the load, that is, by turning on only the minimum number of servers needed to handle the current load while turning off the remaining servers. Existing power mode control methods use a static policy that decides server power modes at a fixed time interval, so they cannot adapt well to dynamically changing load. To improve on the existing method, we propose a dynamic server power control algorithm. In the proposed method, we keep a history of server power consumption and, based on it, predict whether power consumption will increase in the near future. Based on this prediction, we dynamically change the time interval at which server power modes are decided. We performed experiments with a cluster of 30 PCs. Experimental results show that our proposed method maintains the same performance while reducing power consumption by 29% compared to the existing method. In addition, our proposed method increases average CPU utilization by 66%.
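
A minimal sketch of the dynamic decision-interval idea follows; the constants, the trend predictor, and the headroom factor are illustrative assumptions, not the paper's algorithm. When the consumption history predicts a rise, the controller shortens the interval so the next power-mode decision comes sooner.

```python
import math
from collections import deque

class PowerModeController:
    def __init__(self, per_server_capacity, base_interval=60.0):
        self.history = deque(maxlen=10)      # recent power/load samples
        self.capacity = per_server_capacity  # requests one server can handle
        self.base_interval = base_interval   # seconds between decisions

    def record(self, sample):
        self.history.append(sample)

    def next_interval(self):
        # Trend heuristic: if the newest sample is above the recent mean,
        # consumption is rising, so decide again sooner.
        if len(self.history) < 2:
            return self.base_interval
        rising = self.history[-1] > sum(self.history) / len(self.history)
        return self.base_interval / 2 if rising else self.base_interval

    def servers_needed(self, predicted_load, headroom=1.2):
        # Keep on only the minimum servers for the predicted load, with
        # headroom so a sudden burst does not overload the cluster.
        return max(1, math.ceil(predicted_load * headroom / self.capacity))
```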

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.107-122
    • /
    • 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-varying characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as test data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility; the polynomial kernel function shows exceptionally lower forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility will increase, buy volatility today; if it will decrease, sell volatility today; if the forecasted direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because we cannot trade historical volatility values themselves, but our simulation results are still meaningful since the Korea Exchange introduced a volatility futures contract in November 2014 that traders can now use. The trading systems with SVR-based GARCH models show higher returns than the MLE-based GARCH systems in the testing period. The profitable-trade percentages of the MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of the SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return versus +526.4% for SVR-based symmetric S-GARCH; MLE-based asymmetric E-GARCH shows a -72% return versus +245.6% for SVR-based E-GARCH; and MLE-based asymmetric GJR-GARCH shows a -98.7% return versus +126.3% for SVR-based GJR-GARCH. The linear kernel function shows higher trading returns than the radial kernel function. The best performance of the SVR-based IVTS is +526.4%, versus +150.2% for the MLE-based IVTS; the SVR-based GARCH IVTS also trades more frequently. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We do not consider costs incurred in trading, including brokerage commissions and slippage. The IVTS trading performance is unrealistic since we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential for real trading as well as for asset pricing models, and further studies on other machine learning based GARCH models can give better information to stock market investors.
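
The core of the SVR-based estimation can be sketched as below: instead of fitting the GARCH(1,1) variance recursion by MLE, a volatility proxy is regressed on its GARCH-style inputs (lagged squared return and lagged variance) with an SVR. The rolling-window proxy, feature alignment, and SVR settings are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def fit_svr_garch(returns, kernel="linear"):
    """returns: 1-D numpy array of daily returns."""
    r2 = returns ** 2
    # 5-day rolling mean of squared returns as a realized-variance proxy.
    sigma2 = np.convolve(r2, np.ones(5) / 5, mode="valid")
    # GARCH(1,1)-style features: lagged squared return, lagged variance.
    X = np.column_stack([r2[4:-1], sigma2[:-1]])
    y = sigma2[1:]
    model = make_pipeline(StandardScaler(), SVR(kernel=kernel, C=10.0))
    return model.fit(X, y)
```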