• Title/Summary/Keyword: Singular Decomposition


A Feasibility Study on Adopting Individual Information Cognitive Processing as Criteria of Categorization on Apple iTunes Store

  • Zhang, Chao;Wan, Lili
    • The Journal of Information Systems
    • /
    • v.27 no.2
    • /
    • pp.1-28
    • /
    • 2018
  • Purpose More than 7.6 million mobile apps have been approved on the Apple iTunes Store and Google Play combined. To manage these apps, Apple Inc. established twenty-four primary categories, while Google Play had thirty-three. However, both categorizations have shown growing problems in managing and classifying the large number of apps, such as miscategorized apps, cross-attribution problems, and the lack of a categorization keyword index. The purpose of this study was to introduce individual information cognitive processing as the classification criterion for updating the current categorization on the Apple iTunes Store, and to observe the effectiveness of the new criterion through a classification process on the Apple iTunes Store. Design/Methodology/Approach A research approach with four stages was performed, and a series of mixed methods was developed to examine the feasibility of adopting individual information cognitive processing as the categorization criterion. Keyword lists were extracted using machine-learning techniques with Term Frequency-Inverse Document Frequency (TF-IDF) and Singular Value Decomposition (SVD). Building on prior research on car app categorization, we developed the individual information cognitive processing procedure and performed a further keyword-extraction step on the extracted keyword lists. Findings Using TF-IDF and SVD, keyword lists were extracted from more than five thousand apps. Furthermore, we developed an individual information cognitive processing procedure that included a categorization teaching process and a learning process. The top three keywords for each category were extracted. Comparing the extracted results with prior studies, the inter-rater reliability between the two methods was significant, which shows that individual information cognitive processing is reliable as a categorization criterion on the Apple iTunes Store. Updating suggestions for the Apple iTunes Store are discussed in this paper, and the results may be useful for app store hosts to improve current categorizations as well as to increase the efficiency of app discovery and location for both app developers and users.
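A minimal sketch of the TF-IDF plus truncated-SVD keyword-extraction step this abstract describes, using scikit-learn; the app descriptions, component count, and top-3 cutoff below are illustrative assumptions, not the study's data or parameters.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
import numpy as np

# Placeholder app descriptions standing in for the ~5,000 real ones.
descriptions = [
    "navigate roads with turn by turn voice guidance",
    "track your running distance pace and calories",
    "edit photos with filters stickers and collage tools",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(descriptions)          # documents x terms TF-IDF matrix

svd = TruncatedSVD(n_components=2, random_state=0)  # latent directions via SVD
svd.fit(X)

terms = np.array(vectorizer.get_feature_names_out())
for i, component in enumerate(svd.components_):
    top = terms[np.argsort(component)[::-1][:3]]    # top three keywords per direction
    print(f"component {i}: {', '.join(top)}")
```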

Determination of Excitation and Response Measurement Points for an Efficient Modal Testing (효율적 모우드시험을 위한 가진점과 응답측정점의 결정)

  • 박종필;김광준;박영진
    • Transactions of the Korean Society of Mechanical Engineers
    • /
    • v.16 no.9
    • /
    • pp.1643-1653
    • /
    • 1992
  • A method that uses analytical or numerical modal analysis results, e.g. from finite element analysis, to select desirable response measurement and excitation points for efficient modal testing is introduced. First, master degrees of freedom (DOF) are determined so as to statistically minimize the errors between the responses of the full order model and those estimated from the reduced order model. These master DOFs are selected as the response measurement points. Then a criterion named 'driving point modal constant (DPMC)', related to the magnitudes of the resonance peaks of the driving-point frequency response functions, is used to select the point of excitation from among the master DOFs. In this work, the method is demonstrated through applications to modal testing on a one-dimensional cantilever beam and an aluminum plate, and the results are compared with those obtained by another technique. Also, the method is applied to a two-dimensional structural component of a passenger car.
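The abstract does not give the DPMC formula, so the following is only a hedged sketch of the underlying idea: in a driving-point FRF, the resonance peak of each mode at DOF j scales with the squared mode-shape amplitude at that DOF, so a point where every mode of interest keeps a sizeable squared amplitude is a reasonable excitation candidate. All numbers are hypothetical.

```python
import numpy as np

# Hypothetical mode-shape matrix from an analytical model:
# rows = candidate master DOFs, columns = modes of interest.
phi = np.array([
    [0.05, 0.60, 0.30],
    [0.45, 0.40, 0.55],
    [0.70, 0.10, 0.65],
])

peak_strength = phi**2                    # driving-point peak height ~ phi_jr**2
score = peak_strength.min(axis=1)         # the weakest mode limits a point's usefulness
best_dof = int(np.argmax(score))
print("suggested excitation DOF:", best_dof, "per-DOF scores:", score)
```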

Correlation-based Automatic Image Captioning (상호 관계 기반 자동 이미지 주석 생성)

  • Yang, Hyungjeong;Duygulu, Pinar;Faloutsos, Christos
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.10
    • /
    • pp.1386-1399
    • /
    • 2004
  • This paper presents correlation-based automatic image captioning. Given a training set of annotated images, we want to discover correlations between visual features and textual features, so that we can automatically generate descriptive textual features for a new unseen image. We develop models with multiple design alternatives, such as 1) adaptively clustering visual features, 2) weighting visual features and textual features, and 3) reducing dimensionality for noise suppression. We experiment thoroughly on 10 data sets of various content styles from the Corel image database, about 680MB. The major contributions of this work are: (a) we show that carefully weighting visual and textual features, as well as clustering visual features adaptively, leads to consistent performance improvements, and (b) our proposed methods achieve a relative improvement of up to 45% in annotation accuracy over the state-of-the-art EM approach.
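A minimal sketch of design alternative 3), dimensionality reduction for noise suppression, as a plain low-rank SVD reconstruction; the feature matrix and the retained rank are placeholders, not the paper's visual/textual features.

```python
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(100, 50))             # hypothetical image-by-feature matrix
U, s, Vt = np.linalg.svd(features, full_matrices=False)

k = 10                                            # keep only the strongest directions
denoised = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # low-rank (noise-suppressed) version
print("rank of reconstruction:", np.linalg.matrix_rank(denoised))
```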

Holographic Forensic Mark based on DWT-SVD for Tracing of the Multilevel Distribution (다단계 유통 추적을 위한 DWT-SVD 기반의 홀로그래피 포렌식마크)

  • Li, De;Kim, Jong-Weon
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.2C
    • /
    • pp.155-160
    • /
    • 2010
  • In this paper, we propose a forensic mark algorithm that can embed the distributor's information at each distribution step in order to trace illegal distribution paths. For this purpose, the algorithm must provide a high-capacity payload for embedding copyright and user information at each step, and the information embedded at one step should not interfere with the information embedded at other steps. The proposed algorithm can trace multilevel distribution because the forensic mark is generated as a digital hologram and embedded in the DWT-SVD domain. For high-capacity embedding, an off-axis hologram is generated from the forensic mark and embedded in the HL, LH, and HH bands of the DWT to reduce signal interference. The SVD applied to the holographic signal enhances the detection performance and the security of the forensic mark algorithm. According to the test results, the algorithm was able to embed 128 bits of copyright and user information at each step. In this paper, we embed a total of 384 bits of information over three steps, and the algorithm is also robust to JPEG compression.
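A minimal sketch of embedding a signal into the singular values of one DWT subband, in the spirit of the DWT-SVD domain described above; the hologram generation, the multilevel bookkeeping, and all parameters here are assumptions for illustration.

```python
import numpy as np
import pywt

rng = np.random.default_rng(1)
image = rng.random((256, 256))                 # placeholder host image
mark = rng.random((128, 128)) * 0.01           # placeholder forensic-mark signal

LL, (LH, HL, HH) = pywt.dwt2(image, "haar")    # one-level DWT of the host
U, s, Vt = np.linalg.svd(HL, full_matrices=False)

alpha = 0.05                                   # embedding strength (assumed)
# Add the mark to the singular-value matrix and re-decompose; Um, sm, Vtm would be
# kept as side information for later detection/extraction.
Um, sm, Vtm = np.linalg.svd(np.diag(s) + alpha * mark, full_matrices=False)
HL_marked = U @ np.diag(sm) @ Vt               # rebuild the subband with marked SVs

watermarked = pywt.idwt2((LL, (LH, HL_marked, HH)), "haar")
print(watermarked.shape)
```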

Graph Construction Based on Fast Low-Rank Representation in Graph-Based Semi-Supervised Learning (그래프 기반 준지도 학습에서 빠른 낮은 계수 표현 기반 그래프 구축)

  • Oh, Byonghwa;Yang, Jihoon
    • Journal of KIISE
    • /
    • v.45 no.1
    • /
    • pp.15-21
    • /
    • 2018
  • Low-Rank Representation (LRR) based methods are widely used in many practical applications, such as face clustering and object detection, because they can guarantee high prediction accuracy when used to construct graphs in graph-based semi-supervised learning. However, solving the LRR problem requires a singular value decomposition of a square matrix whose size equals the number of data points at every iteration of the algorithm, so the computation is inefficient. To solve this problem, we propose an improved and faster LRR method based on the recently published Fast LRR (FaLRR), and we suggest ways to introduce and optimize additional constraints on the underlying optimization objective in order to address the fact that FaLRR is fast but performs poorly in classification problems. Our experiments confirm that the proposed method finds a better solution than LRR does. We also propose Fast MLRR (FaMLRR), which shows better results when the additional minimization objective is added.
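For context on why plain LRR is expensive, here is a minimal sketch of the singular value thresholding step it repeats at every iteration, which needs a full SVD of an n-by-n matrix for n data points; the matrix and threshold are placeholders.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 500))   # n x n coefficient matrix for n = 500 data points
Z_next = svt(Z, tau=5.0)          # one proximal step; plain LRR repeats this per iteration
print("numerical rank after shrinkage:",
      int(np.sum(np.linalg.svd(Z_next, compute_uv=False) > 1e-8)))
```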

Analysis of Characteristics of Satellite-derived Air Pollutant over Southeast Asia and Evaluation of Tropospheric Ozone using Statistical Methods (통계적 방법을 이용한 동남아시아지역 위성 대기오염물질 분석과 검증)

  • Baek, K.H.;Kim, Jae-Hwan
    • Journal of Korean Society for Atmospheric Environment
    • /
    • v.27 no.6
    • /
    • pp.650-662
    • /
    • 2011
  • Statistical tools such as the empirical orthogonal function (EOF) and singular value decomposition (SVD) have been applied to analyze the characteristics of air pollutants over Southeast Asia, as well as to evaluate Ziemke's tropospheric column ozone (ZTO) determined by the tropospheric residual method. In this study, we found that EOF and SVD analyses are useful methods for extracting the most significant temporal and spatial patterns from enormous amounts of satellite data. The EOF analyses of OMI $NO_2$ and OMI HCHO over Southeast Asia revealed that the spatial pattern showed high correlation with fire count (r=0.8) and with the EOF analysis of CO (r=0.7). This suggests that biomass burning drives a major part of the seasonal variability of $NO_2$ and HCHO over this region. The EOF analysis of ZTO indicated that the location of maximum ZTO was shifted considerably westward from the location of maximum fire count, and that the maximum month of ZTO occurred a month later than the maximum month (March) of $NO_2$, HCHO, and CO. For further analysis, we performed SVD analyses between ZTO and the ozone precursors to examine their correlation and to check the temporal and spatial consistency between the two variables. The spatial pattern of ZTO showed a latitudinal gradient that could result from the latitudinal gradient of stratospheric ozone, and the temporal maximum of ZTO in March appears to be associated with stratospheric ozone variability, which also shows a maximum in March. These results suggest that there are some sources of error in the tropospheric residual method associated with cloud height error, low efficiency of tropospheric ozone, and low accuracy in lower stratospheric ozone.
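A minimal sketch of an EOF analysis computed through the SVD of a centered time-by-space data matrix; the random array stands in for the OMI fields, and the grid size and record length are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(120, 400))    # placeholder: 120 months x 400 grid cells
anomaly = data - data.mean(axis=0)    # remove the time mean at every grid cell

U, s, Vt = np.linalg.svd(anomaly, full_matrices=False)
variance_frac = s**2 / np.sum(s**2)

eof1 = Vt[0]                          # leading spatial pattern (EOF 1)
pc1 = U[:, 0] * s[0]                  # its principal-component time series
print(f"EOF 1 explains {variance_frac[0] * 100:.1f}% of the variance")
```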

No-reference Image Blur Assessment Based on Multi-scale Spatial Local Features

  • Sun, Chenchen;Cui, Ziguan;Gan, Zongliang;Liu, Feng
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.10
    • /
    • pp.4060-4079
    • /
    • 2020
  • Blur is an important type of image distortion, and how to evaluate the quality of a blurred image accurately and efficiently has been a research hotspot in image processing in recent years. Inspired by the multi-scale perceptual characteristics of the human visual system (HVS), this paper presents a no-reference image blur/sharpness assessment method based on multi-scale local features in the spatial domain. First, considering that different content has different sensitivity to blur distortion, the image is divided block-wise into smooth, edge, and texture regions. Then, the Gaussian scale space of the image is constructed, and the categorized contrast features between the original image and the Gaussian scale space images are calculated to express the degree of blur of the different image contents. To simulate the impact of viewing distance on blur distortion, the distribution characteristics of the local maximum gradient of multi-resolution images are also calculated in the spatial domain. Finally, the image blur assessment model is obtained by fusing all features and learning the mapping from features to quality scores with support vector regression (SVR). Performance of the proposed method is evaluated on four synthetically blurred databases and one real blurred database. The experimental results demonstrate that our method produces quality scores more consistent with subjective evaluations than other methods, especially for real blurred images.
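A minimal sketch of only the final step, learning the mapping from pooled features to quality scores with SVR; the features, scores, and SVR hyperparameters are placeholders, not the paper's.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
features = rng.random((200, 12))        # hypothetical per-image blur feature vectors
scores = rng.random(200) * 100          # hypothetical subjective quality scores

model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
model.fit(features[:150], scores[:150])          # fit on a training split
predicted = model.predict(features[150:])        # predict quality of unseen images
print(predicted[:5])
```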

Improving Collaborative Filtering with Rating Prediction Based on Taste Space (협업 필터링 추천시스템에서의 취향 공간을 이용한 평가 예측 기법)

  • Lee, Hyung-Dong;Kim, Hyoung-Joo
    • Journal of KIISE:Databases
    • /
    • v.34 no.5
    • /
    • pp.389-395
    • /
    • 2007
  • Collaborative filtering is a popular information filtering technique for reducing information overload and is widely used in applications such as recommender systems in the e-commerce domain. Collaborative filtering systems collect human ratings and provide predictions based on the ratings of other people who share the same tastes. The quality of predictions depends on the number of items that are commonly rated by people. Therefore, it is difficult to apply a pure collaborative filtering algorithm directly to dynamic collections where items are constantly added or removed. In this paper we suggest a method for managing dynamic collections. It creates a taste space for items using Singular Value Decomposition (SVD) and maintains clusters of core items in that space to estimate the relevance of past and future items. To evaluate the proposed method, we divide a database of user ratings into ratings of old and new items and analyze the predicted ratings of the latter. We also show experimentally that our method can be efficiently applied to dynamic collections.
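A minimal sketch of projecting items into an SVD "taste space" and predicting a rating for an item from the user's ratings of its nearest neighbors in that space; the toy rating matrix and neighborhood size are assumptions, not the paper's clustering scheme.

```python
import numpy as np

R = np.array([                      # users x items, 0 = unrated
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
], dtype=float)

U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2
item_space = (np.diag(s[:k]) @ Vt[:k, :]).T      # each row: one item in taste space

def predict(user, item_vec, n_neighbors=2):
    # estimate a rating from the user's ratings of the closest items in taste space
    dists = np.linalg.norm(item_space - item_vec, axis=1)
    nearest = np.argsort(dists)[:n_neighbors]
    rated = [i for i in nearest if R[user, i] > 0]
    return R[user, rated].mean() if rated else R[user][R[user] > 0].mean()

print(predict(user=0, item_vec=item_space[1]))
```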

A Mobile P2P Semantic Information Retrieval System with Effective Updates

  • Liu, Chuan-Ming;Chen, Cheng-Hsien;Chen, Yen-Lin;Wang, Jeng-Haur
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.5
    • /
    • pp.1807-1824
    • /
    • 2015
  • As technology advances, mobile peer-to-peer (MP2P) networks or systems have become one of the major ways to share resources and information. On such systems, information retrieval (IR), including the development of scalable indexing infrastructures, becomes more complicated due to the huge increase in the amount of information and rapid information change. To keep systems on MP2P networks reliable and consistent, the index structures need to be updated frequently. For a semantic IR system, the index structure is even more complicated than in a classic IR system and generally has a higher update cost. The most well-known indexing technique used in semantic IR systems is Latent Semantic Indexing (LSI), whose index structure is generated by singular value decomposition (SVD). Although LSI performs well, updating the index structure is not easy and is time consuming. In an MP2P environment, which is fully distributed and dynamic, the update becomes even more challenging. In this work, we consider how to update the semantic index generated by LSI and keep the index consistent across the whole MP2P network. The proposed Concept Space Update (CSU) protocol, based on a distributed two-phase locking strategy, can effectively achieve these objectives in terms of two measurements: coverage speed and update cost. With the proposed synchronization mechanism and efficient updates on the SVD, recomputing the whole index on the P2P overlay can be avoided and consistency can be achieved. Simulated experiments are also performed to validate our analysis of the proposed CSU protocol. The experimental results indicate that CSU is effective at updating the concept space with an LSI/SVD index structure in MP2P semantic IR systems.
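A minimal sketch of the centralized LSI/SVD part only (building a concept space and folding a query into it); the documents are placeholders, and the distributed CSU update protocol itself is not reproduced here.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "peer to peer networks share files",
    "semantic retrieval uses latent concepts",
    "mobile peers update the shared index",
]
vec = CountVectorizer()
A = vec.fit_transform(docs).T.toarray().astype(float)   # term-by-document matrix

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_concepts = (np.diag(s[:k]) @ Vt[:k, :]).T           # documents in concept space

query = vec.transform(["peer index update"]).toarray().ravel()
q_concepts = np.linalg.pinv(np.diag(s[:k])) @ U[:, :k].T @ query   # fold the query in
sims = doc_concepts @ q_concepts / (
    np.linalg.norm(doc_concepts, axis=1) * np.linalg.norm(q_concepts) + 1e-12)
print("best matching document:", int(np.argmax(sims)))
```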

Novel Robust High Dynamic Range Image Watermarking Algorithm Against Tone Mapping

  • Bai, Yongqiang;Jiang, Gangyi;Jiang, Hao;Yu, Mei;Chen, Fen;Zhu, Zhongjie
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.9
    • /
    • pp.4389-4411
    • /
    • 2018
  • High dynamic range (HDR) images are becoming pervasive because they capture or render a wider range of luminance, but the special display equipment they require is difficult to popularize because of its high cost and technological problems. Thus, HDR images must be adapted to conventional display devices by applying a tone mapping (TM) operation, which raises the requirements for the intellectual property protection of HDR images. Since the robustness varies regionally in the low dynamic range (LDR) watermarked image after TM, unlike in traditional watermarking technologies, a concept of watermarking activity is defined in this paper and used to characterize the essential difference between watermarking LDR images and HDR images. Then, a novel HDR image watermarking algorithm that is robust against TM operations is proposed. First, based on the hybrid processing of the redundant discrete wavelet transform and singular value decomposition, the watermark is embedded by modifying the structure information of the HDR image. Unlike in LDR image watermarking, a high embedding strength causes more obvious distortion in the high-brightness regions of an HDR image than in the low-brightness regions. Thus, a perceptual brightness mask with low complexity is designed to further improve the imperceptibility. Experimental results show that the proposed algorithm is robust to existing TM operations while taking imperceptibility and embedding capacity into account, and is superior to current state-of-the-art HDR image watermarking algorithms.
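The abstract does not spell out the brightness mask, so the following is only a hedged sketch of the stated idea, reducing the embedding strength where the HDR luminance is high; the log normalization and the 0.8 attenuation factor are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
hdr_luma = rng.random((64, 64)) * 1000.0    # placeholder HDR luminance map (cd/m^2)

base_alpha = 0.05                           # assumed base embedding strength
norm = np.log1p(hdr_luma) / np.log1p(hdr_luma).max()   # compress the dynamic range
alpha_map = base_alpha * (1.0 - 0.8 * norm)            # weaker embedding where bright
print(float(alpha_map.min()), float(alpha_map.max()))
```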