• Title/Summary/Keyword: web division (웹 분할)


Optimal supervised LSA method using selective feature dimension reduction (선택적 자질 차원 축소를 이용한 최적의 지도적 LSA 방법)

  • Kim, Jung-Ho;Kim, Myung-Kyu;Cha, Myung-Hoon;In, Joo-Ho;Chae, Soo-Hoan
    • Science of Emotion and Sensibility
    • /
    • v.13 no.1
    • /
    • pp.47-60
    • /
    • 2010
  • Most classification research has used learning-based models such as kNN (k-Nearest Neighbor) and SVM (Support Vector Machine), or statistics-based methods such as the Bayesian classifier and NNA (Neural Network Algorithm). However, these approaches face space and time limitations when classifying the enormous number of web pages on today's internet. Moreover, most classification studies use a uni-gram feature representation, which captures the real meaning of words poorly. Korean web page classification faces an additional problem: many Korean words carry multiple meanings (polysemy). For these reasons, LSA (Latent Semantic Analysis) has been proposed as a method suited to this environment (large data sets and polysemous words). LSA applies SVD (Singular Value Decomposition), which decomposes the original term-document matrix into three matrices and reduces their dimensions. This creates a new low-dimensional semantic space for representing vectors, which makes classification efficient and exposes the latent meaning of words and documents (or web pages). Although LSA classifies well, it has a drawback: because SVD selects dimensions that represent vectors well rather than dimensions that discriminate between them, LSA does not improve classification performance as much as expected. In this paper, we propose a new LSA that selects the optimal dimensions for both discriminating and representing vectors, minimizing this drawback and improving performance. The proposed method shows better and more stable performance than other LSA variants in low-dimensional spaces. In addition, we obtain a further improvement in classification by creating and selecting features, removing stopwords, and weighting specific values statistically.

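The abstract's key idea — keeping the LSA dimensions that *discriminate* between classes rather than only those with the largest singular values — can be sketched with a simple between-class vs. within-class variance score. This is an illustrative reconstruction with invented names and toy data, not the paper's actual algorithm:

```python
# Hypothetical sketch: score each LSA dimension by how well it separates
# classes (between-class variance / within-class variance), then keep the
# top-k dimensions. All function names and data here are illustrative.

def discrimination_scores(vectors, labels):
    """Score each dimension by between-class vs. within-class variance."""
    dims = len(vectors[0])
    classes = set(labels)
    scores = []
    for d in range(dims):
        column = [v[d] for v in vectors]
        grand_mean = sum(column) / len(column)
        between, within = 0.0, 0.0
        for c in classes:
            members = [v[d] for v, y in zip(vectors, labels) if y == c]
            mean_c = sum(members) / len(members)
            between += len(members) * (mean_c - grand_mean) ** 2
            within += sum((x - mean_c) ** 2 for x in members)
        scores.append(between / (within + 1e-9))
    return scores

def select_dimensions(vectors, labels, k):
    """Indices of the k most discriminative reduced dimensions."""
    scores = discrimination_scores(vectors, labels)
    return sorted(range(len(scores)), key=lambda d: -scores[d])[:k]

# Toy LSA-reduced document vectors: dimension 0 separates the two
# classes cleanly, dimension 1 is mostly noise.
docs = [[0.9, 0.1], [0.8, 0.3], [0.1, 0.2], [0.2, 0.4]]
labels = ["sports", "sports", "politics", "politics"]
print(select_dimensions(docs, labels, 1))  # → [0]
```

The same score could be computed on the output of any SVD routine; only the selection step differs from plain LSA.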

Motion Based Serious Game Using Spatial Information of Game and Web-cam (웹캠과 공간정보를 이용한 체감형 기능성게임)

  • Lee, Young-Jae;Lee, Dae-Ho;Yi, Sang-Hong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.13 no.9
    • /
    • pp.1795-1802
    • /
    • 2009
  • A motion-based serious game is a new style of game and exercise that uses the hands, arms, head, and whole body. The space the player can reach with these movements becomes the game space, where interaction takes place. We propose an efficient algorithm for dividing and analyzing the game space that provides spatial information for the collision avoidance of game objects. We divide the game space into 9 parts, check the enemy's position and the player's up, down, left, and right movements, and calculate an optimal path for the enemy to avoid collision. To evaluate the method, we implemented a motion-based serious game consisting of a web cam, a player, and an enemy, and obtained valid results for collision avoidance. The results demonstrate that the proposed approach is robust: when the player moves in front of the enemy, the enemy waits, finds an open position, and moves there to avoid collision. This algorithm can serve as a basis for developing level control and effective interaction methods for motion-based serious games.
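The 9-part spatial division described above amounts to mapping frame coordinates onto a 3×3 grid and comparing the player's and enemy's cells. A minimal sketch, with all names and the avoidance rule being our own assumptions rather than the paper's implementation:

```python
# Illustrative sketch of 3x3 game-space division: the webcam frame is
# split into 9 cells, and the enemy either waits (player far away) or
# steps to an adjacent cell away from the player.

def zone_of(x, y, width, height):
    """Return (row, col) of the 3x3 grid cell containing point (x, y)."""
    col = min(int(3 * x / width), 2)
    row = min(int(3 * y / height), 2)
    return row, col

def avoidance_move(enemy, player, width, height):
    """Pick the enemy's next cell, or 'wait' if the player is far away."""
    er, ec = zone_of(*enemy, width, height)
    pr, pc = zone_of(*player, width, height)
    if abs(er - pr) + abs(ec - pc) > 1:
        return "wait"                      # player far: hold and observe
    dr = (er > pr) - (er < pr)             # sign of (er - pr)
    dc = (ec > pc) - (ec < pc)
    if dr == 0 and dc == 0:                # same cell: dodge sideways
        dc = 1 if ec < 2 else -1
    return max(0, min(2, er + dr)), max(0, min(2, ec + dc))

print(zone_of(500, 100, 640, 480))  # → (0, 2): top-right cell
```

A real system would feed `zone_of` with player coordinates tracked from webcam frames; the grid logic itself is resolution-independent.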

Time-Series based Dataset Selection Method for Effective Text Classification (효율적인 문헌 분류를 위한 시계열 기반 데이터 집합 선정 기법)

  • Chae, Yeonghun;Jeong, Do-Heon
    • The Journal of the Korea Contents Association
    • /
    • v.17 no.1
    • /
    • pp.39-49
    • /
    • 2017
  • As Internet technology advances, the amount of data on the web is increasing sharply, and many studies have examined incremental learning for classifying such growing data effectively. Web documents contain time-series data such as their publication dates, and reflecting this information in classification can make it more effective. In this study, we analyze the time-series variation of words and propose an efficient classification scheme that divides the dataset based on this time-series analysis. For the experiment, we collected 1 million online news articles with time-series information, divided the dataset, and classified it using SVM and Naïve Bayes. Classification performance increases in each model, showing that reflecting time-series information can improve classification performance.
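The dataset division step described above — grouping time-stamped documents into period-based subsets before training — can be sketched as follows. The bin granularity and record layout are our assumptions for illustration:

```python
# Minimal sketch of period-based dataset division for time-stamped
# documents. Record format (date, text, label) and yearly binning are
# illustrative choices, not the paper's exact setup.
from datetime import date

def split_by_period(docs, years_per_bin=1):
    """Group (date, text, label) records into training bins by year."""
    bins = {}
    for published, text, label in docs:
        key = published.year // years_per_bin * years_per_bin
        bins.setdefault(key, []).append((text, label))
    return bins

news = [
    (date(2015, 3, 1), "market rally", "economy"),
    (date(2015, 9, 2), "election poll", "politics"),
    (date(2016, 1, 5), "rate decision", "economy"),
]
bins = split_by_period(news)
print(sorted(bins))  # → [2015, 2016]
```

Each bin would then be used to train and evaluate a classifier (SVM or Naïve Bayes in the paper's experiments) on the documents of that period.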

Development of Structural Analysis Platform through Internet-based Technology Using Component Models (컴포넌트 모델을 이용한 인터넷 기반 구조해석 플랫폼 개발)

  • Shin Soo-Bong;Park Hun-Sung
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.19 no.2 s.72
    • /
    • pp.161-169
    • /
    • 2006
  • This study proposes component models for developing an efficient platform for internet-based structural analysis. Since structural analysis requires running complicated algorithms, client-side computation using X-Internet is preferred to server-side computation to provide a flexible service to multiple users. To compete with the user-friendly interfaces of available commercial analysis programs, a window-based interface using Smart Client was applied. Component-based programming was also performed with reusability and expandability in mind, so that active preparation for future changes or modifications is feasible. The components describe the whole system through subdivision and simplification. In the relationships between upper- and lower-level components, and between components and objects, a unified interface was used to clearly define the connections between the libraries. By performing data communication between different types of platforms using XML Web Services, a cornerstone for data transfer toward future integrated CAE is laid. The efficiency of the developed platform was examined through a sample structural analysis and design of planar truss structures.
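The unified-interface idea above — upper-level components talking to lower-level ones only through a shared contract, so implementations can be swapped — can be illustrated with a small sketch. The class names and the trivial solver are hypothetical, not the paper's components:

```python
# Hypothetical sketch of a unified component interface: the platform
# depends only on the AnalysisComponent contract, so any solver that
# implements it can be plugged in without changing upper-level code.
from abc import ABC, abstractmethod

class AnalysisComponent(ABC):
    """Unified interface between upper- and lower-level components."""
    @abstractmethod
    def run(self, model: dict) -> dict: ...

class TrussSolver(AnalysisComponent):
    def run(self, model: dict) -> dict:
        # placeholder: a real solver would assemble and solve K u = f
        return {"displacements": [0.0] * model["dof"]}

class Platform:
    def __init__(self, solver: AnalysisComponent):
        self.solver = solver  # dependency injected via the interface

    def analyze(self, model: dict) -> dict:
        return self.solver.run(model)

result = Platform(TrussSolver()).analyze({"dof": 6})
print(result)  # → {'displacements': [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]}
```

In the paper's setting the same separation would let the Smart Client interface, analysis components, and XML Web Service transport evolve independently.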

Semantic Information Inference among Objects in Image Using Ontology (온톨로지를 이용한 이미지 내 객체사이의 의미 정보 추론)

  • Kim, Ji-Won;Kim, Chul-Won
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.15 no.3
    • /
    • pp.579-586
    • /
    • 2020
  • Web pages contain a large amount of multimedia data, and methods for extracting semantic information from low-level visual features for accurate retrieval are being studied. However, most of these techniques extract one piece of information from a single image, so it is difficult to extract semantic information when multiple objects are combined in an image. In this paper, low-level features are extracted to detect the various objects and backgrounds in an image, and an SVM divides them into predefined background and object classes. The divided objects and backgrounds are organized in an ontology, and an inference engine infers semantic information about their locations and associations, making it possible to extract semantic information. We propose this method to process complex, high-level semantic information in images.
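The inference step described above can be sketched as matching the classifier's detected object/background labels against a small set of ontology rules. The rules and labels below are toy examples, not the paper's ontology:

```python
# Toy sketch of ontology-based inference: object/background labels
# produced by the SVM stage are matched against association rules to
# yield higher-level scene semantics. Rules here are illustrative.
RULES = {
    frozenset({"boat", "sea"}): "sailing scene",
    frozenset({"person", "mountain"}): "hiking scene",
}

def infer_semantics(detected):
    """Return the semantic labels whose rule patterns are all detected."""
    found = []
    for pattern, meaning in RULES.items():
        if pattern <= set(detected):
            found.append(meaning)
    return found

print(infer_semantics(["boat", "sea", "sky"]))  # → ['sailing scene']
```

A real system would express such rules in an ontology language (e.g. OWL) and run them through a reasoner rather than a dictionary lookup.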

A study of improve vectorising technique on the internet (인터넷에서의 개선된 벡터라이징 기법에 관한 연구)

  • 김용호;이윤배
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.6 no.2
    • /
    • pp.271-281
    • /
    • 2002
  • Currently, most web designers guarantee high quality by using bitmap graphics with fixed font sizes, but this approach has drawbacks in file size and flexibility. In particular, to provide high-quality banner and advertisement characters, designers must use a bitmap editing program and then add the resulting bitmap data to HTML documents. In this study, we introduce a few new tags at the front of HTML documents, providing methods that can produce diverse effects. Because text information is stored and rendered on screen from simple control points and outline information, Hangul characters can be expressed in a web browser with the same quality as printed documents. Regardless of the platform, characters can be rendered exactly and with diverse effects.

A Study on Video Search Method using the Image map (이미지 맵을 이용한 동영상 검색 제공방법에 관한 연구 - IPTV 환경을 중심으로)

  • Lee, Ju-Hwan;Lea, Jong-Ho
• Proceedings of the HCI Society of Korea Conference
    • /
    • 2008.02b
    • /
    • pp.298-303
    • /
    • 2008
  • Watching a program on IPTV, among the numerous choices available from the internet, imposes the burden of searching and browsing for a favorite one. This paper introduces a new concept called the Mosaic Map and presents how it provides preview information through image-map links to other programs. In a Mosaic Map, the pixels of a still image serve both as background shading and as thumbnails that link to other programs. This kind of contextualized preview of choices helps IPTV users associate the image with related programs without making visual saccades between watching IPTV and browsing the many choices. Experiments showed that the Mosaic Map reduces the time to complete search and browsing compared to the legacy menu and web search.


Effect of Solubilization Conditions on Molecular Weight Distribution of Enzymatically-Hydrolyzed Silk Peptides (실크의 가용화 조건이 효소분해 실크 펩타이드의 분자량 분포에 미치는 영향)

  • 채희정;인만진;김의용
    • KSBB Journal
    • /
    • v.13 no.1
    • /
    • pp.114-118
    • /
    • 1998
  • The effects of fibroin solubilization conditions on the molecular weight distribution of enzymatically-hydrolyzed silk peptides were investigated. The weight-averaged molecular weights of silk proteins prepared by solubilization with calcium chloride, ethylenediamine, and sulfuric acid were 41600, 3308, and 1268 dalton, respectively. Silk peptides in the average molecular weight range of 600-1200 dalton were obtained from solubilized silk fibroin by protease treatment. After acid hydrolysis of the silk protein with hydrochloric acid for 24 hr, the protein was hydrolyzed to peptides whose average molecular weight and free amino acid content were 145 dalton and 80%, respectively. The molecular weight distribution of silk peptides could be controlled by combining solubilization and hydrolysis methods. Among the various treatments, acid solubilization followed by protease treatment had the advantage of molecular weight control for peptide production.


Analytic Approach to e-Transformation of Intermediary (중개유통기업의 e-트랜스포메이션: 분석적 접근)

  • Han, Hyun-Soo
    • Information Systems Review
    • /
    • v.5 no.2
    • /
    • pp.1-21
    • /
    • 2003
  • In this paper, we investigate an industrial product intermediary's transformation strategy that exploits the advantages afforded by web-based information technologies. Our motivation stems from exploring the intermediary's strategy for responding to suppliers' threats of disintermediation. From a transaction cost perspective, the internet can induce both vertical quasi-integration (electronic hierarchy) and outsourcing (electronic markets). Our rationale for directing one of these bi-directional movements rests on the intermediary's value adding in the supply chain. Accordingly, we investigated supply chain performance, the effects of IT on customers' requirements for channel functions, and the channel power structure. Propositions suggesting contingent e-transformation strategic alternatives are logically derived from dyadic supply chain characteristics, such as efficiency versus customer service and supplier dominance versus easy replaceability of suppliers. The contingent e-transformation framework developed from the intermediary's perspective is reviewed through a longitudinal industry case analysis, whose implications give insight into the effectiveness of the framework in combining supply chain characteristics with the intermediary's e-transformation.

Combining deep learning-based online beamforming with spectral subtraction for speech recognition in noisy environments (잡음 환경에서의 음성인식을 위한 온라인 빔포밍과 스펙트럼 감산의 결합)

  • Yoon, Sung-Wook;Kwon, Oh-Wook
    • The Journal of the Acoustical Society of Korea
    • /
    • v.40 no.5
    • /
    • pp.439-451
    • /
    • 2021
  • We propose a deep learning-based beamformer combined with spectral subtraction for continuous speech recognition in noisy environments. Conventional beamforming systems have mostly been evaluated on pre-segmented audio signals, typically generated by continuously mixing speech and noise on a computer. However, since speech utterances occur sparsely along the time axis in real environments, conventional beamforming systems degrade when noise-only signals without speech are input. To alleviate this drawback, we combine an online beamforming algorithm with spectral subtraction. We construct a Continuous Speech Enhancement (CSE) evaluation set to evaluate the online beamforming algorithm in noisy environments; it is built by mixing sparsely occurring speech utterances from the CHiME3 evaluation set with continuously played CHiME3 background noise and background music from MUSDB. Using a Kaldi-based toolkit and the Google web speech recognizer as the speech recognition back-end, we confirm that the proposed online beamforming algorithm with spectral subtraction outperforms the baseline online algorithm.
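The spectral subtraction stage mentioned above can be sketched in its simplest magnitude-domain form: an estimated noise spectrum is subtracted bin by bin, with a floor to avoid negative magnitudes. The over-subtraction factor, floor value, and data are our illustrative assumptions, not the paper's parameters:

```python
# Minimal magnitude spectral subtraction sketch. A noise magnitude
# estimate (e.g. accumulated over frames the beamformer flags as
# noise-only) is subtracted from each bin of a noisy frame's spectrum,
# floored to a small fraction of the original magnitude.
def spectral_subtract(signal_mag, noise_mag, alpha=1.0, floor=0.01):
    """Subtract an estimated noise magnitude spectrum, bin by bin."""
    out = []
    for s, n in zip(signal_mag, noise_mag):
        out.append(max(s - alpha * n, floor * s))
    return out

speech = [1.0, 0.8, 0.5, 0.2]  # magnitude spectrum of one noisy frame
noise = [0.3, 0.3, 0.3, 0.3]   # running noise magnitude estimate
print(spectral_subtract(speech, noise))
```

In a full pipeline this would run on the beamformer's output spectrogram, and the enhanced magnitudes would be recombined with the noisy phase before inverse-transforming back to the waveform.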