• Title/Summary/Keyword: vector computer


An Enhanced Search Algorithm for Fast Motion Estimation using Sub-Pixel (부화소 단위의 빠른 움직임 예측을 위한 개선된 탐색 알고리즘)

  • Kim, Dae-Gon; Yoo, Cheol-Jung
    • Journal of the Korea Society of Computer and Information, v.16 no.12, pp.103-112, 2011
  • Motion estimation (ME) is an important component of the video encoding process because it accounts for a large share of the computational complexity. H.264/AVC incurs additional overhead for fractional-pel search and interpolation, which further increases the computational load. In motion estimation, the SATD (sum of absolute transformed differences) behaves approximately as a parabola around its minimum point. In this paper, we propose a new prediction algorithm that exploits this sub-pixel interpolation characteristic to reduce the number of search points in motion estimation. The proposed algorithm shortens encoding time by decreasing the computational complexity. Experimental results show that the proposed method reduces the computational complexity of motion estimation by about 20%, while the degradation in video quality is negligible.
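
A minimal sketch of the parabolic-minimum idea described above, assuming three SATD samples at neighbouring half-pel offsets; the function name, offsets, and values are illustrative, not the paper's implementation:

```python
# Sketch: predict the sub-pixel minimum of a parabolic SATD curve from three
# samples, so the encoder can skip most of the remaining fractional-pel search
# points. All names and values here are illustrative assumptions.

def predict_subpel_minimum(satd_left, satd_center, satd_right):
    """Fit a parabola through SATD values at offsets -1, 0, +1 (in half-pel
    units) and return the estimated offset of the minimum."""
    denom = satd_left - 2.0 * satd_center + satd_right
    if denom <= 0:          # not convex: fall back to the best sampled point
        return 0.0
    offset = 0.5 * (satd_left - satd_right) / denom
    return max(-1.0, min(1.0, offset))   # clamp to the sampled interval

# Example: SATD measured at three half-pel positions around the integer-pel best.
print(predict_subpel_minimum(420.0, 380.0, 400.0))
# ~ +0.17 -> only the nearby quarter-pel candidates need to be checked
```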

Building a Business Knowledge Base by a Supervised Learning and Rule-Based Method

  • Shin, Sungho; Jung, Hanmin; Yi, Mun Yong
    • KSII Transactions on Internet and Information Systems (TIIS), v.9 no.1, pp.407-420, 2015
  • Natural Language Question Answering (NLQA) and Prescriptive Analytics (PA) were identified as innovative, emerging technologies for 2015 by the Gartner Group. These technologies require knowledge bases built from data extracted from unstructured text. Every business needs a knowledge base for business analytics, as it can enhance a company's competitiveness in its industry, and most intelligent or analytic services depend heavily on such knowledge bases. However, building a high-quality knowledge base is very time consuming and requires considerable effort, especially when it is created manually. Another problem is that a manually built knowledge base is often outdated by the time it is completed and requires constant updating even after it is put to use. For these reasons, it is more advisable to have a computer build the knowledge base. This research focuses on building a computerized business knowledge base using a supervised learning and rule-based method. The proposed method is based on information extraction, but it has been specialized and modified to extract only business-related information. The business knowledge base created by our system can also support advanced functions such as presenting the hierarchy of technologies and products and the relations between technologies and products, and these relations can be expanded and customized according to business requirements.
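
As a rough illustration of the rule-based side of such a pipeline, one could extract technology-product relations with simple lexical patterns; the pattern, relation label, and sample sentence below are invented for the example and are not the paper's actual rule set:

```python
import re

# Sketch: toy rule-based extraction of (product, relation, technology) triples.
# The pattern and the sample sentence are illustrative assumptions.

RELATION_PATTERNS = [
    # "<product> is based on <technology>", "<product> uses <technology>"
    (re.compile(r"(?P<product>[A-Z][\w\- ]+?) (?:is based on|uses|adopts) "
                r"(?P<technology>[A-Z][\w\- ]+)"), "USES"),
]

def extract_relations(sentence):
    triples = []
    for pattern, relation in RELATION_PATTERNS:
        for m in pattern.finditer(sentence):
            triples.append((m.group("product").strip(), relation,
                            m.group("technology").strip()))
    return triples

print(extract_relations("Galaxy S6 uses OLED display technology."))
# [('Galaxy S6', 'USES', 'OLED display technology')]
```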

A Study on the Development and Surface Roughness of Roller Cam SCM415 by 5-Axis Machining (5축 가공에 의한 SCM415 롤러 캠 개발과 표면조도 연구)

  • Kim, Jin Su; Lee, Dong Seop; Kang, Seong Ki
    • Journal of the Korean Society for Precision Engineering, v.30 no.4, pp.397-402, 2013
  • In this study, SCM415 workpieces, prepared both before and after heat treatment, were machined section by section on a 3+2-axis machining center, with grinding by a GC (green silicon carbide) whetstone integrated after end-mill cutting, at a spindle speed of 8,000 rpm and a feed rate of 150 mm/min. The centerline average roughness (Ra) was measured over 10 sections. Finite element analysis was used to evaluate the load on the assembled parts when an 11 kg load was applied to both sides of the ATC (automatic tool change) arm. The results are as follows. For the non-heat-treated workpieces, the best centerline average roughness (Ra), 0.510 μm, appeared in the tenth section, near the straight-line portion where the deformation of the curve is smallest. In addition, surface roughness deteriorates where the tool path becomes long due to angle changes and the depth of cut is more inclined, because chips are not discharged smoothly.

Interaction Technique in Smoke Simulations using Mouth-Wind on Mobile Devices (모바일 디바이스에서 사용자의 입 바람을 이용한 연기 시뮬레이션의 상호작용 방법)

  • Kim, Jong-Hyun
    • Journal of the Korea Computer Graphics Society, v.24 no.4, pp.21-27, 2018
  • In this paper, we propose a real-time interaction method that uses the user's mouth wind on a mobile device. In mobile and virtual reality environments, user interaction technology is important, but the variety of available user interface methods is still limited; most existing techniques rely on touch-screen input or motion recognition. In this study, we propose an interface technique that enables real-time interaction using the user's mouth wind. The direction of the wind is determined from the angle and relative position between the user and the mobile device, and the strength of the wind is calculated from the magnitude of the user's blowing. To show the superiority of the proposed technique, we integrate the mouth-wind interface into a Navier-Stokes solver and visualize the resulting flow of the vector field in real time. Although the results are demonstrated on mobile devices, the method can also be applied in augmented reality (AR) and virtual reality (VR) applications that require such interface technology.
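
A minimal sketch of how such an interface could feed a grid-based solver; the grid size, falloff, scaling constants, and function names are assumptions, and the actual paper couples the wind to a full Navier-Stokes simulation:

```python
import numpy as np

# Sketch: convert a mouth-wind reading into an external force on a 2D velocity
# grid, as one step before the usual advection/projection of a stable-fluids
# style solver. Grid size and constants are illustrative assumptions.

N = 64                                   # grid resolution
u = np.zeros((N, N))                     # x-velocity field
v = np.zeros((N, N))                     # y-velocity field

def apply_mouth_wind(u, v, angle_rad, strength, mouth_cell, radius=3, dt=1/60):
    """Add a directional force around the cell closest to the user's mouth.

    angle_rad  : blow direction, estimated from the user/device orientation
    strength   : magnitude estimated from the blowing signal
    mouth_cell : (i, j) grid cell nearest the mouth position
    """
    di, dj = np.cos(angle_rad), np.sin(angle_rad)
    i0, j0 = mouth_cell
    for i in range(max(0, i0 - radius), min(N, i0 + radius + 1)):
        for j in range(max(0, j0 - radius), min(N, j0 + radius + 1)):
            falloff = np.exp(-((i - i0) ** 2 + (j - j0) ** 2) / (radius ** 2))
            u[i, j] += dt * strength * di * falloff
            v[i, j] += dt * strength * dj * falloff

# Example: a blow of strength 5 at 30 degrees, centered near the grid corner.
apply_mouth_wind(u, v, np.radians(30), strength=5.0, mouth_cell=(8, 8))
```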

3D Wave Propagation Loss Modeling in Mobile Communication using MLP's Function Approximation Capability (MLP의 함수근사화 능력을 이용한 이동통신 3차원 전파 손실 모델링)

  • Yang, Seo-Min; Lee, Hyeok-Jun
    • Journal of KIISE: Software and Applications, v.26 no.10, pp.1143-1155, 1999
  • In cellular mobile communication systems, wave propagation models are generally used to predict cell coverage. The propagation loss caused by obstacles along the propagation path, however, is a highly nonlinear function that cannot easily be expressed mathematically. In this paper, we introduce a method of producing propagation loss prediction models by function approximation with neural networks. The propagation loss is assumed to be a function of relevant parameters such as the distance from the base-station antenna, the specification of the transmitter antenna, the obstacle profile, diffraction effects, and the influence of roads and water. The values of these parameters are produced from field measurement data, 3D digital terrain maps, and vector maps by a feature extraction process that takes into account physical characteristics of electromagnetic waves such as reflection, diffraction, and scattering. The extracted values are used as inputs to a neural network, which is trained to become the propagation loss prediction model. In an experimental study on a real service environment in downtown Seoul, this model achieves a considerable improvement in prediction accuracy over the COST-231 model.
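
A compact sketch of the function-approximation setup described above, using a generic MLP regressor; the feature names and the synthetic training data are assumptions, since the paper derives its features from field measurements and 3D/vector maps:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Sketch: learn propagation loss (dB) as a function of propagation-environment
# features. Features and synthetic data below are illustrative assumptions.

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(0.05, 5.0, n),   # distance from base-station antenna (km)
    rng.uniform(0.0, 30.0, n),   # obstacle penetration measure
    rng.uniform(0.0, 1.0, n),    # diffraction measure
    rng.integers(0, 2, n),       # road along the path (0/1)
    rng.integers(0, 2, n),       # water along the path (0/1)
])
# Synthetic target loosely shaped like a log-distance loss model, for demo only.
y = 120 + 35 * np.log10(X[:, 0] + 0.05) + 0.8 * X[:, 1] + 6 * X[:, 2] \
    + rng.normal(0, 2, n)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict(X[:3]))      # predicted path loss in dB for the first samples
```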

A Study on the simulation software of tapestry in textile design (타피스트리 제작 시뮬레이션 소프트웨어 개발 연구)

  • 손은하; 김성곤
    • Archives of Design Research, v.15 no.1, pp.359-368, 2002
  • Fabric art is one of the most distinguished fields of modern art. Among its forms, tapestry, whose history goes back to the beginnings of human civilization, has been studied in increasing depth and is still presented in works and exhibitions of various types. However, because of the large scale of the working process, it demands a great deal of time and hard effort. In current practice, the main work begins after sampling, but it is impossible to know whether the result will turn out well until the work is finished, so the whole process often has to be repeated. For this reason, we devised a computer simulation. With the simulation, the whole work can be previewed, more time can be devoted to planning, and needless time can be reduced. The simulation consists of scanning, drawing, line clean-up, rendering, parity, and printing. Scanning and drawing can generate new ideas; during line clean-up, a vector image is created and the image is modified; the image is then rendered, and the length and weight of the yarn are estimated; the last steps are printing and packaging the software by means of a prototype. Through this process, producers and students can approach tapestry more easily and reduce needless time and effort.


Rapid Implementation of 3D Facial Reconstruction from a Single Image on an Android Mobile Device

  • Truong, Phuc Huu; Park, Chang-Woo; Lee, Minsik; Choi, Sang-Il; Ji, Sang-Hoon; Jeong, Gu-Min
    • KSII Transactions on Internet and Information Systems (TIIS), v.8 no.5, pp.1690-1710, 2014
  • In this paper, we propose a rapid implementation of 3-dimensional (3D) facial reconstruction from a single frontal face image and introduce a design for its application on a mobile device. The proposed system can effectively reconstruct human faces in 3D using an approach that is robust to lighting conditions and a fast depth-estimation method based on a Canonical Correlation Analysis (CCA) algorithm. The reconstruction system first builds a 3D facial mapping from a personal identity vector of a face image. This mapping is then applied to real-world images captured with the built-in camera of a mobile device to form the corresponding 3D depth information. Finally, the facial texture from the face image is extracted and added to the reconstruction result. Experiments with an Android phone show that the implementation of this system as an Android application performs well. The advantage of the proposed method is that it can reconstruct almost any facial image captured in the real world easily and with fast computation, as demonstrated by the Android application, which requires only a short time to reconstruct the 3D depth map.
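
To illustrate the CCA-based depth estimation step in isolation, a hedged sketch follows; the dimensions and random training data are placeholders, and the identity-vector and texture stages of the actual system are omitted:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Sketch: learn a mapping from 2D face feature vectors to depth-map vectors
# with Canonical Correlation Analysis. Dimensions and data are placeholder
# assumptions; the real system uses personal identity vectors of face images.

rng = np.random.default_rng(0)
n_train, d_feature, d_depth = 200, 100, 32 * 32

X_train = rng.normal(size=(n_train, d_feature))        # face feature vectors
Y_train = rng.normal(size=(n_train, d_depth))          # flattened depth maps

cca = CCA(n_components=20, max_iter=1000)
cca.fit(X_train, Y_train)

x_new = rng.normal(size=(1, d_feature))                # feature of a new face image
depth_estimate = cca.predict(x_new).reshape(32, 32)    # estimated depth map
print(depth_estimate.shape)                            # (32, 32)
```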

A Layer-by-Layer Learning Algorithm using Correlation Coefficient for Multilayer Perceptrons (상관 계수를 이용한 다층퍼셉트론의 계층별 학습)

  • Kwak, Young-Tae
    • Journal of the Korea Society of Computer and Information, v.16 no.8, pp.39-47, 2011
  • Ergezinger's method, one of the layer-by-layer algorithms for multilayer perceptrons, assumes a single output node and can cause premature saturation of the output weights because it uses a linear least-squares method in the output layer. This saturation slows learning and hinders convergence. This paper therefore extends Ergezinger's method so that it can use an output vector instead of a single output node, and introduces a learning rate to improve learning time and convergence. The learning rate is a variable rate that reflects the correlation coefficient between the new and previous weights when the hidden-layer weights are updated. To compare the proposed method with Ergezinger's method, we tested iris recognition and nonlinear function approximation. The proposed method showed better learning convergence than Ergezinger's method, and in CPU time, including the correlation coefficient computation, it saved about 35% compared with the previous method.
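
A rough sketch of a correlation-based variable learning rate for the hidden-layer update; the blending rule below is an assumption used only to illustrate the idea and may differ from the paper's exact formula:

```python
import numpy as np

# Sketch: scale the hidden-layer weight update by the correlation coefficient
# between the newly computed weights and the previous weights. The specific
# blending rule is an illustrative assumption, not the paper's formula.

def corr_coefficient(w_new, w_old):
    return float(np.corrcoef(w_new.ravel(), w_old.ravel())[0, 1])

def update_hidden_weights(w_old, w_new, base_rate=1.0):
    """Move from the previous weights toward the newly solved weights,
    damping the step when the two are weakly correlated."""
    r = corr_coefficient(w_new, w_old)
    rate = base_rate * max(0.0, r)        # weak/negative correlation -> small step
    return w_old + rate * (w_new - w_old)

w_old = np.random.randn(8, 4)
w_new = w_old + 0.1 * np.random.randn(8, 4)
print(corr_coefficient(w_new, w_old))     # close to 1 -> nearly full step
```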

Color Image Segmentation Based on Edge Salience Map and Region Merging (경계 중요도 맵 및 영역 병합에 기반한 칼라 영상 분할)

  • Kim, Sung-Young
    • Journal of the Korea Society of Computer and Information, v.12 no.3, pp.105-113, 2007
  • In this paper, an image segmentation method based on an edge salience map and region merging is presented. The edge salience map is computed by combining a texture edge map with a color edge map. The texture edge map is computed over multiple spatial orientations and frequencies using Gabor filters, and the color edge map is computed on the H component of the HSI color model. The watershed transformation is then applied to the edge salience map to find homogeneous regions in which the dissimilarity of the color and texture distributions is relatively low. Because the watershed transformation tends to over-segment images, a morphological operation is first applied to the edge salience map to enhance its contrast and to find marker regions. Region characteristics, namely a Gabor texture vector and a mean color, are then defined for the segmented regions, and regions with similar characteristics are merged. Experimental results demonstrate superior segmentation results for various images.
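
The pipeline above maps fairly directly onto standard image-processing primitives; a condensed sketch using scikit-image follows, where the filter parameters, marker threshold, and salience weighting are simplified assumptions and the region-merging stage is omitted:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage import color, data, filters, segmentation

# Sketch: edge salience map from Gabor texture edges plus a color edge on the
# H channel, then watershed on the salience map. Parameters are illustrative
# assumptions; region merging is left out for brevity.

rgb = data.astronaut()                       # sample RGB image
gray = color.rgb2gray(rgb)
hue = color.rgb2hsv(rgb)[..., 0]

# Texture edges: magnitude of Gabor responses over a few orientations.
texture_edge = np.zeros_like(gray)
for theta in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4):
    real, imag = filters.gabor(gray, frequency=0.3, theta=theta)
    texture_edge += np.hypot(real, imag)
texture_edge /= texture_edge.max()

color_edge = filters.sobel(hue)              # color edge on the H component
salience = 0.5 * texture_edge + 0.5 * color_edge

# Markers: connected low-salience regions; watershed grows regions from them.
markers, _ = ndi.label(salience < 0.05)
labels = segmentation.watershed(salience, markers)
print(labels.max(), "regions before merging")
```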


Self-Supervised Document Representation Method

  • Yun, Yeoil; Kim, Namgyu
    • Journal of the Korea Society of Computer and Information, v.25 no.5, pp.187-197, 2020
  • Recently, various methods of text embedding based on deep learning algorithms have been proposed. In particular, pre-trained language models trained on tremendous amounts of text data are now the main approach for embedding new text. However, traditional pre-trained language models have a limitation: it is hard for them to capture the unique context of new text when the text contains too many tokens. In this paper, we propose a self-supervised fine-tuning method for pre-trained language models that infers vectors for long texts. We applied our method to news articles, classified them into categories, and compared the classification accuracy with that of traditional models. The results confirm that the vectors generated by the proposed model express the inherent characteristics of a document more accurately than the vectors generated by the traditional models.
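
A hedged sketch of a chunk-and-pool strategy for embedding a document longer than a model's token limit; the model name, chunk size, and mean pooling are assumptions for illustration and are not the paper's self-supervised fine-tuning procedure:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Sketch: embed a long document by encoding it in fixed-size chunks and
# averaging the chunk vectors. Model name, chunk size, and pooling are
# illustrative assumptions; the paper additionally fine-tunes the language
# model with a self-supervised objective.

MODEL_NAME = "bert-base-multilingual-cased"    # placeholder pre-trained model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)

def embed_long_document(text, chunk_tokens=256):
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    chunk_vectors = []
    with torch.no_grad():
        for start in range(0, len(ids), chunk_tokens):
            chunk = ids[start:start + chunk_tokens]
            input_ids = torch.tensor([chunk])
            out = model(input_ids=input_ids).last_hidden_state  # (1, len, hidden)
            chunk_vectors.append(out.mean(dim=1))                # mean-pool tokens
    return torch.cat(chunk_vectors).mean(dim=0)                  # average over chunks

doc_vector = embed_long_document("A very long news article ... " * 200)
print(doc_vector.shape)        # hidden-size document vector, e.g. torch.Size([768])
```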