• Title/Summary/Keyword: 벡터요소 (vector element)


P-Version Model Based on Hierarchical Axisymmetric Element (계층적 축대칭요소에 의한 P-version모델)

  • Woo, Kwang Sung; Chang, Yong Chai; Jung, Woo Sung
    • KSCE Journal of Civil and Environmental Engineering Research / v.12 no.4_1 / pp.67-76 / 1992
  • A hierarchical formulation based on the p-version of the finite element method for linear elastic axisymmetric stress analysis is presented. This is accomplished by introducing additional nodal variables into the element displacement approximation on the basis of integrals of Legendre polynomials. Since the displacement approximation is hierarchical, the resulting element stiffness matrix and equivalent nodal load vectors are also hierarchical. The merits of the proposed element are as follows: i) improved conditioning, ii) ease of joining finite elements of different polynomial order, and iii) reuse of previous solutions and computations when attempting a refinement. Numerical examples are presented to demonstrate the accuracy, efficiency, modeling convenience, robustness and overall superiority of the present formulation. The results obtained from the present formulation are also compared with those available in the literature as well as with analytical solutions.
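
The hierarchical displacement approximation described above can be illustrated with the standard one-dimensional shape functions built from integrals of Legendre polynomials. The snippet below is a minimal sketch rather than the paper's axisymmetric formulation; the function name and the use of NumPy are assumptions made for illustration.

```python
# Minimal sketch: 1-D hierarchical shape functions from integrals of Legendre
# polynomials, the kind of basis used in p-version elements (not the paper's code).
import numpy as np
from numpy.polynomial import legendre as leg

def hierarchical_shape_functions(xi, p):
    """Evaluate the p+1 hierarchical shape functions at points xi in [-1, 1]."""
    xi = np.asarray(xi, dtype=float)
    modes = [0.5 * (1.0 - xi), 0.5 * (1.0 + xi)]          # two linear vertex modes
    for i in range(2, p + 1):
        # Internal mode N_i = (P_i - P_{i-2}) / sqrt(4i - 2), which equals
        # sqrt((2i - 1) / 2) times the integral of P_{i-1} from -1 to xi.
        P_i = leg.legval(xi, [0.0] * i + [1.0])
        P_im2 = leg.legval(xi, [0.0] * (i - 2) + [1.0])
        modes.append((P_i - P_im2) / np.sqrt(4.0 * i - 2.0))
    return np.vstack(modes)                               # shape (p + 1, len(xi))

if __name__ == "__main__":
    print(hierarchical_shape_functions(np.linspace(-1.0, 1.0, 5), p=4).round(4))
```

Because the modes of order p are a subset of those of order p + 1, raising the polynomial order only appends rows, which is what makes the element stiffness matrix and load vectors hierarchical.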


Analysis of Skin Color Pigments from Camera RGB Signal Using Skin Pigment Absorption Spectrum (피부색소 흡수 스펙트럼을 이용한 카메라 RGB 신호의 피부색 성분 분석)

  • Kim, Jeong Yeop
    • KIPS Transactions on Software and Data Engineering / v.11 no.1 / pp.41-50 / 2022
  • In this paper, a method is proposed to calculate the major components of skin color, such as melanin and hemoglobin, directly from the RGB signal of a camera. These components are typically obtained by measuring spectral reflectance with dedicated equipment and recombining the values at selected wavelengths; indices computed this way include the melanin index and the erythema index, and they require special equipment such as a spectral reflectance measuring device or a multi-spectral camera. A direct calculation of such components from an ordinary digital camera is difficult, and an indirect method has previously been proposed that estimates melanin and hemoglobin concentrations using independent component analysis. That method takes a region of an RGB image as input, extracts characteristic vectors for melanin and hemoglobin, and computes the concentrations in a manner similar to principal component analysis. Its drawbacks are that per-pixel calculation is difficult because a group of pixels in a region is used as the input, and that the extracted feature vectors, being obtained by an optimization procedure, tend to differ each time the method is run. The final output is an image of the melanin and hemoglobin components obtained by converting back to the RGB coordinate system, rather than using the feature vectors themselves. To overcome these drawbacks, the proposed method computes the melanin and hemoglobin component values in a feature space rather than in the RGB coordinate system, estimates the spectral reflectance corresponding to the skin color from an ordinary digital camera, and then derives the detailed pigment components, such as melanin, oxygenated hemoglobin, deoxygenated hemoglobin, and carotenoid, from that spectral reflectance. The proposed method requires no special equipment such as a spectral reflectance measuring device or a multi-spectral camera, and unlike the existing method, it allows direct per-pixel calculation and yields the same result on repeated runs. The standard deviation of the estimated melanin and hemoglobin densities was about 15% of that of the conventional method, i.e., roughly six times more stable.
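
As a rough illustration of computing pigment components per pixel in a feature space rather than in the RGB coordinate system, the sketch below projects log-RGB optical densities onto fixed melanin and hemoglobin direction vectors. The direction vectors, names, and use of NumPy are illustrative assumptions, not the paper's calibrated values or procedure.

```python
# Minimal sketch: per-pixel decomposition of log-RGB optical density onto two
# fixed pigment direction vectors (placeholders, not measured skin spectra).
import numpy as np

_MELANIN = np.array([0.74, 0.57, 0.36])
_HEMOGLOBIN = np.array([0.43, 0.74, 0.52])
BASIS = np.stack([_MELANIN / np.linalg.norm(_MELANIN),
                  _HEMOGLOBIN / np.linalg.norm(_HEMOGLOBIN)], axis=1)  # shape (3, 2)

def pigment_densities(rgb):
    """rgb: float array (..., 3) in (0, 1]; returns (melanin, hemoglobin) maps."""
    od = -np.log(np.clip(rgb, 1e-4, 1.0))                 # optical density per channel
    # Least-squares decomposition of each pixel's OD onto the two pigment axes.
    coeffs, *_ = np.linalg.lstsq(BASIS, od.reshape(-1, 3).T, rcond=None)
    melanin, hemoglobin = coeffs
    return melanin.reshape(rgb.shape[:-1]), hemoglobin.reshape(rgb.shape[:-1])

if __name__ == "__main__":
    img = np.random.default_rng(0).uniform(0.2, 0.9, size=(4, 4, 3))  # fake RGB image
    mel, hem = pigment_densities(img)
    print(mel.shape, hem.shape)                           # (4, 4) (4, 4)
```

Because the projection uses fixed vectors and operates pixel by pixel, it shares the two properties claimed for the proposed method: per-pixel output and identical results on repeated runs.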

The Effects of the Walking Exercise on ST/HR Slope and QRS Vector in the Middle-Aged Men (운동부하 심전도를 이용한 중년 남성들의 걷기 운동이 ST/HR 경사 및 QRS 벡터에 미치는 영향)

  • Kim, Duk-Jung
    • Journal of Life Science / v.20 no.1 / pp.71-76 / 2010
  • The purpose of this study was to investigate long-term changes in the exercise ECG responses of middle-aged male company employees. The subjects were 60 men aged 40~55 years, 30 of whom were enrolled in a 3-year exercise program (exercise group). Body composition was assessed by percent body fat and BMI. Exercise stress tests were analyzed using the ST/HR slope and the QRS vector. Statistical analysis was performed using repeated-measures ANOVA. The results were as follows: In the ST/HR slope, the control group showed symptoms of ischemia after nine minutes of exercise. In the resting frontal axis of the QRS vector, the control group had a tendency toward right axis deviation. In the resting horizontal amplitude of the QRS vector, the control group tended to show a significant decrease, whereas the exercise group showed a significant increase. These findings suggest that inactive company workers showed decreased exercise capacity, early exercise-induced ST depression, and progressive deviation of the QRS vector, but that cardiac function can be improved in middle-aged men through regular participation in an exercise program.
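
For reference, the ST/HR slope used in this study is conventionally obtained as the least-squares regression slope of ST-segment depression against heart rate during graded exercise. The sketch below shows that computation on made-up numbers; the sample values and function name are illustrative only.

```python
# Minimal sketch: ST/HR slope as the regression slope of ST depression vs. heart rate.
import numpy as np

def st_hr_slope(heart_rate_bpm, st_depression_mv):
    """Return the ST/HR slope in microvolts per beat per minute."""
    hr = np.asarray(heart_rate_bpm, dtype=float)
    st_uv = np.asarray(st_depression_mv, dtype=float) * 1000.0  # mV -> microvolts
    slope, _intercept = np.polyfit(hr, st_uv, deg=1)
    return slope

if __name__ == "__main__":
    hr = [90, 105, 120, 135, 150, 165]          # hypothetical exercise stages (bpm)
    st = [0.00, 0.02, 0.05, 0.08, 0.12, 0.15]   # hypothetical ST depression (mV)
    print(round(st_hr_slope(hr, st), 2), "uV/bpm")  # slopes above ~2.4 uV/bpm are usually read as abnormal
```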

Metamorphosis Hierarchical Motion Vector Estimation Algorithm for Multidimensional Image System (다차원 영상 시스템을 위한 변형계층 모션벡터 추정알고리즘)

  • Kim Jeong-Woong; Yang Hae-Sool
    • The KIPS Transactions: Part B / v.13B no.2 s.105 / pp.105-114 / 2006
  • In a ubiquitous environment, where various kinds of computers are embedded in people, objects and the environment, interconnected, and usable anywhere as needed, different types of data must be exchanged between heterogeneous machines over a home network. In such an environment, the efficient processing, transmission and monitoring of image data are essential technologies. Research is needed not only on traditional image-processing topics such as spatial and visual resolution, color expression and methods of measuring image quality, but also on transmission rates over home networks with limited bandwidth. The present study proposes a new motion vector estimation algorithm for transmitting, processing and controlling image data, the core part of content in the home network setting, and uses the algorithm to implement a real-time monitoring system for multidimensional images transmitted from multiple cameras. Image data from stereo cameras, transmitted under differing conditions of angle, distance, etc., are preprocessed through reduction, magnification, shift or correction, then compressed and sent using the proposed metamorphosis hierarchical motion vector estimation algorithm for motion correction. The proposed algorithm adopts the advantages and complements the disadvantages of existing motion vector estimation algorithms such as full (whole-range) search, three-step search and hierarchical search, and efficiently estimates the motion of images with large brightness variation using atypical, small macro blocks. The proposed metamorphosis hierarchical motion vector estimation algorithm and the implemented image systems can be utilized in various ways in the ubiquitous environment.
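
The proposed metamorphosis hierarchical algorithm is described as combining the strengths of full (whole-range) search, three-step search and hierarchical search. As background, the sketch below shows a plain three-step block-matching search, one of those baselines rather than the paper's algorithm; the synthetic frame, block size and step schedule are assumptions.

```python
# Minimal sketch: classical three-step block-matching motion estimation (a baseline
# the proposed metamorphosis hierarchical algorithm builds on, not that algorithm).
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return float(np.abs(a - b).sum())

def three_step_search(ref, cur, top, left, block=16, step=4):
    """Estimate the motion vector (dy, dx) of one block of `cur` against `ref`."""
    h, w = ref.shape
    target = cur[top:top + block, left:left + block]
    cy, cx = 0, 0                                         # current motion-vector estimate
    best_cost = sad(ref[top:top + block, left:left + block], target)
    while step >= 1:
        best_dy, best_dx = cy, cx
        for dy in (-step, 0, step):                       # 9 candidates around the center
            for dx in (-step, 0, step):
                y, x = top + cy + dy, left + cx + dx
                if 0 <= y <= h - block and 0 <= x <= w - block:
                    cost = sad(ref[y:y + block, x:x + block], target)
                    if cost < best_cost:
                        best_cost, best_dy, best_dx = cost, cy + dy, cx + dx
        cy, cx = best_dy, best_dx
        step //= 2                                        # halve the step: 4 -> 2 -> 1
    return cy, cx

if __name__ == "__main__":
    yy, xx = np.mgrid[0:64, 0:64]
    ref = 128.0 + 100.0 * np.sin(2 * np.pi * yy / 48.0) * np.cos(2 * np.pi * xx / 48.0)
    cur = np.roll(ref, shift=(2, -3), axis=(0, 1))        # shift the whole synthetic frame
    print(three_step_search(ref, cur, top=16, left=16))   # expected to be close to (-2, 3)
```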

A Semi-Automatic Semantic Mark Tagging System for Building Dialogue Corpus (대화 말뭉치 구축을 위한 반자동 의미표지 태깅 시스템)

  • Park, Junhyeok; Lee, Songwook; Lim, Yoonseob; Choi, Jongsuk
    • KIPS Transactions on Software and Data Engineering / v.8 no.5 / pp.213-222 / 2019
  • Determining the meaning of a keyword in a speech dialogue system is an important technology for the future implementation of an intelligent speech dialogue interface. After keywords are extracted from a user's utterance to grasp its intention, the intention of the utterance is determined using the semantic marks of those keywords. One keyword can have several semantic marks, and we regard the task of attaching the semantic mark that matches the user's intention to such keywords as a word sense disambiguation problem. In this study, about 23% of all keywords in the corpus are manually tagged to build a semantic mark dictionary, a synonym dictionary, and a context vector dictionary, and the remaining 77% of the keywords are then tagged automatically. The semantic mark of a keyword is determined by calculating context vector similarity against the context vector dictionary. For an unregistered keyword, the semantic mark of the most similar keyword is attached using the synonym dictionary. We compare the performance of the system with the manually constructed training set and with the semi-automatically expanded training set, selecting 3 high-frequency and 3 low-frequency keywords from the corpus. In the experiments, we obtained an accuracy of 54.4% with the manually constructed training set and 50.0% with the semi-automatically expanded training set.
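
The core assignment step, choosing a semantic mark by context vector similarity with a synonym-dictionary fallback for unregistered keywords, can be sketched as follows. The toy dictionaries, vectors and names are placeholders, not the paper's resources.

```python
# Minimal sketch: semantic mark selection by cosine similarity of context vectors,
# with a synonym-dictionary back-off for unregistered keywords (toy data only).
import numpy as np

def cosine(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 0.0 if denom == 0.0 else float(a @ b / denom)

def tag_semantic_mark(keyword, context_vec, sense_vectors, synonyms):
    """sense_vectors: {keyword: {semantic_mark: vector}}; synonyms: {keyword: keyword}."""
    if keyword not in sense_vectors:                      # unregistered keyword:
        keyword = synonyms.get(keyword, keyword)          # back off to a synonym
    senses = sense_vectors.get(keyword, {})
    if not senses:
        return None
    return max(senses, key=lambda mark: cosine(context_vec, senses[mark]))

if __name__ == "__main__":
    sense_vectors = {"bank": {"FINANCE": [1.0, 0.1, 0.0], "RIVERSIDE": [0.0, 0.2, 1.0]}}
    synonyms = {"banking house": "bank"}
    print(tag_semantic_mark("bank", [0.9, 0.0, 0.1], sense_vectors, synonyms))  # FINANCE
```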

A Study on GPU Computing of Bi-conjugate Gradient Method for Finite Element Analysis of the Incompressible Navier-Stokes Equations (유한요소 비압축성 유동장 해석을 위한 이중공액구배법의 GPU 기반 연산에 대한 연구)

  • Yoon, Jong Seon; Jeon, Byoung Jin; Jung, Hye Dong; Choi, Hyoung Gwon
    • Transactions of the Korean Society of Mechanical Engineers B / v.40 no.9 / pp.597-604 / 2016
  • A parallel algorithm for the bi-conjugate gradient method was developed based on CUDA for parallel computation of the incompressible Navier-Stokes equations. The governing equations were discretized using a splitting P2P1 finite element method. An asymmetric stenotic flow problem was solved to validate the proposed algorithm, and the parallel performance of the GPU was then examined by measuring elapsed times. Further, GPU performance for sparse matrix-vector multiplication was investigated with a matrix from a fluid-structure interaction problem. A kernel was written to compute, in parallel, the inner product of each row of the sparse matrix with a vector. In addition, the kernel was optimized by using both parallel reduction and memory coalescing. In the kernel construction, the effect of the warp on the parallel performance of the present CUDA implementation was also examined. The present GPU computation was more than 7 times faster than a single CPU in double precision.
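
The kernel's basic operation, the inner product of each sparse-matrix row with a vector, corresponds to a CSR matrix-vector product. The sketch below is a plain CPU version written only for reference; the CUDA kernel, parallel reduction and memory coalescing described above are not reproduced here.

```python
# Minimal CPU sketch of the per-row CSR sparse matrix-vector product that the
# CUDA kernel parallelizes (one row's inner product reduced per thread group).
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """Return y = A @ x for A stored in CSR form (data, indices, indptr)."""
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows, dtype=float)
    for row in range(n_rows):
        start, end = indptr[row], indptr[row + 1]
        # On the GPU, this inner product is what each thread group reduces in parallel.
        y[row] = np.dot(data[start:end], x[indices[start:end]])
    return y

if __name__ == "__main__":
    # CSR storage of [[4, 0, 1], [0, 3, 0], [2, 0, 5]]
    data = np.array([4.0, 1.0, 3.0, 2.0, 5.0])
    indices = np.array([0, 2, 1, 0, 2])
    indptr = np.array([0, 2, 3, 5])
    print(csr_matvec(data, indices, indptr, np.array([1.0, 2.0, 3.0])))  # [ 7.  6. 17.]
```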

REAL - TIME ORBIT DETERMINATION OF LOW EARTH ORBIT SATELLITES USING RADAR SYSTEM AND SGP4 MODEL (RADAR 시스템과 SGP4 모델을 이용한 저궤도 위성의 실시간 궤도결정)

  • 이재광; 이성섭; 윤재철; 최규홍
    • Journal of Astronomy and Space Sciences / v.20 no.1 / pp.21-28 / 2003
  • To independently obtain orbital information on foreign low earth orbit satellites using radar systems, we developed an orbit determination algorithm that combines the SGP4 analytical orbit model with an extended Kalman filter for real-time processing. When the state vector consists of Keplerian orbital elements, singularity problems arise in computing the partial derivatives with respect to the inclination and eccentricity elements. To cope with this problem, the state vector is transformed into osculating Cartesian elements referred to the true equator and mean equinox. The state transition matrix and the covariance matrix are computed numerically using the SGP4 model. The observational measurements are azimuth, elevation and range, and the filter processes these measurements together. An analysis of the developed orbit determination algorithm against the TOPEX/POSEIDON POE (Precision Orbit Ephemeris) shows a position error of about 1 km. To approach the performance of the NORAD system, which maintains position accuracy within 3 km over 7 days, the radar system needs an accuracy within 0.1 degrees in azimuth and elevation and 50 m in range.
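
For reference, the measurement-update step of an extended Kalman filter of the kind described above can be sketched as follows. The SGP4 propagation, the numerically computed state transition matrix and the actual azimuth/elevation/range model are outside this snippet; the toy range-only example and all names are assumptions.

```python
# Minimal, generic sketch of one extended Kalman filter measurement update
# (not the paper's SGP4-based filter; the toy example below is range-only).
import numpy as np

def ekf_update(x_pred, P_pred, z, h, H, R):
    """x_pred (n,), P_pred (n, n): predicted state/covariance; z (m,): measurement;
    h: measurement model x -> (m,); H (m, n): its Jacobian; R (m, m): noise covariance."""
    y = z - h(x_pred)                                     # innovation (residuals)
    S = H @ P_pred @ H.T + R                              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)                   # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new

if __name__ == "__main__":
    # Toy 2-D example: state = position (km), measurement = range from the origin.
    x_pred = np.array([7000.0, 100.0])
    P_pred = np.diag([25.0, 25.0])
    h = lambda x: np.array([np.linalg.norm(x)])
    H = (x_pred / np.linalg.norm(x_pred)).reshape(1, 2)   # Jacobian of the range model
    z = np.array([7005.0])                                # hypothetical measured range (km)
    R = np.array([[0.05 ** 2]])                           # 50 m range noise, in km
    print(ekf_update(x_pred, P_pred, z, h, H, R)[0])
```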

Mesh Simplification for Preservation of Characteristic Features using Surface Orientation (표면의 방향정보를 고려한 메쉬의 특성정보의 보존)

  • 고명철; 최윤철
    • Journal of Korea Multimedia Society / v.5 no.4 / pp.458-467 / 2002
  • Many simplification algorithms have been proposed for effectively reducing large-volume polygonal surface data. These algorithms apply their own collapse cost function to a fundamental simplification unit, such as a vertex, edge or triangle, and minimize the simplification error incurred at each simplification step. Most cost functions adopted in existing work estimate error based on distance optimization. Unfortunately, it is hard to describe the local characteristics of surface data using the distance factor alone, which is essentially a scalar quantity. As a result, such algorithms cannot preserve characteristic features in surface regions with high curvature and consequently lose the detailed shape of the original mesh at high simplification ratios. In this paper, we consider a vector quantity, surface orientation, as an additional factor in the cost function. Surface orientation is independent of the scalar distance value, which means that elements with low distance values can still be retained when the orientation change they cause is large. In addition, we develop a simplification algorithm based on half-edge collapses that uses the proposed cost function as the criterion for removing elements. In a half-edge collapse, one of the edge's endpoints serves as the new vertex after the collapse operation. The approach is memory-efficient and well suited to rendering systems that require real-time transmission of large-volume surface data.
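
A collapse cost that mixes the scalar distance term with a surface-orientation term can be sketched as follows. The weights and the normal-deviation formula are illustrative assumptions, not the paper's cost function.

```python
# Minimal sketch: a half-edge collapse cost combining a distance term with a
# surface-orientation (normal deviation) term; weights and formula are illustrative.
import numpy as np

def collapse_cost(v_from, v_to, normals_before, normals_after, w_dist=1.0, w_orient=1.0):
    """Cost of collapsing vertex v_from onto v_to (both shape (3,)).
    normals_before/after: (k, 3) unit normals of the affected faces."""
    dist_term = float(np.linalg.norm(np.asarray(v_to) - np.asarray(v_from)))
    # Orientation term: average rotation of the local face normals (0 if unchanged).
    dots = np.einsum("ij,ij->i", normals_before, normals_after)
    orient_term = float(np.mean(1.0 - np.clip(dots, -1.0, 1.0)))
    return w_dist * dist_term + w_orient * orient_term

if __name__ == "__main__":
    n_before = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
    n_flat = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])                  # flat region
    n_sharp = np.array([[0.0, 0.0, 1.0], [0.70710678, 0.0, 0.70710678]])   # feature region
    v_a, v_b = np.array([0.0, 0.0, 0.0]), np.array([0.1, 0.0, 0.0])
    print(collapse_cost(v_a, v_b, n_before, n_flat))    # low cost: safe to collapse
    print(collapse_cost(v_a, v_b, n_before, n_sharp))   # higher cost: feature preserved
```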


Card Transaction Data-based Deep Tourism Recommendation Study (카드 데이터 기반 심층 관광 추천 연구)

  • Hong, Minsung; Kim, Taekyung; Chung, Namho
    • Knowledge Management Research / v.23 no.2 / pp.277-299 / 2022
  • The massive card transaction data generated in the tourism industry has become an important resource that reflects tourist consumption behaviors and patterns. Developing smart service systems based on such transaction data has become a major goal for both tourism businesses and knowledge management system developers. However, the lack of rating scores, the basis of traditional recommendation techniques, makes it hard for system designers to evaluate a learning process. In addition, auxiliary factors such as temporal, spatial, and demographic information are needed to increase the performance of a recommendation system, but gathering them is not easy in the card transaction context. In this paper, we introduce CTDDTR, a novel approach that uses card transaction data to recommend tourism services. It consists of two main components: i) Temporal preference Embedding (TE), which represents tourist groups and services as vectors through Doc2Vec; and ii) Deep tourism Recommendation (DR), which integrates these vectors and the auxiliary factors from a tourism RDF (resource description framework) through an MLP (multi-layer perceptron) to recommend services to tourist groups. In addition, we adopt RFM analysis from the field of knowledge management to generate the explicit feedback (i.e., rating scores) used in the DR component. To evaluate CTDDTR, card transaction data collected over eight years on Jeju Island are used. Experimental results demonstrate the effectiveness and efficiency of the proposed method.
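
The RFM step that turns raw card transactions into pseudo-ratings for the DR component might look like the sketch below. The column names, the quintile scoring and the use of pandas are assumptions made for illustration, not the paper's exact procedure.

```python
# Minimal sketch: RFM-style pseudo-ratings (recency, frequency, monetary) from
# card transactions; column names and the 1-5 quintile scoring are illustrative.
import pandas as pd

def rfm_scores(tx: pd.DataFrame, now=None) -> pd.DataFrame:
    """tx columns (assumed): 'group_id', 'service_id', 'date', 'amount'."""
    now = tx["date"].max() if now is None else now
    g = tx.groupby(["group_id", "service_id"]).agg(
        recency=("date", lambda d: (now - d.max()).days),
        frequency=("date", "count"),
        monetary=("amount", "sum"),
    ).reset_index()
    rank = lambda s: s.rank(method="first")               # break ties before binning
    # Quintile scores 1-5: low recency and high frequency/monetary are better.
    g["R"] = pd.qcut(rank(-g["recency"]), 5, labels=False, duplicates="drop") + 1
    g["F"] = pd.qcut(rank(g["frequency"]), 5, labels=False, duplicates="drop") + 1
    g["M"] = pd.qcut(rank(g["monetary"]), 5, labels=False, duplicates="drop") + 1
    g["rating"] = (g["R"] + g["F"] + g["M"]) / 3.0        # pseudo explicit feedback
    return g

if __name__ == "__main__":
    tx = pd.DataFrame({
        "group_id": ["g1"] * 4 + ["g2"] * 4,
        "service_id": ["cafe", "cafe", "museum", "hotel"] * 2,
        "date": pd.to_datetime(["2022-01-05", "2022-02-01", "2022-02-20", "2022-03-01"] * 2),
        "amount": [12.0, 8.0, 20.0, 150.0, 5.0, 9.0, 25.0, 90.0],
    })
    print(rfm_scores(tx)[["group_id", "service_id", "rating"]])
```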

An Analytical Study on Automatic Classification of Domestic Journal articles Based on Machine Learning (기계학습에 기초한 국내 학술지 논문의 자동분류에 관한 연구)

  • Kim, Pan Jun
    • Journal of the Korean Society for Information Management / v.35 no.2 / pp.37-62 / 2018
  • This study examined the factors affecting the performance of machine learning-based automatic classification of domestic journal articles in the field of LIS. In particular, focusing on the performance of automatically assigning class labels to articles in the "Journal of the Korean Society for Information Management", I investigated the characteristics of the key factors (weighting schemes, training set size, classification algorithms, and label assigning methods) through diversified experiments. Consequently, it is effective to apply each element appropriately according to the classification environment and the characteristics of the document set, and fairly good performance can be obtained with a simpler model. In addition, the classification of domestic journal articles can be regarded as a multi-label classification task in which more than one category is assigned to a given article. Considering this environment, I proposed an optimal classification model that uses a simple, fast classification algorithm and a small training set.
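
As an illustration of the kind of pipeline compared in the study, a term-weighting scheme feeding a simple and fast classification algorithm, the sketch below combines TF-IDF with multinomial naive Bayes on toy documents. scikit-learn, the toy data and the label set are assumptions; the study's actual corpus, features and models are described in the paper.

```python
# Minimal sketch: a term-weighting scheme (TF-IDF) feeding a simple, fast
# classifier (multinomial naive Bayes) on toy stand-in documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

docs = [
    "retrieval evaluation of search engines",
    "metadata schema design for digital archives",
    "query expansion improves retrieval effectiveness",
    "cataloging rules and metadata quality",
]
labels = ["information retrieval", "information organization",
          "information retrieval", "information organization"]

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())   # weighting scheme + classifier
clf.fit(docs, labels)
print(clf.predict(["relevance feedback in retrieval systems"]))  # likely 'information retrieval'
```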