• Title/Summary/Keyword: Kernel Space

Thermo-mechanical response of size-dependent piezoelectric materials in thermo-viscoelasticity theory

  • Ezzat, Magdy A.;Al-Muhiameed, Zeid I.A.
    • Steel and Composite Structures
    • /
    • v.45 no.4
    • /
    • pp.535-546
    • /
    • 2022
  • A memory-dependent, nonlocal formulation of the size-dependent coupling between viscoelastic deformation and thermal fields in piezoelectric materials is constructed under the dual-phase-lag heat conduction law. The method of the matrix exponential, which constitutes the basis of the state-space approach of modern control theory, is applied to the non-dimensional equations. The resulting formulation, together with the Laplace transform technique, is applied to a semi-infinite piezoelectric rod subjected to a continuous heat flux with constant time rates. The inversion of the Laplace transforms is carried out numerically. The impacts of the nonlocal parameter and time-delay constants on thermal spread and the thermo-viscoelastic response are compared graphically for various forms of kernel functions.
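
The abstract says only that the Laplace inversion is carried out numerically, without naming a scheme. As a hedged illustration, the sketch below uses the Gaver-Stehfest algorithm, a common choice for transforms arising in heat conduction problems; the test transform and the parameter N are illustrative, not taken from the paper.

```python
# Minimal sketch of numerical Laplace-transform inversion via the
# Gaver-Stehfest algorithm (an assumption: the paper does not name its
# inversion scheme). Checked against a transform with a known inverse.
import math

def stehfest_inverse(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s); N must be even."""
    ln2 = math.log(2.0)
    total = 0.0
    for k in range(1, N + 1):
        # Standard Stehfest weight V_k
        v = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            v += (j ** (N // 2) * math.factorial(2 * j)) / (
                math.factorial(N // 2 - j) * math.factorial(j)
                * math.factorial(j - 1) * math.factorial(k - j)
                * math.factorial(2 * j - k))
        v *= (-1) ** (k + N // 2)
        total += v * F(k * ln2 / t)
    return ln2 / t * total

# F(s) = 1/(s+1) has inverse f(t) = exp(-t); at t = 1, expect ~0.3679.
print(stehfest_inverse(lambda s: 1.0 / (s + 1.0), 1.0))
```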

Multiple Cause Model-based Topic Extraction and Semantic Kernel Construction from Text Documents (다중요인모델에 기반한 텍스트 문서에서의 토픽 추출 및 의미 커널 구축)

  • Chang, Jeong-Ho;Zhang, Byoung-Tak
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.5
    • /
    • pp.595-604
    • /
    • 2004
  • Automatic analysis of concepts or semantic relations from text documents enables not only efficient acquisition of relevant information but also comparison of documents at the concept level. We present a multiple cause model-based approach to text analysis, where latent topics are automatically extracted from document sets and similarity between documents is measured by semantic kernels constructed from the extracted topics. In our approach, a document is assumed to be generated by various combinations of underlying topics. A topic is defined by a set of words that are related to the same theme or co-occur frequently within a document. In a network representing a multiple cause model, each topic is identified by a group of words having high connection weights from a latent node. To facilitate learning and inference in multiple cause models, some approximation is required, and we utilize an approximation based on Helmholtz machines. In experiments on the TDT-2 data set, we extract sets of meaningful words, where each set contains theme-specific terms. Using semantic kernels constructed from the latent topics extracted by multiple cause models, we also achieve significant improvements over the basic vector space model in terms of retrieval effectiveness.
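
The abstract does not give the exact form of the semantic kernel. A minimal sketch follows, assuming the kernel is built as K = WᵀW from a topic-word weight matrix W, a common construction for topic-induced kernels; the matrix shapes and term-frequency vectors are illustrative stand-ins for a trained multiple cause model.

```python
# Minimal sketch of document similarity under a semantic kernel built from
# extracted topics. W, d1, d2 and the form K = W^T W are illustrative
# assumptions, not the paper's exact formulation.
import numpy as np

rng = np.random.default_rng(0)
n_topics, vocab = 4, 50
W = rng.random((n_topics, vocab))      # topic-word weights (stand-in for a
                                       # trained multiple cause model)
K = W.T @ W                            # semantic kernel over the vocabulary

d1 = rng.integers(0, 3, vocab).astype(float)   # term-frequency vectors
d2 = rng.integers(0, 3, vocab).astype(float)

def kernel_sim(a, b, K):
    """Cosine-style similarity in the topic-induced feature space."""
    return (a @ K @ b) / np.sqrt((a @ K @ a) * (b @ K @ b))

print(kernel_sim(d1, d2, K))   # exceeds raw cosine when docs share topics
```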

A Study on the Deriving of Areas of Concern for Crime using the Mental Map (멘탈 맵을 이용한 범죄발생 우려 지역 도출에 관한 연구)

  • Park, Su Jeong;Shin, Dong Bin
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.37 no.3
    • /
    • pp.177-188
    • /
    • 2019
  • Recently, citizens have grown anxious as 'motiveless crimes' increase; the quality of citizens' lives is degraded and fear of crime is rising. In this study, crime occurrence features (points, lines, and polygons) as perceived by citizens are collected using the mental map methodology, based on various crime-related variables rather than the actual record of crime occurrence. The purpose of this study is to derive areas of concern for crime through spatial overlay analysis combined with kernel density estimation. As a result, the points reported by local residents and the derived areas of concern for crime overlapped. In addition, a mental map indicating fear of crime was constructed by mapping mainly the spaces between facilities and non-building areas such as narrow alleys, security CCTV, and streetlights. This study is meaningful in that, unlike previous crime-related studies, it derives areas of concern for crime using the mental map method. The results, such as the mental map, could be used in various fields such as the construction of crime-vulnerability maps and guidelines for crime prevention through environmental design.
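
To make the kernel density estimation step concrete, here is a minimal sketch assuming synthetic coordinates in place of the citizens' mental-map points; the 90th-percentile cutoff for flagging areas of concern is an illustrative assumption.

```python
# Minimal sketch of turning reported points into a continuous "concern"
# surface with kernel density estimation. Coordinates are synthetic.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
pts = rng.normal(loc=[127.0, 37.5], scale=0.01, size=(200, 2))  # lon, lat

kde = gaussian_kde(pts.T)                 # expects shape (n_dims, n_points)
xs = np.linspace(126.97, 127.03, 50)
ys = np.linspace(37.47, 37.53, 50)
gx, gy = np.meshgrid(xs, ys)
density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

# Cells above, e.g., the 90th percentile could be flagged as areas of concern.
threshold = np.percentile(density, 90)
print((density >= threshold).sum(), "grid cells flagged")
```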

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.139-155
    • /
    • 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Venture companies, generally making the best use of information technology, have tended to give high returns to investors, and many of them are therefore keen on attracting investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard & Poor's, Moody's, and Fitch is a crucial source on such pivotal concerns as a company's stability, growth, and risk status. But this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses, presenting recent empirical results using financial data of Korean venture companies listed on KOSDAQ in the Korea Exchange. In addition, this paper uses a multi-class SVM to predict the DEA-based efficiency ratings for venture businesses derived from the proposed method. Our approach sheds light on ways to locate efficient companies generating high levels of profit. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. Therefore, this paper is built on the following two ideas for classifying which companies are the more efficient venture companies: i) constructing a DEA-based multi-class rating for the sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies. First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision making units (DMUs) using a linear programming-based model. It is non-parametric because it requires no assumption on the shape or parameters of the underlying production function. DEA has already been widely applied to evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies, and DEA has also been applied to corporate credit ratings. In this study we utilized DEA for sorting venture companies by efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the results of DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is grounded in statistical learning theory. Thus far, the method has shown good performance, especially in generalization capacity on classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum-margin hyperplane, the hyperplane with maximum separation between the classes; the support vectors are the training points closest to this hyperplane. If the classes cannot be separated linearly, a kernel function can be used.
In the case of nonlinear class boundaries, the inputs can be transformed into a high-dimensional feature space: the original input space is mapped into a high-dimensional dot-product space. Many studies have applied SVM to the prediction of bankruptcy, the forecasting of financial time series, and the problem of estimating credit ratings. In this study we employed SVM to develop a data mining-based efficiency prediction model, using the Gaussian radial basis function as the kernel of the SVM. For the multi-class SVM, we adopted the one-against-one binary classification approach and the two all-together methods proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information on 154 companies listed on the KOSDAQ market in the Korea Exchange. We obtained the companies' financial information for 2005 from KIS (Korea Information Service, Inc.). Using these data, we constructed a multi-class rating with DEA efficiency and built a data mining-based multi-class prediction model. Among the three multi-classification methods, the Weston and Watkins method achieved the best hit ratio on the test data set. In multi-class classification problems such as efficiency ratings of venture businesses, where the exact class is hard to determine in the actual market, it is very useful for investors to know the class within a one-class error. We therefore also present accuracy within 1-class errors, for which the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification of efficiency level. We believe this model can help investors' decision making by providing a reliable tool to evaluate venture companies in the financial domain. For future research, we perceive the need to enhance the variable selection process, the parameter selection of the kernel function, generalization, and the sample size for multi-class classification.
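
A minimal sketch of the one-against-one multi-class SVM with a Gaussian RBF kernel and the "within 1-class error" accuracy reported above. The features and ratings are synthetic stand-ins for the 154 KOSDAQ companies' financial data, scikit-learn's SVC (which votes one-vs-one internally) stands in for the paper's implementation, and the Weston-Watkins and Crammer-Singer all-together variants are not reproduced.

```python
# Minimal sketch: RBF-kernel multi-class SVM plus the within-1-class
# accuracy metric. Data are synthetic; this is not the paper's model.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(154, 6))            # financial ratios (synthetic)
y = rng.integers(0, 4, size=154)         # DEA-based efficiency classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", gamma="scale", C=1.0).fit(X_tr, y_tr)  # Gaussian RBF
pred = clf.predict(X_te)

exact = np.mean(pred == y_te)
within_one = np.mean(np.abs(pred - y_te) <= 1)  # 1-class misses count as hits
print(f"exact: {exact:.3f}, within-1-class: {within_one:.3f}")
```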

A Heuristic Search Planner Based on Component Services (컴포넌트 서비스 기반의 휴리스틱 탐색 계획기)

  • Kim, In-Cheol;Shin, Hang-Cheol
    • The KIPS Transactions:PartB
    • /
    • v.15B no.2
    • /
    • pp.159-170
    • /
    • 2008
  • Nowadays, one of the important functionalities required of robot task planners is to generate plans that compose existing component services into a new service. In this paper, we introduce the design and implementation of a heuristic search planner, JPLAN, as a kernel module for component service composition. JPLAN uses a local search algorithm and planning graph heuristics. The local search algorithm, EHC+, is an extended version of Enforced Hill-Climbing (EHC), which has shown high efficiency when applied in state-space planners such as FF. It requires some additional local search, but it is expected to reduce the overall amount of search needed to reach a goal state and to yield shorter plans. We also present some effective heuristic extraction methods, which are needed for search on a large state-space. The heuristic extraction methods utilize planning graphs, which were first used for plan generation in Graphplan. We introduce some planning graph heuristics and then analyze their effects on plan generation through experiments.
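
For readers unfamiliar with EHC, here is a minimal sketch of the plain algorithm: from the current state, breadth-first search until a state with a strictly better heuristic value is found, then commit to it and repeat. The EHC+ extensions and planning graph heuristics of JPLAN are not reproduced; the toy domain and helper names are assumptions.

```python
# Minimal sketch of plain Enforced Hill-Climbing (EHC). Toy domain only.
from collections import deque

def ehc(start, goal, successors, h):
    state, plan = start, []
    while state != goal:
        best_h = h(state)
        frontier = deque([(state, [])])
        seen = {state}
        while frontier:                       # BFS for a strictly better state
            s, path = frontier.popleft()
            if h(s) < best_h:
                state, plan = s, plan + path  # commit, then restart BFS
                break
            for act, nxt in successors(s):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [act]))
        else:
            return None                       # plateau exhausted: EHC fails
    return plan

# Toy 1-D domain: move an integer toward 0 with h(s) = |s|.
succ = lambda s: [("dec", s - 1), ("inc", s + 1)]
print(ehc(5, 0, succ, h=abs))                 # five 'dec' actions
```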

Page-level Incremental Checkpointing for Efficient Use of Stable Storage (안정 저장장치의 효율적 사용을 위한 페이지 기반 점진적 검사점 기법)

  • Heo, Jun-Young;Yi, Sang-Ho;Gu, Bon-Cheol;Cho, Yoo-Kun;Hong, Ji-Man
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.12
    • /
    • pp.610-617
    • /
    • 2007
  • Incremental checkpointing, which is intended to minimize checkpointing overhead, saves only the modified pages of a process. However, the cumulative size of incremental checkpoints increases at a steady rate over time because a number of updated values may be saved for the same page. In this paper, we present a comprehensive overview of Pickpt, a page-level incremental checkpointing facility. Pickpt provides space-efficient techniques aimed at minimizing the use of disk space. In our experiments, the results showed that the disk space used by Pickpt was significantly reduced compared with existing incremental checkpointing.
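
A minimal sketch of page-level incremental checkpointing follows, assuming dirty pages are detected by comparing page hashes; Pickpt itself works at the kernel/page level, so the hashing here is only an illustrative stand-in for dirty-bit tracking.

```python
# Minimal sketch: only pages whose content changed since the last
# checkpoint are written out. Hash comparison stands in for kernel-level
# dirty-page tracking.
import hashlib

PAGE_SIZE = 4096

def checkpoint(memory: bytes, last_hashes: dict, log: list) -> dict:
    """Append changed pages to `log`; return the new page-hash table."""
    hashes = {}
    for i in range(0, len(memory), PAGE_SIZE):
        pno = i // PAGE_SIZE
        page = memory[i:i + PAGE_SIZE]
        digest = hashlib.sha256(page).hexdigest()
        hashes[pno] = digest
        if last_hashes.get(pno) != digest:
            log.append((pno, page))      # incremental: dirty pages only
    return hashes

mem = bytearray(4 * PAGE_SIZE)
log, hashes = [], {}
hashes = checkpoint(bytes(mem), hashes, log)   # full checkpoint: 4 pages
mem[PAGE_SIZE] = 0xFF                          # dirty exactly one page
hashes = checkpoint(bytes(mem), hashes, log)   # incremental: 1 page
print(len(log), "pages saved in total")        # 5 rather than 8
```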

SCALE TRANSFORMATIONS FOR PRESENT POSITION-INDEPENDENT CONDITIONAL EXPECTATIONS

  • Cho, Dong Hyun
    • Journal of the Korean Mathematical Society
    • /
    • v.53 no.3
    • /
    • pp.709-723
    • /
    • 2016
  • Let $C[0, t]$ denote a generalized Wiener space, the space of real-valued continuous functions on the interval $[0, t]$, and define a random vector $Z_n : C[0, t] \rightarrow \mathbb{R}^n$ by $Z_n(x) = \left( \int_0^{t_1} h(s)\,dx(s), \ldots, \int_0^{t_n} h(s)\,dx(s) \right)$, where $0 < t_1 < \cdots < t_n < t$ is a partition of $[0, t]$ and $h \in L_2[0, t]$ with $h \neq 0$ a.e. In this paper we introduce a simple formula for a generalized conditional Wiener integral on $C[0, t]$ with the conditioning function $Z_n$ and then evaluate the generalized analytic conditional Wiener and Feynman integrals of the cylinder function $F(x) = f\left( \int_0^t e(s)\,dx(s) \right)$ for $x \in C[0, t]$, where $f \in L_p(\mathbb{R})$ $(1 \leq p \leq \infty)$ and $e$ is a unit element in $L_2[0, t]$. Finally we express the generalized analytic conditional Feynman integral of $F$ as two kinds of limits of non-conditional generalized Wiener integrals of polygonal functions and of cylinder functions, using a change-of-scale transformation for which a normal density is the kernel. The choice of a complete orthonormal subset of $L_2[0, t]$ used in the transformation is independent of $e$, and the conditioning function $Z_n$ does not contain the present positions of the generalized Wiener paths.
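
To make the conditioning vector concrete, the sketch below samples $Z_n(x)$ by Monte Carlo over discretized paths, assuming a standard Wiener process in place of the paper's generalized Wiener space and taking $h \equiv 1$; both choices are illustrative simplifications.

```python
# Minimal sketch: sample Z_n(x) = (int_0^{t_1} h dx, ..., int_0^{t_n} h dx)
# over discretized Wiener paths. Standard Wiener process and h = 1 are
# illustrative assumptions, not the paper's generalized setting.
import numpy as np

rng = np.random.default_rng(3)
t, m, n_paths = 1.0, 1000, 5000
ds = t / m
s = np.linspace(0.0, t, m, endpoint=False)
h = np.ones_like(s)                       # h in L_2[0, t], h != 0 a.e.

dW = rng.normal(0.0, np.sqrt(ds), size=(n_paths, m))  # Wiener increments
stoch = np.cumsum(h * dW, axis=1)         # running values of int_0^s h dx

t_partition = [0.25, 0.5, 0.75]           # 0 < t_1 < ... < t_n < t
idx = [round(tj * m) - 1 for tj in t_partition]
Z = stoch[:, idx]                         # samples of Z_n(x)

# Each component is Gaussian with variance int_0^{t_j} h(s)^2 ds = t_j here.
print(Z.var(axis=0))                      # ~ [0.25, 0.5, 0.75]
```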

A Study on the Application of Zero Copy Technology to Improve the Transmission Efficiency and Recording Performance of Massive Data (대용량 데이터의 전송 효율 및 기록 성능 향상을 위한 Zero Copy 기술 적용에 관한 연구)

  • Song, Min-Gyu;Kim, Hyo-Ryoung;Kang, Yong-Woo;Je, Do-Heung;Wi, Seog-Oh;Lee, Sung-Mo;Kim, Seung-Rae
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.16 no.6
    • /
    • pp.1133-1144
    • /
    • 2021
  • Zero-copy, also called no-memory copy, is a technique that reduces context switching between user space and kernel space, minimizing the load on the CPU. To date, however, this technique has mainly been used to transmit small random files and has not yet been widely used for large file transfers. This paper discusses the practical application of zero-copy to processing large files over a network. To this end, we first developed a small test bed and a program that can transmit and store data based on zero-copy. We then verify the usefulness of the applied technology through a detailed performance evaluation.
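
The abstract does not name the mechanism behind the test bed; a common entry point for zero-copy transfers is the sendfile(2) system call, which moves data between file descriptors inside the kernel without a round trip through user space. A minimal, Unix-specific sketch assuming that call (the file name and sizes are illustrative):

```python
# Minimal sketch of a zero-copy file send via os.sendfile over a local
# socket. This is not the paper's test bed; it only illustrates the idea.
import os
import socket
import threading

PATH = "payload.bin"                      # hypothetical test file
with open(PATH, "wb") as f:
    f.write(os.urandom(1 << 20))          # 1 MiB of test data

def serve(listener):
    conn, _ = listener.accept()
    with open(PATH, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        sent = 0
        while sent < size:                # in-kernel copy: no read()/write()
            sent += os.sendfile(conn.fileno(), f.fileno(), sent, size - sent)
    conn.close()

listener = socket.create_server(("127.0.0.1", 0))
threading.Thread(target=serve, args=(listener,), daemon=True).start()

client = socket.create_connection(listener.getsockname())
received = 0
while chunk := client.recv(65536):
    received += len(chunk)
print(received, "bytes received via sendfile")
```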

Density Estimation Technique for Effective Representation of Light In-scattering (빛의 내부산란의 효과적인 표현을 위한 밀도 추정기법)

  • Min, Seung-Ki;Ihm, In-Sung
    • Journal of the Korea Computer Graphics Society
    • /
    • v.16 no.1
    • /
    • pp.9-20
    • /
    • 2010
  • To visualize participating media in 3D space, renderers usually calculate the incoming radiance by subdividing the ray path into small subintervals and accumulating their respective light energy due to direct illumination, scattering, absorption, and emission. Among these light phenomena, scattering behaves in a very complicated manner in 3D space, often requiring a great deal of simulation effort. To simulate the light scattering effect efficiently, several approximation techniques have been proposed. Volume photon mapping takes a simple approach in which the light scattering phenomenon is represented in a volume photon map through stochastic simulation, and the stored information is exploited in the rendering stage. While effective, this method has the problem that the number of necessary photons increases very quickly when higher variance reduction is needed. In an attempt to resolve this problem, we propose a different approach for rendering particle-based volume data in which kernel smoothing, one of several density estimation methods, is used to represent and reconstruct the light in-scattering effect. The effectiveness of the presented technique is demonstrated with several examples of volume data.
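
A minimal sketch of the kernel smoothing idea follows, assuming a 1-D Gaussian kernel over stored photon positions and energies; the paper operates on 3-D volume photons, and the bandwidth here is an illustrative choice.

```python
# Minimal sketch: reconstruct an in-scattering quantity from photon samples
# by kernel smoothing (1-D Gaussian kernel; positions/energies synthetic).
import numpy as np

rng = np.random.default_rng(4)
photon_pos = rng.uniform(0.0, 1.0, size=500)       # photon positions
photon_energy = rng.uniform(0.5, 1.5, size=500)    # deposited energies

def smoothed_radiance(x, pos, energy, bw=0.05):
    """Kernel-weighted energy density at query points x."""
    d = (x[:, None] - pos[None, :]) / bw
    w = np.exp(-0.5 * d**2) / (bw * np.sqrt(2.0 * np.pi))
    return (w * energy[None, :]).sum(axis=1) / len(pos)

xs = np.linspace(0.0, 1.0, 11)
print(smoothed_radiance(xs, photon_pos, photon_energy))
```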

A Study on the Mapping of Fishing Activity using V-Pass Data - Focusing on the Southeast Sea of Korea - (선박패스(V-Pass) 자료를 활용한 어업활동 지도 제작 연구 - 남해동부해역을 중심으로 -)

  • HAN, Jae-Rim;KIM, Tae-Hoon;CHOI, Eun Yeong;CHOI, Hyun-Woo
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.24 no.1
    • /
    • pp.112-125
    • /
    • 2021
  • Marine spatial planning (MSP) designates marine areas as nine kinds of use zones for the systematic and rational management of marine space. One of them is the fishery protection zone, which is necessary for the sustainable production of fishery products, including the protection and fostering of fishing activities. This study quantitatively identifies the fishing activity space, one of the elements needed to designate fishery protection zones, by mapping fishing activities from V-Pass data and deriving zones where fishing activity is concentrated. To this end, the V-Pass data were pre-processed: a dataset combining static and dynamic information was constructed, fishing-vessel speeds were calculated, fishing activity points were extracted, and data in non-fishing-activity zones were removed. Finally, using the selected V-Pass point data, a fishing activity map was produced by kernel density estimation, and the concentrated spaces of fishing activity were analyzed. In addition, it was confirmed that the spatial distribution of fishing activities differs by the type of fishing vessel and by season. The pre-processing techniques for large-volume V-Pass data and the mapping method for fishing activities developed in this study are expected to contribute to future studies evaluating the spatial characteristics of fishing activities.
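
A minimal sketch of the pre-processing and density steps described above, assuming synthetic position fixes and an illustrative fishing-speed band; the actual V-Pass fields, speed thresholds, and study-area coordinates differ.

```python
# Minimal sketch: keep fixes whose computed speed falls in a plausible
# fishing-speed band, then run kernel density estimation on the survivors.
# Speeds, thresholds, and coordinates are illustrative assumptions.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
lon = rng.uniform(128.5, 129.5, 1000)       # rough southeast-sea extent
lat = rng.uniform(34.5, 35.5, 1000)
speed_knots = rng.uniform(0.0, 15.0, 1000)  # speed from successive fixes

# Fishing vessels typically move slowly while working; transiting ones do not.
fishing = (speed_knots > 0.5) & (speed_knots < 5.0)
pts = np.vstack([lon[fishing], lat[fishing]])

kde = gaussian_kde(pts)
density_at_fixes = kde(pts)                 # relative intensity per fix
print(f"{fishing.sum()} fishing fixes; peak density {density_at_fixes.max():.2f}")
```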