• Title/Summary/Keyword: Vertex Data


Forming Limit Diagrams of Zircaloy-4 and Zirlo Sheets for Stamping of Spacer Grids of Nuclear Fuel Rods (핵연료 지지격자 성형을 위한 Zircaloy-4와 Zirlo 판재의 성형한계도 예측)

  • Seo, Yun-Mi;Hyun, Hong-Chul;Lee, Hyung-Yil;Kim, Nak-Soo
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.35 no.8
    • /
    • pp.889-897
    • /
    • 2011
  • In this work, we investigated theoretical forming limit models for Zircaloy-4 and Zirlo used for the spacer grids of nuclear fuel rods. Tensile and anisotropy tests were performed to obtain stress-strain curves and anisotropic coefficients. Experimental forming limit diagrams (FLDs) for the two materials were obtained by dome stretching tests following NUMISHEET 96. A theoretical FLD depends on the forming limit (FL) model and the yield criterion. To obtain the right-hand side (RHS) of the FLD, we applied the FL models of Swift's diffuse necking, the M-K theory, and the S-R vertex theory to the Zircaloy-4 and Zirlo sheets; Hill's local necking theory was adopted for the left-hand side (LHS). To account for the anisotropy of the sheets, the yield criteria of Hill and Hosford were applied. Comparing the predicted curves with the experimental data, we found that the RHS of the FLD for Zircaloy-4 is described by the Swift model (with Hill's criterion), while the LHS is explained by the Hill model. The FLD for Zirlo is explained by the S-R model with Hosford's criterion (a = 8).

Construction of Event Networks from Large News Data Using Text Mining Techniques (텍스트 마이닝 기법을 적용한 뉴스 데이터에서의 사건 네트워크 구축)

  • Lee, Minchul;Kim, Hea-Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.183-203
    • /
    • 2018
  • News articles are the most suitable medium for examining events occurring at home and abroad. In particular, as the development of information and communication technology has produced a wide variety of online news media, coverage of societal events has grown enormously, so automatically summarizing key events from massive amounts of news data would help users survey many events at a glance. Moreover, an event network built on the relevance between events would greatly help readers understand current affairs. In this study, we propose a method for extracting event networks from large news text corpora. We first collected Korean political and social articles from March 2016 to March 2017 and, in preprocessing, used NPMI and Word2Vec to retain only meaningful words and to merge synonyms. Latent Dirichlet allocation (LDA) topic modeling was then used to compute the topic distribution by date and to detect events from the peaks of those distributions. A total of 32 topics were extracted, and the occurrence time of each event was inferred from the points at which its topic distribution surged. In total, 85 events were detected, which Gaussian smoothing filtered down to a final 16. To construct the event network, we computed relevance scores between the detected events using the cosine coefficient of their co-occurrence and connected the related events. Finally, we formed the event network by taking each event as a vertex and the relevance score between events as the weight of the edge connecting the corresponding vertices.
The constructed event network allowed us to arrange the major political and social events in Korea over the preceding year in chronological order and, at the same time, to identify which events are related. Our approach differs from existing event detection methods in that LDA topic modeling makes it easy to analyze large amounts of data and to identify relationships between events that were previously difficult to detect. In preprocessing, we applied various text mining techniques together with Word2Vec to improve the extraction of proper nouns and compound nouns, which has long been difficult in Korean text analysis. The proposed detection and network construction techniques have the following practical advantages. First, LDA topic modeling, an unsupervised method, can easily extract topics, topic words, and their distributions from huge amounts of data, and the date information of the collected articles allows each topic's distribution to be expressed as a time series. Second, by computing relevance scores from topic co-occurrence, which existing event detection methods struggle to capture, the connections between events can be presented in summarized form; indeed, the relevance-based event network built in this study ordered events by their time of occurrence, and the network makes it possible to identify which event served as the starting point of a chain of subsequent events. A limitation of this study is that LDA topic modeling yields different results depending on its initial parameters and the number of topics, and the topic and event labels must be assigned by the researcher's subjective judgment.
Also, since each topic is assumed to be exclusive and independent, relevance between topics is not taken into account. Follow-up studies should calculate the relevance between events not covered here, or between events belonging to the same topic.
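The co-occurrence-based linking step described above can be sketched as follows. This is a minimal illustration: the event labels, peak dates, and threshold are hypothetical stand-ins, not the paper's actual events or parameters.

```python
import math

# Hypothetical detected events and the dates on which each peaked
# (illustrative stand-ins, not the study's 16 filtered events).
event_dates = {
    "E1": {"2016-10-24", "2016-10-25", "2016-11-01"},
    "E2": {"2016-10-25", "2016-11-01", "2016-11-12"},
    "E3": {"2017-03-10"},
}

def cosine_coefficient(a, b):
    """Cosine similarity between two events' binary date vectors:
    |A ∩ B| / sqrt(|A| * |B|)."""
    if not a or not b:
        return 0.0
    return len(a & b) / math.sqrt(len(a) * len(b))

# Connect event pairs whose co-occurrence score exceeds a threshold;
# each event becomes a vertex and the score becomes the edge weight.
THRESHOLD = 0.3
names = sorted(event_dates)
edges = []
for i, u in enumerate(names):
    for v in names[i + 1:]:
        w = cosine_coefficient(event_dates[u], event_dates[v])
        if w >= THRESHOLD:
            edges.append((u, v, round(w, 3)))

print(edges)  # only the E1-E2 pair co-occurs often enough
```

Here E1 and E2 share two of three peak dates, so they are linked; E3 shares none and remains isolated.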

A study on the shear bond strength between Co-Cr denture base and relining materials (금속의치상과 의치이장재료 간의 결합력에 관한 연구)

  • Lee, Na-Young;Kim, Doo-Yong;Lee, Young-Soo;Park, Won-Hee
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.49 no.1
    • /
    • pp.8-15
    • /
    • 2011
  • Purpose: This study evaluated the bond strength of direct relining resin to a Co-Cr denture base material according to surface treatment and immersion time. Materials and methods: Co-Cr alloy specimens of hexagonal shape were used. Each specimen was cut to a flat surface and sandblasted with $110\;{\mu}m$ $Al_2O_3$ for 1 minute. The 54 specimens were divided into 3 groups: group A (control), group B (treated with surface primer A), and group C (treated with surface primer B). A self-curing direct resin was used. Each group was further subdivided into 3 subgroups according to immersion time. After wet storage, the shear bond strength of the specimens was measured with a universal testing machine, and the data were analyzed using two-way analysis of variance and the Tukey post hoc method. Results: In the sandblasting experiment, the surface roughness of the alloy was highest after 1 minute of sandblasting. In the shear bond strength test, bond strength decreased in the order of group B, group C, and group A, with significant differences among the 3 groups. By storage period, bond strength was highest in the 0-week group and weakest in the 2-week group, but there were no significant differences among the 3 periods. Considering group and period together, the bond strength of all groups decreased with immersion time; the decrease was not significant in groups B and C but was significant in group A. Conclusion: Sandblasting and the use of metal primers are useful when relining Co-Cr metal base dentures chair-side.

A Study on the Intelligent Service Selection Reasoning for Enhanced User Satisfaction : Appliance to Cloud Computing Service (사용자 만족도 향상을 위한 지능형 서비스 선정 방안에 관한 연구 : 클라우드 컴퓨팅 서비스에의 적용)

  • Shin, Dong Cheon
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.35-51
    • /
    • 2012
  • Cloud computing is Internet-based computing in which computing resources are offered over the Internet as scalable, on-demand services. In particular, as numerous cloud services emerge with the development of Internet and mobile technology, selecting and providing services that satisfy users has become an important issue. Most previous works are limited in the degree of user satisfaction they achieve, because they rely on so-called concept similarity with respect to user requirements or lack versatility in handling user preferences. This paper presents cloud service selection reasoning that can be applied to general cloud service environments including a variety of computing resource services, not limited to web services. In such environments there are two kinds of services: atomic and composite. An atomic service consists of service attributes that represent the characteristics of the service, such as functionality, performance, or specification. A composite service is created by composing atomic services and other composite services, and therefore inherits the attributes of its component services. The main participants in providing cloud services are service users, service suppliers, and service operators. Service suppliers register services autonomously or through strategic collaboration with service operators. Service users submit request queries, including service names and requirements, to the service management system, which consists of a query processor for processing user queries, a registration manager for service registration, and a selection engine for service selection reasoning.
To enhance user satisfaction, our reasoning is based on the degree to which service attributes conform to user requirements in terms of functionality, performance, and specification, instead of concept similarity as in ontology-based reasoning. For this we introduce a service attribute graph (SAG), generated by considering the inclusion relationships among instances of a service attribute from several perspectives such as functionality, performance, and specification. SAG is thus a directed graph showing the inclusion relationships among attribute instances. Since the degree of conformance closely follows the inclusion relationship, the acceptability of a service depends on the closeness of the inclusion relationship among the corresponding attribute instances: high closeness implies high acceptability, because closeness reflects the degree of conformance among attribute instances. The degree of closeness is inversely related to the path length between two vertices in SAG: a shorter path means a closer inclusion relationship and hence a higher degree of conformance. In addition to acceptability, other user preferences such as attribute priorities and mandatory options are reflected to accommodate the variety of user requirements, as is support for various attribute types such as character, number, and boolean. Finally, cloud services are rated according to their value relative to price and recommended to users. One contribution of this paper is that it is the first to present graph-based selection reasoning, unlike other works, while considering various user preferences in relation to service attributes.
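The path-length-based closeness idea can be sketched as a small directed graph with breadth-first search. This is a minimal sketch under assumptions: the attribute instances, edge direction convention, and the 1/(1+d) closeness mapping are illustrative choices, not the paper's actual SAG or scoring formula.

```python
from collections import deque

# Hypothetical service attribute graph (SAG): a directed graph whose
# edges encode inclusion among attribute instances. Here an edge
# u -> v means instance u is included in (i.e., satisfies) v.
# Instance names are made up for illustration.
sag = {
    "8GB_RAM": ["4GB_RAM"],   # 8GB satisfies a 4GB requirement
    "4GB_RAM": ["2GB_RAM"],
    "2GB_RAM": [],
}

def path_length(graph, src, dst):
    """Shortest directed path length via BFS; None if unreachable."""
    if src == dst:
        return 0
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, d = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt == dst:
                return d + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None

def closeness(graph, offered, required):
    """Shorter inclusion path => closer => higher conformance.
    Mapped here to (0, 1]; 0 when the requirement is not met."""
    d = path_length(graph, offered, required)
    return 0.0 if d is None else 1.0 / (1.0 + d)

print(closeness(sag, "8GB_RAM", "2GB_RAM"))  # path length 2
print(closeness(sag, "2GB_RAM", "8GB_RAM"))  # requirement not met
```

An offering two inclusion steps away from the requirement scores lower than a direct match but higher than an unreachable one, mirroring the closeness-implies-conformance argument above.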

Development of Quantification Methods for the Myocardial Blood Flow Using Ensemble Independent Component Analysis for Dynamic $H_2^{15}O$ PET (동적 $H_2^{15}O$ PET에서 앙상블 독립성분분석법을 이용한 심근 혈류 정량화 방법 개발)

  • Lee, Byeong-Il;Lee, Jae-Sung;Lee, Dong-Soo;Kang, Won-Jun;Lee, Jong-Jin;Kim, Soo-Jin;Choi, Seung-Jin;Chung, June-Key;Lee, Myung-Chul
    • The Korean Journal of Nuclear Medicine
    • /
    • v.38 no.6
    • /
    • pp.486-491
    • /
    • 2004
  • Purpose: Factor analysis and independent component analysis (ICA) have been used for handling dynamic image sequences. The theoretical advantages of a newly suggested ICA method, ensemble ICA, led us to apply it to the analysis of dynamic myocardial $H_2^{15}O$ PET data. In this study, we quantified patients' blood flow using the ensemble ICA method. Materials and Methods: Twenty subjects underwent $H_2^{15}O$ PET scans on an ECAT EXACT 47 scanner and myocardial perfusion SPECT on a Vertex scanner. After transmission scanning, dynamic emission scans were initiated simultaneously with the injection of $555{\sim}740$ MBq $H_2^{15}O$. Hidden independent components can be extracted from the observed mixed data (PET images) by means of ICA algorithms. Ensemble learning is a variational Bayesian method that provides an analytical approximation to the parameter posterior using a tractable distribution. The variational approximation forms a lower bound on the ensemble likelihood, and maximizing this lower bound is achieved by minimizing the Kullback-Leibler divergence between the true posterior and the variational posterior. In this study, the posterior pdf was approximated by a rectified Gaussian distribution to incorporate a non-negativity constraint, which suits dynamic images in nuclear medicine. Blood flow was measured in nine regions: the apex, four areas in the mid wall, and four areas in the base wall. Myocardial perfusion SPECT scores and angiography results were compared with the regional blood flow. Results: Major cardiac components were separated successfully by the ensemble ICA method, and blood flow could be estimated in 15 of the 20 patients. Mean myocardial blood flow was $1.2{\pm}0.40$ ml/min/g at rest and $1.85{\pm}1.12$ ml/min/g under stress. Blood flow values obtained by one operator on two different occasions were highly correlated (r=0.99).
In the myocardium component image, the contrast between the left ventricle and the myocardium was 1:2.7 on average. Perfusion reserve was significantly different between regions with and without stenosis detected by coronary angiography (P<0.01). Among the 66 segments with stenosis confirmed by angiography, those showing a reversible perfusion decrease on perfusion SPECT showed lower perfusion reserve values on $H_2^{15}O$ PET. Conclusions: Myocardial blood flow could be estimated using an ICA method with ensemble learning. We suggest that ensemble ICA incorporating a non-negativity constraint is a feasible method for handling dynamic image sequences obtained by nuclear medicine techniques.
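The quantity minimized in the ensemble learning step above is the Kullback-Leibler divergence between the tractable variational posterior q and the true posterior p. A minimal numerical sketch of the discrete form of that divergence, using toy distributions rather than any PET posterior, is:

```python
import numpy as np

# Discrete KL divergence: KL(q || p) = sum_i q_i * log(q_i / p_i).
# This is the objective minimized in variational (ensemble) learning;
# q and p below are toy distributions, not posteriors from the study.

def kl_divergence(q, p):
    q = np.asarray(q, dtype=float)
    p = np.asarray(p, dtype=float)
    mask = q > 0                      # convention: 0 * log 0 = 0
    return float(np.sum(q[mask] * np.log(q[mask] / p[mask])))

q = [0.5, 0.3, 0.2]                   # variational approximation
p = [0.4, 0.4, 0.2]                   # "true" posterior

print(kl_divergence(q, p))            # positive whenever q != p
print(kl_divergence(q, q))            # exactly 0 when q matches p
```

Driving this divergence toward zero tightens the lower bound on the ensemble likelihood, which is how the variational approximation is fitted.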

An Analysis of the Psychiatric Characteristics of the Alopecia Areata in Female (여성 탈모증의 정신의학적 특성 분석)

  • Lee, Kil-Hong;Na, Chul;Lee, Young-Sik;Lee, Chang-Hoon;No, Byung-In;Hong, Chang-Kwon
    • Korean Journal of Psychosomatic Medicine
    • /
    • v.8 no.1
    • /
    • pp.31-45
    • /
    • 2000
  • Objectives : The present study was performed to reveal differences between female and male cases of alopecia in their alopecia related variables such as patterns of hair loss, psychiatric characteristics, associate illnesses, and methods of treatment, and to use them as basic materials for proper management and early prevention of the alopecia prone cases. Methods : In order to analysis the gender difference in hair losses, the subjects were divided into two subgroups as the 51 cases of female alopecia and the 42 cases of male alopecia, who had visited to the department of psychiatry consulted from the department of dermatology, Yongsan hopital, ChungAng University, Seoul, Korea, from January 1998 to December 1998. In data analysis, the subjects were statistically assesed by chi-squre test and analysis of varaiance, through SPSS-$PC^+$ 9.0V. Results : 1) Female subjects were more likely showed lower socio-economical level including lower eonomical level, lower educational level, or lower occupational level in their parent's job, were more likely to have larger number of siblings and to have many sisters comparison to the male cases. 2) Female subjects were more likely visited to the department of dermatology, more history of alopecia in their female family members, lesser history of alopecia in their male family members, more loss of hairs in vertex or frontal region of scalp, lesser loss of hairs in occipital region, and lesser nail changes in comparison to the male cases. 3) Female subjects were more suffered from intra-familial conflicts and economical changes, or their introverted personality makeup, lesser likely suffered from changes of business and health changes, and showed lesser conflicts related with poorer adaptaion in their job life. 
4) Female subjects were more likely diagnosed as depression or conversion disorders, more frequently complaint anxiety symptoms or depressive symptoms, higher level of anxiety index, lesser complaint somatization or obsessive compulsive symptoms, and lesser diagnosed as anxiety disorder in comparison to the male cases. 5) Female subjects were more likely tended to show personality makeup such as the introverted, the lie, the repressed, or the feminine trends than the male cases. 6) Female subjects were more significantly treated by antianxiety drug such as etizolam and dermatological therapies include tretinoin, and lesser treated by clotiazepam and prednicarbonate in comparison to the male cases. Conclusion : From the facts that The most important factors in developing hair loss in the female subjects in comparison to the male cases seems to be closely correlated with the serious psychopathology such as the presence of mental disorders including depression, the presence of complaining anxiety or depressive symptomatology, the presence of stressful life events such as intrafamilial life changes, and the presence of personality makeup such as the introverted, the lie, the repressed, or the feminine trends, the authors confirmed that dermatologists act as the primary care physician are in a unique position to recognize psychiatric comorbidity and execute meaningful intervention for female patients with the alopecia with psychiatrists.


Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.63-83
    • /
    • 2019
  • Investors prefer to look for trading points based on the shapes shown in charts rather than on complex analyses such as corporate intrinsic value analysis or technical indicator analysis. However, pattern analysis is difficult and has been computerized far less than users need. In recent years there have been many studies of stock price patterns using various machine learning techniques, including neural networks, in the field of artificial intelligence (AI). In particular, the development of IT has made it easier to analyze huge numbers of charts to find patterns that can predict stock prices. Although short-term price forecasting power has improved, long-term forecasting power remains limited, so such methods are used in short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier technology could not recognize, but they can be vulnerable in practice, because whether the discovered patterns are suitable for trading is a separate matter. These studies find a meaningful pattern, locate the points that match it, and then measure performance after n days, assuming a purchase at each such point; since this approach calculates virtual revenue, it can diverge considerably from reality. Whereas existing research tries to discover patterns with predictive power, this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple because they can be distinguished by five turning points. Although some of these patterns have been reported to have price predictability, no performance in the actual market has been reported. The simplicity of a five-turning-point pattern has the advantage of reducing the cost of increasing pattern recognition accuracy.
In this study, the 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented in a system, and only the one pattern with the highest success rate per group is selected for trading. Patterns with a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The measurement is realistic because it assumes that both the buy and the sell have actually been executed. We tested three ways of calculating the turning points. The first, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and then calculates the vertices. In the second, the high-low line zig-zag method, a high price that meets the n-day high price line is taken as a peak, and a low price that meets the n-day low price line is taken as a valley. In the third, the swing wave method, a central high price higher than the n high prices on its left and right is taken as a peak, and a central low price lower than the n low prices on its left and right is taken as a valley. The swing wave method was superior to the other methods in our tests; trading after confirming the completion of a pattern appears more effective than trading while the pattern is unfinished. Because the number of cases in this simulation was too large to search exhaustively for high-success patterns, genetic algorithms (GA) were the most suitable solution. We also ran the simulation using Walk-forward Analysis (WFA), which tests the training section and the application section separately, so we could respond appropriately to market changes. We optimized at the level of the stock portfolio, because optimizing variables for each individual stock risks over-optimization.
We therefore selected 20 constituent stocks to increase the effect of diversified investment while avoiding over-optimization, and tested the KOSPI market by dividing it into six categories. The small-cap portfolio was the most successful, and the high-volatility portfolio was second best. This shows that some price volatility is needed for patterns to take shape, but that more volatility is not always better.