• Title/Summary/Keyword: Graph Data


A Study on the Implement of AI-based Integrated Smart Fire Safety (ISFS) System in Public Facility

  • Myung Sik Lee;Pill Sun Seo
    • International Journal of High-Rise Buildings
    • /
    • v.12 no.3
    • /
    • pp.225-234
    • /
    • 2023
  • Even in the era of digital transformation, the safety sector still faces many problems that fail to prevent the occurrence or spread of human casualties. In an unexpected emergency, it is often difficult to respond with human physical ability alone. Human casualties continue to occur at construction sites, manufacturing plants, and multi-use facilities used by many people in everyday life. When normal judgment becomes impossible during an emergency at a site with many remaining safety blind spots, the existing manual guidance methods are difficult to rely on. New variable guidance technology, which combines artificial intelligence and digital twins, can help prevent casualties by processing, in real time, the large amounts of data needed to derive appropriate countermeasures, going beyond merely identifying which safety accident occurred in an unexpected crisis. When a simple control method that divides and monitors several CCTVs is digitally converted and combined with artificial intelligence and 3D digital twin control technology, an intelligence augmentation (IA) effect can be achieved that strengthens the safety decision-making ability required in real time. With the enforcement of the Serious Disaster Enterprise Punishment Act, it has become important to deploy a smart location guidance system that resolves the decision-making delays occurring in safety accidents at various industrial sites and strengthens the real-time decision-making ability of field workers and managers. The smart location guidance system that combines artificial intelligence and digital twins consists of AIoT HW equipment, wireless communication NW equipment, and an intelligent SW platform.
The intelligent SW platform consists of Builder, which supports digital twin modeling; Watch, which provides real-time control based on synchronization between real objects and digital twin models; and Simulator, which supports the development and verification of various safety management scenarios using intelligent agents. The smart location guidance system provides on-site monitoring using IoT equipment, CCTV-linked intelligent image analysis, intelligent operating procedures that support workflow modeling to immediately reflect the needs of the site, situational location guidance, and digital twin virtual fencing access control. This paper examines the limitations of traditional fixed, passive guidance methods, analyzes global technology development trends for overcoming them, identifies the digital transformation properties required to switch to intelligent variable smart location guidance, and explains the characteristics and components of the AI-based Integrated Smart Fire Safety (ISFS) system for public facilities.
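The digital twin "virtual fencing" access control mentioned above reduces, at its core, to testing whether a tracked position lies inside a defined zone. The following is a minimal sketch of such a containment check, not the paper's implementation; the fence polygon and worker position are hypothetical:

```python
def inside_fence(point, polygon):
    """Ray-casting point-in-polygon test for a 2-D virtual fence.

    `polygon` is a list of (x, y) vertices in order; `point` is (x, y).
    Counts how many polygon edges a horizontal ray from the point crosses:
    an odd count means the point is inside the fence.
    """
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Edge straddles the ray's height and lies to the right of the point?
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

# Hypothetical square restricted zone and two tracked positions.
fence = [(0, 0), (4, 0), (4, 4), (0, 4)]
```

A real system would run this check against live AIoT position streams and raise an alert on entry; this sketch only shows the geometric core.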

Exploring ESG Activities Using Text Analysis of ESG Reports -A Case of Chinese Listed Manufacturing Companies- (ESG 보고서의 텍스트 분석을 이용한 ESG 활동 탐색 -중국 상장 제조 기업을 대상으로-)

  • Wung Chul Jin;Seung Ik Baek;Yu Feng Sun;Xiang Dan Jin
    • Journal of Service Research and Studies
    • /
    • v.14 no.2
    • /
    • pp.18-36
    • /
    • 2024
  • As interest in ESG has increased, it is easy to find papers showing empirically that a company's ESG activities have a positive impact on its performance. However, research on which ESG activities companies should actually engage in is relatively scarce. Accordingly, this study systematically classifies companies' ESG activities and seeks to provide insight to companies planning new ones. It analyzes how Chinese manufacturing companies perform ESG activities based on their dynamic capabilities in the global economy and how their activities differ. The data consist of the annual ESG reports of 151 Chinese manufacturing companies listed on the Shanghai & Shenzhen Stock Exchanges and the ESG indicators of the China Securities Index Company (CSI). The study focuses on three research questions: first, whether ESG activities differ between companies with high ESG scores (TOP-25) and companies with low ESG scores (BOT-25); second, whether the ESG activities of the high-scoring companies changed over a 10-year period (2010-2019); and third, how the E/S/G keywords relate to one another under social network analysis. The results showed a significant difference in ESG activities between high and low scorers, while tracking the year-to-year activities of the TOP-25 companies revealed no change. For the third question, using the co-occurrence matrix technique, we visualized companies' ESG activities in a four-quadrant graph and set directions for ESG activities based on it.
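The co-occurrence matrix technique used in the third analysis amounts to counting how often two keywords appear together in the same report. A minimal sketch follows; the keywords are hypothetical stand-ins, not the study's actual ESG terms:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(docs):
    """Count how often each pair of keywords appears in the same document."""
    pairs = Counter()
    for kws in docs:
        # Sort so each unordered pair has one canonical key.
        for a, b in combinations(sorted(set(kws)), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical keyword lists extracted from three ESG reports.
docs = [
    ["emission", "energy", "recycling"],
    ["emission", "energy"],
    ["donation", "safety"],
]
m = cooccurrence(docs)
```

In a social network analysis, the counts in `m` would become edge weights between keyword nodes, from which centrality and the quadrant placement can be derived.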

Analysis of Plants Social Network on Island Area in the Korean Peninsula (한반도 도서지역의 식물사회네트워크 분석)

  • Sang-Cheol Lee;Hyun-Mi Kang;Seok-Gon Park
    • Korean Journal of Environment and Ecology
    • /
    • v.38 no.2
    • /
    • pp.127-142
    • /
    • 2024
  • This study aimed to understand the interrelationships between tree species in plant communities through Plant Social Network (PSN) analysis, using a large amount of vegetation data surveyed in island areas belonging to a warm-temperate forest zone. Species appearing in the evergreen broad-leaved climax forest community showed strong positive associations (+) with each other: Machilus thunbergii, Castanopsis sieboldii, and Ligustrum japonicum in the canopy layer; Pittosporum tobira and Ardisia japonica in the shrub layer; and the vines Trachelospermum asiaticum and Stauntonia hexaphylla. These species had negative or no positive associations with deciduous broad-leaved species because of large differences in site environments. Under modularization, the PSN sociogram divided into four groups; the evergreen broad-leaved species of Group I and the deciduous broad-leaved species of Group II showed high centrality and connectivity. The arrangement of species (nodes) and the grouping of connections in the sociogram can indirectly indicate environmental factors and the characteristics of plant communities, much like DCA. Species with high centrality and influence in the PSN included T. asiaticum, Eurya japonica, Lindera obtusiloba, and Styrax japonicus. These common species have wide ecological niches and appear to have the characteristics and survival strategies of opportunistic species that typically appear in forest gaps and damaged areas; they will play a major role in interspecies interactions and in structural and functional changes of plant communities. In the future, long-term research and in-depth discussion are needed to determine how these species actually influence plant community change through their interactions.
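The centrality analysis described above can be sketched with the simplest centrality measure, degree centrality: the fraction of other species a species is positively associated with. The edge list below is a hypothetical illustration echoing species named in the abstract, not the study's measured associations:

```python
def degree_centrality(edges):
    """Degree centrality: each node's link count divided by (n - 1) nodes."""
    nodes = {n for e in edges for n in e}
    deg = {n: 0 for n in nodes}
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    n = len(nodes)
    return {k: v / (n - 1) for k, v in deg.items()}

# Hypothetical positive associations (+) between four species.
edges = [
    ("M. thunbergii", "C. sieboldii"),
    ("M. thunbergii", "T. asiaticum"),
    ("C. sieboldii", "T. asiaticum"),
    ("T. asiaticum", "E. japonica"),
]
c = degree_centrality(edges)
```

A species linked to every other species, as `T. asiaticum` is here, gets centrality 1.0; PSN studies typically also use weighted or betweenness variants, which this sketch omits.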

INVESTIGATION OF THE EFFECT OF AN ANTIBIOTIC "P" ON POTATOES ("감자에 대한 항생제(抗生劑) 피마리신의 통계적(統計的) 효과(效果) 분석(分析)")

  • Kim, Jong-Hoon
    • Journal of Korean Society for Quality Management
    • /
    • v.5 no.2
    • /
    • pp.59-120
    • /
    • 1977
  • An antibiotic 'P', one of the products of Gist-Brocades N.V., is being tested by its research department as a fungicide on seed potatoes. For this testing they designed experiments with two control groups, one competitor's product, eight formulations of the antibiotic in different concentrations, and one mercury treatment that cannot be used in practice. The treated potatoes were planted in three regions where different conditions prevail. After several months the harvested potatoes were divided into groups according to their diameter, and potato disease was analysed and counted. These data were summarised in percentages and given to us for analysis. We approached and analysed the data by the following methods: (a) computation of the mean and standard deviation of the percentage of good results in each size group and treatment; (b) computation of the experimental errors by subtraction of each treatment mean from the observed data; (c) construction of the frequency table, and plotting of a histogram and a normal curve on the same graph to check normality; (d) plotting on normal probability paper and a chi-square test to check the goodness of fit to a normal curve; (e) tests for homogeneity of variance in each treatment with Cochran's test and Hartley's test; (f) analysis of variance to test the means by one-way classification; (g) graphs with upper and lower confidence limits to show the effect of the different treatments; (h) t-test and F-test on the two control means and variances to form a single control for Dunnett's test; (i) Dunnett's test and calculations for numerical comparison of the different treatments with one control. In region R, where the potatoes were planted, it was very dry this year and rather bad conditions for growing potatoes prevailed during the experimental period. The results of this investigation show that treatments No. 2, 3, and 4 are significantly different from the other treatments and from the control groups (untreated, in the natural state).
Treatment No. 2 is the unusable mercury formulation, so only Nos. 3 and 4, which have high concentrations of antibiotic 'P', gave a good effect on the potatoes. The competitor's product and the middle- and low-concentration formulations are not significantly different from the control groups in any size class. In region W, where the potatoes got the same treatments as in region R, better weather conditions prevailed and enough water was obtainable from the lake. The results in this region showed that treatments No. 2, 3, 4, and 5 are significantly different from the other treatments and the control groups. Again, No. 2 is the mercury treatment. Not only the highly concentrated formulations of antibiotic 'P' but also the competitor's product gave good results, although the effect of 'P' was better than that of the competitor's product. In region G, where the potatoes got the same treatments as in regions R and W and the climate conditions were equal to those of region R, most treatments were not significantly different from the control groups; only treatment No. 3 differed a little from the others, but not significantly. It seems that the difference between the results in the three regions was caused by conditions such as the nature of the soil, the degree of moisture, and the hours of sunshine, but we are not sure of that. In conclusion, antibiotic 'P' has a good effect on potatoes, but in most investigations a rather high concentration of 'P' was required in the formulations.


Development of a Stock Trading System Using M & W Wave Patterns and Genetic Algorithms (M&W 파동 패턴과 유전자 알고리즘을 이용한 주식 매매 시스템 개발)

  • Yang, Hoonseok;Kim, Sunwoong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.63-83
    • /
    • 2019
  • Investors prefer to look for trading points based on the shapes shown in charts rather than on complex analyses such as corporate intrinsic-value analysis or technical indicators. However, pattern analysis is difficult and has been computerized less than users need. In recent years there have been many studies of stock price patterns using machine learning techniques, including neural networks, in the field of artificial intelligence (AI), and the development of IT has made it easier to analyze huge numbers of charts to find patterns that can predict stock prices. Although the short-term forecasting power of prices has improved, long-term forecasting power remains limited, so such methods are used in short-term trading rather than long-term investment. Other studies have focused on mechanically and accurately identifying patterns that earlier technology could not recognize, but this can be vulnerable in practice, because whether the patterns found are suitable for trading is a separate question. Those studies find a meaningful pattern, locate a point that matches it, and then measure performance after n days, assuming a purchase at that point. Since this approach calculates virtual revenues, it can diverge considerably from reality. Whereas existing research tries to find patterns with predictive power, this study proposes to define the patterns first and to trade when a pattern with a high success probability appears. The M & W wave patterns published by Merrill (1980) are simple, because each can be distinguished by five turning points. Although some patterns have been reported to have price predictability, there were no performance reports from actual markets. The simplicity of a pattern consisting of five turning points has the advantage of reducing the cost of increasing pattern-recognition accuracy.
In this study, the 16 up-conversion patterns and 16 down-conversion patterns are reclassified into ten groups so that they can be easily implemented in a system, and only the one pattern with the highest success rate per group is selected for trading. Patterns with a high probability of success in the past are likely to succeed in the future, so we trade when such a pattern occurs. The measurement reflects a realistic situation because it assumes that both the buy and the sell have been executed. We tested three ways to calculate the turning points. The first, the minimum-change-rate zig-zag method, removes price movements below a certain percentage and then calculates the vertices. In the second, the high-low-line zig-zag method, a high price that meets the n-day high-price line is taken as a peak, and a low price that meets the n-day low-price line is taken as a valley. In the third, the swing wave method, a central high price higher than the n high prices on its left and right is taken as a peak, and a central low price lower than the n low prices on its left and right is taken as a valley. The swing wave method was superior to the others in our tests; trading after confirming the completion of a pattern appears more effective than trading while the pattern is unfinished. Because the number of cases in this simulation was too large to search exhaustively for patterns with high success rates, genetic algorithms (GA) were the most suitable solution. We also ran the simulation using walk-forward analysis (WFA), which separates the test section from the application section, so that we could respond appropriately to market changes. We optimized at the level of the stock portfolio, because optimizing variables for each individual stock risks over-optimization.
Therefore, we set the number of constituent stocks to 20 to gain the effect of diversified investment while avoiding over-optimization. We tested the KOSPI market by dividing it into six categories. The small-cap portfolio was the most successful, and the high-volatility portfolio was the second best. This shows that prices need some volatility for patterns to take shape, but the highest volatility is not necessarily best.
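The swing wave method described above marks a bar as a peak when it is higher than the n bars on each side, and as a valley when it is lower. A minimal sketch on a made-up price series (not the study's data):

```python
def swing_points(prices, n):
    """Return (peak indices, valley indices) by the swing wave rule:
    a peak is strictly higher than the n prices on each side,
    a valley strictly lower."""
    peaks, valleys = [], []
    for i in range(n, len(prices) - n):
        window = prices[i - n:i] + prices[i + 1:i + n + 1]
        if prices[i] > max(window):
            peaks.append(i)
        elif prices[i] < min(window):
            valleys.append(i)
    return peaks, valleys

# Hypothetical closing prices; with n=2 the rises at index 2 and 7 are peaks.
prices = [1, 2, 3, 2, 1, 2, 3, 4, 3, 2]
peaks, valleys = swing_points(prices, 2)
```

Five consecutive alternating turning points from this output would then be matched against the reclassified M & W pattern groups; note that, as the abstract observes, a turning point is only confirmed n bars after it occurs, which is why trading on completed patterns differs from trading mid-pattern.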

A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok;Lee, Hyun Jun;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.25-38
    • /
    • 2019
  • Selecting high-quality information that meets the interests and needs of users from the overflowing contents is becoming ever more important. In this flood of information, attempts are being made to reflect the user's intention in search results better, rather than treating an information request as a simple string, and large IT companies such as Google and Microsoft focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance in particular is a field where text data analysis is expected to be useful, because it constantly generates new information and the fresher the information is, the more valuable it is. Automatic knowledge extraction can be effective in areas such as the financial sector, where the information flow is vast and new information keeps emerging. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and it is difficult to extract good-quality triples. Second, producing labeled text data manually becomes harder as the extent and scope of the knowledge grow and its patterns are constantly updated. Third, performance evaluation is difficult because of the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome the limits described above and improve the semantic performance of searching stock-related information, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike other references, the purpose of this study is to extract knowledge entities related to individual stock items.
Various but relatively simple data-processing methods are applied in the presented model to solve the problems of previous research and to enhance the model's effectiveness. From these processes, this study has three significances. First, it presents a practical and simple automatic knowledge extraction method that can actually be applied. Second, it shows that performance evaluation is possible through a simple problem definition. Finally, the expressiveness of the knowledge is increased by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports about 30 individual stocks, the top 30 items by frequency of publication from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the other 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using the KKMA named-entity recognition tool. For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. Then, using the neural tensor network, one score function per stock is trained. Thus, when a new entity from the testing set appears, we can calculate its score with every score function, and the stock whose function yields the highest score is predicted as the item related to the entity. To evaluate the presented model, we check its predictive power and whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set.
As a result of the empirical study, the presented model shows 69.3% hit accuracy on a testing set of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance for each stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show performance far below average; this may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, needed to search for related information in accordance with the user's investment intention. Graph data are generated using only a named-entity recognition tool and applied to the neural tensor network without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain: most notably, the especially poor performance on a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
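The prediction and evaluation steps above (score a new entity with every stock's trained score function, take the argmax, then compute the hit ratio) can be sketched as follows. The linear score functions here are simplified stand-ins for the trained neural tensor networks, and all stock names and vectors are hypothetical:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def predict_stock(entity_vec, score_fns):
    """Score the entity with every stock's score function; highest score wins."""
    return max(score_fns, key=lambda stock: score_fns[stock](entity_vec))

def hit_ratio(entities, labels, score_fns):
    """Fraction of entities whose predicted stock matches the labeled stock."""
    hits = sum(predict_stock(e, score_fns) == y for e, y in zip(entities, labels))
    return hits / len(labels)

# One-hot entity vectors over a toy 4-entity vocabulary; one score function
# per (hypothetical) stock, standing in for the trained NTNs.
score_fns = {
    "StockA": lambda e: dot(e, [2.0, 1.5, 0.1, 0.0]),
    "StockB": lambda e: dot(e, [0.1, 0.0, 2.0, 1.5]),
}
entities = [[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0]]
labels = ["StockA", "StockB", "StockB"]  # last label deliberately mismatched
ratio = hit_ratio(entities, labels, score_fns)
```

The real model replaces each linear score with a bilinear tensor score over learned entity embeddings; the argmax-and-hit-ratio evaluation loop is the same.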

Corporate Bond Rating Using Various Multiclass Support Vector Machines (다양한 다분류 SVM을 적용한 기업채권평가)

  • Ahn, Hyun-Chul;Kim, Kyoung-Jae
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.157-178
    • /
    • 2009
  • Corporate credit rating is a very important factor in the market for corporate debt. Information concerning corporate operations is often disseminated to market participants through the changes in credit ratings that are published by professional rating agencies, such as Standard and Poor's (S&P) and Moody's Investor Service. Since these agencies generally require a large fee for the service, and the periodically provided ratings sometimes do not reflect the default risk of the company at the time, it may be advantageous for bond-market participants to be able to classify credit ratings before the agencies actually publish them. As a result, it is very important for companies (especially, financial companies) to develop a proper model of credit rating. From a technical perspective, the credit rating constitutes a typical, multiclass, classification problem because rating agencies generally have ten or more categories of ratings. For example, S&P's ratings range from AAA for the highest-quality bonds to D for the lowest-quality bonds. The professional rating agencies emphasize the importance of analysts' subjective judgments in the determination of credit ratings. However, in practice, a mathematical model that uses the financial variables of companies plays an important role in determining credit ratings, since it is convenient to apply and cost efficient. These financial variables include the ratios that represent a company's leverage status, liquidity status, and profitability status. Several statistical and artificial intelligence (AI) techniques have been applied as tools for predicting credit ratings. Among them, artificial neural networks are most prevalent in the area of finance because of their broad applicability to many business problems and their preeminent ability to adapt. 
However, artificial neural networks also have many defects, including the difficulty of determining the values of the control parameters and the number of processing elements in each layer, as well as the risk of over-fitting. Of late, because of their robustness and high accuracy, support vector machines (SVMs) have become popular as a solution for generating accurate predictions. An SVM's solution may be globally optimal because SVMs seek to minimize structural risk, whereas artificial neural network models may tend to find locally optimal solutions because they seek to minimize empirical risk. In addition, no parameters need to be tuned in SVMs, barring the upper bound for non-separable cases in linear SVMs. Since SVMs were originally devised for binary classification, however, they are not intrinsically geared to multiclass classifications such as credit ratings. Thus, researchers have tried to extend the original SVM to multiclass classification, and a variety of techniques for extending standard SVMs to multiclass SVMs (MSVMs) has been proposed in the literature. Only a few types of MSVM, however, have been tested in prior studies applying MSVMs to credit ratings. In this study, we examined six different MSVM techniques: (1) One-Against-One, (2) One-Against-All, (3) DAGSVM, (4) ECOC, (5) the method of Weston and Watkins, and (6) the method of Crammer and Singer. In addition, we examined the prediction accuracy of some modified versions of conventional MSVM techniques. To find the most appropriate MSVM technique for corporate bond rating, we applied all of them to a real-world case of credit rating in Korea; corporate bond rating is the most frequently studied area of credit rating for specific debt issues or other financial obligations. The research data were collected from National Information and Credit Evaluation, Inc., a major bond-rating company in Korea.
The data set comprises the bond ratings for the year 2002 and various financial variables for 1,295 companies from the manufacturing industry in Korea. We compared the results of these techniques with one another and with those of traditional methods for credit ratings, such as multiple discriminant analysis (MDA), multinomial logistic regression (MLOGIT), and artificial neural networks (ANNs). As a result, we found that DAGSVM with an ordered list was the best approach for predicting bond ratings. In addition, we found that the modified version of the ECOC approach can yield higher prediction accuracy for cases showing clear patterns.
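Among the MSVM schemes compared, One-Against-One trains a binary classifier for every pair of rating classes and predicts by majority vote. A minimal sketch of the voting layer follows; the nearest-center "classifiers" and score centers are hypothetical stand-ins for trained binary SVMs:

```python
from itertools import combinations

# Hypothetical per-class score centers standing in for trained models.
centers = {"AAA": 0.9, "BBB": 0.5, "D": 0.1}

def make_pair_clf(a, b):
    """Stand-in binary classifier: pick whichever class center is nearer."""
    return lambda x: a if abs(x - centers[a]) < abs(x - centers[b]) else b

classes = list(centers)
binary_clfs = {pair: make_pair_clf(*pair) for pair in combinations(classes, 2)}

def one_against_one_predict(x):
    """Majority vote over all k*(k-1)/2 pairwise classifiers."""
    votes = {c: 0 for c in classes}
    for clf in binary_clfs.values():
        votes[clf(x)] += 1
    return max(votes, key=votes.get)
```

One-Against-All instead trains one classifier per class against the rest, and DAGSVM evaluates the same pairwise classifiers along a directed acyclic graph, needing only k-1 evaluations per prediction rather than all pairs.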

Semantic Process Retrieval with Similarity Algorithms (유사도 알고리즘을 활용한 시맨틱 프로세스 검색방안)

  • Lee, Hong-Joo;Klein, Mark
    • Asia pacific journal of information systems
    • /
    • v.18 no.1
    • /
    • pp.79-96
    • /
    • 2008
  • One of the roles of Semantic Web services is to execute dynamic intra-organizational services, including the integration and interoperation of business processes. Since different organizations design their processes differently, retrieving similar semantic business processes is necessary to support inter-organizational collaboration. Most approaches for finding services that have certain features and support certain business processes have relied on some type of logical reasoning and exact matching. This paper presents our approach of using imprecise matching to expand the results of an exact-matching engine when querying the OWL (Web Ontology Language) MIT Process Handbook, an electronic repository of best-practice business processes. The Handbook is intended to help people (1) redesign organizational processes, (2) invent new processes, and (3) share ideas about organizational practices. To use the MIT Process Handbook for process-retrieval experiments, we exported it into an OWL-based format: we model the Process Handbook meta-model in OWL and export the processes in the Handbook as instances of that meta-model. Next, we needed a sizable number of queries with corresponding correct answers in the Process Handbook. Many previous studies devised artificial datasets composed of randomly generated numbers without real meaning and used subjective ratings for the correct answers and the similarity values between processes. To generate a semantics-preserving test data set, we create 20 variants of each target process that are syntactically different but semantically equivalent, using mutation operators; these variants serve as the correct answers for the target process. We devise diverse similarity algorithms based on the values of process attributes and the structures of business processes.
We use simple text-retrieval similarity algorithms such as TF-IDF and Levenshtein edit distance in our approaches, and we utilize a tree edit distance measure because semantic processes appear to have a graph structure. We also design similarity algorithms that consider the similarity of process structure, such as part processes, goals, and exceptions. Since we can identify the relationships between a semantic process and its subcomponents, this information can be used to calculate similarities between processes; Dice's coefficient and the Jaccard similarity measure are used to calculate the overlap between processes in diverse ways. We perform retrieval experiments to compare the performance of the devised similarity algorithms, measuring retrieval performance in terms of precision, recall, and the F measure, the harmonic mean of precision and recall. The tree edit distance shows the poorest performance on all measures. TF-IDF and the method combining TF-IDF with Levenshtein edit distance, both focused on the similarity of process names and descriptions, perform better than the other devised methods. In addition, we calculate a rank correlation coefficient, Kendall's tau-b, between the number of process mutations and the ranking of similarity values within each mutation set. In this experiment, similarity measures based on process structure, such as Dice's, Jaccard, and their derivatives, show a greater coefficient than measures based on the values of process attributes. However, the Lev-TFIDF-JaccardAll measure, which considers process structure and attribute values together, performs reasonably well in both experiments. For retrieving semantic processes, it therefore seems better to consider diverse aspects of process similarity, such as process structure and the values of process attributes.
We generate semantic process data and a retrieval-experiment dataset from the MIT Process Handbook repository, suggest imprecise query algorithms that expand the retrieval results of an exact-matching engine such as SPARQL, and compare the retrieval performance of the similarity algorithms. As limitations and future work, we need to perform experiments with datasets from other domains, and, since diverse measures yield many similarity values, we may find better ways to identify relevant processes by applying these values simultaneously.
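Three of the similarity measures named above have compact standard definitions; a minimal sketch of Levenshtein edit distance, Dice's coefficient, and Jaccard similarity follows (TF-IDF and tree edit distance are omitted for brevity):

```python
def levenshtein(s, t):
    """Minimum number of single-character edits turning s into t."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (cs != ct)))  # substitution
        prev = cur
    return prev[-1]

def jaccard(a, b):
    """|A ∩ B| / |A ∪ B| over the two item sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def dice(a, b):
    """2|A ∩ B| / (|A| + |B|) over the two item sets."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))
```

In the paper's setting, the set-based measures would be applied to sets of subcomponents (part processes, goals, exceptions) of two processes, and Levenshtein to their names and descriptions.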

Clinical Usefulness of PET-MRI in Lymph Node Metastasis Evaluation of Head and Neck Cancer (두경부암 림프절 전이 평가에서 PET-MRI의 임상적 유용성)

  • Kim, Jung-Soo;Lee, Hong-Jae;Kim, Jin-Eui
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.18 no.1
    • /
    • pp.26-32
    • /
    • 2014
  • Purpose: As PET-MRI, which has excellent soft-tissue contrast, has been developed as an integrated system, many studies of its clinical application compare it with existing imaging equipment. Because PET-MRI is actively used for head and neck cancer diagnosis in our hospital, lymph node metastasis was diagnosed before each patient's surgery, and the clinical usefulness of head and neck PET-MRI scans was evaluated against pathological findings and the evaluation of metastasis to surrounding tissue. Materials and Methods: The subjects were 100 head and neck cancer patients at SNUH from January to August 2013. ¹⁸F-FDG (5.18 MBq/kg) was injected intravenously and, after 60 min of rest, torso (body TIM coil, Vibe-Dixon) and dedicated (head-neck TIM coil, UTE, Dotarem injection) scans were conducted using a Biograph™ mMR 3T (SIEMENS, Munich). Data were reconstructed using iterative reconstruction, and lymph node metastasis was read on a Syngo.via workstation. Subsequently, pathological observations and the diagnoses before and after surgery were examined in SNUH's integrated medical information system (EMR, BESTCare). Each patient's diagnostic information was entered into a 2×2 decision matrix and classified as true positive (TP), true negative (TN), false positive (FP), or false negative (FN). Based on these classifications, the sensitivity, specificity, accuracy, false negative rate, and false positive rate were calculated. Results: In the PET-MRI scans of the head and neck cancer patients, there were 49 positive and 51 negative cases of lymph node metastasis, while the pathological results before and after surgery showed 46 positive and 54 negative cases.
In both tests, 34 cases were TP, receiving a positive lymph node metastasis diagnosis in both; 4 cases were FP, positive on the PET-MRI scan but negative on the pathological test; 1 case was FN, negative on the scan but positive on the pathological test; and 50 cases were TN, negative in both tests. Based on these data, the sensitivity of PET-MRI scanning in head and neck cancer patients was 97.8%, the specificity 92.5%, the accuracy 95%, the false negative rate 2.1%, and the false positive rate 7.0%. Conclusion: PET-MRI, which can exploit high tissue contrast and the functional information acquired with various sequences, was considered useful for planning treatment before and after surgery in head and neck cancer, and for evaluating recurrence, distant metastasis, and uncertain idiopathic cervical lymph node metastasis. The clinical usefulness of PET-MRI, supported by pathological testing, integrated diagnosis, and follow-up scans, was considered sufficient for it to serve as a standard diagnostic scan for head and neck cancer, and additional research on the development of optimal MR sequences and their clinical application is required.
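The metrics derived from the 2×2 decision matrix follow the standard definitions; a minimal sketch using the reported counts (TP=34, FP=4, FN=1, TN=50) follows. Note that the exact percentages these formulas yield depend on the counts and rounding used, so they approximate rather than reproduce the abstract's rounded figures:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic-test metrics from a 2x2 confusion matrix."""
    total = tp + fp + fn + tn
    return {
        "sensitivity": tp / (tp + fn),           # true positive rate
        "specificity": tn / (tn + fp),           # true negative rate
        "accuracy": (tp + tn) / total,
        "false_negative_rate": fn / (tp + fn),
        "false_positive_rate": fp / (tn + fp),
    }

m = diagnostic_metrics(tp=34, fp=4, fn=1, tn=50)
```

Sensitivity asks "of the truly metastatic nodes, how many did PET-MRI catch?", while specificity asks "of the truly clean nodes, how many did it correctly clear?"; the false negative and false positive rates are their complements.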


Development and Utility Evaluation of Portable Respiration Training Device for Image-guided Stereotactic Body Radiation Therapy (SBRT) (영상유도 체부정위방사선 치료시 호흡동조를 위한 휴대형 호흡연습장치의 개발 및 유용성 평가)

  • Hwang, Seon Bung;Park, Mun Kyu;Park, Seung Woo;Cho, Yu Ra;Lee, Dong Han;Jung, Hai Jo;Ji, Young Hoon;Kwon, Soo-Il
    • Progress in Medical Physics
    • /
    • v.25 no.4
    • /
    • pp.264-270
    • /
    • 2014
  • This study developed a portable respiratory training device to improve breathing stability, an important element in using the CyberKnife Synchrony respiratory tracking system, one of the typical stereotactic radiation therapy (SRT) devices. An interface was produced that lets users select one of two displays, a graph type or a bar type; an auditory system helps them anticipate the next breath by improving their sense of the rhythm of their respiratory period and induces comfortable respiration. For five participants, the individual respiratory period detected by a self-developed program was applied; signal data for 'guided respiration', induced through the auditory system, were acquired alongside signal data for 'free respiration', and usability was evaluated by comparing the mean deviations of respiratory period and respiratory amplitude. The deviation of the respiratory period decreased by 55.74±0.14% and that of the respiratory amplitude by 28.12±0.10% compared with free respiration, confirming the consistency and stability of breathing. SBRT for liver or lung cancer using the portable respiratory training device developed on the basis of these results is expected to help reduce treatment delays caused by respiratory instability and to improve treatment accuracy, and if it can be applied to respiratory training applications for Android-based portable devices in the future, further convenience and economic efficiency are expected.
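The stability comparison above (deviation of guided versus free breathing) can be sketched as comparing the variability of breath-to-breath periods, here via the coefficient of variation; the sample periods below are made up for illustration, not the study's recordings:

```python
def coeff_variation(periods):
    """Standard deviation of breathing periods relative to their mean."""
    mean = sum(periods) / len(periods)
    sd = (sum((p - mean) ** 2 for p in periods) / len(periods)) ** 0.5
    return sd / mean

def relative_reduction(free, guided):
    """Fractional drop in period variability under guided breathing."""
    return 1 - coeff_variation(guided) / coeff_variation(free)

free = [3.8, 4.6, 3.5, 5.0, 4.1]    # seconds, hypothetical free breathing
guided = [4.1, 4.3, 4.0, 4.2, 4.1]  # hypothetical audio-guided breathing
reduction = relative_reduction(free, guided)
```

A larger `reduction` means the guidance made the breathing cycle more regular, which is the property that shortens gating delays in respiratory-tracked SBRT.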