
The Evaluation of Attenuation Difference and SUV According to Arm Position in Whole Body PET/CT (전신 PET/CT 검사에서 팔의 위치에 따른 감약 정도와 SUV 변화 평가)

  • Kwak, In-Suk;Lee, Hyuk;Choi, Sung-Wook;Suk, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.2
    • /
    • pp.21-25
    • /
    • 2010
  • Purpose: Accurate PET imaging requires a transmission scan for attenuation correction. Attenuation is affected by the acquisition conditions and the patient's position, so quantitative accuracy may be degraded in the emission scan. This study measures the attenuation associated with different arm positions in whole-body PET/CT and comparatively analyzes the resulting SUV changes. Materials and Methods: A NEMA 1994 PET phantom was filled with $^{18}F$-FDG at a 4:1 activity concentration ratio between the insert cylinder and the background water. Phantom images were acquired with a 4-minute emission scan after a CT-based transmission scan. To simulate the arms positioned at the patient's sides rather than over the head, images were also acquired with two Teflon inserts additionally fixed at both sides of the phantom. The acquired images were reconstructed with an iterative reconstruction method (iterations: 2, subsets: 28) and CT-based attenuation correction, and a VOI was drawn on each image plane to measure the CT number and SUV and to comparatively analyze the axial uniformity (A.U. = standard deviation / average SUV) of the PET images. Results: In the phantom test, comparing the Teflon-fixed case with the Teflon-removed case, the CT number of the cylinder increased from -5.76 HU to 0 HU, while the SUV decreased from 24.64 to 24.29 and the A.U. from 0.064 to 0.052. The CT number of the background water increased from -6.14 HU to -0.43 HU, whereas the SUV decreased from 6.3 to 5.6 and the A.U. from 0.12 to 0.10. For the patient images, the CT number increased from 53.09 HU to 58.31 HU and the SUV decreased from 24.96 to 21.81 when the patient's arms were positioned over the head rather than lowered. Conclusion: With the arms-up protocol, the SUV of the phantom and patient images decreased by 1.4% and 9.2%, respectively. The study concludes that in whole-body PET/CT the arm position is not highly significant. Moreover, scanning with the arms raised over the head increases the probability of patient motion due to the long scanning time, increases $^{18}F$-FDG uptake in brown fat around the shoulder, and imposes shoulder pain and discomfort on the patient. Considering all of these factors, whole-body PET/CT can reasonably be performed with the patient's arms lowered.
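As defined in the abstract, SUV normalizes tissue activity by injected dose per unit body weight, and axial uniformity (A.U.) is the SD-to-mean ratio of the per-plane SUVs. A minimal sketch (illustrative, not the authors' code; the function names are ours):

```python
# SUV and axial uniformity (A.U. = standard deviation / average SUV)
# as described in the abstract. Illustrative sketch only.
from statistics import mean, stdev

def suv(activity_bq_per_ml, injected_dose_bq, body_weight_g):
    """Standardized uptake value: tissue activity concentration
    divided by injected dose per gram of body weight."""
    return activity_bq_per_ml / (injected_dose_bq / body_weight_g)

def axial_uniformity(plane_suvs):
    """A.U. = SD / mean of the SUVs measured on each image plane."""
    return stdev(plane_suvs) / mean(plane_suvs)
```

A perfectly uniform phantom would give an A.U. of 0; larger values indicate more plane-to-plane variation.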

Serum 25-Hydroxy Vitamin $D_3$ Analysis of Korean People (한국인 일반인의 혈청 25-Hydroxy Vitamin $D_3$의 분석)

  • Kim, Bo-Kyung;Jung, Hyun-Mi;Kim, Yun-Kyung;Kim, So-Young;Kim, Jee-Hyun
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.1
    • /
    • pp.133-137
    • /
    • 2010
  • Purpose: The main function of vitamin D is the mineralization of bone through increased absorption of calcium and phosphorus. When it is deficient in children, calcification of cartilage cannot occur, leading to rickets; in adults, deficiency leads to osteomalacia or osteoporosis. It is also widely believed that vitamin D can restrict the growth of cancer cells and prevent heart disease, which is partly supported by epidemiological research. While the optimal concentration of vitamin D is still being studied, 20-32 ng/mL is believed to be the ideal range. We therefore analyzed the serum 25-hydroxyvitamin D3 concentrations of Koreans. Materials and Methods: From February 20th, 2008 to April 21st, 2009, 2,800 serum samples collected from medical-examination subjects by Neodin Medical Institute were tested with a 25-hydroxyvitamin D assay ($^{125}I$ kit: Diasorin, USA) and analyzed by category (gender, age, season, region). Results: The average concentration was 20 ng/mL for males and 17.08 ng/mL for females. By age group, the concentrations for males were: 10-20, 18 ng/mL; 21-30, 17 ng/mL; 31-40, 19 ng/mL; 41-50, 21 ng/mL; 51-60, 22 ng/mL; 61-70, 22 ng/mL; 71-80, 22 ng/mL; and 81-90, 19.9 ng/mL. For females: 10-20, 16 ng/mL; 20-30, 15.26 ng/mL; 30-40, 16 ng/mL; 40-50, 17 ng/mL; 50-60, 19 ng/mL; 60-70, 19 ng/mL; 70-80, 19 ng/mL; and 80-90, 17 ng/mL. By season, subjects sampled from December to May showed 15.97 ng/mL, while those sampled from June to November showed 21.60 ng/mL. Regionally, for males from January to April: Seoul and Gyeonggi-Do, 15.52 ng/mL; Gangwon-Do, 15.33 ng/mL; Choongchung-Do, 18.03 ng/mL; Jeonla-Do, 18.68 ng/mL; Gyungsang-Do, 18.76 ng/mL; and Cheju-Do, 21.23 ng/mL. Conclusions: The vitamin D levels of Koreans are insufficient compared with the suggested range. Ultraviolet light, the main source of vitamin D, is critical; therefore, more outdoor activity would definitely help.
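The category-wise averaging described here (gender, age, season, region) amounts to a simple group-by over the serum measurements. A sketch with hypothetical toy records (not the study's data):

```python
# Group-by averaging of serum 25(OH)D3 values, as in the abstract's
# gender/age/season/region breakdowns. The records are toy values.
from collections import defaultdict
from statistics import mean

def mean_by_group(records, key):
    """Average serum concentration (ng/mL) per category value."""
    groups = defaultdict(list)
    for r in records:
        groups[r[key]].append(r["vitd_ng_ml"])
    return {g: round(mean(v), 2) for g, v in groups.items()}

samples = [
    {"sex": "M", "season": "winter", "vitd_ng_ml": 18.0},
    {"sex": "M", "season": "summer", "vitd_ng_ml": 22.0},
    {"sex": "F", "season": "winter", "vitd_ng_ml": 14.0},
    {"sex": "F", "season": "summer", "vitd_ng_ml": 20.2},
]
```

Running `mean_by_group(samples, "season")` mirrors the winter-vs-summer comparison reported in the Results.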

Image Quality Evaluation of CsI:Tl and Gd2O2S Detectors in the Indirect-Conversion DR System (간접변환방식 DR장비에서 CsI:Tl과 Gd2O2S의 검출기 화질 평가)

  • Kong, Changgi;Choi, Namgil;Jung, Myoyoung;Song, Jongnam;Kim, Wook;Han, Jaebok
    • Journal of the Korean Society of Radiology
    • /
    • v.11 no.1
    • /
    • pp.27-35
    • /
    • 2017
  • The purpose of this study was to investigate the characteristics of CsI:Tl and $Gd_2O_2S$ detectors with an indirect conversion method in a DR (digital radiography) system by obtaining images of a thick chest phantom, a medium-thickness thigh phantom, and a thin hand phantom and analyzing the SNR and CNR. When the SNR and CNR were measured according to the thickness of the subject, both were higher for the CsI:Tl detector than for the $Gd_2O_2S$ detector when the medium-thickness thigh phantom and the thin hand phantom were scanned. However, with the thick chest phantom, the SNR at 80~125 kVp and the CNR at 80~110 kVp were higher for the $Gd_2O_2S$ detector than for the CsI:Tl detector, and both SNR and CNR increased as the tube voltage increased. In the medium-thickness thigh phantom, the SNR and CNR of the CsI:Tl detector increased at 40~50 kVp and then decreased as the tube voltage increased, while those of the $Gd_2O_2S$ detector increased at 40~60 kVp and then decreased as the tube voltage increased. In the thin hand phantom, the SNR and CNR of the CsI:Tl detector decreased at low tube voltage, increased as the tube voltage increased, and decreased again at 100~110 kVp, while those of the $Gd_2O_2S$ detector decreased as the tube voltage increased. The MTF of the CsI:Tl detector was 6.02~90.90% higher than that of the $Gd_2O_2S$ detector at 0.5~3 lp/mm, and its DQE was 66.67~233.33% higher. In conclusion, although the CsI:Tl detector showed higher MTF and DQE, the cheaper $Gd_2O_2S$ detector had higher SNR and CNR than the more expensive CsI:Tl detector in a certain tube-voltage range with the thick chest phantom. For chest X-ray examinations, therefore, using the $Gd_2O_2S$ detector rather than the CsI:Tl detector can yield chest images of excellent quality. Moreover, price/performance should be considered from the user's viewpoint when choosing the detector type.
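The SNR and CNR compared throughout this abstract are ROI statistics. A common formulation (our assumption; the paper's exact operational definitions may differ) is mean/SD for SNR, and the ROI-background mean difference over the background SD for CNR:

```python
# SNR and CNR as region-of-interest statistics. This is a common
# convention, assumed here for illustration; the study's exact
# definitions may differ.
from statistics import mean, stdev

def snr(roi):
    """Signal-to-noise ratio of a region of interest."""
    return mean(roi) / stdev(roi)

def cnr(roi, background):
    """Contrast-to-noise ratio between an ROI and background."""
    return abs(mean(roi) - mean(background)) / stdev(background)
```

Higher kVp changes both the signal means and the noise SDs, which is why the SNR/CNR curves in the abstract rise and fall with tube voltage.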

The Diagnostic Yield and Complications of Percutaneous Needle Aspiration Biopsy for the Intrathoracic Lesions (경피적 폐생검의 진단성적 및 합병증)

  • Jang, Seung Hun;Kim, Cheal Hyeon;Koh, Won Jung;Yoo, Chul-Gyu;Kim, Young Whan;Han, Sung Koo;Shim, Young-Soo
    • Tuberculosis and Respiratory Diseases
    • /
    • v.43 no.6
    • /
    • pp.916-924
    • /
    • 1996
  • Background: Percutaneous needle aspiration biopsy (PCNA) is one of the most frequently used diagnostic methods for intrathoracic lesions. Previous studies have reported a wide range of diagnostic yields, from 28% to 98%; however, the yield has improved with accumulated experience and with better needles and image-guidance systems. We analyzed the results of PCNAs performed over one year to evaluate the diagnostic yield, the rate and severity of complications, and the factors affecting the yield. Method: 287 PCNAs performed in 236 patients from January 1994 to December 1994 were analyzed retrospectively. The intrathoracic lesions were targeted and aspirated with a 21-23 G Chiba needle under fluoroscopic guidance. Occasionally, a 19-20 G biopsy gun was used to obtain a core tissue specimen. The specimens were submitted for microbiologic and cytologic examination, and for histopathologic examination when core tissue was obtained. Diagnostic yields and complication rates for benign and malignant lesions were calculated from the patients' charts. Diagnostic yields according to the size and shape of the lesions were compared with the chi-square test (p<0.05). Results: Consolidative lesions accounted for 19.9% of cases and nodular or mass lesions for 80.1%. The lesion was located in the right upper lobe in 26.3% of cases, the right middle lobe in 6.4%, the right lower lobe in 21.2%, the left upper lobe in 16.8%, the left lower lobe in 10.6%, and the mediastinum in 1.3%; lesions spanning two or more lobes accounted for 17.4% of cases. In the final diagnoses there were 74 patients with benign lesions and 142 patients with malignant lesions, while no confirmative diagnosis was made in 22 patients despite all available diagnostic methods. Two patients had lung cancer and pulmonary tuberculosis concomitantly. In the 236 patients, PCNA diagnosed benign lesions in 62.2% (42 patients) of patients with such lesions and malignant lesions in 82.4% (117 patients). When the first PCNA failed to make a diagnosis, the procedure was repeated, and the cumulative diagnostic yield increased through serial PCNA to 44.6%, 60.8%, and 62.2% in benign lesions and to 73.4%, 81.7%, and 82.4% in malignant lesions. Thoracotomy was performed in 9 patients with benign lesions and 43 patients with malignant lesions; PCNA and thoracotomy gave the same pathologic result in 44.4% (4 patients) of the benign lesions and 58.1% (25 patients) of the malignant lesions. Thoracotomy confirmed malignant lesions in 4 patients with benign PCNA results and benign lesions in 2 patients with malignant PCNA results. Over the 287 procedures there were hemoptysis in 1.0% (3 cases), blood-tinged sputum in 19.2% (55 cases), pneumothorax in 12.5% (36 cases), and fever in 1.0% (3 cases). Hemoptysis and blood-tinged sputum required no therapy; 8 cases of pneumothorax required insertion of a classical chest tube or pigtail catheter; fever subsided within 48 hours in all cases. Diagnostic yield did not differ by lesion size or shape. Conclusion: PCNA shows a relatively high diagnostic yield with mild complications, but the accuracy of histologic diagnosis needs to be improved.
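The serial-PCNA cumulative yields quoted above follow directly from per-attempt counts of newly diagnosed patients. The counts 33, 12, and 1 below are back-calculated assumptions consistent with the reported benign-lesion percentages, not figures stated by the authors:

```python
# Cumulative diagnostic yield across repeated PCNA attempts.
# The per-attempt counts used in the test (33, 12, 1 newly diagnosed
# out of 74 benign-lesion patients) are back-calculated assumptions.
def cumulative_yield(new_diagnoses_per_attempt, n_patients):
    """Percent of patients diagnosed after each successive attempt."""
    total, yields = 0, []
    for d in new_diagnoses_per_attempt:
        total += d
        yields.append(round(100.0 * total / n_patients, 1))
    return yields
```

With these assumed counts, `cumulative_yield([33, 12, 1], 74)` reproduces the 44.6%, 60.8%, 62.2% sequence for benign lesions.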

Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

  • Ha, Eu Tteum;Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.123-132
    • /
    • 2013
  • As smartphones are equipped with various sensors such as an accelerometer, GPS, gravity sensor, gyroscope, ambient light sensor, and proximity sensor, there have been many research works on making use of these sensors to create valuable applications. Human activity recognition is one such application, motivated by welfare applications such as support for the elderly, measurement of calorie consumption, analysis of lifestyles, analysis of exercise patterns, and so on. One challenge in using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors is restricted, it is difficult to build a highly accurate activity recognizer, or classifier, because it is hard to distinguish between subtly different activities with only limited information. The difficulty becomes especially severe when the number of activity classes to be distinguished is large. In this paper, we show that a fairly accurate classifier can be built to distinguish ten different activities using only a single sensor's data, namely the smartphone accelerometer. Our approach to this ten-class problem is the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the binary tree, the set of all classes is split into two subsets of classes by a binary classifier. At a child node of the tree, a subset of classes is again split into two smaller subsets by another binary classifier. Continuing in this way, we obtain a binary tree in which each leaf node contains a single class. This binary tree can be viewed as a nested dichotomy that can make multi-class predictions.
Depending on how a set of classes is split into two subsets at each node, the final tree can differ. Since some classes may be correlated, a particular tree may perform better than the others, but we can hardly identify the best tree without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning and then combining the predictions made by each tree during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of the dichotomy, we used another ensemble classifier, the random forest. A random forest is built by repeatedly generating decision trees, each time with a different random subset of features, using a bootstrap sample. By combining bagging with random feature-subset selection, a random forest enjoys more diverse ensemble members than simple bagging. Overall, our ensemble of nested dichotomies can be seen as a committee of committees of decision trees that can handle a multi-class problem with high accuracy. The ten activity classes that we distinguish in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'. The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, minimum, and standard deviation of the vector magnitude within a time window covering the last 2 seconds. For experiments comparing the performance of END with other methods, accelerometer data were collected every 0.1 seconds for 2 minutes for each activity from 5 volunteers.
Among the 5,900 ($=5{\times}(60{\times}2-2)/0.1$) data points collected for each activity (the data for the first 2 seconds were discarded because they have no time-window data), 4,700 were used for training and the rest for testing. Although 'Walking Uphill' is often confused with similar activities, END was found to classify all ten activities with a fairly high accuracy of 98.4%. In comparison, the accuracies achieved by a decision tree, a k-nearest-neighbor classifier, and a one-versus-rest support vector machine were 97.6%, 96.5%, and 97.6%, respectively.
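The windowed features described above (acceleration-vector magnitude plus the max, min, and SD over the last 2 seconds, i.e. 20 samples at the 0.1 s interval) can be sketched as follows; this is an illustrative reading of the feature set, not the authors' code:

```python
# Per-sample feature extraction from smartphone accelerometer data,
# following the feature set described in the abstract. Sketch only.
from statistics import stdev

def magnitude(ax, ay, az):
    """Magnitude of the 3-axis acceleration vector at one time point."""
    return (ax * ax + ay * ay + az * az) ** 0.5

def window_features(magnitudes):
    """Features over a sliding window of magnitudes (20 samples = 2 s
    at a 0.1 s sampling interval)."""
    return {
        "current": magnitudes[-1],  # magnitude at the current time point
        "max": max(magnitudes),
        "min": min(magnitudes),
        "std": stdev(magnitudes),   # variation within the window
    }
```

These per-window feature dictionaries are what a classifier such as END with random-forest base learners would consume.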

Performance of Korean State-owned Enterprises Following Executive Turnover and Executive Resignation During the Term of Office (공기업의 임원교체와 중도퇴임이 경영성과에 미치는 영향)

  • Yu, Seungwon;Kim, Suhee
    • KDI Journal of Economic Policy
    • /
    • v.34 no.3
    • /
    • pp.95-131
    • /
    • 2012
  • This study examines whether executive turnover and executive resignation during the term of office affect the performance of Korean state-owned enterprises. Executive turnover here means any change of executives, including changes after the term of office, after consecutive terms, and during the term of office; 'resignation' denotes a change during the term of office, to distinguish it from turnover in general. The scope of the study is restricted to comprehensive executive change itself, irrespective of the term of office, and to resignation during the term of office; the natural change of an executive at the end of a term or after consecutive terms is therefore not examined separately. Spontaneous and forced resignations are not distinguished, as the distinction between the two is difficult to draw. The paper uses both the margin of return on assets and the industry-adjusted margin of return on assets as proxies for the performance of state-owned enterprises, considering their business nature but not their public nature. The sample covers five years (2004 to 2008) of 24 firms designated as public enterprises by the Korean government. The results are as follows. First, on average 45.1% of CEOs were changed per year during the sample period. The average CEO tenure was 2 years and 3 months, and 49.9% of the changed CEOs resigned during their term of office. On average 41.6% of internal auditors were changed per year; their average tenure was 2 years and 2 months, and 51.0% of the changed internal auditors resigned during their term. For outside directors, on average 38.2% were changed per year; the average tenure was 2 years and 7 months, and 25.4% of the changed outside directors resigned during their term.
These statistics show that numerous CEOs resigned before completing the three-year term of office. Considering also that the tenure of internal auditors and outside directors was shortened from 3 years to 2 years by the Act on the Management of Public Institutions (applied to executives appointed since April 2007), it appears that most internal auditors resigned during their term while most outside directors resigned at the end of their term. Second, there was no evidence that executives were changed during the term of office because of poor performance in the prior year. On the contrary, the prior-year performance of state-owned enterprises where an outside director resigned during the term was significantly higher than that of other state-owned enterprises. This suggests that the legal clauses on executive dismissal on grounds of poor performance did not work as intended, and that executive changes were instead driven by non-economic factors such as political motivations. Third, the results from a fixed-effect model show that performance deteriorated when CEOs or outside directors resigned during the term of office: a CEO's resignation during the term had a significantly negative effect on the margin of return on assets, and an outside director's resignation during the term significantly lowered the industry-adjusted margin of return on assets. These results suggest that executive changes in Korean state-owned enterprises were not made by objective economic standards such as management performance assessment, and that the failure to faithfully observe the legal executive term had a negative effect on the performance of the enterprises.
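The fixed-effect estimate reported here can be illustrated with the within transformation: demean the performance measure and the resignation indicator within each firm, so time-invariant firm effects drop out, then regress. A minimal one-regressor sketch with made-up numbers (function and variable names are ours, not the paper's specification):

```python
# Within (fixed-effect) estimator for a single regressor,
# illustrating the panel-regression idea in the abstract.
from collections import defaultdict

def fixed_effect_beta(panel):
    """panel: list of (firm_id, x, y) observations. Firm-specific
    means are removed, then the slope is OLS on the demeaned data."""
    by_firm = defaultdict(list)
    for firm, x, y in panel:
        by_firm[firm].append((x, y))
    num = den = 0.0
    for obs in by_firm.values():
        mx = sum(x for x, _ in obs) / len(obs)
        my = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            num += (x - mx) * (y - my)
            den += (x - mx) ** 2
    return num / den

# Toy panel where y = 2*x + a firm effect, so the slope is 2.0.
panel = [("A", 1, 12), ("A", 2, 14), ("A", 3, 16),
         ("B", 1, 2), ("B", 2, 4)]
```

Because the firm effect (10 for firm A, 0 for firm B) is removed by demeaning, the estimator recovers the common slope.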

Investigation on a Way to Maximize the Productivity in Poultry Industry (양계산업에 있어서 생산성 향상방안에 대한 조사 연구)

  • 오세정
    • Korean Journal of Poultry Science
    • /
    • v.16 no.2
    • /
    • pp.105-127
    • /
    • 1989
  • Although the poultry industry has developed greatly in recent years, it still needs further development compared with that of advanced countries. Since the Korean poultry market is expected to be opened in the near future, it is necessary to maximize productivity to reduce production costs, and to develop scientific technologies and management/organization systems to improve the quality of poultry production. The following is a summary of the poultry industry in Japan. 1. The poultry industry in Japan is largely specialized and commercialized, and its management system is integrated, cooperative, and developed into an industrialized, intensive style; it therefore has competitive power in international poultry markets. 2. Average egg production is 48-50 g per day (max. 54 g) and the feed requirement is 2.1-2.3. 3. The management/organization system is specialized; small-scale farmers form complexes and large-scale farmers are integrated.

Subject-Balanced Intelligent Text Summarization Scheme (주제 균형 지능형 텍스트 요약 기법)

  • Yun, Yeoil;Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.141-166
    • /
    • 2019
  • Recently, channels such as social media and SNS create enormous amounts of data, and the portion represented as unstructured text has grown geometrically. Because it is difficult to review all of this text, it is important to access it rapidly and grasp its key points. To support efficient understanding, many studies on text summarization for handling enormous amounts of text data have been proposed; in particular, many recent methods use machine learning and artificial intelligence algorithms to generate summaries objectively and effectively, an approach called "automatic summarization." However, most text summarization methods proposed to date construct the summary based on the frequency of contents in the original documents. Such summaries tend to omit low-weight subjects that are mentioned less often in the original text. If a summary covers only the major subjects, bias occurs and information is lost, so it becomes difficult to identify every subject the documents contain. To avoid this bias, one can summarize with a balance between the topics of a document so that all subjects can be identified, but an unbalanced distribution across subjects may still remain. To retain subject balance in the summary, it is necessary to consider the proportion of every subject in the original documents and to allocate portions to subjects evenly, so that sentences on minor subjects are also sufficiently included. In this study, we propose a "subject-balanced" text summarization method that secures balance across all subjects and minimizes the omission of low-frequency subjects. For subject-balanced summaries, we use two summary evaluation criteria: "completeness" and "succinctness." Completeness means that the summary should fully include the contents of the original documents, and succinctness means that the summary should contain minimal internal duplication. The proposed method has three phases. The first phase constructs subject term dictionaries. Topic modeling is used to calculate topic-term weights indicating how strongly each term is related to each topic. From these weights, the terms highly related to each topic can be identified, and the subjects of the documents emerge from topics composed of terms with similar meanings. A few terms that represent each subject well are then selected; we call these "seed terms." Because the seed terms alone are too few to describe each subject adequately, additional terms similar to the seed terms are needed for a well-constructed subject dictionary. Word2Vec is used for this word expansion: after Word2Vec modeling, word vectors are obtained, and the similarity between any two terms can be derived with cosine similarity. The higher the cosine similarity between two terms, the stronger the relationship between them. Terms with high similarity to the seed terms of each subject are selected, and after filtering the expanded terms, the subject dictionary is finally constructed. The second phase allocates a subject to every sentence of the original documents. To grasp the contents of the sentences, frequency analysis is conducted with the terms in the subject dictionaries, and the TF-IDF weight of each subject is calculated, showing how much each sentence is about each subject. Because TF-IDF weights can grow without bound, the weights of each subject across sentences are normalized to values between 0 and 1. Each sentence is then assigned the subject with the maximum TF-IDF weight, and sentence groups are constructed for each subject. The last phase is summary generation. Sen2Vec is used to compute the similarity between subject sentences and form a similarity matrix; by repeatedly selecting sentences, it is possible to generate a summary that fully includes the contents of the original documents while minimizing duplication within the summary itself. For evaluation of the proposed method, 50,000 TripAdvisor reviews were used to construct the subject dictionaries and 23,087 reviews were used to generate summaries. A comparison between the proposed method's summary and a frequency-based summary verified that the proposed method's summary better retains the balance of all subjects originally present in the documents.
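The second phase's two steps, normalizing the per-subject TF-IDF weights to [0, 1] and assigning each sentence to its maximum-weight subject, can be sketched as below. Min-max scaling is our assumption for the 0-to-1 normalization, and the subject names are hypothetical:

```python
# Phase-2 sketch: 0-1 normalization of TF-IDF weights (min-max
# scaling assumed) and maximum-weight subject allocation.
def normalize_01(weights):
    """Scale one subject's TF-IDF weights across sentences to [0, 1]."""
    lo, hi = min(weights), max(weights)
    return [(w - lo) / (hi - lo) for w in weights]

def allocate_subject(sentence_weights):
    """Assign a sentence to the subject with the maximum TF-IDF weight."""
    return max(sentence_weights, key=sentence_weights.get)
```

Grouping sentences by `allocate_subject` yields the per-subject sentence groups from which the Sen2Vec-based selection then draws.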

Analysis of the ESD and DAP According to the Change of the Cine Imaging Condition of Coronary Angiography and Usefulness of SNR and CNR of the Images: Focusing on the Change of Tube Current (관상동맥 조영술(Coronary Angiography)의 씨네(cine) 촬영조건 변화에 따른 입사표면선량(ESD)과 흡수선량(DAP) 및 영상의 SNR·CNR 유용성 분석: 관전류 변화를 중점으로)

  • Seo, Young Hyun;Song, Jong Nam
    • Journal of the Korean Society of Radiology
    • /
    • v.13 no.3
    • /
    • pp.371-379
    • /
    • 2019
  • The purpose of this study was to investigate the effect of changes in the X-ray conditions on the entrance surface dose (ESD) and dose area product (DAP) in the cine imaging of coronary angiography (CAG), and to analyze the usefulness of the condition change for dose and image quality by measuring the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of the angiographic images with the ImageJ program. Data were collected from 33 patients (24 males and 9 females) who underwent CAG at our hospital from November 2017 to March 2018. For the imaging conditions and data acquisition, the ESD and DAP of group A, with a high tube current of 397.2 mA, and group B, with a low tube current of 370.7 mA, were obtained retrospectively for comparison and analysis. For the SNR and CNR measured via ImageJ, the values were derived by substituting the acquired data into the formula. The correlations among ESD, DAP, SNR, and CNR according to the change in the imaging conditions were analyzed with the SPSS statistical software. The differences between groups A and B in ESD (A: $483.5{\pm}60.1$; B: $464.4{\pm}39.9$) and DAP (A: $84.3{\pm}10.7$; B: $81.5{\pm}7$) were not statistically significant (p>0.05). For the SNR and CNR based on ImageJ, the SNR ($5.451{\pm}0.529$) and CNR ($0.411{\pm}0.0432$) of the images obtained via left coronary artery (LCA) imaging in group B differed by $0.475{\pm}0.096$ and $-0.048{\pm}0.0$, respectively, from the SNR ($4.976{\pm}0.433$) and CNR ($0.459{\pm}0.0431$) of the LCA in group A; however, these differences were not statistically significant (p>0.05).
For the SNR and CNR obtained via right coronary artery (RCA) imaging, the SNR ($4.731{\pm}0.773$) and CNR ($0.354{\pm}0.083$) of group A were higher by $1.491{\pm}0.405$ and $0.188{\pm}0.005$, respectively, than the SNR ($3.24{\pm}0.368$) and CNR ($0.166{\pm}0.033$) of group B; of these, the CNR difference was statistically significant (p<0.05). In the correlation analysis, statistically significant results were found for SNR (LCA) and CNR (LCA); SNR (RCA) and CNR (RCA); ESD and DAP; ESD and sec; DAP and CNR (RCA); and DAP and sec (p<0.05). From the evaluation of image quality and the usefulness of the dose change, the SNR and CNR increased in the RCA images of CAG obtained with the higher mA. Given that the CNR difference was statistically significant, it is believed that image contrast can be further improved by increasing the mA in RCA imaging.
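The correlation analysis referred to above rests on the Pearson coefficient. A minimal stdlib implementation (illustrative; not the SPSS procedure the study used, which also supplies the p-values):

```python
# Pearson correlation coefficient, as used conceptually in the
# abstract's correlation analysis. Illustrative sketch only.
def pearson_r(x, y):
    """Pearson r between two equal-length numeric series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

Values near +1 or -1 indicate a strong linear relationship, such as the ESD-DAP pairing reported as significant.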

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.191-207
    • /
    • 2021
  • Mobile communications have evolved rapidly over the decades, focusing mainly on speed-ups to meet the growing data demands from 2G to 5G. With the start of the 5G era, efforts are being made to provide customers with various services, such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change the environment of our lives and industries as a whole. To deliver those services, reduced latency and high reliability, on top of high data rates, are critical for real-time services. 5G has accordingly paved the way for service delivery with a maximum speed of 20 Gbps, a delay of 1 ms, and $10^6$ connected devices per $km^2$. In particular, in intelligent traffic control systems and services using vehicle-based Vehicle-to-X (V2X) communication, such as traffic control, the reduction of delay and the reliability of real-time services are very important in addition to high data speed. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting their use indoors. It is therefore difficult to overcome these constraints with existing networks. The underlying centralized SDN also has limited capability for delay-sensitive services, because communication with many nodes overloads its processing. SDN, a structure that separates control-plane signals from data-plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major determinant of delay.
Since SDNs with conventional centralized structures have difficulty meeting the desired delay level, the optimal size of an SDN for information processing should be studied. SDNs thus need to be partitioned at a certain scale into a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of such networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even in the worst case. In these SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round-trip delay (RTD), and the data processing time of the SDN are highly correlated with the overall delay. Of these, RTD is not a significant factor because the link is fast enough and contributes less than 1 ms of delay, but the information change cycle and the SDN's data processing time greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and, through simulation, analyze the correlation with the cell layer from which the vehicle should request relevant information according to the information flow. For the simulation, since the data rate of 5G is high enough, we assume that information supporting neighboring vehicles reaches the car without errors. Furthermore, we assume 5G small cells of 50-250 m in cell radius and vehicle speeds of 30-200 km/h, in order to examine the network architecture that minimizes the delay.
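With the stated cell radii (50-250 m) and vehicle speeds (30-200 km/h), the time a vehicle spends inside one small cell bounds how often cell-level information must be refreshed. A simple worst-case estimate (our illustration, assuming the vehicle crosses the full cell diameter):

```python
# Worst-case cell dwell time for the abstract's parameter ranges.
# Our illustration: the vehicle is assumed to cross the diameter.
def dwell_time_s(cell_radius_m, speed_kmh):
    """Seconds a vehicle spends crossing a small cell's diameter."""
    speed_ms = speed_kmh / 3.6          # km/h -> m/s
    return 2 * cell_radius_m / speed_ms
```

At the extremes, a 50 m cell at 200 km/h is crossed in under 2 seconds, while a 250 m cell at 30 km/h takes a minute, which is why the information change cycle dominates the delay budget at high speed.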