• Title/Summary/Keyword: Software Quality Evaluation

Quantitative Evaluation of the Accuracy of 3D Imaging with Multi-Detector Computed Tomography Using Human Skull Phantom (두개골 팬텀을 이용한 다검출기 CT 3차원 영상에서의 거리측정을 통한 정량적 영상특성 평가)

  • 김동욱;정해조;김새롬;유영일;김기덕;김희중
    • Progress in Medical Physics
    • /
    • v.14 no.2
    • /
    • pp.131-140
    • /
    • 2003
  • As the importance of accuracy in measuring 3-D anatomical structures continues to be stressed, an objective and quantitative method of assessing the image quality and accuracy of 3-D volume-rendered images is required. The purpose of this study was to evaluate the quantitative accuracy of 3-D rendered images obtained with MDCT, scanned with various scanning parameters (scan mode, slice thickness, and reconstruction slice thickness). Twelve clinically significant points that play an important role for the craniofacial bone in plastic surgery and dentistry were marked on the surface of a dry human skull. The direct distances between the reference points were defined as gold standards to assess the measurement errors of the 3-D images. We then scanned the specimen with acquisition parameters of 300 mA, In kVp, and 1.0 sec scan time in axial and helical scan modes (pitch 3:1 and 6:1) at 1.25 mm, 2.50 mm, 3.75 mm, and 5.00 mm slice thicknesses. We performed 3-D visualizations and distance measurements with volumetric analysis software and statistically evaluated the quantitative accuracy of the distance measurements. The accuracies of distance measurements on the 3-D images acquired with 1.25, 2.50, 3.75, and 5.00 mm slice thickness were 48%, 33%, 23%, and 14%, respectively, and those of the images reconstructed at 1.25 mm were 53%, 41%, 43%, and 36%, respectively. Meanwhile, the differences in the accuracy of the distance measurements of 3-D images reconstructed with 1.25 mm thickness were statistically insignificant (P-value<0.05). In conclusion, slice thickness, rather than scan mode, influenced the quantitative accuracy of distance measurements in 3-D rendered images with MDCT. The quantitative analysis of distance measurements may be a useful tool for evaluating the accuracy of 3-D rendered images used in diagnosis, surgical planning, and radiotherapeutic treatment.

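The accuracy figures in the abstract above come from comparing point-to-point distances measured on the 3-D rendered images against direct (gold-standard) measurements between the same reference points. A minimal sketch of that error calculation, using hypothetical distance values rather than the study's data:

```python
# Sketch of the distance-accuracy evaluation: measured distances on a
# 3-D rendered image are compared against gold-standard direct
# measurements. All numeric values below are hypothetical.

def relative_error_pct(gold: float, measured: float) -> float:
    """Percent error of an image-derived distance vs. the gold standard."""
    return abs(measured - gold) / gold * 100.0

# Hypothetical gold-standard and image-derived distances (mm) between
# craniofacial reference points.
gold_standards = [42.0, 61.5, 88.2]
measured = [42.5, 60.9, 89.0]

errors = [relative_error_pct(g, m) for g, m in zip(gold_standards, measured)]
mean_error = sum(errors) / len(errors)
print(f"mean relative error: {mean_error:.2f}%")
```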
The Study on the Effect of Target Volume in DQA based on MLC log file (MLC 로그 파일 기반 DQA에서 타깃 용적에 따른 영향 연구)

  • Shin, Dong Jin;Jung, Dong Min;Cho, Kang Chul;Kim, Ji Hoon;Yoon, Jong Won;Cho, Jeong Hee
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.32
    • /
    • pp.53-59
    • /
    • 2020
  • Purpose: The purpose of this study is to compare and analyze the difference between the MLC log file-based software (Mobius) and the conventional phantom-ionization chamber (ArcCheck) dose verification method according to changes in target volume. Materials and methods: Twelve plans were created with sphere-shaped targets of radius 0.25 cm, 0.5 cm, 1 cm, 2 cm, 3 cm, 4 cm, 5 cm, 6 cm, 7 cm, 8 cm, 9 cm, and 10 cm, and dose verification using Mobius and ArcCheck was conducted three times for each. The irradiated data were compared and analyzed using the point-dose error value and the gamma passing rate (3%/3mm) as evaluation indicators. Results: The Mobius point-dose error values were -9.87% at a radius of 0.25 cm and -4.39% at 0.5 cm, and the error value was within 3% for the remaining target volumes. The gamma passing rate was 95% at a radius of 9 cm and 93.9% at 10 cm, and a passing rate of more than 95% was shown for the remaining target volumes. With ArcCheck, the average point-dose error value was about 2% for all target volumes, and the gamma passing rate was 98% or more for all target volumes. Conclusion: For small targets with a radius of 0.5 cm or less, or large targets with a radius of 9 cm or more, considering the uncertainty of MLC log file-based DQA, it is desirable to verify dose delivery through a comprehensive analysis that uses phantom-ionization chamber DQA in a complementary way, covering point dose, gamma index, DVH, and target coverage.
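The two evaluation indicators used above, point-dose error and the 3%/3mm gamma passing rate, can be sketched as follows. This is a simplified 1-D global gamma analysis over hypothetical dose profiles, not the actual algorithm of Mobius or ArcCheck:

```python
import math

def point_dose_error_pct(measured: float, planned: float) -> float:
    """Signed point-dose error relative to the planned dose, in percent."""
    return (measured - planned) / planned * 100.0

def gamma_passing_rate(ref_pos, ref_dose, eval_pos, eval_dose,
                       dd_pct=3.0, dta_mm=3.0):
    """Simplified 1-D global gamma analysis (default 3%/3mm).

    For each reference point, gamma is the minimum combined dose/distance
    deviation over all evaluated points; a point passes when gamma <= 1.
    """
    d_norm = dd_pct / 100.0 * max(ref_dose)  # global dose criterion
    passed = 0
    for xr, dr in zip(ref_pos, ref_dose):
        gamma = min(
            math.sqrt(((xe - xr) / dta_mm) ** 2 + ((de - dr) / d_norm) ** 2)
            for xe, de in zip(eval_pos, eval_dose)
        )
        passed += gamma <= 1.0
    return passed / len(ref_pos) * 100.0

# Hypothetical 1-D profiles: identical distributions pass everywhere.
pos = [0.0, 1.0, 2.0, 3.0]
dose = [95.0, 100.0, 98.0, 90.0]
print(point_dose_error_pct(97.0, 100.0))         # -3.0
print(gamma_passing_rate(pos, dose, pos, dose))  # 100.0
```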

Evaluation of the Usefulness of a Motion Correction Technique in PET/CT Examinations (PET/CT 검사 시 움직임 보정 기법의 유용성 평가)

  • Yeong-Hak Jo;Se-Jong Yoo;Seok-Hwan Bae;Jong-Ryul Seon;Seong-Ho Kim;Won-Jeong Lee
    • Journal of the Korean Society of Radiology
    • /
    • v.18 no.1
    • /
    • pp.45-52
    • /
    • 2024
  • In this study, to prevent image quality deterioration and reading errors caused by patient movement in PET/CT examinations, which use radioisotopes in medical institutions to examine cancer and other diseases, we checked the degree to which respiratory motion is corrected by the AI-based Motion Free software, evaluated its usefulness, and conducted a study toward clinical application. The experimental method was to use an RPM phantom, injecting the radioisotope 18F-FDG into a vacuum vial and into differently sized spheres of a NEMA IEC body phantom, and to acquire images while driving the motion so that the radioisotope behaved like a lesion moving during respiration. The vacuum vials had different degrees of movement at different positions, and the differently sized spheres of the NEMA IEC body phantom produced lesions of different sizes. From the acquired images, the lesion volume, maximum SUV, and average SUV were each measured to quantitatively evaluate the degree of motion correction by Motion Free. The average-SUV error rate of vacuum vial A, with a large degree of movement, was reduced by 23.36%, and that of vacuum vial B, with a small degree of movement, by 29.3%. The average-SUV error rates at the 37 mm and 22 mm spheres of the NEMA IEC body phantom were reduced by 29.3% and 26.51%, respectively. The average of the four measured error rates decreased by 30.03%, indicating a more accurate average SUV value. In this study, only two-dimensional movements could be produced; to obtain more accurate data, a phantom that can reproduce the actual breathing motion of the human body should be used, and if a more diverse range of movement were configured, a more accurate evaluation of usefulness could be made.
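The error-rate reductions reported above compare the mean SUV of a moving target, with and without motion correction, against a motion-free reference. A minimal sketch with hypothetical SUV values (not the study's measurements):

```python
def suv_error_rate_pct(measured_suv: float, true_suv: float) -> float:
    """Error of a measured mean SUV against the motion-free reference."""
    return abs(measured_suv - true_suv) / true_suv * 100.0

# Hypothetical values: a static acquisition gives the reference mean SUV;
# moving acquisitions are evaluated with and without motion correction.
true_suv = 5.0
uncorrected = 3.4   # motion blur spreads activity and lowers the mean SUV
corrected = 4.6     # after motion correction

err_before = suv_error_rate_pct(uncorrected, true_suv)  # ≈ 32.0 %
err_after = suv_error_rate_pct(corrected, true_suv)     # ≈ 8.0 %
reduction = err_before - err_after                      # ≈ 24 percentage points
print(f"error rate reduced by {reduction:.1f} percentage points")
```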

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.191-207
    • /
    • 2021
  • Up to this day, mobile communications have evolved rapidly over the decades, from 2G to 5G, mainly focusing on speed-up to meet growing data demands. With the start of the 5G era, efforts are being made to provide customers with various services, such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change the environment of our lives and industries as a whole. To provide those services, on top of high-speed data, reduced latency and high reliability are critical for real-time services. Thus, 5G has paved the way for service delivery with a maximum speed of 20 Gbps, a delay of 1 ms, and 10⁶ connected devices per ㎢. In particular, in intelligent traffic control systems and services using vehicle-based Vehicle-to-X (V2X) communication, such as traffic control, reduced delay and high reliability for real-time services are very important in addition to high data rates. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their range and prevent them from penetrating walls, restricting their use indoors. It is therefore difficult to overcome these constraints under existing networks. The underlying centralized SDN also has limited capability in offering delay-sensitive services, because communication with many nodes overloads its processing. SDN, a structure that separates control-plane signaling from data-plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major variable of delay.
Since SDNs with the usual centralized structure have difficulty meeting the desired delay level, studies on the optimal size of SDNs for information processing should be conducted. SDNs therefore need to be separated on a certain scale to construct a new type of network that can efficiently respond to dynamically changing traffic and provide high-quality, flexible services. Moreover, the structure of these networks is closely related to ultra-low latency, high confidence, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In such SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round-trip delay (RTD), and the data processing time of the SDN are highly correlated with the delay. Of these, the RTD is not a significant factor, because the link is fast enough and contributes less than 1 ms of delay, but the information change cycle and the data processing time of the SDN greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; that is a case in point where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and, through simulation, analyze its correlation with the cell layer from which the vehicle should request relevant information according to the information flow. For the simulation, since the data rate of 5G is high enough, we assumed that the information supporting neighboring vehicles reaches the car without errors. Furthermore, we assumed 5G small cells with radii of 50–250 m, and vehicle speeds of 30–200 km/h were considered in order to examine the network architecture that minimizes the delay.
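The simulation parameters above (cell radius 50–250 m, vehicle speed 30–200 km/h) bound how long a vehicle dwells in a single small cell, and hence how often cell-level information must be refreshed. A back-of-the-envelope sketch of that arithmetic (this dwell-time model is my own assumption, not the paper's simulation):

```python
def dwell_time_ms(cell_radius_m: float, speed_kmh: float) -> float:
    """Time a vehicle spends crossing one small cell along its diameter."""
    speed_ms = speed_kmh / 3.6              # km/h -> m/s
    return 2 * cell_radius_m / speed_ms * 1000.0

# Parameter ranges from the abstract: radius 50-250 m, speed 30-200 km/h.
worst = dwell_time_ms(50, 200)   # smallest cell, fastest vehicle
best = dwell_time_ms(250, 30)    # largest cell, slowest vehicle
print(f"cell dwell time: {worst:.0f} ms (worst) to {best:.0f} ms (best)")
# Even the worst case is far above the sub-1 ms RTD, so the information
# change cycle and SDN processing time dominate the delay budget.
```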

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.63-86
    • /
    • 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensor networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet the danger of personal information leakage is increasing, because the data retrieved by the sensors usually contain private information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns have also increased, such as concerns over the unrestricted availability of context information. These privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy should be revised according to the characteristics of new technologies. However, previous information privacy factors for context-aware applications have at least two shortcomings. First, there has been little overview of the technology characteristics of context-aware computing. Existing studies have focused on only a small subset of the technical characteristics of context-aware computing. Therefore, there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications.
Second, user surveys have been widely used to identify factors of information privacy in most studies, despite the limitations of users' knowledge and experience of context-aware computing technology. To date, since context-aware services have not yet been widely deployed on a commercial scale, only very few people have prior experience with context-aware personalized services. It is difficult to build users' knowledge of context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animations, etc. Consequently, conducting a survey on the assumption that the participants have sufficient experience with or understanding of the technologies shown in the survey may not be valid. Moreover, some surveys are based solely on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concern in context-aware personalized services is therefore highly needed. Hence, the purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-ordered list of information privacy concern factors. We consider overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from the experts and to produce a rank-ordered list. It therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority. An international panel of researchers and practitioners with expertise in the fields of privacy and context-aware systems was involved in our research. The formatting of the Delphi rounds faithfully followed the procedure for Delphi studies proposed by Okoli and Pawlowski.
This involved three general rounds: (1) brainstorming for important factors; (2) narrowing down the original list to the most important ones; and (3) ranking the list of important factors. For the brainstorming round only, experts were treated as individuals, not as panels. Adapting the process from Okoli and Pawlowski, we outlined how the study was administered and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of exclusive factors for information privacy concern in context-aware personalized services. In the first round, the respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern. To assist them, some of the main factors found in the literature were presented to the participants. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors drawn from the literature survey. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factor in order to determine the final sub-factors from the candidates. The final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence. To do so, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, the experts reported that context data collection and the highly identifiable level of identical data are the most important main factor and sub-factor, respectively.
Additional important sub-factors included the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ as follows from those proposed in other studies. First, the concern factors differ from those in existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information. Our study helped to clarify these sometimes vague issues by determining which privacy concern issues are viable based on the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected the specific characteristics with a higher potential to increase users' privacy concerns. Second, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with professionals' opinions as to the extent to which users have privacy concerns. The traditional questionnaire method was not selected because users were considered to lack understanding of and experience with the new technology underlying context-aware personalized services. For understanding users' privacy concerns, the professionals in the Delphi questionnaire process selected context data collection, tracking and recording, and the sensor network as the most important factors among the technological characteristics of context-aware personalized services.
In the creation of context-aware personalized services, this study demonstrates the importance and relevance of determining an optimal methodology, and of deciding which technologies, in what sequence, are needed to acquire which types of users' context information. Along with the development of context-aware technology, most studies focus on which services and systems should be provided and developed by utilizing context information. However, the results of this study show that, in terms of users' privacy, it is necessary to pay greater attention to the activities that acquire context information. To build on the evaluation of the sub-factors, additional studies would be necessary on approaches to reducing users' privacy concerns toward technological characteristics such as the highly identifiable level of identical data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which is related to output. The results show that delivery and display, which present services to users in context-aware personalized services moving toward the anywhere-anytime-any-device concept, have come to be regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase the service success rate and, hopefully, user acceptance of those services. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
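The concordance analysis mentioned above measures how consistently the experts rank the factors across Delphi rounds; Kendall's coefficient of concordance (W) is the usual statistic for this, though the abstract does not name which statistic was used, so this is an assumption. A minimal sketch:

```python
def kendalls_w(rankings):
    """Kendall's coefficient of concordance for m judges ranking n items.

    rankings: list of m lists, each a permutation of the ranks 1..n
    (no ties). W = 12*S / (m^2 * (n^3 - n)), where S is the sum of
    squared deviations of the per-item rank sums from their mean.
    W = 1 means perfect agreement; W = 0 means no agreement.
    """
    m = len(rankings)
    n = len(rankings[0])
    rank_sums = [sum(r[i] for r in rankings) for i in range(n)]
    mean = m * (n + 1) / 2.0
    s = sum((rs - mean) ** 2 for rs in rank_sums)
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Three hypothetical experts in perfect agreement over four factors.
print(kendalls_w([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]))  # 1.0
```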