• Title/Summary/Keyword: Software Quality-in-use

The Actual State and the Utilization for Dental Radiography in Korea (국내 치과방사선의 현황 및 이용 실태)

  • Shin, Gwi-Soon;Kim, You-Hyun;Lee, Bo-Ram;Kim, Se-Young;Lee, Gui-Won;Park, Chang-Seo;Park, Hyok;Chang, Kye-Yong
    • Journal of radiological science and technology / v.33 no.2 / pp.109-120 / 2010
  • The purpose of this study was to analyze the utilization of dental radiographic examinations through a questionnaire, as a first step toward developing diagnostic reference levels of patient dose for dental radiography in Korea. 77 dental institutions were classified into three groups: group A, dental hospitals attached to colleges of dentistry (11 institutions); group B, dental hospitals (30 institutions); and group C, dental clinics (36 institutions). The results were as follows. The mean numbers of unit chairs were 140.2, 15.3 and 5.8; of dentists, 112.6, 7.3 and 1.7; of radiologic technologists, 3.1, 0.5 and none; and of dental hygienists, 19.7, 12.5 and 3.3 in groups A, B and C, respectively. The mean numbers of dental X-ray units were 14.64, 3.21 and 2.19 in groups A, B and C, respectively. Intraoral dental X-ray units were used the most, followed by panoramic, cephalometric, and cone-beam CT units. Digital imaging systems were the most widely used (above 50%) in all three groups. High-sensitivity Insight dental film (Kodak, USA) was routinely used for periapical radiography. Automatic processors were not used in many dental institutions, whereas film-holding devices were widely used. The utilization rates of PACS in groups A, B and C were 90.9%, 83.3% and 16.7%, respectively, and the most widely used PACS software was PiView STAR (Infinitt, Korea). The annual mean number of radiographic examinations per institution in 2008 for group A was 6.8 times and 21.2 times greater than those for groups B and C, respectively, with periapical and panoramic radiographs taken most often. Tube voltage (kVp) and tube current (mA) for periapical radiography were similar in all three groups, but the exposure time in group C was 12.0 times and 3.5 times longer than those in groups A and B, respectively. The radiation exposure in group C, where dental hygienists take the radiographs, was accordingly higher than in the other groups. The exposure parameters for panoramic radiography were similar in all three groups. In conclusion, exposure parameters in dental radiography should be determined with reference to a diagnostic reference level, not past experience. Use of automatic processors and film-holding devices reduces radiation exposure in film-based systems, and quality assurance of dental X-ray equipment is necessary to reduce patient dose and improve image quality.
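
For context, diagnostic reference levels are conventionally set at the third quartile (75th percentile) of the surveyed dose distribution. A minimal Python sketch of that calculation, using hypothetical dose values (the study itself surveyed utilization, not these numbers):

    import numpy as np

    # Hypothetical entrance-surface doses (mGy) surveyed across institutions
    # for one examination type, e.g. periapical radiography.
    doses_mGy = np.array([1.2, 0.9, 2.4, 1.7, 3.1, 1.1, 2.0, 1.5, 2.8, 1.3])

    # A diagnostic reference level (DRL) is conventionally set at the
    # 75th percentile (third quartile) of the observed dose distribution.
    drl = np.percentile(doses_mGy, 75)
    print(f"Proposed DRL: {drl:.2f} mGy")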

Evaluation of Oven Utilization Effects at School Foodservice Facilities in Daegu and Gyeongbuk Province (대구·경북지역 학교급식소 오븐 사용 효과 평가)

  • Lee, Jung-A;Lee, Jin-Hyang;Bae, Hyun-Joo
    • Journal of the Korean Society of Food Science and Nutrition / v.39 no.7 / pp.1064-1072 / 2010
  • The objectives of this study were to obtain an overview of oven-use practices and to evaluate the effects of oven utilization at school foodservice facilities in Daegu and Gyeongbuk province. Of the 147 dietitians who responded to the questionnaire, 44 used an oven and 103 did not. All statistical analyses were conducted with the SPSS 14.0 statistical software. With regard to foodservice system, 74.4% of the schools were urban, 23.3% rural, and 2.3% in remote areas; 23.3% of the school foodservices produced meals by batch cooking. According to the analysis of expected effect and actual-use effect for 27 items, the average evaluation score for expected effect was 1.64 points and that for actual-use effect was 1.61 points. Both expected effect and actual-use effect scored above the overall average on 13 of the 27 items, and the actual-use effect scored higher than the expected effect on 4 items. In conclusion, using ovens can help increase students' satisfaction with school foodservice, because ovens allow more varied cooking methods and better food safety management. It is therefore important to modernize and automate cooking equipment to improve the quality of school foodservice operations.
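
The item-by-item comparison of expected versus actual-use scores is a paired comparison; the abstract does not state which test was applied in SPSS, so the following is only an illustrative Python sketch with hypothetical item scores:

    import numpy as np
    from scipy import stats

    # Hypothetical mean scores for a few of the 27 evaluation items.
    expected = np.array([1.8, 1.5, 1.7, 1.6, 1.4])   # expected effect
    actual   = np.array([1.7, 1.6, 1.6, 1.5, 1.5])   # actual-use effect

    # Paired t-test: do expected and actual-use effects differ on the same items?
    t, p = stats.ttest_rel(expected, actual)
    print(f"t = {t:.3f}, p = {p:.3f}")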

Usefulness of Xact-bone for the Resolution Advancement of Gamma Camera Image (감마카메라 영상에서 분해능 향상을 위한 Xact-bone의 유용성 평가)

  • Kim, Jong-Pil;Yoon, Seok-Hwan;Lim, Jung-Jin;Woo, Jae-Ryong
    • The Korean Journal of Nuclear Medicine Technology / v.15 no.2 / pp.30-35 / 2011
  • Purpose: Boramae Hospital currently uses wide beam reconstruction (WBR; UltraSPECT, Israel) to improve image resolution. Xact-bone, a WBR product, has been reported to improve image resolution and contrast. This study evaluated the clinical usefulness of the Xact-bone method. Materials and Methods: The Xact-bone method was evaluated in terms of resolution and contrast ratio. For planar images, resolution was measured as the full width at half maximum (FWHM) of a capillary-tube profile, and the contrast ratio was obtained as bone-to-soft-tissue (B/S) ratios from bone scans of 50 patients acquired before and after applying the Xact-bone method. For single photon emission computed tomography (SPECT) images, a triple line source phantom, the NEMA IEC body phantom, and a standard Jaszczak phantom were used to acquire FWHM and contrast ratio values, and filtered backprojection (FBP), ordered subset expectation maximization (OSEM), and Xact-bone images were compared. Results: In planar imaging, the Xact-bone data improved resolution by about 20% in the capillary-tube measurement and improved the B/S ratio by about 15%. With the triple line source phantom, the SPECT Xact-bone data improved resolution by about 20% and 10% over FBP and OSEM, respectively. The contrast ratio of each sphere also increased for both methods with the NEMA IEC body phantom and the standard Jaszczak phantom. Conclusion: Using the Xact-bone method improved resolution and contrast ratio compared to not using it. Accordingly, WBR's Xact-bone method can be expected to improve image quality. However, when introducing new software, it must be matched to the characteristics of the hospital's protocols and clinical applications.
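
The planar resolution figure above comes from the full width at half maximum of a capillary-tube profile. A minimal Python sketch of the FWHM calculation, assuming a sampled single-peak profile (hypothetical data):

    import numpy as np

    def fwhm(positions, counts):
        """Full width at half maximum of a single-peaked profile,
        using linear interpolation at the half-maximum crossings."""
        half = counts.max() / 2.0
        above = np.where(counts >= half)[0]
        i0, i1 = above[0], above[-1]
        # Interpolate the left and right half-maximum crossing positions.
        left = np.interp(half, [counts[i0 - 1], counts[i0]],
                         [positions[i0 - 1], positions[i0]])
        right = np.interp(half, [counts[i1 + 1], counts[i1]],
                          [positions[i1 + 1], positions[i1]])
        return right - left

    # Hypothetical Gaussian line profile sampled every 0.5 mm.
    x = np.arange(-20, 20, 0.5)
    y = np.exp(-x**2 / (2 * 4.0**2))         # sigma = 4 mm
    print(f"FWHM = {fwhm(x, y):.2f} mm")      # ~2.355 * sigma = 9.42 mm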

The Study about Application of LEAP Collimator at Brain Diamox Perfusion Tomography Applied Flash 3D Reconstruction: One Day Subtraction Method (Flash 3D 재구성을 적용한 뇌 혈류 부하 단층 촬영 시 LEAP 검출기의 적용에 관한 연구: One Day Subtraction Method)

  • Choi, Jong-Sook;Jung, Woo-Young;Ryu, Jae-Kwang
    • The Korean Journal of Nuclear Medicine Technology / v.13 no.3 / pp.102-109 / 2009
  • Purpose: Flash 3D (pixon(R) method; 3D OSEM) was developed as a software program to shorten examination time and improve image quality through reconstruction, and it is an image processing method that can usefully be applied to nuclear medicine tomography. When brain diamox perfusion scans are performed with shortened acquisition times and the subtracted images are reconstructed with Flash 3D, the SNR of the subtracted image is lower than that of the basal image. To increase the SNR of the subtracted image, we used LEAP collimators, giving priority to sensitivity to vessel dilatation over resolution of the brain vessels. The purpose of this study was to confirm the applicability of LEAP collimators to brain diamox perfusion tomography and to identify proper reconstruction factors for Flash 3D. Materials and Methods: (1) Phantom evaluation: We used a Hoffman 3D brain phantom filled with $^{99m}Tc$. We obtained images with LEAP and LEHR collimators (diamox image) and, after 6 hours (the half-life of $^{99m}Tc$ is 6 hours), obtained a second image (basal image) in the same way. We then calculated the SNR and the gray-to-white matter ratio of each basal and subtracted image. (2) Patient image evaluation: From May 2008 to January 2009, we quantitatively analyzed patients examined with LEAP collimators and classified as normal, and patients examined with LEHR collimators and classified as normal, applying the factors obtained from the phantom study. We used a one-day protocol and injected 925 MBq of $^{99m}Tc$-ECD for both the basal and the diamox acquisitions. Results: (1) Phantom evaluation: With LEHR collimators, about 41~46 kcounts were detected in the basal image, 79~90 kcounts in the stress image, and 40~47 kcounts in the subtraction image. With LEAP collimators, about 102~113 kcounts were detected in the basal image, 188~210 kcounts in the stress image, and 94~103 kcounts in the subtraction image. The SNR of the LEHR subtraction image was about 37% lower than that of the LEHR basal image, whereas the SNR of the LEAP subtraction image was about 17% lower than that of the LEAP basal image. The gray-to-white matter ratio was 2.2:1 in the LEHR basal image and 1.9:1 in its subtraction image, and 2.4:1 in the LEAP basal image and 2:1 in its subtraction image. (2) Patient image evaluation: The counts acquired with LEHR collimators were about 40~60 kcounts in the basal image and 80~100 kcounts in the stress image, and it was appropriate to set the FWHM to 7 mm for the basal and stress images and 11 mm for the subtraction image. With LEAP collimators, about 80~100 kcounts were acquired in the basal image and 180~200 kcounts in the stress image, and blurring could be reduced by setting the FWHM to 5 mm for the basal and stress images and 7 mm for the subtraction image. For the basal and stress images, the LEHR images were superior to the LEAP images, but in the subtraction images, as in the phantom experiment, the LEHR images looked rough because their SNR was reduced; the LEAP subtraction images were better in both SNR and sensitivity. For both LEHR and LEAP collimator images, 8 subsets and 8 iterations were appropriate. Conclusions: We could achieve clearer subtraction images with higher SNR by using a proper filter with the LEAP collimator. When applying a one-day protocol and reconstructing with Flash 3D, the LEAP collimator can be considered for acquiring better subtraction images.
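
The SNR loss reported for the subtraction images follows from Poisson counting noise adding in quadrature when the basal image is subtracted from the stress image. A minimal Python sketch, using hypothetical count levels near those reported above:

    import numpy as np

    # Poisson noise: variance equals the mean count, so when two images are
    # subtracted the noise adds in quadrature while the signal is the difference.
    def subtraction_snr(stress_counts, basal_counts):
        signal = stress_counts - basal_counts
        noise = np.sqrt(stress_counts + basal_counts)
        return signal / noise

    # Hypothetical counts at the levels reported in the abstract
    # (LEHR: basal ~40-60 k, stress ~80-100 k; LEAP roughly double).
    print(f"LEHR: subtraction SNR = {subtraction_snr(90e3, 50e3):.1f}")
    print(f"LEAP: subtraction SNR = {subtraction_snr(190e3, 90e3):.1f}")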

Evaluation of the Accuracy for Respiratory-gated RapidArc (RapidArc를 이용한 호흡연동 회전세기조절방사선치료 할 때 전달선량의 정확성 평가)

  • Sung, Jiwon;Yoon, Myonggeun;Chung, Weon Kuu;Bae, Sun Hyun;Shin, Dong Oh;Kim, Dong Wook
    • Progress in Medical Physics / v.24 no.2 / pp.127-132 / 2013
  • The position of internal organs changes continually and periodically inside the body due to respiration. To reduce the respiration-induced uncertainty in dose localization, one can use respiratory-gated radiotherapy, in which the radiation beam is delivered only during a specific phase of the breathing cycle. The main disadvantages of this method are that it usually requires a long treatment time and considerable effort during treatment, and that it limits patient selection. In this sense, the combination of the real-time position management (RPM) system and volumetric intensity-modulated arc radiotherapy (RapidArc) is promising, since it provides a short treatment time compared with conventional respiratory-gated treatments. In this study, we evaluated the accuracy of respiratory-gated RapidArc treatment. In total, six patient cases were used, and each case was planned with the RapidArc technique using the Varian ECLIPSE v8.6 treatment planning system. For quality assurance (QA), a MatriXX detector and I'mRT software were used. The results show that more than 97% of the area had a gamma value below one under the 3% dose difference and 3 mm distance-to-agreement criteria, indicating that the measured dose matches the treatment plan's dose distribution well for the gated RapidArc cases.
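
The QA criterion above is the standard gamma analysis. A minimal 1D Python sketch of the gamma index with a 3% dose difference and 3 mm distance-to-agreement criterion (the study used 2D measurements from a MatriXX detector; the profiles here are hypothetical):

    import numpy as np

    def gamma_index(x, dose_ref, dose_eval, dd=0.03, dta=3.0):
        """1D gamma index: dd = dose-difference criterion (fraction of the
        max reference dose), dta = distance-to-agreement criterion (mm)."""
        d_norm = dd * dose_ref.max()
        gamma = np.empty_like(dose_ref)
        for i, (xi, di) in enumerate(zip(x, dose_ref)):
            # Capital Gamma over all evaluated points; gamma is its minimum.
            big_gamma = np.sqrt(((x - xi) / dta) ** 2 +
                                ((dose_eval - di) / d_norm) ** 2)
            gamma[i] = big_gamma.min()
        return gamma

    # Hypothetical planned vs. measured profiles sampled every 1 mm.
    x = np.arange(0, 100, 1.0)
    planned = np.exp(-((x - 50) / 20) ** 2)
    measured = 1.01 * np.exp(-((x - 50.5) / 20) ** 2)  # 1% scale, 0.5 mm shift
    g = gamma_index(x, planned, measured)
    print(f"Pass rate (gamma <= 1): {100 * np.mean(g <= 1):.1f}%")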

The Precision Test Based on States of Bone Mineral Density (골밀도 상태에 따른 검사자의 재현성 평가)

  • Yoo, Jae-Sook;Kim, Eun-Hye;Kim, Ho-Seong;Shin, Sang-Ki;Cho, Si-Man
    • The Korean Journal of Nuclear Medicine Technology / v.13 no.1 / pp.67-72 / 2009
  • Purpose: The ISCD (International Society for Clinical Densitometry) requires users to perform a precision test to maintain quality, although it gives no recommendation on patient selection for the test. We therefore investigated the effect of bone density status on the precision test by measuring reproducibility in 3 bone density groups (normal, osteopenia, osteoporosis). Materials and Methods: 4 users performed precision tests on 420 BMD patients (age: $57.8{\pm}9.02$) at Asan Medical Center (January 2008 ~ June 2008). In the first group (A), each of the 4 users selected 30 patients regardless of bone density status and measured two sites (L-spine and femur) twice. In the second group (B), each of the 4 users measured the bone density of 10 patients in the same manner as in group A, but with the patients divided into the 3 categories (normal, osteopenia, osteoporosis). In the third group (C), 2 users each measured 30 patients in the same manner as in group A, taking bone density status into account. We used a GE Lunar Prodigy Advance (Encore v11.4) and analyzed the results by comparing the %CV to the LSC using the precision tool from the ISCD; cross-checking was done with SPSS. Results: In group A, the %CVs calculated by the 4 users (a, b, c, d) were 1.16, 1.01, 1.19 and 0.65% for the L-spine and 0.69, 0.58, 0.97 and 0.47% for the femur. In group B, the %CVs were 1.01, 1.19, 0.83 and 1.37% for the L-spine and 1.03, 0.54, 0.69 and 0.58% for the femur. Comparing groups A and B, we found no considerable differences. In group C, user 1's %CVs for normal, osteopenia and osteoporosis were 1.26, 0.94 and 0.94% for the L-spine and 0.94, 0.79 and 1.01% for the femur, and user 2's %CVs were 0.97, 0.83 and 0.72% for the L-spine and 0.65, 0.65 and 1.05% for the femur. The analysis showed almost no difference in reproducibility by bone density status, although differences between the two users' individual values affected the total reproducibility. Conclusions: The precision test is an important factor in bone density follow-up; as the machine's and the user's reproducibility improve, the narrower range of deviation makes the test more useful clinically. Users should check the machine's reproducibility before testing and apply the same care to every BMD examination. In precision testing, differences in measured values usually arise from ROI changes caused by patient positioning. For osteoporosis patients it is more difficult to set the initial ROI accurately than for normal or osteopenic patients because of poor bone recognition, even though the ROI is drawn automatically by the software. The initial ROI is nevertheless very important, and users must draw it consistently because the ROI copy function is used at follow-up. In this study we performed the precision test taking bone density status into account and found that the LSC stayed within 3%, with no considerable difference between groups. Thus, patients for the precision test can be selected regardless of bone density status.
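
The %CV and LSC quantities above follow the ISCD precision methodology: the root-mean-square standard deviation of repeated scans, expressed relative to the mean, with the least significant change at 95% confidence taken as 2.77 times the precision. A minimal Python sketch with hypothetical duplicate measurements:

    import numpy as np

    # Hypothetical duplicate BMD measurements (g/cm^2): one row per patient,
    # two scans each, as in the ISCD precision assessment.
    scans = np.array([[1.02, 1.04],
                      [0.88, 0.87],
                      [1.10, 1.13],
                      [0.95, 0.93]])

    mean_per_patient = scans.mean(axis=1)
    # ISCD precision: root-mean-square standard deviation of the pairs.
    sd_per_patient = scans.std(axis=1, ddof=1)
    rms_sd = np.sqrt(np.mean(sd_per_patient ** 2))
    cv_percent = 100 * rms_sd / mean_per_patient.mean()
    # Least significant change at 95% confidence: 2.77 x precision.
    lsc = 2.77 * cv_percent
    print(f"%CV = {cv_percent:.2f}%, LSC = {lsc:.2f}%")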

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.191-207 / 2021
  • Mobile communications have evolved rapidly over the decades, mainly focusing on higher speeds to meet the growing data demands from 2G to 5G. With the start of the 5G era, efforts are being made to provide services such as IoT, V2X, robotics, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our lives and industries as a whole. To deliver those services, reduced latency and high reliability are critical for real-time applications, on top of high data rates. 5G has accordingly paved the way for service delivery with a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of $10^6$ devices per $km^2$. In particular, in intelligent traffic control systems and vehicle-based Vehicle-to-X (V2X) services, low delay and high reliability for real-time operation are very important in addition to high data rates. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves support high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their range and prevent them from penetrating walls, restricting their use indoors. Under existing networks it is difficult to overcome these constraints. The underlying centralized SDN also has limited capability for delay-sensitive services, because communication with many nodes overloads its processing. SDN, an architecture that separates control-plane signaling from data-plane packets, must manage the delay-related tree structure available in an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major determinant of delay. Since a conventional centralized SDN has difficulty meeting the desired delay level, studies on the optimal size of an SDN for information processing are needed. SDNs should therefore be partitioned at a certain scale into a new type of network that can respond efficiently to dynamically changing traffic and provide high-quality, flexible services. The structure of such networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized structure, even under worst-case conditions. In these SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information update cycle, the round-trip delay (RTD), and the data processing time of the SDN are highly correlated with the overall delay. Of these, the RTD is not a significant factor because the link is fast enough, contributing less than 1 ms of delay, but the information update cycle and the SDN's data processing time greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information must be transmitted and processed very quickly; this is a case where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and, through simulation, analyze its correlation with the cell layer from which the vehicle should request relevant information according to the information flow.
For the simulation, since the data rate of 5G is high enough, we assume that information supporting neighboring vehicles reaches the car without errors. We further assumed 5G small cells with radii of 50~250 m and vehicle speeds of 30~200 km/h in order to examine the network architecture that minimizes the delay.
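
The simulated ranges above fix how long a vehicle dwells in one small cell, which bounds how often the split SDN must refresh vehicle information. A minimal Python sketch of that dwell-time calculation (a straight crossing through the cell diameter is assumed for illustration):

    # Dwell time of a vehicle crossing a 5G small cell along its diameter,
    # for the parameter ranges assumed in the simulation above.
    for radius_m in (50, 150, 250):
        for speed_kmh in (30, 100, 200):
            speed_ms = speed_kmh / 3.6
            dwell_s = 2 * radius_m / speed_ms
            print(f"r={radius_m:3d} m, v={speed_kmh:3d} km/h -> "
                  f"dwell ~ {dwell_s:5.1f} s")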

Efficacy and Accuracy of Patient Specific Customize Bolus Using a 3-Dimensional Printer for Electron Beam Therapy (전자선 빔 치료 시 삼차원프린터를 이용하여 제작한 환자맞춤형 볼루스의 유용성 및 선량 정확도 평가)

  • Choi, Woo Keun;Chun, Jun Chul;Ju, Sang Gyu;Min, Byung Jun;Park, Su Yeon;Nam, Hee Rim;Hong, Chae-Seon;Kim, MinKyu;Koo, Bum Yong;Lim, Do Hoon
    • Progress in Medical Physics / v.27 no.2 / pp.64-71 / 2016
  • We developed a manufacturing procedure for the production of a patient-specific customized bolus (PSCB) using a 3D printer (3DP) and evaluated the dosimetric accuracy of the 3D-PSCB for electron beam therapy. In order to cover the required planning target volume (PTV), we select the proper electron beam energy and field size through an initial dose calculation in a treatment planning system. The PSCB is delineated based on the initial dose distribution, and the dose calculation is repeated after applying the PSCB. We iteratively fine-tune the PSCB shape until the plan quality meets the required clinical criteria. The contour data of the PSCB is then transferred to in-house conversion software through the DICOM-RT protocol, converted into the 3DP data format (stereolithography, STL), and printed on a 3DP. Two virtual patients, having concave and convex shapes, were generated with a virtual PTV and an organ at risk (OAR), and two corresponding electron treatment plans, with and without a PSCB, were generated to evaluate the dosimetric effect of the PSCB. The dosimetric characteristics and dose-volume histograms for the PTV and OAR were compared between the two plans. Film dosimetry was performed to verify the dosimetric accuracy of the 3D-PSCB: the calculated planar dose distribution was compared to that measured by film along the beam central axis, and we compared percent depth dose curves and gamma analysis results (3% dose difference, 3 mm distance to agreement). No significant difference in the PTV dose was observed in the plan with the PSCB compared to that without it. The maximum, minimum, and mean doses of the OAR in the plan with the PSCB were significantly reduced by 9.7%, 36.6%, and 28.3%, respectively, compared to those in the plan without the PSCB. By applying the PSCB, the OAR volumes receiving 90% and 80% of the prescribed dose were reduced from $14.40cm^3$ to $0.1cm^3$ and from $42.6cm^3$ to $3.7cm^3$, respectively. The gamma pass rates of the concave and convex plans were 95% and 98%, respectively. A new PSCB fabrication procedure was thus developed using a 3DP, and we confirmed the usefulness and dosimetric accuracy of the 3D-PSCB for clinical use. Rapidly advancing 3DP technology should ease and expand clinical implementation of the PSCB.
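
The conversion step above turns DICOM-RT contour data into the stereolithography (STL) format for printing. The authors' in-house converter is not described in detail; as an illustration only, a minimal Python sketch that writes a triangle mesh to ASCII STL with facet normals computed from the vertices:

    import numpy as np

    def write_ascii_stl(path, triangles, name="bolus"):
        """Write a list of triangles (each a 3x3 array of xyz vertices, mm)
        to an ASCII STL file, computing each facet normal from the vertices."""
        with open(path, "w") as f:
            f.write(f"solid {name}\n")
            for tri in triangles:
                v0, v1, v2 = np.asarray(tri, dtype=float)
                n = np.cross(v1 - v0, v2 - v0)
                n /= np.linalg.norm(n)
                f.write(f"  facet normal {n[0]:e} {n[1]:e} {n[2]:e}\n")
                f.write("    outer loop\n")
                for v in (v0, v1, v2):
                    f.write(f"      vertex {v[0]:e} {v[1]:e} {v[2]:e}\n")
                f.write("    endloop\n")
                f.write("  endfacet\n")
            f.write(f"endsolid {name}\n")

    # Hypothetical single facet for demonstration; a real bolus mesh would
    # come from triangulating the DICOM-RT contour stack.
    write_ascii_stl("bolus.stl", [[(0, 0, 0), (10, 0, 0), (0, 10, 0)]])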

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems / v.20 no.2 / pp.63-86 / 2010
  • Personalized services directly and indirectly acquire personal data, in part to provide customers with higher-value services that are context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensor networks and intelligent software can now obtain context data, which is the cornerstone of personalized, context-specific services. Yet the danger of overflowing personal information is increasing, because the data retrieved by sensors usually contains private information, and various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns, such as the unrestricted availability of context information, have also increased; those concerns are consistently regarded as a critical issue for the success of context-aware personalized services. The field of information privacy is growing as an area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts; in particular, the factors of information privacy should be revised according to the characteristics of new technologies. However, previous work on information privacy factors for context-aware applications has at least two shortcomings. First, there has been little overview of the technological characteristics of context-aware computing; existing studies have focused on only a small subset of them, so there has been no mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications. Second, most studies have used user surveys to identify information privacy factors, despite the limits of users' knowledge and experience of context-aware computing technology. Since context-aware services have not yet been widely deployed on a commercial scale, very few people have prior experience with context-aware personalized services, and it is difficult to build users' knowledge of the technology even with scenarios, pictures, flash animations, and so on. A survey that assumes the participants have sufficient experience or understanding of the technologies shown may therefore not be valid, and some surveys rest on simplifying and hence unrealistic assumptions (e.g., considering only location information as context data). A better understanding of information privacy concern in context-aware personalized services is therefore needed. Hence, the purpose of this paper is to identify a generic set of factors for information privacy concern in context-aware personalized services and to develop a rank-ordered list of those factors. We consider the overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable expert opinion and to produce the rank-ordered list; it therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority.
An international panel of researchers and practitioners with expertise in privacy and context-aware systems took part in our research. The Delphi rounds faithfully followed the procedure proposed by Okoli and Pawlowski, involving three general rounds: (1) brainstorming for important factors; (2) narrowing the original list down to the most important ones; and (3) ranking the list of important factors. Throughout, experts were treated as individuals, not panels. Adapting Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a mutually exclusive set of factors for information privacy concern in context-aware personalized services. In the first round, the respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; some of the main factors found in the literature were presented to the participants. The second-round questionnaire presented the main factors from the first round, fleshed out with relevant sub-factors drawn from the literature survey, and the respondents were asked to evaluate each sub-factor's suitability for its main factor in order to determine the final sub-factors from the candidates; final factors were those selected by more than 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were asked to assess the importance of each main factor and its sub-factors. Finally, we calculated the mean rank of each item to produce the final result. In analyzing the data, we focused on group consensus rather than individual insistence; to that end, a concordance analysis, which measures the consistency of the experts' responses over successive Delphi rounds, was adopted during the survey process. As a result, the experts reported that context data collection and a highly identifiable level of individual data are the most important main factor and sub-factor, respectively. Additional important sub-factors included the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information; our study helped to clarify these sometimes vague issues by determining which privacy concerns are viable given the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected the characteristics with the greater potential to raise users' privacy concerns. Second, this study considered privacy issues in terms of service delivery and display, which were largely overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor it correlated the level of importance with the professionals' opinions on the extent to which users have privacy concerns.
We did not select a traditional user questionnaire because users were considered to lack understanding of and experience with context-aware personalized services as a new technology. For understanding users' privacy concerns, the professionals in the Delphi process selected context data collection, tracking and recording, and the sensor network as the most important technological characteristics of context-aware personalized services. For the creation of context-aware personalized services, this study demonstrates the importance of determining an optimal methodology: which technologies, in what sequence, are needed to acquire which types of users' context information. Most studies focus on which services and systems should be provided and developed by utilizing context information, alongside the development of context-aware technology; the results of this study show, however, that in terms of users' privacy it is necessary to pay greater attention to the activities that acquire context information. To act on the sub-factor evaluation, additional studies will be needed on approaches to reducing users' privacy concerns about technological characteristics such as the highly identifiable level of individual data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which relates to output; the results show that delivery and display, which present services to users under the anywhere-anytime-any-device concept, are regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services should help increase service success rates and user acceptance. Our future work is to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
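
The concordance analysis mentioned above is commonly computed as Kendall's coefficient of concordance (W) over the experts' rankings; whether the authors used exactly this statistic is not stated in the abstract. A minimal Python sketch with hypothetical rankings:

    import numpy as np

    def kendalls_w(ranks):
        """Kendall's coefficient of concordance W for an (experts x items)
        matrix of ranks; W = 1 means perfect agreement, 0 means none."""
        m, n = ranks.shape                 # m experts, n ranked items
        rank_sums = ranks.sum(axis=0)
        s = np.sum((rank_sums - rank_sums.mean()) ** 2)
        return 12 * s / (m ** 2 * (n ** 3 - n))

    # Hypothetical rankings of five privacy-concern factors by four experts.
    ranks = np.array([[1, 2, 3, 4, 5],
                      [1, 3, 2, 4, 5],
                      [2, 1, 3, 5, 4],
                      [1, 2, 4, 3, 5]])
    print(f"Kendall's W = {kendalls_w(ranks):.2f}")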