• Title/Summary/Keyword: Sensor based


Development of Urban Wildlife Detection and Analysis Methodology Based on Camera Trapping Technique and YOLO-X Algorithm (카메라 트래핑 기법과 YOLO-X 알고리즘 기반의 도시 야생동물 탐지 및 분석방법론 개발)

  • Kim, Kyeong-Tae;Lee, Hyun-Jung;Jeon, Seung-Wook;Song, Won-Kyong;Kim, Whee-Moon
    • Journal of the Korean Society of Environmental Restoration Technology
    • /
    • v.26 no.4
    • /
    • pp.17-34
    • /
    • 2023
  • Camera trapping has been used as a non-invasive survey method that minimizes anthropogenic disturbance to ecosystems. Nevertheless, it is labor-intensive and time-consuming, as researchers must manually identify species and count individuals. In this study, we aimed to improve the preprocessing of camera trapping data by utilizing an object detection algorithm. Wildlife monitoring using unmanned sensor cameras was conducted in an urban forest and in green spaces on a university campus in Cheonan City, Chungcheongnam-do, Korea. The collected camera trapping data were first classified by a researcher to identify the species present, and were then used to test the performance of the YOLO-X object detection algorithm for wildlife detection. Camera trapping yielded 10,500 images of the urban forest and 51,974 images of the campus green spaces. Of the total 62,474 images, 52,993 (84.82%) were false triggers containing no wildlife, while 9,481 (15.18%) contained wildlife. Wildlife monitoring recorded 19 bird species, 5 mammal species, and 1 reptile species within the study area. In addition, the frequency of occurrence of the following species differed significantly between the two types of urban greenery: Parus varius (t = -3.035, p < 0.01), Parus major (t = 2.112, p < 0.05), Passer montanus (t = 2.112, p < 0.05), Paradoxornis webbianus (t = 2.112, p < 0.05), Turdus hortulorum (t = -4.026, p < 0.001), and Sitta europaea (t = -2.189, p < 0.05). The detection performance of the YOLO-X model was then analyzed: it correctly classified 94.2% of the camera trapping data, with 7,809 true positive and 51,044 true negative predictions. The YOLO-X model was used with a filter activated to detect 10 specific animal taxa out of the 80 classes trained on the COCO dataset, without any additional training. In future studies, training data for the key occurrence species should be created and applied to make the model better suited to wildlife monitoring.
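The class-filtering step described above can be sketched as a simple post-filter over detector output. The label set below is the 10 animal classes that appear in the 80-class COCO label list; the function name, confidence threshold, and detection tuple format are illustrative assumptions, not the paper's implementation.

```python
# The 10 animal classes in the 80-class COCO label set.
COCO_ANIMAL_CLASSES = {"bird", "cat", "dog", "horse", "sheep",
                       "cow", "elephant", "bear", "zebra", "giraffe"}

def filter_wildlife(detections, conf_threshold=0.5):
    """Keep only confident detections whose label is an animal class.

    detections: list of (label, confidence, bbox) tuples, as a generic
    stand-in for YOLO-X output after class-name mapping.
    """
    return [d for d in detections
            if d[0] in COCO_ANIMAL_CLASSES and d[1] >= conf_threshold]

dets = [("bird", 0.91, (10, 20, 50, 60)),
        ("person", 0.88, (0, 0, 40, 90)),
        ("cat", 0.42, (5, 5, 25, 30))]
print(filter_wildlife(dets))  # keeps only the confident bird detection
```

Images with an empty filtered list would be flagged as false triggers, which is how the preprocessing step separates the 84.82% of empty frames from frames containing wildlife.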

Enhancing A Neural-Network-based ISP Model through Positional Encoding (위치 정보 인코딩 기반 ISP 신경망 성능 개선)

  • DaeYeon Kim;Woohyeok Kim;Sunghyun Cho
    • Journal of the Korea Computer Graphics Society
    • /
    • v.30 no.3
    • /
    • pp.81-86
    • /
    • 2024
  • The Image Signal Processor (ISP) converts RAW images captured by the camera sensor into user-preferred sRGB images. While RAW images contain more information useful for image processing than sRGB images, they are rarely shared because of their large size. Moreover, the actual ISP pipeline of a camera is not disclosed, making it difficult to model its inverse. Consequently, research has been conducted on learning the conversion between sRGB and RAW. Recently, the ParamISP[1] model, which directly incorporates camera parameters (exposure time, sensitivity, aperture size, and focal length) to mimic the operations of a real camera ISP, advanced beyond earlier simple network structures. However, existing studies, including ParamISP[1], do not consider the degradations caused by lens shading, optical aberration, and lens distortion, which limits their restoration performance. This study introduces positional encoding to enable the camera ISP neural network to better handle lens-induced degradations. The proposed positional encoding method is suited to camera ISP networks that learn on image patches: by reflecting each patch's spatial context within the frame, it allows more precise image restoration than existing models.
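Positional encoding over patch pixels can be illustrated with a standard sinusoidal scheme applied to normalized sensor coordinates, so that a patch "knows" where it sits in the full frame (and hence how strong lens shading or distortion is there). This is a minimal sketch of the general idea, not the encoding actually used in the paper; the function name and frequency count are assumptions.

```python
import math

def positional_encoding(x, y, width, height, num_freqs=4):
    """Sinusoidal encoding of a pixel's normalized position in the frame.

    x, y: pixel coordinates in the FULL sensor frame (not the patch);
    width, height: full sensor resolution. Returns 4 * num_freqs features
    that could be concatenated to the network's per-pixel input.
    """
    u, v = x / width, y / height  # normalize to [0, 1]
    feats = []
    for k in range(num_freqs):
        f = (2 ** k) * math.pi  # geometrically increasing frequencies
        feats += [math.sin(f * u), math.cos(f * u),
                  math.sin(f * v), math.cos(f * v)]
    return feats
```

Because lens-induced degradations are radially dependent, two patches with identical content but different frame positions receive different encodings, letting a patch-based network restore them differently.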

Comparison of Feature Point Extraction Algorithms Using Unmanned Aerial Vehicle RGB Reference Orthophoto (무인항공기 RGB 기준 정사영상을 이용한 특징점 추출 알고리즘 비교)

  • Lee, Kirim;Seong, Jihoon;Jung, Sejung;Shin, Hyeongil;Kim, Dohoon;Lee, Wonhee
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.44 no.2
    • /
    • pp.263-270
    • /
    • 2024
  • As unmanned aerial vehicles (UAVs) and sensors have diversified, it has become possible to update ground information faster than with conventional aerial photography or remote sensing. However, acquiring and entering ground control points (GCPs) in UAV photogrammetry takes considerable time, and geometric distortion occurs if GCPs are measured or entered incorrectly. In this study, RGB-based reference orthophotos were generated to reduce GCP measurement and input time, and feature point algorithms were applied to target orthophotos from various sensors for comparison and evaluation. Four feature point extraction algorithms were applied to the two study sites; as a result, speeded-up robust features (SURF) performed best in terms of the ratio of matching pairs to feature points. Overall, the accelerated-KAZE (AKAZE) method extracted the most feature points and matching pairs, and the binary robust invariant scalable keypoints (BRISK) method the fewest. These results confirm that the AKAZE method is superior for geometric correction of the target orthophoto for each sensor.
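The comparison metric used across SURF, AKAZE, and BRISK, i.e. the ratio of matched pairs to extracted feature points, is straightforward to compute. The counts below are made-up numbers for illustration only, not values from the study.

```python
def match_ratio(num_keypoints, num_matches):
    """Ratio of matched pairs to extracted feature points for one detector.

    A higher ratio means the detector's keypoints were more often usable
    for registering the target orthophoto to the reference orthophoto.
    """
    return num_matches / num_keypoints if num_keypoints else 0.0

# Hypothetical (keypoints, matches) counts per algorithm, for illustration.
results = {"SURF": (1200, 540), "AKAZE": (3100, 980), "BRISK": (800, 150)}
for name, (kp, m) in results.items():
    print(f"{name}: {match_ratio(kp, m):.3f}")
```

Note the two rankings measure different things: AKAZE can lead on absolute keypoint and match counts while SURF leads on this ratio, which matches the study's finding.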

Discussion on Detection of Sediment Moisture Content at Different Altitudes Employing UAV Hyperspectral Images (무인항공 초분광 영상을 기반으로 한 고도에 따른 퇴적물 함수율 탐지 고찰)

  • Kyoungeun Lee;Jaehyung Yu;Chanhyeok Park;Trung Hieu Pham
    • Economic and Environmental Geology
    • /
    • v.57 no.4
    • /
    • pp.353-362
    • /
    • 2024
  • This study examined the spectral characteristics of sediments according to moisture content using an unmanned aerial vehicle (UAV)-based hyperspectral sensor and evaluated the efficiency of moisture content detection at different flight altitudes. For this purpose, hyperspectral images in the 400-1000 nm wavelength range were acquired and analyzed at altitudes of 40 m and 80 m for sediment samples with various moisture contents. Sediment reflectance generally decreased as moisture content increased. Correlation analysis between moisture content and reflectance showed a strong negative correlation (r < -0.8) across the entire 400-900 nm range. The moisture content detection model constructed using the Random Forest technique achieved RMSE 2.6% and R² 0.92 at 40 m altitude and RMSE 2.2% and R² 0.95 at 80 m altitude, confirming that the difference in accuracy between altitudes was minimal. Variable importance analysis revealed that the 600-700 nm band played a crucial role in moisture content detection. These results are expected to be useful for efficient sediment moisture management and natural disaster prediction in environmental monitoring.
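The two accuracy figures reported for the Random Forest model (RMSE and R²) follow their standard definitions; a minimal pure-Python sketch of how such per-altitude scores would be computed from observed versus predicted moisture values:

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-square error between observed and predicted moisture (%)."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def r2(y_true, y_pred):
    """Coefficient of determination; 1.0 means a perfect fit."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

Evaluating both metrics on the 40 m and 80 m prediction sets separately is what allows the altitude comparison (RMSE 2.6% / R² 0.92 vs. RMSE 2.2% / R² 0.95) reported above.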

Comparison of Cognitive Loads between Koreans and Foreigners in the Reading Process

  • Im, Jung Nam;Min, Seung Nam;Cho, Sung Moon
    • Journal of the Ergonomics Society of Korea
    • /
    • v.35 no.4
    • /
    • pp.293-305
    • /
    • 2016
  • Objective: This study aims to measure cognitive load levels by analyzing the EEG of Koreans and foreigners as they read Korean texts selected by level for grammar and vocabulary, and to compare those levels through quantitative values. The results can serve as basic data for a more scientific approach when Korean texts or books are developed and evaluation methods are built for foreigners who encounter them for learning or study. Background: As of 2014, 84,801 foreign students were studying in Korea, and the number increases annually. Most come from Asia to enter a university or graduate school in Korea, and they receive Korean language education from the time they begin preparing to study here. To enter a Korean university, they must attain grade 4 or higher on the Test of Proficiency in Korean (TOPIK) or complete an educational program at a university-affiliated language institution. In such programs, learners receive text-based Korean education in every domain except speaking, and text comprehension can determine their academic achievement after they enter their desired schools (Jeon, 2004). However, many foreigners who finish a short-term language course and begin university study cannot keep up with classes requiring expertise beyond the vocabulary and grammar levels covered in the language course. Therefore, reading education centered on strategies for understanding university textbooks, the most difficult reading texts foreigners face, is necessary (Kim and Shin, 2015).
This study carried out an experiment from the perspective that quantitative data on readers and teaching materials, the two main elements of reading education, are needed to support the case for reading education for prospective university students and to approach educational design scientifically. Specifically, it assessed reading difficulty by measuring the cognitive load produced while reading each text, dividing teaching-material difficulty into eight levels and readers into Koreans and foreigners. Method: To identify the cognitive loads of Koreans and foreigners reading Korean texts, this study recruited 16 participants (eight Koreans and eight foreigners). The foreigners were limited to students in intermediate-level Korean courses at university-affiliated language institutions in the Seoul metropolitan area. As participants read texts selected by level from the Korean books (difficulty: eight levels) published by King Sejong Institute (Sejonghakdang.org), EEG sensors were attached over the frontal lobe (Fz) and occipital lobe (Oz). After the experiment, a questionnaire measured subjective evaluation, comprehension, and perceived difficulty of grammar and vocabulary. To control for schema effects that may influence text comprehension, the Korean texts were controlled while EEG and subjective satisfaction were measured. Results: The beta band was extracted to identify the brain's cognitive load. Interactions between reader group (Korean vs. foreigner) and text difficulty were revealed (Fz: p = 0.48; Oz: p = 0.00). The cognitive loads of Koreans, whose mother tongue is Korean, were lower when reading Korean texts than those of the foreigners, and the foreigners' cognitive loads rose gradually with text difficulty.
From text four, of intermediate difficulty, clear differences between Koreans and foreigners began to appear relative to the beginner-level texts. In the subjective evaluation, an interaction between reader group and text difficulty was also revealed (p = 0.00), and satisfaction was lower as text difficulty increased. Conclusion: When readers had background knowledge, that is, when schema was formed, comprehension and satisfaction were higher even when the texts contained vocabulary and grammar above the readers' level. For texts whose grammar was rated difficult in the subjective evaluation, foreigners' cognitive loads were also high, rising in proportion to the increase in difficulty. This means that grammar acts as a stress factor in foreigners' reading comprehension. Application: This study quantitatively evaluated the cognitive loads of Koreans and foreigners through EEG as a function of reader group and text difficulty when reading Korean texts. The results can be used in making Korean teaching materials and in selecting content and topics for Korean education for foreigners. If the research scope is expanded to the reading process using an eye tracker, reading education programs and evaluation methods for foreigners can be developed on the basis of quantitative values.
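Extracting the beta band (roughly 13-30 Hz) from an EEG channel amounts to measuring spectral power in that frequency range. The sketch below uses a naive discrete Fourier transform for clarity; the function name is illustrative, and real EEG pipelines would typically use band-pass filtering or Welch's method rather than a plain DFT.

```python
import cmath, math

def beta_band_power(signal, fs, low=13.0, high=30.0):
    """Spectral power of `signal` (sampled at `fs` Hz) in the beta band.

    Computed with a plain DFT: sum |X_k|^2 / n over bins whose center
    frequency falls inside [low, high].
    """
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if low <= freq <= high:
            coeff = sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                        for t in range(n))
            power += abs(coeff) ** 2 / n
    return power

# A 20 Hz tone (inside beta) carries band power; a 5 Hz tone does not.
tone = lambda f, fs=128, n=128: [math.sin(2 * math.pi * f * t / fs)
                                 for t in range(n)]
print(beta_band_power(tone(20), 128), beta_band_power(tone(5), 128))
```

Comparing this band power per text level and per reader group is, in essence, the quantity behind the study's cognitive-load comparison.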

On the Meteorological Requirements for the Geostationary Communication, Ocean, and Meteorological Satellite (정지궤도 통신해양기상위성의 기상분야 요구사항에 관하여)

  • Ahn, Myung-Hwan;Kim, Kum-Lan
    • Atmosphere
    • /
    • v.12 no.4
    • /
    • pp.20-42
    • /
    • 2002
  • Based on the "Mid to Long Term Plan for Space Development," a project to launch COMS (Communication, Ocean, and Meteorological Satellite) into geostationary orbit is under way. Accordingly, the Korea Meteorological Administration (KMA) has defined the meteorological missions and prepared user requirements to fulfill them. To make the user requirements realistic, we prepared a first draft based on the ideal meteorological products derivable from a geostationary platform and sent a request for information (RFI) to sensor manufacturers. Based on the responses to the RFI and other considerations, we revised the user requirements into a realistic plan for the satellite's 2008 launch. This manuscript briefly introduces the revised user requirements. The major mission they define is improving the detection and prediction of severe weather phenomena, especially around the Korean Peninsula. The required payload is an enhanced Imager that includes the major observation channels of the current geostationary sounder. To derive the required meteorological products from the Imager, at least 12 channels are required, with an optimum of 16. The minimum 12 comprise the 6 wavelength bands used on current geostationary satellites plus two visible bands, a near-infrared band, two water vapor bands, and one ozone absorption band. From these enhanced channel observations, we intend to derive and utilize water vapor, stability indices, wind fields, and analyses of special weather phenomena such as yellow sand (Asian dust) events, in addition to the standard products derived from current geostationary Imager data. For better temporal coverage, the Imager is required to acquire full-disk data within 15 minutes and to support a rapid scan mode for limited-area coverage. The threshold spatial resolutions are 1 km for visible and 2 km for infrared channels, while the target resolutions are 0.5 km and 1 km, respectively.

An Intelligent Intrusion Detection Model Based on Support Vector Machines and the Classification Threshold Optimization for Considering the Asymmetric Error Cost (비대칭 오류비용을 고려한 분류기준값 최적화와 SVM에 기반한 지능형 침입탐지모형)

  • Lee, Hyeon-Uk;Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.157-173
    • /
    • 2011
  • As Internet use has exploded recently, malicious attacks and hacking against networked systems occur frequently, and such intrusions can cause fatal damage in government agencies, public offices, and companies operating various systems. For these reasons, there is growing interest in and demand for intrusion detection systems (IDS): security systems for detecting, identifying, and responding appropriately to unauthorized or abnormal activities. The intrusion detection models applied in conventional IDS are generally designed by modeling experts' implicit knowledge of network intrusions or hackers' abnormal behaviors. These models perform well under normal conditions but poorly when they meet new or unknown patterns of network attack. For this reason, several recent studies have adopted artificial intelligence techniques that can respond proactively to unknown threats. Artificial neural networks (ANNs) in particular have been popular in prior studies because of their superior prediction accuracy. However, ANNs have intrinsic limitations such as the risk of overfitting, the requirement of large sample sizes, and an opaque prediction process (the "black box" problem). As a result, the most recent studies on IDS have started to adopt the support vector machine (SVM), a classification technique that is more stable and powerful than ANNs and is known for relatively high predictive power and generalization capability. Against this background, this study proposes a novel intelligent intrusion detection model that uses SVM as the classification model to improve the predictive ability of IDS, and that considers asymmetric error costs by optimizing the classification threshold. There are two common error types in intrusion detection.
The first is the false-positive error (FPE), in which normal activity is misjudged as an intrusion, potentially triggering unnecessary responses. The second is the false-negative error (FNE), in which malicious activity is misjudged as normal. Compared with FPE, FNE is more fatal, so when considering the total misclassification cost in IDS, it is more reasonable to weight FNE more heavily than FPE. We therefore designed the proposed model to optimize the classification threshold so as to minimize total misclassification cost. Conventional SVM cannot be applied directly here because it generates discrete output (a class label); to resolve this, we used the revised SVM technique proposed by Platt (2000), which generates probability estimates. To validate the model's practical applicability, we applied it to a real-world network intrusion dataset collected from the IDS sensor of an official institution in Korea between January and June 2010. From 15,000 log records in total, 1,000 samples were selected by random sampling. The SVM model was compared with logistic regression (LOGIT), decision trees (DT), and an ANN to confirm its superiority. LOGIT and DT were run using PASW Statistics v18.0, the ANN using NeuroShell 4.0, and the SVM using LIBSVM v2.90, a freeware tool for training SVM classifiers. Empirical results showed that the proposed SVM-based model outperformed all comparative models in detection accuracy and reduced the total misclassification cost compared with the ANN-based model.
As a result, the intrusion detection model proposed in this paper is expected not only to enhance IDS performance but also to lead to better management of FNE.
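The threshold-optimization idea, scoring candidate cutoffs on probability outputs with FNE weighted more heavily than FPE, can be sketched as follows. The probability estimates would come from a Platt-scaled SVM; the cost weights, the candidate grid, and the function names are illustrative assumptions.

```python
def total_cost(y_true, probs, threshold, cost_fp=1.0, cost_fn=10.0):
    """Total asymmetric misclassification cost at a given threshold.

    y_true: 1 = intrusion, 0 = normal; probs: P(intrusion) per sample.
    Missed intrusions (FNE) are weighted cost_fn, false alarms cost_fp.
    """
    cost = 0.0
    for y, p in zip(y_true, probs):
        pred = 1 if p >= threshold else 0
        if pred == 1 and y == 0:
            cost += cost_fp   # false-positive error
        elif pred == 0 and y == 1:
            cost += cost_fn   # false-negative error (heavier)
    return cost

def best_threshold(y_true, probs, cost_fp=1.0, cost_fn=10.0):
    """Grid-search the cutoff that minimizes total misclassification cost."""
    candidates = [i / 100 for i in range(1, 100)]
    return min(candidates,
               key=lambda t: total_cost(y_true, probs, t, cost_fp, cost_fn))
```

Because FNE costs dominate, the optimal cutoff typically lands below 0.5, trading extra false alarms for fewer missed intrusions.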

A Study on the Calculation of Evapotranspiration Crop Coefficient in the Cheongmi-cheon Paddy Field (청미천 논지에서의 증발산량 작물계수 산정에 관한 연구)

  • Kim, Kiyoung;Lee, Yongjun;Jung, Sungwon;Lee, Yeongil
    • Korean Journal of Remote Sensing
    • /
    • v.35 no.6_1
    • /
    • pp.883-893
    • /
    • 2019
  • In this study, crop coefficients were calculated by two different methods and the results were evaluated. In the first method, the appropriateness of GLDAS-based evapotranspiration was evaluated by comparing it with observed data from the Cheongmi-cheon (CMC) flux tower; the crop coefficient was then calculated by dividing actual evapotranspiration by the potential evapotranspiration derived from GLDAS. In the second method, the crop coefficient was determined by multiple linear regression (MLR) on vegetation indices (NDVI, EVI, LAI, and SAVI) derived from MODIS together with in-situ soil moisture data observed at CMC. Comparing the two crop coefficients over the entire period, GLDAS Kc and SM&VI Kc showed mean values of 0.412 and 0.378, biases of 0.031 and -0.004, RMSEs of 0.092 and 0.069, and Index of Agreement (IOA) values of 0.944 and 0.958, respectively. Both methods showed patterns similar to the observed evapotranspiration, but the SM&VI-based method performed better. Going one step further, a statistical evaluation of GLDAS Kc and SM&VI Kc by period was performed according to the crop growth phase: GLDAS Kc was better in the early and middle phases of crop growth, and SM&VI Kc was better in the later phase. This appears to be due to the reduced accuracy of MODIS sensors caused by yellow dust in spring and rain clouds in summer. If the observational accuracy of the MODIS sensor improves in subsequent studies, the accuracy of the SM&VI-based method will also improve, making it applicable for determining crop coefficients of ungauged basins or predicting the crop coefficient of a given area.
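The first method's crop coefficient is simply the ratio of actual to potential evapotranspiration, and the Index of Agreement used for evaluation has a standard definition (Willmott's d). A minimal sketch, with illustrative variable names:

```python
def crop_coefficient(et_actual, et_potential):
    """Kc = actual ET / potential ET (both in the same units, e.g. mm/day)."""
    return et_actual / et_potential

def index_of_agreement(obs, sim):
    """Willmott's Index of Agreement, d in [0, 1]; 1 means a perfect match."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((abs(s - mean_obs) + abs(o - mean_obs)) ** 2
              for o, s in zip(obs, sim))
    return 1.0 - num / den
```

Scoring each method's Kc series against the flux-tower reference with bias, RMSE, and this IOA yields the per-method comparison (0.944 vs. 0.958) reported above.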

Analysis of the Effect of Corner Points and Image Resolution in a Mechanical Test Combining Digital Image Processing and Mesh-free Method (디지털 이미지 처리와 강형식 기반의 무요소법을 융합한 시험법의 모서리 점과 이미지 해상도의 영향 분석)

  • Junwon Park;Yeon-Suk Jeong;Young-Cheol Yoon
    • Journal of the Computational Structural Engineering Institute of Korea
    • /
    • v.37 no.1
    • /
    • pp.67-76
    • /
    • 2024
  • In this paper, we present a DIP-MLS testing method that combines digital image processing with a strong form-based MLS differencing approach to measure mechanical variables, and we analyze the impact of target location and image resolution. The method measures the displacement of targets attached to the specimen through digital image processing and assigns these displacements to the nodes of the MLS differencing method, which uses only nodes to calculate mechanical variables such as the stress and strain of the studied object. We propose an effective way to measure the displacement of a target's center of gravity using digital image processing. Calculating mechanical variables through the MLS differencing method with image-based target displacements permits easy computation of mechanical variables at arbitrary positions, free of mesh or grid constraints, by acquiring an accurate displacement history of the test specimen from low-rigidity tracking points. The developed method was validated by comparing sensor measurements with DIP-MLS results in a three-point bending test of a rubber beam, and against numerical results simulated by the MLS differencing method alone; the developed method accurately reproduced the actual test and agreed well with the numerical results before large deformation. Furthermore, we analyzed the effect of boundary points by applying 46 tracking points, including corner points, to the DIP-MLS testing method, compared this with using only the target's internal points, and determined the optimal image resolution for the method. Through this, we demonstrated that the developed method efficiently addresses the limitations of direct experiments and existing mesh-based simulations, and suggests that the experiment-simulation process can be digitalized to a considerable extent.
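The target-tracking step, locating the center of gravity of a marker in a grayscale frame, can be sketched as an intensity-weighted centroid over thresholded pixels. The threshold value, function name, and 2D-list image format are illustrative assumptions, not the paper's implementation.

```python
def target_centroid(image, threshold=128):
    """Intensity-weighted centroid (row, col) of pixels brighter than threshold.

    `image` is a 2D list of grayscale values; returns None if no pixel
    exceeds the threshold (i.e., no target marker is visible).
    """
    rows = cols = weight = 0.0
    for r, line in enumerate(image):
        for c, v in enumerate(line):
            if v > threshold:
                rows += r * v
                cols += c * v
                weight += v
    if weight == 0:
        return None
    return rows / weight, cols / weight

frame = [[0, 0, 0, 0],
         [0, 200, 200, 0],
         [0, 200, 200, 0],
         [0, 0, 0, 0]]
print(target_centroid(frame))  # center of the bright 2x2 marker
```

Tracking this centroid frame by frame yields the displacement history that the DIP-MLS method assigns to the nodes of the differencing scheme.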

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.63-86
    • /
    • 2010
  • Personalized services directly and indirectly acquire personal data, in part to provide customers with higher-value services that are context-relevant (such as place and time). Information technologies continue to mature, providing greatly improved performance: sensor networks and intelligent software can now obtain context data, which is the cornerstone of personalized, context-specific services. Yet the danger of personal information overexposure is increasing, because the data retrieved by sensors usually contain private information, and various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns, such as the unrestricted availability of context information, have also increased, and they are consistently regarded as a critical issue for the success of context-aware personalized services. The field of information privacy is growing as a research area, with many new definitions and terminologies, out of a need to better understand information privacy concepts; in particular, the factors of information privacy must be revised according to the characteristics of new technologies. However, previous treatments of information privacy factors for context-aware applications have at least two shortcomings. First, there has been little overview of the technological characteristics of context-aware computing: existing studies have focused on only a small subset of them, so no mutually exclusive set of factors uniquely and completely describes information privacy in context-aware applications.
Second, most studies have used user surveys to identify information privacy factors despite the limits of users' knowledge of and experience with context-aware computing technology. Since context-aware services have not yet been widely deployed on a commercial scale, very few people have prior experience with context-aware personalized services, and it is difficult to build users' knowledge of the technology even with scenarios, pictures, flash animations, and the like. A survey that assumes participants have sufficient experience with or understanding of the technologies it presents may therefore not be valid. Moreover, some surveys rest on simplifying and hence unrealistic assumptions (e.g., considering only location information as context data). A better understanding of information privacy concern in context-aware personalized services is thus needed. Hence, the purpose of this paper is to identify a generic set of factors of information privacy concern in context-aware personalized services and to develop a rank-ordered list of those factors. We consider overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable expert opinion and produce a rank-ordered list; it lends itself well to obtaining a set of universal factors of information privacy concern and their priority. An international panel of researchers and practitioners with expertise in privacy and context-aware systems took part, and the Delphi rounds faithfully followed the procedure proposed by Okoli and Pawlowski.
This involved three general rounds: (1) brainstorming important factors; (2) narrowing the original list to the most important ones; and (3) ranking the list of important factors. In the first round, experts were treated as individuals, not as a panel. Adapting Okoli and Pawlowski, we outlined and administered the three-round process. In the first and second rounds of the Delphi questionnaire, we gathered a mutually exclusive set of factors of information privacy concern in context-aware personalized services. In the first round, respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern, with some main factors from the literature presented as prompts. The second round fleshed out the main factors from the first round with candidate sub-factors drawn from the literature survey; respondents evaluated each sub-factor's suitability against its main factor, and the final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and respondents assessed the importance of each main factor and its sub-factors; we then calculated the mean rank of each item to produce the final result. In analyzing the data, we focused on group consensus rather than individual insistence, adopting a concordance analysis, which measures the consistency of the experts' responses over successive Delphi rounds. As a result, experts reported that context data collection and a highly identifiable level of identity data are the most important main factor and sub-factor, respectively.
Additional important sub-factors included the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The average score of each factor is useful for future context-aware personalized service development from the information privacy perspective. The final factors differ from those proposed in other studies in three ways. First, whereas existing studies base concern factors on privacy issues that may occur during the lifecycle of acquired user information, our study clarifies these sometimes vague issues by determining which privacy concerns are viable given the specific technical characteristics of context-aware personalized services; since a context-aware service differs technically from other services, we selected the characteristics with the greater potential to raise users' privacy concerns. Second, this study considered privacy issues of service delivery and display, almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with professionals' opinions on the extent of users' privacy concerns. A traditional user questionnaire was not chosen because users of context-aware personalized services were judged to lack understanding of and experience with the new technology. In the Delphi process, professionals selected context data collection, tracking and recording, and sensor networks as the technological characteristics of context-aware personalized services most important to users' privacy concerns.
For the creation of context-aware personalized services, this study demonstrates the importance of determining an optimal methodology: which technologies, in what sequence, are needed to acquire which types of users' context information. Most studies focus on which services and systems should be provided by utilizing context information as context-aware technology develops; our results show that, in terms of users' privacy, greater attention should be paid to the activities that acquire context information. To build on the sub-factor evaluation, additional studies will be needed on reducing users' privacy concerns toward technological characteristics such as a highly identifiable level of identity data, diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which relates to output: delivery and display of services to users under the anywhere-anytime-any-device concept are regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services should increase service success rates and user acceptance. Our future work is to adopt these factors for qualifying context-aware service development projects, such as u-city projects, in terms of service quality and hence user acceptance.
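The concordance analysis mentioned above is typically computed as Kendall's coefficient of concordance W, the statistic Okoli and Pawlowski recommend for Delphi ranking rounds (whether this study used exactly W is an assumption). A minimal sketch for tie-free rankings:

```python
def kendalls_w(rankings):
    """Kendall's coefficient of concordance W for tie-free rankings.

    rankings: list of judges' rank lists, each a permutation of 1..n.
    W = 1 means the judges rank the items identically; W = 0 means
    no agreement beyond chance.
    """
    m = len(rankings)          # number of judges
    n = len(rankings[0])       # number of ranked items
    sums = [sum(r[i] for r in rankings) for i in range(n)]  # rank sums
    mean = sum(sums) / n
    s = sum((x - mean) ** 2 for x in sums)  # deviation of rank sums
    return 12 * s / (m ** 2 * (n ** 3 - n))
```

Tracking W across successive rounds shows whether the expert panel is converging, which is the stopping signal for the Delphi process.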