• Title/Summary/Keyword: Users' assessment


The Smartphone User's Dilemma among Personalization, Privacy, and Advertisement Fatigue: An Empirical Examination of Personalized Smartphone Advertisement (스마트폰 이용자의 모바일 광고 수용의사에 영향을 주는 요인: 개인화된 서비스, 개인정보보호, 광고 피로도 사이에서의 딜레마)

  • You, Soeun;Kim, Taeha;Cha, Hoon S.
    • Information Systems Review / v.17 no.2 / pp.77-100 / 2015
  • This study examined the factors that influence smartphone users' decisions to accept personalized mobile advertisements. As a theoretical basis, we applied the privacy calculus model (PCM), which illustrates how consumers engage in a dynamic adjustment process in which privacy risks are weighed against the benefits of information disclosure. In particular, we investigated how smartphone users make a risk-benefit assessment, weighing personalized service as a benefit-side factor against information privacy risk as a risk-side factor when accepting advertisements. Further, we extended the current PCM by considering advertisement fatigue as a new factor that may influence user acceptance. The research model with five (5) hypotheses was tested using data gathered from 215 respondents through a quasi-experimental survey method. During the survey, each participant was asked to navigate a website where an experimental simulation of a mobile advertisement service was provided. The results showed that three (3) of the five (5) hypotheses were supported. First, we found that the intention to accept advertisements is positively and significantly influenced by the perceived value of personalization. Second, perceived advertisement fatigue was also found to be a strong predictor of the intention to accept advertisements. However, we did not find any evidence of a direct influence of privacy risks. Finally, we found a significant moderating effect between the perceived value of personalization and advertisement fatigue. This suggests that firms should provide effective tailored advertisements that increase the perceived value of personalization in order to mitigate the negative impact of advertisement fatigue.
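
The moderating effect reported above (personalization value buffering the effect of advertisement fatigue on acceptance intention) is commonly tested with an interaction term in a regression. Below is a minimal sketch of such a test using statsmodels; the file name and column names (personalization, fatigue, privacy_risk, intention) are hypothetical placeholders, not the study's actual instrument.

```python
# Hypothetical illustration of testing a moderation effect like the one
# reported above. Column names are assumptions, not the study's items.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")  # hypothetical survey data file

# Mean-center predictors so the interaction term is interpretable.
for col in ["personalization", "fatigue", "privacy_risk"]:
    df[col] = df[col] - df[col].mean()

# intention ~ main effects + personalization x fatigue interaction
model = smf.ols(
    "intention ~ personalization + fatigue + privacy_risk "
    "+ personalization:fatigue",
    data=df,
).fit()
print(model.summary())  # a significant interaction term indicates moderation
```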

Changes and Improvements of the Standardized Eddy Covariance Data Processing in KoFlux (표준화된 KoFlux 에디 공분산 자료 처리 방법의 변화와 개선)

  • Kang, Minseok;Kim, Joon;Lee, Seung-Hoon;Kim, Jongho;Chun, Jung-Hwa;Cho, Sungsik
    • Korean Journal of Agricultural and Forest Meteorology / v.20 no.1 / pp.5-17 / 2018
  • The standardized eddy covariance flux data processing in KoFlux has been updated, and its database has been amended accordingly. KoFlux data users have not been informed properly regarding these changes and the likely impacts on their analyses. In this paper, we have documented how the current structure of data processing in KoFlux has been established through the changes and improvements to ensure transparency, reliability and usability of the KoFlux database. Due to increasing diversity and complexity of flux site instrumentation and organization, we have re-implemented the previously ignored or simplified procedures in data processing (e.g., frequency response correction, stationarity test), and added new methods for CH₄ flux gap-filling and CO₂ flux correction and partitioning. To evaluate the effects of the changes, we processed the data measured at a flat and homogeneous paddy field (i.e., HPK) and a deciduous forest in complex and heterogeneous topography (i.e., GDK), and quantified the differences. Based on the results from our overall assessment, it is confirmed that (1) the frequency response correction (HPK: 11~18% of biases for annually integrated values, GDK: 6~10%) and the stationarity test (HPK: 4~19% of biases for annually integrated values, GDK: 9~23%) are important for quality control and (2) the minimization of the missing data and the choice of the appropriate driver (rather than the choice of the gap-filling method) are important to reduce the uncertainty in gap-filled fluxes. These results suggest the future directions for the data processing technology development to ensure the continuity of the long-term KoFlux database.
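
As a rough illustration of the stationarity test mentioned above, the sketch below follows the common Foken and Wichura approach: the covariance over a full 30-minute record is compared with the mean of six 5-minute sub-record covariances, and a relative deviation under 30% is taken as stationary. The threshold, windowing, and synthetic data are illustrative assumptions, not KoFlux's exact configuration.

```python
# Minimal sketch of a Foken & Wichura-type stationarity test: the 30-min
# covariance is compared with the mean of six 5-min sub-interval covariances.
import numpy as np

def stationarity_ratio(w, c, n_sub=6):
    """Relative difference between the whole-record covariance cov(w, c)
    and the mean of n_sub sub-record covariances."""
    cov_whole = np.mean((w - w.mean()) * (c - c.mean()))
    subs = np.array_split(np.column_stack([w, c]), n_sub)
    cov_sub = np.mean([
        np.mean((s[:, 0] - s[:, 0].mean()) * (s[:, 1] - s[:, 1].mean()))
        for s in subs
    ])
    return abs((cov_sub - cov_whole) / cov_whole)

# Example: 30 min of 10 Hz vertical wind (w) and scalar concentration (c)
rng = np.random.default_rng(0)
w = rng.normal(size=18000)
c = 0.3 * w + rng.normal(size=18000)
ratio = stationarity_ratio(w, c)
print("stationary" if ratio < 0.3 else "non-stationary", f"(ratio={ratio:.2f})")
```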

Assessment of the usefulness of the Machine Performance Check system, an evaluation tool for the determination of daily beam output (일간 빔 출력 확인을 위한 평가도구인 Machine Performance Check의 유용성 평가)

  • Lee, Sang Hyeon;Ahn, Woo Sang;Lee, Woo Seok;Choi, Jin Hyeok;Kim, Seon Yeon
    • The Journal of Korean Society for Radiation Therapy / v.29 no.2 / pp.65-73 / 2017
  • Purpose: Machine Performance Check (MPC) is self-checking software based on the Electronic Portal Imaging Device (EPID) that measures daily beam output without additional external equipment. The purpose of this study is to verify the usefulness of MPC by comparing and correlating its daily beam output with that of the QA Beamchecker PLUS. Materials and Methods: A linear accelerator (Truebeam 2.5) was used to measure 10 energies, comprising photon beams (6, 10, 15 MV and 6, 10 MV-FFF) and electron beams (6, 9, 12, 16 and 20 MeV). A total of 80 cycles of data was obtained by measuring beam output before treatment over a five-month period. The Pearson correlation coefficient was used to evaluate the consistency of beam output between the MPC and the QA Beamchecker PLUS. In this study, the correlation was graded as follows: (1) 0.8 or higher, very strong; (2) 0.6 to 0.79, strong; (3) 0.4 to 0.59, moderate; (4) 0.2 to 0.39, weak; (5) lower than 0.2, very weak. Results: Output variations observed between MPC and QA Beamchecker PLUS were within 2% for both photons and electrons. The beam output variations of MPC were 0.29±0.26% and 0.30±0.26% for photon and electron beams, respectively; those of QA Beamchecker PLUS were 0.31±0.24% and 0.33±0.24%. The Pearson correlation between MPC and QA Beamchecker PLUS for photon beams was very strong at 15 MV and strong at 6 MV, 10 MV, 6 MV-FFF, and 10 MV-FFF. For electron beams, the correlation was strong at 16 MeV and 20 MeV, moderate at 9 MeV and 12 MeV, and very weak at 6 MeV. Conclusion: MPC showed a significantly strong correlation with QA Beamchecker PLUS for photon beams and high-energy electron beams in the evaluation of daily beam output, whereas the correlation for the low-energy electron beam (6 MeV) appeared to be low. Nevertheless, both MPC and QA Beamchecker PLUS are considered suitable for checking daily beam output, as they remained within 2% beam output consistency during the observation period. MPC, which is faster than the conventional daily beam output measurement tool, is considered an effective method for users.
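
A minimal sketch of the comparison described above: the Pearson correlation between paired daily outputs from the two tools, graded with the bands used in the study. The sample arrays are invented for illustration.

```python
# Rough sketch of correlating daily MPC output variation with QA Beamchecker
# PLUS output variation and grading the result with the study's bands.
import numpy as np
from scipy.stats import pearsonr

def grade(r):
    r = abs(r)
    if r >= 0.8: return "very strong"
    if r >= 0.6: return "strong"
    if r >= 0.4: return "moderate"
    if r >= 0.2: return "weak"
    return "very weak"

# Made-up daily output variations (%) for one beam energy
mpc = np.array([0.1, 0.3, -0.2, 0.5, 0.0, 0.4, -0.1, 0.2])
beamchecker = np.array([0.2, 0.2, -0.1, 0.6, 0.1, 0.3, 0.0, 0.1])

r, p = pearsonr(mpc, beamchecker)
print(f"r = {r:.2f} ({grade(r)}), p = {p:.3f}")
```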


Salt-Related Dietary Behaviors and Sodium Intakes of University Students in Gyeonggi-do (경기지역 대학생의 소금 관련 식행동 및 나트륨 섭취량)

  • Chung, Eun-Jung;Shim, Eu-Gene
    • Journal of the Korean Society of Food Science and Nutrition / v.37 no.5 / pp.578-588 / 2008
  • The objective of this study was to evaluate associations of dietary sodium (Na) intake with the salt-related dietary behaviors of 218 university students (95 men, 123 women) living in the Gyeonggi area. A dish frequency questionnaire (DFQ) was used to identify salt-related dietary behaviors and to determine Na intakes. In men, systolic and diastolic blood pressures, Na intakes, and DFQ-15 scores were significantly higher than in women. The high-salt intake group (HS), classified by DFQ-15, had higher high-salt dietary attitude scores and greater Na intakes than the low-salt intake group (LS). HS ate protein foods and balanced diets less frequently than LS (p<0.05), and had fried dishes and fatty meats, and added salt to dishes, more frequently (p<0.05). HS and LS differed in preference for soy-boiled and Chinese or Japanese foods, in intake frequency of bean-paste soup, in use of soy sauce with fried food or raw fish, and in salt addition to dishes at the table (p<0.05). HS, classified by Na intake, showed high-salt dietary attitudes such as a preference for seasoned rice and soy-boiled foods and habitual addition of soy sauce or salt to dishes at the table. Subjects who used food labels when purchasing had better salt-related attitudes and behaviors, and lower DFQ-15 scores and Na intakes, than non-users (p<0.01). The self-assessed high-salt group (SHS) had worse salt-related attitudes and behaviors (p<0.05). The male self-assessed low-salt group (SLS) had higher Na intakes, indicating that self-assessment of salt preference did not actually reflect Na intake. In summary, male university students constituted a high-risk group for salt intake, and HS preferred soy-boiled foods and fatty dishes, frequently added salt to dishes, and rarely ate balanced diets. These results suggest that nutrition education programs for university students should cover fundamental dietetics and a balanced diet, in addition to a low-Na diet.
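
As a toy illustration of how a DFQ-style instrument converts reported intake frequencies into an estimated daily sodium intake (servings per day times sodium per serving, summed over dishes), see the sketch below. The dish list, frequency weights, and sodium values are invented for illustration, not the study's DFQ items.

```python
# Toy sketch of DFQ-based sodium estimation: sum over dishes of
# (servings/day) x (Na per serving). All values below are invented.
FREQ_PER_DAY = {"never": 0.0, "1-3/month": 0.07, "1-2/week": 0.21,
                "3-4/week": 0.5, "daily": 1.0}

NA_MG_PER_SERVING = {"bean-paste soup": 950, "kimchi": 500, "fried dish": 400}

def estimate_sodium_mg(responses):
    """responses: dish -> frequency category reported by the subject."""
    return sum(NA_MG_PER_SERVING[dish] * FREQ_PER_DAY[freq]
               for dish, freq in responses.items())

print(estimate_sodium_mg({"bean-paste soup": "daily",
                          "kimchi": "daily",
                          "fried dish": "1-2/week"}))  # mg Na/day
```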

Structural Relationships Among Factors to Adoption of Telehealth Service (원격의료서비스 수용요인의 구조적 관계 실증연구)

  • Kim, Sung-Soo;Ryu, See-Won
    • Asia pacific journal of information systems / v.21 no.3 / pp.71-96 / 2011
  • Within the traditional medical delivery system, patients residing in medically vulnerable areas, those with limited mobility, and nursing facility residents have had limited access to good healthcare services. However, information and communication technology (ICT) provides a convenient and useful means of overcoming distance and time constraints. ICT is integrated with biomedical science and technology in a way that offers new, high-quality medical services. As a result, rapid technological advancement is expected to play a pivotal role in bringing about innovation in a wide range of medical service areas, such as medical management, testing, diagnosis, and treatment; in offering new and improved healthcare services; and in effecting dramatic changes in current medical services. The growth of the aging population and of chronic diseases has increased medical expenses. In response to the increasing demand for efficient healthcare services, telehealth services based on ICT are being emphasized globally. Telehealth services have so far been implemented mainly through pilot projects, system development, and technological research. With the service about to be implemented in earnest, it is necessary to study its overall acceptance by consumers, which is expected to contribute to the development and activation of a variety of services. In this sense, the study empirically examines the structural relationships among the acceptance factors for telehealth services based on the Technology Acceptance Model (TAM). Data were collected by showing audiovisual material on telehealth services to online panels and asking them to respond to a structured questionnaire, a procedure known as the information acceleration method. Among the 1,165 adult respondents, 608 valid samples were finally chosen; the rest were excluded because of incomplete answers or exceeding the allotted time. To test the reliability and validity of the assessment scale items, we carried out reliability and factor analyses, and to explore the causal relations among latent variables, we conducted a structural equation modeling analysis using AMOS 7.0 and SPSS 17.0. The research outcomes are as follows. First, service quality, innovativeness of medical technology, and social influence had statistically significant effects on the perceived ease of use and perceived usefulness of the telehealth service, and these two factors had a positive impact on willingness to accept the service. In addition, social influence had a direct, significant effect on intention to use, paralleling the TAM results in previous research on technology acceptance. This shows that the research model proposed in the study effectively explains the acceptance of the telehealth service. Second, the research model reveals that information privacy concerns had an insignificant impact on the perceived ease of use of the telehealth service. This suggests that concerns over information protection and security have diminished as information technology has matured since the industry's early period, so that privacy concerns did not act as an inhibiting factor in the acceptance of the telehealth service. Thus, if other factors have a strong impact on ease of use and usefulness, concerns that mattered in the initial period of technology acceptance may become irrelevant. However, as other studies have revealed, users' information privacy concerns are clearly a major factor affecting technology acceptance; caution must therefore be exercised in interpreting this result, and further study is required. Numerous information technologies with outstanding performance and innovativeness nevertheless attract few consumers. A revised bill for those urgently in need of telehealth services is about to be approved in the National Assembly. As telemedicine is implemented between doctors and patients, a wide range of systems that will improve the quality of healthcare services will be designed. In this sense, this study on consumer acceptance of telehealth services is meaningful and offers strong academic evidence; based on its implications, it can be expected to contribute to the activation of telehealth services. Further study is needed to assess additional acceptance factors for telehealth services, such as motivation to remain healthy, healthcare involvement, health knowledge, and control of health-related behavior, in order to develop unique services for customer segments based on health factors. In addition, further study may apply cognitive behavior models other than the TAM, such as the health belief model.
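
The structural model above (antecedents, then perceived ease of use and usefulness, then intention) can be written in lavaan-style syntax. The study itself used AMOS 7.0; the semopy-based sketch below, with placeholder latent and indicator names, only illustrates the general shape of such a TAM model, not the authors' exact specification.

```python
# Hypothetical sketch of a TAM-style structural equation model using semopy.
# Latent constructs and indicator names are illustrative placeholders.
import pandas as pd
from semopy import Model

DESC = """
PEOU =~ peou1 + peou2 + peou3
PU =~ pu1 + pu2 + pu3
INT =~ int1 + int2 + int3
SQ =~ sq1 + sq2 + sq3
PEOU ~ SQ
PU ~ SQ + PEOU
INT ~ PU + PEOU
"""

data = pd.read_csv("telehealth_items.csv")  # hypothetical item-level responses
model = Model(DESC)
model.fit(data)
print(model.inspect())  # path coefficients, standard errors, p-values
```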

Management of plant genetic resources at RDA in line with Nagoya Protocol

  • Yoon, Moon-Sup;Na, Young-Wang;Ko, Ho-Cheol;Lee, Sun-Young;Ma, Kyung-Ho;Baek, Hyung-Jin;Lee, Su-Kyeung;Lee, Sok-Young
    • Proceedings of the Korean Society of Crop Science Conference / 2017.06a / pp.51-52 / 2017
  • "Plant genetic resources for food and agriculture" means any genetic material of plant origin of actual or potential value for food and agriculture. "Genetic material" means any material of plant origin, including reproductive and vegetative propagating material, containing functional units of heredity. (Internal Treaty on Plant Genetic Resources for Food and Agriculture, ITPGRFA). The "Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization (ABS) to the Convention on Biological Diversity (shortly Nagoya Protocol)" is a supplementary agreement to the Convention on Biological Diversity. It provides a transparent legal framework for the effective implementation of one of the three objectives of the CBD: the fair and equitable sharing of benefits arising out of the utilization of genetic resources. The Nagoya Protocol on ABS was adopted on 29 October 2010 in Nagoya, Japan and entered into force on 12 October 2014, 90 days after the deposit of the fiftieth instrument of ratification. Its objective is the fair and equitable sharing of benefits arising from the utilization of genetic resources, thereby contributing to the conservation and sustainable use of biodiversity. The Nagoya Protocol will create greater legal certainty and transparency for both providers and users of genetic resources by; (a) Establishing more predictable conditions for access to genetic resources and (b) Helping to ensure benefit-sharing when genetic resources leave the country providing the genetic resources. By helping to ensure benefit-sharing, the Nagoya Protocol creates incentives to conserve and sustainably use genetic resources, and therefore enhances the contribution of biodiversity to development and human well-being. The Nagoya Protocol's success will require effective implementation at the domestic level. A range of tools and mechanisms provided by the Nagoya Protocol will assist contracting Parties including; (a) Establishing national focal points (NFPs) and competent national authorities (CNAs) to serve as contact points for information, grant access or cooperate on issues of compliance, (b) An Access and Benefit-sharing Clearing-House to share information, such as domestic regulatory ABS requirements or information on NFPs and CNAs, (c) Capacity-building to support key aspects of implementation. Based on a country's self-assessment of national needs and priorities, this can include capacity to develop domestic ABS legislation to implement the Nagoya Protocol, to negotiate MAT and to develop in-country research capability and institutions, (d) Awareness-raising, (e) Technology Transfer, (f) Targeted financial support for capacity-building and development initiatives through the Nagoya Protocol's financial mechanism, the Global Environment Facility (GEF) (Nagoya Protocol). The Rural Development Administration (RDA) leading to conduct management agricultural genetic resources following the 'ACT ON THE PRESERVATION, MANAGEMENT AND USE OF AGRO-FISHERY BIO-RESOURCES' established on 2007. According to $2^{nd}$ clause of Article 14 (Designation, Operation, etc. 
of Agencies Responsible for Agro-Fishery Bioresources) of the act, the duties endowed are, (a) Matters concerning securing, preservation, management, and use of agro-fishery bioresources; (b) Establishment of an integrated information system for agro-fishery bioresources; (c) Matters concerning medium and long-term preservation of, and research on, agro-fishery bioresources; (d) Matters concerning international cooperation for agro-fishery bioresources and other relevant matters. As the result the RDA manage about 246,000 accessions of plant genetic resources under the national management system at the end of 2016.


A Meta Analysis of Using Structural Equation Model on the Korean MIS Research (국내 MIS 연구에서 구조방정식모형 활용에 관한 메타분석)

  • Kim, Jong-Ki;Jeon, Jin-Hwan
    • Asia pacific journal of information systems / v.19 no.4 / pp.47-75 / 2009
  • Recently, research on Management Information Systems (MIS) has laid out theoretical foundations and academic paradigms by introducing diverse theories, themes, and methodologies. In particular, the academic paradigms of MIS encourage a user-friendly approach by developing technologies from the users' perspective, which reflects the existence of strong causal relationships between information systems and user behavior. As in other areas of social science, the use of structural equation modeling (SEM) has rapidly increased in recent years, especially in MIS. The SEM technique is important because it provides powerful ways to address key IS research problems; it also has a unique ability to simultaneously examine a series of causal relationships while analyzing multiple independent and dependent variables at the same time. Despite these benefits to MIS researchers, the analytical technique has some potential pitfalls. The research objective of this study is to provide guidelines for the appropriate use of SEM based on an assessment of its current practice in MIS research. This study focuses on several statistical issues related to the use of SEM. Selected articles are assessed in three parts through the meta-analysis: the first concerns the initial specification of the theoretical model of interest; the second, data screening prior to model estimation and testing; and the third, estimation and testing of theoretical models on empirical data. This study reviewed the use of SEM in 164 empirical research articles published in four major MIS journals in Korea (APJIS, ISR, JIS, and JITAM) from 1991 to 2007. APJIS, ISR, JIS, and JITAM accounted for 73, 17, 58, and 16 of the applications, respectively, and the number of published applications has increased over time. LISREL was the most frequently used SEM software among MIS researchers (97 studies (59.15%)), followed by AMOS (45 studies (27.44%)). In the first part, regarding the initial specification of the theoretical model, all of the studies used cross-sectional data; studies that use cross-sectional data may be better able to explain their structural model as a set of relationships. Most SEM studies employed confirmatory-type analysis (146 articles (89%)). Regarding model formulation, 159 studies (96.9%) specified a full structural equation model; in only 5 studies was SEM used for a measurement model with a set of observed variables. The average sample size across all models was 365.41, with samples as small as 50 and as large as 500. The second part concerns data screening prior to model estimation and testing, which is important particularly in defining how researchers deal with missing values. Overall, data screening was discussed in 118 studies (71.95%), while no study discussed evidence of multivariate normality for its models. In the third part, on estimation and testing of theoretical models on empirical data, assessing model fit is one of the most important issues because it provides adequate statistical power for research models. Multiple fit indices were used in the SEM applications. The χ² test was reported in most of the studies (146 (89%)), whereas the normed χ² was reported less frequently (65 studies (39.64%)); note that a normed χ² of 3 or lower is required for adequate model fit. The most popular model fit indices were GFI (109 (66.46%)), AGFI (84 (51.22%)), NFI (44 (47.56%)), RMR (42 (25.61%)), CFI (59 (35.98%)), RMSEA (62 (37.80%)), and NNFI (48 (29.27%)). Regarding the test of construct validity, convergent validity was examined in 109 studies (66.46%) and discriminant validity in 98 (59.76%), and 81 studies (49.39%) reported the average variance extracted (AVE). However, there was little discussion of direct (47 (28.66%)), indirect, and total effects in the SEM models. Based on these findings, we suggest general guidelines for the use of SEM and propose recommendations concerning latent variable models, raw data, sample size, data screening, reporting of parameter estimates, model fit statistics, multivariate normality, confirmatory factor analysis, reliability, and the decomposition of effects.
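
As a small worked illustration of the normed χ² rule cited above (χ²/df of 3 or lower for adequate fit), together with the closely related RMSEA formula, consider the sketch below; the χ², df, and sample-size values are made-up examples.

```python
# Normed chi-square and RMSEA from summary fit statistics. The numeric
# values below are invented examples, not figures from the reviewed studies.
import math

def normed_chi_square(chi2, df):
    """chi2/df; values of 3 or lower are commonly read as adequate fit."""
    return chi2 / df

def rmsea(chi2, df, n):
    """Root mean square error of approximation from chi2, df, sample size."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

chi2, df, n = 412.8, 164, 365   # example values
print(f"chi2/df = {normed_chi_square(chi2, df):.2f}")   # 2.52 -> adequate
print(f"RMSEA  = {rmsea(chi2, df, n):.3f}")  # <= .05 close, <= .08 reasonable
```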

Assessing and Mapping the Aesthetic Value of Bukhansan National Park Using Geotagged Images (지오태그 이미지를 활용한 북한산국립공원의 경관미 평가 및 맵핑)

  • Kim, Jee-Young;Son, Yong-Hoon
    • Journal of the Korean Institute of Landscape Architecture / v.49 no.4 / pp.64-73 / 2021
  • The purpose of this study is to present a method to assess the landscape aesthetic value of Bukhansan National Park using geotagged images shared on social media sites. The method consists mainly of collecting geotagged image data, identifying landscape images, and analyzing cumulative visibility by applying a target probability index. Ramblr, an outdoor activity application with many users in Korea, was the source from which a total of 110,954 geotagged images of Bukhansan National Park were collected and used to assess landscape aesthetics. The collected geotagged images were interpreted using the Google Vision API and subsequently divided into 11 landscape image types and 9 non-landscape image types through cluster analysis. In the analysis of the landscape types of Bukhansan National Park based on the extracted landscape images, types related to topographical characteristics, such as peaks and mountain ranges, accounted for the largest portion, and forest landscapes, foliage landscapes, and waterscapes were also common major landscape types. In the derived landscape aesthetic value map, the higher the elevation and slope, the higher the overall landscape aesthetic value, in line with the proportion and characteristics of these major landscape types; however, high landscape aesthetic values were also confirmed in some lowland areas with gentle slopes. In addition, the Bukhansan area was evaluated to have higher landscape aesthetics than the Dobongsan area: despite its high elevation and slope, the Dobongsan area had a relatively low landscape aesthetic value. This shows that the aesthetic value of a landscape is strongly related not only to the physical environment but also to the recreational activities of the visitors viewing the scenery. The landscape aesthetics assessment using the cumulative visibility of geotagged images is thus expected to be useful for planning and managing the landscape of Bukhansan National Park, by enabling a geographical understanding of landscape values based on people's perceptions and the identification of regional deviations.
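
The label-clustering step described above can be sketched roughly as follows: images labelled by an image-recognition API are grouped into image types with k-means on a binary label matrix. The labels, cluster count, and toy data below are illustrative stand-ins, not the study's actual Vision API output or cluster configuration.

```python
# Rough sketch of clustering API-labelled geotagged images into image types.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import MultiLabelBinarizer

# Each image -> labels returned by an image-recognition API (toy data)
image_labels = [
    ["mountain", "ridge", "sky"],
    ["forest", "tree", "trail"],
    ["mountain", "peak", "rock"],
    ["stream", "water", "rock"],
    ["food", "plate"],              # a non-landscape image
]

X = MultiLabelBinarizer().fit_transform(image_labels)  # binary label matrix
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(clusters)  # images sharing labels fall into the same type
```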

Application and Analysis of Ocean Remote-Sensing Reflectance Quality Assurance Algorithm for GOCI-II (천리안해양위성 2호(GOCI-II) 원격반사도 품질 검증 시스템 적용 및 결과)

  • Sujung Bae;Eunkyung Lee;Jianwei Wei;Kyeong-sang Lee;Minsang Kim;Jong-kuk Choi;Jae Hyun Ahn
    • Korean Journal of Remote Sensing / v.39 no.6_2 / pp.1565-1576 / 2023
  • An atmospheric correction algorithm based on a radiative transfer model is required to obtain remote-sensing reflectance (Rrs) from the Geostationary Ocean Color Imager-II (GOCI-II) observations at the top of the atmosphere. The Rrs derived from atmospheric correction is used to estimate various marine environmental parameters such as chlorophyll-a concentration, total suspended materials concentration, and absorption of dissolved organic matter. Atmospheric correction is therefore a fundamental algorithm, as it significantly impacts the reliability of all other ocean color products. In clear waters, however, the atmospheric path radiance can be more than ten times higher than the water-leaving radiance at blue wavelengths. This makes atmospheric correction a highly error-sensitive process: a 1% error in estimating the atmospheric radiance can cause an error of more than 10% in Rrs. Quality assessment of Rrs after atmospheric correction is therefore essential for reliable ocean environment analysis using ocean color satellite data. In this study, a Quality Assurance (QA) algorithm based on in-situ Rrs data archived in the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Bio-optical Archive and Storage System (SeaBASS) database was applied and modified to account for the different spectral characteristics of GOCI-II. This method is officially employed in the National Oceanic and Atmospheric Administration (NOAA) ocean color satellite data processing system. It provides quality scores for Rrs ranging from 0 to 1 and classifies the water types into 23 categories. When the QA algorithm was applied to early-phase GOCI-II data with limited calibration, the score distribution peaked at a relatively low value of 0.625; when applied to the improved GOCI-II atmospheric correction results with updated calibrations, it peaked at a higher value of 0.875. The water-type analysis using the QA algorithm indicated that parts of the East Sea, South Sea, and the Northwest Pacific Ocean are primarily characterized as relatively clear case-I waters, while the coastal areas of the Yellow Sea and the East China Sea are mainly classified as highly turbid case-II waters. We expect the QA algorithm to support GOCI-II users not only in statistically identifying Rrs results with significant errors but also in achieving more reliable calibration with quality-assured data. The algorithm will be included in the level-2 flag data provided with the GOCI-II atmospheric correction.
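
The scoring scheme described above can be sketched roughly as: normalize the Rrs spectrum, match it to the nearest of a set of reference water-class spectra, and score the fraction of bands that fall inside that class's envelope. The sketch below follows that outline with placeholder reference spectra and tolerance bounds; it is not the published 23-class lookup tables or NOAA's implementation.

```python
# Simplified sketch of a spectral QA score for Rrs: normalize the spectrum,
# find the nearest reference water class, and score the fraction of bands
# inside that class's envelope. Reference data here are placeholders.
import numpy as np

def qa_score(rrs, ref_classes, lower, upper):
    """rrs: (n_bands,); ref_classes: (n_classes, n_bands) normalized spectra;
    lower/upper: per-class band envelopes, same shape as ref_classes."""
    n_rrs = rrs / np.linalg.norm(rrs)        # spectral-shape normalization
    cosines = ref_classes @ n_rrs            # similarity to each class
    k = int(np.argmax(cosines))              # nearest water class
    in_bounds = (n_rrs >= lower[k]) & (n_rrs <= upper[k])
    return k, in_bounds.mean()               # class index, score in [0, 1]

# Toy 5-band example with 2 placeholder water classes
rrs = np.array([0.004, 0.005, 0.006, 0.003, 0.001])
ref = np.array([[0.5, 0.5, 0.5, 0.4, 0.3], [0.2, 0.3, 0.5, 0.6, 0.5]])
ref = ref / np.linalg.norm(ref, axis=1, keepdims=True)
low, up = ref * 0.7, ref * 1.3               # illustrative +/-30% envelope
k, score = qa_score(rrs, ref, low, up)
print(f"water class {k}, QA score {score:.3f}")
```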