• Title/Summary/Keyword: Performance Information Use

Search Result 5,694

A Double-Blind Comparison of Paroxetine and Amitriptyline in the Treatment of Depression Accompanied by Alcoholism : Behavioral Side Effects during the First 2 Weeks of Treatment (주정중독에 동반된 우울증의 치료에서 Paroxetine과 Amitriptyline의 이중맹 비교 : 치료초기 2주 동안의 행동학적 부작용)

  • Yoon, Jin-Sang;Yoon, Bo-Hyun;Choi, Tae-Seok;Kim, Yong-Bum;Lee, Hyung-Yung
    • Korean Journal of Biological Psychiatry
    • /
    • v.3 no.2
    • /
    • pp.277-287
    • /
    • 1996
  • Objective : It has been proposed that cognition and related aspects of mental functioning are impaired in depression as well as in alcoholism. The objective of this study was to compare the behavioral side effects of paroxetine and amitriptyline in depressed patients with comorbid alcoholism. The focused comparisons were drug effects on psychomotor performance, cognitive function, sleep and daytime sleepiness during the first 2 weeks of treatment. Methods : After an alcohol detoxification period (3 weeks) and a washout period (1 week), a total of 20 male inpatients with alcohol use disorder (DSM-IV), who also had a major depressive episode (DSM-IV), were treated double-blind with paroxetine 20mg/day (n=10) or amitriptyline 25mg/day (n=10) for 2 weeks. All patients were required to have a score of at least 18 on both the Hamilton Rating Scale for Depression (HAM-D) and the Beck Depression Inventory (BDI) at pre-drug baseline. Patients randomized to paroxetine received active medication in the morning and placebo in the evening, whereas those randomized to amitriptyline received active medication in the evening and placebo in the morning. All patients performed the tasks in a test battery at baseline and at days 3, 7 and 14. The test battery included: critical flicker fusion threshold for sensory information processing capacity; choice reaction time for gross psychomotor performance; tracking accuracy and latency of response to a peripheral stimulus as a measure of fine sensorimotor co-ordination and divided attention; and digit symbol substitution as a measure of sustained attention and concentration. To rate perceived sleep and daytime sleepiness, 10cm visual analogue scales were employed at baseline and at days 3, 7 and 14. The subjective rating scales were adapted for this study from the Leeds Sleep Evaluation Questionnaire and the Epworth Sleepiness Scale. 
In addition, a comprehensive side effect assessment using the UKU side effect rating scale was carried out at baseline and at days 7 and 14. The efficacy of treatment was evaluated using the HAM-D, BDI and clinical global impression for severity and improvement at days 7 and 14. Results : The pattern of results indicated that paroxetine improved performance on most of the test variables and also improved sleep, with no effect on daytime sleepiness over the study period. In contrast, amitriptyline disrupted performance on some tests and improved sleep with increased daytime sleepiness, in particular at day 3. On the UKU side effect rating scale, more side effects were registered on amitriptyline. Therapeutic efficacy was observed in favor of paroxetine as early as day 7. Conclusion : These results demonstrated that paroxetine is much better than amitriptyline for the treatment of depressed patients with comorbid alcoholism, at least in terms of behavioral safety and tolerability. Furthermore, the results may help explain the therapeutic outcome of paroxetine. For example, an earlier onset of antidepressant action of paroxetine may be caused by early improvement in cognitive function or by better compliance with treatment.


Effects of Conflict Management Strategy Within Supply Chain on Partnership and Performance (공급망 내 갈등관리전략이 파트너십과 성과에 미치는 영향)

  • Ham, Yoon-Hee;Song, Sang-Hwa
    • Korean small business review
    • /
    • v.42 no.1
    • /
    • pp.79-105
    • /
    • 2020
  • While individual enterprises with differing objectives within a supply chain require a variety of resources to achieve their own goals and performance, they must form interdependent relationships with one another to secure the resources they need, since each enterprise faces limitations of time, space and cost in securing all the resources on its own. In this process, the possibility of conflict rises and opportunistic behavior increases due to environmental factors such as information asymmetry between enterprises, bounded rationality, pursuit of self-interest, and risk aversion. Existing studies on conflict in supply chains have focused only on investigating the causes of conflict and the impact of conflict on performance, and thus have the limitation of failing to present specific conflict management strategies, based on conflict type, from the perspective of conflict resolution mechanisms. In this study, therefore, the TKI model of Kilmann and Thomas (1977) was used to subdivide the conflict management strategies that enterprises apply in transactions within supply chains, and the impact of each strategy on partnership and performance was examined. The results showed that the concession-type and cooperation-type conflict management strategies had a positive (+) impact on relationship commitment, a factor of partnership, and that relationship commitment in turn had a positive (+) impact on performance. 
In other words, enterprises that use concession-type and cooperation-type conflict management strategies in conflict situations can expect a very positive impact on performance if, recognizing the importance of the relationship, they also build strong relationship commitment, such as investments in and efforts toward a sustainable relationship, alongside their conflict management. The most important contribution of this study is that these results can help strengthen partnerships between enterprises and minimize the supply chain risks caused by conflict.

A Study on The RFID/WSN Integrated system for Ubiquitous Computing Environment (유비쿼터스 컴퓨팅 환경을 위한 RFID/WSN 통합 관리 시스템에 관한 연구)

  • Park, Yong-Min;Lee, Jun-Hyuk
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.49 no.1
    • /
    • pp.31-46
    • /
    • 2012
  • The most critical technology for implementing ubiquitous health care is Ubiquitous Sensor Network (USN) technology, which makes use of various sensor technologies, processor integration technology, and wireless network technologies-Radio Frequency Identification (RFID) and Wireless Sensor Network (WSN)-to easily gather and monitor physical environment information from a remote site. With this capability, USN technology can extend the information technology of the existing virtual space into real environments. However, although RFID and WSN have technical similarities and mutual effects, they have largely been studied separately, and sufficient research has not been conducted on their technical integration. EPCglobal, recognizing this issue, proposed the EPC Sensor Network to efficiently integrate and interoperate RFID and WSN technologies based on the international-standard EPCglobal network. The proposed EPC Sensor Network technology uses Complex Event Processing in the middleware to integrate data arriving from RFID and WSN in a single environment and to interoperate the events across the EPCglobal network. However, because the EPC Sensor Network continues to operate even when the minimum conditions for finding complex events in the middleware are not met, its operation cost rises. Moreover, since the technology is based on the EPCglobal network, it can neither operate on sensor data alone, nor connect and interoperate with the individual information systems in which the most important information in the ubiquitous computing environment is stored. Therefore, to address the problems of the existing system, we proposed the design and implementation of a USN integrated management system. First, we proposed an integration system that manages RFID and WSN data based on the Session Initiation Protocol (SIP). 
Second, we defined the minimum conditions for complex events, so that unnecessary complex-event processing can be detected and avoided in the middleware, and proposed an algorithm that extracts complex events only when those minimum conditions are met. To evaluate the performance of the proposed methods, we implemented the SIP-based integrated management system.
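The minimum-condition idea described above can be sketched roughly as follows (an illustrative Python sketch, not the authors' implementation; the event types, the `MIN_CONDITIONS` table, and the function names are all assumptions): the expensive complex-event matching runs only after a cheap count check passes.

```python
# Hypothetical sketch: gate complex-event extraction on minimum conditions
# so the middleware skips windows that cannot possibly contain a match.
from collections import Counter

# Assumed minimum conditions: a complex event needs at least these counts
# of primitive RFID/WSN events inside the current window (illustrative).
MIN_CONDITIONS = {"rfid_read": 1, "temperature": 2}

def meets_minimum_conditions(window):
    counts = Counter(e["type"] for e in window)
    return all(counts[t] >= n for t, n in MIN_CONDITIONS.items())

def extract_complex_events(window):
    # Run the (expensive) complex-event matching only when the cheap
    # minimum-condition test passes; otherwise return nothing.
    if not meets_minimum_conditions(window):
        return []
    # ... full pattern matching would go here ...
    return [{"type": "complex", "sources": [e["type"] for e in window]}]

window = [{"type": "rfid_read"}, {"type": "temperature"}]
print(extract_complex_events(window))  # → [] (only one temperature event)
```

The gate keeps the middleware's cost proportional to the cheap counting step whenever a window cannot satisfy the pattern.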

An Analysis of Web Services in the Legal Works of the Metropolitan Representative Library (광역대표도서관 법정업무의 웹서비스 분석)

  • Seon-Kyung Oh
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.58 no.2
    • /
    • pp.177-198
    • /
    • 2024
  • Article 22(1) of the Library Act, completely revised in December 2006, stipulated that regional representative libraries are statutory organizations, and Article 25(1) of the Library Act, revised again in late 2021, renamed them metropolitan representative libraries and expanded their duties. Cities and provinces are required to designate or establish and operate metropolitan representative libraries because, in addition to their role as public libraries for public information use, cultural activities, and lifelong learning as stipulated in Article 23 of the Act, they are also responsible for the legal works of metropolitan representative libraries as stipulated in Article 26, and lead the development of libraries and knowledge culture by serving as policy libraries, comprehensive knowledge information centers, support and cooperation centers, research centers, and joint preservation libraries for all public libraries in the city or province. It is therefore necessary to analyze and diagnose whether metropolitan representative libraries have faithfully fulfilled their legal works for the past 15 years (2009-2023), and whether they properly provide the results of their statutory planning and implementation on their websites to meet the demands of the digital and mobile era. Accordingly, this study investigated and analyzed the performance of metropolitan representative libraries over the last two years against their current statutory tasks, evaluated the extent to which they provide these through their websites, and suggested complementary measures to strengthen their web services. 
As a result, the analysis showed that the web services for the legal works that metropolitan representative libraries should perform are quite insufficient and inadequate. The study therefore suggested complementary measures, such as building a section for legal works on the homepage, enhancing accessibility and visibility by providing an independent website, providing various kinds of policy information and web services (portal search, inter-library loan, one-to-one consultation, joint DB construction, data transfer and preservation, etc.), and ensuring digital accessibility of knowledge information for the vulnerable.

Context Sharing Framework Based on Time Dependent Metadata for Social News Service (소셜 뉴스를 위한 시간 종속적인 메타데이터 기반의 컨텍스트 공유 프레임워크)

  • Ga, Myung-Hyun;Oh, Kyeong-Jin;Hong, Myung-Duk;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.39-53
    • /
    • 2013
  • The emergence of Internet technology and SNS has increased the flow of information and changed the way people communicate from one-way to two-way. Users not only consume and share information; they also create it and share it with friends across social network services. This has turned social media into one of the most important communication tools, a change that also includes Social TV. Social TV is a form of viewing in which people watch a TV program and at the same time share information about its content with friends through social media. Social news, also known as participatory social media, is gaining popularity: it channels user interest through the Internet to surface societal issues and builds news credibility on users' reputations. However, conventional news service platforms focus only on news recommendation. Recent developments in SNS have changed this landscape, allowing users to share and disseminate news, but conventional platforms provide no special mechanism for sharing news. Current social news services only allow users to access an entire news item; users cannot access just the part of the content that relates to their interests. For example, even if a user is interested in only part of a news item and wants to share that part, it is hard to do so, and in the worst case other users may understand the news in a different context. To solve this, a social news service must provide a way to supply additional information. For example, Yovisto, an academic video search service, provides time-dependent metadata for videos: users can search for and watch part of a video according to the time-dependent metadata, and can share the content with friends on social media. Yovisto divides and synchronizes a video whenever the slide presentation changes to another page. 
However, this method cannot be applied to news video, since news video does not incorporate any slide presentation; a segmentation method is required to separate the news video and create the time-dependent metadata. In this paper, a time-dependent metadata-based framework is proposed to segment news content and provide time-dependent metadata so that users can use context information to communicate with their friends. The transcript of the news is divided using the proposed story segmentation method. We provide a tag to represent the entire content of the news, and sub tags to indicate the segmented news, including the starting time of each segment. The time-dependent metadata helps users track the news information and allows them to leave a comment on each segment. Users may also share the news, based on the time metadata, either as a segment or as a whole, which helps recipients understand the shared news. To demonstrate the performance, we evaluated story segmentation accuracy as well as tag generation. For this purpose, we measured the accuracy of story segmentation through semantic similarity and compared it to a benchmark algorithm. Experimental results show that the proposed method outperforms the benchmark algorithms in terms of story segmentation accuracy. Note that sub tag accuracy is the most important part of the proposed framework for sharing a specific news context with others. To extract more accurate sub tags, we created a stop word list of terms unrelated to the content of the news, such as the names of anchors or reporters, and applied it to the framework. We then analyzed the accuracy of the tags and sub tags that represent the context of the news. The analysis suggests that the proposed framework helps users share their opinions, with context information, in social media and social news.
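The story segmentation step can be illustrated with a toy sketch (hypothetical; the paper measures semantic similarity, whereas this sketch uses simple lexical cosine similarity, and the `threshold` value is an assumption): a story boundary is declared wherever the similarity between adjacent transcript sentences drops.

```python
# Illustrative sketch: cut a news transcript into stories where the
# similarity between adjacent sentences falls below a threshold.
import math
from collections import Counter

def cosine(a, b):
    # cosine similarity over bag-of-words vectors (a crude stand-in
    # for the semantic similarity used in the paper)
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def segment(sentences, threshold=0.3):
    segments, current = [], [sentences[0]]
    for prev, cur in zip(sentences, sentences[1:]):
        if cosine(prev, cur) < threshold:   # similarity drop => boundary
            segments.append(current)
            current = [cur]
        else:
            current.append(cur)
    segments.append(current)
    return segments

transcript = ["storm hits the coast", "the storm damaged homes",
              "stocks rallied today on the markets", "the markets closed higher"]
print(len(segment(transcript)))  # → 2 (weather story, markets story)
```

Each resulting segment would then receive a sub tag carrying its starting time, which is what lets a reader share one story rather than the whole broadcast.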

Development of a Window Program for Searching CpG Island (CpG Island 검색용 윈도우 프로그램 개발)

  • Kim, Ki-Bong
    • Journal of Life Science
    • /
    • v.18 no.8
    • /
    • pp.1132-1139
    • /
    • 2008
  • A CpG island is a short stretch of DNA in which the frequency of the CG dinucleotide is higher than in other regions. CpG islands are present in the promoters and exonic regions of approximately 30~60% of mammalian genes, so they are useful markers for genes in organisms whose genomes contain 5-methylcytosine. Recent evidence supports the notion that hypermethylation of CpG islands, by silencing tumor suppressor genes, plays a major causal role in cancer, and has been described in almost every tumor type. In this respect, CpG island search by computational methods is very helpful for cancer research and for computational promoter and gene prediction. I therefore developed a Windows program (called CpGi) on the basis of the CpG island criteria defined by D. Takai and P. A. Jones. CpGi was implemented in Visual C++ 6.0 and can determine the locations of CpG islands using diverse user-specified parameters (%GC, Obs(CpG)/Exp(CpG), window size, step size, gap value, number of CpGs, length). The analysis result of CpGi provides a graphical map of CpG islands and a G+C% plot, where more detailed information on each CpG island can be obtained through a pop-up window. Two human contigs, AP00524 (from chromosome 22) and NT_029490.3 (from chromosome 21), were used to compare the accuracy of CpGi's search results against two other public programs, Emboss-CpGPlot and CpG Island Searcher, which are web-based CpG island search programs. The comparison showed that CpGi is on a level with or outperforms Emboss-CpGPlot and CpG Island Searcher. With a simple and easy-to-use user interface, CpGi should be a very useful tool for genome analysis and CpG island research. To obtain a copy of CpGi for academic use only, contact the corresponding author.
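The criteria a program like CpGi is built on can be sketched as a sliding-window scan (an illustrative sketch, not the program's source; the default thresholds follow commonly cited Takai-and-Jones-style values and should be treated as assumptions):

```python
# Illustrative sliding-window CpG island scan: a window qualifies when
# %GC and the observed/expected CpG ratio exceed user-set thresholds.

def gc_percent(seq):
    return 100.0 * (seq.count("G") + seq.count("C")) / len(seq)

def obs_exp_cpg(seq):
    c, g = seq.count("C"), seq.count("G")
    if c == 0 or g == 0:
        return 0.0
    # Obs/Exp = N(CpG) * L / (N(C) * N(G))
    return seq.count("CG") * len(seq) / (c * g)

def find_cpg_windows(seq, window=200, step=1, min_gc=55.0, min_oe=0.65):
    # returns (start, end) pairs of qualifying windows; a full tool would
    # additionally merge overlapping windows and enforce a minimum length
    hits = []
    for i in range(0, len(seq) - window + 1, step):
        w = seq[i:i + window]
        if gc_percent(w) >= min_gc and obs_exp_cpg(w) >= min_oe:
            hits.append((i, i + window))
    return hits

print(find_cpg_windows("CG" * 100))  # → [(0, 200)]
```

Exposing `window`, `step`, `min_gc` and `min_oe` as parameters mirrors the kind of user-specified settings the abstract lists (%GC, Obs(CpG)/Exp(CpG), window size, step size).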

Nutrition Education Performance of Elementary School Dietitians in North Gyeonggi Province (경기 북부 지역 초등학교 영양사의 영양 교육 실시 현황)

  • Min Kyung-Chan;Park Young-Sim;Park Hae-Won;Lee Myung-Ho;Shin Yong-Chill;Cho Kyu-Bong;Rhie Kyoung-Ik;Jeaung Koang-Ock;Shin Yim-Sook;Yoon Hee-Sun
    • The Korean Journal of Food And Nutrition
    • /
    • v.19 no.2
    • /
    • pp.183-192
    • /
    • 2006
  • The purpose of this study was to investigate the nutrition education performance of elementary school dietitians in the northern portion of Gyeonggi province. Self-administered questionnaires were given to 50 dietitians working in elementary schools with self-operated food service, and 35 (70%) returned them. The results are summarized as follows: no students took part in nutrition education as a regular course, but all dietitians performed nutrition education in passive ways, such as 'using home correspondence' (39.0%), 'bulletin board/poster' (22.0%), 'using the internet' (13.4%) and 'indirectly through a classroom teacher' (12.2%). Most respondents performed nutrition education 'one time/month' (66.0%) or 'one time/week' (20.0%). The respondents thought that suitable times for nutrition education were 'during a related subject' (35.5%) or 'during lunch time' (22.6%) rather than 'during an independent subject' (16.1%). Most of the dietitians (94.3%) did not perform nutrition counseling because of 'a lack of opportunity' (72.7%) and 'workload' (27.3%). Additionally, 88.6% of respondents did not offer nutrition counseling for parents, citing 'am not a teacher' (56.7%) and 'workload' (30.0%). Information sources for nutrition education were mainly the 'internet' (71.4%) and 're-educational materials' (17.1%). They possessed instructional materials in the forms of 'printed materials' (35.1%), 'exhibition/bulletin board' (31.2%), and 'electrical materials' (33.8%), but did not have 'solid materials' such as food models and dolls. They generally had 'leaflets' (82.9%), 'bulletins' (68.6%), 'internet' (57.1%), and 'CDs' (57.1%). Preferences for instructional materials were 'printed materials' (46.2%), 'exhibition/bulletin board' (36.5%), and 'electrical materials' (17.3%). 'Leaflets' (80.0%) were mainly used; 'CD' use (17.1%) was low compared to the proportion possessing CDs. 
The topics most frequently chosen for nutrition education were 'table manners' (82.9%), 'basic concepts of food and nutrition' (80.0%), and 'proper food habits' (80.0%), while topics helpful for practical use, such as 'how much do I eat' (20.0%) and 'nutrition labeling' (37.1%), were not included frequently. The respondents regarded 'eating only what they like' (60.0%), 'intake of processed foods' (17.8%), and 'obesity' (17.8%) as the most common nutritional problems among elementary school children. They also thought that establishing a regular course for nutrition education would be an effective way to reduce these problems. In conclusion, nutrition education programs combining effective instructional materials with practical topics should be developed, and it is recommended that dietitians act as teachers participating in regular courses as soon as possible.

A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference
    • /
    • 2003.07a
    • /
    • pp.60-61
    • /
    • 2003
  • A new approach to reducing the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms having a diffraction efficiency of 75.8% and a uniformity of 5.8% are proven in computer simulation and demonstrated experimentally. Recently, computer-generated holograms (CGHs) with high diffraction efficiency and design flexibility have been widely developed for many applications such as optical information processing, optical computing, and optical interconnection. Among the proposed optimization methods, the GA has become popular due to its capability of reaching nearly global optima. However, there exists a drawback to consider when using the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One major reason the GA's operation may be time-intensive is the expense of computing the cost function, which must Fourier transform the parameters encoded on the hologram into the fitness value. To remedy this drawback, the Artificial Neural Network (ANN) has been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications requiring high precision. We therefore attempt to find a new approach that combines the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is a process of iteration, including selection, crossover, and mutation operators [2]. It is worth noting that the evaluation of the cost function, with the aim of selecting better holograms, plays an important role in the implementation of the GA. However, this evaluation process wastes much time Fourier transforming the encoded parameters on the hologram into the value to be solved; depending on the speed of the computer, this process can last up to ten minutes. 
It is more effective if, instead of merely generating random holograms in the initial step, a set of approximately desired holograms is employed. By doing so, the initial population contains fewer random trial holograms, with a corresponding reduction in the GA's computation time. Accordingly, a hybrid algorithm that uses a trained neural network to initialize the GA's procedure is proposed: the initial population contains fewer random holograms, compensated by approximately desired ones. Figure 1 is the flowchart of the hybrid algorithm in comparison with the classical GA. The procedure of synthesizing a hologram on the computer is divided into two steps. First, the simulation of holograms based on the ANN method [1] is carried out to acquire approximately desired holograms. With a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained artificial neural network enables us to attain approximately desired holograms that are in fairly good agreement with what the theory suggests. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the operation of the hybrid algorithm and the GA are the same except for the modified initial step. Hence, the parameter values verified in Ref. [2], such as the probabilities of crossover and mutation, the tournament size, and the crossover block size, remain unchanged, aside from the reduced population size. A reconstructed image of 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the iteration number is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is also evaluated, as shown in Fig. 2. 
With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared to the GA method, the hybrid algorithm demonstrates its efficient performance. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows pictures of the diffracted patterns of the letter "O" from the holograms generated using the hybrid algorithm. A diffraction efficiency of 75.8% and a uniformity of 5.8% are measured. The simulation and experimental results are in fairly good agreement with each other. In this paper, the Genetic Algorithm and Neural Network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared to the GA method while still achieving holograms of high diffraction efficiency and uniformity. This work was supported by No.mOl-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.
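The hybrid initialization idea, seeding the GA population with ANN-approximated holograms rather than purely random ones, can be sketched in miniature (illustrative only; the fitness function here is a stand-in for the Fourier-transform-based cost, and all sizes, rates, and names are toy values):

```python
# Toy sketch of a GA for binary phase patterns whose initial population is
# partly seeded (as if by a trained ANN) instead of being fully random.
import random

random.seed(0)
N = 16  # hologram encoded as N binary phase pixels (toy size)

def fitness(h, target):
    # placeholder cost: agreement with the target pattern; a real CGH GA
    # would Fourier transform the phase pattern and score diffraction
    return sum(a == b for a, b in zip(h, target))

def init_population(size, seeds):
    pop = [list(s) for s in seeds]                      # "ANN" seed holograms
    while len(pop) < size:                              # fill with random ones
        pop.append([random.randint(0, 1) for _ in range(N)])
    return pop

def evolve(pop, target, generations=50, p_mut=0.05):
    for _ in range(generations):
        pop.sort(key=lambda h: fitness(h, target), reverse=True)
        parents = pop[: len(pop) // 2]                  # elitist selection
        children = []
        for _ in range(len(pop) - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N)
            child = a[:cut] + b[cut:]                   # one-point crossover
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda h: fitness(h, target))

target = [1, 0] * (N // 2)
seed = [1, 0] * (N // 2 - 1) + [0, 0]                   # near-miss "ANN" seed
best = evolve(init_population(10, [seed]), target)
print(fitness(best, target))
```

Because the seeded individual already scores 15/16, the GA starts close to the optimum, which is the mechanism behind the reported reduction in computation time.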


Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia pacific journal of information systems
    • /
    • v.20 no.2
    • /
    • pp.63-86
    • /
    • 2010
  • Personalized services directly and indirectly acquire personal data, in part to provide customers with higher-value services that are context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensory networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet the danger of personal information overflowing is increasing, because the data retrieved by the sensors usually contains private information. Various technical characteristics of context-aware applications have still more troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns, such as over the unrestricted availability of context information, have also increased. These privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, reflecting the need for a better understanding of information privacy concepts. In particular, the factors of information privacy need to be revised according to the characteristics of new technologies. However, previous information privacy factors for context-aware applications have at least two shortcomings. First, there has been little overview of the technology characteristics of context-aware computing: existing studies have focused on only a small subset of its technical characteristics. Therefore, there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications. 
Second, user surveys have been widely used to identify factors of information privacy in most studies, despite the limitations of users' knowledge of and experience with context-aware computing technology. Since context-aware services have not yet been widely deployed on a commercial scale, only very few people have prior experience with context-aware personalized services. It is difficult to build users' knowledge about context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animation, etc. A survey that assumes the participants have sufficient experience of or insight into the technologies shown may therefore not be valid. Moreover, some surveys are based on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concern in context-aware personalized services is thus needed. Hence, the purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-ordered list of those factors. We consider the overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce a rank-ordered list; it lends itself well to obtaining a set of universal factors of information privacy concern and their priority. An international panel of researchers and practitioners with expertise in privacy and context-aware systems was involved in our research. The Delphi rounds faithfully followed the procedure for a Delphi study proposed by Okoli and Pawlowski. 
This involved three general rounds: (1) brainstorming for important factors; (2) narrowing the original list down to the most important ones; and (3) ranking the list of important factors. For this round only, experts were treated as individuals, not panels. Adapting Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of exclusive factors for information privacy concern in context-aware personalized services. In the first round, respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to support this, some of the main factors found in the literature were presented to the participants. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors drawn from the literature survey. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factor to determine the final sub-factors from among the candidates; the final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence; to do so, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, the experts reported that context data collection and a highly identifiable level of identical data are the most important main factor and sub-factor, respectively. 
Additional important sub-factors included the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information; our study helped to clarify these sometimes vague issues by determining which privacy concern issues are viable given the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected the specific characteristics with the higher potential to increase users' privacy concerns. Second, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with professionals' opinions on the extent to which users have privacy concerns. The traditional questionnaire method was not selected because users were considered to lack understanding of and experience with the new technology behind context-aware personalized services. For understanding users' privacy concerns, the professionals in the Delphi process selected context data collection, tracking and recording, and the sensory network as the most important factors among the technological characteristics of context-aware personalized services. 
In the creation of context-aware personalized services, this study demonstrates the importance and relevance of determining an optimal methodology: which technologies are needed, in what sequence, to acquire which types of users' context information. Most studies focus on which services and systems should be provided and developed by utilizing context information, in step with the development of context-aware technology. However, the results of this study show that, in terms of users' privacy, it is necessary to pay greater attention to the activities that acquire context information. Following the evaluation of the sub-factors, additional studies would be necessary on approaches to reducing users' privacy concerns about technological characteristics such as the highly identifiable level of identical data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which is related to output. The results show that delivery and display, which present services to users in context-aware personalized services under the anywhere-anytime-any-device concept, are regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase service success rates and, hopefully, user acceptance of those services. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.

Managing Duplicate Memberships of Websites : An Approach of Social Network Analysis (웹사이트 중복회원 관리 : 소셜 네트워크 분석 접근)

  • Kang, Eun-Young;Kwahk, Kee-Young
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.1
    • /
    • pp.153-169
    • /
    • 2011
  • Today, use of the Internet environment is considered absolutely essential for establishing corporate marketing strategy. Companies have promoted their products and services through various on-line marketing activities, such as providing gifts and points to customers in exchange for participating in events, based on customers' membership data. Since companies can use these membership data to enhance their marketing efforts through various data analyses, appropriate website membership management may play an important role in increasing the effectiveness of an on-line marketing campaign. Despite the growing interest in proper membership management, however, there have been difficulties in identifying inappropriate members who can weaken on-line marketing effectiveness. In the on-line environment, customers tend not to reveal themselves as clearly as in the off-line market. Customers with malicious intent are able to create duplicate IDs by using others' names illegally or faking login information when registering for membership. Since duplicate members are likely to intercept gifts and points that should be sent to the appropriate customers who deserve them, this can result in ineffective marketing efforts. Considering that the number of website members and the related marketing costs are increasing significantly, it is necessary for companies to find efficient ways to screen out and exclude these duplicate-member troublemakers. With this motivation, this study proposes an approach for managing duplicate memberships based on social network analysis and verifies its effectiveness using membership data gathered from real websites. A social network is a social structure made up of actors, called nodes, which are tied by one or more specific types of interdependency. Social networks represent the relationships between the nodes and show the direction and strength of each relationship. 
Various analytical techniques have been proposed based on these social relationships, such as centrality analysis, structural holes analysis, structural equivalence analysis, and so on. Component analysis, one of the social network analysis techniques, deals with the sub-networks that form meaningful information within the group connections. We propose a method for managing duplicate memberships using component analysis. The procedure is as follows. The first step is to identify the membership attributes that will be used for analyzing relationship patterns among memberships; these include ID, telephone number, address, posting time, IP address, and so on. The second step is to compose social matrices based on the identified membership attributes and aggregate the values of each social matrix into a combined social matrix, which represents how strongly pairs of nodes are connected. When a pair of nodes is strongly connected, those nodes are likely to be duplicate memberships. The combined social matrix is transformed into a binary matrix with cell values of '0' or '1' using a relationship criterion that determines whether a membership is duplicate or not. The third step is to conduct a component analysis of the combined social matrix in order to identify component nodes and isolated nodes. The fourth step is to identify the number of real memberships and calculate the reliability of the website membership based on the component analysis results. The proposed procedure was applied to three real websites operated by a pharmaceutical company. The empirical results showed that the proposed method was superior to a traditional database approach using simple address comparison. In conclusion, this study is expected to shed some light on how social network analysis can enhance reliable on-line marketing performance by efficiently and effectively identifying duplicate memberships of websites.
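The four-step procedure above can be sketched in code. The member data, attribute-match matrices, and the threshold of 2 below are all hypothetical illustrations (the paper does not publish its matrices or criterion); the sketch only shows the shape of the computation: per-attribute match matrices are summed into a combined social matrix, binarized, and searched for connected components, each of which is counted as one real membership.

```python
members = ["id01", "id02", "id03", "id04"]

# Step 2: one 0/1 match matrix per membership attribute (hypothetical data);
# matrix[i][j] = 1 means members i and j share that attribute's value.
phone   = [[0,1,0,0],[1,0,0,0],[0,0,0,0],[0,0,0,0]]
address = [[0,1,0,0],[1,0,0,0],[0,0,0,1],[0,0,1,0]]
ip      = [[0,1,0,0],[1,0,0,0],[0,0,0,0],[0,0,0,0]]

n = len(members)
combined = [[phone[i][j] + address[i][j] + ip[i][j] for j in range(n)]
            for i in range(n)]

# Binarize: here, pairs matching on >= 2 attributes count as duplicates
threshold = 2
binary = [[1 if combined[i][j] >= threshold else 0 for j in range(n)]
          for i in range(n)]

# Step 3: component analysis via depth-first search on the binary matrix
def components(adj):
    seen, comps = set(), []
    for start in range(len(adj)):
        if start in seen:
            continue
        stack, comp = [start], []
        seen.add(start)
        while stack:
            v = stack.pop()
            comp.append(v)
            for w in range(len(adj)):
                if adj[v][w] and w not in seen:
                    seen.add(w)
                    stack.append(w)
        comps.append(comp)
    return comps

# Step 4: each component (including isolated nodes) is one real membership
comps = components(binary)
real_memberships = len(comps)
reliability = real_memberships / n   # share of registered IDs that are distinct
print(comps, real_memberships, reliability)
```

With these toy matrices, id01 and id02 match on all three attributes and collapse into one component, while id03 and id04 match only on address and stay separate, so four registered IDs yield three real memberships and a reliability of 0.75.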