A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data

Journal of Intelligence and Information Systems, v.25, no.1, pp.163-177, 2019
As smartphones become widely used, human activity recognition (HAR) tasks for recognizing the personal activities of smartphone users with multimodal data have been actively studied. The research area is expanding from the recognition of simple body movements of an individual user to the recognition of low-level and high-level behaviors. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Moreover, previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data. In contrast, physical sensors such as the accelerometer, magnetic field sensor, and gyroscope are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a method for detecting accompanying status with a deep learning model using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope) is proposed. The accompanying status was defined as a redefinition of part of the user's interaction behavior, covering whether the user is accompanied by an acquaintance at a close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation was proposed. First, a data preprocessing method consisting of time synchronization of multimodal data from different physical sensors, data normalization, and sequence data generation was introduced. We applied nearest-neighbor interpolation to synchronize the timestamps of data collected from different sensors.
Normalization was performed for each x, y, and z axis of the sensor data, and sequence data were generated with the sliding-window method. The sequence data then became the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consisted of three convolutional layers and had no pooling layer, in order to maintain the temporal information of the sequence data. Next, LSTM recurrent networks received the feature maps, learned long-term dependencies from them, and extracted features. The LSTM recurrent networks consisted of two layers, each with 128 cells. Finally, the extracted features were classified by a softmax classifier. The loss function of the model was the cross-entropy function, and the weights of the model were randomly initialized from a normal distribution with a mean of 0 and a standard deviation of 0.1. The model was trained using the adaptive moment estimation (Adam) optimization algorithm with a mini-batch size of 128. We applied dropout to the input values of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001 and decayed exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data, and smartphone data were collected from a total of 18 subjects. Using the data, the model classified accompanying and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network. In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that enable models trained on the training data to transfer to evaluation data that follows a different distribution.
It is expected that a model capable of exhibiting robust recognition performance against changes in data not considered during the model training stage will be obtained.
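The preprocessing pipeline described above (nearest-neighbor time synchronization, per-axis normalization, and sliding-window sequence generation) can be sketched in a few lines of NumPy. The window size, step, and array shapes below are illustrative assumptions, not the values used in the paper:

```python
import numpy as np

def nearest_sync(t_target, t_source, values):
    """Resample sensor values at t_target using nearest-neighbor interpolation."""
    idx = np.abs(t_source[None, :] - t_target[:, None]).argmin(axis=1)
    return values[idx]

def normalize_per_axis(data):
    """Z-score normalize each sensor axis (column) independently."""
    return (data - data.mean(axis=0)) / (data.std(axis=0) + 1e-8)

def sliding_windows(data, window_size, step):
    """Cut a (time x channels) array into overlapping fixed-length sequences."""
    starts = range(0, len(data) - window_size + 1, step)
    return np.stack([data[s:s + window_size] for s in starts])

# Toy example: 100 timesteps x 9 channels (x, y, z for accelerometer,
# magnetic field, and gyroscope), resampled onto a common time base.
t_common = np.linspace(0.0, 1.0, 100)
t_gyro = np.sort(np.random.rand(90))        # gyroscope sampled irregularly
gyro = np.random.randn(90, 3)
gyro_synced = nearest_sync(t_common, t_gyro, gyro)       # now (100, 3)

raw = np.hstack([np.random.randn(100, 6), gyro_synced])  # (100, 9)
seqs = sliding_windows(normalize_per_axis(raw), window_size=20, step=10)
print(seqs.shape)  # (9, 20, 9): 9 sequences ready as CNN input
```

Each resulting window would then be fed to the CNN-LSTM classifier described in the abstract.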
Purpose: The radiochromic film (Gafchromic EBT3, Ashland Advanced Materials, USA) and the 3-dimensional analysis system Dosimetry Check™ (DC, MathResolutions, USA) were evaluated for patient-specific quality assurance (QA) of helical tomotherapy. Materials and Methods: Depending on the tumors' positions, three types of targets, the abdominal tumor (130.6 cm³), the retroperitoneal tumor (849.0 cm³), and the whole abdominal metastasis tumor (3131.0 cm³), were applied to the humanoid phantom (Anderson Rando Phantom, USA). We established a total of 12 comparative treatment plans under four geometric beam-irradiation conditions: field widths (FW) of 2.5 cm and 5.0 cm, and pitches of 0.287 and 0.43. Ionization chamber (1D) and EBT3 film (2D) measurements, obtained by inserting them into the cheese phantom, were compared to DC measurements of the 3D dose reconstruction on CT images from beam fluence log information. For the clinical feasibility evaluation of the DC, dose reconstruction was performed using the same cheese phantom as in the EBT3 method. The recalculated dose distributions revealed dose error information during the actual irradiation, quantitatively compared to the treatment plan on the same CT images. The thread effect, which can appear in helical tomotherapy, was analyzed by ripple amplitude (%). We also performed gamma index analysis (DD: 3%, DTA: 3 mm, pass threshold: 95%) to check the pattern of the dose distribution. Results: Ripple amplitude measurement resulted in the highest average of 23.1% in the peritoneum tumor. In the radiochromic film analysis, the absolute dose error was on average 0.9±0.4%, and the gamma index passing rate was on average 96.4±2.2% (passing criterion: >95%), which could be limited for large target sizes such as the whole abdominal metastasis tumor. In the DC analysis with the humanoid phantom for an FW of 5.0 cm, the average across the three regions was 91.8±6.4% in the 2D and 3D plans.
The three planes (axial, coronal, and sagittal) and the dose profile could be analyzed against the planned dose distributions for the entire peritoneum tumor and the whole abdominal metastasis target. The dose errors based on the dose-volume histogram in the DC evaluations increased with FW and pitch. Conclusion: The DC method can perform dose error analysis on 3D patient image data from the measured beam fluence log information alone, without additional dosimetry tools, for patient-specific quality assurance. Also, it may be applied without limitations on tumor location and size; therefore, the DC could be useful in patient-specific QA during helical tomotherapy of large and irregular tumors.
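As a rough illustration of the gamma index analysis used above, a minimal 1D global gamma computation with a 3%/3 mm criterion can be sketched as follows. The Gaussian test profile and grid are hypothetical, not the measured data, and full clinical gamma analysis works on 2D/3D grids with interpolation:

```python
import numpy as np

def gamma_index_1d(x_ref, d_ref, x_eval, d_eval, dd_pct=3.0, dta_mm=3.0):
    """1D global gamma analysis: for each evaluated point, search the
    reference profile for the minimum combined dose/distance metric.
    A point passes when gamma <= 1."""
    dd_crit = dd_pct / 100.0 * d_ref.max()   # global dose-difference criterion
    gammas = np.empty(len(x_eval))
    for i, (x, d) in enumerate(zip(x_eval, d_eval)):
        dose_term = (d - d_ref) / dd_crit
        dist_term = (x - x_ref) / dta_mm
        gammas[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return gammas

# Identical reference and evaluated profiles should pass everywhere.
x = np.linspace(0.0, 50.0, 101)                  # position in mm
d = 100.0 * np.exp(-((x - 25.0) / 10.0) ** 2)    # Gaussian dose profile
g = gamma_index_1d(x, d, x, d)
passing = (g <= 1.0).mean() * 100.0
print(f"passing rate: {passing:.1f}%")  # 100.0%
```

A passing rate above the 95% threshold, as in the film analysis reported above, indicates agreement between measured and planned dose distributions.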
Mobile communications have evolved rapidly over the decades from 2G to 5G, mainly focusing on higher speeds to meet growing data demands. With the start of the 5G era, efforts are being made to provide customers with various services, such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change our living environment and industries as a whole. To provide those services, reduced latency and high reliability, on top of high data speeds, are critical for real-time services. Thus, 5G has paved the way for service delivery with a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10⁶ devices/km². In particular, in intelligent traffic control systems and services using Vehicle-to-X (V2X) communication, such as traffic control, reduced delay and high reliability for real-time services are very important in addition to high data speed. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves can carry data at high speed thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting their use indoors. Therefore, it is difficult to overcome these constraints under existing networks. The underlying centralized SDN also has limited capability in offering delay-sensitive services, because communication with many nodes overloads its processing. SDN, an architecture that separates control-plane signals from data-plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major determinant of delay.
Since conventional centralized SDN structures have difficulty meeting the desired delay level, studies on the optimal size of SDNs for information processing should be conducted. Thus, SDNs need to be split at a certain scale to construct a new type of network that can efficiently respond to dynamically changing traffic and provide high-quality, flexible services. Moreover, the structure of these networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under worst-case conditions. In such SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round-trip delay (RTD), and the data processing time of the SDN are highly correlated with the overall delay. Of these, the RTD is not a significant factor, because the link speed is sufficient and it contributes less than 1 ms of delay, but the information change cycle and the data processing time of the SDN greatly affect the delay. Especially in an emergency in a self-driving environment linked to an Intelligent Traffic System (ITS), which requires low latency and high reliability, information should be transmitted and processed very quickly; this is a case in point where delay plays a very sensitive role. In this paper, we study the SDN architecture for emergencies during autonomous driving and analyze, through simulation, the correlation with the cell layer from which the vehicle should request relevant information according to the information flow. For the simulation, since the data rate of 5G is high enough, we assume that neighbor-vehicle support information is delivered to the car without errors. Furthermore, we assumed 5G small cells with a 50-250 m cell radius, and vehicle speeds of 30-200 km/h were considered in order to examine the network architecture that minimizes the delay.
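The interplay between cell radius and vehicle speed above can be made concrete with a small dwell-time calculation: the shorter the dwell time in a cell, the more often handover and SDN state updates must occur. This is an illustrative sketch (cell diameter divided by constant speed), not the paper's simulation model:

```python
def dwell_time_s(cell_radius_m, speed_kmh):
    """Time for a vehicle to traverse a cell diameter at constant speed."""
    speed_ms = speed_kmh / 3.6          # km/h -> m/s
    return 2.0 * cell_radius_m / speed_ms

# Sweep the assumed simulation ranges: 50-250 m radius, 30-200 km/h.
for radius in (50, 250):
    for speed in (30, 200):
        t = dwell_time_s(radius, speed)
        print(f"radius {radius} m, {speed} km/h -> dwell {t:.2f} s")
```

At 200 km/h in a 50 m cell the dwell time is only 1.8 s, which is why the information change cycle and SDN processing time dominate the delay budget.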
Much has changed in the field of hospital administration in the wake of the rapid development of science, technology, and systematic hospital management. However, we still have a long way to go in organization, in the quality of hospital employees and of hospital equipment and facilities, and in financial support in order to achieve proper hospital management. These factors greatly affect the ability of hospitals to fulfill their obligations in patient care and nursing services. The purpose of this study is to determine the optimal methods of standardization and quality nursing so as to improve present nursing services through investigation and analysis of various problems concerning nursing administration. This study was undertaken during the six-month period from October 1971 to March 1972. The 41 comprehensive hospitals were selected from amongst the 139 in the whole country. These were categorized according to the specific purposes of their establishment: 7 university hospitals, 18 national or public hospitals, 12 religious hospitals, and 4 enterprise hospitals. The following conclusions have been drawn from information obtained through interviews with the nursing directors in charge of nursing administration in each hospital, and from further investigation of the purposes of establishment, the organization, personnel arrangements, working conditions, practices of service, and budgets of the nursing service department. 1. The nursing administration in this country, along with its activities, has been uncritically adopted from that of developed countries. It is necessary for us to re-establish a new medical and nursing system adequate for our social environment through continuous study and research. 2.
The survey shows that the 7 university hospitals were chiefly concerned with education, medical care, and research; the 18 national or public hospitals with medical care, public health, and charity work; the 12 religious hospitals with medical care, charity, and missionary work; and the 4 enterprise hospitals with public health, medical care, and charity work. In general, the main purposes of the hospitals were those of charity organizations in the pursuit of medical care, education, and public benefit. 3. The survey shows that, in general, hospital facilities rate 64 per cent and medical care 60 per cent against a 100 per cent optimum basis, in accordance with the medical treatment law and the approved criteria for training hospitals. In these respects, university hospitals have achieved the highest standards, followed by religious, enterprise, and national or public hospitals in that order. 4. The ages of the nursing directors range from 30 to 50. The level of education achieved by most of the directors is graduation from a nursing technical high school or a three-year nursing junior college; very few have graduated from college or have taken graduate courses. 5. As for the career tenure of nurses in the hospitals: more than one-third of the nurses, or 38 per cent, have worked less than one year; those in the category of one to two years represent 24 per cent. This means that a total of 62 per cent of the career nurses have been practicing their profession for less than two years. Career nurses with over 5 years of experience number only 16 per cent; therefore the efficiency of nursing services has been rated very low. 6. As for the standard of education of the nurses: 62 per cent of them have taken a three-year course of nursing in junior colleges, and 22 per cent in nursing technical high schools. College graduate nurses come to only 15 per cent, and those with graduate courses only 0.4 per cent.
This indicates that most of the nurses are from nursing technical high schools and three-year nursing junior colleges. Accordingly, it is advisable that nursing services be divided according to their functions, such as professional nurses, technical nurses, and nurse's aides. 7. The survey also shows that the purpose of nursing service administration has been regulated in writing in 74 per cent of the hospitals and not regulated in writing in 26 per cent. The general purposes of nursing are as follows: patient care, assistance in medical care, and education. The main purpose of these nursing services is to establish proper operational and personnel management that focuses on in-service education. 8. The nursing service departments belong to the medical departments in almost 60 per cent of the hospitals. Even where the nursing service department is formally separated, about 24 per cent of the hospitals regard it as a functional unit in the medical department. Only 5 per cent of the hospitals keep the department as a separate one. On the contrary, approximately 12 per cent of the hospitals have not established a nursing service department at all but subordinate it to another department. In this respect, a new hospital organization is required that acknowledges the independent function of the nursing department. In 76 per cent of the hospitals there are advisory committees under the nursing department, such as a dormitory self-regulating committee, an in-service education committee, and a nursing procedure and policy committee. 9. Personnel arrangements and working conditions of nurses. 1) The ratio of nurses to patients is as follows: in university hospitals, 1 to 2.9 for hospitalized patients and 1 to 4.0 for out-patients; in religious hospitals, 1 to 2.3 for hospitalized patients and 1 to 5.4 for out-patients. Grouped together, this indicates that one nurse covers 2.2 hospitalized patients and 4.3 out-patients on a daily basis.
The current medical treatment law stipulates that one nurse should care for 2.5 hospitalized patients or 30.0 out-patients. The statistics therefore indicate that nursing services are being performed with an insufficient number of nurses to cover out-patients. The current law concerns only the minimum number of nurses and disregards the number of nurses required for operating rooms, recovery rooms, delivery rooms, newborn nurseries, central supply rooms, and emergency rooms. Accordingly, amendment of the medical treatment law has been requested. 2) The ratio of doctors to nurses: in university hospitals, the ratio is 1 to 1.1; in national or public hospitals, 1 to 0.8; in religious hospitals, 1 to 0.5; and in private hospitals, 1 to 0.7. The average ratio is 1 to 0.8, whereas the ideal ratio is generally 3 to 1. Since the number of doctors working in hospitals has recently been increasing, the nursing services have consequently been overloaded, sacrificing the services to the patients. 3) The ratio of nurses to clerical staff is 1 to 0.4. However, the ideal ratio is 5 to 1, that is, 1 to 0.2. This means that the clerical staff is disproportionately large relative to the nursing staff. 4) The ratio of nurses to nurse's aides: the average of 2.5 to 1 indicates that much of the nursing service is delegated to nurse's aides owing to the shortage of registered nurses. This is the main cause of the deterioration in the quality of nursing services. It is a real problem in the quest for better nursing services that certain hospitals employ a disproportionate number of nurse's aides in order to meet financial requirements. 5) As for working conditions, most hospitals employ a three-shift day with 8 hours of duty each. However, certain hospitals still use two shifts a day. 6) As for the working environment, most of the hospitals lack welfare and hygienic facilities.
7) The salary basis is highest in the private university hospitals, with enterprise hospitals next, and religious hospitals and national or public ones lowest. 8) Employment is made through paper screening, and the appointment of nurses is conditional upon the favorable opinion of the nursing directors. 9) The turnover ratio for one year in 1971 averaged 29 per cent. The reasons for resignation indicate that the most frequent is marriage, at up to 40 per cent, and the next is overseas employment. This high turnover ratio further causes the deterioration of efficiency in nursing services and supplementary activities. The hospital authorities concerned should take this matter into deep consideration in order to reduce turnover. 10) The importance of in-service education is well recognized and established. It has been noted that on-the-job nurses' training has been most active, with nursing directors taking charge of the orientation programs for newly employed nurses. However, a comprehensive study of instructors, contents, and methods of education, with a separate section for in-service education, is most necessary. 10. Nursing service activities. 1) Division of services and job descriptions are urgently required. 81 per cent of the hospitals keep written regulations of services in accordance with nursing service manuals; 19 per cent do not keep written regulations. Most hospitals delegate to the nursing directors or certain supervisors the power of stipulating service regulations. 21 per cent of the hospitals have policy committees, standardization committees, and advisory committees to proceed with the stipulation of regulations. 2) Approximately 81 per cent of the hospitals have service channels in which directors, supervisors, head nurses, and staff nurses perform their appropriate services according to the service plans and make up the service reports.
In approximately 19 per cent of the hospitals the staff perform their nursing services without utilizing the above channels. 3) In the performance of nursing services, a ward manual is considered the most important tool to be utilized in about 32 per cent of hospitals; 25 per cent of hospitals indicate they use a Kardex; 17 per cent use ward rounding; and others take advantage of work sheets or coordination with other departments through conferences. 4) About 78 per cent of hospitals have records that indicate the status of personnel, and 22 per cent have not. 5) It has been advised that morale among nurses may be increased, ensuring more efficient services, by their being able to exchange opinions and views with each other. 6) The satisfactory performance of nursing services relies on the following factors to the degree indicated: approximately 32 per cent on systematic nursing activities and services; 27 per cent on the head nurses' ability for nursing diagnosis; 22 per cent on an effective supervisory system; 16 per cent on hospital facilities and proper supply; and 3 per cent on effective in-service education. This means that staff nurses, supervisors, head nurses, and directors play the most important roles in the performance of nursing services. 11. About 87 per cent of the hospitals do not have separate budgets for their nursing departments, and only 13 per cent do. It is recommended that the planning and execution of the nursing administration be delegated to the pertinent administrators in order to bring about improved performance and activities in nursing services.
The wall shear stress in the vicinity of end-to-end anastomoses under steady flow conditions was measured using a flush-mounted hot-film anemometer (FMHFA) probe. The experimental measurements were in good agreement with numerical results except in flows with low Reynolds numbers. The wall shear stress increased proximal to the anastomosis in flow from the Penrose tubing (simulating an artery) to the PTFE graft. In flow from the PTFE graft to the Penrose tubing, low wall shear stress was observed distal to the anastomosis. Abnormal distributions of wall shear stress in the vicinity of the anastomosis, resulting from the compliance mismatch between the graft and the host artery, might be an important factor in ANFH formation and graft failure. The present study suggests a correlation between regions of low wall shear stress and the development of anastomotic neointimal fibrous hyperplasia (ANFH) in end-to-end anastomoses.

Air pressure decay (APD) rate and ultrafiltration rate (UFR) tests were performed on new and saline-rinsed dialyzers as well as those reused in patients several times. C-DAK 4000 (Cordis Dow) and CF IS-11 (Baxter Travenol) reused dialyzers obtained from the dialysis clinic were used in the present study. The new dialyzers exhibited a relatively flat APD, whereas saline-rinsed and reused dialyzers showed a considerable amount of decay. C-DAK dialyzers had a larger APD (11.70
Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensory networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet, the danger of personal information exposure is increasing, because the data retrieved by the sensors usually contains private information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns have also increased, such as over the unrestricted availability of context information. Those privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy should be revised according to the characteristics of new technologies. However, previous information privacy factors for context-aware applications have at least two shortcomings. First, there has been little overview of the technological characteristics of context-aware computing. Existing studies have focused only on a small subset of the technical characteristics of context-aware computing. Therefore, there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications.
Second, user surveys have been widely used to identify factors of information privacy in most studies, despite the limitation of users' knowledge and experience of context-aware computing technology. To date, since context-aware services have not yet been widely deployed on a commercial scale, only very few people have prior experience with context-aware personalized services. It is difficult to build users' knowledge of context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animations, etc. Consequently, conducting a survey that assumes the participants have sufficient experience or understanding of the technologies shown in the survey may not be valid. Moreover, some surveys are based on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concerns in context-aware personalized services is therefore needed. Hence, the purpose of this paper is to identify a generic set of factors for elemental information privacy concerns in context-aware personalized services and to develop a rank-ordered list of information privacy concern factors. We consider the overall technological characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce a rank-ordered list; it therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority. An international panel of researchers and practitioners with expertise in privacy and context-aware systems was involved in our research. The Delphi rounds faithfully followed the procedure for a Delphi study proposed by Okoli and Pawlowski.
This involved three general rounds: (1) brainstorming for important factors; (2) narrowing down the original list to the most important ones; and (3) ranking the list of important factors. For the brainstorming round only, experts were treated as individuals, not as a panel. Adapting the process from Okoli and Pawlowski, we outlined the administration of the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of exclusive factors for information privacy concern in context-aware personalized services. In the first round, the respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to aid them, some of the main factors found in the literature were presented to the participants. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors found in the literature survey. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factors to determine the final sub-factors from the candidates. The final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence. To do so, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, the experts reported that context data collection and the high identifiability of identity data are the most important main factor and sub-factor, respectively.
Additional important sub-factors included the diverse types of context data collected, tracking and recording functionalities, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from those of existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information. Our study helped to clarify these sometimes vague issues by determining which privacy concern issues are viable based on the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected the specific characteristics with a higher potential to increase users' privacy concerns. Second, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it correlated the level of importance with the professionals' opinions as to the extent of users' privacy concerns. The traditional questionnaire method was not selected because users of context-aware personalized services were considered to critically lack understanding and experience of the new technology. For understanding users' privacy concerns, the professionals in the Delphi questionnaire process selected context data collection, tracking and recording, and the sensory network as the most important factors among the technological characteristics of context-aware personalized services.
In the creation of context-aware personalized services, this study demonstrates the importance and relevance of determining an optimal methodology, including which technologies are needed, and in what sequence, to acquire which types of users' context information. Most studies, following the development of context-aware technology, focus on which services and systems should be provided and developed by utilizing context information. However, the results of this study show that, in terms of users' privacy, it is necessary to pay greater attention to the activities that acquire context information. In light of the sub-factor evaluation results, additional studies would be needed on approaches to reducing users' privacy concerns about technological characteristics such as a highly identifiable level of identical data, diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked at the next highest level of importance after input is context-aware service delivery, which is related to output. The results show that the delivery and display that present services to users in context-aware personalized services, oriented toward the anywhere-anytime-any-device concept, have come to be regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase the service success rate and, hopefully, user acceptance of those services. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
This study investigated consumer intention to use a location-based mobile shopping service (LBMSS), integrating cognitive and affective responses. Information relevancy was integrated into the pleasure-arousal-dominance (PAD) emotional state model as the conceptual framework of the present study. The results of an online survey of 335 mobile phone users in the U.S. indicated positive effects of arousal and information relevancy on pleasure. In addition, there was a significant relationship between pleasure and intention to use an LBMSS. However, the relationship between dominance and pleasure was not statistically significant. The results of the present study provide insight to retailers and marketers as to what factors they need to consider in implementing location-based mobile shopping services to improve their business performance. Extended Abstract: Location-aware technology has expanded the marketer's reach by reducing the space and time between a consumer's receipt of advertising and purchase, offering real-time information and coupons to consumers in purchasing situations (Dickenger and Kleijnen, 2008; Malhotra and Malhotra, 2009). LBMSS increases the relevancy of SMS marketing by linking advertisements to a user's location (Bamba and Barnes, 2007; Malhotra and Malhotra, 2009). The purpose of the study was to examine the relationships among information relevancy and the affective variables, and their effects on intention to use an LBMSS. Thus, information relevancy was integrated into the pleasure-arousal-dominance (PAD) model, generating the following hypotheses. Hypothesis 1. There will be a positive influence of arousal concerning the LBMSS on pleasure in regard to the LBMSS. Hypothesis 2. There will be a positive influence of dominance in the LBMSS on pleasure in regard to the LBMSS. Hypothesis 3.
There will be a positive influence of information relevancy on pleasure in regard to the LBMSS. Hypothesis 4. There will be a positive influence of pleasure about the LBMSS on intention to use the LBMSS. E-mail invitations were sent to a randomly selected sample of three thousand consumers, all mobile phone owners older than 18, acquired from an independent marketing research company. An online survey technique was employed, utilizing Dillman's (2000) online survey method and follow-ups. A total of 335 valid responses were used for the data analysis in the present study. Before answering any of the questions, respondents were asked to read a document describing LBMSS. The document included definitions and examples of LBMSS provided by various service providers. After that, they were exposed to a scenario describing the participant as taking a Saturday shopping trip to a mall and then receiving a short message from the mall. The short message included new product information and coupons for same-day use at participating stores. They then completed a questionnaire containing various questions. To assess arousal, dominance, and pleasure, we adapted and modified scales used in previous studies for the context of location-based mobile shopping services, with five items each from Mehrabian and Russell (1974). A total of 15 items were measured on a seven-point bipolar scale. To measure information relevancy, four items were borrowed from Mason et al. (1995). Intention to use the LBMSS was captured using two items developed by Blackwell and Miniard (1995) and one item developed by the authors. Data analyses were conducted using SPSS 19.0 and LISREL 8.72. A total of 335 usable responses were obtained after deleting incomplete responses, which yields a response rate of 11.20%. A little over half of the respondents were male (53.9%), and approximately 60% of the respondents were married (57.4%).
The mean age of the sample was 29.44 years, with a range from 19 to 60 years. In terms of ethnicity, there were European Americans (54.5%), Hispanic Americans (5.3%), African Americans (3.6%), and Asian Americans (2.9%), respectively. The respondents were highly educated; close to 62.5% of the participants reported holding a college degree or its equivalent, and 14.5% of the participants had a graduate degree. The sample represents all income categories: less than $24,999 (10.8%), $25,000-$49,999 (28.34%), $50,000-$74,999 (13.8%), and $75,000 or more (10.23%). The respondents of the study indicated that they were employed in many occupations. Responses came from 42 states in the U.S. To identify the dimensions of the research constructs, an exploratory factor analysis (EFA) using varimax rotation was conducted. As indicated in Table 1, the dimensions suggested by the EFA, namely arousal, dominance, relevancy, pleasure, and intention to use, explained 82.29% of the total variance, with factor loadings ranging from .74 to .89. As a next step, a CFA was conducted to validate the dimensions identified by the exploratory factor analysis and to further refine the scale. Table 1 exhibits the results of the measurement model analysis, revealing a chi-square of 202.13 with 89 degrees of freedom (p = .002), GFI of .93, AGFI of .89, CFI of .99, and NFI of .98, which indicates a good model fit to the data (Bagozzi and Yi, 1998; Hair et al., 1998). As Table 1 shows, reliability was estimated with Cronbach's alpha and composite reliability (CR) for all multi-item scales. All values met the criteria for satisfactory reliability of multi-item measures for alpha (>.91) and CR (>.80). In addition, we tested the convergent validity of the measure using average variance extracted (AVE), following the recommendations of Fornell and Larcker (1981).
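The CR and AVE statistics used above are simple functions of the standardized factor loadings. As a minimal sketch (the loadings and inter-construct correlation below are hypothetical, not the study's values):

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
    where each item's error variance is 1 - loading^2 (standardized solution)."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def ave(loadings):
    """Average variance extracted: mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

# Hypothetical standardized loadings for one construct's items
lam = [0.86, 0.88, 0.90]
print(round(composite_reliability(lam), 3))
print(round(ave(lam), 3))

# Fornell-Larcker discriminant validity check: the construct's AVE must
# exceed its shared variance (squared correlation) with every other construct.
shared_variance = 0.55 ** 2  # hypothetical inter-construct correlation of .55
assert ave(lam) > shared_variance
```

With these definitions, the paper's criteria (CR > .80, AVE above the .50 threshold, AVE greater than shared variances) can each be checked construct by construct.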
The AVE values for the model constructs ranged from .74 to .85, which are higher than the threshold suggested by Fornell and Larcker (1981). To examine the discriminant validity of the measure, we again followed the recommendations of Fornell and Larcker (1981). The shared variances between constructs were smaller than the AVEs of the research constructs, confirming the discriminant validity of the measure. The causal model testing was conducted using LISREL 8.72 with a maximum-likelihood estimation method. Table 2 shows the results of the hypothesis testing. The results for the conceptual model revealed good overall fit for the proposed model: chi-square was 342.00 (df = 92, p = .000), NFI was .97, NNFI was .97, GFI was .89, AGFI was .83, and RMSEA was .08. All paths in the proposed model received significant statistical support except H2. The paths from arousal to pleasure (H1:
To survive in the globally competitive environment, enterprises should be able to solve various problems and find optimal solutions effectively. Big data is perceived as a tool for solving enterprise problems effectively and improving competitiveness through its varied problem-solving and advanced predictive capabilities. Due to this remarkable potential, big data systems have been implemented by many enterprises around the world. Big data is now called the 'crude oil' of the 21st century and is expected to provide competitive superiority. The reason big data is in the limelight is that, while conventional IT technology has lagged in what it makes possible, big data has gone beyond that technological limit and has the advantage of being usable to create new value, such as business optimization and new business creation, through data analysis. However, because big data has often been introduced hastily, without considering the strategic value and achievements obtainable through it, there are difficulties in deducing that strategic value and in utilizing the data. According to a survey of 1,800 IT professionals from 18 countries worldwide, the percentage of corporations where big data was being utilized well was only 28%, and many respondents reported difficulties in deducing strategic value and operating through big data. The strategic value should be deduced, and environmental aspects such as internal and external corporate regulations and systems should be considered, before introducing big data, but these factors were not well reflected. The cause of the failures turned out to be that big data was introduced by following the IT trend and the surrounding environment, hastily and in situations where the conditions for introduction were not yet in place.
For a successful big data introduction, the strategic value obtainable through big data should be clearly understood and a systematic environmental analysis of applicability is very important; however, because corporations consider only partial achievements and the technological aspects obtainable through big data, successful introductions are not being made. Previous studies show that most big data research has focused on big data concepts, cases, and practical suggestions without empirical study. The purpose of this study is to provide a theoretically and practically useful implementation framework and strategies for big data systems by conducting a comprehensive literature review, identifying the factors that influence successful big data systems implementation, and analyzing empirical models. To do this, the elements that can affect the intention to introduce big data were deduced by reviewing information systems success factors, strategic value perception factors, factors concerning the information system introduction environment, and the big data literature, in order to understand the factors at play when corporations introduce big data, and a structured questionnaire was developed. The questionnaire was then administered to the people in charge of big data inside corporations, and statistical analysis was performed on their responses. According to the statistical analysis, the strategic value perception factors and the intra-industry environmental factors positively affected the intention to introduce big data. The theoretical, practical, and political implications deduced from the study results are as follows.
The first theoretical implication is that this study has theoretically proposed the factors that affect the intention to introduce big data by reviewing strategic value perception, environmental factors, and prior big data studies, and has proposed variables and measurement items that were empirically analyzed and verified. This study is meaningful in that it has measured the influence of each variable on introduction intention by verifying the relationships between the independent variables and the dependent variable through a structural equation model. Second, this study has defined the independent variables (strategic value perception, environment), the dependent variable (introduction intention), and the moderating variables (type of business and corporate size) for big data introduction intention, and has laid a theoretical base for subsequent empirical studies in big data related fields by developing measurement items with established reliability and validity. Third, by verifying the significance of the strategic value perception factors and environmental factors proposed in prior studies, this study will aid subsequent empirical studies on the factors affecting big data introduction. The operational implications are as follows. First, this study has laid an empirical base for the big data field by investigating the cause-and-effect relationships between the strategic value perception and environmental factors and introduction intention, and by proposing measurement items with established soundness, reliability, and validity. Second, this study has shown that the strategic value perception factors positively affect big data introduction intention, which is meaningful in that it demonstrates the importance of strategic value perception.
Third, the study has proposed that a corporation introducing big data should consider the introduction through a precise analysis of its industry's internal environment. Fourth, this study has proposed that the size and type of business of the corporation should be considered when introducing big data, by presenting the differences in the factors affecting big data introduction depending on the corporation's size and type of business. The political implications are as follows. First, a greater variety of uses of big data is needed. The strategic value of big data can be approached in various ways, in the product and service fields, the productivity field, the decision-making field, and so on, and it can be utilized across all business fields on that basis; however, the areas that major domestic corporations are considering are limited to parts of the product and service fields. Accordingly, when introducing big data, it will be necessary to review utilization in detail and to design the big data system in a form that can maximize the utilization rate. Second, the study points out the burden of system introduction costs, the difficulty of utilizing the system, and the lack of credibility of supplier corporations in the big data introduction phase. Since global IT corporations dominate the big data market, the big data introductions of domestic corporations cannot help but depend on foreign corporations. Considering that our country does not have global IT corporations even though it is a world IT power, big data can be seen as a chance to foster world-class corporations. Accordingly, the government will need to foster star corporations through active policy support. Third, corporations lack internal and external professional manpower for big data introduction and operation.
Big data is a system in which how valuable the insights deduced from the data are matters more than the system construction itself. For this, talent equipped with academic knowledge and experience in various fields such as IT, statistics, strategy, and management is required, and manpower training should be implemented through systematic education for such talent. This study has laid a theoretical base for empirical studies in big data related fields by identifying and verifying the main variables that affect big data introduction intention, and is expected to offer useful guidelines for the corporations and policy makers who are considering big data implementation by empirically analyzing that theoretical base.
By means of the interspecific hybridization in the Sub-genus Diploxylon of the Genus Pinus,