Business Application of Convolutional Neural Networks for Apparel Classification Using Runway Image (합성곱 신경망의 비지니스 응용: 런웨이 이미지를 사용한 의류 분류를 중심으로)
Journal of Intelligence and Information Systems, v.24 no.3, pp.1-19, 2018
A large amount of data is now available for the research and business sectors to extract knowledge from. This data can take the form of unstructured data such as audio, text, and images, and can be analyzed with deep learning methods. Deep learning is now widely used for various estimation, classification, and prediction problems. In particular, the fashion business adopts deep learning techniques for apparel recognition, apparel search and retrieval engines, and automatic product recommendation. The core model of these applications is image classification using Convolutional Neural Networks (CNN). A CNN is made up of neurons that learn parameters such as weights as inputs pass through the network toward the outputs. Its layer structure is well suited for image classification, as it is composed of convolutional layers for generating feature maps, pooling layers for reducing the dimensionality of the feature maps, and fully connected layers for classifying the extracted features. However, most classification models have been trained on online product images, which are taken under controlled conditions, such as images of the apparel itself or of professional models wearing it. Such images may not be an effective way to train a classification model when one wants to classify street fashion or walking images, which are taken in uncontrolled situations and involve people's movement and unexpected poses. Therefore, we propose to train the model with a runway apparel image dataset that captures mobility. This allows the classification model to be trained on far more variable data and enhances its adaptability to diverse query images. To achieve both convergence and generalization of the model, we apply Transfer Learning to our training network. As Transfer Learning in CNNs consists of pre-training and fine-tuning stages, we divide training into two steps. First, we pre-train our architecture on a large-scale dataset, ImageNet, which consists of 1.2 million images in 1,000 categories including animals, plants, activities, materials, instruments, scenes, and foods. We use GoogLeNet as our main architecture, as it achieved high accuracy with efficiency in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Second, we fine-tune the network with our own runway image dataset. Because we could not find any previously published, publicly available runway image dataset, we collected 2,426 images of 32 major fashion brands from Google Image Search, including Anna Molinari, Balenciaga, Balmain, Brioni, Burberry, Celine, Chanel, Chloe, Christian Dior, Cividini, Dolce and Gabbana, Emilio Pucci, Ermenegildo, Fendi, Giuliana Teso, Gucci, Issey Miyake, Kenzo, Leonard, Louis Vuitton, Marc Jacobs, Marni, Max Mara, Missoni, Moschino, Ralph Lauren, Roberto Cavalli, Sonia Rykiel, Stella McCartney, Valentino, Versace, and Yves Saint Laurent. We perform 10-fold experiments to account for the random generation of training data, and our proposed model achieves an accuracy of 67.2% on the final test. Our research offers several advantages over previous related studies: to the best of our knowledge, no previous study has trained a network for apparel image classification on a runway image dataset. We suggest the idea of training the model with images that capture all possible postures, which we denote as mobility, by using our own runway apparel image dataset.
Moreover, by applying Transfer Learning and using the checkpoints and parameters provided by TensorFlow-Slim, we could save time spent on training the classification model, taking about 6 minutes per experiment to train the classifier. This model can be used in many business applications where the query image may be a runway image, a product image, or a street fashion image. Specifically, runway query images can be used by a mobile application service during fashion week to facilitate brand search, street style query images can be classified during fashion editorial tasks to label the brand or style, and website query images can be processed by an e-commerce multi-complex service that provides item information or recommends similar items.
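The two-stage transfer-learning setup described above (pre-training on ImageNet, then fine-tuning on a small apparel dataset with a convolutional backbone and a fully connected classification head) can be sketched as follows. This is a hedged illustration only: it uses Keras' ImageNet-pretrained InceptionV3 rather than the TF-Slim GoogLeNet checkpoint the paper relies on, and the directory name, image size, and hyperparameters are assumptions.

```python
# Hedged sketch of pre-training + fine-tuning on a small runway dataset.
# "runway_images/" is a hypothetical folder with one subdirectory per brand (32 classes).
import tensorflow as tf
from tensorflow.keras import layers, models

train_ds = tf.keras.utils.image_dataset_from_directory(
    "runway_images/", image_size=(299, 299), batch_size=32)

base = tf.keras.applications.InceptionV3(weights="imagenet",   # ImageNet pre-trained weights
                                         include_top=False, pooling="avg")
base.trainable = False                       # stage 1: keep the pre-trained features frozen

model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1),    # InceptionV3 expects inputs scaled to [-1, 1]
    base,                                        # convolutional feature extractor
    layers.Dense(32, activation="softmax"),      # fully connected head for 32 fashion-brand classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)                # train only the new classification head

base.trainable = True                        # stage 2: fine-tune the whole network at a low learning rate
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```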
A deep learning framework is software designed to help develop deep learning models. Its most important functions include automatic differentiation and utilization of GPUs. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal). Recently, Microsoft's deep learning framework, the Microsoft Cognitive Toolkit (CNTK), was released under an open-source license, following Google's TensorFlow a year earlier. Early deep learning frameworks were developed mainly for research at universities. Beginning with the release of TensorFlow, however, companies such as Microsoft and Facebook appear to have joined the competition in framework development. Given this trend, Google and other companies are expected to continue investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare some of the deep learning frameworks, so we compare three that can be used as Python libraries: Google's TensorFlow, Microsoft's CNTK, and Theano, which is in a sense a predecessor of the other two. The most common and important function of a deep learning framework is the ability to perform automatic differentiation. Essentially, all the mathematical expressions of a deep learning model can be represented as a computational graph consisting of nodes and edges. Partial derivatives on each edge of the computational graph can then be obtained, and with these partial derivatives the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First, the convenience of coding is, in order, CNTK, TensorFlow, and Theano. This criterion is based simply on code length; the learning curve and ease of coding are not the main concern. By this criterion, Theano was the most difficult to implement with, while CNTK and TensorFlow were somewhat easier. With TensorFlow, we need to define weight variables and biases explicitly. The reason CNTK and TensorFlow are easier to implement with is that they provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives us flexibility. With low-level coding, as in Theano, we can implement and test any new deep learning model or any new search method we can think of. Our assessment of the execution speed of each framework is that there is no meaningful difference. According to the experiment, the execution speeds of Theano and TensorFlow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept the same: the CNTK code had to be run in a PC environment without a GPU, where code executes as much as 50 times slower than with a GPU. We concluded, however, that the difference in execution speed was within the range of variation caused by the different hardware setups. In this study, we compared three deep learning frameworks: Theano, TensorFlow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate them. Some of the important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNN, RNN, and DBN.
If a user implements a large-scale deep learning model, support for multiple GPUs or multiple servers will also be important. And for someone learning deep learning models, the availability of sufficient examples and references matters as well.
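As a minimal sketch of the automatic differentiation described above, the TensorFlow snippet below builds a tiny computational graph with explicitly defined weight and bias variables (the low-level style noted for TensorFlow) and lets the framework apply the chain rule to obtain gradients. It is a toy example, not the benchmark code used in the comparison.

```python
# Toy computational graph: explicit weight/bias variables, a loss node, and chain-rule gradients.
import tensorflow as tf

W = tf.Variable(tf.random.normal([3, 1]), name="weight")   # weights defined explicitly
b = tf.Variable(tf.zeros([1]), name="bias")                # bias defined explicitly

x = tf.constant([[1.0, 2.0, 3.0]])
y_true = tf.constant([[10.0]])

with tf.GradientTape() as tape:
    y_pred = tf.matmul(x, W) + b                  # nodes and edges of the computational graph
    loss = tf.reduce_mean((y_pred - y_true) ** 2)

# Automatic differentiation: d(loss)/dW and d(loss)/db obtained via the chain rule along the graph.
dW, db = tape.gradient(loss, [W, b])
print(dW.numpy(), db.numpy())
```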
A macroscopic combination of two or more distinct materials is commonly referred to as a "composite material," designed to be mechanically and chemically superior in function and characteristics to its individual constituent materials. Composite materials are used not only in aerospace and military applications but also heavily in boat and ship building and the general composites industry, where we are seeing them increasingly often. Despite the many applications of composite materials, the industry is still limited and requires better fabrication technology and methodology in order to expand and grow. One example is that the majority of nearby fabrication facilities still use an antiquated wet lay-up process, in which fabrication requires manual hand labor in a 3D environment, impeding the productivity of composite product design advancement. As an expert in the advanced composites field, I have developed fabrication skills with the use of machinery based on my past composite experience. In autumn 2011, the Korean government confirmed funding for my project: the development of a composite sanding machine. I began developing this semi-robotic prototype in 2009. It has the potential to replace or augment the exhausting and difficult jobs performed by human hands, such as sanding, grinding, blasting, and polishing, most often in very awkward conditions, and it will also boost productivity, improve surface quality, cut abrasive costs, eliminate vibration injuries, and protect workers from exposure to dust and airborne contamination. Ease of control and operation of the equipment from inside or outside the sanding room is a key benefit to end users. It will prove to be much more economical than conventional robotics and will minimize errors that commonly occur in factories. The key components and their technologies are a 360-degree rotational shoulder and a wrist controlled by a PLC controller and a manual joystick mode. Development of both key modules is complete, and they are now operational. The Korean government funding boosted my development, and I expect to complete full-scale development no later than the third quarter of 2012. Even with the advantages of composite materials, there is still a need to repair and maintain composite products with a higher level of technology. I have learned many composite repair skills on composite airframes, since many composite fabrication skills, including repair, require training for non-aerospace applications. The wind energy market now requires much larger blades in order to generate more electrical energy for wind farms; a single blade is now commonly 50 meters or longer. When a wind blade is damaged by external forces, on-site repair is required on the column, even under strong wind and freezing temperatures. In order to achieve correct polymerization, the repair must be performed on the damaged area within a very limited time. The use of pre-impregnated glass fabric, a heating silicone pad, and a hot bonder providing precise heating control is certainly required.
Hybrid rockets have lately attracted attention as strong candidates for small, low-cost, safe, and reliable launch vehicles. A notable example is that the first commercially sponsored spaceship, the SpaceShipOne vehicle, chose a hybrid rocket. The main factors behind the choice were safety of operation, system cost, quick turnaround, and thrust termination. In Japan, five universities including Hokkaido University and three private companies organized the "Hybrid Rocket Research Group" from 1998 to 2002. Its main purpose was to reduce the cost and scale of rocket experiments. In 2002, UNISEC (University Space Engineering Consortium) and HASTIC (Hokkaido Aerospace Science and Technology Incubation Center) took over the educational and R&D rocket activities, respectively, and the research group dissolved. In 2008, JAXA/ISAS and eleven universities formed the "Hybrid Rocket Research Working Group" as a subcommittee of the Steering Committee for Space Engineering in ISAS. Its goal is to demonstrate the technical feasibility of low-cost, high-frequency launches of nano/micro satellites into sun-synchronous orbits. Hybrid rockets use a combination of solid and liquid propellants; usually the fuel is in the solid phase. A serious problem of hybrid rockets is the low regression rate of the solid fuel. In single-port hybrids, a low regression rate below 1 mm/s leads to a large L/D exceeding a hundred and a small fuel loading ratio falling below 0.3. Multi-port hybrids are a typical solution to this problem, but that solution is not the mainstream in Japan. Another approach is to use high-regression-rate fuels. For example, a fuel regression rate of 4 mm/s decreases L/D to around 10 and increases the loading ratio to around 0.75. Liquefying fuels such as paraffins are strong candidates for high-regression fuels and are a subject of active research in Japan as well. Nakagawa et al. at Tokai University employed EVA (ethylene vinyl acetate) to modify the viscosity of paraffin-based fuels and investigated the effect of viscosity on regression rates. Wada et al. at Akita University employed LTP (low-melting thermoplastic) as another candidate liquefying fuel and demonstrated high regression rates comparable to paraffin fuels. Hori et al. at JAXA/ISAS employed glycidyl azide-poly(ethylene glycol) (GAP-PEG) copolymers as high-regression-rate fuels and modified the combustion characteristics by changing the PEG mixing ratio. Regression rate improvement by changing the internal ballistics is another stream of research. The author proposed a new fuel configuration named "CAMUI" in 1998. CAMUI is an abbreviation of "cascaded multistage impinging-jet," referring to the distinctive flow field. A CAMUI-type fuel grain consists of several cylindrical fuel blocks, each with two ports in the axial direction. The port alignment of adjacent blocks is shifted by 90 degrees so that the jets from the ports impinge on the upstream end face of the downstream fuel block, resulting in intense heat transfer to the fuel. Yuasa et al. at Tokyo Metropolitan University employed a swirling injection method and improved regression rates by more than a factor of three. However, the regression rate distribution along the axis is not uniform due to the decay of the swirl strength. Aso et al. at Kyushu University employed multi-swirl injection to solve this problem. Combinations of swirling injection and paraffin-based fuels have been tried, and some results show very high regression rates exceeding ten times the conventional rate.
High fuel regression rates achieved by new fuels, new internal ballistics, or a combination of the two require faster fuel-oxidizer mixing to maintain combustion efficiency. Nakagawa et al. succeeded in improving the combustion efficiency of a paraffin-based fuel from 77% to 96% with a baffle plate. Another effective approach some researchers are trying is to use an aft-chamber to increase residence time. A better understanding of the new flow fields is necessary to reveal the basic mechanisms of regression enhancement. Yuasa et al. visualized the combustion field in a swirling-injection-type motor. Nakagawa et al. observed boundary layer combustion of wax-based fuels. To understand the detailed flow structures in swirling-flow-type hybrids, Sawada et al. (Tohoku Univ.), Teramoto et al. (Univ. of Tokyo), Shimada et al. (ISAS), and Tsuboi et al. (Kyushu Inst. Tech.) are trying to simulate the flow field numerically. The main challenges are turbulent reaction, stiffness due to low-Mach-number flow, the fuel regression model, and other non-steady phenomena. Oshima et al. at Hokkaido University simulated CAMUI-type flow fields and discussed the correspondence between the regression distribution of a burning surface and the vortex structure over the surface.
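The geometric trade-off behind the L/D and fuel-loading figures quoted above can be sketched for a single circular-port grain: for a fixed fuel mass flow, the required grain length scales inversely with the regression rate, while the burned web (and hence the outer diameter) grows with it. The Python sketch below reproduces only the direction of this trade-off; all parameter values are illustrative assumptions, not the mission-level design points behind the figures cited in the studies above.

```python
# Illustrative single-port grain sizing (assumed parameters, not the cited design points):
# fuel mass flow = rho * r_dot * (2*pi*R_port*L)  ->  L ~ 1/r_dot,
# while the burned web r_dot * t_burn sets the outer radius and the loading fraction.
import math

def single_port_grain(r_dot, m_dot_fuel=1.0, rho_fuel=920.0, port_radius=0.05, burn_time=30.0):
    """r_dot in m/s; all other arguments are assumed, illustrative values (SI units)."""
    length = m_dot_fuel / (rho_fuel * r_dot * 2 * math.pi * port_radius)
    web = r_dot * burn_time                          # fuel thickness consumed over the burn
    outer_radius = port_radius + web
    l_over_d = length / (2 * outer_radius)
    loading = 1 - (port_radius / outer_radius) ** 2  # fuel cross-section / chamber cross-section
    return l_over_d, loading

for r in (0.001, 0.004):                             # 1 mm/s vs 4 mm/s
    ld, phi = single_port_grain(r)
    print(f"r_dot = {r * 1000:.0f} mm/s  ->  L/D ~ {ld:.1f}, loading ratio ~ {phi:.2f}")
```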
The release of administrative information has become a challenge of our age, following the maturation of democratic ideology in our society. However, differences of opinion and conflict still exist between the government and private sectors regarding this issue, and the technical and policy-related insufficiencies of the information and record management that actually operates the release of information appear to be the main causes. From the perspective of records management, records and information vary in their nature, value, and influence over their life span. The most controversial issue concerns records and information still in the current stage of business activities. This is because records and information pertaining to finished business are merely evidence for ascertaining the past, and have only a limited relationship to the ideal of 'democratic participation' by citizens in the activities of the public sector. Current information release policies are helpless against the 'absence of information,' or incomplete records, but this weakness can be remedied by enforcing record management policies that make the recording of all details of business activities obligatory. In addition, the installation of 'document offices ("Jaryogwan")' that can manage each organization's information and records will be an important starting point for integrating the release, management, and preservation of information and records. Nevertheless, the concept of 'release' in information release policies seems to refer not to free use by all citizens but to be limited to the 'provision' of records in response to public requests, and the concept of 'confidential' refers not to treating documents with total secrecy but varies according to the particulars of each situation, making the actual practice of information release difficult. To solve such problems, it is absolutely necessary to collect the opinions of the various constituents associated with the recorded information in question, and to mediate effectively between those collective opinions and the information release requests coming from applicants, so that the work can be carried out more practically. Especially crucial is managing the process by which the nature and influence of recorded information change, so that information which must be confidential at first may become available for inquiry and use over time through appropriate procedures. Such processes are also part of the duties that records management, which is in charge of the entire life span of documents, must perform. All created records will be captured within a records management system, and the record creation data thus collected will serve as a guide for inquiry and use. With 'document offices (Jaryogwan)' and 'archives' controlling the entire life span of records, the release of information will become simpler and more widespread. It is undesirable to try to control, through information release policies alone, those records whose nature has changed because their business has finished and they have become historical records or evidence pointing to the truth of past events, unlike records still in the early stages of their life span that can directly influence business activities. Even in the past, when there was no formal policy regarding the release of administrative information, the access and use of archival records were permitted. A more active and expanded approach must be taken regarding the 'usage' of archival records.
If the key factor regarding 'release' lies in the provision of information, the key factor regarding 'usage' lies in the quality and level of the service provided. The full-scale usage of archival records must be preceded by the release of such records, and accordingly, a thorough analysis of the nature, content, and value of the records and of their changes must be carried out to guarantee the release of information before their use is requested. That must become a central task of document offices and archives. "Today's information" will soon become "yesterday's records," and the "reality" of today will become the "history" of the past. The policies of information release and record management share recorded information as their common object. As they have a mutual relationship that is supplementary and leads toward completeness, the two policies must be both differentiated and integrated with each other. It is hoped that the policies and business activities of record management will soon be normalized and reformed for the effective and fair release of information.
According to the 2013 construction market outlook report, the liquidation of construction companies is expected to continue due to the ongoing residential construction recession. Bankruptcies of construction companies have a greater social impact than those in other industries. However, due to the different nature of their capital structure and debt-to-equity ratio, it is more difficult to forecast construction companies' bankruptcies than those of companies in other industries. The construction industry operates on greater leverage, with high debt-to-equity ratios and project cash flows concentrated in the second half of a project. The economic cycle greatly influences construction companies, so downturns tend to rapidly increase their bankruptcy rates. High leverage, coupled with increased bankruptcy rates, could place a greater burden on banks providing loans to construction companies. Nevertheless, bankruptcy prediction models have concentrated mainly on financial institutions, and construction-specific studies are rare. Bankruptcy prediction models based on corporate financial data have been studied for some time in various ways, but these models are intended for companies in general and may not be appropriate for forecasting the bankruptcies of construction companies, which typically carry high liquidity risks. The construction industry is capital-intensive, operates on long timelines with large-scale investment projects, and has comparatively longer payback periods than other industries. Given this unique capital structure, it can be difficult to apply a model used to judge the financial risk of companies in general to those in the construction industry. The Altman Z-score, first published in 1968, is commonly used as a bankruptcy forecasting model. It forecasts the likelihood of a company going bankrupt by using a simple formula, classifying the results into three categories, and evaluating the corporate status as dangerous, moderate, or safe. A company in the "dangerous" category has a high likelihood of bankruptcy within two years, while one in the "safe" category has a low likelihood of bankruptcy; for companies in the "moderate" category, the risk is difficult to forecast. Many of the construction firms in this study fell into the "moderate" category, which made it difficult to forecast their risk. Along with the development of machine learning, recent studies of corporate bankruptcy forecasting have used this technology. Pattern recognition, a representative application area of machine learning, is applied to forecasting corporate bankruptcy: patterns are analyzed based on a company's financial information and then judged as to whether they belong to the bankruptcy risk group or the safe group.
The representative machine learning models previously used in bankruptcy forecasting are Artificial Neural Networks, Adaptive Boosting (AdaBoost), and the Support Vector Machine (SVM), and there are also many hybrid studies combining these models. Existing studies using the traditional Z-score technique or machine learning for bankruptcy prediction focus on companies in non-specific industries, so the industry-specific characteristics of companies are not considered. In this paper, we confirm that adaptive boosting (AdaBoost) is the most appropriate forecasting model for construction companies when analyzed by company size. We classified construction companies into three groups - large, medium, and small - based on the company's capital, and analyzed the predictive ability of AdaBoost for each group. The experimental results showed that AdaBoost has greater predictive ability than the other models, especially for the group of large companies with capital of more than 50 billion won.
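The Altman Z-score calculation referenced above can be sketched as follows. The 1968 coefficients and the conventional cut-offs (below 1.81 dangerous, above 2.99 safe, in between the grey or "moderate" zone) are the standard published ones; the sample financial figures are made up for illustration.

```python
# Sketch of the original (1968) Altman Z-score and its three zones, as referenced above.
def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, sales, total_assets, total_liabilities):
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = market_value_equity / total_liabilities
    x5 = sales / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

def zone(z):
    if z < 1.81:
        return "dangerous (high bankruptcy risk within about two years)"
    if z > 2.99:
        return "safe"
    return "moderate (grey zone: risk hard to judge)"

z = altman_z(working_capital=30, retained_earnings=40, ebit=25,          # made-up figures
             market_value_equity=120, sales=300, total_assets=250, total_liabilities=150)
print(round(z, 2), "->", zone(z))
```

The AdaBoost classification step can likewise be sketched with scikit-learn on synthetic data; the feature construction below stands in for financial ratios and is not the paper's dataset, tuning, or size-based grouping.

```python
# Hedged sketch of bankruptcy classification with AdaBoost on synthetic "financial ratio" features.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Pretend each row is a construction company; label 1 = went bankrupt, 0 = survived.
X, y = make_classification(n_samples=1000, n_features=10, n_informative=6,
                           weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = AdaBoostClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```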
To solve complex and diverse social problems and to ensure the quality of life of individuals, social robots that can interact with humans are attracting attention. In the past, robots were seen as machines that provide labor, put to work at industrial sites on behalf of humans. Today, however, the concept of the robot has been extended to social robots that coexist with humans and enable social interaction, with the advent of smart technology, which is considered an important driver in most industries. Specifically, there are service robots that respond to customers, robots intended for edutainment, and emotional robots that can interact with humans intimately. However, the popularization of robots is not yet felt, despite the modern ICT service environment and the 4th industrial revolution. Considering that social interaction with users is an important function of social robots, not only the technology of the robots but also other factors should be considered. The design elements of a robot are more important than other factors in leading consumers to actually purchase a social robot. In fact, existing studies on social robots remain at the level of proposing "robot development methodologies" or testing, in a piecemeal fashion, the effects social robots provide to users. On the other hand, the consumer emotions evoked by a robot's appearance have an important influence on the process of forming users' perceptions, inferences, evaluations, and expectations. Furthermore, they can affect attitudes toward robots, favorable feelings, inferences about performance, and so on. Therefore, this study aims to verify the effect of the appearance of social robots and consumer emotions on consumers' attitudes toward social robots. To this end, a social robot design evaluation model is constructed by combining heterogeneous data from different sources. Specifically, three quantitative indicators of social robot appearance from the ABOT Database are included in the model. Consumer emotions about social robot design were collected through (1) the existing design evaluation literature, (2) online buzz such as product reviews and blogs, and (3) qualitative interviews on social robot design. We then collected scores for consumer emotions and attitudes toward various social robots through a large-scale consumer survey. First, we derived six major dimensions of consumer emotion from 23 detailed emotion items through a dimension reduction methodology. Then, statistical analysis was performed to verify the effect of the derived consumer emotions on attitudes toward social robots. Finally, moderated regression analysis was performed to verify the effect of the quantitatively collected indicators of social robot appearance on the relationship between consumer emotions and attitudes toward social robots. Interestingly, several significant moderation effects were identified, and these effects are visualized as two-way interaction effects so that they can be interpreted from multidisciplinary perspectives. This study makes a theoretical contribution by empirically verifying all stages from technical properties to consumers' emotions and attitudes toward social robots by linking data from heterogeneous sources. It also has practical significance in that the results help to develop consumer-emotion-based design guidelines for the design stage of social robot development.
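A minimal sketch of the two analysis steps described above, reducing many rated emotion items to a few dimensions and then testing a moderated regression with an interaction term, is given below. The variable names, the use of PCA as the reduction method, and the simulated data are placeholders and assumptions, not the study's survey data or exact procedure.

```python
# Hedged sketch: dimension reduction of emotion ratings, then moderated regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
emotions = pd.DataFrame(rng.normal(size=(200, 23)),
                        columns=[f"emotion_{i}" for i in range(23)])   # 23 rated emotion items

# Step 1: reduce the 23 items to a few dimensions (PCA here; the study's method may differ).
scores = PCA(n_components=6).fit_transform(emotions)

human_likeness = rng.uniform(0, 1, 200)          # placeholder appearance indicator (ABOT-style score)
attitude = (0.4 * scores[:, 0] + 0.3 * scores[:, 0] * human_likeness
            + rng.normal(scale=0.5, size=200))   # simulated so appearance moderates the emotion effect

df = pd.DataFrame({"emotion_dim1": scores[:, 0],
                   "human_likeness": human_likeness,
                   "attitude": attitude})

# Step 2: moderated regression -- the interaction term carries the moderation effect
# of the appearance indicator on the emotion -> attitude relationship.
model = smf.ols("attitude ~ emotion_dim1 * human_likeness", data=df).fit()
print(model.summary().tables[1])
```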
Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensory networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet the danger of overflowing personal information is increasing, because the data retrieved by sensors usually contains private information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns have also increased, for example over the unrestricted availability of context information. These privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy need to be revised according to the characteristics of new technologies. However, previous treatments of the information privacy factors of context-aware applications have at least two shortcomings. First, there has been little overview of the technological characteristics of context-aware computing; existing studies have focused on only a small subset of them. Therefore, there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications. Second, user surveys have been widely used to identify factors of information privacy in most studies, despite the limits of users' knowledge and experience of context-aware computing technology. Since context-aware services have not yet been widely deployed on a commercial scale, only very few people have prior experience with context-aware personalized services. It is difficult to build users' knowledge about context-aware technology even by increasing their understanding in various ways: scenarios, pictures, flash animations, and so on. Consequently, conducting a survey on the assumption that participants have sufficient experience or understanding of the technologies shown in the survey may not be entirely valid. Moreover, some surveys are based on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concerns in context-aware personalized services is therefore needed. Hence, the purpose of this paper is to identify a generic set of factors for elemental information privacy concerns in context-aware personalized services and to develop a rank-ordered list of information privacy concern factors. We consider the overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce a rank-ordered list. It therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority.
An international panel of researchers and practitioners with expertise in privacy and context-aware systems was involved in our research. The formatting of the Delphi rounds faithfully follows the procedure for Delphi studies proposed by Okoli and Pawlowski, which involves three general rounds: (1) brainstorming for important factors; (2) narrowing the original list down to the most important ones; and (3) ranking the list of important factors. For the brainstorming round only, experts were treated as individuals rather than as panels. Adapting Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a mutually exclusive set of factors for information privacy concern in context-aware personalized services. In the first round, the respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to aid them, some of the main factors found in the literature were presented to the participants. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors drawn from the literature survey. Respondents were then asked to evaluate each sub-factor's suitability against the corresponding main factor so as to determine the final sub-factors from the candidates. The final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were asked to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence. To do so, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, the experts reported that context data collection and the highly identifiable level of identical data are the most important main factor and sub-factor, respectively. Additional important sub-factors included the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from those of existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information; our study helps to clarify these sometimes vague issues by determining which privacy concerns are viable given the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected the specific characteristics with the highest potential to increase users' privacy concerns. Second, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies, by introducing IPOS as the factor division. Lastly, for each factor, it related the level of importance to professionals' opinions as to what extent users have privacy concerns.
The reason a traditional questionnaire method was not selected is the absolute lack of user understanding of, and experience with, the new technology underlying context-aware personalized services. For understanding users' privacy concerns, the professionals in the Delphi process selected context data collection, tracking and recording, and the sensory network as the most important factors among the technological characteristics of context-aware personalized services. For the creation of context-aware personalized services, this study demonstrates the importance and relevance of determining an optimal methodology, and of deciding which technologies, and in what sequence, are needed to acquire which types of users' context information. Most studies focus on which services and systems should be provided and developed by utilizing context information, on the supposition that context-aware technology will continue to develop. However, the results of this study show that, in terms of users' privacy, it is necessary to pay greater attention to the activities that acquire context information. Looking at the sub-factor evaluation results, additional studies will be necessary on approaches to reducing users' privacy concerns about technological characteristics such as the highly identifiable level of identical data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which is related to output. The results show that the delivery and display of services to users in context-aware personalized services, following the anywhere-anytime-any-device concept, are regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase the service success rate and, hopefully, user acceptance of those services. Our future work will be to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
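One common way to carry out the concordance analysis mentioned above is Kendall's coefficient of concordance (W) over the experts' rankings; the sketch below computes it under that assumption, with made-up rankings and without the correction for tied ranks.

```python
# Sketch of a consensus check over expert rankings (Kendall's W); rankings are made up.
import numpy as np

def kendalls_w(ranks):
    """ranks: array of shape (n_experts, n_items), each row a ranking 1..n_items."""
    ranks = np.asarray(ranks, dtype=float)
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))   # 1.0 = perfect agreement, 0 = no agreement

expert_ranks = [
    [1, 2, 3, 4, 5],    # each row: one expert's ranking of five privacy-concern factors
    [2, 1, 3, 5, 4],
    [1, 3, 2, 4, 5],
]
print("Kendall's W =", round(kendalls_w(expert_ranks), 3))
```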
1. Introduction
Today the Internet is recognized as an important channel for the transaction of products and services. According to data surveyed by the National Statistical Office, online transactions in 2007 totaled 15.7656 trillion won, a 17.1% (2.3060 trillion won) increase over the previous year; of this, B2C transactions increased by 12.0% (10.2258 trillion won). Because the entry barrier to Korea's online market is low, many retailers have been able to enter the market easily, so the market has grown larger but competition has also become tougher. In particular, due to the Internet and IT innovation, the existing market has been transformed into a perfectly competitive market (Srinivasan, Rolph & Kishore, 2002). In the early years of online business, a moderate price was thought to be the main reason for success, but with tougher competition, firms have awakened to the importance of online service quality. If customers are not sure whether a Web site will provide what they want, whether they can use it, or whether they can trust the products they have already bought, they come to doubt its viability (Parasuraman, Zeithaml & Malhotra, 2005). Customers can directly reserve and issue their air tickets at the Web sites of travel agencies or airlines regardless of place and time, but empirical studies of these Web sites for reserving and issuing air tickets are insufficient. Therefore, this study pursues the following specific objectives. The first is to measure the service quality and service recovery of Web sites for reserving and issuing air tickets. The second is to examine whether online service quality and online service recovery have an impact on overall service quality. The third is to investigate the relationship between overall service quality and customer satisfaction, and between customer satisfaction and loyalty intention.
2. Theoretical Background
2.1 On-line Service Quality
Barnes & Vidgen (2000; 2001a; 2001b; 2002) developed a tool for measuring Web site quality, called WebQual, through four iterations. WebQual 1.0, the first step, created measurement items for information quality based on QFD and was verified with students of a UK business school. WebQual 2.0, the second step, was developed for interaction quality and was evaluated by customers of an online bookshop. WebQual 3.0, the third step, was created by consolidating WebQual 1.0 (information quality) and WebQual 2.0 (interaction quality); it includes three quality dimensions - information quality, interaction quality, and site design - and was assessed and confirmed on auction sites (eBay, Amazon, QXL). Furthermore, through these empirical studies, the authors replaced site design quality with usability, judging that usability is a concept describing how customers interact with and perceive Web sites and that it is widely used in assessing them. Through this process, WebQual 4.0 was created, consisting of three quality dimensions - information quality, interaction quality, and usability - with 22 items. However, because WebQual 4.0 focuses on the technical side, it is useful for the design aspect of a Web site but not for the pleasant-experience aspect. Parasuraman, Zeithaml & Malhotra (2002; 2005) developed measures of online service quality in 2002 and 2005. The 2002 study divided online service quality into five dimensions, but these were not well organized, so a complete re-examination was needed.
So Parasuraman, Zeithaml & Malhotra (2005) reworked the online service quality measure based on the 2002 study and developed E-S-QUAL. After developing a preliminary measure of online service quality, they surveyed customers who had purchased at amazon.com and walmart.com and reassessed the measure. They finalized E-S-QUAL, which consists of four dimensions and 22 items: efficiency, system availability, fulfillment, and privacy. Efficiency measures access to sites, usability, and the like; system availability measures the correct technical functioning of sites; fulfillment measures the promptness of product delivery and the sufficiency of stock; and privacy measures the degree to which customers' data are protected.
2.2 Service Recovery
Service industries tend to minimize losses by coping with service failures promptly. These responses of service providers to service failure are called service recovery (Kelly & Davis, 1994). Bitner (1990) studied, from the customers' point of view, the behaviors of service providers that lead customers to perceive satisfaction or dissatisfaction at the service encounter. According to that work, to manage service failure successfully, exact recognition of the service problem, an apology, a sufficient explanation of the service failure, and some tangible compensation are important. Parasuraman, Zeithaml & Malhotra (2005) approached service recovery from the perspective of measurement rather than management, moved from the offline to the online market, and developed E-RecS-QUAL, a tool for measuring online service recovery.
2.3 Customer Satisfaction
Definitions of customer satisfaction can be divided into two points of view. First, some approach customer satisfaction as an outcome of consumption. Howard & Sheth (1969) defined satisfaction as 'a cognitive state of feeling adequately or inadequately rewarded for one's sacrifice,' and Westbrook & Reilly (1983) defined customer satisfaction/dissatisfaction as 'a psychological reaction to the pattern of shopping and purchasing behavior, the display conditions of the retail store, and the outcomes of purchased goods and services as well as the market as a whole.' Second, others approach customer satisfaction as a process. Engel & Blackwell (1982) defined satisfaction as 'an evaluation of the consistency between the chosen alternative and the beliefs held about it,' and Tse & Wilton (1988) defined customer satisfaction as 'a customer's reaction to the discrepancy between prior expectation and actual outcome.' That is, the process view holds that the important element is the process of comparing and assessing what customers expect against the outcomes of consumption. Unlike the outcome-oriented approach, the process-oriented approach has many advantages. Because it deals with the customer's whole consumption experience, it examines the main process by measuring, one by one, each factor that plays an essential role at each step, and it enables us to examine the perceptual and psychological processes that form customer satisfaction. Because of these advantages, many studies now adopt this process-oriented approach (Yi, 1995).
2.4 Loyalty Intention
Loyalty has been studied through behavioral approaches, attitudinal approaches, and composite approaches (Dekimpe et al., 1997). Early studies defined loyalty in behavioral terms; behavioral approaches regard customer loyalty as 'a tendency to purchase repeatedly within a certain period of time at a specific retail store.'
However, the behavioral view of loyalty focuses only on the outcome of customer behavior, and some have pointed out the limitation that customers' decision-making situations and processes are neglected (Enis & Paul, 1970; Raj, 1982; Lee, 2002). Thus, attitudinal approaches were suggested. Attitudinal approaches consider loyalty to contain cognitive, emotional, and volitional factors (Oliver, 1997) and define customer loyalty as 'favorable dispositions toward specific retail stores.' However, while these attitudinal approaches can explain how customer loyalty forms and changes, they cannot say definitively whether it leads to actual purchasing in the future, which is a shortcoming (Oh, 1995).
3. Research Design
3.1 Research Model
Based on the objectives of this study, the research model was derived. For the mediation test, Step 1 and Step 2 must be significant, and at Step 3 the mediating variable must have a significant effect on the dependent variable, as must the independent variable. To establish a partial mediation effect, the independent variable's explanatory power at Step 3 (standardized coefficient β) must also be smaller than at Step 2.
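The three-step mediation logic sketched above corresponds to the Baron and Kenny style procedure; a minimal illustration with statsmodels and placeholder data follows (the variable names and simulated effects are assumptions, not the study's survey results).

```python
# Hedged sketch of the three-step mediation test: service quality -> overall quality -> satisfaction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
service_quality = rng.normal(size=n)                                        # independent variable
overall_quality = 0.6 * service_quality + rng.normal(scale=0.8, size=n)     # mediator
satisfaction = (0.5 * overall_quality + 0.2 * service_quality
                + rng.normal(scale=0.8, size=n))                            # dependent variable
df = pd.DataFrame({"sq": service_quality, "oq": overall_quality, "sat": satisfaction})

step1 = smf.ols("oq ~ sq", data=df).fit()        # Step 1: IV -> mediator must be significant
step2 = smf.ols("sat ~ sq", data=df).fit()       # Step 2: IV -> DV must be significant
step3 = smf.ols("sat ~ sq + oq", data=df).fit()  # Step 3: mediator significant; IV's beta shrinks
print("beta(sq) step 2:", round(step2.params["sq"], 3),
      "-> step 3:", round(step3.params["sq"], 3))  # partial mediation if smaller but still significant
```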