• Title/Summary/Keyword: New Business Models


A Study of How LLM-based Generative AI Response Data Quality Affects Satisfaction with Business Use (LLM 기반의 생성형 AI 응답 데이터 품질이 업무 활용 만족도에 미치는 영향에 관한 연구)

  • Lee Seung Hwan;Hyun Ji Eun;Gim Gwang Yong
    • Convergence Security Journal
    • /
    • v.24 no.3
    • /
    • pp.117-129
    • /
    • 2024
  • With the announcement of Transformer, a new type of architecture, in 2017, there have been many changes in language models. In particular, the development of LLMs (Large Language Models) has enabled generative AI services such as search and chatbots to be utilized in various business areas. However, security issues such as personal information leakage and reliability issues such as hallucination, which generates false information, have raised concerns about the effectiveness of these services. In this study, we aimed to analyze the factors that are increasing the frequency of generative AI use in the workplace despite these concerns. To this end, we derived eight factors that affect the quality of LLM-based generative AI response data and empirically analyzed the impact of these factors on job satisfaction using a valid sample of 195 respondents. The results showed that expertise, accessibility, diversity, and convenience had a significant impact on intention to continue use; security, stability, and reliability had a partially significant impact; and completeness had a negative impact. The purpose of this study is to academically investigate how customer perception of response data quality affects satisfaction with business use and to provide meaningful practical implications for customer-centered services.

Simultaneous Optimization of a KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.139-157
    • /
    • 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and random subspace. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained by a randomly chosen feature subspace from the original feature set, and predictions from each ensemble member are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective for improving an individual KNN model. 
The k parameter of the KNN base classifiers and the feature subsets selected for the base classifiers play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameter and feature subsets of base classifiers in the ensemble. This study proposed a new ensemble method that improves upon the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve the prediction accuracy of the ensemble model. The proposed model was applied to a bankruptcy prediction problem by using a real dataset from Korean companies. The research data included 1800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent-sample t-test of each financial ratio as an input variable and bankruptcy or non-bankruptcy as the output variable. Of these, 24 financial ratios were selected by using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other to avoid overfitting. The prediction accuracy against the latter portion was used to determine the fitness value in order to avoid overfitting. The validation dataset was used to evaluate the effectiveness of the final model. A 10-fold cross-validation was implemented to compare the performances of the proposed model and other models. To evaluate the effectiveness of the proposed model, the classification accuracy of the proposed model was compared with that of other models. The Q-statistic values and average classification accuracies of the base classifiers were also investigated.
The experimental results showed that the proposed model outperformed other models, such as the single model and random subspace ensemble model.
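The random subspace KNN ensemble this abstract builds on can be sketched in a few lines. The following is an illustrative pure-Python toy — majority voting over members that use random feature subsets and randomly chosen k — and deliberately omits the paper's genetic-algorithm search over those choices:

```python
import random
from collections import Counter

def knn_predict(train, labels, query, k, feats):
    # Majority label among the k nearest training points, measuring
    # distance only on the feature indices in `feats`.
    dist = lambda p: sum((p[f] - query[f]) ** 2 for f in feats)
    nearest = sorted(range(len(train)), key=lambda i: dist(train[i]))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

def random_subspace_knn(train, labels, query, n_members=9, seed=0):
    # Each ensemble member gets a random feature subset and a random k;
    # member predictions are combined by majority vote.
    rng = random.Random(seed)
    n_feats = len(train[0])
    votes = []
    for _ in range(n_members):
        feats = rng.sample(range(n_feats), max(1, n_feats // 2))
        k = rng.choice([1, 3, 5])
        votes.append(knn_predict(train, labels, query, k, feats))
    return Counter(votes).most_common(1)[0][0]
```

In the paper's setting, a genetic algorithm would search over the per-member (k, feature subset) choices, using accuracy on a held-out portion of the training data as the fitness value.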

A Study on the Intelligent Quick Response System for Fast Fashion(IQRS-FF) (패스트 패션을 위한 지능형 신속대응시스템(IQRS-FF)에 관한 연구)

  • Park, Hyun-Sung;Park, Kwang-Ho
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.3
    • /
    • pp.163-179
    • /
    • 2010
  • Recently, the concept of fast fashion has been drawing attention as customer needs diversify and supply lead times get shorter in the fashion industry. As competition has intensified, how quickly and efficiently to satisfy customer needs has been emphasized as one of the critical success factors in the industry. Because fast fashion is inherently susceptible to trends, it is very important for fashion retailers to make quick decisions regarding which items to launch, quantities based on demand prediction, and the time to respond. These planning decisions must also be executed through the business processes of procurement, production, and logistics in real time. In order to adapt to this trend, the fashion industry urgently needs support from an intelligent quick response (QR) system. However, the traditional functions of QR systems have not been able to completely satisfy such demands of the fast fashion industry. This paper proposes an intelligent quick response system for fast fashion (IQRS-FF). Presented are models for the QR process, QR principles and execution, and QR quantity and timing computation. IQRS-FF models support decision makers by providing useful information with automated, rule-based algorithms. If the predefined conditions of a rule are satisfied, the actions defined in the rule are automatically taken or reported to the decision makers. In IQRS-FF, QR decisions are made in two stages: pre-season and in-season. In pre-season, master demand prediction is first performed based on macro-level analysis of factors such as the local and global economy, fashion trends, and competitors. The prediction then feeds the master production and procurement planning. Checking the availability and delivery of materials for production, decision makers must make reservations or request procurements. For outsourced materials, they must check the availability and capacity of partners.
With these master plans in place, the performance of QR during the in-season is greatly enhanced, and the decision to select QR items is made with full consideration of the availability of materials in the warehouse as well as partners' capacity. During the in-season, decision makers must find the right time for QR as actual sales occur in stores. They then decide which items to QR based not only on qualitative criteria, such as opinions from salespersons, but also on quantitative criteria, such as sales volume, the recent sales trend, inventory level, the remaining period, the forecast for the remaining period, and competitors' performance. To calculate QR quantity in IQRS-FF, two calculation methods are designed: QR-Index-based calculation and attribute-similarity-based calculation using demographic clusters. In the early period of a new season, the attribute-similarity-based QR quantity calculation is preferable because there are not yet enough historical sales data. By analyzing the sales trends of categories or items that have similar attributes, the QR quantity can be computed. On the other hand, when there is enough information to analyze sales trends or to forecast, the QR-Index-based calculation method can be used. Having defined the models for QR decision making, we design KPIs (Key Performance Indicators) to test the reliability of the models in critical decisions: the difference in sales volume between QR items and non-QR items; the accuracy rate of QR; and the lead time spent on QR decision making. To verify the effectiveness and practicality of the proposed models, a case study was performed on a representative fashion company which recently developed and launched IQRS-FF. The case study shows that the average sales rate of QR items increased by 15%, the difference in sales rate between QR items and non-QR items increased by 10%, the QR accuracy was 70%, and the lead time for QR dramatically decreased from 120 hours to 8 hours.
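The rule mechanism the abstract describes ("if the predefined conditions of a rule are satisfied, the actions defined in the rule are automatically taken or reported") can be sketched as a generic condition/action table. The rule names and thresholds below are hypothetical illustrations, not rules from IQRS-FF:

```python
# Generic condition/action rules in the spirit of the abstract's rule-based
# decision support. Each rule is (name, condition, action); matching rules fire.

def evaluate_rules(item, rules):
    # Return the action of every rule whose condition the item satisfies.
    return [action for _name, cond, action in rules if cond(item)]

# Hypothetical in-season QR rules (names and thresholds are illustrative).
QR_RULES = [
    ("strong-recent-trend",
     lambda it: it["recent_sales_trend"] > 1.2 and it["inventory_level"] < 50,
     "recommend item for QR"),
    ("season-ending",
     lambda it: it["remaining_weeks"] < 3,
     "notify decision maker: QR lead time may exceed remaining season"),
]
```

A matching item such as `{"recent_sales_trend": 1.5, "inventory_level": 20, "remaining_weeks": 8}` would fire only the first rule; a real system would attach many more attributes and route the actions either to automatic execution or to a decision maker, as the abstract describes.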

Impact of Shortly Acquired IPO Firms on ICT Industry Concentration (ICT 산업분야 신생기업의 IPO 이후 인수합병과 산업 집중도에 관한 연구)

  • Chang, YoungBong;Kwon, YoungOk
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.51-69
    • /
    • 2020
  • It is now a stylized fact that a small number of technology firms such as Apple, Alphabet, Microsoft, Amazon, Facebook and a few others have become larger and dominant players in an industry. Coupled with the rise of these leading firms, we have also observed that a large number of young firms have become acquisition targets in their early IPO stages. This indeed results in a sharp decline in the number of new entries in public exchanges, although a series of policy reforms have been promulgated to foster competition through an increase in new entries. Given the observed industry trend in recent decades, a number of studies have reported increased concentration in most developed countries. However, it is less well understood what caused the increase in industry concentration. In this paper, we uncover the mechanisms by which industries have become concentrated over the last decades by tracing the changes in industry concentration associated with a firm's status change in its early IPO stages. To this end, we put emphasis on the case in which firms are acquired shortly after they go public. Especially with the transition to digital-based economies, it is imperative for incumbent firms to adapt and keep pace with new ICT and related intelligent systems. For instance, after the acquisition of a young firm equipped with AI-based solutions, an incumbent firm may better respond to a change in customer taste and preference by integrating the acquired AI solutions and analytics skills into multiple business processes. Accordingly, it is not unusual for young ICT firms to become attractive acquisition targets. To examine the role of M&As involving young firms in reshaping the level of industry concentration, we identify a firm's status in the early post-IPO stages over sample periods spanning from 1990 to 2016 as follows: i) being delisted, ii) remaining a standalone firm, and iii) being acquired.
According to our analysis, firms that have conducted IPOs since the 2000s have been acquired by incumbent firms relatively more quickly than those that did IPOs in previous generations. We also show a greater acquisition rate for IPO firms in the ICT sector compared with their counterparts in other sectors. Our results based on multinomial logit models suggest that a large number of IPO firms have been acquired in their early post-IPO lives despite their financial soundness. Specifically, we show that IPO firms are likely to be acquired rather than delisted due to financial distress in early IPO stages when they are more profitable, more mature, or less leveraged. IPO firms with venture capital backing have also become acquisition targets more frequently. As a larger number of firms are acquired shortly after their IPO, our results show increased concentration. While providing limited evidence on the impact of large incumbent firms in explaining the change in industry concentration, our results show that the large firms' effect on industry concentration is pronounced in the ICT sector. This result possibly captures the current trend in which a few tech giants such as Alphabet, Apple and Facebook continue to increase their market share. In addition, compared with the acquisitions of non-ICT firms, the concentration impact of IPO firms in early stages becomes larger when ICT firms are acquired as targets. Our study makes new contributions. To the best of our knowledge, this is one of only a few studies that link a firm's post-IPO status to the associated changes in industry concentration. Although some studies have addressed concentration issues, their primary focus was on market power or proprietary software. In contrast to earlier studies, we are able to uncover the mechanism by which industries have become concentrated by placing emphasis on M&As involving young IPO firms.
Interestingly, the concentration impact of IPO firm acquisitions is magnified when a large incumbent firm is involved as the acquirer. This leads us to infer the underlying reasons why industries have become more concentrated in favor of large firms in recent decades. Overall, our study sheds new light on the literature by providing a plausible explanation of why industries have become concentrated.
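The abstract does not name its concentration measure; the Herfindahl-Hirschman Index (HHI) is the standard choice in this literature, so the following sketch works under that assumption and shows mechanically why an acquisition raises concentration:

```python
def hhi(shares_pct):
    # Herfindahl-Hirschman Index: sum of squared market shares
    # (shares in percent, summing to ~100). Higher = more concentrated.
    return sum(s ** 2 for s in shares_pct)

# An acquisition merges two firms' shares, which raises the index:
before = hhi([40, 30, 20, 10])   # four independent firms
after = hhi([40 + 30, 20, 10])   # largest firm acquires the second
```

With these illustrative shares, the index rises from 3000 to 5400, which is the mechanism the paper traces when young IPO firms are absorbed by large incumbents.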

Legal Research on FinTech Regulatory Sandbox Fostering Financial Innovations in Korea (핀테크 활성화를 위한 규제 샌드박스의 도입 방안 연구)

  • Ko, Young-Mi
    • Journal of Legislation Research
    • /
    • no.53
    • /
    • pp.213-267
    • /
    • 2017
  • The regulatory barrier is considered the most challenging of all FinTech barriers, as many technology innovators have experienced. Even though technological solutions promise customers access to more cost-effective and secure financial services, it is quite challenging to create a regulatory environment that enables an innovative FinTech industry. A common challenge FinTech innovators and businesses face is regulatory uncertainty and confusion rather than any particular regulation. Since many FinTech models continuously introduce new, innovative ways of providing financial services, significant confusion can arise in applying the principles of existing laws and regulations. In addition, it is uncertain whether applying a complex regulatory compliance model intended for large financial institutions to small start-ups is appropriate, since most existing regulations and rules were established and introduced without considering innovative tools such as mobile instruments, e-trade, and the internet. Therefore, a new mechanism for accessing regulatory information in a more cost-effective, quick, and immediate way should be created. Regulators, technological innovators, and financial customers should cooperate with each other to find appropriate solutions to these issues. Many regulators are introducing regulatory sandboxes, which provide service providers with opportunities to test their innovations while giving regulators enough time, during the test, to understand the risks of the innovations. However, the regulatory sandbox is not a panacea for all challenges to FinTech innovation. Therefore, regulators should make comprehensive and multidimensional efforts, including regulatory sandboxes, to support the FinTech ecosystem.

A Study on Forecasting Industrial Land Considering Leading Economic Variables Using ARIMA-X (선행경제변수를 고려한 산업용지 수요예측 방법 연구)

  • Byun, Tae-Geun;Jang, Cheol-Soon;Kim, Seok-Yun;Choi, Sung-Hwan;Lee, Sang-Ho
    • The Journal of the Korea Contents Association
    • /
    • v.22 no.1
    • /
    • pp.214-223
    • /
    • 2022
  • The purpose of this study is to present a new industrial land demand prediction method that can consider external economic factors. The analysis uses ARIMA-X, a model that can incorporate exogenous variables. The exogenous variables comprise macroeconomic variables, the Business Survey Index, and Composite Economic Index variables, to reflect the economic and industrial structure. Among the exogenous variables, only those that lead the supply of industrial land are used for prediction. The variables found to lead the supply of industrial land were imports, private and government consumption expenditure, total capital formation, the economic sentiment index, the producer's shipment index, machinery for domestic demand, and the composite leading index. As a result of estimating the ARIMA-X model using these variables, only the ARIMA-X(1,1,0) model that includes imports was found to be statistically significant. The demand forecast predicted industrial land from 2021 to 2030 by reflecting a scenario of change in imports. As a result, the future demand for industrial land was predicted to increase by 1.91% annually to 1,030.79 km². Comparing these results with the existing exponential smoothing method, the results of this study were found to be more suitable than the existing models. The method is expected to be available as a new industrial land forecasting model.
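The final ARIMA-X(1,1,0)-with-imports model reduces to a regression of the first-differenced series on its own first lag plus the exogenous variable. Below is a minimal pure-Python sketch on synthetic data; a real analysis would use a library routine such as statsmodels' `SARIMAX(y, exog=x, order=(1, 1, 0))`:

```python
def _solve(A, b):
    # Gaussian elimination with partial pivoting for a small square system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_arimax_110(y, x):
    # ARIMA-X(1,1,0): difference y once, then fit by ordinary least squares
    #   dy[t] = c + phi * dy[t-1] + beta * x[t]
    # where x is the exogenous series (imports, in the study's final model).
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    rows = [[1.0, dy[t - 1], x[t + 1]] for t in range(1, len(dy))]
    targets = dy[1:]
    k = 3
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * tgt for r, tgt in zip(rows, targets)) for i in range(k)]
    return _solve(XtX, Xty)  # [c, phi, beta]
```

The variable roles (industrial land supply `y`, imports `x`) follow the study's final model, but the fitting routine is only a sketch: it ignores standard errors, significance tests, and forecasting, which the library implementations provide.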

A Study on the Method of Manufacturing Lactic Acid from Seaweed Biomass (해조류 바이오매스로부터 Lactic acid를 제조하는 방법에 관한 연구)

  • Lee, Hakrae;Ko, Euisuk;Shim, Woncheol;Kim, Jongseo;Kim, Jaineung
    • KOREAN JOURNAL OF PACKAGING SCIENCE & TECHNOLOGY
    • /
    • v.28 no.1
    • /
    • pp.1-8
    • /
    • 2022
  • With the spread of COVID-19 worldwide, non-face-to-face services have grown rapidly, but at the same time the problem of plastic waste is getting worse. Accordingly, eco-friendly policies such as carbon neutrality and a sustainable circular economy are being promoted worldwide. Due to the high demand for eco-friendly products, the packaging industry is trying to develop eco-friendly packaging materials using PLA and PBAT and to create new business models. Meanwhile, Ulva australis occurs in large quantities in the southern seas of Korea and off the coast of Jeju Island, causing marine environmental problems. In this study, lactic acid was produced through dilute acid pretreatment, enzymatic saccharification, and fermentation processes to utilize Ulva australis as a new alternative energy raw material. In general, seaweeds vary in carbohydrate content and sugar composition depending on the species, harvest location, and time. Seaweed is mainly composed of polysaccharides such as cellulose, alginate, mannan, and xylan, but does not contain lignin. A high extraction yield from the complex polysaccharides constituting Ulva australis cannot be expected from a single process. However, the combined process of dilute acid pretreatment and enzymatic saccharification presented in this study can extract most of the sugars contained in Ulva australis. Therefore, the combined process is expected to deliver a high lactic acid production yield once a commercial-scale production process is established.

The Application of Operations Research to Librarianship : Some Research Directions (운영연구(OR)의 도서관응용 -그 몇가지 잠재적응용분야에 대하여-)

  • Choi Sung Jin
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.4
    • /
    • pp.43-71
    • /
    • 1975
  • Operations research has developed rapidly since its origins in World War II. Practitioners of O.R. have contributed to almost every aspect of government and business. More recently, a number of operations researchers have turned their attention to library and information systems, and the author believes that significant research has resulted. It is the purpose of this essay to introduce the library audience to some of these accomplishments, to present some of the author's hypotheses on the subject of library management to which he believes O.R. has great potential, and to suggest some future research directions. Some problem areas in librarianship where O.R. may play a part have been discussed and are summarized below. (1) Library location. In location problems it is usually necessary to strike a balance between accessibility and cost. Many mathematical methods are available for identifying the optimal locations once the balance between these two criteria has been decided. The major difficulties lie in relating cost to size and in taking future change into account when discriminating among possible solutions. (2) Planning new facilities. Standard approaches to using mathematical models for simple investment decisions are well established. If the problem is one of choosing the most economical way of achieving a certain objective, one may compare the alternatives by using one of the discounted cash flow techniques. In other situations it may be necessary to use a cost-benefit approach. (3) Allocating library resources. In order to allocate resources to best advantage the librarian needs to know how the effectiveness of the services he offers depends on the way he deploys his resources. The O.R. approach to such problems is to construct a model representing effectiveness as a mathematical function of the levels of different inputs (e.g., numbers of people in different jobs, acquisitions of different types, physical resources). (4) Long term planning.
Resource allocation problems are generally concerned with up to one and a half years ahead. The longer term certainly offers both greater freedom of action and greater uncertainty. Thus it is difficult to generalize about long term planning problems. In other fields, however, O.R. has made a significant contribution to long range planning, and it is likely to have one to make in librarianship as well. (5) Public relations. It is generally accepted that actual and potential users are too ignorant both of the range of library services provided and of how to make use of them. How should services be brought to the attention of potential users? The answer seems to lie in obtaining empirical evidence from controlled experiments in which a group of libraries participate. (6) Acquisition policy. In comparing alternative policies for the acquisition of materials one needs to know two things: first, the implications of each policy for each service which depends on the stock; second, the relative importance to be ascribed to each service for each class of user. By handling the first, formal models will allow the librarian to concentrate his attention upon the value judgements which will be necessary for the second. (7) Loan policy. The approach to choosing between loan policies is much the same as the previous approach. (8) Manpower planning. For large library systems one should consider constructing models which will permit comparison of the skills necessary in the future with predictions of the skills that will be available, so as to allow informed decisions. (9) Management information systems for libraries. A great deal of data can be available in libraries as a by-product of all recording activities. It is particularly tempting, when procedures are computerized, to make summary statistics available as a management information system. The value of information to particular decisions that may have to be taken in the future is best assessed in terms of a model of the relevant problem. (10) Management gaming.
One of the most common uses of a management game is as a means of developing staff's ability to take decisions. The value of such exercises depends upon the validity of the computerized model. If the model were sufficiently simple to take the form of a mathematical equation, decision makers would probably be able to learn adequately from a graph. More complex situations require simulation models. (11) Diagnostic tools. Libraries are sufficiently complex systems that it would be useful to have available simple means of telling whether performance can be regarded as satisfactory and which, if it cannot, would also provide pointers to what is wrong. (12) Data banks. It would appear worth considering establishing a bank for certain types of data. If certain items on questionnaires were to take a standard form, a greater pool of data would be available for various analyses. (13) Effectiveness measures. The meaning of a library performance measure is not readily interpreted. Each measure must itself be assessed in relation to the corresponding measures for earlier periods of time and a standard measure, which may be the corresponding measure in another library, the 'norm', the 'best practice', or user expectations.
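The discounted cash flow comparison mentioned under point (2) is easy to make concrete. A minimal net-present-value sketch with hypothetical cash flows (negative values are costs; the buy-versus-lease framing is an illustrative assumption, not from the essay):

```python
def npv(rate, cashflows):
    # Net present value: cashflows[t] is received t periods from now.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Two hypothetical ways to achieve the same objective: buy equipment now,
# or lease it over three periods. The higher NPV (less negative cost) wins.
buy = npv(0.10, [-100, 0, 0])
lease = npv(0.10, [-50, -30, -30])
```

At a 10% discount rate the lease stream costs slightly more in present-value terms than the outright purchase, which is exactly the kind of comparison the essay's "discounted cash flow techniques" refer to.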


The Analysis on the Relationship between Firms' Exposures to SNS and Stock Prices in Korea (기업의 SNS 노출과 주식 수익률간의 관계 분석)

  • Kim, Taehwan;Jung, Woo-Jin;Lee, Sang-Yong Tom
    • Asia pacific journal of information systems
    • /
    • v.24 no.2
    • /
    • pp.233-253
    • /
    • 2014
  • Can the stock market really be predicted? Stock market prediction has attracted much attention from many fields, including business, economics, statistics, and mathematics. Early research on stock market prediction was based on random walk theory (RWT) and the efficient market hypothesis (EMH). According to the EMH, stock markets are largely driven by new information rather than present and past prices. Since new information is unpredictable, the stock market will follow a random walk. Despite these theories, Schumaker [2010] asserted that people keep trying to predict the stock market by using artificial intelligence, statistical estimates, and mathematical models. Mathematical approaches include percolation methods, log-periodic oscillations, and wavelet transforms to model future prices. Examples of artificial intelligence approaches that deal with optimization and machine learning are genetic algorithms, support vector machines (SVM), and neural networks. Statistical approaches typically predict the future by using past stock market data. Recently, financial engineers have started to predict stock price movement patterns by using SNS data. SNS is a place where people's opinions and ideas flow freely and affect others' beliefs. Through word-of-mouth in SNS, people share product usage experiences, subjective feelings, and the commonly accompanying sentiment or mood with others. An increasing number of empirical analyses of sentiment and mood are based on textual collections of public user-generated data on the web. Opinion mining is a domain of data mining that extracts public opinions exposed in SNS. There have been many studies on opinion mining from web sources such as product reviews, forum posts, and blogs. In relation to this literature, we try to understand the effects of firms' SNS exposure on stock prices in Korea. Similarly to Bollen et al.
[2011], we empirically analyze the impact of SNS exposure on stock return rates. We use Social Metrics by Daum Soft, an SNS big data analysis company in Korea. Social Metrics provides trends and public opinions in Twitter and blogs by using natural language processing and analysis tools. It collects the sentences circulated on Twitter in real time, breaks these sentences down into word units, and then extracts keywords. In this study, we classify firms' exposure in SNS into two groups: positive and negative. To test the correlation and causal relationship between SNS exposure and stock price returns, we first collect 252 firms' stock prices and the KRX100 index in the Korea Stock Exchange (KRX) from May 25, 2012 to September 1, 2012. We also gather the public attitudes (positive, negative) toward these firms from Social Metrics over the same period. We conduct regression analysis between stock prices and the number of SNS exposures. Having checked the correlation between the two variables, we perform a Granger causality test to see the direction of causation between them. The result is that the number of total SNS exposures is positively related to stock market returns. The number of positive mentions also has a positive relationship with stock market returns. Conversely, the number of negative mentions has a negative relationship with stock market returns, but this relationship is not statistically significant. This means that the impact of positive mentions is statistically bigger than the impact of negative mentions. We also investigate whether the impacts are moderated by industry type and firm size. We find that the SNS exposure impacts are bigger for IT firms than for non-IT firms, and bigger for small firms than for large firms. The results of the Granger causality test show that changes in stock price returns are caused by SNS exposures, while causation in the other direction is not significant.
Therefore, the correlation between SNS exposures and stock prices has a uni-directional causality. The more a firm is exposed in SNS, the more its stock price is likely to increase, while stock price changes may not cause more SNS mentions.
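The Granger causality test used here boils down to asking whether lagged mention counts improve an autoregression of returns. A one-lag, pure-Python sketch on synthetic data (a real analysis would use a library routine such as statsmodels' `grangercausalitytests`, which also reports p-values and handles multiple lags):

```python
def _solve(A, b):
    # Gaussian elimination for the small normal-equation systems below.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def _ols_rss(rows, targets):
    # Residual sum of squares of an OLS fit via normal equations.
    k = len(rows[0])
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * tgt for r, tgt in zip(rows, targets)) for i in range(k)]
    coef = _solve(XtX, Xty)
    return sum((tgt - sum(c * v for c, v in zip(coef, r))) ** 2
               for r, tgt in zip(rows, targets))

def granger_f(y, x):
    # F statistic for "x Granger-causes y" at lag 1: compare an AR(1) of y
    # (restricted) against AR(1) plus lagged x (unrestricted).
    targets = y[1:]
    restricted = [[1.0, y[t - 1]] for t in range(1, len(y))]
    unrestricted = [[1.0, y[t - 1], x[t - 1]] for t in range(1, len(y))]
    rss_r = _ols_rss(restricted, targets)
    rss_u = _ols_rss(unrestricted, targets)
    n, k = len(targets), 3
    return (rss_r - rss_u) / (rss_u / (n - k))
```

A large F statistic means the lagged mention series explains return variation that the returns' own history cannot, which is the sense in which the paper concludes that SNS exposure Granger-causes stock returns but not the reverse.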

The influence of perceived usefulness and perceived ease of use of experience store on satisfaction and loyalty (체험매장의 지각된 용이성과 유용성이 만족과 충성도에 미치는 영향)

  • Lee, Ji-Hyun
    • Journal of Distribution Science
    • /
    • v.9 no.3
    • /
    • pp.5-14
    • /
    • 2011
  • One of the new roles of modern retail stores is to supply consumers with a memorable experience. In Korea, enhancing a store's environment so that customers remember a unique shopping experience is recognized as a sound strategy for strengthening the store's competitiveness. Motivated by this incentive, awareness of the experience-store concept is starting to increase in various categories of the retail industry. However, many experience stores, except in a few cases, have yet to derive a significant profit, explaining why Korean consumers are somewhat unfamiliar with, yet fascinated by, the experience stores that now exist in the country. Consumer satisfaction directly, and indirectly, affects a company's future profit and potential financial gain; customer satisfaction also affects loyalty. Therefore, knowing the significant factors that increase satisfaction and loyalty is essential for any company, in any field, to be able to effectively differentiate itself from the competition. Intrigued by increased competition opportunities, most Korean companies have adopted experience-store marketing strategies. When establishing the most effective processes for increasing sales and achieving a sustainable competitive advantage of a new concept, companies should consider certain factors that influence consumers' ability to accept new concepts and ideas. The Technology Acceptance Model (TAM) is a theory that models how people accept new concepts. TAM proposes the following two factors that influence a person's decisions about how, and when, he or she will use a new product: "perceived usefulness" and "perceived ease of use." Much of the existing research has suggested that a person's character also affects the process for accepting new ideas. Such personal character attributes as individual preferences, self-confidence, and a person's values, traits, and/or skills affect the process for willingly consenting to try something new. 
It will be meaningful to establish how the TAM theory's components, as well as personal character, affect individuals accepting the experience-store concept. To that end, as it pertains to an experience store, the first goal of the study is to examine the influence of innovative factors (perceived usefulness and perceived ease of use) on satisfaction and loyalty. The second objective is to define the moderating effect of consumers' personal characteristics on the model. The proposed model was tested on 149 respondents who were engaged in leisure sports activities and bought sports outdoor garments and equipment. According to the study's findings, the satisfaction and loyalty of an experience store can be explained by perceived usefulness and perceived ease of use, with the study's results demonstrating the stronger of the two factors being "perceived ease of use." The study failed to explain the effects of a person's character on the model. In conclusion, when the companies that operate the experience stores execute their marketing and promotion strategies, they should stress the stores' "ease of use" product components. Additionally, it can be extrapolated from the study data that since the experience-store idea is still relatively unfamiliar to Korean consumers, most customers are not yet able to evaluate, nor take a position regarding, their respective attitudes toward experience stores.
