Analysis of the First Time User Experience of the online memorial platform and suggestion of service developments (온라인 장례 플랫폼의 초기 사용자 경험 분석 및 서비스 개발 제안)
Journal of Service Research and Studies, v.14 no.1, pp.44-62, 2024
The development of online funeral services and growing social interest in eco-friendly funeral culture have raised awareness of the need for a new online funeral culture. Domestic institutions and companies have made several attempts to revitalize online funeral services, but the effect has been weak. The purpose of this study is to propose a design that can improve the accessibility and usability of online memorial services by analyzing usability problem factors through a First Time User Experience (FTUE) analysis of an online memorial platform. To identify the problem factors of the online memorial platform, a literature review of UX, OOBE, and FTUE theories was first conducted. The subject of the study was the app 'Memorial'. Before the FTUE analysis, the app's information architecture (IA) was compared with those of similar services to understand the characteristics of its UX. In addition, ten subjects with no prior experience of online memorial platforms performed tasks corresponding to the Unpack, Setup/Configure, and First Use stages. The experimental process was expressed as a UX Curve to identify the factors that caused negative experiences. The major problem factors identified were unnecessary UI elements, the requirement for sensitive personal information at the membership stage, and a lack of immersion in the service. The proposed improvements included strengthening community functions to facilitate the sharing of emotions and promote smooth communication between users. Based on these insights, we proposed UI/UX service improvements for the app. To verify the effectiveness, serviceability, and value of the developed prototype, expert interviews were conducted with three service design experts. This study was conducted to contribute to the quality improvement and activation of recently emerging online funeral services.
The study is significant in that it clarifies the current status of these services and identifies the factors necessary to improve their accessibility and usability. Subsequent studies should include in-depth user verification of how much the proposed improvements affect the actual user experience.
Recommender systems based on association rule mining contribute significantly to sellers' sales by reducing the time consumers spend searching for the products they want. Recommendations based on the frequency of transactions such as orders can effectively identify the products that are statistically marketable among many products. A product with high sales potential, however, can be omitted from the recommendations if it records an insufficient number of transactions at the beginning of its sale. Products missing from the associated recommendations lose the chance of exposure to consumers, which leads to a decline in the number of transactions; in turn, diminished transactions create a vicious circle of lost opportunities to be recommended. Thus, initial sales are likely to remain stagnant for a certain period of time. Products that are susceptible to fashion or seasonality, such as clothing, may be greatly affected. This study aimed to expand association rules to include in the list of recommendations those products whose initial transaction frequency is low despite their high sales potential. The particular purpose is to predict the strength of the direct connection between two unconnected items from the properties of the paths located between them. An association between two items revealed in transactions can be interpreted as an interaction between them, which can be expressed as a link in a social network whose nodes are items. The first step calculates the centralities of the nodes in the middle of the paths that indirectly connect two nodes without a direct connection. The next step identifies the number of such paths and the shortest among them. These extracted values are used as independent variables in a regression analysis to predict the future connection strength between the nodes.
The strength of the connection between the two nodes, which is defined by the number of nodes lying between them, is measured after a certain period of time. The regression results confirm that the number of paths between the two products, the length of the shortest path, and the number of neighboring items connected to the products are significantly related to their potential connection strength. This study used actual order transaction data collected over three months, from February to April 2016, from an online commerce company. To reduce the complexity of the analysis as the scale of the network grows, the analysis was performed only on miscellaneous goods. Two consecutively purchased items were chosen from each customer's transactions to obtain a pair of antecedent and consequent, which provides a link for constructing the social network. The direction of each link was determined by the order in which the goods were purchased. Excluding the last ten days of the data collection period, the social network of associated items was built for the extraction of independent variables. The model predicts the number of links to be connected in the next ten days from these explanatory variables. Of the 5,711 previously unconnected links, 611 became newly connected during the last ten days. In experiments, the proposed model demonstrated excellent predictive performance: of the 571 links the model predicted, 269 were confirmed to have been connected. This is 4.4 times the average of 61 that would be found without any prediction model. This study is expected to be useful for industries that launch new products quickly with short life cycles, since exposure time is critical for them. It may also be used to detect diseases that are rarely identified in the early stages of medical treatment because of their low incidence.
Since the complexity of social network analysis is sensitive to the number of nodes and links that make up the network, this study was conducted on a single category of miscellaneous goods. Future research should consider that this condition may limit the opportunity to detect unexpected associations between products belonging to different categories.
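The path-based features described above can be sketched in plain Python on a toy item network. The item names and edges are illustrative assumptions, not the study's actual data, and the feature set is a simplified subset of the variables the study extracts.

```python
from collections import defaultdict

# Directed adjacency: an edge A -> B means B was bought right after A.
edges = [("bag", "wallet"), ("wallet", "belt"),
         ("bag", "scarf"), ("scarf", "belt"), ("belt", "gloves")]
adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)

def simple_paths(u, v, cutoff=4, path=None):
    """Enumerate simple (cycle-free) directed paths from u to v."""
    path = (path or []) + [u]
    if u == v:
        yield path
        return
    if len(path) > cutoff:
        return
    for w in adj[u]:
        if w not in path:
            yield from simple_paths(w, v, cutoff, path)

def link_features(u, v):
    """Regression features for a currently unconnected pair (u, v)."""
    paths = list(simple_paths(u, v))
    return {
        "n_paths": len(paths),                      # number of indirect paths
        "shortest": min(len(p) - 1 for p in paths), # shortest path length
        "out_deg_u": len(adj[u]),                   # neighbours of the antecedent
    }

print(link_features("bag", "belt"))
# -> {'n_paths': 2, 'shortest': 2, 'out_deg_u': 2}
```

These features feed the regression that ranks unconnected pairs. On the study's data, selecting 571 candidate links this way yielded 269 realized connections, versus roughly 611 x 571 / 5,711, or about 61, expected by chance, the reported 4.4-fold lift.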
Every company wants to know its customers' requirements and strives to meet them. As a result, communication between customers and companies has become a core competency of business, and its importance continues to grow. There are several strategies for finding customers' needs, but VOC (Voice of Customer) is one of the most powerful communication tools, and VOC gathered through channels such as telephone, post, e-mail, and websites is highly meaningful. Hence, most companies gather VOC and operate a VOC system. VOC is important not only to business organizations but also to public organizations such as governments, educational institutions, and medical centers, which must improve public service quality and customer satisfaction. Accordingly, they build VOC gathering and analysis systems and use them to create and upgrade products and services. In recent years, innovations in the internet and ICT have enabled diverse channels such as SNS, mobile, websites, and call centers to collect VOC data. Although a large amount of VOC data is collected through these channels, proper utilization remains difficult. This is because VOC data consists of highly emotional content, in voice or informally styled text, and its volume is very large. Such unstructured big data is difficult for humans to store and analyze, so organizations need systems that automatically collect, store, classify, and analyze it. This study proposes an intelligent VOC analysis system based on opinion mining that classifies unstructured VOC data automatically and determines both the polarity and the type of the VOC. The basis of the system, a domain-oriented sentiment dictionary, is then created, and the corresponding construction stages are presented in detail.
The experiment was conducted with 4,300 VOC records collected from a medical website to measure the effectiveness of the proposed system, and the data were used to develop the sentiment dictionary by determining the domain-specific sentiment vocabulary and its polarity values. The results show that positive terms such as "칭찬" (praise), "친절함" (kindness), "감사" (gratitude), "무사히" (safely), "잘해" (well done), "감동" (being moved), and "미소" (smile) have high positive opinion values, while negative terms such as "퉁명" (brusqueness), "뭡니까" ("what is this"), "말하더군요" ("so they said"), and "무시하는" (dismissive) carry strong negative opinion. These terms are in general use, and the results suggest that the assigned opinion polarities are highly reliable. Furthermore, the accuracy of the proposed VOC classification model was compared across settings, and the highest classification accuracy of 77.8% was confirmed at an opinion-classification threshold of -0.50. With the proposed intelligent VOC analysis system, real-time opinion classification and the response priority of VOC can be predicted. Ultimately, the expected benefit is catching customer complaints at an early stage and dealing with them quickly with fewer staff operating the VOC system, freeing up human resources and time in the customer service department. Above all, this study is a new attempt to automatically analyze unstructured VOC data using opinion mining, and it shows that the system can classify the positive or negative polarity of VOC opinions. It is expected to suggest a practical framework for VOC analysis in diverse applications, and the model could serve as a real VOC analysis system if implemented. Despite the experimental results and these expectations, this study has several limitations. First of all, the sample data were collected only from one hospital website, which means the sentiment dictionary built from them may be biased toward that hospital and website.
Therefore, future research should incorporate additional channels such as call centers and SNS, and other domains such as government, financial companies, and educational institutions.
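The dictionary-based polarity scoring described above can be sketched as follows. The tiny dictionary, its polarity values, the example tokens, and the exact decision rule at the -0.50 threshold are all illustrative assumptions; the study builds a far larger medical-domain dictionary and does not publish its scoring formula.

```python
# Toy domain-oriented sentiment dictionary (assumed polarity values).
sentiment_dict = {
    "칭찬": 0.9, "친절함": 0.8, "감사": 0.8, "감동": 0.9,  # positive terms
    "퉁명": -0.8, "무시하는": -0.9,                        # negative terms
}

def classify_voc(tokens, threshold=-0.50):
    """Sum dictionary polarity over tokens; a score at or below the
    threshold is classified as a negative (complaint) VOC."""
    score = sum(sentiment_dict.get(t, 0.0) for t in tokens)
    label = "negative" if score <= threshold else "positive"
    return score, label

score, label = classify_voc(["직원이", "퉁명", "무시하는"])
print(round(score, 2), label)  # -1.7 negative
```

A real system would run this after morphological analysis of the Korean text, and the negative label would raise the response priority of the VOC.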
Social media is becoming the platform through which users communicate their activities, status, emotions, and experiences to other people. In recent years, microblogs such as Twitter have gained popularity because of their ease of use, speed, and reach. Compared to a conventional web blog, a microblog lowers users' effort and investment in content generation by encouraging shorter posts. There has been a great deal of research into capturing social phenomena and analyzing the chatter of microblogs. However, measuring television ratings has received little attention so far. Currently, the most common method of measuring TV ratings uses an electronic metering device installed in a small number of sampled households. Microblogs allow users to post short messages, share daily updates, and conveniently keep in touch; similarly, microblog users interact with each other while watching television or movies, or visiting a new place. When measuring TV ratings, some features are significant during certain hours of the day, or days of the week, whereas the same features are meaningless during other time periods. Thus, the importance of features can change during the day, and a model capturing this time-sensitive relevance is required to estimate TV ratings. Therefore, modeling the time-related characteristics of features is key to measuring TV ratings through microblogs. We show that capturing the time dependency of features is vital for improving the accuracy of TV ratings measurement. To explore the relationship between the content of microblogs and TV ratings, we collected Twitter data using the Get Search component of the Twitter REST API from January 2013 to October 2013. Our data set for the experiment contains about 300 thousand posts. After excluding data such as advertising or promoted tweets, we selected 149 thousand tweets for analysis.
The number of tweets reaches its maximum on the broadcasting day and increases rapidly around the broadcasting time. This result stems from the characteristics of the public channel, which broadcasts the program at a predetermined time. From our analysis, we find that count-based features such as the number of tweets or retweets have a low correlation with TV ratings, implying that a simple tweet rate does not reflect satisfaction with or response to the TV programs. Content-based features extracted from the content of tweets have a relatively high correlation with TV ratings. Further, some emoticons or newly coined words that are not tagged in the morpheme extraction process have a strong relationship with TV ratings. We find a time dependency in the correlation of features between the periods before and after the broadcasting time. Since the TV program is broadcast regularly at a predetermined time, users post tweets expressing their expectations for the program or their disappointment at not being able to watch it. The features highly correlated before the broadcast differ from those after broadcasting, which shows that the relevance of words to TV programs can change according to the time of the tweets. Among the 336 words that fulfill the minimum requirements for candidate features, 145 words reach their highest correlation before the broadcasting time, whereas 68 words reach theirs after broadcasting. Interestingly, some words that express the impossibility of watching the program show high relevance despite carrying a negative meaning. Understanding the time dependency of features can help improve the accuracy of TV ratings measurement. This research contributes a basis for estimating the response to, or satisfaction with, broadcast programs using the time dependency of words in Twitter chatter.
More research is needed to refine the methodology for predicting or measuring TV ratings.
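The time-dependency idea above can be sketched by correlating a word's tweet counts with per-episode ratings separately for the pre- and post-broadcast windows. All numbers below are made-up toy data, and Pearson correlation stands in for whatever feature-selection statistic the study actually used.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

ratings = [5.2, 7.1, 6.3, 8.0, 9.4]   # per-episode TV ratings (toy data)
word_before = [12, 30, 18, 41, 55]    # counts of one word before broadcast
word_after = [40, 22, 35, 20, 18]     # counts of the same word after broadcast

r_before = pearson(word_before, ratings)
r_after = pearson(word_after, ratings)
# The same word can correlate strongly in one window and weakly (or with
# the opposite sign) in the other, so each candidate feature is kept for
# the window in which its correlation peaks.
print(round(r_before, 2), round(r_after, 2))
```

This is the mechanism behind the 145-before versus 68-after split among the 336 candidate words: each word is assigned to the time window where it is most informative.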
Recently, banks and large financial institutions have introduced many Robo-Advisor products. A Robo-Advisor is an automated system that produces an optimal asset-allocation portfolio for investors using financial engineering algorithms, without human intervention. Since its first introduction on Wall Street in 2008, the market has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Since Robo-Advisor algorithms suggest asset allocations to investors, mathematical or statistical asset-allocation strategies are applied. The mean-variance optimization model developed by Markowitz is the typical asset-allocation model. The model is simple but quite intuitive: assets are allocated so as to minimize portfolio risk while maximizing expected portfolio return, using optimization techniques. Despite its theoretical background, both academics and practitioners find that the standard mean-variance optimization portfolio is very sensitive to the expected returns estimated from past price data, and corner solutions allocated to only a few assets are often found. The Black-Litterman optimization model overcomes these problems by starting from a neutral Capital Asset Pricing Model equilibrium point. The implied equilibrium return of each asset is derived from the equilibrium market portfolio through reverse optimization. The Black-Litterman model then uses a Bayesian approach to combine subjective views on the price forecasts of one or more assets with the implied equilibrium returns, resulting in new estimates of risk and expected returns. These new estimates can produce an optimal portfolio through the well-known Markowitz mean-variance optimization algorithm. If the investor has no views on the asset classes, the Black-Litterman optimization model produces the same portfolio as the market portfolio. But what if the subjective views are incorrect?
A survey of the performance of stocks recommended by securities analysts shows very poor results. Incorrect views combined with implied equilibrium returns may therefore produce very poor portfolio output for Black-Litterman model users. This paper suggests an objective investor-views model based on Support Vector Machines (SVM), which have shown good performance in stock price forecasting. An SVM is a discriminative classifier defined by a separating hyperplane; linear, radial basis, and polynomial kernel functions are used to learn the hyperplanes. The input variables for the SVM are the returns, standard deviations, Stochastics %K, and price parity degree of each asset class. The SVM outputs expected stock price movements and their probabilities, which are used as input variables in the intelligent views model. The stock price movements are categorized into three phases: down, neutral, and up. The expected stock returns form the P matrix, and their probability results are used in the Q matrix. The implied equilibrium returns vector is combined with the intelligent views matrix, resulting in the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and the risk parity model are used, and the value-weighted and equal-weighted market portfolios serve as benchmark indexes. We collected the 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values. The training period is 2008 to 2015 and the testing period is 2016 to 2018. Our suggested intelligent views model, combined with the implied equilibrium returns, produced the optimal Black-Litterman portfolio. The out-of-sample portfolio showed better performance than the well-known Markowitz mean-variance optimization portfolio, the risk parity portfolio, and the market portfolios. The total return of the 3-year Black-Litterman portfolio is 6.4%, the highest value.
The maximum drawdown is -20.8%, which is also the lowest value, and the Sharpe ratio, which measures return relative to risk, is the highest at 0.17. Overall, our suggested views model shows the possibility of replacing subjective analysts' views with an objective views model for practitioners applying Robo-Advisor asset-allocation algorithms in real trading.
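The reverse-optimization step mentioned above, deriving implied equilibrium returns from the market portfolio, can be sketched as pi = delta * Sigma * w_mkt. The two toy asset classes, the covariance matrix, the risk-aversion coefficient, and the weights are all illustrative assumptions, not the study's KOSPI 200 data.

```python
delta = 2.5                    # assumed risk-aversion coefficient
sigma = [[0.04, 0.01],         # toy covariance matrix of asset returns
         [0.01, 0.09]]
w_mkt = [0.6, 0.4]             # toy market-capitalization weights

# Reverse optimization: pi = delta * Sigma * w_mkt
pi = [delta * sum(sigma[i][j] * w_mkt[j] for j in range(len(w_mkt)))
      for i in range(len(w_mkt))]
print([round(x, 4) for x in pi])  # -> [0.07, 0.105]
```

With no investor views, Black-Litterman reproduces this market portfolio; the SVM-derived views (the P and Q matrices) then tilt the combined return estimates away from these implied values via the Bayesian update.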
Korea has stepped up efforts to investigate and catalog its flora and fauna to conserve the biodiversity of the Korean Peninsula and secure biological resources since the ratification of the Convention on Biological Diversity (CBD) in 1992 and the Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits (ABS) in 2010. Thus, after its establishment in 2007, the National Institute of Biological Resources (NIBR) of the Ministry of Environment of Korea initiated a project called the Korean Indigenous Species Investigation Project to investigate indigenous species on the Korean Peninsula. For 15 years since its beginning in 2006, this project has been carried out in five phases: Phase 1 from 2006-2008, Phase 2 from 2009-2011, Phase 3 from 2012-2014, Phase 4 from 2015-2017, and Phase 5 from 2018-2020. Before this project, in 2006, the number of indigenous species surveyed was 29,916. The figure was cumulatively aggregated at the end of each phase as 33,253 species for Phase 1 (2008), 38,011 species for Phase 2 (2011), 42,756 species for Phase 3 (2014), 49,027 species for Phase 4 (2017), and 54,428 species for Phase 5 (2020). The number of indigenous species surveyed grew rapidly, showing an approximately 1.8-fold increase as the project progressed. These statistics correspond to an annual average of 2,320 newly recorded species during the project period. Among the recorded species, a total of 5,242 new species were reported in scientific publications, a great scientific achievement.
During this project period, newly recorded species on the Korean Peninsula were identified using recent taxonomic classifications as follows: 4,440 insect species (including 988 new species), 4,333 invertebrate species other than insects (including 1,492 new species), 98 vertebrate (fish) species (including nine new species), 309 plant species (including 176 vascular plant species, 133 bryophyte species, and 39 new species), 1,916 algae species (including 178 new species), 1,716 fungi and lichen species (including 309 new species), and 4,812 prokaryotic species (including 2,226 new species). The number of biological specimens collected in each phase was aggregated as follows: 247,226 for Phase 1 (2008), 207,827 for Phase 2 (2011), 287,133 for Phase 3 (2014), 244,920 for Phase 4 (2017), and 144,333 for Phase 5 (2020). A total of 1,131,439 specimens were obtained, with an annual average of 75,429. More specifically, 281,054 insect specimens, 194,667 invertebrate specimens (other than insects), 40,100 fish specimens, 378,251 plant specimens, 140,490 algae specimens, 61,695 fungi specimens, and 35,182 prokaryotic specimens were collected. The cumulative number of researchers involved in this project, nearly all professional taxonomists and graduate students majoring in taxonomy across the country, was around 5,000, with an annual average of 395. The numbers of researchers/assistant researchers (mainly graduate students) participating were 597/268 in Phase 1; 522/191 in Phase 2; 939/292 in Phase 3; 575/852 in Phase 4; and 601/1,097 in Phase 5. During this project period, 3,488 papers were published in major scientific journals. Of these, 2,320 papers were published in domestic journals and 1,168 papers were published in Science Citation Index (SCI) journals.
During the project period, a total of 83.3 billion won (annual average of 5.5 billion won), or approximately US $75 million (annual average of US $5 million), was invested in investigating indigenous species and collecting specimens. This project was a large-scale research study led by the Korean government. It is considered a successful example of Korea's compressed development, as it attracted almost all of the taxonomists in Korea and made remarkable achievements with a massive budget in a short time. The results of this project led to the National List of Species of Korea, in which all species are organized by taxonomic classification. Information on the National List of Species of Korea is available to experts, students, and the general public (https://species.nibr.go.kr/index.do). The information, including descriptions, DNA sequences, habitats, distributions, ecological aspects, images, and multimedia, has been digitized, contributing to scientific advancement in research fields such as phylogenetics and evolution. The species information also serves as a basis for projects on species distribution and biological monitoring, such as climate-sensitive biological indicator species, and helps bio-industries search for useful biological resources. The most meaningful achievement of this project may be its support for nurturing young taxonomists, such as graduate students. The project has continued for the past 15 years and is still ongoing. Efforts still must be made to address issues including species misidentification and invalid synonyms in order to enhance taxonomic research. Research also needs to be conducted to investigate the remaining 50,000 or so species out of the estimated 100,000 indigenous species on the Korean Peninsula.
The wall shear stress in the vicinity of end-to-end anastomoses under steady flow conditions was measured using a flush-mounted hot-film anemometer (FMHFA) probe. The experimental measurements were in good agreement with numerical results except in flows with low Reynolds numbers. The wall shear stress increased proximal to the anastomosis in flow from the Penrose tubing (simulating an artery) to the PTFE graft. In flow from the PTFE graft to the Penrose tubing, low wall shear stress was observed distal to the anastomosis. Abnormal distributions of wall shear stress in the vicinity of the anastomosis, resulting from the compliance mismatch between the graft and the host artery, might be an important factor in the formation of anastomotic neointimal fibrous hyperplasia (ANFH) and in graft failure. The present study suggests a correlation between regions of low wall shear stress and the development of ANFH in end-to-end anastomoses.

Air pressure decay (APD) rate and ultrafiltration rate (UFR) tests were performed on new and saline-rinsed dialyzers as well as on dialyzers reused in patients several times. C-DAK 4000 (Cordis Dow) and CF IS-11 (Baxter Travenol) reused dialyzers obtained from the dialysis clinic were used in the present study. The new dialyzers exhibited a relatively flat APD, whereas saline-rinsed and reused dialyzers showed a considerable amount of decay. C-DAK dialyzers had a larger APD (11.70
Purpose: In 3D conformal radiotherapy, delivering the optimum dose to the tumor while limiting the risk to normal tissue without a marginal miss is restricted by organ motion. For tumors in the thorax and abdomen, the planning target volume (PTV) is defined to include a margin for the movement of tumor volumes during treatment due to the patient's breathing. We designed a respiratory gating radiotherapy device (RGRD) for use during CT simulation, dose planning, and beam delivery under identical breathing-period conditions. Using the RGRD reduces the treatment margin for organ (thorax or abdomen) motion due to breathing and improves the dose distribution for 3D conformal radiotherapy. Materials and Methods: Internal organ motion data for lung cancer patients were obtained by examining the diaphragm in the supine position to determine the position dependency. We built an RGRD composed of a strip band, drug sensor, micro switch, and a connected on-off switch in the LINAC control box. With the RGRD maintaining the same breathing period, spiral CT scanning, virtual simulation, and 3D dose planning for lung cancer patients were performed without an extended PTV margin for free breathing, and the dose was then delivered at the same positions. We calculated effective volumes and normal tissue complication probabilities (NTCP) using dose-volume histograms for the normal lung, and analyzed changes in doses associated with selected NTCP levels and tumor control probabilities (TCP) at these new dose levels. The effects of 3D conformal radiotherapy with the RGRD were evaluated using the DVH (dose-volume histogram), TCP, NTCP, and dose statistics. Results: The average movement of the diaphragm was 1.5 cm in the supine position when patients breathed freely.
Depending on the location of the tumor, the PTV margin needs to be extended from 1 cm to 3 cm, which can greatly increase normal tissue irradiation and hence the normal tissue complication probability. The simple and precise RGRD is very easy to set up on patients, is sensitive to length variation (+2 mm), and delivers on-off information to patients and the LINAC machine. We evaluated the treatment plans of patients who had received conformal partial-organ lung irradiation for the treatment of thorax malignancies. Using the RGRD, the PTV margin for free breathing can be reduced by about 2 cm for organs that move with breathing. TCP values are almost the same
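The NTCP values above are typically computed from the cumulative dose-volume histogram. The abstract does not state which model was used, so the following Lyman-Kutcher-Burman formulation is a standard reference sketch rather than the study's confirmed method; here $D$ is the dose, $\mathrm{TD}_{50}$ the dose yielding a 50% complication probability, $m$ a slope parameter, $n$ a volume-effect parameter, and $v_i$ the fractional volume receiving dose $D_i$.

```latex
\mathrm{NTCP} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{t} e^{-x^{2}/2}\, dx,
\qquad
t = \frac{D - \mathrm{TD}_{50}(v_{\mathrm{eff}})}{m \,\mathrm{TD}_{50}(v_{\mathrm{eff}})},
\qquad
v_{\mathrm{eff}} = \sum_{i} v_{i} \left(\frac{D_{i}}{D_{\max}}\right)^{1/n}
```

Reducing the PTV margin with respiratory gating shrinks the irradiated normal-lung volume, lowering $v_{\mathrm{eff}}$ and hence the NTCP at a given tumor dose.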
Purpose: Bone metastasis in breast cancer patients are usually assessed by conventional Tc-99m methylene diphosphonate whole-body bone scan, which has a high sensitivity but a poor specificity. However, positron emission tomography with