Development of the Information Delivery System for the Home Nursing Service I (Construction of a Home Nursing Database and Computerized Development of Home Nursing for Stroke Patients)
- Journal of Home Health Care Nursing, v.4, pp.5-22, 1997
The purpose of the study was to develop an information delivery system for the home nursing service (HNS), to demonstrate it, and to evaluate its efficiency. The research was conducted from September 1996 to August 31, 1997. In the first stage, an assessment tool was developed through a literature review for patients with cerebrovascular accident (CVA), who have the highest priority for HNS among patients with various health problems at home. Next, after the home care nurse identified patient nursing problems with the assessment tool, the patient classification system developed by Park (1988), comprising 128 nursing activities under 6 categories, was used to identify the home care nurse's activities for patients with CVA at home. The research team held several workshops with 5 clinical nurse experts to refine it, and ultimately 110 nursing activities under 11 categories were derived for patients with CVA. In the second stage, algorithms were developed to connect the 110 nursing activities with the patient nursing problems identified by the assessment tool, and these algorithms were then computerized.
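A minimal sketch of how such problem-to-activity algorithms could be computerized, since the detailed steps are not reproduced here: each nursing problem identified by the assessment tool is mapped, through a rule table, to the subset of the 110 nursing activities it indicates. All problem and activity codes below are hypothetical placeholders, not the paper's actual coding scheme.

# Hypothetical rule table: assessment problem -> indicated nursing activity IDs
ALGORITHM_RULES = {
    "impaired_mobility":   ["ACT-014", "ACT-015", "ACT-062"],
    "dysphagia":           ["ACT-031", "ACT-032"],
    "risk_pressure_ulcer": ["ACT-015", "ACT-044"],
}

def plan_activities(problems):
    """Collect the nursing activities indicated for the assessed problems."""
    activities = []
    for problem in problems:
        for act in ALGORITHM_RULES.get(problem, []):
            if act not in activities:  # keep order, avoid duplicates
                activities.append(act)
    return activities

# Example: a CVA patient assessed with two problems
print(plan_activities(["impaired_mobility", "dysphagia"]))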
Mortierella alpina, a common soil fungus, is the most efficient organism presently known for the production of arachidonic acid. Since arachidonic acid is important in human brain and retina development, a study of the growth effects of diets containing it as a food ingredient was undertaken. Arachidonic acid-rich oil derived from Mortierella alpina was subjected to a program of studies to establish its suitability for use as a dietary supplement. This study compared the growth and learning effects of fungal oil rich in arachidonic acid incorporated into diets fed ad libitum. Sprague-Dawley rats received experimental diets in 5 groups (a standard AIN-93-based control with beef tallow, extracted oil at 8% and 4%, and Mortierella alpina at 10% and 20% of the diet) over the entire experimental period (pre-mating, mating, gestation, lactation, and 4 weeks after weaning). Pups born during this period consumed the same diets for 4 weeks after weaning. There were no statistically significant diet effects on reproductive performance or fertility from birth to weaning. However, the Mortierella alpina diet groups showed lower weight gain and diet intake after weaning. Serum lipids differed significantly among diet groups, with higher TG in the LO (oil 4%) group of dams and higher total cholesterol in the LF (M. alpina 10%) group of pups, although serum albumin content did not differ significantly among groups. Time spent and memory effects in the T-Morris water maze pass test over 4 weeks did not differ among diet groups for dams or 7-week-old pups. The count of backing errors during the weaning period of pups was lower in the HO (extracted oil 8%) group. In the 10% and 20% Mortierella alpina diet groups, brain DNA content was lower along with lower body weight, but liver DNA relative to body weight was higher than in the control. Further correlation analyses of DNA content and arachidonic acid intake, together with the digestion rate of the Mortierella alpina diet, would be needed.
Sound design in the production of media content, such as TV programs, movies, and commercial films (CFs), has been carried out according to the experienced intuition of a few experts with respect to the auditory effects that communicate a story, and there have been few studies quantitatively examining and verifying the visual and auditory effects felt by users. This study is a non-equivalent control group pretest-posttest design and investigates how differences in sound design in media content production affect users' communication effects. We analyzed the brain quotient (BQ) obtained by measuring brain waves while participants watched an experimental video (track A) designed using a 60-second TV CF only and an experimental video (track B) designed with sound effects and music, and investigated which sound design produces differences in communication effects for users. The results can be summarized as follows. First, in the comparison of the attention quotient (ATQ), the BQ of recognition effects, between tracks A and B, track A showed higher activation than track B; this suggests that the music-based sound design produced higher levels of attention and concentration than the sound-effects-based design. Second, in the comparison of the emotional quotient (EQ), representing emotional effects, track A again showed a higher difference than track B, meaning the music-based design contributed more to emotional effects than the sound-effects-based design. Third, in the comparison of the left-right brain balance quotient (ACQ), representing memory activation effects, there were no significant differences. Although conventional theory holds that sound-effects-based design promotes strong concentration while music-based design evokes emotional feeling, within the constraints of this TV CF experiment the music-based design may produce greater sustained concentration, and it was evident that the music-based design had stronger emotional effects. However, the study should be continued with more subjects to clarify the small differences in ACQ. This study is useful for investigating the communication effects of sound design in media content in a quantitative manner through brain wave measurement, and its results may serve as basic material in the field of sound production.
The recommender system is one possible solution for assisting customers in finding the items they would like to purchase. To date, a variety of recommendation techniques have been developed. One of the most successful is Collaborative Filtering (CF), which has been used in a number of applications such as recommending Web pages, movies, music, articles, and products. CF identifies customers whose tastes are similar to those of a given customer and recommends items those customers have liked in the past. Numerous CF algorithms have been developed to increase the performance of recommender systems. Broadly, there are memory-based CF algorithms, model-based CF algorithms, and hybrid CF algorithms that combine CF with content-based techniques or other recommender systems. While many researchers have focused their efforts on improving CF performance, the theoretical justification of CF algorithms is lacking; that is, little is known about how and why CF works. Furthermore, the relative performance of CF algorithms is known to be domain- and data-dependent. Implementing and launching a CF recommender system is time-consuming and expensive, and a system unsuited to the given domain provides customers with poor-quality recommendations that easily annoy them. Therefore, predicting the performance of CF algorithms in advance is practically important and needed. In this study, we propose an efficient approach to predicting the performance of CF. Social Network Analysis (SNA) and an Artificial Neural Network (ANN) are applied to develop our prediction model. CF can be modeled as a social network in which customers are nodes and purchase relationships between customers are links. SNA facilitates an exploration of the topological properties of the network structure that are implicit in the data used for CF recommendations. An ANN model is developed through an analysis of network topology measures such as network density, inclusiveness, clustering coefficient, network centralization, and Krackhardt's efficiency. While network density, expressed as a proportion of the maximum possible number of links, captures the density of the whole network, the clustering coefficient captures the degree to which the overall network contains localized pockets of dense connectivity. Inclusiveness refers to the number of nodes included within the various connected parts of the social network. Centralization reflects the extent to which connections are concentrated in a small number of nodes rather than distributed equally among all nodes. Krackhardt's efficiency characterizes how dense the social network is beyond what is barely needed to keep the social group even indirectly connected. We use these social network measures as input variables of the ANN model. As the output variable, we use the recommendation accuracy measured by the F1-measure. To evaluate the effectiveness of the ANN model, sales transaction data from H department store, one of the well-known department stores in Korea, was used. A total of 396 experimental samples was gathered, of which 40%, 40%, and 20% were used for training, test, and validation, respectively. 5-fold cross validation was also conducted to enhance the reliability of our experiments. The input variable measuring process consists of the following three steps: analysis of customer similarities, construction of a social network, and analysis of social network patterns.
We used NetMiner 3 and UCINET 6.0 for SNA, and Clementine 11.1 for ANN modeling. The experiments showed that the ANN model achieves 92.61% estimated accuracy with an RMSE of 0.0049. Our prediction model can therefore help decide whether CF is useful for a given application with certain data characteristics.
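A minimal sketch of the measurement-and-prediction pipeline under stated assumptions: the paper used NetMiner, UCINET, and Clementine, so networkx and scikit-learn stand in here; the graphs and F1 targets are synthetic; and Krackhardt's efficiency is omitted because networkx has no built-in measure for it.

import networkx as nx
import numpy as np
from sklearn.neural_network import MLPRegressor

def sna_features(G):
    """Topology measures used as ANN inputs (customers = nodes, purchases = links)."""
    n = G.number_of_nodes()
    density = nx.density(G)                        # proportion of possible links present
    clustering = nx.average_clustering(G)          # localized pockets of dense connectivity
    inclusiveness = sum(1 for v in G if G.degree(v) > 0) / n
    degrees = [d for _, d in G.degree()]
    # Freeman degree centralization for an undirected graph
    centralization = (sum(max(degrees) - d for d in degrees)
                      / ((n - 1) * (n - 2))) if n > 2 else 0.0
    return [density, clustering, inclusiveness, centralization]

# Synthetic stand-in for the 396 experimental samples
rng = np.random.default_rng(0)
X = np.array([sna_features(nx.gnp_random_graph(60, p, seed=i))
              for i, p in enumerate(rng.uniform(0.02, 0.3, 100))])
y = rng.uniform(0.2, 0.8, 100)                     # placeholder F1-measure targets

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X[:80], y[:80])
rmse = np.sqrt(np.mean((model.predict(X[80:]) - y[80:]) ** 2))
print("held-out RMSE:", rmse)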
The purpose of this study is to develop a neuropsychological model of the spatial ability factors and, based on that model, to divide the brain areas active during the light & shadow problem-solving process into domain-general and domain-specific abilities. Twenty-four male college students participated; their synchronized eye movements and electroencephalograms (EEG) were measured while they performed the spatial ability test and the light & shadow tasks. The neuropsychological model of the spatial ability factors and the light & shadow problem-solving process was developed by integrating the participants' eye movement measurements, brain activity areas, and interview findings regarding their thoughts and strategies. The results are as follows: First, the spatial visualization and mental rotation factors mainly required activation of the parietal lobe, while the spatial orientation factor required activation of the frontal lobe. Second, in the light & shadow problem-solving process, participants used both their spatial ability, as domain-general thought, and the application of scientific principles, as domain-specific thought. The brain activity patterns from inferring the shadow cast by a parallel light source and inferring the shadow when the direction of the light changed were similar to the neuropsychological model for the spatial visualization factor. The brain activity pattern from inferring an object from its shadow under light from multiple directions was similar to the model for the spatial orientation factor. The brain activity pattern from inferring a shadow cast by a point source of light was similar to the model for the spatial visualization factor. In addition, when participants solved the light & shadow tasks, the middle temporal gyrus, precentral gyrus, inferior frontal gyrus, and middle frontal gyrus were additionally activated; these areas are responsible for deductive reasoning, working memory, and planning of action.
Brand experience has received considerable attention in marketing research. When consumers consume and use brands, they are exposed to various specific brand-related stimuli, including brand identity and brand communication components (e.g., colors, shapes, designs, slogans, mascots, and brand characters). Brakus, Schmitt, and Zarantonello (2009) conceptualized brand experience as subjective, internal consumer responses evoked by brand-related stimuli and demonstrated that it can be broken down into four dimensions (sensory, affective, intellectual, and behavioral). Because experiences result from stimulation and lead to pleasurable outcomes, we expect consumers to want to repeat these experiences. That is, brand experiences stored in consumer memory should affect brand loyalty: consumers with positive experiences should be more likely to buy a brand again and less likely to buy an alternative brand (Fournier 1998; Oliver 1997). Brand attachment, one dimension of the consumer-brand relationship, is defined as an emotional bond to a specific brand (Thomson, MacInnis, and Park 2005). Brand attachment is a target-specific bond between the consumer and the brand; strong attachment is thus attended by a rich set of schemas linking the brand to the consumer. Previous research proposes that brand attachment should affect consumers' commitment to the brand. Brand experience differs from affective constructs such as brand attachment. Brand attachment is based on interaction between a consumer and the brand, whereas brand experience occurs whenever there is direct or indirect interaction with the brand; furthermore, brand experience is not an emotional relationship concept. Brakus et al. (2009) suggest that brand experience may result in brand attachment. This study aims to distinguish the brand experience dimensions and investigate the effects of brand experience on brand attachment and brand commitment. We test the research problems with data from 265 customers having brand experiences in various product categories, using multiple regression and a structural equation model. The empirical results can be summarized as follows. First, the paths from affective, behavioral, and intellectual experience to brand attachment were positively significant, whereas the effect of sensory experience on brand attachment was not supported. In the consumer literature, sensory experiences are often equated with aesthetic pleasure; over time, these pleasurable experiences can affect consumer satisfaction, but sensory pleasures are not linked to attachment in the sense of a strong emotional bond (i.e., hot affect). These empirical results confirm previous studies. Second, brand attachment in the form of passion and connection influences brand commitment positively, but affection does not. In a marketing context, consumers with brand attachment are willing to stay in the relationship. The results also imply that consumers' emotional attachment is characterized by a set of brand experience dimensions and that consumers who are emotionally attached to a brand are committed to it. The findings contribute to clarifying the differences between brand experience and brand attachment and provide practical implications for brand experience management. Recently, many brand managers have taken a short-term view.
Based on this study, we suggest that effective brand experience management requires taking a long-term view of marketing decisions.
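A minimal sketch of the first analysis step (multiple regression of brand attachment on the four experience dimensions), with synthetic data generated to mirror the reported pattern: affective, behavioral, and intellectual experience load on attachment while sensory experience does not. Variable names and coefficients are illustrative, not the study's dataset.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 265                                   # sample size reported in the study
sensory, affective, behavioral, intellectual = rng.normal(size=(4, n))
# Synthetic attachment: no sensory path, mirroring the reported result
attachment = (0.4 * affective + 0.3 * behavioral + 0.3 * intellectual
              + rng.normal(scale=0.5, size=n))

X = sm.add_constant(np.column_stack([sensory, affective, behavioral, intellectual]))
fit = sm.OLS(attachment, X).fit()
print(fit.summary(xname=["const", "sensory", "affective", "behavioral", "intellectual"]))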
Artificial intelligence is changing the world, and the financial market is no exception. Robo-advisors are actively being developed, making up for the weaknesses of traditional asset allocation methods and replacing the parts those methods handle poorly. A robo-advisor makes automated investment decisions with artificial intelligence algorithms and is used with various asset allocation models such as the mean-variance model, the Black-Litterman model, and the risk parity model. The risk parity model is a typical risk-based asset allocation model focused on the volatility of assets; it avoids investment risk structurally, which gives it stability in the management of large funds, and it has been widely used in the financial field. XGBoost is a parallel tree-boosting method: an optimized gradient boosting model designed to be highly efficient and flexible. It not only scales to billions of examples in limited memory environments but is also very fast to train compared to traditional boosting methods, and it is frequently used in many fields of data analysis. In this study, we therefore propose a new asset allocation model that combines the risk parity model with the XGBoost machine learning model. The model uses XGBoost to predict the risk of assets and applies the predicted risk in the covariance estimation process. Because optimized asset allocation models estimate investment proportions from historical data, there are estimation errors between the estimation period and the actual investment period, and these errors adversely affect optimized portfolio performance. This study aims to improve the stability and portfolio performance of the model by predicting the volatility of the next investment period and reducing the estimation errors of the optimized asset allocation model. As a result, it narrows the gap between theory and practice and proposes a more advanced asset allocation model. For the empirical test of the suggested model, we used Korean stock market price data for a total of 17 years, from 2003 to 2019. The data sets are composed of the energy, finance, IT, industrial, material, telecommunication, utility, consumer, health care, and staples sectors. We accumulated predictions using a moving-window method with 1,000 in-sample and 20 out-of-sample observations, producing a total of 154 rebalancing back-testing results. We analyzed portfolio performance in terms of cumulative rate of return and obtained a large sample of results thanks to the long test period. Compared with the traditional risk parity model, the experiment recorded improvements in both cumulative yield and reduction of estimation errors. The total cumulative return is 45.748%, about 5 percentage points higher than that of the risk parity model, and the estimation errors are reduced in 9 out of 10 industry sectors. The reduction of estimation errors increases the stability of the model and makes it easier to apply in practical investment. The results of the experiment showed improvement in portfolio performance achieved by reducing the estimation errors of the optimized asset allocation model. Many financial and asset allocation models are limited in practical investment by the fundamental question of whether the past characteristics of assets will persist into the future in a changing financial market.
However, this study not only takes advantage of traditional asset allocation models but also supplements their limitations and increases stability by predicting asset risks with a state-of-the-art algorithm. There are various studies on parametric estimation methods for reducing estimation errors in portfolio optimization; we have suggested a new, machine-learning-based method for reducing the estimation errors of an optimized asset allocation model. This study is therefore meaningful in that it proposes an advanced artificial intelligence asset allocation model for fast-developing financial markets.
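A minimal sketch of the proposed combination under stated assumptions (synthetic returns, and inverse-volatility weighting as a common naive simplification of full risk parity; not the authors' exact code): XGBoost predicts each asset's next-period volatility from its lagged block volatilities, the predicted volatilities rescale the historical correlation matrix into a forward-looking covariance, and that predicted risk drives the allocation.

import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, size=(1000, 10))    # synthetic daily returns, 10 sectors
window, horizon = 1000, 20                         # in-sample / out-of-sample sizes, as in the study

pred_vols = []
for a in range(returns.shape[1]):
    r = returns[:window, a]
    # Realized volatility over consecutive 20-day blocks
    vols = np.array([r[i:i + horizon].std()
                     for i in range(0, window - horizon, horizon)])
    X, y = vols[:-1].reshape(-1, 1), vols[1:]      # lagged vol -> next block's vol
    model = XGBRegressor(n_estimators=50, max_depth=2).fit(X, y)
    pred_vols.append(model.predict(vols[-1:].reshape(1, 1))[0])

pred_vols = np.array(pred_vols)
corr = np.corrcoef(returns[:window], rowvar=False)
cov_forward = np.outer(pred_vols, pred_vols) * corr    # predicted-risk covariance

weights = (1 / pred_vols) / (1 / pred_vols).sum()      # naive (inverse-vol) risk parity
print("sector weights:", np.round(weights, 3))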
In recent years, frequent itemset mining that considers the importance of each item has been intensively studied as one of the important issues in the data mining field. According to the strategies for utilizing item importance, itemset mining approaches based on item importance are classified as follows: weighted frequent itemset mining, frequent itemset mining using transactional weights, and utility itemset mining. In this paper, we perform an empirical analysis of frequent itemset mining algorithms based on transactional weights. These algorithms compute transactional weights from the weight of each item in large databases and discover weighted frequent itemsets on the basis of item frequency and the weight of each transaction. Consequently, the importance of a given transaction can be seen through database analysis, because a transaction's weight is higher if it contains many items with high weights. We not only analyze the advantages and disadvantages of the best-known algorithms in transactional-weight-based frequent itemset mining but also compare their performance. As a representative of frequent itemset mining using transactional weights, WIS introduced the concept and strategies of transactional weights. In addition, there are other state-of-the-art algorithms, WIT-FWIs, WIT-FWIs-MODIFY, and WIT-FWIs-DIFF, for extracting itemsets with weight information. To efficiently mine weighted frequent itemsets, these three algorithms use a special lattice-like data structure called the WIT-tree. The algorithms do not need an additional database scan after the construction of the WIT-tree is finished, since each node of the WIT-tree carries item information such as the item and transaction IDs. In particular, whereas traditional algorithms perform many database scans to mine weighted itemsets, the WIT-tree-based algorithms avoid this overhead by reading the database only once. Additionally, the algorithms generate each new itemset of length N+1 from two different itemsets of length N. To discover new weighted itemsets, WIT-FWIs performs itemset combination using the information of the transactions that contain all the itemsets; WIT-FWIs-MODIFY adds a feature that decreases the operations needed to calculate the frequency of the new itemset; and WIT-FWIs-DIFF utilizes a technique based on the difference of two itemsets. To compare and analyze the performance of the algorithms in various environments, we use real datasets of two types (dense and sparse) and measure runtime and maximum memory usage. Moreover, a scalability test is conducted to evaluate the stability of each algorithm as the size of the database changes. As a result, WIT-FWIs and WIT-FWIs-MODIFY show the best performance on the dense dataset, while on the sparse dataset WIT-FWIs-DIFF has better mining efficiency than the other algorithms. Compared with the WIT-tree-based algorithms, WIS, which is based on the Apriori technique, has the worst efficiency because it requires many more computations than the others on average.
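A minimal sketch of the core quantities, following the general definitions in the transactional-weight literature (the exact formulas in WIS and the WIT-FWIs family may differ): a transaction's weight is the mean of its items' weights, and an itemset's weighted support is the share of total transaction weight carried by the transactions containing it. The item weights and tiny database below are made up for illustration.

# Hypothetical item weights and a toy transaction database
item_weight = {"a": 0.9, "b": 0.4, "c": 0.7, "d": 0.2}
db = [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}, {"b", "d"}]

def tw(t):
    """Transactional weight: average weight of the items in transaction t."""
    return sum(item_weight[i] for i in t) / len(t)

def weighted_support(itemset):
    """Share of total transactional weight carried by transactions containing itemset."""
    total = sum(tw(t) for t in db)
    covered = sum(tw(t) for t in db if itemset <= t)
    return covered / total

print(weighted_support({"a", "b"}))   # weighted frequency of itemset {a, b}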
In this paper, we suggest an application system architecture that provides an accurate, fast, and efficient automatic gasometer reading function. The system captures a gasometer image using a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount as character information using selective optical character recognition based on deep learning. In general, an image contains many types of characters, and optical character recognition extracts all of them; but some applications need to ignore character types that are not of interest and focus only on specific ones. In automatic gasometer reading, for example, the system only needs to extract the device ID and gas usage amount from gasometer images in order to bill users; strings such as the device type, manufacturer, manufacturing date, and specification are not valuable to the application. Thus, the application has to analyze the region of interest and specific character types to extract only the valuable information. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the region of interest for selective character information extraction. We built three neural networks for the application system: the first is a convolutional neural network that detects the regions of interest containing the gas usage amount and device ID character strings; the second is another convolutional neural network that transforms the spatial information of a region of interest into spatially sequential feature vectors; and the third is a bi-directional long short-term memory network that converts the sequential information into character strings by time-series analysis, mapping feature vectors to characters. In this work, the character strings of interest are the device ID, which consists of 12 Arabic numeral characters, and the gas usage amount, which consists of 4-5 Arabic numeral characters. All system components are implemented in the Amazon Web Services cloud with Intel Xeon E5-2686 v4 CPUs and an NVIDIA Tesla V100 GPU. The architecture adopts a master-slave processing structure for efficient, fast parallel processing, coping with about 700,000 requests per day. A mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on the Intel Xeon CPU and pushes each reading request from a mobile device onto an input queue with a FIFO (First In, First Out) structure. The slave process consists of the three deep neural networks that perform character recognition and runs on the NVIDIA GPU module. The slave process continuously polls the input queue for recognition requests; when requests from the master process arrive, it converts the queued image into the device ID string, the gas usage amount string, and the position information of the strings, returns this information to the output queue, and switches back to idle mode to poll the input queue. The master process takes the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three deep neural networks.
22,985 images were used for training and validation, and 4,135 images were used for testing. For each training epoch, we randomly split the 22,985 images in an 8:2 ratio between training and validation. The 4,135 test images were categorized into 5 types (normal, noise, reflex, scale, and slant): normal data is clean image data; noise means an image with noise signal; reflex means an image with light reflection in the gasometer region; scale means an image with a small object size due to long-distance capture; and slant means an image that is not horizontally level. The final character string recognition accuracies on normal data are 0.960 for the device ID and 0.864 for the gas usage amount.
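A minimal sketch of the master-slave queueing structure described above, simplified to threads and in-process FIFO queues (the deployed system ran the master on a CPU and GPU-backed slaves in AWS); recognize() is a placeholder for the three-network detection/CRNN pipeline, not the real model.

import queue
import threading

input_q = queue.Queue()     # FIFO of reading requests pushed by the master
output_q = queue.Queue()    # recognition results returned by slaves

def recognize(image):
    # Placeholder: CNN region detection -> CNN feature extraction -> BiLSTM decoding
    return {"device_id": "000000000000", "usage": "00000", "positions": []}

def slave():
    while True:
        image = input_q.get()              # poll the input queue
        output_q.put(recognize(image))     # return the result, then go idle
        input_q.task_done()

threading.Thread(target=slave, daemon=True).start()

# Master side: push a captured image, then deliver the result to the device
input_q.put(b"<jpeg bytes from the mobile camera>")
print(output_q.get())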
The wall shear stress in the vicinity of end-to-end anastomoses under steady flow conditions was measured using a flush-mounted hot-film anemometer (FMHFA) probe. The experimental measurements were in good agreement with numerical results except in flow with low Reynolds numbers. The wall shear stress increased proximal to the anastomosis in flow from the Penrose tubing (simulating an artery) to the PTFE graft. In flow from the PTFE graft to the Penrose tubing, low wall shear stress was observed distal to the anastomosis. Abnormal distributions of wall shear stress in the vicinity of the anastomosis, resulting from the compliance mismatch between the graft and the host artery, might be an important factor in the formation of anastomotic neointimal fibrous hyperplasia (ANFH) and in graft failure. The present study suggests a correlation between regions of low wall shear stress and the development of ANFH in end-to-end anastomoses.
Air pressure decay (APD) rate and ultrafiltration rate (UFR) tests were performed on new and saline-rinsed dialyzers as well as on dialyzers reused in patients several times. C-DAK 4000 (Cordis Dow) and CF IS-11 (Baxter Travenol) reused dialyzers obtained from the dialysis clinic were used in the present study. The new dialyzers exhibited a relatively flat APD, whereas the saline-rinsed and reused dialyzers showed a considerable amount of decay. C-DAK dialyzers had a larger APD (11.70