• Title/Summary/Keyword: Proposed model


A Graph Layout Algorithm for Scale-free Network (척도 없는 네트워크를 위한 그래프 레이아웃 알고리즘)

  • Cho, Yong-Man; Kang, Tae-Won
    • Journal of KIISE: Computer Systems and Theory / v.34 no.5_6 / pp.202-213 / 2007
  • A network is an important model widely used in engineering as well as the natural and social sciences. To analyze such networks easily, their features must be laid out visually, and graph-layout research has advanced recently along with developments in computing technology. Among network models, the scale-free network has attracted attention and is widely used for analyzing and understanding complex situations in various fields. A scale-free network has two characteristic features: first, the number of links per node (the degree) follows a power-law distribution; second, the network contains hubs with many links. Consequently, representing hubs visually is important for scale-free networks, yet existing graph-layout algorithms so far represent only clusters. In this thesis we therefore propose a graph-layout algorithm that effectively presents scale-free networks. In the proposed algorithm, the Hubity (hub + -ity) repulsive force between hubs is inversely proportional to the distance between them, and if the degree of the hubs increases by a factor of ${\alpha}$, the Hubity repulsive force increases by a factor of ${\alpha}^{\gamma}$ (where ${\gamma}$ is the connection-line, i.e. degree, exponent). In addition, when the algorithm includes a counter that scales the force in proportion to the total numbers of nodes and links, the Hubity repulsive force becomes independent of the scale of the network. The proposed algorithm was compared with existing graph-layout algorithms in an experiment. The experimental procedure is as follows: first, determine whether the network contains a hub by checking the connection-line exponent; if the exponent lies between 2 and 3, conclude that the network is a scale-free network with a hub and apply the proposed algorithm. As a result, we validated that the proposed graph-layout algorithm displays scale-free networks more effectively than existing cluster-centered algorithms (e.g., Noack's).
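The abstract describes the Hubity repulsive force only verbally. The following minimal Python sketch illustrates one possible reading of it, assuming a symmetric pairwise term $(d_i d_j)^{\gamma/2}$ so that scaling both hub degrees by ${\alpha}$ scales the force by ${\alpha}^{\gamma}$, and modelling the scale-normalizing counter simply as 1/(nodes + links); the paper's exact formulation may differ.

```python
def hubity_repulsion(deg_i, deg_j, distance, gamma, n_nodes, n_links):
    """Illustrative 'Hubity' repulsive force between two hubs.

    Assumed form (not given explicitly in the abstract):
      * symmetric pairwise term (deg_i * deg_j) ** (gamma / 2), so scaling
        both degrees by alpha scales the force by alpha ** gamma;
      * a counter 1 / (n_nodes + n_links) that keeps the force independent
        of network scale;
      * inverse proportionality to the hub-hub distance, as described.
    """
    counter = 1.0 / (n_nodes + n_links)
    return counter * (deg_i * deg_j) ** (gamma / 2.0) / distance

# Two hubs of degree 40 and 60 at distance 5 in a network with gamma = 2.5
print(hubity_repulsion(40, 60, 5.0, gamma=2.5, n_nodes=1000, n_links=3000))
```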

The Effect of Meta-Features of Multiclass Datasets on the Performance of Classification Algorithms (다중 클래스 데이터셋의 메타특징이 판별 알고리즘의 성능에 미치는 영향 연구)

  • Kim, Jeonghun; Kim, Min Yong; Kwon, Ohbyung
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.23-45 / 2020
  • Big data is being created in a wide variety of fields such as medical care, manufacturing, logistics, sales sites, and SNS, and the characteristics of the resulting datasets are equally diverse. To secure corporate competitiveness, decision-making capacity must be improved using classification algorithms. However, most practitioners do not have sufficient knowledge of which classification algorithm is appropriate for a specific problem area. In other words, determining the appropriate classification algorithm for the characteristics of a dataset has been a task requiring expertise and effort. This is because the relationship between dataset characteristics (called meta-features) and the performance of classification algorithms has not been fully understood. Moreover, there has been little research on meta-features reflecting the characteristics of multi-class data. Therefore, the purpose of this study is to empirically analyze whether the meta-features of multi-class datasets have a significant effect on the performance of classification algorithms. In this study, the meta-features of multi-class datasets were grouped into two factors (data structure and data complexity), and seven representative meta-features were selected. Among these, we included the Herfindahl-Hirschman Index (HHI), originally a market-concentration measure, as a replacement for the imbalance ratio (IR), and we developed a new index, the Reverse ReLU Silhouette Score, and added it to the meta-feature set. Six representative datasets from the UCI Machine Learning Repository were selected (Balance Scale, Page Blocks, Car Evaluation, User Knowledge Modeling, Wine Quality (red), and Contraceptive Method Choice). The classes of each dataset were classified using the algorithms selected for the study (KNN, Logistic Regression, Naïve Bayes, Random Forest, and SVM). For each dataset, 10-fold cross-validation was applied; oversampling from 10% to 100% was applied to each fold and the meta-features of the dataset were measured. The selected meta-features are HHI, number of classes, number of features, entropy, Reverse ReLU Silhouette Score, nonlinearity of a linear classifier, and hub score. The F1-score was chosen as the dependent variable. The results show that the six meta-features, including the Reverse ReLU Silhouette Score and the HHI proposed in this study, have a significant effect on classification performance: (1) the HHI proposed in this study is significant for classification performance; (2) the number of features has a significant effect on classification performance and, unlike the number of classes, its effect is positive; (3) the number of classes has a negative effect on classification performance; (4) entropy has a significant effect on classification performance; (5) the Reverse ReLU Silhouette Score also significantly affects classification performance at the 0.01 significance level; and (6) the nonlinearity of linear classifiers has a significant negative effect on classification performance. The analyses by individual classification algorithm were also consistent, except that in the per-algorithm regressions the number of features, unlike for the other classifiers, showed no significant effect for the Naïve Bayes algorithm.
This study makes two theoretical contributions: (1) two new meta-features (HHI and the Reverse ReLU Silhouette Score) were shown to be significant; (2) the effects of data characteristics on classification performance were investigated using meta-features. The practical contributions are as follows: (1) the findings can be used to develop a system that recommends classification algorithms according to dataset characteristics; (2) because data characteristics differ across situations, many data scientists search for the optimal algorithm by repeatedly testing and tuning algorithm parameters, a process that wastes hardware, cost, time, and manpower, and this study can reduce that waste. The study is expected to be useful to machine learning and data mining researchers, practitioners, and developers of machine-learning-based systems. The paper is organized into an introduction, related research, the research model, experiments, and conclusion and discussion.
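As a small illustration of one of the meta-features discussed above, the Herfindahl-Hirschman Index can be computed from a label vector as the sum of squared class proportions (its standard market-concentration form; the abstract does not spell out any rescaling the paper may apply):

```python
from collections import Counter

def herfindahl_hirschman_index(labels):
    """HHI of a label vector: the sum of squared class proportions.
    Equals 1/K for a perfectly balanced K-class dataset and approaches 1.0
    as one class dominates, so it serves as a class-imbalance proxy."""
    counts = Counter(labels)
    n = sum(counts.values())
    return sum((c / n) ** 2 for c in counts.values())

# Example: a mildly imbalanced 3-class label vector
print(herfindahl_hirschman_index(["a"] * 60 + ["b"] * 30 + ["c"] * 10))  # 0.46
```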

Performance Evaluation of a Dynamic Bandwidth Allocation Algorithm with providing the Fairness among Terminals for Ethernet PON Systems (단말에 대한 공정성을 고려한 이더넷 PON 시스템의 동적대역할당방법의 성능분석)

  • Park Ji-won; Yoon Chong-ho; Song Jae-yeon; Lim Se-youn; Kim Jin-hee
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.11B / pp.980-990 / 2004
  • In this paper, we propose a dynamic bandwidth allocation algorithm for IEEE 802.3ah Ethernet Passive Optical Network (EPON) systems that provides fairness among terminals, and we evaluate its delay-throughput performance by simulation. In conventional EPON systems, the Optical Line Termination (OLT) schedules the upstream bandwidth for each Optical Network Unit (ONU) based on the ONU's buffer state. This scheme can allocate bandwidth fairly to each ONU. However, it has a critical problem: it does not guarantee fair bandwidth among the terminals connected to the ONUs. For example, suppose the traffic from a greedy terminal suddenly increases. The buffer state of its ONU is immediately reported to the OLT, and that ONU ends up receiving more bandwidth. As a result, less bandwidth is allocated to the other ONUs, and the transfer delay of the terminals connected to them inevitably increases. Noting that this unfairness problem exists in conventional EPON systems, we propose a fair bandwidth allocation scheme in which the OLT considers not only each ONU's buffer state but also the number of terminals connected to it. For the performance evaluation, we developed an EPON simulation model in the SIMULA simulation language. From the throughput-delay results and the buffer-state dynamics over time for each terminal and ONU, one can see that the proposed scheme provides fairness among terminals rather than merely among ONUs. Finally, it is worth noting that the proposed scheme might be an attractive solution for providing fairness among subscriber terminals in public EPON systems.
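A minimal sketch of the fairness idea described above: the OLT weights each ONU's grant by the number of attached terminals rather than by queue length alone. The function name, the per-terminal split, and the capping rule are assumptions for illustration, not the paper's exact algorithm.

```python
def allocate_grants(reports, cycle_bytes):
    """Terminal-fair grant allocation at the OLT (illustrative only).

    `reports` maps each ONU id to (queued_bytes, n_terminals). Instead of
    splitting the upstream cycle by queue length alone (ONU-fair), the
    available bytes are divided per terminal, summed per ONU, and capped
    by what the ONU actually has queued.
    """
    total_terminals = sum(t for _, t in reports.values()) or 1
    per_terminal = cycle_bytes / total_terminals
    return {onu: min(queued, per_terminal * terms)
            for onu, (queued, terms) in reports.items()}

# Two ONUs with equal backlog but different terminal counts
print(allocate_grants({"onu1": (50_000, 1), "onu2": (50_000, 4)}, 60_000))
```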

Temperature Compensation of Optical FBG Sensors Embedded Tendon for Long-term Monitoring of Tension Force of Ground Anchor (광섬유 센서 내장형 텐던을 이용한 그라운드 앵커의 장기 장력모니터링을 위한 온도보상)

  • Sung, Hyun-Jong; Kim, Young-Sang; Kim, Jae-Min; Park, Gui-Hyun
    • Journal of the Korean Geotechnical Society / v.28 no.5 / pp.13-25 / 2012
  • The ground anchor method is one of the most popular slope-reinforcement technologies in Korea. For long-term health monitoring of a slope reinforced with permanent anchors, monitoring the tension force of the ground anchors is very important. However, electromechanical sensors such as strain gauges and vibrating-wire load cells carry long-term reliability risks and suffer from noise during long-distance transmission and from electromagnetic interference (EMI). An FBG-sensor-embedded tendon was therefore developed to measure the strain of a 7-wire strand by embedding an FBG sensor into the center king cable of the strand. This FBG-sensor-embedded tendon has been successfully applied to measuring short-term anchor force, but to use it for long-term monitoring, temperature compensation of the embedded FBG sensors is required. In this paper, we describe how to compensate for the effect of changes in underground temperature during long-term tension-force monitoring of ground anchors using optical fiber (FBG, Fiber Bragg Grating) sensors. A model test was carried out to determine the temperature sensitivity coefficient (${\beta}^{\prime}$) of the FBG-sensor-embedded tendon. The determined coefficient, ${\beta}^{\prime}=2.0{\times}10^{-5}/^{\circ}C$, was verified by comparing the ground temperatures predicted from the proposed sensor using ${\beta}^{\prime}$ with those measured by a ground thermometer. Finally, temperature compensation based on the ${\beta}^{\prime}$ value and ground temperature measurements from the Korea Meteorological Administration (KMA) was applied to the tension-force monitoring results of tension-type and compression-type anchors that had been installed at the test site more than a year earlier. The temperature-compensated tension forces were compared with those measured by a conventional load cell over the same period. The results show that the determined temperature sensitivity coefficient (${\beta}^{\prime}$) of the FBG-sensor-embedded tendon is valid and that the proposed temperature compensation method is appropriate, since the compensated tension forces do not depend on changes in ground temperature and are consistent with the tension forces measured by the conventional load cell.
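A minimal sketch of the temperature compensation described above, assuming the usual FBG response ${\Delta}{\lambda}/{\lambda}_0 = k_{\varepsilon}{\varepsilon}+{\beta}^{\prime}{\Delta}T$ with ${\beta}^{\prime}=2.0{\times}10^{-5}/^{\circ}C$ taken from the abstract; the strain sensitivity $k_{\varepsilon}{\approx}0.78$ is a typical literature value, not one reported in the paper.

```python
def compensated_strain(d_lambda, lambda0, d_temp, beta=2.0e-5, k_eps=0.78):
    """Temperature-compensated strain from an FBG wavelength shift.

    Assumes d_lambda / lambda0 = k_eps * strain + beta * dT, with beta from
    the abstract and k_eps a typical (assumed) strain sensitivity.
    """
    return (d_lambda / lambda0 - beta * d_temp) / k_eps

# 1550 nm grating, 0.5 nm Bragg shift, 5 degC rise in ground temperature
print(compensated_strain(d_lambda=0.5e-9, lambda0=1550e-9, d_temp=5.0))  # ~2.9e-4 strain
```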

Intelligent Optimal Route Planning Based on Context Awareness (상황인식 기반 지능형 최적 경로계획)

  • Lee, Hyun-Jung; Chang, Yong-Sik
    • Asia Pacific Journal of Information Systems / v.19 no.2 / pp.117-137 / 2009
  • Recently, intelligent traffic information systems have enabled people to forecast traffic conditions before hitting the road. These convenient systems operate on data reflecting current road and traffic conditions as well as distance data between locations. Thanks to the rapid development of ubiquitous computing, abundant context data have become readily available, making vehicle route planning easier than ever. Previous research on optimizing vehicle route planning merely focused on finding the optimal distance between locations. Contexts reflecting road and traffic conditions were not seriously treated in distance-based route planning, because such information has little impact on routing until a complex traffic situation arises. It was also difficult to take traffic contexts fully into account, because predicting dynamic traffic situations was regarded as a daunting task. However, with the rapid increase in traffic complexity, the importance of contexts reflecting moving costs has emerged. Hence, this research proposes a framework designed to resolve the optimal route planning problem by taking full account of additional moving costs such as road traffic cost and weather cost. Recent technological developments, particularly in the ubiquitous computing environment, have facilitated the collection of such data. The framework is based on time, traffic, and environment contexts and addresses the following issues. First, we clarify and classify the diverse contexts that affect a vehicle's velocity and estimate the optimal moving cost by dynamic programming, accounting for the context cost as contexts vary. Second, the velocity reduction rate is applied to find the optimal route (shortest path) using context data on current traffic conditions. The velocity reduction rate expresses the degree to which a vehicle's attainable velocity decreases under the relevant road and traffic contexts, and is based on statistical or experimental data. Knowledge generated in this paper can be referenced by organizations that handle road and traffic data. Third, in the experiments, we evaluate the effectiveness of the proposed context-based optimal route (shortest path) between locations by comparing it with the conventional distance-based shortest path. A vehicle's optimal route may change because its velocity varies with unexpected but potential dynamic situations on the road. This study includes context variables such as 'road congestion', 'work', 'accident', and 'weather', which can alter traffic conditions and affect a moving vehicle's velocity. Since these context variables, except for 'weather', relate to road conditions, the relevant data were provided by the Korea Expressway Corporation; the 'weather' data were obtained from the Korea Meteorological Administration. The recognized contexts are classified as contexts that reduce vehicle velocity, which determines the velocity reduction rate. To find the optimal route (shortest path), we introduce the velocity reduction rate into the context cost for calculating a vehicle's velocity under composite contexts, when one event synchronizes with another.
We then propose a context-based optimal route (shortest path) algorithm based on dynamic programming. The algorithm consists of three steps. In the first (initialization) step, the departure and destination locations are given and the path step is initialized to 0. In the second step, as the path step increases, the moving costs between locations on the path are estimated using the velocity reduction rate for each context, taking composite contexts into account. In the third step, the optimal route (shortest path) is retrieved by back-tracking. In the proposed research model, we designed a framework comprising context awareness, moving-cost estimation (taking both composite and single contexts into account), and the optimal route (shortest path) algorithm based on dynamic programming; a minimal sketch of this procedure is given below. Through illustrative experimentation using the Wilcoxon signed-rank test, we show that context-based route planning is much more effective than distance-based route planning. In addition, we found that the optimal solution (shortest path) obtained by distance-based planning may not be optimal in real situations, because road conditions are dynamic and unpredictable and affect most vehicles' moving costs. For further study, more information is needed for a more accurate estimation of moving vehicles' costs, but this study remains applicable to reducing moving costs through effective route planning. For instance, it could support deliverers' decision making and enhance their decision satisfaction when they face unpredictable dynamic situations on the road. Overall, we conclude that taking contexts into account as part of the cost is a meaningful and sensible approach to resolving the optimal route problem.
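The sketch below illustrates the three steps under simplifying assumptions: each edge carries a distance and a single aggregated velocity reduction rate, the moving cost of an edge is its travel time at the context-reduced speed, and a Bellman-Ford-style dynamic program with back-tracking recovers the route. The aggregation of composite contexts and the base speed of 80 km/h are illustrative, not the paper's values.

```python
def context_shortest_path(nodes, edges, source, target, base_speed=80.0):
    """Context-based optimal route sketch.

    edges[(u, v)] holds (distance_km, velocity_reduction_rate), where the
    rate summarises contexts such as congestion, work, accident and weather.
    """
    INF = float("inf")
    cost = {n: INF for n in nodes}
    prev = {n: None for n in nodes}
    cost[source] = 0.0
    for _ in range(len(nodes) - 1):          # step-wise relaxation (DP over path steps)
        for (u, v), (dist, rate) in edges.items():
            travel_time = dist / (base_speed * (1.0 - rate))
            if cost[u] + travel_time < cost[v]:
                cost[v] = cost[u] + travel_time
                prev[v] = u
    path = [target]                           # back-tracking
    while prev[path[-1]] is not None:
        path.append(prev[path[-1]])
    return list(reversed(path)), cost[target]

nodes = ["A", "B", "C", "D"]
edges = {("A", "B"): (10, 0.0), ("B", "D"): (10, 0.6),   # shorter but congested
         ("A", "C"): (15, 0.0), ("C", "D"): (15, 0.0)}   # longer but free-flowing
print(context_shortest_path(nodes, edges, "A", "D"))      # picks A -> C -> D
```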

Hydrogeochemical and Environmental Isotope Study of Groundwaters in the Pungki Area (풍기 지역 지하수의 수리지구화학 및 환경동위원소 특성 연구)

  • 윤성택; 채기탁; 고용권; 김상렬; 최병영; 이병호; 김성용
    • Journal of the Korean Society of Groundwater Environment / v.5 no.4 / pp.177-191 / 1998
  • For various kinds of waters from the Pungki area, including surface water, shallow groundwater (<70 m deep) and deep groundwater (500∼810 m deep), an integrated study based on hydrochemical, multivariate statistical, thermodynamic, environmental isotopic (tritium, oxygen-hydrogen, carbon and sulfur), and mass-balance approaches was undertaken to elucidate the hydrogeochemical and hydrologic characteristics of the groundwater system in this gneiss area. Shallow groundwaters are typified as the Ca-HCO$_3$ type with higher concentrations of Ca, Mg, SO$_4$ and NO$_3$, whereas deep groundwaters are the Na-HCO$_3$ type with elevated concentrations of Na, Ba, Li, H$_2$S, F and Cl and are supersaturated with respect to calcite. The waters in the area fall largely into two groups: 1) surface waters and most shallow groundwaters, and 2) deep groundwaters together with one shallow groundwater sample. Seasonal compositional variations are recognized for the former. Multivariate statistical analysis indicates that three factors explain about 86% of the compositional variation observed in deep groundwaters: 1) plagioclase dissolution and calcite precipitation, 2) sulfate reduction, and 3) acid hydrolysis of hydroxyl-bearing minerals (mainly mica). Combined with the results of thermodynamic calculation, four plausible models of water/rock interaction, each involving the dissolution of plagioclase, kaolinite and micas and the precipitation of calcite, illite, laumontite, chlorite and smectite, are proposed by mass-balance modelling to explain the water quality of the deep groundwaters. Oxygen-hydrogen isotope data indicate that the deep groundwaters originated from local meteoric water recharged in a distant, topographically high mountainous region and underwent a greater degree of water/rock interaction during regional deep circulation, whereas the shallow groundwaters were recharged from nearby, topographically low areas. Tritium data show that recharge occurred in the pre-thermonuclear age for deep groundwaters (<0.2 TU) but in the post-thermonuclear age for shallow groundwaters (5.66∼7.79 TU). The $\delta^{34}S$ values of dissolved sulfate indicate that the high amounts of dissolved H$_2$S (up to 3.9 mg/L), a characteristic of deep groundwaters in this area, may derive from sulfate reduction. The $\delta^{13}C$ values of dissolved carbonates are controlled not only by the dissolution of carbonate minerals by dissolved soil CO$_2$ (for shallow groundwaters) but also by the reprecipitation of calcite (for deep groundwaters). An integrated model of the origin, flow and chemical evolution of the groundwaters in this area is proposed.
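For the calcite supersaturation mentioned above, the standard saturation-index calculation is sketched below; the ion activities are hypothetical and the value log Ksp ≈ −8.48 at 25 °C is a commonly used literature value, not taken from the paper's own thermodynamic calculations.

```python
import math

def calcite_saturation_index(a_ca, a_co3, log_ksp=-8.48):
    """SI = log10(IAP / Ksp) for calcite; SI > 0 means supersaturated,
    as reported for the deep Na-HCO3 groundwaters. log_ksp is an assumed
    25 degC literature value."""
    return math.log10(a_ca * a_co3) - log_ksp

# Hypothetical Ca2+ and CO3^2- activities (mol/L) for a deep groundwater sample
print(calcite_saturation_index(a_ca=4e-4, a_co3=2e-5))   # ~ +0.38 (supersaturated)
```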


Petrogenetic Study on the Foliated Granitoids in the Chonju and the Sunchang area (II) - In the Light of Sr and Nd Isotopic Properites - (전주 및 순창지역에 분포하는 엽리상 화강암류의 성인에 대한 연구 (II) - Sr 및 Nd 동위원소적 특성을 중심으로 -)

  • Na, Choon-Ki; Lee, In-Seong; Chung, Jae-Il
    • Economic and Environmental Geology / v.30 no.3 / pp.249-262 / 1997
  • The Sr and Nd isotopic compositions of two foliated granitic plutons located in the Chonju and Sunchang areas were determined in order to reconfirm the intrusion ages of the granitoids and to study the sources of the granitic magmas. The best-defined whole-rock Rb-Sr isochron for the Chonju foliated granite (CFGR) gives an age of $284{\pm}12Ma$, suggesting an early Permian intrusion age. In contrast, the whole-rock Rb-Sr data of the Sunchang foliated granite (SFGR) scatter widely on the isochron diagram with very little variation in the $^{87}Rb/^{86}Sr$ ratios and therefore yield no reliable age information. Furthermore, they show concordance between the mineral and whole-rock Rb-Sr isochrons and divide into two linear groups with roughly the same slopes but significantly different $^{87}Sr/^{86}Sr$ ratios, indicating some kind of Rb-Sr disturbance at the whole-rock scale and a difference in source material and/or magmatic evolution between the two subsets. The reconstructed isochrons of 243 Ma (defined from previously proposed data by omitting one sample point with a significantly higher $^{87}Rb/^{86}Sr$ ratio than the others) and 252 Ma (from those data combined with some from this study) strongly suggest that the SFGR was intruded appreciably earlier than previously proposed, although the reliability of these ages is still questionable owing to the high scatter of the data points, and further study is therefore necessary. All mineral isochrons for the investigated granites record a Jurassic to early Cretaceous thermal episode ranging from 160 Ma to 120 Ma. Their corresponding initial $^{87}Sr/^{86}Sr$ ratios correlate well with the whole-rock data, indicating that the mineral Rb-Sr systems of the investigated granites were redistributed by a postmagmatic thermal event during the Jurassic to early Cretaceous. The initial ${\varepsilon}Sr$ values for the CFGR (64.27 to 94.81) tend to be significantly lower than those for the SFGR (125.43 to 167.09). It is thus likely that there is a marked difference in magma source characteristics between the CFGR and the SFGR, although the possibility that an isotopic resetting event produced a high apparent initial ${\varepsilon}Sr$ in the SFGR cannot be ruled out. In contrast to ${\varepsilon}Sr$, both batholiths show highly restricted and negative initial ${\varepsilon}Nd$ values, ranging from -14.73 to -19.53 with an average of $-16.13{\pm}1.47$ in the CFGR and from -14.78 to -18.59 with an average of $-17.17{\pm}1.01$ in the SFGR. The highly negative initial ${\varepsilon}Nd$ values strongly suggest that large amounts of recycled old continental components took part in their evolution. Furthermore, this highly restricted variation in ${\varepsilon}Nd$ is significant because it requires that the old crustal source material from which the granitoid-producing melts were generated had a reasonably uniform Nd isotopic composition and a quite similar age. Calculated two-stage depleted-mantle model ages ($T_{DM}^2$) average $1.83{\pm}0.25Ga$ for the CFGR and $1.96{\pm}0.19Ga$ for the SFGR, suggesting the importance of a mid-Proterozoic episode in the genesis of the two foliated granites. Although it is not possible to determine the source rock compositions of the investigated foliated granites precisely, the Sr-Nd isotopic evidence indicates that a mid-crustal or, less probably, a lower-crustal granulitic source is the most likely candidate.
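For readers unfamiliar with the Rb-Sr isochron ages quoted above, the age follows from the isochron slope via the standard relation t = ln(slope + 1)/λ; the sketch below uses the conventional ⁸⁷Rb decay constant and an illustrative slope, not the paper's regression results.

```python
import math

LAMBDA_RB87 = 1.42e-11   # 87Rb decay constant in 1/yr (Steiger & Jaeger, 1977)

def rb_sr_isochron_age(slope):
    """Age in Ma from a whole-rock Rb-Sr isochron slope, t = ln(slope + 1) / lambda."""
    return math.log(slope + 1.0) / LAMBDA_RB87 / 1e6

# A slope of ~0.004041 corresponds to the ~284 Ma age quoted for the CFGR
print(rb_sr_isochron_age(0.004041))
```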


An Empirical Study on the Effect of CRM System on the Performance of Pharmaceutical Companies (고객관계관리 시스템의 수준이 BSC 관점에서의 기업성과에 미치는 영향 : 제약회사를 중심으로)

  • Kim, Hyun-Jung; Park, Jong-Woo
    • Journal of Intelligence and Information Systems / v.16 no.4 / pp.43-65 / 2010
  • Facing the increasingly complex environment of the past decade, many companies are adopting new strategic frameworks such as Customer Relationship Management (CRM) systems to achieve sustainable profitability and to survive severe competition. In many business areas, CRM systems have advanced considerably through continuous correction of defects and overall integration. However, pharmaceutical companies in Korea have been slow to adopt them, since they tend to hold fast to traditional sales and marketing based on the individual networks of sales representatives. In this context, this article empirically addresses the current status of CRM systems as well as their effects on the performance of pharmaceutical companies, applying the four perspectives of the BSC method: financial, customer, learning and growth, and internal process. Surveys were administered by e-mail and post to employers and employees working in pharmaceutical firms. Of the 140 responses collected, 113 were used for statistical analysis with the SPSS ver. 15 package; reliability analysis, factor analysis, and regression were performed. The study revealed that, as expected, the CRM system had a significant effect on improving both the financial and non-financial performance of pharmaceutical companies. The proposed regression model fits well, and among its components the CRM marketing information system showed a substantial impact on company outcomes in terms of profitability, growth, and investment. Useful analytical information from the CRM marketing information system appears to enable pharmaceutical firms to set up effective marketing and sales strategies, which results in favorable financial performance by ultimately enhancing value for stakeholders, not to mention short-term profit and mid-term growth potential. The CRM system influenced not only the financial but also the non-financial performance of pharmaceutical companies. Further analysis of each component showed that the CRM marketing information system had a statistically significant effect on performance, consistent with the financial results. The CRM system is believed to provide companies with an efficient way of managing customers through valuable standardized business processes and prompt responses to specific customers' needs; it consequently induces customer satisfaction and retention, improving performance over the long term. That is, it forms a virtuous circle of value creation as the cornerstone of sustainable growth. However, the research failed to provide evidence supporting the hypothesized favorable influence of the CRM sales representatives' records assessment system and the CRM customer analysis system on management performance. This result is considered to reflect salespeople's and respondents' limited understanding of the gap between their actual work duties and the far-sighted goals of the strategic analysis framework. Ordinary salespeople seem to dedicate themselves to short-term goals such as meeting sales targets and receiving incentive bonuses in a matter-of-fact style; as such, they tend to rely on personal networks and sales and promotional expenses rather than the CRM system. The findings propose a link between the CRM information system and performance, indicating empirically that pharmaceutical companies have been implementing CRM systems as an effective strategic business framework for more balanced achievement, grounded in an understanding of both the CRM system and integrated performance.
Through this initial empirical evidence, it suggests a positive impact of a supportive CRM system on firm performance, especially in the pharmaceutical industry. It also brings out unmet needs for more practical system design, improvement of employees' awareness, and increased system utilization in the field. Building on the insights from this exploratory study, confirmatory research with more appropriate measurement tools and a larger sample should be pursued.
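A minimal sketch of the kind of regression analysis described above (a BSC performance score regressed on CRM subsystem levels). The variable names and the synthetic survey data are invented purely to show the mechanics; they are not the study's data or coefficients.

```python
import numpy as np

# Synthetic illustration only: 113 survey responses, three CRM subsystem levels.
rng = np.random.default_rng(0)
X = rng.uniform(1.0, 5.0, size=(113, 3))      # hypothetical 5-point-scale predictors
y = 0.8 * X[:, 0] + 0.1 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(0.0, 0.3, size=113)

X1 = np.column_stack([np.ones(len(X)), X])    # add an intercept column
coef, residuals, rank, _ = np.linalg.lstsq(X1, y, rcond=None)
labels = ["const", "marketing_info_level", "rep_assessment_level", "customer_analysis_level"]
print({name: round(float(b), 2) for name, b in zip(labels, coef)})
```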

A Study for the Methodology of Analyzing the Operation Behavior of Thermal Energy Grids with Connecting Operation (열 에너지 그리드 연계운전의 운전 거동 특성 분석을 위한 방법론에 관한 연구)

  • Im, Yong Hoon; Lee, Jae Yong; Chung, Mo
    • KIPS Transactions on Computer and Communication Systems / v.1 no.3 / pp.143-150 / 2012
  • A simulation methodology, and a program based on it, are discussed for analyzing the effects of operating an existing district heating and cooling (DHC) system in network connection with an on-site CHP system. Practical simulations for arbitrary areas with various building compositions are carried out to analyze the operational features of both systems, and the various aspects of thermal energy grids with connecting operation are highlighted through detailed assessment of the predicted results. The intrinsic operational features of CHP prime movers (gas engines, gas turbines, etc.) are implemented by using performance data, i.e., actual operating efficiencies over the full- and part-load ranges. For simplicity, a simple mathematical correlation model is proposed to simulate the various changes on the existing DHC system side caused by the connecting operation, instead of performing separate cycle simulations. The empirical correlations are developed from hourly annual operation data for a branch of the Korea District Heating Corporation (KDHC) and implicitly relate the main operation parameters, such as fuel consumption by use and heat and power production. In the simulation, a variety of system configurations can be considered, combining candidate CHP prime movers with absorption- or turbo-type chillers of any kind and capacity. The thermal network operation simulations show that the newly proposed correlation-based methodology for modelling the existing DHC system effectively reflects the operational variations caused by thermal energy grids with connecting operation. The effects of the intrinsic features of the CHP prime movers (e.g., their different heat-to-power production ratios) and of various combinations of chiller types (absorption and turbo) on overall system operation are discussed in detail, together with the operation schemes and the corresponding simulation algorithms.
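A minimal sketch of the correlation-model idea described above: linear fits of heat and power production against fuel consumption built from hourly operation records. The sample numbers are hypothetical, and the actual correlation form fitted to the KDHC branch data is not reproduced.

```python
import numpy as np

def fit_dhc_correlation(fuel_use, heat_out, power_out):
    """Linear empirical correlations heat ~ a1*fuel + b1 and power ~ a2*fuel + b2,
    as a stand-in for the plant-specific correlation model described above."""
    heat_coef = np.polyfit(fuel_use, heat_out, deg=1)
    power_coef = np.polyfit(fuel_use, power_out, deg=1)
    return {"heat": heat_coef, "power": power_coef}

# Hypothetical hourly samples (fuel input and outputs in MWh)
fuel = np.array([10.0, 12.0, 15.0, 20.0, 25.0])
heat = np.array([4.5, 5.4, 6.8, 9.1, 11.2])
power = np.array([3.8, 4.6, 5.7, 7.7, 9.6])
print(fit_dhc_correlation(fuel, heat, power))
```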

Development of User Based Recommender System using Social Network for u-Healthcare (사회 네트워크를 이용한 사용자 기반 유헬스케어 서비스 추천 시스템 개발)

  • Kim, Hyea-Kyeong; Choi, Il-Young; Ha, Ki-Mok; Kim, Jae-Kyeong
    • Journal of Intelligence and Information Systems / v.16 no.3 / pp.181-199 / 2010
  • With the rapid progress of population aging and strong interest in health, the demand for new healthcare services is increasing. Until now, healthcare services have provided treatment after the fact, in a face-to-face manner, but related research shows that proactive treatment is more effective for preventing disease. In particular, existing healthcare services have limitations in preventing and managing metabolic syndrome, a lifestyle disease, because its causes are rooted in life habits. With the advent of ubiquitous technology, patients with metabolic syndrome can improve life habits such as poor eating and physical inactivity through u-healthcare services, free of the constraints of time and space. Therefore, much research on u-healthcare services focuses on providing personalized services for preventing and managing metabolic syndrome. For example, Kim et al. (2010) proposed a healthcare model that provides customized calories and nutrient ratios by analyzing the user's food preferences. Lee et al. (2010) suggested a customized diet recommendation service that considers basic information, vital signs, family disease history, and food preferences to prevent and manage coronary heart disease. Kim and Han (2004) demonstrated that web-based nutrition counseling affects the food intake and lipid levels of patients with hyperlipidemia. However, existing u-healthcare research focuses on providing predefined, one-way services, so users tend to lose interest in improving their life habits. To solve this problem, this research proposes a u-healthcare recommender system based on the collaborative filtering principle and a social network. The system follows the collaborative filtering principle but preserves local networks (small groups of similar neighbors) for each target user in order to recommend context-aware healthcare services. Our approach consists of the following five steps. In the first step, a user profile is created from the usage-history data for life-habit improvement, and a set of users known as neighbors is formed according to the degree of similarity between users, calculated by the Pearson correlation coefficient. In the second step, the target user obtains service information from his or her neighbors. In the third step, a top-N recommendation list of services is generated for the target user; in making the list, we apply multi-filtering based on the user's psychological context and body mass index (BMI) for detailed recommendation. In the fourth step, the personal information, i.e., the service usage history, is updated when the target user uses a recommended service. In the final step, the social network is reformed to continually provide qualified recommendations; for example, neighbors may be excluded from the social network if the target user does not like the recommendation list received from them. That is, this step updates each user's neighbors locally, always maintaining up-to-date local neighbors so that context-aware recommendations can be given in real time. The characteristics of our research are as follows. First, we develop a u-healthcare recommender system for improving life habits such as poor eating and physical inactivity.
Second, the proposed recommender system uses autonomous collaboration, which helps prevent user dropout and loss of interest in improving life habits. Third, the reformation of the social network is automated to maintain recommendation quality. Finally, this research implemented a mobile prototype system using Java and Microsoft Access 2007 to recommend the prescribed foods and exercises for chronic disease prevention provided by A university medical center. This research aims to prevent chronic illnesses and improve users' lifestyles by providing context-aware, personalized food and exercise services with the help of similar users' experience and knowledge. We expect that users of this system can improve their life habits with the help of a handheld smartphone, because the system's autonomous collaboration arouses interest in healthcare. A minimal sketch of the neighbor-formation and top-N recommendation steps follows.
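This sketch covers only neighbor formation by Pearson similarity and a similarity-weighted top-N list; the profile contents, the BMI/psychological-context multi-filtering, and the neighbor-reformation step are simplified or omitted, and all names and data are illustrative.

```python
import math

def pearson(a, b):
    """Pearson correlation between two users' service-usage vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sd_a = math.sqrt(sum((x - ma) ** 2 for x in a))
    sd_b = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sd_a * sd_b) if sd_a and sd_b else 0.0

def recommend_top_n(target, profiles, usage, n_services=3, k_neighbors=2):
    """Form the target user's neighbors by Pearson similarity over
    usage-history profiles, then rank the services those neighbors used
    by similarity-weighted counts and return the top-N not yet used."""
    sims = sorted(((pearson(profiles[target], profiles[u]), u)
                   for u in profiles if u != target), reverse=True)[:k_neighbors]
    scores = {}
    for sim, u in sims:
        for service in usage[u]:
            scores[service] = scores.get(service, 0.0) + sim
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [s for s in ranked if s not in usage[target]][:n_services]

# Hypothetical usage-history profiles and service sets
profiles = {"me": [3, 0, 1, 2], "u1": [3, 1, 1, 2], "u2": [0, 2, 0, 1], "u3": [2, 0, 1, 3]}
usage = {"me": {"walking"}, "u1": {"walking", "low-salt diet"},
         "u2": {"stretching"}, "u3": {"walking", "swimming"}}
print(recommend_top_n("me", profiles, usage))   # e.g. ['low-salt diet', 'swimming']
```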