• Title/Summary/Keyword: rate dependent


Skin Permeability Study of Flavonoids Derived from Smilax china: Utilizing the Franz Diffusion Cell Assay

  • Sun-Beom Kwon;Ji-Hui Kim;Mi-Su Kim;Su-Hong Kim;Seong-Min Lee;Moo-Sung Kim;Jun-Sub Kim;Gi-Seong Moon;Hyang-Yeol Lee
    • Journal of the Korean Applied Science and Technology
    • /
    • v.41 no.1
    • /
    • pp.9-18
    • /
    • 2024
  • Smilax china is known for its excellent antimicrobial, antioxidant, and anti-inflammatory properties. As a foundational study for applying the functionality of Smilax china extracts to cosmetics, it is necessary to investigate the concentration-dependent skin permeation characteristics of the flavonoids in the extract, namely quercetin, catechin, and naringenin. The Franz diffusion cell assay therefore serves as a crucial method for this basic research on the functional use of Smilax china extracts in cosmetic applications. This investigation examined the percutaneous permeability characteristics of flavonoids derived from Smilax china. Applying Marzulli's definition, the Kp value of quercetin was categorized as "fast" at 0.1 mg/mL and "moderate" at 0.2 and 0.4 mg/mL; notably, the permeation rate declined with increasing concentration. For naringenin, flux values were 0.69, 1.07, and 1.42 µg/hr/cm² at concentrations of 0.1, 0.2, and 0.4 mg/mL, respectively, with corresponding Kp values of 6.95, 5.34, and 3.56. Naringenin's Kp value fell into the "moderate" category at all concentrations, and, as observed with quercetin, the permeation rate decreased at higher concentrations. Likewise, for catechin, flux values were 0.75, 1.09, and 1.66 µg/hr/cm², with corresponding Kp values of 7.55, 5.46, and 4.16; catechin's Kp value was consistently classified as "moderate" across all concentrations. Quercetin, catechin, and naringenin, the active anti-inflammatory ingredients of Smilax china extracts, were found to exhibit above-average skin permeation. This supports their suitability as natural materials for functional cosmetics aimed at preventing acne and reducing inflammation.
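
The reported flux and Kp figures are mutually consistent under the conventional Franz-cell definition Kp = J/C (steady-state flux divided by donor concentration), with Kp read in units of 10⁻³ cm/hr; the unit is an inference from the numbers, not stated in the abstract. A minimal Python sketch using only the quoted values reproduces the reported Kp values to within rounding:

```python
# Permeability coefficient from Franz diffusion cell data: Kp = J / C.
# Flux values are quoted from the abstract; the 10^-3 cm/hr unit for Kp
# is inferred from the numbers, not stated in the abstract.
flux_ug_hr_cm2 = {  # steady-state flux J (ug/hr/cm^2) by donor conc. (mg/mL)
    "naringenin": {0.1: 0.69, 0.2: 1.07, 0.4: 1.42},
    "catechin":   {0.1: 0.75, 0.2: 1.09, 0.4: 1.66},
}

for compound, series in flux_ug_hr_cm2.items():
    for conc_mg_ml, flux in series.items():
        conc_ug_cm3 = conc_mg_ml * 1000       # mg/mL -> ug/cm^3
        kp = flux / conc_ug_cm3               # cm/hr
        print(f"{compound} @ {conc_mg_ml} mg/mL: Kp = {kp * 1e3:.2f} x 10^-3 cm/hr")
```

The computed values (6.90, 5.35, 3.55 for naringenin; 7.50, 5.45, 4.15 for catechin) match the reported figures to within rounding, and also show why Kp falls as concentration rises: flux grows sublinearly with donor concentration.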

The Effects of Intravenous Methylprednisolone Pulse Therapy by Mendoza Protocol in Primary and Secondary Nephrotic Syndrome (일차성 및 이차성 신증후군에서 Mendoza Protocol에 의한 Intravenous Methylprednisolone Pulse Therapy의 효과)

  • Lee Kyoung-Jae;Han Jae-Hyuk;Lee Young-Mock;Kim Ji-Hong;Kim Pyung-Kil
    • Childhood Kidney Diseases
    • /
    • v.5 no.2
    • /
    • pp.117-124
    • /
    • 2001
  • Purpose: Since Mendoza's (1990) report that long-term methylprednisolone pulse therapy by the Mendoza protocol (MP therapy) is a good treatment option in focal segmental glomerulosclerosis (FSGS), there have been reports on the effects of this therapy in steroid-resistant nephrotic syndrome. However, no studies have examined the effects of MP therapy in steroid-dependent nephrotic syndrome and secondary nephrotic syndrome. In this study, we investigated the effects of long-term MP therapy in primary and secondary nephrotic syndrome in which previous treatment options were not effective. Methods: We chose 10 children diagnosed with steroid-dependent minimal change nephrotic syndrome (SD-MCNS) who had shown frequent relapses during the immunosuppressive or cytotoxic therapy period, together with 6 children with FSGS and 5 children with secondary nephrotic syndrome who had shown no response during the previous therapy period. We treated these patients according to the Mendoza protocol, involving infusions of high-dose methylprednisolone over 82 weeks, often in combination with oral cyclophosphamide. Results: In all 10 children with SD-MCNS, complete remission was achieved on average 18±9 days after MP therapy was started. However, all of these children relapsed during or after MP therapy. In these children, the mean relapse rate prior to MP therapy was 2.1±1.0 relapses/year, which fell to 1.4±0.9 relapses/year during MP therapy (P>0.05) and rose to 2.7±1.0 relapses/year after MP therapy. Of the 6 children with FSGS, 4 (67%) showed complete remission, of whom 3 (50%) remained in remission during the follow-up period of 1.2±0.7 years after the end of MP therapy; 2 children (33%) showed no response. All 5 children with secondary nephrotic syndrome achieved remission and remained in remission during the follow-up period of 1.7±0.6 years. The only side effect of MP therapy was transient hypertension in 10 of all subjects during the intravenous infusion of methylprednisolone. Conclusion: We conclude that although long-term MP therapy is not effective in the treatment of SD-MCNS, it is an effective therapy against intractable FSGS and secondary nephrotic syndrome. (J Korean Soc Pediatr Nephrol 2001;5:117-24)


Discovering Promising Convergence Technologies Using Network Analysis of Maturity and Dependency of Technology (기술 성숙도 및 의존도의 네트워크 분석을 통한 유망 융합 기술 발굴 방법론)

  • Choi, Hochang;Kwahk, Kee-Young;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.101-124
    • /
    • 2018
  • Recently, most technologies have developed either through the advancement of a single technology or through interaction with other technologies. In particular, many exhibit convergence arising from the interaction between two or more techniques. Efforts to respond to technological change in advance, by forecasting the promising convergence technologies that will emerge in the near future, are also continuously increasing. Accordingly, many researchers are attempting various analyses to forecast promising convergence technologies. By its mode of generation, a convergence technology combines the characteristics of several technologies, so forecasting promising convergence technologies is much more difficult than forecasting general technologies with high growth potential. Nevertheless, some achievements have been made in attempts to forecast promising technologies using big data analysis and social network analysis. Data-driven studies of convergence technology are actively conducted on the themes of discovering new convergence technologies and analyzing their trends, and information about new convergence technologies is consequently being provided more abundantly than in the past. However, existing methods for analyzing convergence technology have several limitations. First, most studies analyze data through predefined technology classifications. Recent technologies tend to be convergent and thus consist of technologies from various fields, so a new convergence technology may not belong to any predefined class; the existing approach therefore does not properly reflect the dynamic change of the convergence phenomenon. Second, to forecast promising convergence technologies, most existing methods use general-purpose indicators, which do not fully exploit the specificity of the convergence phenomenon. A new convergence technology is highly dependent on the existing technologies from which it originates; depending on changes in those technologies, it can grow into an independent field or disappear rapidly. In existing analyses, the growth potential of a convergence technology is judged through traditional, general-purpose indicators that do not reflect the principle of convergence: new technologies emerge from two or more mature technologies, and grown technologies in turn affect the creation of further technologies. Third, previous studies do not provide objective methods for evaluating the accuracy of models that forecast promising convergence technologies. Because of the complexity of the field, forecasting promising technologies has received relatively little attention in convergence research, and it is difficult to find a method for evaluating the accuracy of such models. To activate this field, it is important to establish a method for objectively verifying and evaluating the accuracy of the model proposed by each study.
To overcome these limitations, we propose a new method for analyzing convergence technologies. First, through topic modeling, we derive a new technology classification based on text content; it reflects the dynamic change of the actual technology market rather than a fixed classification standard. Next, we identify influence relationships between technologies through the topic-correspondence weights of each document and structure them into a network. We then devise a centrality indicator, PGC (potential growth centrality), to forecast the future growth of a technology from the centrality information of each technology; it reflects the convergence characteristics of each technology in terms of technology maturity and the interdependence between technologies. Along with this, we propose a method to evaluate the accuracy of the forecasting model by measuring the growth rate of promising technologies, based on the variation of potential growth centrality by period. In this paper, we conduct experiments with 13,477 patent documents to evaluate the performance and practical applicability of the proposed method. The results confirm that the forecasting model based on the proposed centrality indicator achieves forecast accuracy up to about 2.88 times higher than that of models based on currently used network indicators.
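
The abstract does not give the exact PGC formula, so the following Python sketch is only a hypothetical reconstruction of the pipeline it describes: LDA topics stand in for the text-driven classification, per-document topic weights build the technology network, and a weighted eigenvector centrality stands in for PGC.

```python
import networkx as nx
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy stand-ins for the 13,477 patent abstracts used in the paper.
docs = [
    "lithium battery electrode material with graphene coating",
    "graphene sensor array for wearable health monitoring",
    "wearable device measuring heart rate and activity",
    "battery management circuit for electric vehicle charging",
    "machine learning model for sensor data classification",
    "electric vehicle motor control using machine learning",
]

# 1) Topic modeling gives a text-driven technology classification.
tf = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topic = lda.fit_transform(tf)   # topic-correspondence weights per document

# 2) Topics that co-occur strongly within documents are linked in a network.
adj = doc_topic.T @ doc_topic       # topic-by-topic co-weight matrix
np.fill_diagonal(adj, 0.0)
G = nx.from_numpy_array(adj)

# 3) Hypothetical proxy for PGC: weighted eigenvector centrality, read as
#    how central a technology is among mature, interdependent technologies.
pgc = nx.eigenvector_centrality_numpy(G, weight="weight")
print(sorted(pgc.items(), key=lambda kv: -kv[1]))  # most "promising" topics first
```

With a real corpus, the toy documents would be replaced by the patent texts and the number of topics tuned accordingly.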

Effect of H₂O₂ on Alveolar Epithelial Barrier Properties (폐상피세포 장벽에 대한 H₂O₂의 영향)

  • Suh, Duk-Joon;Cho, Se-Heon;Kang, Chang-Woon
    • Tuberculosis and Respiratory Diseases
    • /
    • v.40 no.3
    • /
    • pp.236-249
    • /
    • 1993
  • Background: Among the injurious agents to which the lung airspaces are constantly exposed are reactive oxygen species, which are widely believed to be implicated in the etiology of lung injuries. To elucidate how this oxidant causes lung cell injury, we investigated the effects of exogenous H₂O₂ on alveolar epithelial barrier characteristics. Methods: Rat type II alveolar epithelial cells were plated onto tissue culture-treated polycarbonate membrane filters. The resulting confluent monolayers on days 3 and 4 were mounted in a modified Ussing chamber and bathed on both sides with HEPES-buffered Ringer solution. The changes in short-circuit current (Isc) and monolayer resistance (R) in response to the exogenous hydroperoxide were measured. To determine the degree to which cellular catalase protects the barrier against H₂O₂ injury, experiments were repeated in the presence of 20 mM aminotriazole (ATAZ, an inhibitor of catalase) in the same bathing fluid as the hydroperoxide. Results: These monolayers have a high transepithelial resistance (>2000 Ω·cm²) and actively transport Na⁺ from the apical fluid. H₂O₂ (0-100 mM) was then delivered to either the apical or the basolateral fluid. The results indicated that H₂O₂ decreased Isc and R gradually in a dose-dependent manner. The effective concentration of apical H₂O₂ at which Isc (or R) was decreased by 50% at one hour (ED₅₀) was about 4 mM, whereas basolateral H₂O₂ exposure yielded an ED₅₀ for Isc (and R) of about 0.04 mM. Inhibition of cellular catalase yielded an ED₅₀ for Isc (and R) of about 0.4 mM when H₂O₂ was given apically, while the ED₅₀ for basolateral exposure did not change in the presence of ATAZ. The rate of H₂O₂ consumption in apical and basolateral bathing fluids was the same, while cellular catalase activity rose gradually with time in culture. Conclusion: Our data suggest that basolateral H₂O₂ may directly affect membrane components (e.g., Na⁺,K⁺-ATPase) located on the basolateral cell surface. Apical H₂O₂, on the other hand, may be largely degraded by catalase as it passes through the cells before reaching these membrane components. We conclude that alveolar epithelial barrier integrity, as measured by Isc and R, is compromised by H₂O₂, being relatively sensitive to basolateral (and insensitive to apical) H₂O₂.
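
The abstract defines ED₅₀ as the concentration at which Isc (or R) falls by 50% at one hour. As a worked illustration of how such a value can be estimated, the sketch below fits a Hill-type dose-response curve to made-up data (not the paper's raw measurements) chosen to land near the reported apical ED₅₀ of about 4 mM:

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative (made-up) dose-response data: fraction of baseline Isc
# remaining after 1 h at each apical H2O2 dose; not the paper's raw data.
dose_mM = np.array([0.1, 0.5, 1.0, 2.0, 4.0, 10.0, 50.0])
isc_frac = np.array([0.98, 0.92, 0.80, 0.66, 0.49, 0.30, 0.08])

def hill(c, ed50, n):
    """Fraction of baseline response remaining at concentration c."""
    return 1.0 / (1.0 + (c / ed50) ** n)

(ed50, n), _ = curve_fit(hill, dose_mM, isc_frac, p0=(1.0, 1.0))
print(f"ED50 ~ {ed50:.2f} mM (Hill slope {n:.2f})")
```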


An Empirical Study on the Influencing Factors for Big Data Intended Adoption: Focusing on the Strategic Value Recognition and TOE Framework (빅데이터 도입의도에 미치는 영향요인에 관한 연구: 전략적 가치인식과 TOE(Technology Organizational Environment) Framework을 중심으로)

  • Ka, Hoi-Kwang;Kim, Jin-soo
    • Asia pacific journal of information systems
    • /
    • v.24 no.4
    • /
    • pp.443-472
    • /
    • 2014
  • To survive in the global competitive environment, an enterprise must be able to solve various problems and find optimal solutions effectively. Big data is perceived as a tool for solving enterprise problems and improving competitiveness through its varied problem-solving and advanced predictive capabilities. Owing to this remarkable potential, implementations of big data systems have increased across enterprises around the world. Big data is now called the 'crude oil' of the 21st century and is expected to confer competitive superiority. The reason big data is in the limelight is that, while conventional IT technology had largely reached the limits of its possibilities, big data goes beyond those limits and can be used to create new value, such as business optimization and new business creation, through analysis. However, because big data has often been introduced hastily, without considering the strategic value to be derived and achieved through it, firms experience difficulties in deriving strategic value and utilizing the data. According to a survey of 1,800 IT professionals from 18 countries worldwide, only 28% of corporations reported utilizing big data well, and many respondents said they had difficulty deriving strategic value and operating through big data. Before introducing big data, the strategic value should be identified and environmental factors, such as internal and external regulations and systems, should be considered; these factors, however, were not well reflected. The cause of failure turned out to be that big data was introduced in response to IT trends and the surrounding environment, hastily and without the necessary preconditions in place. For a successful introduction, the strategic value obtainable through big data must be clearly understood and a systematic analysis of the environment and applicability is essential; yet because corporations consider only partial achievements and technological aspects, successful introductions are not being made. Previous research shows that most big data studies focus on concepts, cases, and practical suggestions without empirical study. The purpose of this study is to provide a theoretically and practically useful implementation framework and strategies for big data systems by conducting a comprehensive literature review, identifying the factors influencing successful big data systems implementation, and analyzing empirical models. To this end, the elements that can affect the intention to introduce big data were derived by reviewing success factors of information systems, strategic value perception factors, environmental considerations for information system introduction, and the big data literature, and a structured questionnaire was developed. The questionnaire was then administered to, and statistically analyzed with, the people in charge of big data inside corporations.
According to the statistical analysis, the strategic value perception factors and the intra-industry environmental factors positively affected the intention to introduce big data. The theoretical, practical, and policy implications of the results are as follows. The first theoretical implication is that this study theoretically proposes the factors affecting the intention to introduce big data by reviewing strategic value perception, environmental factors, and prior big data studies, and proposes variables and measurement items that were empirically analyzed and verified. The study is meaningful in that it measured the influence of each variable on introduction intention by verifying the relationships between the independent and dependent variables through a structural equation model. Second, this study defines the independent variables (strategic value perception, environment), the dependent variable (introduction intention), and the moderating variables (business type and firm size) for big data introduction intention, and lays a theoretical basis for subsequent empirical studies of the big data field by developing measurement items with demonstrated reliability and validity. Third, by verifying the significance of the strategic value perception factors and environmental factors proposed in prior studies, this study will aid later empirical research on the factors affecting big data introduction. The practical implications are as follows. First, this study lays an empirical foundation for the big data field by investigating the causal relationship between strategic value perception and environmental factors and introduction intention, and by proposing measurement items with demonstrated reliability and validity. Second, this study finds that strategic value perception positively affects big data introduction intention, underscoring the importance of strategic value perception. Third, the study proposes that a corporation introducing big data should do so only after a precise analysis of its industry's internal environment. Fourth, by showing that the influencing factors differ with firm size and business type, the study proposes that the size and business type of the corporation should be considered when introducing big data. The policy implications are as follows. First, more varied utilization of big data is needed. The strategic value of big data can be approached in various ways, in products and services, productivity, and decision making, and can be utilized across all business fields; yet the areas that major domestic corporations are considering are limited to parts of the product and service fields. Accordingly, when introducing big data, it is necessary to review utilization in detail and design the big data system in a form that maximizes its utilization. Second, the study identifies burdens in the introduction phase: the cost of system introduction, difficulty of utilizing the system, and lack of credibility of supplier corporations.
Since global IT corporations dominate the big data market, domestic corporations cannot help but depend on foreign corporations for big data introduction. Considering that Korea, despite being a world-leading IT country, has no global IT corporations, big data can be seen as a chance to rear world-class corporations, and the government will need to foster star corporations through active policy support. Third, corporations lack the internal and external professional manpower for big data introduction and operation. In big data, deriving valuable insight from data matters more than system construction itself; this requires talent equipped with academic knowledge and experience across fields such as IT, statistics, strategy, and management, and such talent should be trained through systematic education. This study lays a theoretical basis for empirical studies of big data-related fields by identifying and verifying the main variables that affect big data introduction intention, and is expected to offer useful guidelines for corporations and policy makers considering big data implementation.
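
As a rough illustration of the kind of analysis the study describes, the sketch below uses ordinary multiple regression as a simplified stand-in for the paper's structural equation model; the construct names, composite scores, and coefficients are all invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200  # hypothetical survey respondents

# Made-up composite scores (e.g., Likert-scale averages per construct).
df = pd.DataFrame({
    "strategic_value": rng.normal(3.5, 0.6, n),   # strategic value perception
    "environment":     rng.normal(3.2, 0.7, n),   # TOE environmental factors
})
df["adoption_intention"] = (0.5 * df["strategic_value"]
                            + 0.3 * df["environment"]
                            + rng.normal(0, 0.4, n))

# Path coefficients: both predictors should load positively, mirroring the
# paper's finding that strategic value perception and environment matter.
X = sm.add_constant(df[["strategic_value", "environment"]])
print(sm.OLS(df["adoption_intention"], X).fit().summary())
```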

A Study on the Meaning and Strategy of Keyword Advertising Marketing

  • Park, Nam Goo
    • Journal of Distribution Science
    • /
    • v.8 no.3
    • /
    • pp.49-56
    • /
    • 2010
  • At the initial stage of Internet advertising, banner advertising came into fashion. As the Internet developed into a central part of daily life and competition in the online advertising market grew fierce, there was not enough space for banner advertising, which rushed only to portal sites. All these factors were responsible for an upsurge in advertising prices. Consequently, the high-cost, low-efficiency problems of banner advertising were raised, which led to the emergence of keyword advertising as a new type of Internet advertising to replace its predecessor. In the early 2000s, when Internet advertising started to take off, display advertising, including banner advertising, dominated the Net. However, display advertising showed signs of gradual decline and registered negative growth in 2009, whereas keyword advertising grew rapidly and began to outdo display advertising from 2005. Keyword advertising refers to the advertising technique that exposes relevant advertisements at the top of search-result pages when one searches for a keyword. Instead of exposing advertisements to unspecified individuals as banner advertising does, keyword advertising, a targeted advertising technique, shows advertisements only when customers search for a desired keyword, so that only highly prospective customers are given a chance to see them. In this context, it is also referred to as search advertising. It is regarded as more aggressive advertising with a higher hit rate than its predecessors in that, instead of the seller discovering customers and running advertisements for them as with TV, radio, or banner advertising, it exposes advertisements to customers who come visiting. Keyword advertising makes it possible for a company to seek publicity online simply by making use of a single word, and to achieve maximum efficiency at minimum cost. Its strong point is that customers can directly reach the products in question, making it more efficient than advertising in mass media such as TV and radio. Its weak point is that a company must register its advertisement on each and every portal site and finds it hard to exercise substantial supervision over the advertisement, with the possibility of advertising expenses exceeding profits. Keyword advertising serves as the most appropriate advertising method for the sales and publicity of small and medium enterprises, which need maximum advertising effect at low advertising cost. At present, keyword advertising is divided into CPC advertising and CPM advertising. The former, also referred to as advertising based on a metered-rate system, is known as the most efficient technique: a company pays according to the number of clicks on the keyword that users have searched. It is representatively adopted by Overture, Google's AdWords, Naver's Clickchoice, and Daum's Clicks. CPM advertising depends on a flat-rate payment system, making a company pay for its advertisement on the basis of the number of exposures, not the number of clicks. This method fixes the price of an advertisement per 1,000 exposures, and is mainly adopted by Naver's Timechoice, Daum's Speciallink, and Nate's Speedup. At present, the CPC method is the most frequently adopted.
The weak point of the CPC method is that advertising costs can rise through repeated clicks from the same IP. If a company makes good use of strategies for maximizing the strong points of keyword advertising and complementing its weak points, it is highly likely to turn visitors into prospective customers. Accordingly, an advertiser should analyze customers' behavior and approach them in a variety of ways, trying hard to find out what they want, and should use multiple keywords when running ads. When first running an ad, the advertiser should give priority to keyword selection, considering how many search engine users will click the keyword in question and how much the advertisement will cost. As the popular keywords that search engine users frequently use carry a high unit cost per click, advertisers with a small initial advertising budget should pay attention to detailed keywords suited to their budget. Detailed keywords, also referred to as peripheral or extension keywords, are combinations of major keywords. Most keywords are in the form of text. The biggest strength of text-based advertising is that it looks like search results, provoking little antipathy; however, it fails to attract much attention precisely because most keyword advertising is text. Image-embedded advertising is easier to notice thanks to its images, but it is exposed on the lower part of the web page and is plainly recognized as an advertisement, which leads to a low click-through rate; its strong point is that its prices are lower than those of text-based advertising. If a company owns a logo or a product that people recognize easily, it is well advised to make good use of image-embedded advertising to attract Internet users' attention. Advertisers should analyze their logs and examine customers' responses based on the site's events and product composition as a vehicle for monitoring their behavior in detail. Keyword advertising also allows them to analyze the advertising effects of exposed keywords through log analysis. Log analysis refers to a close analysis of the current situation of a site based on information about visitors, namely the number of visitors, page views, and cookie values. A user's IP, the pages used, the time of use, and cookie values are stored in the log files generated by each web server. The log files contain a huge amount of data, and since it is almost impossible to analyze them directly, one analyzes them using log analysis solutions. The generic information that can be extracted from log analysis tools includes total page views, average page views per day, basic page views, page views per visit, total hits, average hits per day, hits per visit, the number of visits, average visits per day, the net number of unique visitors, average visitors per day, one-time visitors, visitors who have come more than twice, and average usage hours.
Such data are also useful for analyzing the situation and current status of rival companies and for benchmarking. As keyword advertising exposes advertisements exclusively on search-result pages, competition among advertisers attempting to preoccupy popular keywords is very fierce. Some portal sites keep giving priority to existing advertisers, whereas others give all advertisers a chance to purchase the keywords in question once the advertising contract expires. On sites that give priority to established advertisers, an advertiser who relies on keywords sensitive to seasons and timeliness may as well purchase a vacant advertising slot in advance, lest the appropriate timing for advertising be missed. Naver, however, gives no priority to existing advertisers for any keyword advertisements; there, one can preoccupy keywords by entering into a contract after confirming the contract period. This study is designed to examine marketing for keyword advertising and to present effective strategies for keyword advertising marketing. At present, the Korean CPC advertising market is virtually monopolized by Overture, whose strong points are its CPC charging model and the fact that its advertisements are registered at the top of the most representative portal sites in Korea. These advantages make it the most appropriate medium for small and medium enterprises. However, Overture's CPC method has weak points too: it is not a perfect advertising model among the search advertisements in the online market. It is therefore absolutely necessary that small and medium enterprises, including independent shopping malls, complement the weaknesses of the CPC method and make good use of strategies for maximizing its strengths so as to increase their sales and create a point of contact with customers.
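
The CPC/CPM distinction above reduces to simple arithmetic; the sketch below compares the two billing models for one hypothetical campaign (all prices and rates invented) and derives the click-through rate at which they cost the same.

```python
# Hypothetical comparison of CPC vs CPM billing for one keyword campaign.
# All prices and rates are invented for illustration.
impressions = 100_000          # ad exposures on search-result pages
ctr = 0.02                     # click-through rate (2%)
cpc_price = 300                # KRW charged per click (CPC model)
cpm_price = 5_000              # KRW charged per 1,000 exposures (CPM model)

clicks = impressions * ctr
cpc_cost = clicks * cpc_price                  # pay only when users click
cpm_cost = impressions / 1000 * cpm_price      # pay for exposure, clicks or not

print(f"CPC: {cpc_cost:,.0f} KRW for {clicks:.0f} clicks")
print(f"CPM: {cpm_cost:,.0f} KRW for {impressions:,} exposures")
# Break-even CTR where the two models cost the same:
print(f"break-even CTR: {cpm_price / 1000 / cpc_price:.2%}")
```

At these invented prices, CPC is cheaper whenever the CTR stays below about 1.67%; above that, CPM becomes the better deal.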


A Study on the Expressed Desire at Discharge of Patients to Use Home Nursing and Affecting Factors of the Desire (퇴원환자의 가정간호 이용의사와 관련 요인)

  • Lee, Ji-Hyun;Lee, Young-Eun;Lee, Myung-Hwa;Sohn, Sue-Kyung
    • The Korean Journal of Rehabilitation Nursing
    • /
    • v.2 no.2
    • /
    • pp.257-270
    • /
    • 1999
  • The purpose of this study is to investigate factors related to the intention of chronic disease patients discharged from a university hospital to use home nursing. The study selected 153 patients who had been hospitalized at and discharged from K university hospital with diagnoses of cancer, hypertension, diabetes, or cerebrovascular accident, and performed interviews with them and reviews of their medical records. The direct-interview survey and medical record review were conducted from June 28 to Aug. 30, 1998. Frequencies and means were computed to describe the study subjects, and the χ²-test, t-test, factor analysis, and multiple logistic regression analysis were applied to the data. The following results were obtained. 1) Men accounted for 58.8% of the subjects and women 41.2%. The subjects averaged 41.3 years of age, and a monthly income of 0.99 million won or below was the most common bracket, at 34.6%. Of the total, 87.6% resided in cities and 12.4% in counties. Most had been discharged with a diagnosis of cancer (51.6%), followed by hypertension (24.2%), diabetes (13.7%), and cerebrovascular accident (7.2%). 2) 93.5% of the patients had the intention of using home nursing and 6.5% did not. Among those with the intention, 85.6% were willing to pay for home nursing and 14.4% were not. The subjects expected the nursing fee to be 9,143 won on average, and 47.7% preferred national authorities as the main providers. 86.3% of the subjects thought that the main advantage of home nursing is that it makes it possible to learn nursing methods at home, thereby improving the ability of patients and their families to solve health problems. 3) Relations between the intention of use and the subjects' socio-demographic, home environment, disease, and physical function characteristics showed no statistically significant differences. Compared with those who had no intention of using home nursing, the group with the intention had more cases of male patients, age 39 or below, city residence, five or more family members, no available home nursing provider, discharge from a non-hospitalization building, disease duration of five months or less, hospitalization of ten days or more, no hospitalization within the recent month, two or more hospitalizations, discharge with no demand for special treatment, operations undergone, poor treatment results, discharge with demand for rehabilitation services, physical disablement, and high daily-life evaluation scores. 4) Among the patients intending to use home nursing, 47.6% demanded technical nursing and 55.9% supportive nursing. In technical nursing, 'inject into a blood vessel' and 'treat pustules and teach basic prevention methods' topped the list at 57.4% each. Among demands for supportive nursing, 'observe patients' status and refer them to hospitals or community resources as available, if necessary' was the most requested, at 59.5%. Regarding willingness to pay, 39.2% of those wishing to use home nursing responded that they would pay for technical services and 20.2% for supportive services.
In detail, 70.0% were willing to pay for the service described as 'inject into a blood vessel', the highest among technical services, and 30.7% for the service described as 'teach exercises needed to make the patient's body move', the highest among supportive services. When this was analyzed as the relation between need (the need for home nursing) and demand (willingness to pay for home nursing), the ratio of need to demand was found to be two to three times higher in technical nursing (0.82) than in supportive nursing (0.35). Among technical nursing services, muscle injection (1.26) ranked first, while among supportive nursing services, 'teach exercises needed for making patients move their bodies normally' (0.58) ranked first. 5) Factors I (satisfaction with hospital services), II (recognition of disease state), III (economy), and IV (period of disease) accounted for 34.4%, 13.8%, 11.9%, and 9.2%, respectively, of the factors related to the subjects' intention to use home nursing, totaling 59.3%. In conclusion, most chronic disease patients intend to use hospital-based home nursing, and satisfaction with hospital services is the factor that affects the intention most. Thus a post-discharge management system is needed to continue providing health management to patients after they leave the hospital. Further, supportive services should be provided so that those who are satisfied with hospital services can return to their community and live independent lives. Based on these results, the researcher makes the following recommendations. 1) Because home nursing is increasingly needed due to the sharp increase in chronic disease patients and elderly people, related rules and regulations should be established and implemented. 2) Hospital nurses specializing in home nursing should be cultivated.


A Study on the Prediction Model of Stock Price Index Trend based on GA-MSVM that Simultaneously Optimizes Feature and Instance Selection (입력변수 및 학습사례 선정을 동시에 최적화하는 GA-MSVM 기반 주가지수 추세 예측 모형에 관한 연구)

  • Lee, Jong-sik;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.147-168
    • /
    • 2017
  • There have long been many studies on accurate stock market forecasting in academia, and various forecasting models using various techniques now exist. Recently, many attempts have been made to predict stock indices using machine learning methods, including deep learning. While both fundamental analysis and technical analysis are used for traditional stock investment decisions, technical analysis is more useful for short-term trading prediction and for statistical and mathematical techniques. Most studies using these technical indicators have modeled stock price prediction as a binary classification, rising or falling, of future market movements (usually the next trading day). However, binary classification has many unfavorable aspects for predicting trends, identifying trading signals, or signaling portfolio rebalancing. In this study, we predict the stock index by expanding the existing binary scheme into a multi-class system of stock index trends (upward trend, boxed, downward trend). To solve this multi-class problem, rather than techniques such as Multinomial Logistic Regression (MLOGIT), Multiple Discriminant Analysis (MDA), or Artificial Neural Networks (ANN), we propose an optimization model that uses a Genetic Algorithm as a wrapper around Multi-class Support Vector Machines (MSVM), which have proven superior in prediction performance. In particular, the proposed model, named GA-MSVM, is designed to maximize model performance by optimizing not only the kernel function parameters of the MSVM but also the selection of input variables (feature selection) and of instances (instance selection). To verify the performance of the proposed model, we applied it to real data. The results show that the proposed method is more effective than the conventional multi-class SVM, which has been known to show the best prediction performance so far, as well as existing artificial intelligence and data mining techniques such as MDA, MLOGIT, and CBR. In particular, 'instance selection' was confirmed to play a very important role in predicting the stock index trend, contributing more to the model's improvement than the other factors. To verify the usefulness of GA-MSVM, we applied it to forecasting the trend of Korea's real KOSPI200 stock index. Our research primarily aims at predicting trend segments so as to capture signal acquisition or short-term trend transition points. The experimental data set includes technical indicators such as price and volatility indices (2004-2017) of the KOSPI200 stock index and macroeconomic data (interest rates, exchange rates, the S&P 500, etc.). Using a variety of statistical methods, including one-way ANOVA and stepwise MDA, 15 indicators were selected as candidate independent variables. The dependent variable, trend classification, was coded into three states: 1 (upward trend), 0 (boxed), and -1 (downward trend). For each class, 70% of the data was used for training and the remaining 30% for validation. To verify the performance of the proposed model, comparative experiments with MDA, MLOGIT, CBR, ANN, and MSVM were conducted.
The MSVM adopted the One-Against-One (OAO) approach, known as the most accurate among the various MSVM approaches. Although there are some limitations, the final experimental results demonstrate that the proposed model, GA-MSVM, performs at a significantly higher level than all comparative models.
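
The abstract does not specify the GA's encoding or operators, so the following Python sketch is a simplified reconstruction: a binary chromosome concatenates a feature mask and an instance mask, fitness is the cross-validated accuracy of a one-against-one SVM, and evolution uses elitist selection with bit-flip mutation only (the kernel-parameter genes of the real GA-MSVM are omitted for brevity).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# Synthetic 3-class data standing in for the 15 KOSPI200 indicators.
X, y = make_classification(n_samples=300, n_features=15, n_informative=6,
                           n_classes=3, random_state=42)
n_feat, n_inst = X.shape[1], X.shape[0]

def fitness(chrom):
    """CV accuracy of an OAO SVM on the selected features and instances."""
    fmask, imask = chrom[:n_feat].astype(bool), chrom[n_feat:].astype(bool)
    if fmask.sum() == 0 or imask.sum() < 30:
        return 0.0
    model = SVC(kernel="rbf", decision_function_shape="ovo")  # OAO multi-class SVM
    return cross_val_score(model, X[np.ix_(imask, fmask)], y[imask], cv=3).mean()

# Toy GA: elitist selection plus bit-flip mutation (no crossover, for brevity).
pop = rng.integers(0, 2, size=(20, n_feat + n_inst))
for gen in range(10):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-10:]]       # keep the fitter half
    children = parents.copy()
    flip = rng.random(children.shape) < 0.02      # bit-flip mutation
    children[flip] ^= 1
    pop = np.vstack([parents, children])

print("best CV accuracy:", max(fitness(c) for c in pop))
```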

Application of Support Vector Regression for Improving the Performance of the Emotion Prediction Model (감정예측모형의 성과개선을 위한 Support Vector Regression 응용)

  • Kim, Seongjin;Ryoo, Eunchung;Jung, Min Kyu;Kim, Jae Kyeong;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.185-202
    • /
    • 2012
  • Since the value of information came to be appreciated in the information society, the usage and collection of information have become important. Like an artistic painting, a facial expression carries information that could be described in thousands of words. Following this idea, there have recently been a number of attempts to provide customers and companies with intelligent services that perceive human emotions through facial expressions. For example, MIT Media Lab, the leading organization in this research area, has developed a human emotion prediction model and applied its studies to commercial business. In academia, conventional methods such as Multiple Regression Analysis (MRA) and Artificial Neural Networks (ANN) have been applied to predict human emotion in prior studies. However, MRA is generally criticized for its low prediction accuracy; this is inevitable, since MRA can only explain a linear relationship between the dependent variable and the independent variables. To mitigate the limitations of MRA, some studies, such as Jung and Kim (2012), have used ANN as an alternative and reported that ANN generates more accurate predictions than statistical methods like MRA. However, ANN has also been criticized for overfitting and for the difficulty of network design (e.g., setting the number of layers and the number of nodes in the hidden layers). Against this background, we propose a novel model using Support Vector Regression (SVR) to increase prediction accuracy. SVR is an extended version of the Support Vector Machine (SVM) designed to solve regression problems. The model produced by SVR depends only on a subset of the training data, because the cost function for building the model ignores any training data that is close (within a threshold ε) to the model prediction. Using SVR, we tried to build a model that can measure the levels of arousal and valence from facial features. To validate the usefulness of the proposed model, we collected data on facial reactions to appropriate visual stimulating contents and extracted features from the data. Preprocessing steps were then taken to choose statistically significant variables. In total, 297 cases were used for the experiment. As comparative models, we also applied MRA and ANN to the same data set. For SVR, we adopted the ε-insensitive loss function and a grid search to find the optimal values of the parameters C, d, σ², and ε. For ANN, we adopted a standard three-layer backpropagation network with a single hidden layer. The learning rate and momentum rate of the ANN were set to 10%, and we used the sigmoid function as the transfer function of the hidden and output nodes. We performed the experiments repeatedly, varying the number of nodes in the hidden layer over n/2, n, 3n/2, and 2n, where n is the number of input variables. The stopping condition for the ANN was set to 50,000 learning events. We used MAE (Mean Absolute Error) as the measure for performance comparison. From the experiment, we found that SVR achieved the highest prediction accuracy on the hold-out data set compared to MRA and ANN. Regardless of the target variable (the level of arousal or the level of positive/negative valence), SVR showed the best performance on the hold-out data set.
ANN also outperformed MRA; however, it showed considerably lower prediction accuracy than SVR for both target variables. The findings of our research are expected to be useful to researchers and practitioners who wish to build models for recognizing human emotions.
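
A minimal sketch of the SVR setup described above, assuming scikit-learn's RBF-kernel SVR as the implementation and synthetic features in place of the study's facial data; the grid values are illustrative, not the paper's:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(297, 10))   # 297 cases, hypothetical facial features
y = X[:, 0] * 0.8 - X[:, 3] * 0.5 + rng.normal(0, 0.2, 297)  # e.g., arousal level

# Grid search over C, gamma (RBF width) and the epsilon of the
# epsilon-insensitive loss, mirroring the parameters tuned in the paper.
grid = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [0.1, 1, 10],
                "gamma": [0.01, 0.1, 1],
                "epsilon": [0.01, 0.1, 0.5]},
    scoring="neg_mean_absolute_error",  # MAE, as in the paper's comparison
    cv=5)
grid.fit(X, y)
print("best params:", grid.best_params_, "MAE:", -grid.best_score_)
```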

A Regression-Model-based Method for Combining Interestingness Measures of Association Rule Mining (연관상품 추천을 위한 회귀분석모형 기반 연관 규칙 척도 결합기법)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.127-141
    • /
    • 2017
  • Advances in Internet technologies and the proliferation of mobile devices have enabled consumers to approach a wide range of goods and services, with the adverse effect that they have a hard time finding congenial items even when they devote much time to searching. Accordingly, businesses use recommender systems to give consumers tools for finding desired items more easily. Association Rule Mining (ARM) technology is advantageous to recommender systems in that ARM provides the intuitive form of a rule together with interestingness measures (support, confidence, and lift) describing the relationship between items. Given an item, its relevant items can be distinguished with the help of the measures, which show the strength of the relationship between items; based on that strength, the most pertinent items can be chosen and exposed on the given item's web page. However, the diversity of the measures can make it unclear which items are more recommendable. Given two rules, for example, one rule's support and confidence may not both be superior to the other rule's. Such discrepancies among the measures in distinguishing one rule's superiority over another make it difficult to select the proper items for recommendation. In addition, in an online environment where a web page or a mobile screen can display only a limited number of recommendations, the prudent selection of the items to include in the list is very important: exposing items of little interest may lead consumers to ignore the recommendations, and such consumers will likely pay no attention to other forms of marketing activity either. The measures should therefore be aligned with the probability of consumers' acceptance of recommendations. For this reason, this study proposes a model-based approach that combines those measures into one unified measure that can consistently determine the ranking of recommended items. A regression model was designed to describe how well the measures (the independent variables: support, confidence, and lift) explain consumers' acceptance of recommendations (the dependent variable: the hit rate of recommended items). The model is intuitive to understand and easy to use, in that the equation consists of the commonly used ARM measures and can be used to estimate hit rates. An experiment using transaction data from one of Korea's largest online shopping malls was conducted to show that the proposed model can improve the hit rates of recommendations. From the top of the list down to 13th place, items recommended in the higher rankings by the proposed model show higher hit rates than those from the competing model. The result shows that the proposed model's performance is superior to the competing model's in an online recommendation environment. On a web page, consumers are typically shown around ten recommendations, a range in which the proposed model outperforms. Moreover, a mobile device cannot expose many items simultaneously due to its limited screen size, so the newly devised technique is also suitable for mobile recommender systems. While this study covers cross-selling in online shopping malls that handle merchandise, the proposed method can be expected to apply in various situations where association rules apply. For example, the model could be applied to medical diagnostic systems that predict candidate diseases from a patient's symptoms.
To increase the efficiency of the model, additional variables will need to be considered for the elaboration of the model in future studies. For example, price can be a good candidate for an explanatory variable because it has a major impact on consumer purchase decisions. If the prices of recommended items are much higher than the items in which a consumer is interested, the consumer may hesitate to accept the recommendations.
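
A minimal sketch of the proposed combination scheme under stated assumptions: each rule's support, confidence, and lift enter a linear regression whose target is an observed hit rate, and the fitted equation then scores and ranks candidate rules; all data here are synthetic, and the true coefficients would come from the shopping-mall transaction experiment.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n_rules = 500

# Hypothetical interestingness measures for n_rules association rules.
support    = rng.uniform(0.001, 0.05, n_rules)
confidence = rng.uniform(0.05, 0.6, n_rules)
lift       = rng.uniform(0.8, 5.0, n_rules)
# Made-up "ground truth": hit rate grows with confidence and lift.
hit_rate = 0.02 + 0.3 * confidence + 0.01 * lift + rng.normal(0, 0.02, n_rules)

X = np.column_stack([support, confidence, lift])
reg = LinearRegression().fit(X, hit_rate)

# The fitted model yields one unified score per rule, giving a consistent
# ranking where raw support/confidence/lift may disagree.
unified = reg.predict(X)
top10 = np.argsort(unified)[::-1][:10]   # rules to expose on the page
print("coefficients (support, confidence, lift):", reg.coef_)
print("top-ranked rules:", top10)
```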