• Title/Summary/Keyword: Performance Function


Prediction of Life Expectancy for Terminally Ill Cancer Patients Based on Clinical Parameters (말기 암 환자에서 임상변수를 이용한 생존 기간 예측)

  • Yeom, Chang-Hwan;Choi, Youn-Seon;Hong, Young-Seon;Park, Yong-Gyu;Lee, Hye-Ree
    • Journal of Hospice and Palliative Care / v.5 no.2 / pp.111-124 / 2002
  • Purpose: Although average life expectancy has increased due to advances in medicine, mortality from cancer continues to rise, and the number of terminally ill cancer patients is rising with it. Predicting the survival period is an important issue in the care of terminally ill cancer patients, since the choice of treatment varies significantly among patients, their families, and physicians according to the expected survival. Therefore, we investigated the prognostic factors for increased mortality risk in terminally ill cancer patients to help guide treatment by predicting the survival period. Methods: We investigated 31 clinical parameters in 157 terminally ill cancer patients admitted to the Department of Family Medicine, National Health Insurance Corporation Ilsan Hospital, between July 1, 2000 and August 31, 2001. We confirmed the patients' survival as of October 31, 2001 based on medical records and personal data. Survival rates and median survival times were estimated by the Kaplan-Meier method, and the log-rank test was used to compare differences in survival rates according to each clinical parameter. Cox's proportional hazards model was used to determine the most predictive subset of prognostic factors among the many clinical parameters affecting the risk of death. We predicted the mean, median, and first and third quartile values of the expected lifetimes with a Weibull proportional hazards regression model. Results: Of the 157 patients, 79 were male (50.3%). The mean age was 65.1±13.0 years in males and 64.3±13.7 years in females. The most prevalent cancer was gastric cancer (36 patients, 22.9%), followed by lung cancer (27, 17.2%) and cervical cancer (20, 12.7%). Survival time decreased with the following factors: mental change, anorexia, hypotension, poor performance status, leukocytosis, neutrophilia, elevated serum creatinine level, hypoalbuminemia, hyperbilirubinemia, elevated SGPT, prolonged prothrombin time (PT), prolonged activated partial thromboplastin time (aPTT), hyponatremia, and hyperkalemia. Among these, poor performance status, neutrophilia, and prolonged PT and aPTT were significant prognostic factors of death risk according to Cox's proportional hazards model. The predicted median life expectancy was 3.0 days when all 4 of these factors were present, 5.7~8.2 days when 3 of the 4 were present, 11.4~20.0 days when 2 of the 4 were present, 27.9~40.0 days when 1 of the 4 was present, and 77 days when none of the 4 factors was present. Conclusions: In terminally ill cancer patients, the prognostic factors related to reduced survival time were poor performance status, neutrophilia, prolonged PT, and prolonged aPTT. These four prognostic factors enabled the prediction of life expectancy in terminally ill cancer patients.
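The survival workflow described in this abstract (Kaplan-Meier estimation, log-rank comparison, Cox proportional hazards regression, and Weibull-based quantiles of expected lifetime) maps onto standard survival-analysis tooling. Below is a minimal sketch using the Python lifelines library; the file name and column names (days, died, poor_ps, neutrophilia, pt_prolonged, aptt_prolonged) are hypothetical stand-ins for the clinical variables the authors describe, and lifelines' WeibullAFTFitter is used for the Weibull regression step (for the Weibull distribution the accelerated-failure-time and proportional-hazards forms coincide).

```python
# Hedged sketch of the survival workflow above using the `lifelines` library.
# File and column names are hypothetical stand-ins for the paper's variables.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter, WeibullAFTFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("terminal_cancer_cohort.csv")  # hypothetical data file

# Kaplan-Meier estimate of the overall survival curve.
km = KaplanMeierFitter()
km.fit(durations=df["days"], event_observed=df["died"])
print(km.median_survival_time_)

# Log-rank test comparing survival between two levels of one factor.
good = df[df["poor_ps"] == 0]
poor = df[df["poor_ps"] == 1]
print(logrank_test(good["days"], poor["days"],
                   event_observed_A=good["died"],
                   event_observed_B=poor["died"]).p_value)

# Cox proportional hazards model over the candidate prognostic factors.
factors = ["poor_ps", "neutrophilia", "pt_prolonged", "aptt_prolonged"]
cph = CoxPHFitter()
cph.fit(df[factors + ["days", "died"]], duration_col="days", event_col="died")
cph.print_summary()

# Weibull regression to predict median and quartile life expectancy.
aft = WeibullAFTFitter()
aft.fit(df[factors + ["days", "died"]], duration_col="days", event_col="died")
profile = pd.DataFrame([{f: 1 for f in factors}])   # all 4 factors present
print(aft.predict_percentile(profile, p=0.75))      # time when predicted S(t) = 0.75
print(aft.predict_median(profile))                  # predicted median life expectancy
print(aft.predict_percentile(profile, p=0.25))      # time when predicted S(t) = 0.25
```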

A Study on Commodity Asset Investment Model Based on Machine Learning Technique (기계학습을 활용한 상품자산 투자모델에 관한 연구)

  • Song, Jin Ho;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems / v.23 no.4 / pp.127-146 / 2017
  • Services using artificial intelligence have begun to emerge in daily life. Artificial intelligence is applied to consumer electronics and communications products such as artificial intelligence refrigerators and speakers. In the financial sector, Kensho's artificial intelligence technology was used to improve the stock trading process at Goldman Sachs: two stock traders could handle the work of 600, and analytical work that took 15 people 4 weeks could be processed in 5 minutes. In particular, big data analysis through machine learning, a field of artificial intelligence, is actively applied throughout the financial industry. Stock market analysis and investment modeling through machine learning theory are also actively studied. The linearity limitations of traditional financial time series studies are overcome by machine learning approaches such as artificial intelligence prediction models. Quantitative studies of financial data based on past stock market-related numerical data widely use artificial intelligence to forecast future movements of stock prices or indices. Various other studies have been conducted to predict the future direction of the market or the stock prices of companies by learning from large amounts of text data such as news and comments related to the stock market. Investment in commodity assets, one class of alternative assets, is usually used to enhance the stability and safety of a traditional stock and bond portfolio. There is relatively little research on investment models for commodity assets compared with mainstream assets such as equities and bonds. Recently, machine learning techniques have been widely applied in finance, especially to stock and bond investment models, producing better trading models and changing the whole financial area. In this study, we built an investment model using the Support Vector Machine (SVM), one of the machine learning models. Some research on commodity assets focuses on price prediction for a specific commodity, but it is hard to find research on an investment model that treats commodities as an asset allocation problem using a machine learning model. We propose a method of forecasting four major commodity indices, a portfolio made of commodity futures, and individual commodity futures using an SVM model. The four major commodity indices are the Goldman Sachs Commodity Index (GSCI), the Dow Jones UBS Commodity Index (DJUI), the Thomson Reuters/Core Commodity CRB Index (TRCI), and the Rogers International Commodity Index (RI). We selected two individual futures from each of three sectors (energy, agriculture, and metals) that are actively traded on the CME market and have sufficient liquidity: Crude Oil, Natural Gas, Corn, Wheat, Gold, and Silver futures. We constructed an equally weighted portfolio of the six commodity futures for comparison with the commodity indices. Because commodity assets are closely related to macroeconomic activity, we used 19 macroeconomic indicators as the input data of the model, including stock market indices, export and import trade data, labor market data, and composite leading indicators: 14 US economic indicators, two Chinese economic indicators, and two Korean economic indicators. The data period runs from January 1990 to May 2017. The first 195 monthly observations were used as training data and the remaining 125 monthly observations as test data.
In this study, we verified that the performance of the equally weighted commodity futures portfolio rebalanced by the SVM model is better than that of the other commodity indices. The prediction accuracy of the model for the commodity indices does not exceed 50% regardless of the SVM kernel function, whereas the prediction accuracy for the equally weighted commodity futures portfolio is 53%. The prediction accuracy of the individual commodity futures models is better than that of the commodity index models, especially in the agriculture and metal sectors. The individual commodity futures portfolio excluding the energy sector outperformed the portfolio covering all three sectors. To verify the validity of the model, the analysis results should remain similar despite variations in the data period, so we also used the odd-numbered-year data as training data and the even-numbered-year data as test data and confirmed that the analysis results are similar. As a result, when we allocate commodity assets to a traditional portfolio composed of stocks, bonds, and cash, we can obtain more effective investment performance by investing in commodity futures rather than in commodity indices. In particular, we can obtain better performance with a commodity futures portfolio rebalanced by the SVM model.
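As a rough illustration of the forecasting step, the sketch below trains an SVM to predict the next-month direction of the equally weighted commodity futures portfolio from macroeconomic indicators, using scikit-learn. The file name, column names, and kernel settings are assumptions; the paper uses 19 macroeconomic indicators, monthly data from January 1990 to May 2017, and a 195/125 chronological train/test split.

```python
# Hedged sketch of the SVM direction-forecasting step, using scikit-learn.
# File name and column names are illustrative, not the authors' data layout.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

data = pd.read_csv("commodity_monthly.csv", index_col="month")  # hypothetical
data["next_ret"] = data["portfolio_return"].shift(-1)   # next-month portfolio return
data = data.dropna(subset=["next_ret"])

X = data.drop(columns=["portfolio_return", "next_ret"])  # the 19 macro indicators
y = (data["next_ret"] > 0).astype(int)                   # next-month direction label

# Chronological split: first 195 months for training, the rest for testing.
X_train, X_test = X.iloc[:195], X.iloc[195:]
y_train, y_test = y.iloc[:195], y.iloc[195:]

scaler = StandardScaler().fit(X_train)
model = SVC(kernel="rbf", C=1.0, gamma="scale")  # kernel choice varies per experiment
model.fit(scaler.transform(X_train), y_train)

pred = model.predict(scaler.transform(X_test))
print("directional accuracy:", accuracy_score(y_test, pred))

# Simple rebalancing rule: hold the equally weighted futures portfolio
# only in months where the model predicts an up move.
strategy = data["next_ret"].iloc[195:] * pred
print("mean monthly return when following the signal:", strategy.mean())
```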

A Study on Perception and Attitudes of Health Workers Towards the Organization and Activities of Urban Health Centers (도시보건소 직원의 보건소 업무에 대한 인식 및 견해)

  • Lee, Jae-Mu;Kang, Pock-Soo;Lee, Kyeong-Soo;Kim, Cheon-Tae
    • Journal of Yeungnam Medical Science / v.12 no.2 / pp.347-365 / 1995
  • A survey was conducted from August 15 to September 30, 1994 to study the perceptions and attitudes of health workers towards health centers' activities and the organization of health services. The study population was 310 health workers engaged in seven urban health centers in the Taegu City area. A questionnaire was used to collect data, and the response rate was 81.3 percent, or 252 respondents. The findings are summarized as follows. Profile of the study population: Health workers were predominantly female (62.3%); had a college education (60.3%); and held medical and nursing positions (39.6%), technician positions (30.6%), and public health/administrative positions (29.8%). Perceptions of health centers' resources: Slightly more than half (51.1%) of respondents felt that the physical facilities of the centers are inadequate; that necessary equipment is in short supply (39.0%); that staffing is inadequate (44.8%); and that the allocated health budget is insufficient (38.5%) to support the performance of health center activities. Decentralization and health services: The majority (51.9%) stated that decentralization of the government system would affect the future activities of health centers, which may have to change. However, only one quarter of respondents (25.4%) viewed decentralization positively, expecting that it would help them perform health activities more effectively. The majority of respondents (78.6%) insisted that the function and organization of urban health centers should be changed. Target workload and job satisfaction: A large proportion (43.3%) of respondents felt that the present target-setting systems for various health activities are unrealistic in terms of community needs and health center circumstances, while only 11.1 percent responded positively; the majority (57.5%) stated that they need further training in their professional fields to perform their jobs more effectively; more than one third (35.7%) said that they enjoy professional autonomy in their job performance; and a considerable proportion (39.3%) said they are satisfied with their present work. Regarding personnel management, more workers (47.3%) perceived it negatively than positively (11.5%), as most seemed to think that personnel management at the health centers is not practiced fairly. Health services rendered: Among the health services rendered, health workers perceived the following as the most successfully delivered, in order of importance: tuberculosis control, curative services, and maternal and child health care. Areas such as health education, oral health, environmental sanitation, and integrated health services need to be strengthened. Regarding community attitudes towards health workers, 41.3 percent of respondents think they are trusted by the community they serve. New areas of concern identified that must be included in future health center activities are, in order of priority, health care for the elderly, home health care, rehabilitation services, and control programs for chronic diseases such as diabetes and hypertension, together with school health and mental health care. In conclusion, the study revealed that health workers seemed to have more negative than positive perceptions and attitudes towards the organization and management of health services and activities performed by the urban health centers where they are engaged.
More specifically, the majority of the health workers studied found the following areas of health center organization and management inadequate or insufficient to support the effective performance of their health activities: the required physical facilities and equipment are inadequate; human and financial resources are insufficient; personnel management is unsatisfactory; and the service target-setting system is unrealistic in terms of community needs. However, respondents displayed a number of positive perceptions, particularly regarding further training needs and the implementation of government decentralization, which will bring more autonomy to local government, as they perceived that these changes would bring the necessary changes to the future activities of the health centers. They also displayed positive perceptions of their job autonomy and expressed job satisfaction.

Improved Social Network Analysis Method in SNS (SNS에서의 개선된 소셜 네트워크 분석 방법)

  • Sohn, Jong-Soo;Cho, Soo-Whan;Kwon, Kyung-Lag;Chung, In-Jeong
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.117-127 / 2012
  • Due to the recent expansion of Web 2.0-based services, along with the widespread adoption of smartphones, online social network services have become popular among users. Online social network services are online community services that enable users to communicate with each other, share information, and expand human relationships. In social network services, each relation between users is represented by a graph consisting of nodes and links. As the number of users of online social network services increases rapidly, SNS are actively utilized in enterprise marketing, analysis of social phenomena, and so on. Social Network Analysis (SNA) is the systematic way to analyze social relationships among the members of a social network using network theory. In general, a social network consists of nodes and arcs and is often depicted in a social network diagram, in which nodes represent individual actors within the network and arcs represent relationships between the nodes. With SNA, we can measure relationships among people, such as degree of intimacy, intensity of connection, and classification of groups. Ever since Social Networking Services (SNS) drew the attention of millions of users, numerous studies have been conducted to analyze their user relationships and messages. The typical representative SNA methods are degree centrality, betweenness centrality, and closeness centrality. In degree centrality analysis, the shortest path between nodes is not considered; however, it is a crucial factor in betweenness centrality, closeness centrality, and other SNA methods. In previous SNA research, computation time was not a major concern because the social networks studied were small. Unfortunately, most SNA methods require significant time to process the relevant data, which makes it difficult to apply them to the ever increasing SNS data. For instance, if the number of nodes in an online social network is n, the maximum number of links is n(n-1)/2; with 10,000 nodes, that is 49,995,000 possible links, which makes the analysis prohibitively expensive. Therefore, we propose a heuristic-based method for finding the shortest path among users in the SNS user graph. Through this shortest path finding method, we show how efficient our proposed approach can be by conducting betweenness centrality analysis and closeness centrality analysis, both of which are widely used in social network studies. Moreover, we devised an enhanced method that adds a best-first search and a preprocessing step to reduce computation time and rapidly search for shortest paths in a huge online social network. The best-first search finds the shortest path heuristically, generalizing from human experience. Since a large number of links is shared by only a few nodes in online social networks, most nodes have relatively few connections; as a result, a node with multiple connections functions as a hub node. When searching for a particular node, looking for users with numerous links, instead of searching all users indiscriminately, has a better chance of finding the desired node more quickly. In this paper, we employ the degree of a user node v_n as the heuristic evaluation function in a graph G = (N, E), where N is the set of vertices and E is the set of links between distinct nodes.
Because a heuristic evaluation function is used, the worst case occurs when the target node is situated at the bottom of a skewed tree; to handle such cases, a preprocessing step is conducted. We then find the shortest path between two nodes in the social network efficiently and analyze the network. To verify the proposed method, we crawled data on 160,000 people online and constructed a social network, and then compared the proposed method with previous best-first-search and breadth-first-search methods in terms of search and analysis time. The suggested method takes 240 seconds to search nodes, whereas the breadth-first-search-based method takes 1,781 seconds (7.4 times faster). Moreover, for social network analysis, the suggested method is 6.8 times faster for betweenness centrality analysis and 1.8 times faster for closeness centrality analysis. The proposed method shows the possibility of analyzing a large social network with better time performance. As a result, our method would improve the efficiency of social network analysis, making it particularly useful in studying social trends or phenomena.
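A minimal sketch of the degree-based best-first search idea is shown below. The graph representation and node names are hypothetical; the priority queue prefers high-degree hub users, so the search tends to reach the target quickly in hub-dominated social networks, although the path it finds is heuristic rather than guaranteed shortest (the paper's preprocessing step and full analysis pipeline are not reproduced here).

```python
# Hedged sketch of a degree-heuristic best-first search over a user graph.
# The heuristic prefers high-degree "hub" users; the path found is heuristic,
# not guaranteed to be the minimal-length path.
import heapq

def degree_best_first_path(graph, source, target):
    """graph: dict mapping each node to a set of neighbor nodes."""
    # Priority queue ordered by -degree, so hub nodes are expanded first.
    frontier = [(-len(graph[source]), source, [source])]
    visited = {source}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == target:
            return path
        for nbr in graph[node]:
            if nbr not in visited:
                visited.add(nbr)
                heapq.heappush(frontier, (-len(graph[nbr]), nbr, path + [nbr]))
    return None  # no path between source and target

# Toy usage: "carol" acts as a hub connecting the two sides of the network.
g = {
    "alice": {"carol"},
    "bob": {"carol", "dave"},
    "carol": {"alice", "bob", "dave", "erin"},
    "dave": {"bob", "carol"},
    "erin": {"carol"},
}
print(degree_best_first_path(g, "alice", "erin"))  # ['alice', 'carol', 'erin']
```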

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.163-177 / 2019
  • As smartphones become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from the recognition of simple body movements of an individual user to the recognition of low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data, whereas physical sensors, including accelerometer, magnetic field, and gyroscope sensors, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a method for detecting accompanying status with a deep learning model using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope) is proposed. Accompanying status was defined as a subset of user interaction behavior, covering whether the user is accompanying an acquaintance at close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation was proposed. First, a data preprocessing method was introduced, consisting of time synchronization of multimodal data from different physical sensors, data normalization, and sequence data generation. We applied nearest-neighbor interpolation to synchronize the timestamps of data collected from different sensors. Normalization was performed on each x, y, and z axis value of the sensor data, and the sequence data were generated with a sliding window. The sequence data then became the input to the CNN, where feature maps representing local dependencies of the original sequence are extracted. The CNN consisted of 3 convolutional layers and had no pooling layer, in order to maintain the temporal information of the sequence data. Next, LSTM recurrent networks received the feature maps, learned long-term dependencies from them, and extracted features. The LSTM recurrent networks consisted of two layers, each with 128 cells. Finally, the extracted features were used for classification by a softmax classifier. The loss function of the model was cross entropy, and the weights were randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model was trained using the adaptive moment estimation (ADAM) optimization algorithm with a mini-batch size of 128. We applied dropout to the input values of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001 and decayed exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data, and smartphone data were collected for a total of 18 subjects. Using these data, the model classified accompanying and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network.
In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that enable models trained on the training data to be transferred to evaluation data that follows a different distribution. We expect to obtain a model that exhibits robust recognition performance against changes in data that were not considered at the model learning stage.
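The architecture described above can be sketched as follows in Keras. The window length, channel count, filter sizes, and dropout rate are assumptions; the three convolutional layers without pooling, the two 128-cell LSTM layers, the softmax output, cross-entropy loss, Adam with a 0.001 learning rate decayed by 0.99 per epoch, and the mini-batch size of 128 follow the abstract.

```python
# Hedged sketch of the CNN-LSTM classifier described above, in Keras.
# Window length, channel count, and filter sizes are assumptions.
import tensorflow as tf

WINDOW, CHANNELS, CLASSES = 128, 9, 2   # e.g. accel + magnetic + gyro (x, y, z)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.Conv1D(64, 5, padding="same", activation="relu"),
    tf.keras.layers.Conv1D(64, 5, padding="same", activation="relu"),
    tf.keras.layers.Conv1D(64, 5, padding="same", activation="relu"),
    tf.keras.layers.Dropout(0.5),                    # dropout on the LSTM inputs
    tf.keras.layers.LSTM(128, return_sequences=True),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Exponential learning-rate decay by 0.99 at the end of each epoch.
decay = tf.keras.callbacks.LearningRateScheduler(lambda epoch, lr: lr * 0.99)

# x_train: (num_windows, WINDOW, CHANNELS) sensor sequences; y_train: 0/1 labels.
# model.fit(x_train, y_train, batch_size=128, epochs=50, callbacks=[decay])
```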

The Comparison of Results Among Hepatitis B Test Reagents Using National Standard Substance (국가 표준물질을 이용한 B형 간염 검사 시약 간의 결과 비교)

  • Lee, Young-Ji;Sim, Seong-Jae;Back, Song-Ran;Seo, Mee-Hye;Yoo, Seon-Hee;Cho, Shee-Man
    • The Korean Journal of Nuclear Medicine Technology / v.14 no.2 / pp.203-207 / 2010
  • Purpose: Hepatitis B is an infection caused by the hepatitis B virus (HBV). Currently, several methods, kits, and instruments are used to conduct hepatitis B tests, and these non-uniform methods can lead to differences in results. Managing these differences requires a process for evaluating the performance of the test system and reagents with a standard substance. The aim of this study was to investigate the behavior of the RIA reagents used at Asan Medical Center by comparison with several other test reagents using national standard substances. Materials and Methods: The national standard substances for biological medicines from the National Institute of Food and Drug Safety Evaluation consist of five materials: four antigens and one antibody. We tested reagents from companies A and B with their kits according to each test method. All tests were measured repeatedly to obtain accurate results. Results: For the "HBs Ag Mixed Titer Performance Panel," the concordance rates (in S/CO units) of the RIA method against three EIA reagents and two CIA reagents were 94.4% (17/18) and 83.3% (15/18) for company A's reagent, and 88.9% (16/18) and 77.8% (14/18) for company B's. For the "HBs Ag Low Titer Performance Panel," two EIA reagents showed 7 positive results and three CIA reagents showed 11, while with the RIA method, company A's reagent showed 3 and company B's 2 positive results out of the 13 low-titer panel members. The "HBV surface antigen 86.76 IU/vial" standard was tested in serial dilutions: company A's reagent gave positive results down to a 600-fold dilution (0.14 IU/mL) and company B's down to a 300-fold dilution (0.29 IU/mL). For the "HBV human immunoglobulin 95.45 IU/vial" standard, company A's reagent remained positive to a 10,000-fold dilution (9.5 mIU/mL) and company B's to a 4,000-fold dilution (24 mIU/mL). For the "HBs Ag Working Standards 0.02~11.52 IU/mL," company A's kit gave positive results at concentrations of 0.38 IU/mL and above, and company B's at 2.23 IU/mL and above. Conclusion: When comparing the various test reagents with the RIA method using the national standard substances for hepatitis B testing, we found no significant discrepancies between reagents. Positive results could be obtained for antigen dilutions up to 600-fold and antibody dilutions up to 10,000-fold. Therefore, we confirmed that the reagents and system used for hepatitis B virus testing at Asan Medical Center perform reliably.

Rough Set Analysis for Stock Market Timing (러프집합분석을 이용한 매매시점 결정)

  • Huh, Jin-Nyung;Kim, Kyoung-Jae;Han, In-Goo
    • Journal of Intelligence and Information Systems / v.16 no.3 / pp.77-97 / 2010
  • Market timing is an investment strategy used to obtain excess returns from the financial market. In general, market timing means determining when to buy and sell in order to gain excess returns from trading. In many market timing systems, trading rules have been used as the engine that generates trade signals. On the other hand, some researchers have proposed rough set analysis as a proper tool for market timing because, through its control function, it does not generate a trade signal when the market pattern is uncertain. Numeric data for rough set analysis must be discretized because the rough set approach only accepts categorical data. Discretization searches for proper "cuts" in the numeric data that determine intervals; all values that lie within an interval are transformed into the same value. In general, there are four methods of data discretization in rough set analysis: equal frequency scaling, expert's knowledge-based discretization, minimum entropy scaling, and naïve and Boolean reasoning-based discretization. Equal frequency scaling fixes a number of intervals, examines the histogram of each variable, and then determines cuts so that approximately the same number of samples falls into each interval (see the sketch after this abstract). Expert's knowledge-based discretization determines cuts according to the knowledge of domain experts, obtained through literature review or interviews with experts. Minimum entropy scaling recursively partitions the value set of each variable so that a local measure of entropy is optimized. Naïve and Boolean reasoning-based discretization first applies naïve scaling to the data and then finds optimized discretization thresholds through Boolean reasoning. Although rough set analysis is promising for market timing, there is little research on the impact of the various data discretization methods on trading performance with rough set analysis. In this study, we compare stock market timing models that use rough set analysis with various data discretization methods. The research data are the KOSPI 200 from May 1996 to October 1998. The KOSPI 200 is the underlying index of the KOSPI 200 futures, the first derivative instrument in the Korean stock market. The KOSPI 200 is a market-value-weighted index consisting of 200 stocks selected by criteria on liquidity and their status in the corresponding industries, including manufacturing, construction, communication, electricity and gas, distribution and services, and financing. The total number of samples is 660 trading days. In addition, this study uses popular technical indicators as independent variables. The experimental results show that the most profitable method for the training sample is naïve and Boolean reasoning, but expert's knowledge-based discretization is the most profitable method for the validation sample. In addition, expert's knowledge-based discretization produced robust performance for both the training and validation samples. We also compared rough set analysis with a decision tree, using C4.5 for comparison. The results show that rough set analysis with expert's knowledge-based discretization produced more profitable rules than C4.5.
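As an illustration of the first of the four discretization methods, the sketch below implements equal frequency scaling with pandas; the indicator values and the number of intervals are illustrative only.

```python
# Hedged sketch of equal frequency scaling (one of the four discretization
# methods above) using pandas. Indicator values and interval count are toy data.
import pandas as pd

def equal_frequency_discretize(series: pd.Series, n_intervals: int) -> pd.Series:
    """Cut a numeric series into n_intervals bins holding roughly equal
    numbers of samples; every value in a bin maps to the same category code."""
    return pd.qcut(series, q=n_intervals, labels=False, duplicates="drop")

# Toy usage on a technical indicator (e.g. a 14-day RSI series).
rsi = pd.Series([28.1, 35.4, 41.0, 47.8, 52.3, 58.9, 63.2, 70.5, 74.1, 80.6])
print(equal_frequency_discretize(rsi, n_intervals=4))
# Each of the 4 categories contains about the same number of trading days.
```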

Adsorption and Transfer of Trace Elements in Repellent Soils (토양 소수성에 따른 미량원소의 흡착 및 이동)

  • Choi, Jun-Yong;Lee, Sang-Soo;Ok, Yong-Sik;Chun, So-Ul;Joo, Young-Kyoo
    • Korean Journal of Soil Science and Fertilizer / v.45 no.2 / pp.204-208 / 2012
  • Water repellency, which affects infiltration, evaporation, erosion, and other water transfer mechanisms through soil, has been observed under several natural conditions. Water repellency is thought to be caused by hydrophobic organic compounds, which are present as coatings on soil particles or as interstitial matter between soil particles. This study was conducted to evaluate the characteristics of water repellent soil and the transport characteristics of trace elements within this soil. The capillary height of the water repellent soil was measured, and batch and column studies were conducted to identify the sorption and transport mechanisms of trace elements such as Cu²⁺, Mn²⁺, Fe²⁺, Zn²⁺, and Mo⁵⁺. Differences in sorption capacity between common and repellent soils were observed depending on the degree of repellency. In the column study, the desorption of trace elements and the spatial concentration distribution as a function of time were evaluated. The capillary height followed the repellency order 0% > 15% > 40% > 70% > 100%, and no water was absorbed in soils with repellency greater than 70%. Among the trace elements, Fe²⁺ and Mo⁵⁺ showed higher sorption capacity in the repellent soil than in the non-repellent soil. The sorption performance of Fe²⁺ followed the repellency order 40% > 15% > 0%. Our results showed that Mo⁵⁺ had a similar sorption tendency in soils with 0%, 15%, and 40% repellency at the beginning; however, higher desorption was observed over time in the repellent soils compared with the non-repellent soil.

Skin Protection Effect of Grape Pruning Stem Extract on UVB-induced Connective Tissue Injury (포도전정가지 추출물이 UVB로 유도된 결합 조직 손상에 미치는 피부 보호 효과)

  • Kim, Joung-Hee;Kim, Keuk-Jun
    • Journal of Life Science / v.28 no.2 / pp.141-147 / 2018
  • This study aimed to analyze the contents of rutin, procyanidin B3, quercetin, and kaempferol, polyphenols known to have antioxidant, anti-inflammatory, and anti-carcinogenic effects, in grape pruning stem extract (GPSE). The study utilized grape stems discarded after harvest to measure the effects of GPSE on skin moisture, inhibition of skin cell proliferation, and anti-inflammatory activity in the ultraviolet B (UVB)-damaged skin of HR-1 mice, and to verify the applicability of GPSE as a material for functional foods and functional cosmetics. Polyphenols were extracted from grape pruning stems with 80% EtOH; the extract was filtered, concentrated, and freeze-dried, and then stored at -20°C until use. The active ingredient content of GPSE was analyzed using high-performance liquid chromatography (HPLC). From 53 kg of grape pruning stem specimens, 2.34 kg of EtOH fraction extract was obtained, a yield of 4.42%. Analysis of the active ingredients showed 0.28 mg/g of procyanidin B3, 12.81 mg/g of rutin, 0.51 mg/g of quercetin, and 8.24 mg/g of kaempferol. After UVB irradiation of the dermis, we examined the protein expression of MMP-9 by immunohistochemical staining to confirm the degree of inhibition of collagen synthesis. The results of this study confirm the presence of active polyphenols such as rutin, kaempferol, quercetin, and procyanidin B3 in GPSE. Moreover, the study found that GPSE has anti-collagenase effects and reduces the effects of UV damage on skin barrier function. GPSE is a functional ingredient with potential skin protection effects and has high potential as an ingredient for functional cosmetics.

Relationships between Nailfold Plexus Visibility, and Clinical Variables and Neuropsychological Functions in Schizophrenic Patients (정신분열병 환자에서 손톱 주름 총 시도(叢 視度) (Nailfold Plexus Visibility)와 임상양상, 신경심리 기능과의 관계)

  • Kang, Dae-Yeob;Jang, Hye-Ryeon
    • Korean Journal of Biological Psychiatry / v.9 no.1 / pp.50-61 / 2002
  • Objectives: High nailfold plexus visibility may indirectly reflect central nervous system defects as an etiologic factor of schizophrenia. Previous studies suggest that this visibility is particularly related to the negative symptoms of schizophrenia and frontal lobe deficits. In this study, we examined the relationships between nailfold plexus visibility and various clinical variables and neuropsychological functions in schizophrenic patients. Methods: Forty patients (21 males, 19 females) meeting the DSM-IV criteria for schizophrenia and thirty-eight normal controls (20 males, 18 females) were measured for Plexus Visualization Score (PVS) using capillary microscopic examination. For the assessment of psychopathology, process-reactivity, premorbid adjustment, and neuropsychological functions, we used the Positive and Negative Syndrome Scale (PANSS), the Ullmann-Giovannoni Process-Reactive Questionnaire (PRQ), the Phillips Premorbid Adjustment Scale (PAS), the Korean Wechsler Adult Intelligence Scale (KWIS), the Continuous Performance Test (CPT), the Wisconsin Card Sorting Test (WCST), and the Word Fluency Test. We also collected data on clinical variables. Results: PVS was negatively correlated with the PANSS positive symptom score and composite score. There were no correlations between PVS and the PRQ score, the PAS score, or the neuropsychological variables. Conclusions: This study showed that nailfold plexus visibility is a characteristic feature in some schizophrenic patients and that higher plexus visibility is associated with the negative symptoms of schizophrenia. There was no association between plexus visibility and neuropsychological functions.
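As a minimal sketch of the correlation analysis reported above, the snippet below computes rank correlations between PVS and two PANSS-derived scores with SciPy. The file and column names are hypothetical, and the choice of Spearman correlation is an assumption, since the abstract does not state which correlation coefficient was used.

```python
# Hedged sketch of the PVS-vs-clinical-scale correlation analysis.
# Column names ("PVS", "PANSS_positive", "PANSS_composite") are hypothetical,
# and Spearman rank correlation is assumed here.
import pandas as pd
from scipy.stats import spearmanr

data = pd.read_csv("plexus_study.csv")  # hypothetical per-patient data

for scale in ["PANSS_positive", "PANSS_composite"]:
    rho, p = spearmanr(data["PVS"], data[scale])
    print(f"PVS vs {scale}: rho = {rho:.2f}, p = {p:.3f}")
```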
