• Title/Summary/Keyword: two-level


Approach to the Extraction Method on Minerals of Ginseng Extract (추출조건(抽出條件)에 따른 인삼(人蔘)엑기스의 무기성분정량(無機成分定量)에 관(關)한 연구(硏究))

  • Cho, Han-Ok;Lee, Joong-Hwa;Cho, Sung-Hwan;Choi, Young-Hee
    • Korean Journal of Food Science and Technology / v.8 no.2 / pp.95-106 / 1976
  • In order to investigate the chemical components and minerals of ginseng cultivated in Korea and to establish an appropriate extraction method, the present work was carried out with raw ginseng (SC), white ginseng (SB) and ginseng tail (SA). The results can be summarized as follows: 1. Among the proximate components, the moisture contents of SC, SB and SA were 66.37%, 12.61% and 12.20%, respectively. The crude ash content of SA was the highest of the three kinds of ginseng root: SA 6.04%, SB 3.52% and SC 1.56%. The crude protein of the dried ginseng roots (SA and SB) was about 12-14%, more than twice that of SC (6.30%). The pure protein content showed a similar tendency to the crude protein in the three kinds of ginseng root: 2.26% in SC, 5.94% in SB and 5.76% in SA. There was no significant difference in fat content among the kinds of ginseng root (1.1~2.5%). 2. The highest extract yield was obtained using a continuous extractor, a modified Soxhlet apparatus, run for 60 hours with 60-80% ethanol. 3. Ginseng and the above-mentioned ginseng extracts (ginseng tail extract: SAE, white ginseng extract: SBE, raw ginseng extract: SCE) were analyzed by volumetric methods for the determination of chlorine and calcium, by colorimetric methods for iron and phosphorus, and by atomic absorption spectrophotometry for zinc, copper and manganese. The results were as follows: 1. The phosphorus contents of SA, SB and SC were 1.818%, 1.362% and 0.713%, respectively, while the phosphorus contents of the three extracts were low (SAE: 0.03%, SBE: 0.063%, SCE: 0.036%). 2. The calcium contents of SA, SB and SC were 0.147%, 0.238% and 0.126%, and those of the ginseng extracts were 0.023%, 0.011% and 0.016%. The extraction ratio of calcium from SA was the highest (15.6%), while that of SB was 4.6%. 3. The chlorine content of SA was 0.11%, slightly higher than the others (SB: 0.07%, SC: 0.09%); the extraction ratios of SA and SB were 36.4% and 67.1%, while that of SC was 84.4%. 4. The iron contents of SA, SB and SC were 125 ppm, 32.5 ppm and 20 ppm, but the extraction ratios were extremely low (SAE: 1.33%, SBE: 0.83%, SCE: 1.08%). 5. The manganese contents of SA, SB and SC were 62.5 ppm, 25.0 ppm and 5.0 ppm, respectively, but the manganese content of the extracts could not be determined. The copper contents of SA, SB and SC were 15.0 ppm and 20.0 ppm, those of the extracts were 7.5 ppm, 6.5 ppm and 4.5 ppm, and the extraction ratios were 50%, 32.5% and 90%, respectively. Zinc was abundant in ginseng compared with other herbs (SA: 45.5 ppm, SB: 27.5 ppm, SC: 5.5 ppm), and the extracted amounts were 4.5 ppm, 1.25 ppm and 1.50 ppm, respectively.


Serial Changes of Serum Thyroid-Stimulating Hormone after Total Thyroidectomy or Withdrawal of Suppressive Thyroxine Therapy in Patients with Differentiated Thyroid Cancer (분화성 갑상선 암 환자에서 갑상선 전절제술후 또는 갑상선 호르몬 억제 요법 중단에 따른 갑상선 자극호르몬의 변화)

  • Bae, Jin-Ho;Lee, Jae-Tae;Seo, Ji-Hyoung;Jeong, Shin-Young;Jung, Jin-Hyang;Park, Ho-Yong;Kim, Jung-Guk;Ahn, Byeong-Cheol;Sohn, Jin-Ho;Kim, Bo-Wan;Park, June-Sik;Lee, Kyu-Bo
    • The Korean Journal of Nuclear Medicine / v.38 no.6 / pp.516-521 / 2004
  • Background: Radioactive iodine (RAI) therapy and whole-body scanning are fundamental to the treatment and follow-up of patients with differentiated thyroid cancer. It is generally accepted that a thyroid-stimulating hormone (TSH) level of at least 30 μU/ml is a prerequisite for the effective use of RAI, and that 4-6 weeks off thyroxine are required to attain this level. Because thyroxine withdrawal and the consequent hypothyroidism are often poorly tolerated, and occasionally might be hazardous, it is important to be certain that these assumptions are correct. We measured serial changes in serum TSH after total thyroidectomy or withdrawal of thyroxine in patients with thyroid cancer. Subjects and Methods: Serum TSH levels were measured weekly after thyroidectomy in 10 patients (group A) and after the discontinuation of thyroxine in 12 patients (group B). Symptoms and signs of hypothyroidism were also evaluated weekly using a modified Billewicz diagnostic index. Results: By the second week, 78% of group A patients and 17% of group B patients had serum TSH levels ≥30 μU/ml. By the third week, 89% of group A patients and 90% of group B patients had serum TSH levels ≥30 μU/ml. By the fourth week, all patients in both groups had reached the target TSH level, and there was no overt hypothyroidism. Conclusion: In all patients, serum TSH rose to the target concentration (≥30 μU/ml) within 4 weeks without significant manifestations of hypothyroidism. The schedule of RAI administration could therefore be adjusted to fit the needs and circumstances of individual patients, with a shorter preparation period than the conventional one.

The Ontology Based, the Movie Contents Recommendation Scheme, Using Relations of Movie Metadata (온톨로지 기반 영화 메타데이터간 연관성을 활용한 영화 추천 기법)

  • Kim, Jaeyoung;Lee, Seok-Won
    • Journal of Intelligence and Information Systems / v.19 no.3 / pp.25-44 / 2013
  • Accessing movie content has become easier with the advent of smart TV, IPTV and web services that can be used to search for and watch movies, and searches for movies matching user preferences are increasing. However, since the amount of available movie content is very large, users need considerable effort and time to find the movies they want. Hence, there has been much research on recommending personalized items through the analysis and clustering of user preferences and user profiles. In this study, we propose a recommendation system that uses an ontology-based knowledge base. Our ontology can represent not only relations between movie metadata but also relations between metadata and user profiles, and the relations between metadata express similarity between movies. To build the knowledge base, our ontology model considers two aspects: the movie metadata model and the user model. For the ontology-based movie metadata model, we selected the main metadata that affect users' choice of movies: genre, actor/actress, keywords and synopsis. The user model contains demographic information about the user and relations between the user and movie metadata. In our model, the movie ontology consists of seven concepts (Movie, Genre, Keywords, Synopsis Keywords, Character, and Person), eight attributes (title, rating, limit, description, character name, character description, person job, person name) and ten relations between concepts. For the knowledge base, we entered individual data for 14,374 movies under each concept of the contents ontology model. This movie metadata knowledge base is used to search for movies related to the metadata a user is interested in, and it can find similar movies through the relations between concepts. We also propose an architecture for movie recommendation consisting of four components. The first component searches for candidate movies based on the demographic information of the user; here we divide users into groups according to demographic information in order to recommend movies for each group, define the rules that assign users to groups, and generate the query used to search for candidate movies. The second component searches for candidate movies based on user preferences. When users choose a movie they consider metadata such as genre, actor/actress, synopsis and keywords; users input their preferences, and the system searches for movies based on them. Unlike existing movie recommendation systems, the proposed system can find similar movies through the relations between concepts. Each metadata item of the recommended candidate movies has a weight that is used to decide the recommendation order. The third component merges the results of the first two components: we calculate the weight of each movie from the weight values of its metadata and then sort the movies by weight. The fourth component analyzes the result of the third step, decides the level of contribution of each metadata item, and applies the contribution weight to the metadata; the result of this step is finally presented to users as the recommendation. We tested the usability of the proposed scheme with a web application implemented for the experiment using JSP, JavaScript and the Protégé API. In our experiment, we collected results from 20 men and women ranging in age from 20 to 29, and used 7,418 movies with ratings of no less than 7.0. We provided Top-5, Top-10 and Top-20 recommended movies to each user, who then chose the movies they were interested in. The average numbers of chosen movies were 2.1 in Top-5, 3.35 in Top-10 and 6.35 in Top-20, which is better than the results yielded by using each metadata item alone.
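The weighted-metadata ranking described above can be illustrated with a short sketch. This is not the authors' implementation: the paper stores movies and their relations in a Protégé ontology, whereas the records, field names and weights below are hypothetical stand-ins that only show how metadata overlap and per-metadata weights could be combined into a recommendation order.

```python
# Minimal sketch (not the paper's ontology-backed system): rank candidate movies
# by weighted overlap between a user's stated preferences and each movie's metadata.
from typing import Dict, List, Set

# Hypothetical contribution weights per metadata type (the paper derives the
# contribution of each metadata type in its fourth architecture component).
WEIGHTS = {"genre": 0.3, "actor": 0.3, "keyword": 0.4}

def score_movie(movie: Dict[str, Set[str]], prefs: Dict[str, Set[str]]) -> float:
    """Sum weighted overlaps between user preferences and movie metadata."""
    total = 0.0
    for field, weight in WEIGHTS.items():
        overlap = len(movie.get(field, set()) & prefs.get(field, set()))
        total += weight * overlap
    return total

def recommend(movies: List[Dict], prefs: Dict[str, Set[str]], top_n: int = 5) -> List[str]:
    ranked = sorted(movies, key=lambda m: score_movie(m, prefs), reverse=True)
    return [m["title"] for m in ranked[:top_n]]

if __name__ == "__main__":
    movies = [
        {"title": "A", "genre": {"action"}, "actor": {"kim"}, "keyword": {"heist"}},
        {"title": "B", "genre": {"drama"}, "actor": {"lee"}, "keyword": {"family"}},
    ]
    prefs = {"genre": {"action"}, "actor": {"kim"}, "keyword": {"heist", "crime"}}
    print(recommend(movies, prefs, top_n=1))  # -> ['A']
```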

Impact of Semantic Characteristics on Perceived Helpfulness of Online Reviews (온라인 상품평의 내용적 특성이 소비자의 인지된 유용성에 미치는 영향)

  • Park, Yoon-Joo;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.23 no.3 / pp.29-44 / 2017
  • In Internet commerce, consumers are heavily influenced by product reviews written by other users who have already purchased the product. However, as product reviews accumulate, it takes a lot of time and effort for consumers to check the massive number of reviews individually, and carelessly written reviews actually inconvenience consumers. Thus many online vendors provide mechanisms to identify the reviews that customers perceive as most helpful (Cao et al. 2011; Mudambi and Schuff 2010). For example, some online retailers, such as Amazon.com and TripAdvisor, allow users to rate the helpfulness of each review and use this feedback to rank and re-order them. However, many reviews have only a few feedback votes or none at all, which makes it hard to identify their helpfulness, and because it takes time to accumulate feedback, newly authored reviews do not have enough of it. For example, only 20% of the reviews in the Amazon Review Dataset (McAuley and Leskovec, 2013) have more than 5 feedback votes (Yan et al., 2014). The purpose of this study is to analyze the factors affecting the usefulness of online product reviews and to derive a forecasting model that selectively provides product reviews that can be helpful to consumers. To do this, we extracted the various linguistic, psychological, and perceptual elements included in product reviews using text-mining techniques and identified which of these elements determine the usefulness of product reviews. In particular, considering that the characteristics of product reviews and the determinants of usefulness may differ between apparel products (experiential goods) and electronic products (search goods), the review characteristics were compared within each product group and the determinants were established for each. This study used 7,498 apparel product reviews and 106,962 electronic product reviews from Amazon.com. To understand a review text, we first extract linguistic and psychological characteristics, such as word count and the levels of emotional tone and analytical thinking embedded in the text, using the widely adopted text analysis software LIWC (Linguistic Inquiry and Word Count). We then explore the descriptive statistics of the review texts for each category and statistically compare their differences using t-tests. Lastly, we perform regression analysis using the data mining software RapidMiner to identify the determinant factors. Comparing the review characteristics of electronic and apparel products, we found that reviewers used more words as well as longer sentences when writing reviews for electronic products. As for content characteristics, the electronic product reviews included more analytic words, carried more clout, related more to cognitive processes (CogProc), and contained more words expressing negative emotions (NegEmo) than the apparel product reviews. On the other hand, the apparel product reviews were more personal and authentic and contained more positive emotions (PosEmo) and perceptual processes (Percept) than the electronic product reviews. Next, we analyzed the determinants of review usefulness for the two product groups. In both product groups, reviews perceived as useful gave high product ratings, contained a larger total number of words, included many expressions involving perceptual processes, and contained fewer negative emotions. In addition, apparel product reviews with many comparative expressions, a low expertise index, and concise content with fewer words per sentence were perceived as useful. For electronic product reviews, those that were analytical, had a high expertise index, and contained many authentic expressions, cognitive processes, and positive emotions (PosEmo) were perceived as useful. These findings are expected to help consumers identify useful product reviews effectively in the future.
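As a rough illustration of the feature-and-regression pipeline described above: the study uses LIWC features and RapidMiner, but a minimal sketch can substitute hand-rolled text features and scikit-learn. The word lists, feature names and toy reviews below are assumptions for illustration only, not the study's variables or data.

```python
# Minimal sketch: extract simple review-text features and regress them on a
# helpfulness score. LIWC's PosEmo/NegEmo categories are approximated here by
# tiny hand-picked word lists purely for demonstration.
import numpy as np
from sklearn.linear_model import LinearRegression

POS_WORDS = {"great", "love", "perfect"}   # hypothetical stand-in for LIWC PosEmo
NEG_WORDS = {"bad", "broken", "awful"}     # hypothetical stand-in for LIWC NegEmo

def review_features(text: str) -> list:
    words = text.lower().split()
    sentences = max(text.count(".") + text.count("!") + text.count("?"), 1)
    return [
        len(words),                                         # total word count
        len(words) / sentences,                             # words per sentence
        sum(w.strip(".,!?") in POS_WORDS for w in words),   # positive-emotion words
        sum(w.strip(".,!?") in NEG_WORDS for w in words),   # negative-emotion words
    ]

# Toy (review text, helpfulness score) pairs.
reviews = [
    ("Great fit and great fabric. Love it!", 0.9),
    ("Broken on arrival. Awful packaging, bad support.", 0.2),
    ("It is okay. Nothing special about this jacket.", 0.4),
]
X = np.array([review_features(t) for t, _ in reviews])
y = np.array([h for _, h in reviews])

model = LinearRegression().fit(X, y)
print(dict(zip(["words", "words_per_sentence", "pos", "neg"], model.coef_.round(3))))
```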

Suggestion of Urban Regeneration Type Recommendation System Based on Local Characteristics Using Text Mining (텍스트 마이닝을 활용한 지역 특성 기반 도시재생 유형 추천 시스템 제안)

  • Kim, Ikjun;Lee, Junho;Kim, Hyomin;Kang, Juyoung
    • Journal of Intelligence and Information Systems / v.26 no.3 / pp.149-169 / 2020
  • "The Urban Renewal New Deal project", one of the government's major national projects, is about developing underdeveloped areas by investing 50 trillion won in 100 locations on the first year and 500 over the next four years. This project is drawing keen attention from the media and local governments. However, the project model which fails to reflect the original characteristics of the area as it divides project area into five categories: "Our Neighborhood Restoration, Housing Maintenance Support Type, General Neighborhood Type, Central Urban Type, and Economic Base Type," According to keywords for successful urban regeneration in Korea, "resident participation," "regional specialization," "ministerial cooperation" and "public-private cooperation", when local governments propose urban regeneration projects to the government, they can see that it is most important to accurately understand the characteristics of the city and push ahead with the projects in a way that suits the characteristics of the city with the help of local residents and private companies. In addition, considering the gentrification problem, which is one of the side effects of urban regeneration projects, it is important to select and implement urban regeneration types suitable for the characteristics of the area. In order to supplement the limitations of the 'Urban Regeneration New Deal Project' methodology, this study aims to propose a system that recommends urban regeneration types suitable for urban regeneration sites by utilizing various machine learning algorithms, referring to the urban regeneration types of the '2025 Seoul Metropolitan Government Urban Regeneration Strategy Plan' promoted based on regional characteristics. There are four types of urban regeneration in Seoul: "Low-use Low-Level Development, Abandonment, Deteriorated Housing, and Specialization of Historical and Cultural Resources" (Shon and Park, 2017). In order to identify regional characteristics, approximately 100,000 text data were collected for 22 regions where the project was carried out for a total of four types of urban regeneration. Using the collected data, we drew key keywords for each region according to the type of urban regeneration and conducted topic modeling to explore whether there were differences between types. As a result, it was confirmed that a number of topics related to real estate and economy appeared in old residential areas, and in the case of declining and underdeveloped areas, topics reflecting the characteristics of areas where industrial activities were active in the past appeared. In the case of the historical and cultural resource area, since it is an area that contains traces of the past, many keywords related to the government appeared. Therefore, it was possible to confirm political topics and cultural topics resulting from various events. Finally, in the case of low-use and under-developed areas, many topics on real estate and accessibility are emerging, so accessibility is good. It mainly had the characteristics of a region where development is planned or is likely to be developed. Furthermore, a model was implemented that proposes urban regeneration types tailored to regional characteristics for regions other than Seoul. Machine learning technology was used to implement the model, and training data and test data were randomly extracted at an 8:2 ratio and used. 
In order to compare the performance between various models, the input variables are set in two ways: Count Vector and TF-IDF Vector, and as Classifier, there are 5 types of SVM (Support Vector Machine), Decision Tree, Random Forest, Logistic Regression, and Gradient Boosting. By applying it, performance comparison for a total of 10 models was conducted. The model with the highest performance was the Gradient Boosting method using TF-IDF Vector input data, and the accuracy was 97%. Therefore, the recommendation system proposed in this study is expected to recommend urban regeneration types based on the regional characteristics of new business sites in the process of carrying out urban regeneration projects."
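A minimal sketch of the best-performing configuration named above (TF-IDF input with a Gradient Boosting classifier and an 8:2 train/test split) using scikit-learn; the toy documents and type labels are illustrative, not the study's collected text data.

```python
# Minimal sketch: TF-IDF vectorization + Gradient Boosting classification of
# region descriptions into urban regeneration types, with an 8:2 random split.
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Toy documents standing in for the ~100,000 collected texts.
docs = [
    "old housing redevelopment real estate prices",
    "closed factory industrial decline jobs",
    "historic palace cultural festival heritage",
    "subway access vacant lot development plan",
] * 10  # repeated so a stratified split is possible
labels = (["deteriorated_housing", "abandonment",
           "historic_cultural", "low_use"] * 10)

X_train, X_test, y_train, y_test = train_test_split(
    docs, labels, test_size=0.2, random_state=42, stratify=labels)

model = make_pipeline(TfidfVectorizer(), GradientBoostingClassifier(random_state=42))
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```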

Ensemble Learning with Support Vector Machines for Bond Rating (회사채 신용등급 예측을 위한 SVM 앙상블학습)

  • Kim, Myoung-Jong
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.29-45 / 2012
  • Bond rating is regarded as an important event for measuring the financial risk of companies and for determining the investment returns of investors. As a result, predicting companies' credit ratings by applying statistical and machine learning techniques has been a popular research topic. The statistical techniques traditionally used in bond rating include multiple regression, multiple discriminant analysis (MDA), logistic models (LOGIT), and probit analysis. However, one major drawback is that they rely on strict assumptions: linearity, normality, independence among predictor variables, and pre-existing functional forms relating the criterion variables and the predictor variables. These strict assumptions have limited the application of traditional statistics to the real world. Machine learning techniques used in bond rating prediction models include decision trees (DT), neural networks (NN), and the Support Vector Machine (SVM). In particular, SVM is recognized as a new and promising classification and regression method. SVM learns a separating hyperplane that maximizes the margin between two categories. It is simple enough to be analyzed mathematically and leads to high performance in practical applications. SVM implements the structural risk minimization principle and seeks to minimize an upper bound of the generalization error. In addition, the solution of SVM may be a global optimum, so overfitting is unlikely to occur. SVM also does not require many data samples for training, since it builds prediction models using only representative samples near the boundaries, called support vectors. A number of experimental studies have indicated that SVM has been successfully applied in a variety of pattern recognition fields. However, there are three major drawbacks that can degrade SVM's performance. First, SVM was originally proposed for solving binary classification problems. Methods for combining SVMs for multi-class classification, such as One-Against-One and One-Against-All, have been proposed, but they do not improve performance in multi-class problems as much as SVM does in binary classification. Second, approximation algorithms (e.g. decomposition methods, the sequential minimal optimization algorithm) can be used for efficient multi-class computation to reduce computation time, but they can deteriorate classification performance. Third, multi-class prediction problems suffer from the data imbalance problem, which occurs when the number of instances in one class greatly outnumbers the number of instances in another class. Such data sets often lead to a default classifier with a skewed boundary and thus reduced classification accuracy. SVM ensemble learning is one machine learning approach for coping with these drawbacks. Ensemble learning is a method for improving the performance of classification and prediction algorithms. AdaBoost is one of the widely used ensemble learning techniques: it constructs a composite classifier by sequentially training classifiers while increasing the weight of misclassified observations through iterations. Observations that are incorrectly predicted by previous classifiers are chosen more often than examples that are correctly predicted. Boosting thus attempts to produce new classifiers that are better able to predict the examples for which the current ensemble's performance is poor, and in this way it can reinforce the training of misclassified observations of the minority class. This paper proposes multiclass Geometric Mean-based Boosting (MGM-Boost) to resolve the multiclass prediction problem. Since MGM-Boost introduces the notion of the geometric mean into AdaBoost, its learning process can take into account the geometric mean-based accuracy and errors across classes. This study applies MGM-Boost to a real-world bond rating case for Korean companies to examine its feasibility. 10-fold cross-validation was performed three times with different random seeds to ensure that the comparison among the three classifiers did not happen by chance. For each 10-fold cross-validation, the entire data set is first partitioned into ten equal-sized sets, and each set is in turn used as the test set while the classifier trains on the other nine sets; that is, the cross-validated folds have been tested independently for each algorithm. Through these steps, we obtained results for the classifiers on each of the 30 experiments. In the comparison of arithmetic mean-based prediction accuracy between individual classifiers, MGM-Boost (52.95%) shows higher prediction accuracy than both AdaBoost (51.69%) and SVM (49.47%). MGM-Boost (28.12%) also shows higher prediction accuracy than AdaBoost (24.65%) and SVM (15.42%) in terms of geometric mean-based prediction accuracy. A t-test is used to examine whether the performance of the classifiers over the 30 folds is significantly different; the results indicate that the performance of MGM-Boost is significantly different from that of the AdaBoost and SVM classifiers at the 1% level. These results mean that MGM-Boost can provide robust and stable solutions to multi-class problems such as bond rating.
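Since the abstract does not spell out MGM-Boost's weight-update rule, the sketch below only illustrates the evaluation side: the geometric mean of per-class recalls (the geometric mean-based accuracy referred to above) computed for a standard AdaBoost baseline under 10-fold cross-validation on synthetic, imbalanced data.

```python
# Minimal sketch: geometric mean-based accuracy for a multiclass, imbalanced
# problem, evaluated for an ordinary AdaBoost baseline with 10-fold CV.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import StratifiedKFold

def geometric_mean_accuracy(y_true, y_pred) -> float:
    """Geometric mean of per-class recalls; it punishes neglect of minority classes."""
    recalls = recall_score(y_true, y_pred, average=None)
    return float(np.prod(recalls) ** (1.0 / len(recalls)))

# Synthetic 3-class imbalanced data standing in for the bond rating data set.
X, y = make_classification(n_samples=600, n_classes=3, n_informative=6,
                           weights=[0.6, 0.3, 0.1], random_state=0)

scores = []
for train_idx, test_idx in StratifiedKFold(n_splits=10, shuffle=True,
                                           random_state=0).split(X, y):
    clf = AdaBoostClassifier(n_estimators=100, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    scores.append(geometric_mean_accuracy(y[test_idx], clf.predict(X[test_idx])))

print("mean geometric-mean accuracy over 10 folds:", round(float(np.mean(scores)), 4))
```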

Measuring the Public Service Quality Using Process Mining: Focusing on N City's Building Licensing Complaint Service (프로세스 마이닝을 이용한 공공서비스의 품질 측정: N시의 건축 인허가 민원 서비스를 중심으로)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.35-52 / 2019
  • As public services are provided in various forms, including e-government, public demand for public service quality is increasing. Continuous measurement and improvement of quality is needed, but traditional surveys are costly, time-consuming and limited. Therefore, there is a need for an analytical technique that can measure the quality of public services quickly and accurately at any time, based on the data generated by those services. In this study, we analyzed the quality of public services based on data, using process mining techniques, for the building licensing complaint service of N city. This service was chosen because it provides the data necessary for analysis and because the approach can be spread to other institutions through public service quality management. We conducted process mining on a total of 3,678 building license complaint cases in N city over two years from January 2014, and identified the process maps and the departments with high frequency and long processing times. According to the results, some departments were crowded at certain points in time while others handled relatively few cases, and there were reasonable grounds to suspect that an increase in the number of complaints would increase the time required to complete them. The time required to complete a complaint varied from the same day to one year and 146 days. The cumulative frequency of the top four departments (the Sewage Treatment Division, the Waterworks Division, the Urban Design Division, and the Green Growth Division) exceeded 50%, and the cumulative frequency of the top nine departments exceeded 70%; the most heavily involved departments were few, and the load among departments was highly unbalanced. Most complaint cases follow a variety of different process patterns. The analysis shows that the number of 'complement' decisions (requests for supplementary documents) has the greatest impact on the length of a complaint; this is interpreted to mean that a 'complement' decision requires a physical period in which the complainant supplements and resubmits the documents, lengthening the time until the entire complaint is completed. To address this, the overall processing time of complaints can be drastically reduced by preparing documents thoroughly before filing, which reduces the need for 'complement' decisions. By clarifying and disclosing the causes of such decisions and their solutions as important data in the system, complainants can be helped to prepare in advance and can be confident that documents prepared from the disclosed information will be accepted, making the handling of complaints sufficiently transparent and predictable. Documents prepared with pre-disclosed information are likely to be processed without problems, which not only shortens the processing period but also improves work efficiency by eliminating the need for renegotiation or repeated tasks from the processor's point of view. The results of this study can be used to find departments with a high burden of civil complaints at certain points in time and to flexibly manage workforce allocation between departments. In addition, by analyzing the patterns of the departments participating in consultation according to the characteristics of the complaints, the results can be used for automation or recommendation when requesting a consultation department. Furthermore, by using the various data generated during the complaint process together with machine learning techniques, the patterns of the complaint process can be found; by turning this into an algorithm and applying it to the system, it can be used for the automation and intelligence of civil complaint processing. This study is expected to be used to suggest future public service quality improvements through process mining analysis of civil services.
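The department-frequency and processing-time findings above come from process mining on the complaint event log. As a loose, simplified illustration (not the study's method or data), the sketch below computes per-department consultation frequency and per-case durations from a toy event log with assumed column names; a full process-mining analysis would additionally discover the process map itself.

```python
# Minimal sketch: summary statistics over a complaint event log with pandas.
# The column names and the toy log are assumptions for illustration only.
import pandas as pd

log = pd.DataFrame({
    "case_id":    ["C1", "C1", "C1", "C2", "C2"],
    "department": ["Waterworks Division", "Sewage Treatment Division",
                   "Urban Design Division", "Sewage Treatment Division",
                   "Green Growth Division"],
    "timestamp":  pd.to_datetime(["2014-01-02", "2014-01-10", "2014-02-01",
                                  "2014-03-05", "2014-03-06"]),
})

# Which departments appear most often in consultations?
dept_freq = log["department"].value_counts()
print(dept_freq, "\n")

# How long does each complaint take from first to last recorded event?
durations = log.groupby("case_id")["timestamp"].agg(lambda t: t.max() - t.min())
print(durations)
```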

Therapeutic Efficacy of Prednisolone Withdrawal Followed by Recombinant α Interferon in Children with Chronic Hepatitis B (소아 만성 B형 간염 환자에서 스테로이드 이탈 요법 후 인터페론 병용 투여의 치료 효과)

  • Ryu, Na-Eun;Kim, Byung-Ju;Ma, Jae-Sook;Hwang, Tai-Ju
    • Pediatric Gastroenterology, Hepatology & Nutrition / v.2 no.2 / pp.169-177 / 1999
  • Purpose: To evaluate the efficacy of interferon alpha therapy with or without prednisolone in children with chronic hepatitis B. Methods: Twenty-eight children (22 boys, 6 girls, mean age 130 months) were seropositive for HBsAg, HBeAg and HBV DNA; 11 had chronic persistent hepatitis and 17 had chronic active hepatitis. The patients were divided into two groups depending on their inflammatory activity on liver biopsy, pretreatment serum ALT levels and HBV DNA levels. Fourteen children (group 1: chronic active hepatitis, ALT ≥ 100 IU/L and HBV DNA ≤ 100 pg/300 μL) received interferon alpha 2a, 5 MU/m² of body surface, three times weekly for 6 months. Fourteen children (group 2: chronic persistent hepatitis, or chronic active hepatitis with ALT < 100 IU/L or HBV DNA > 100 pg/300 μL) received prednisolone in decreasing daily doses of 60 mg/m², 40 mg/m², and 20 mg/m², each for 2 weeks, followed after 2 weeks by interferon alpha 2a on the same schedule. At the end of therapy, three end points were analyzed: HBeAg seroconversion, the serum ALT normalization rate and clearance of serum HBV DNA. Results: At the end of treatment, HBe antigen-to-antibody seroconversion was higher, but not significantly so, in group 1 than in group 2 (71.4% vs. 50.0%). Only one patient in group 2, who lost HBeAg, also cleared HBsAg. ALT normalization was similar in both groups (64.3% in group 1 vs. 55.6% in group 2). Clearance of serum HBV DNA was observed in 78.6% of patients in group 1 and 64.3% in group 2, with no significant difference. A complete response was achieved similarly in both groups (57.1% in group 1 vs. 50.0% in group 2). Interferon alpha therapy with prednisolone priming was well tolerated, and all children finished therapy. Conclusion: Combined therapy with prednisolone followed by interferon alpha may be safe and effective in inducing serological and biochemical remission of the disease in approximately 50% of children with chronic hepatitis B who have a high level of viral replication and less active disease. However, a controlled study should be performed to confirm these results.


Effects of High Glucose and Advanced Glycosylation Endproducts(AGE) on the in vitro Permeability Model (당과 후기당화합물의 생체 외 사구체여과율 모델에 대한 역할)

  • Lee Jun-Ho;Ha Tae-Sun
    • Childhood Kidney Diseases / v.10 no.1 / pp.8-17 / 2006
  • Purpose: We describe the changes in rat glomerular epithelial cells exposed to high levels of glucose and advanced glycosylation endproducts (AGE) under in vitro diabetic conditions. We expected morphological alterations of the glomerular epithelial cells and permeability changes, and we relate the results to a mechanism of proteinuria in diabetes mellitus. Methods: For the preparation of AGE, we made a 0.2 M glucose-6-phosphate solution mixed with PBS (pH 7.4) containing 50 mg/mL BSA and a protease inhibitor; BSA alone was used as the control. We prepared and labeled five culture dishes as follows: B5 - normal glucose (5 mM) + BSA; B30 - high glucose (30 mM) + BSA; A5 - normal glucose (5 mM) + AGE; A30 - high glucose (30 mM) + AGE; A/B 25 - normal glucose (5 mM) + 25 mM mannitol (osmotic control). After incubation periods of two days and seven days, we measured the amount of heparan sulfate proteoglycan (HSPG) in each dish by ELISA and compared it with the B5 dish on the 2nd and 7th incubation days. We observed the morphological changes of the epithelial cells in each culture dish using scanning electron microscopy (SEM). We performed a permeability assay of the glomerular epithelial cells using a cellulose semi-permeable membrane, measuring the amount of BSA filtered through the apical chamber over 2 hours by sandwich ELISA. Results: On the 2nd incubation day, there was no significant difference in the amount of HSPG among the 5 culture dishes. On the 7th incubation day, however, the amount of HSPG had increased by 10% compared with the B5 dish on the 2nd day, except in the A30 dish (P<0.05). Compared with the B5 dish on the 7th day, the amounts of HSPG in the A30 and B30 dishes decreased to 77.8% and 95.3% of baseline, respectively (P>0.05). In the osmotic control group (A/B 25) no significant difference was observed. On SEM, we could see separated intercellular junctions and fused microvilli of glomerular epithelial cells in the culture dishes to which AGE was added. In the permeability assay, the permeability to BSA increased by 19% only in the A30 dish on the 7th day compared with the B5 dish on the 7th day (P<0.05). Conclusion: We observed not only that high levels of glucose and AGE decrease the production of HSPG by glomerular epithelial cells in vitro, but also that their effects are additive; the effect of AGE is greater than that of glucose. These results seem to correlate with defects in the charge-selective barrier. The morphological changes of disrupted intercellular junctions and fused microvilli of glomerular epithelial cells seem to correlate with defects in the size-selective barrier. Together, these can explain the increased permeability of glomerular epithelial cells under in vitro diabetic conditions.


Understanding User Motivations and Behavioral Process in Creating Video UGC: Focus on Theory of Implementation Intentions (Video UGC 제작 동기와 행위 과정에 관한 이해: 구현의도이론 (Theory of Implementation Intentions)의 적용을 중심으로)

  • Kim, Hyung-Jin;Song, Se-Min;Lee, Ho-Geun
    • Asia Pacific Journal of Information Systems / v.19 no.4 / pp.125-148 / 2009
  • UGC (User Generated Content) is emerging as the center of e-business in the Web 2.0 era. The trend reflects the changing roles of users in the production and consumption of content on websites and helps us understand new strategies of websites such as web portals and social network sites. Nowadays, we consume content created by other non-professional users for both utilitarian (e.g., knowledge) and hedonic (e.g., fun) value. Content we produce ourselves (e.g., photos, videos) is also posted on websites so that our friends, family, and even the public can consume it. This means that non-professionals, who used to be a passive audience, are now creating content and sharing their UGC with others on the Web. Accessible media, tools, and applications have also reduced the difficulty and complexity of creating content. Realizing that users create plenty of material that is very interesting to other people, media companies (i.e., web portals and social networking websites) are adjusting their strategies and business models accordingly. Increased demand for UGC may lead to more website visits, which are the source of advertising revenue. Therefore, these companies put more effort into making their websites open platforms where UGC can be created and shared among users without technical and methodological difficulties. Many websites have adopted new technologies such as RSS and openAPI, and some have even changed the structure of their web pages so that UGC can be seen more often and by more visitors. This mainstream position of UGC on websites indicates that acquiring more UGC and supporting participating users have become important to media companies. Although those companies need to understand why general users have shown increasing interest in creating and posting content and what is important to them in the production process, few research results exist to address these issues, and the behavioral process of creating video UGC has not been explored enough for it to be fully understood. With a solid theoretical background (the theory of implementation intentions), part of our proposed research model mirrors the process of user behavior in creating video content, which consists of intention to upload, intention to edit, edit, and upload. In addition, to explain how those behavioral intentions are developed, we investigated the influence of antecedents from three motivational perspectives: intrinsic, editing software-oriented, and website network effect-oriented. First, from the intrinsic motivation perspective, we studied the roles of self-expression, enjoyment, and social attention in forming the intention to edit with preferred editing software and the intention to upload video content to preferred websites. Second, we explored the roles of editing software for non-professionals who edit video content, in terms of how it makes the production process easier and how useful it is in that process. Finally, from the website characteristic-oriented perspective, we investigated the role of a website's network externality as an antecedent of users' intention to upload to preferred websites. The rationale is that posting UGC on websites is basically a social-oriented behavior; thus, users prefer websites with a high level of network externality for uploading content. This study adopted a longitudinal research design; we emailed recipients twice with different questionnaires.
Guided by an invitation email containing a link to the web survey page, respondents answered most of the questions, except those on edit and upload, in the first survey. They were asked to provide information about the UGC editing software they mainly used and their preferred website for uploading edited content, and then to answer the related questions; for example, before answering questions regarding network externality, each respondent had to name the website to which they would be willing to upload. At the end of the first survey, we asked whether they agreed to participate in a corresponding survey one month later. Over twenty days, 333 complete responses were gathered in the first survey. One month later, we emailed those recipients to ask for participation in the second survey, and 185 of the 333 recipients (about 56 percent) answered. Personalized questionnaires were provided to remind them of the names of the editing software and website they had reported in the first survey, and they reported the degree to which they had edited with that software and uploaded video content to that website during the past month. All recipients of the two surveys received book exchange tickets (about 5,000~10,000 Korean Won) according to the frequency of their participation. PLS analysis shows that user behavior in creating video content is well explained by the theory of implementation intentions. In fact, intention to upload significantly influences intention to edit in the process of accomplishing the goal behavior, upload. These relationships reveal the behavioral process, previously unclear, by which users create video content for uploading, and they also highlight the important role of editing in that process. Regarding the intrinsic motivations, the results illustrate that users are likely to edit their own video content in order to express intrinsic traits such as their thoughts and feelings, and that their intention to upload content to a preferred website is formed because they want to attract attention from others through content reflecting themselves. This result corresponds well to the role of the website characteristic, namely network externality: based on the PLS results, the network effect of a website has a significant influence on users' intention to upload to that website, indicating that users with social attention motivations are likely to upload their video UGC to a website whose network is large enough to realize those motivations easily. Finally, regarding editing software-oriented motivations, making the exclusively provided editing software more user-friendly (i.e., ease of use, usefulness) plays an important role in leading to users' intention to edit. Our research contributes to both academic scholars and practitioners. For researchers, our results show that the theory of implementation intentions applies well to the video UGC context and is very useful for explaining the relationship between implementation intentions and goal behaviors. With the theory, this study theoretically and empirically confirmed that editing is a distinct and important behavior in addition to uploading, and we tested the behavioral process of ordinary users in creating video UGC, focusing on the significant motivational factors at each step. In addition, parts of our research model are rooted in solid theoretical backgrounds, such as the technology acceptance model and the theory of network externality, to explain the effects of UGC-related motivations.
For practitioners, our results suggest that media companies need to restructure their websites so that users' needs for social interaction through UGC (e.g., self-expression, social attention) are well met. We also emphasize the strategic importance of a website's network size in leading non-professionals to upload video content to it; such websites need to find ways to utilize network effects to acquire more UGC. Finally, we suggest that ways of improving editing software be considered as a means of increasing edit behavior, which is a very important process leading to UGC uploading.