• Title/Summary/Keyword: 연하 (swallowing)

Clinical Indices Predicting Resorption of Pleural Effusion in Tuberculous Pleurisy (결핵성 늑막염에서 삼출액의 흡수에 영향을 미치는 임상적 지표)

  • Lee, Joe-Ho;Chung, Hee-Soon;Lee, Jeong-Sang;Cho, Sang-Rok;Yoon, Hae-Kyung;Song, Chee-Sung
    • Tuberculosis and Respiratory Diseases / v.42 no.5 / pp.660-668 / 1995
  • Background: Tuberculous pleuritis is generally said to respond well to anti-tuberculous drugs, so that no aggressive therapeutic management beyond diagnostic thoracentesis is thought to be necessary. In clinical practice, however, we often see patients who later need decortication because of dyspnea caused by pleural loculation or thickening despite several months of anti-tuberculous drug therapy. We therefore wanted to identify the clinical differences between a group that required decortication for complications of tuberculous pleuritis despite anti-tuberculous drug therapy and a group that improved after 9 months of anti-tuberculous drug therapy alone. Methods: We reviewed 20 tuberculous pleuritis patients (group 1) who underwent decortication due to dyspnea caused by pleural loculation or severe pleural thickening despite anti-tuberculous drug therapy for 9 or more months, and 20 other tuberculous pleuritis patients (group 2) who improved with anti-tuberculous drugs alone and had a similar degree of initial pleural effusion and similar age and sex distributions. We compared between the two groups the duration of symptoms before anti-tuberculous treatment, pleural fluid biochemistry (glucose, LDH, protein), pleural fluid cell count, and WBC differential count, and also examined whether preoperative and postoperative PFT values differed in the patients who underwent decortication. Results: 1) Group 1 patients had a lower glucose level (63.3±30.8 mg/dl) than group 2 (98.5±34.2 mg/dl, p<0.05), a higher LDH level (776.3±266.0 IU/L vs. 376.3±123.1 IU/L, p<0.05), and a longer duration of symptoms before treatment (2.0±1.7 months vs. 1.1±1.2 months, p<0.05). 2) In group 1, FVC improved from 2.55±0.80 L preoperatively to 2.99±0.78 L postoperatively (p<0.05), and FEV1 from 2.19±0.70 L/sec preoperatively to 2.50±0.69 L/sec postoperatively (p<0.05). 3) There was no difference between groups 1 and 2 in pleural fluid protein level (5.05±1.01 gm/dl vs. 5.15±0.77 gm/dl, p>0.05) or WBC differential count. Conclusion: In tuberculous pleuritis, a relatively low pleural fluid glucose level, a high LDH level, or a long duration of symptoms before treatment probably carries a risk of complications. A prospective study should be performed to confirm this.
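
A minimal sketch of the kind of two-group comparison reported above, as an independent two-sample t-test on pleural fluid glucose. The per-patient values are hypothetical, drawn around the reported means and standard deviations (63.3±30.8 vs. 98.5±34.2 mg/dl), not the study's raw data.

```python
# Hypothetical two-group comparison of pleural fluid glucose (mg/dl),
# 20 patients per group, mirroring the abstract's summary statistics.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group1_glucose = rng.normal(63.3, 30.8, size=20)  # decortication group
group2_glucose = rng.normal(98.5, 34.2, size=20)  # drug-only group

# Independent two-sample t-test, a standard choice for this comparison.
t_stat, p_value = stats.ttest_ind(group1_glucose, group2_glucose)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # expect p < 0.05 here
```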

A Clinical Study of Corrosive Esophagitis (식도부식증에 대한 임상적 고찰)

  • 조진규;차창일;조중생;최춘기
    • Proceedings of the KOR-BRONCHOESO Conference / 1981.05a / pp.7-8 / 1981
  • The authors clinically observed 34 cases of corrosive esophagitis caused by various corrosive agents at Kyung Hee University Hospital from Aug. 1978 to Dec. 1980. The results were as follows: 1. Among the 34 patients, 19 (55.9%) were male and 15 (44.1%) female; the most frequent age group was the 3rd decade. 2. Eighteen cases (52.9%) came to the hospital within 24 hours after ingestion of the agent, and 13 cases (38.2%) within 2 to 7 days. 3. The seasonal distribution was highest in spring (35.3%). 4. The circumstance of the accident was a suicide attempt in 27 cases (79.4%) and accidental ingestion in 7 cases (20.6%). 5. Acetic acid was the most commonly used agent (23 cases, 67.6%); lye and insecticides were next in order. 6. Common chief complaints were swallowing difficulty and sore throat. 7. The average hospital stay was 14.8 days. 8. Esophagography was performed between 3 and 7 days after ingestion in 13 cases (38.2%); findings were constriction at the 1st narrowing portion in 4 cases (30.8%) and within normal limits in 3 cases (23.1%). 9. Esophagoscopy was performed in 31 cases (91.2%) between 2 and 7 days after ingestion, revealing edema and coating at the entrance of the esophagus in 9 cases (29.0%); diffuse edema over the entire length of the esophagus and within-normal-limits findings were next in order. 10. Laboratory results were as follows: anemia in 1 case (2.9%), leukocytosis in 21 cases (61.8%), increased ESR in 9 cases (26.5%), markedly increased BUN and creatinine in 3 cases (8.8%), hypokalemia in 1 case (2.9%), proteinuria in 10 cases (29.4%), hematuria in 4 cases (11.8%), and coca-cola-colored urine in 3 cases (8.8%). 11. Associated diseases were cancer in 3 cases (8.8%), diabetes mellitus in 1 case (2.9%), and manic-depressive illness in 1 case (2.9%). 12. Various treatments were given: esophageal and gastric washing in 23 cases (67.6%) as emergency treatment, antibiotics in 32 cases (94.1%), steroids in 30 cases (88.2%), bougienage in 5 cases (14.7%), hemodialysis in 1 case (2.9%), and partial esophagectomy with gastrostomy and gastroileal anastomosis in 1 case (2.9%). 13. Serious complications were observed in 9 cases (26.5%): esophageal stricture in 6 cases (17.6%), acute renal failure in 1 case (2.9%), pneumomediastinum with pneumonia in 1 case (2.9%), and pneumonia in 1 case (2.9%).

The Framework of Research Network and Performance Evaluation on Personal Information Security: Social Network Analysis Perspective (개인정보보호 분야의 연구자 네트워크와 성과 평가 프레임워크: 소셜 네트워크 분석을 중심으로)

  • Kim, Minsu;Choi, Jaewon;Kim, Hyun Jin
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.177-193 / 2014
  • Over the past decade, there has been a rapid diffusion of electronic commerce and a rising number of interconnected networks, resulting in an escalation of security threats and privacy concerns. Electronic commerce has a built-in trade-off between the necessity of providing at least some personal information to consummate an online transaction and the risk of negative consequences from providing such information. More recently, the frequent disclosure of private information has raised concerns about privacy and its impacts, motivating researchers in various fields to explore information privacy issues. Accordingly, the need for information privacy policies and technologies for collecting and storing data has grown, as has information privacy research in fields such as medicine, computer science, business, and statistics. Various information security incidents have made finding experts in the information security field an important issue, and objective measures for finding such experts are required, as the process is currently rather subjective. Based on social network analysis, this paper focuses on a framework for evaluating the process of finding experts in the information security field. We collected data from the National Discovery for Science Leaders (NDSL) database, initially gathering about 2,000 papers covering the period between 2005 and 2013. Outliers and irrelevant papers were dropped, leaving 784 papers to test the suggested hypotheses. The co-authorship network data on co-author relationships, publishers, affiliations, and so on were analyzed using social network measures including centrality and structural holes. The results of our model estimation are as follows. With the exception of Hypothesis 3, which deals with the relationship between eigenvector centrality and performance, all of our hypotheses were supported. In line with our hypothesis, degree centrality (H1) had a positive influence on researchers' publishing performance (p<0.001), indicating that as the degree of cooperation increased, so did publishing performance. Closeness centrality (H2) was also positively associated with publishing performance (p<0.001), suggesting that as the efficiency of information acquisition increased, so did publishing performance. This paper identified differences in publishing performance among researchers. The analysis can be used to identify core experts and evaluate their performance in the information privacy research field, and the co-authorship network for information privacy can aid in understanding the deep relationships among researchers. In addition, by extracting characteristics of publishers and affiliations, this paper shows how social network measures can be used to find experts in the information privacy field. Social concerns about securing the objectivity of experts have increased, because experts in the information privacy field frequently participate in political consultation and in business education support and evaluation. In terms of practical implications, this research suggests an objective framework for identifying experts in the information privacy field and is useful for those in charge of managing research human resources. This study has some limitations, which provide opportunities and suggestions for future research.
The small sample size makes it difficult to generalize the observed differences in information diffusion across media and proximity. Further studies could therefore use a larger sample and greater media diversity, and explore in more detail how information diffusion differs by media type and information proximity. Moreover, previous network research has commonly assumed a causal relationship between the independent and dependent variables (Kadushin, 2012). In this study, degree centrality as an independent variable might have a causal relationship with performance as a dependent variable. However, in network analysis research, network indices can only be computed after the network relationships have been created. An annual analysis could help mitigate this limitation.
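
The centrality measures named in this abstract are standard social network metrics. Below is a minimal sketch with networkx on a toy co-authorship graph; the author labels and edges are invented for illustration, not drawn from the NDSL data.

```python
# Toy co-authorship graph: degree centrality (H1), closeness centrality
# (H2), eigenvector centrality (H3), and Burt's constraint as a
# structural-hole measure.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("A", "B"), ("A", "C"), ("B", "C"),  # each edge = a co-authored paper
    ("C", "D"), ("D", "E"),
])

degree = nx.degree_centrality(G)        # extent of cooperation
closeness = nx.closeness_centrality(G)  # efficiency of information acquisition
eigen = nx.eigenvector_centrality(G)    # influence of well-connected partners
constraint = nx.constraint(G)           # structural holes (lower = more brokerage)

for a in sorted(G):
    print(a, f"deg={degree[a]:.2f}", f"clo={closeness[a]:.2f}",
          f"eig={eigen[a]:.2f}", f"con={constraint[a]:.2f}")
```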

Growth Efficiency, Carcass Quality Characteristics and Profitability of 'High'-Market Weight Pigs ('고체중' 출하돈의 성장효율, 도체 품질 특성 및 수익성)

  • Park, M.J.;Ha, D.M.;Shin, H.W.;Lee, S.H.;Kim, W.K.;Ha, S.H.;Yang, H.S.;Jeong, J.Y.;Joo, S.T.;Lee, C.Y.
    • Journal of Animal Science and Technology / v.49 no.4 / pp.459-470 / 2007
  • Domestically, finishing pigs are marketed at 110 kg on average. However, it is thought to be feasible to increase the market weight to 120 kg or greater without decreasing carcass quality, because most domestic pigs for pork production have descended from lean-type lineages. The present study was undertaken to investigate the growth efficiency and profitability of 'high'-market wt pigs and the physicochemical characteristics and consumer acceptability of the high-wt carcass. A total of 96 (Yorkshire × Landrace) × Duroc-crossbred gilts and barrows were fed a finisher diet ad libitum in 16 pens beginning from 90-kg BW, after which the animals were slaughtered at 110 kg (control) or 'high' market wt (135 and 125 kg in gilts and barrows, respectively) and their carcasses were analyzed. Average daily gain and gain:feed did not differ between the two sex or market wt groups, whereas average daily feed intake was greater in the barrow and high-market wt groups than in the gilt and 110-kg market wt groups, respectively (P<0.01). Backfat thicknesses of the high-market wt gilts and barrows corrected for 135- and 125-kg live wt, which were 23.7 and 22.5 mm, respectively, were greater (P<0.01) than those of their corresponding 110-kg counterparts (19.7 and 21.1 mm). Percentages of the trimmed primal cuts per total trimmed lean (w/w), except for that of loin, differed statistically (P<0.05) between the two sex or market wt groups, but the numerical differences were rather small. Crude protein content of the loin was greater in the high vs. 110-kg market wt group (P<0.01), but crude fat and moisture contents and other physicochemical characteristics, including the color of this primal cut, did not differ between the two sexes or market weights. Aroma, marbling, and overall acceptability scores were greater in the high vs. 110-kg market wt group in sensory evaluation of fresh loin (P<0.01); however, overall acceptabilities of cooked loin, belly, and ham did not differ between the two market wt groups. Marginal profits of the 135- and 125-kg high-market wt gilts and barrows relative to their corresponding 110-kg counterparts were approximately -35,000 and 3,500 won per head under the current carcass grading standard and price. However, had it not been for the upper wt limits for the A- and B-grade carcasses, the marginal profits of the high-market wt gilt and barrow would have amounted to 22,000 and 11,000 won per head, respectively. In summary, 120~125-kg market pigs are likely to meet consumer preference better than 110-kg ones and to bring a profit equal to or slightly greater than that of the latter even under the current carcass grading standard. Moreover, if the upper wt limits of the A- and B-grade carcasses were removed or increased to accommodate the high-wt carcass, the optimum market weights for the gilt and barrow would fall at the target weights of the present study, i.e., 135 and 125 kg, respectively.

A Thermal Time-Driven Dormancy Index as a Complementary Criterion for Grape Vine Freeze Risk Evaluation (포도 동해위험 판정기준으로서 온도시간 기반의 휴면심도 이용)

  • Kwon, Eun-Young;Jung, Jea-Eun;Chung, U-Ran;Lee, Seung-Jong;Song, Gi-Cheol;Choi, Dong-Geun;Yun, Jin-I.
    • Korean Journal of Agricultural and Forest Meteorology / v.8 no.1 / pp.1-9 / 2006
  • Despite the warmer winters recently observed in Korea, more freeze injuries and associated economic losses are reported in the fruit industry than ever before. Existing freeze-frost forecasting systems employ only the daily minimum temperature for judging the potential damage to dormant flowering buds and cannot accommodate potential biological responses such as short-term acclimation of plants to severe weather episodes or annual variation in climate. We introduce 'dormancy depth', in addition to daily minimum temperature, as a complementary criterion for judging the potential damage of freezing temperatures to dormant flowering buds of grape vines. Dormancy depth can be estimated by a phenology model driven by daily maximum and minimum temperature and is expected to provide a reasonable proxy for the physiological tolerance of buds to low temperature. Dormancy depth at a selected site was estimated for a climatological normal year by this model, and we found a close similarity in the time-course pattern between the estimated dormancy depth and the known cold tolerance of fruit trees. Inter-annual and spatial variations in dormancy depth were identified by this method, showing the feasibility of using dormancy depth as a proxy indicator for tolerance to low temperature during the winter season. The model was applied to 10 vineyards which were recently damaged by a cold spell, and a temperature-dormancy depth-freeze injury relationship was formulated into an exponential-saturation model which can be used for judging freeze risk under a given set of temperature and dormancy depth. Based on this model and the expected lowest temperature with a 10-year recurrence interval, a freeze risk probability map was produced for Hwaseong County, Korea. The results seemed to explain why the vineyards in the warmer part of Hwaseong County have been hit by more freeze damage than those in the cooler part of the county. A dual-engine freeze warning system based on dormancy depth and minimum temperature was designed for vineyards in major production counties in Korea by combining site-specific dormancy depth and minimum temperature forecasts with the freeze risk model. In this system, daily accumulation of thermal time since last fall yields the dormancy state (depth) for today. The regional minimum temperature forecast for tomorrow by the Korea Meteorological Administration is converted to a site-specific forecast at a 30 m resolution. These data are input to the freeze risk model, and the percent damage probability is calculated for each grid cell and mapped for the entire county. Similar approaches may be used to develop freeze warning systems for other deciduous fruit trees.
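
The abstract does not give the phenology model's equations, so the following is only a hedged sketch of the general mechanism it describes: daily thermal time accumulated from maximum and minimum temperatures, with chilling deepening dormancy and forcing releasing it. The base temperature and the chilling/forcing bookkeeping are illustrative assumptions, not the published parameterization.

```python
# Hedged, illustrative thermal-time dormancy index (not the paper's model).
BASE_TEMP = 5.0  # assumed base temperature (deg C)

def dormancy_depth(daily_max_min):
    """Accumulate a dormancy depth day by day since last fall."""
    depth, series = 0.0, []
    for tmax, tmin in daily_max_min:
        tmean = (tmax + tmin) / 2.0
        forcing = max(tmean - BASE_TEMP, 0.0)   # warm thermal time releases dormancy
        chilling = max(BASE_TEMP - tmean, 0.0)  # cold thermal time deepens it
        depth = max(depth + chilling - forcing, 0.0)
        series.append(depth)
    return series

# Four hypothetical winter days of (max, min) temperatures in deg C.
print(dormancy_depth([(2, -6), (1, -8), (4, -2), (10, 1)]))
```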

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.141-154 / 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representations in a variety of applications. Sentiment analysis is an important technology that can distinguish poor from high-quality content through the text data of products, and it has proliferated with the growth of text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. It has been studied in various directions in terms of accuracy, from simple rule-based approaches to dictionary-based approaches using predefined labels. In fact, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Real online reviews are openly available, easy to collect, and can affect a business; in marketing, real-world information from customers is gathered on websites rather than through surveys. Whether a website's posts are positive or negative is reflected in sales, so businesses try to identify this information. However, many reviews on a website are not always good and can be difficult to classify. Earlier studies in this research area used review data from the Amazon.com shopping mall, but recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, etc. However, a lack of accuracy is recognized because sentiment calculations change according to the subject, paragraph, sentiment lexicon direction, and sentence strength. This study aims to classify the polarity of sentiment into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the pretrained IMDB review data set. First, for the text classification algorithms related to sentiment analysis, we adopt popular machine learning algorithms such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting as comparative models. Second, deep learning has demonstrated the ability to extract complex discriminative features from data. Representative algorithms are CNN (convolutional neural networks), RNN (recurrent neural networks), and LSTM (long short-term memory). CNN can be used similarly to BoW when processing a sentence in vector format, but it does not consider sequential data attributes. RNN handles order well because it takes the time information of the data into account, but it suffers from long-term dependency problems; LSTM is used to solve this problem. For comparison, CNN and LSTM were chosen as simple deep learning models. In addition to the classical machine learning algorithms, CNN, LSTM, and the integrated model were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and precision to find the optimal combination, and we tried to figure out how well the models work for sentiment analysis and how they work. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features in text analysis. The reasons for combining these two algorithms are as follows. CNN can extract features for classification automatically by applying convolution layers and massively parallel processing, whereas LSTM is not capable of highly parallel processing.
Like faucets, the LSTM's input, output, and forget gates can be opened and closed at the desired time. These gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can solve the long-term dependency problem. Furthermore, when LSTM is used on top of CNN's pooling layer, the model has an end-to-end structure, so that spatial and temporal features can be learned simultaneously. The combined CNN-LSTM achieved 90.33% accuracy; it is slower than CNN but faster than LSTM, and the presented model was more accurate than the other models. In addition, the word embedding layer can be improved by training the kernel step by step. CNN-LSTM can compensate for the weakness of each individual model, with the advantage of layer-wise learning through the end-to-end structure of the LSTM. For these reasons, this study tries to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
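
A minimal sketch of the integrated CNN-LSTM architecture described above, in Keras on the IMDB review set. The hyperparameters (vocabulary size, embedding dimension, filter count, LSTM units, epochs) are illustrative choices, not the paper's reported configuration that reached 90.33% accuracy.

```python
# CNN extracts local n-gram features; its pooled output feeds an LSTM
# that models longer-range order, giving an end-to-end CNN-LSTM stack.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB, MAXLEN = 10000, 200
(x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.imdb.load_data(num_words=VOCAB)
x_tr = tf.keras.preprocessing.sequence.pad_sequences(x_tr, maxlen=MAXLEN)
x_te = tf.keras.preprocessing.sequence.pad_sequences(x_te, maxlen=MAXLEN)

model = models.Sequential([
    layers.Embedding(VOCAB, 64),              # word embedding layer
    layers.Conv1D(64, 5, activation="relu"),  # convolution over word windows
    layers.MaxPooling1D(4),                   # pooled sequence feeds the LSTM
    layers.LSTM(64),                          # temporal features over pooled maps
    layers.Dense(1, activation="sigmoid"),    # positive/negative polarity
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_tr, y_tr, epochs=2, batch_size=128, validation_split=0.2)
print(model.evaluate(x_te, y_te))
```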

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.105-129 / 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used the default event itself as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This addressed the data imbalance problem caused by the scarcity of default events, which had been pointed out as a limitation of the existing methodology, as well as the problem of reflecting differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, the default risks of unlisted companies without stock price information can be appropriately derived. The approach can thus provide stable default risk assessment services to unlisted companies, such as small and medium-sized companies and startups, whose default risk is difficult to determine with traditional credit rating models. Although the use of machine learning to predict corporate default risk has recently been studied actively, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for calculating default risk, given that an entity's default risk information is very widely utilized in the market and sensitivity to differences in default risk is high. Strict standards are also required for the calculation methods. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and changes in future market conditions. This study reduced individual models' bias by utilizing stacking ensemble techniques that synthesize various machine learning models. This makes it possible to capture complex nonlinear relationships between default risk and various corporate information while maximizing the advantages of machine learning-based default risk prediction models, which take less time to calculate. To produce the sub-model forecasts used as input data for the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was then verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which had the best performance among the single models. Next, to check for statistically significant differences between the stacking ensemble model's forecasts and those of each individual model, pairs between the stacking ensemble model and each individual model were constructed.
Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, we used the nonparametric Wilcoxon rank-sum test to check whether the two sets of forecasts in each pair showed statistically significant differences. The analysis showed that the forecasts of the stacking ensemble model differed significantly from those of the MLP and CNN models. In addition, this study provides a methodology that allows existing credit rating agencies to apply machine learning-based bankruptcy risk prediction, given that traditional credit rating models can also serve as sub-models in calculating the final default probability. The stacking ensemble techniques proposed in this study can also help meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used as a resource to increase practical use by overcoming and improving the limitations of existing machine learning-based models.
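
A minimal sketch of the stacking idea described above: out-of-fold forecasts from the sub-models become input features for a meta-learner. scikit-learn's StackingClassifier handles the fold-splitting internally; cv=7 mirrors the paper's seven-way split of the training data, while the synthetic data and the particular sub-models and meta-learner shown here are illustrative assumptions, not the paper's K-IFRS dataset or exact model set.

```python
# Stacking ensemble sketch: sub-model out-of-fold predictions feed a
# logistic-regression meta-learner.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("mlp", MLPClassifier(max_iter=500, random_state=0)),
    ],
    final_estimator=LogisticRegression(),  # meta-learner over sub-model forecasts
    cv=7,  # seven-way split for out-of-fold sub-model predictions
)
stack.fit(X_tr, y_tr)
print("stacked accuracy:", stack.score(X_te, y_te))
```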

Studies on Epidemiological Survey of Infectious Disease of Chicken in Korea (국내 닭 전염성 질병에 관한 역학적 조사 연구)

  • 이용호;박근식;오세정
    • Korean Journal of Poultry Science / v.16 no.3 / pp.175-192 / 1989
  • A total of 9,012 cases were submitted for diagnosis of chicken diseases to the Veterinary Research Institute, Rural Development Administration, from domestic chicken farms during the 18 years from 1971 to 1988. Of these, 6,181 cases diagnosed as infectious diseases were investigated for the detection rate of infections on the basis of year, season, and chicken age. The results were summarized as follows: 1. The detection rate of infections was lowest at 49.3% in 1973 and highest at 78.6% in 1985 (average 68.6%). 2. Of the infections detected, bacterial diseases were most frequent (32.6%), followed in order by viral (26.3%), parasitic (7.7%), and fungal diseases (2.1%). 3. The most frequently detected bacterial diseases in order of prevalence were mycoplasmosis (8.8%), colibacillosis (8.5%), and staphylococcosis (5.8%), followed by salmonellosis, pullorum disease, yolk sac disease, and salpingitis (0.8-1.5%). 4. Among viral diseases, lymphoid leukosis accounted for 7.5% of the infections detected, Marek's disease 7.2%, Newcastle disease 4.4%, infectious laryngotracheitis 2.0%, infectious bursal disease 1.7%, and avian encephalomyelitis 1.0%, while the detection rates of infectious bronchitis, egg drop syndrome '76, and inclusion body hepatitis were each less than 1.0%. 5. The most prevalent parasitic disease was coccidiosis (4.5%), followed by ascariasis (1.4%); the detection rates of other parasitic diseases, including leucocytozoonosis, blackhead, heterakiasis, and ectoparasitosis, were very low, at 0.2-0.7% each. Among fungal diseases, aspergillosis accounted for 2.0% of infections, followed by candidiasis (0.1%). 6. By season, the detection rate of infections was somewhat higher in summer (27.7%) and autumn (27.7%) than in winter (23.5%) and spring (21.5%). Bacterial, viral, and fungal diseases showed similar seasonal tendencies, while parasitic diseases were detected much more often in summer (34.3%) and autumn (39.5%) than in the other seasons. 7. Among bacterial diseases, colibacillosis was most frequently detected in summer and staphylococcosis in autumn. Among viral diseases, the detection rates of Marek's disease, infectious laryngotracheitis, and infectious bursal disease were highest in summer; lymphoid leukosis, fowl pox, and egg drop syndrome '76 in autumn; and infectious tracheitis in winter, respectively. The majority of important parasitic diseases, including coccidiosis, were detected mainly in summer and autumn. 8. By chicken age, the detection rate of infections was highest in chickens of the growing period between 30 and 150 days of age (41.4%), followed by laying chickens over 150 days of age (35.3%) and brooding chickens under 30 days of age (17.3%). Bacterial and parasitic diseases were most frequently detected in growing chickens, viral diseases at nearly equal rates in growing and laying chickens, and fungal diseases in chickens of brooding age.

The Effects of Live Yeast (Saccharomyces cerevisiae) Supplementation on the Performance of Laying Hens (활성효모 첨가가 산란계의 생산성에 미치는 영향)

  • 유종석;백인기
    • Korean Journal of Poultry Science / v.17 no.3 / pp.179-191 / 1990
  • In order to study the effects of supplementation with live yeast (Saccharomyces cerevisiae) on the performance of laying hens, five experiments were conducted. Two experiments were conducted during the summer period, one with 37-wk-old Dekalb-Delta strain laying birds (Exp. 1) and the other with 100-wk-old molted Nick Chick Brown laying birds (Exp. 2). In each experiment, 240 birds were divided into 12 groups of 20 birds each and randomly distributed. Each of the two experimental diets (control, T1, and 0.05% live yeast supplemented, T2) was fed to 6 groups for 4 wks in Exp. 1 and 3 wks in Exp. 2. Three experiments were conducted during the winter period: Exp. 3 with 54-wk-old Hy-Line strain laying birds, Exp. 4 with 52-wk-old Hy-Line strain laying birds, and Exp. 5 with 36-wk-old broiler breeders (Indian River strain). In each experiment, 540 birds were divided into 18 groups of 30 birds each and randomly distributed. Each of the three experimental diets (control: T1; 0.05% live yeast supplemented: T2; and 0.1% live yeast supplemented: T3) was fed to 9 groups for 6 wks in Exp. 3, 9 wks in Exp. 4, and 4 wks in Exp. 5. In Exp. 4, a Latin square design was employed to determine the effects of switching feeds at 3-wk intervals. All hens were housed in cages on commercial farms, and the experimental diets were made with commercial layer feeds. In Experiment 1, egg production was significantly (P<0.05) higher in T2. Feed intake was significantly higher in T2 at the 1st wk, but the 4-wk average was not significantly different. Feed efficiency was significantly (P<0.01) better in T2 at the 2nd wk, but the 4-wk average was not significantly different. Other parameters, such as egg weight, soft egg production, cracked egg production, and mortality, were not significantly different. In Experiment 2, egg production was significantly (P<0.05) higher in T2. Feed efficiency was significantly (P<0.05 and P<0.01) better at the 2nd and 3rd wks, but the 3-wk average was not significantly different. Soft egg production was significantly (P<0.05) higher in T2. Other parameters were not significantly different. In Experiment 3, egg production differed significantly (P<0.05) among treatments: T3 was higher than T1, and T2 was higher than T1. Egg weights of T1 and T2 were significantly (P<0.05) heavier than that of T3. Feed intakes of T2 and T3 were significantly (P<0.05) higher than that of T1 at the 6th wk, but the overall averages were not significantly different. Soft egg production differed significantly (P<0.01) among treatments: T1 was higher than T3, which was higher than T2. Feed efficiency, cracked egg production, and mortality were not significantly different. In Experiment 4, egg production tended to increase as the level of live yeast supplementation increased, but the differences were not statistically significant. In Experiment 5, egg production of broiler breeders in T3 was significantly (P<0.01) higher than in T1. Feed intake of T3 was significantly (P<0.05) greater than that of T1 and T2 at the 3rd wk, but the overall average was not significantly different. Fertility and hatchability tended to be higher in the supplemented groups than in the control.

Clinical Outcomes of Corrective Surgical Treatment for Esophageal Cancer (식도암의 외과적 근치 절제술에 대한 임상적 고찰)

  • Ryu Se Min;Jo Won Min;Mok Young Jae;Kim Hyun Koo;Cho Yang Hyun;Sohn Young-sang;Kim Hark Jei;Choi Young Ho
    • Journal of Chest Surgery / v.38 no.2 s.247 / pp.157-163 / 2005
  • Background: Clinical outcomes of esophageal cancer have not been satisfactory in spite of the development of surgical techniques and adjuvant therapy protocols. We analyzed the results of corrective surgery for esophageal cancer from January 1992 to July 2002. Material and Method: Among 129 patients with esophageal cancer, this study was performed on the 68 patients who received corrective surgery. The male-to-female ratio was 59 : 9, and the mean age was 61.07±7.36 years. The chief complaints of these patients were dysphagia, epigastric pain, and weight loss. The locations of the esophageal cancer were the upper esophagus in 4 patients, the middle in 36, the lower in 20, and the esophagogastric junction in 8. Sixty patients had squamous cell carcinoma, 7 adenocarcinoma, and 1 malignant melanoma. Five patients had neoadjuvant chemotherapy. Result: The numbers of patients in postoperative stages I, IIA, IIB, III, and IV were 7, 25, 12, 17, and 7, respectively. The conduits for esophageal replacement were the stomach (62 patients) and the colon (6 patients). Cervical anastomosis was performed in 28 patients and intrathoracic anastomosis in 40 patients. The anastomosis techniques were hand-sewing (44 patients) and stapling (24 patients). Among the early complications, anastomotic leakage occurred in 3 patients, all of whom had only radiologic leakage that recovered spontaneously. The anastomosis technique had no correlation with postoperative leakage (stapling, 2 patients; hand-sewing, 1 patient). There were 3 cases of respiratory failure, 6 of pneumonia, 1 of fulminant hepatitis, 1 of bleeding, and 1 of sepsis. The 2 early postoperative deaths were due to fulminant hepatitis and sepsis. Among the 68 patients, 23 received postoperative adjuvant therapy, and 55 were followed up. The follow-up period was 23.73±22.18 months (1~76 months). There were 5 patients in stage I, 21 in stage IIA, 9 in stage IIB, 15 in stage III, and 5 in stage IV. The 1-, 3-, and 5-year survival rates of the patients who could be followed up completely were 58.43±6.5%, 35.48±7.5%, and 18.81±7.7%, respectively. Statistical analysis showed that long-term survival differences were associated with stage, T stage, and N stage (p<0.05) but not with histology, sex, anastomosis location, tumor location, or pre- and postoperative adjuvant therapy. Conclusion: Early diagnosis, aggressive operative resection, and adequate postoperative treatment may have contributed to the observed increase in survival of esophageal cancer patients.
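
Year-wise survival rates and stage-wise survival differences of the kind reported above are typically computed with Kaplan-Meier estimation and log-rank tests. A hedged sketch with the lifelines library follows; the follow-up months and event indicators are synthetic, not the study's patient data.

```python
# Kaplan-Meier survival estimation plus a log-rank test between two
# hypothetical stage groups.
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)
t_early = rng.exponential(60, 30)  # hypothetical earlier-stage follow-up (months)
e_early = rng.random(30) < 0.5     # death observed (True) or censored (False)
t_late = rng.exponential(20, 25)   # hypothetical later-stage follow-up (months)
e_late = rng.random(25) < 0.8

kmf = KaplanMeierFitter()
kmf.fit(t_early, event_observed=e_early, label="earlier stage")
print(kmf.predict([12, 36, 60]))   # survival probabilities at 1, 3, and 5 years

res = logrank_test(t_early, t_late, event_observed_A=e_early, event_observed_B=e_late)
print("log-rank p =", res.p_value)
```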