• Title/Summary/Keyword: Non-essential

Search Results: 1,870

An Intelligence Support System Research on KTX Rolling Stock Failure Using Case-based Reasoning and Text Mining (사례기반추론과 텍스트마이닝 기법을 활용한 KTX 차량고장 지능형 조치지원시스템 연구)

  • Lee, Hyung Il;Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.47-73 / 2020
  • KTX rolling stock is a system consisting of several machines, electrical devices, and components, and its maintenance requires considerable expertise and experience from maintenance workers. In the event of a rolling stock failure, the maintainer's knowledge and experience determine how quickly and how well the problem is solved, so the resulting availability of the vehicle varies. Although problem solving is generally based on fault manuals, experienced and skilled professionals can diagnose faults quickly and take action by applying personal know-how. Since this knowledge exists in a tacit form, it is difficult to pass it on completely to a successor, and previous studies have developed case-based rolling stock expert systems to turn it into a data-driven resource. Nonetheless, research on the KTX rolling stock most commonly used on the main line, and on systems that extract the meaning of text and retrieve similar cases, is still lacking. Therefore, this study proposes an intelligent support system that provides an action guide for emerging failures by using the know-how of rolling stock maintenance experts as examples of problem solving. For this purpose, a case base was constructed by collecting rolling stock failure data generated from 2015 to 2017, and an integrated dictionary was built separately from the case base to cover the essential terminology and failure codes of the railway rolling stock sector. Based on the deployed case base, a new failure was matched against past cases and the three most similar failure cases were retrieved, so that the actual actions taken in those cases could be proposed as a diagnostic guide. To overcome the limitations of keyword-matching case retrieval in earlier case-based rolling stock expert system studies, various dimensionality reduction techniques were applied so that similarity reflects the semantic relationships among failure descriptions, and their usefulness was verified through experiments. Three algorithms were applied to extract the characteristics of each failure and measure the cosine distance between the resulting vectors: Non-negative Matrix Factorization (NMF), Latent Semantic Analysis (LSA), and Doc2Vec. Precision, recall, and the F-measure were used to assess the quality of the proposed actions. To compare the dimensionality reduction techniques, two baselines were included: an algorithm that randomly retrieves failure cases with the same failure code and an algorithm that applies cosine similarity directly to the word vectors; analysis of variance confirmed that the performance differences among the five algorithms were statistically significant. In addition, the effect of the number of dimensions used for dimensionality reduction was examined to derive settings suitable for practical application. The analysis showed that direct word-based cosine similarity outperformed the NMF and LSA reductions, and that the Doc2Vec-based algorithm performed best. Furthermore, for the dimensionality reduction techniques, performance improved as the number of dimensions increased up to an appropriate level. Through this study, we confirmed the usefulness of effective methods for extracting data characteristics and converting unstructured data when applying case-based reasoning in the specialized field of KTX rolling stock, where most attributes are text. Text mining is being studied for use in many areas, but studies using such text data are still lacking in environments with many specialized terms and limited access to data, such as the one addressed here. In this regard, it is significant that this study is the first to present an intelligent diagnostic system that suggests actions by retrieving cases with text mining techniques that extract failure characteristics, complementing keyword-based case search. It is expected to provide implications, as a basic study, for developing diagnostic systems that can be used immediately on site.
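
A minimal sketch of the retrieval step this abstract describes, assuming scikit-learn; the toy case texts, action strings, and the 2-dimensional NMF setting are illustrative assumptions, not the paper's data or tuned configuration.

```python
# Sketch: TF-IDF -> dimensionality reduction -> cosine similarity -> top-3 similar cases.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF  # TruncatedSVD here would give the LSA variant
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical past failure descriptions and their recorded maintenance actions.
case_texts = [
    "door motor fault, relay replaced",
    "HVAC compressor trip, breaker reset",
    "door does not open, limit switch adjusted",
]
actions = ["Replace door relay", "Reset HVAC breaker", "Adjust door limit switch"]
new_failure = "passenger door does not close, motor noise"

# 1) Vectorize past cases plus the new failure with TF-IDF.
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(case_texts + [new_failure])

# 2) Reduce dimensionality so similarity reflects latent topics rather than raw keywords.
Z = NMF(n_components=2, init="nndsvda", random_state=0).fit_transform(X)

# 3) Cosine similarity between the new failure (last row) and every past case,
#    then propose the actions of the top-3 most similar cases.
sims = cosine_similarity(Z[-1:], Z[:-1]).ravel()
for rank, idx in enumerate(sims.argsort()[::-1][:3], start=1):
    print(f"{rank}. case {idx} (sim={sims[idx]:.3f}) -> suggested action: {actions[idx]}")
```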

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung;Song, Min-chae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.59-83 / 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In deep learning sentiment analysis of English texts, the natural language sentences in the training and test datasets are usually converted into sequences of word vectors before being fed into the models. In this case, word vectors generally refer to vector representations of words obtained by splitting a sentence on space characters. There are several ways to derive word vectors, one of which is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data. These have been widely used in sentiment analysis studies of reviews from fields such as restaurants, movies, laptops, and cameras. Unlike in English, the morpheme plays an essential role in sentiment analysis and sentence structure analysis in Korean, a typical agglutinative language with well-developed postpositions and endings. A morpheme can be defined as the smallest meaningful unit of a language, and a word consists of one or more morphemes. For example, the word '예쁘고' consists of the morphemes '예쁘' (adjective) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word vector derivation mechanism to sentences divided into their constituent morphemes. This raises several questions. What is the desirable range of POS (part-of-speech) tags when deriving morpheme vectors to improve the classification accuracy of a deep learning model? Is it proper to apply a typical word vector model, which relies primarily on the form of words, to Korean with its high homonym ratio? Will text preprocessing such as correcting spelling or spacing errors affect classification accuracy, especially when drawing morpheme vectors from Korean product reviews with many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which are likely to be encountered first when applying deep learning models to Korean texts. As a starting point, we summarize them as three central research questions. First, which is more effective as the initial input of a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean with respect to the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can a satisfactory level of classification accuracy be achieved when applying deep learning to Korean sentiment analysis? To address these questions, we generate various types of morpheme vectors reflecting them and compare classification accuracy using a non-static CNN (Convolutional Neural Network) model that takes the morpheme vectors as input. As training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used. To derive the morpheme vectors, we use data from the same domain as the target and data from another domain: about 2 million Naver Shopping cosmetics product reviews and 520,000 Naver News articles, the latter roughly corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ along three criteria. First, they come from two types of data source: Naver News, with high grammatical correctness, and Naver Shopping cosmetics product reviews, with low grammatical correctness. Second, they differ in the degree of preprocessing, namely only splitting sentences, or additionally correcting spelling and spacing after sentence separation. Third, they vary in the form of input fed into the word vector model: the morphemes themselves, or the morphemes with their POS tags attached. The morpheme vectors further vary in the range of POS tags considered, the minimum frequency of morphemes included, and the random initialization range. All morpheme vectors are derived with the CBOW (Continuous Bag-Of-Words) model using a context window of 5 and a vector dimension of 300. The results indicate that using text from the same domain even with lower grammatical correctness, performing spelling and spacing corrections as well as sentence splitting, and incorporating morphemes of all POS tags, including the incomprehensible category, lead to better classification accuracy. The POS tag attachment, devised for the high proportion of homonyms in Korean, and the minimum frequency threshold for a morpheme to be included do not appear to have any definite influence on classification accuracy.
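
A short sketch of the morpheme-vector derivation this abstract describes: sentences are split into morphemes (optionally with POS tags attached) and fed to a CBOW Word2Vec model with a window of 5 and 300 dimensions, as stated above. The Okt analyzer, the toy reviews, and the min_count value are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: derive 300-dimensional morpheme vectors with CBOW (gensim >= 4.0).
from konlpy.tag import Okt          # any Korean morpheme analyzer would do
from gensim.models import Word2Vec

okt = Okt()
reviews = ["배송 빠르고 제품 너무 좋아요", "향이 강해서 별로예요"]  # toy product reviews

def to_morphemes(sentence, attach_pos=False):
    # attach_pos=True appends the POS tag to each morpheme, to help separate homonyms.
    pairs = okt.pos(sentence, norm=True)   # norm=True applies simple spelling normalization
    return [f"{m}/{p}" if attach_pos else m for m, p in pairs]

corpus = [to_morphemes(r, attach_pos=True) for r in reviews]

# CBOW (sg=0), window 5, vector dimension 300, as described in the abstract.
model = Word2Vec(corpus, vector_size=300, window=5, min_count=1, sg=0)

token = corpus[0][0]
vec = model.wv[token]
print(token, vec.shape)  # 300-dimensional morpheme vector that would feed the CNN
```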

Studies on the Browning of Red Ginseng (홍삼(紅蔘)의 갈변(褐變)에 관(關)한 연구(硏究))

  • Kim, Dong-Youn
    • Applied Biological Chemistry / v.16 no.2 / pp.60-77 / 1973
  • The non-enzymatic browning phenomena of red ginseng were studied to identify the compounds that function as factors for browning. The samples were classified into five divisions: fresh ginseng, blanched ginseng, sun-dried red ginseng, dehydrated red ginseng, and browning-accelerated red ginseng. The various compounds in each were analyzed quantitatively, and the compounds thought to contribute to browning during the drying and dehydration processes were investigated; the results were as follows. 1. The chemical compositions of the five divisions did not show any difference except in a) total and reducing sugars, b) total acids, and c) water-soluble extracts; a) and b) decreased during the drying process, and c) decreased by about 6-7% in the red ginseng divisions. 2. Sixteen free amino acids (asp., thr., ser., glu., gly., ala., val., cys., met., ileu., leu., tyr., phe., lys., his., and arg.) were identified in each division. Among them, arginine was extremely high. All of the essential amino acids were present; these amino acids generally decreased during the drying period, and their rates of decrease were smaller in dehydrated red ginseng than in sun-dried red ginseng. 3. Three sugars (fructose, glucose, and sucrose) were identified, and four other unidentified sugars were separated. The content of sucrose was 80%, and all sugars were generally lower in the red ginseng divisions than in the other two divisions. The rate of decrease of sugars was higher in sun-dried red ginseng than in dehydrated red ginseng. In particular, the rate of decrease of the reducing sugars was high compared with that of sucrose. 4. Almost all of the ascorbic acid was decomposed during blanching, whereas no change in ascorbic acid content was observed during the drying period. 5. Eleven volatile acids, including acetic acid, propionic acid, acrylic acid, iso-butyric acid, n-butyric acid, isovaleric acid, n-valeric acid, isoheptylic acid, n-heptylic acid, and an unknown volatile acid, were identified. They showed a slight decrease during blanching, perhaps on account of their volatility, whereas they increased during the drying period. 6. Six non-volatile acids (citric acid, malic acid, α-ketoglutaric acid, succinic acid, pyruvic acid, and glutaric acid) were identified. Their contents decreased during the drying procedures in red ginseng, with only succinic acid increasing. 7. Three polyphenols (3-caffeyl quinic acid, 4-caffeyl quinic acid, and 5-caffeyl quinic acid) and an unknown polyphenol were identified. Their contents decreased considerably during the drying procedures, especially in sun drying. 8. The intensity of browning in each division was as follows: browning-accelerated red ginseng > sun-dried red ginseng > dehydrated red ginseng. 9. In the process of red ginseng preparation, a certain relationship could be found between the decreasing rates of amino acids, reducing sugars, and polyphenols and the intensity of browning. Therefore it may be concluded that the non-enzymatic amino-carbonyl reaction and the autoxidation of polyphenols are the most important processes in the browning phenomenon; furthermore, since these reactions can be controlled, it is thought to be possible to effectively accelerate browning within a relatively short period.

A Study on Legal and Institutional Improvement Measures for the Effective Implementation of SMS -Focusing on Aircraft Accident Investigation-

  • Yoo, Kyung-In
    • The Korean Journal of Air & Space Law and Policy / v.32 no.2 / pp.101-127 / 2017
  • Even with the benefits of the most advanced aviation technology, aircraft accidents keep occurring, while air passenger transportation volume is expected to double in the next 15 years. Since aviation safety cannot be secured only by post-accident safety actions such as accident investigations, it has been recognized, and a consensus has formed, that proactive and predictive prevention measures are necessary. In this sense, the aviation Safety Management System (SMS) was introduced in 2008 and has been implemented in earnest since 2011. SMS is a proactive and predictive aircraft accident prevention measure, a mechanism to eliminate fundamental risk factors by addressing organizational factors beyond the technological and human factors related to aviation safety. The methodology is to collect hazards at all sites required for aircraft operations, build a database, analyze the risks, and, by managing the risks, keep them at or below an acceptable level. Therefore, improper implementation of SMS indicates that aircraft accident prevention is insufficient and is directly connected with aircraft accidents. Reports of duty-related hazards, including one's own errors, are essential and most important in SMS. Under a just-culture policy for voluntary reporting, the anonymity of information providers and freedom from punishment and blame should be basically guaranteed; even so, reporting remains stagnant because of a lack of trust in one's own organization. It is necessary for the accountable executive (CEO) and senior management to take a leading role in fostering a safety culture, starting from just culture, with safety consciousness and a balance between safety and profit for the organization. Although a Ministry of Land, Infrastructure and Transport order, the "Guidance on SMS Implementation," states the training required for the accountable executive (CEO) and senior management, it is not legally binding. Thus it is suggested that the SMS training completion certificates of the accountable executive (CEO) and senior management be included in the SMS approval application form legally required by the "Korea Aviation Safety Program," in addition to other required documents such as a copy of the SMS manual. Also, SMS-related items are missing in aircraft accident investigations, so organizational factors associated with safety culture and risk management are not being investigated. This hinders the prevention of future accidents, as the root cause cannot be identified. The Aircraft Accident Investigation Manuals issued by ICAO cover SMS investigation, whereas it is not included in the final report format of Annex 13 to the Convention on International Civil Aviation. In addition, the US National Transportation Safety Board (NTSB), which has served as a substantial example for other accident investigation agencies worldwide, does not appear to have expanded the scope of its investigation activities to SMS. For these reasons, it is believed that investigation agencies conducting their investigations under Annex 13 do not include SMS in the investigation items, and aircraft accident investigators are hardly exposed to SMS investigation methods or techniques. In this respect, it is necessary to include SMS investigation in the organization and management information section of the Annex 13 final report format. In Korea as well, an SMS item should likewise be added to the final report format of the Operating Regulation of the Aircraft and Railway Accident Investigation Board. If such legal and institutional improvements are made, SMS will serve the purpose of aircraft accident prevention effectively and contribute to the improvement of aviation safety in the future.

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.1-32 / 2018
  • In addition to affecting stakeholders such as the managers, employees, creditors, and investors of bankrupt companies, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model, rather than developing various corporate default models. As a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even after that, the analysis of past corporate defaults focused on specific variables, and when the government carried out restructuring immediately after the global financial crisis, it focused only on certain main variables such as the debt ratio. A multifaceted study of corporate default prediction models is essential to serve diverse interests and to avoid a sudden total collapse, as in the 'Lehman Brothers' case of the global financial crisis. The key variables used in corporate default prediction vary over time. Comparing Beaver's (1967, 1968) and Altman's (1968) analyses with Deakin's (1972) study confirms that the major factors affecting corporate failure have changed. In Grice's (2001) study, the shifting importance of predictive variables was also observed through Zmijewski's (1984) and Ohlson's (1980) models. However, these past studies use static models, and most of them do not consider changes that occur over time. Therefore, to construct consistent prediction models, it is necessary to compensate for time-dependent bias by means of a time series algorithm that reflects dynamic change. Based on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets of 7, 2, and 1 years, respectively. To construct a bankruptcy model that is consistent over time, we first train a time series deep learning model using the data before the financial crisis (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted with validation data that includes the financial crisis period (2007~2008). As a result, we construct a model that shows a pattern similar to the results on the training data and shows excellent predictive power. After that, each bankruptcy prediction model is re-trained on the combined training and validation data (2000~2008), applying the optimal parameters found in the validation step. Finally, each corporate default prediction model is evaluated and compared using the test data (2009), based on the models trained over the nine years, and the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression to the existing variable selection methods (multiple discriminant analysis and the logit model), it is shown that the deep learning time series model based on the three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and the deep learning time series algorithms are compared. Corporate data suffer from nonlinear variables, multi-collinearity among variables, and a lack of data. The logit model handles nonlinearity, the Lasso regression model alleviates the multi-collinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis to automated AI analysis and, eventually, toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis for corporate default prediction modeling and also offers better predictive power. In the Fourth Industrial Revolution, the current government and governments overseas are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry is still insufficient. This is an initial study on deep learning time series analysis of corporate defaults; it is therefore hoped that it will serve as comparative material for non-specialists starting a study that combines financial data with deep learning time series algorithms.
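
A minimal sketch of the kind of pipeline this abstract describes: Lasso-based variable selection on financial ratios followed by an LSTM over yearly sequences to classify default. The shapes, layer sizes, random data, and hyperparameters are illustrative assumptions, not the study's tuned configuration.

```python
# Sketch: Lasso variable selection + LSTM time series classifier for default prediction.
import numpy as np
from sklearn.linear_model import LassoCV
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

rng = np.random.default_rng(0)
n_firms, n_years, n_ratios = 200, 3, 20            # e.g. 3 years of 20 financial ratios per firm
X = rng.normal(size=(n_firms, n_years, n_ratios))  # placeholder data, not real financial statements
y = rng.integers(0, 2, size=n_firms)               # 1 = default, 0 = non-default

# 1) Variable selection: fit Lasso on the flattened firm-year table and keep
#    ratios with non-zero coefficients.
lasso = LassoCV(cv=5).fit(X.reshape(-1, n_ratios), np.repeat(y, n_years))
keep = np.flatnonzero(lasso.coef_)
if keep.size == 0:            # fall back to all ratios if Lasso drops everything
    keep = np.arange(n_ratios)
X_sel = X[:, :, keep]

# 2) Time series classifier: one LSTM layer over the yearly sequence,
#    sigmoid output for the default probability.
model = Sequential([
    LSTM(32, input_shape=(n_years, len(keep))),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_sel, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
```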

An Experimental Study on Real Time CO Concentration Measurement of Combustion Gas in LPG/Air Flame Using TDLAS (TDLAS를 이용한 LPG/공기 화염 연소가스의 실시간 CO 농도 측정에 관한 연구)

  • So, Sunghyun;Park, Daegeun;Park, Jiyeon;Song, Aran;Jeong, Nakwon;Yoo, Miyeon;Hwang, Jungho;Lee, Changyeop
    • Clean Technology / v.25 no.4 / pp.316-323 / 2019
  • In order to enhance combustion efficiency and reduce atmospheric pollutants, it is essential to measure the carbon monoxide (CO) concentration in combustion exhaust precisely. CO is an important gas species with regard to pollutant emission and incomplete combustion, because it trades off with NOx and increases rapidly when incomplete combustion occurs. In the case of a steel annealing system, CO is generated intentionally to maintain a deoxidizing atmosphere. However, it is difficult to measure the CO concentration in a combustion environment in real time because of unsteady combustion reactions and the harsh environment. Tunable Diode Laser Absorption Spectroscopy (TDLAS), an optical measurement method, is highly attractive for measuring the concentration of specific gas species, as well as temperature, velocity, and pressure, in a combustion environment. TDLAS has several advantages, such as high sensitivity, non-invasiveness, fast response, and in-situ measurement capability. In this study, a combustion system was designed to control the equivalence ratio, and the combustion exhaust gases were produced in a Liquefied Petroleum Gas (LPG)/air flame. The CO concentration measured with the TDLAS method at varying equivalence ratios was compared with a simulation based on the Voigt function. In order to measure the CO concentration without interference from other combustion products, a near-infrared laser line at 4300.6 cm⁻¹ was selected.
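
An illustrative sketch of the TDLAS principle behind this abstract: an absorbance spectrum around a CO line near 4300.6 cm⁻¹ simulated with a Voigt line shape and the Beer-Lambert law. The line strength, broadening widths, path length, and number densities are placeholder values, not HITRAN data or the paper's actual conditions.

```python
# Sketch: Voigt-profile absorbance of a single CO line via the Beer-Lambert law.
import numpy as np
from scipy.special import voigt_profile

nu = np.linspace(4300.4, 4300.8, 400)   # wavenumber axis [cm^-1]
nu0 = 4300.6                            # assumed CO line center [cm^-1]
sigma, gamma = 0.005, 0.01              # Gaussian / Lorentzian half-widths [cm^-1]
S = 1.0e-20                             # placeholder line strength [cm^-1 / (molec cm^-2)]
L = 50.0                                # optical path length [cm]

def absorbance(n_co):
    """Beer-Lambert absorbance for a CO number density n_co [molec cm^-3]."""
    phi = voigt_profile(nu - nu0, sigma, gamma)   # area-normalized line shape [cm]
    return S * phi * n_co * L

# Higher equivalence ratio -> more CO: compare two assumed number densities.
for n_co in (1.0e16, 5.0e16):
    A = absorbance(n_co)
    print(f"n_CO = {n_co:.1e} cm^-3 -> peak absorbance {A.max():.3f}")
```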

North Korea's Nuclear Strategy: Its Type Characteristics and Prospects (북한 핵전략의 유형적 특징과 전망)

  • Kim, Kang-nyeong
    • Korea and Global Affairs / v.1 no.2 / pp.171-208 / 2017
  • This paper analyzes the type characteristics and prospects of the North Korean nuclear strategy. To this end, the paper is composed of five chapters: introduction; the concept and types of nuclear strategy; North Korea's nuclear capabilities and declarative nuclear strategy; the operational characteristics and prospects of the North Korean nuclear strategy; and conclusion. Recently, the deployment of nuclear weapons and the enhancement of nuclear capabilities in North Korea have raised serious problems for our security and military preparedness. Nuclear strategy means the military strategy related to the organization, deployment, and operation of nuclear weapons. The study of North Korea's nuclear strategy begins with the very realistic assumption that North Korea's nuclear arsenal has become a reality. It is on the basis of North Korea's nuclear arsenal that our defense authorities present the concepts of preemptive attack, missile defense, and massive retaliation as countermeasures against the North Korean nuclear issue and are in the process of introducing and deploying them. The declarative nuclear strategy of the DPRK is summarized as: (1) nuclear deterrence and retaliation under the (North Korean) Nuclear Weapons Act, (2) nuclear preemptive aggression, and (3) the principle of 'no first use' of nuclear weapons declared at the 7th Party Congress. The intentions and operational characteristics of the North Korean nuclear strategy are as follows: (1) avoiding blame by imitating the practices of existing nuclear states, (2) promotion of its nuclear strategy through declarative statements, and (3) non-settlement of its nuclear strategy due to the gap between nuclear capability and nuclear posture. North Korea has declared itself a nuclear-weapon state through the revised Constitution (July 2012), the line of 'Construction of the Nuclear Armed Forces and the Economy' (March 2013), and the Nuclear Weapons Act (April 2013). However, the status of a 'nuclear nation' can only be granted by the NPT, which is already a closed system. Realistically, a robust ROK-US alliance and close cooperation between the two countries are crucial to deterring and overcoming the North Korean nuclear threat we face. On this basis, it is essential not only to deter North Korea's nuclear attacks, but also to establish and implement our own short-, middle-, and long-term political and military countermeasures for North Korea's denuclearization and disarmament.

Performance Test of Portable Hand-Held HPGe Detector Prototype for Safeguard Inspection (안전조치 사찰을 위한 휴대형 HPGe 검출기 시제품 성능평가 실험)

  • Kwak, Sung-Woo;Ahn, Gil Hoon;Park, Iljin;Ham, Young Soo;Dreyer, Jonathan
    • Journal of Radiation Protection and Research / v.39 no.1 / pp.54-60 / 2014
  • The IAEA has employed various types of radiation detectors, including HPGe, NaI, and CZT, for the accountancy of nuclear material. Among them, HPGe has mainly been used in verification activities that require high accuracy. Because of its essential cooling component (a liquid-nitrogen or mechanical cooling system), it is large and heavy and needs a long cooling time before use. A new hand-held portable HPGe detector has been developed to address these problems. This paper presents the results of a performance evaluation test of the new hand-held portable HPGe prototype carried out during IAEA inspection activities. The spectra obtained with the new portable HPGe showed different characteristics depending on the type and enrichment of the nuclear material inspected. Gamma rays from daughter radioisotopes in the decay series of ²³⁵U and ²³⁸U, as well as characteristic X-rays from uranium, could be clearly separated from other peaks in the spectra. The relative error of the enrichment measured by the new portable HPGe was in the range of 9 to 27%. The enrichment measurement results partially failed to meet the IAEA requirement because of the small size of the radiation-sensing material; this problem might be solved through a further study. This paper discusses how to determine the enrichment of nuclear material as well as how to apply the new hand-held portable HPGe to safeguards inspection. There have been few papers dealing with IAEA inspection activities in Korea that verify the accountancy of nuclear material in national nuclear facilities, so this paper would contribute to analyzing the results of safeguards inspections. It is also expected that the points discussed about further improvement of the detector will contribute to the development of radiation detectors in related fields.
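
As a hedged illustration of the enrichment determination and relative error mentioned above, the sketch below uses the common "enrichment-meter" idea for thick uranium items, in which the net count rate of the 185.7 keV ²³⁵U peak is taken as proportional to enrichment. This is not necessarily the exact method of this paper, and all numbers are placeholders.

```python
# Sketch: enrichment-meter calibration, measurement, and relative error (illustrative values only).
def calibrate(ref_rate_cps, ref_enrichment_pct):
    """Calibration constant k such that enrichment = k * net 185.7 keV count rate."""
    return ref_enrichment_pct / ref_rate_cps

def measure_enrichment(net_rate_cps, k):
    return k * net_rate_cps

def relative_error_pct(measured, declared):
    return abs(measured - declared) / declared * 100.0

k = calibrate(ref_rate_cps=120.0, ref_enrichment_pct=4.5)   # hypothetical reference standard
measured = measure_enrichment(net_rate_cps=95.0, k=k)        # hypothetical item under inspection
print(f"measured {measured:.2f}% vs declared 4.00% "
      f"-> relative error {relative_error_pct(measured, 4.0):.1f}%")
```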

Recipe Standardization and Nutrient Analysis of 'Dong-rae Pajeon' (Local Food in Busan) (부산 향토음식 동래파전의 조리표준화 및 영양분석)

  • Kim, Sang-Ae;Shin, Eun-Soo
    • Journal of the Korean Society of Food Science and Nutrition / v.36 no.11 / pp.1472-1481 / 2007
  • The purposes of this study were to rediscover the refined taste of our ancestors through historical research into traditional cooking methods and ingredients, for the purpose of enriching today's dietary life, and to hand down a particular style of regional dish and its nutritional excellence by providing a standard recipe and nutrition analysis data for 'Dong-rae Pajeon'. To collect data about traditional ingredients and cooking methods, the researchers interviewed seven local natives who have kept traditional food customs, visited four restaurants, and reviewed ten cookbooks. The interviewees recalled and demonstrated the cooking procedure. The standard recipe for 'Dong-rae Pajeon' was created after three experimental cookings, based on the recipes of the natives, restaurants, and cookbooks. According to the natives' statements, 'Dong-rae Pajeon' was a special dish offered to the king at 'Samzi-nal' (March 3rd of the lunar calendar). It was also a seasonal dish (eaten before cherry blossom time) and a memorial service dish of the province's high society. The main ingredients were small green onion, dropwort, beef, seafood (large clam, mussel, clam meat, oyster, shrimp, freshwater conch), waxy rice powder, non-waxy rice powder, and sesame oil, which were abundant in the Busan and Kijang regions. The energy per 100 g of 'Dong-rae Pajeon' was 148 kcal. Protein, lipid, fiber, Ca, and Fe contents were 8.8 g, 2.0 g, 8.6 g, 57.7 mg, and 1.8 mg, respectively. The contents of cystine, lysine, leucine, valine, and isoleucine, which are essential amino acids, were high in 'Dong-rae Pajeon'. The major fatty acids were oleic acid (20.5%), linoleic acid (20.1%), and linolenic acid (10.4%), while the P/M/S ratio was 0.73/0.67/1.

Characterization and Modification of Milk Lipids (유지방의 특성과 변화)

  • Yeo, Yeong-Geun;Choe, Byeong-Guk;Im, A-Yeong;Kim, Hyo-Jeong;Kim, Su-Min;Kim, Dae-Gon
    • Journal of Dairy Science and Biotechnology / v.16 no.2 / pp.119-136 / 1998
  • The lipids of milk provide energy and many essential nutrients for the newborn animal. They also have distinctive physical properties that affect the processing of dairy products. Milk fat globules consist mainly of neutral lipids such as triacylglycerols, whereas the globule membranes contain mostly the complex lipids. Phospholipids are a small but important fraction of the milk lipids and are found mainly in the milk fat globule membrane and in other membranous material in the skim-milk phase. The milk fats of ruminant animals are characterized by relatively high concentrations of short-chain fatty acids, especially butyric and hexanoic acids, which are rarely found in the milks of non-ruminants. The fatty acids of milk lipids arise from de novo synthesis in the mammary gland and from uptake from the circulating blood. The fatty acid compositions of milks are usually complex and distinctive, depending on the nature of the fatty acids synthesized de novo in the mammary gland and those received from the diet in each species. The content and composition of milks from different species vary widely; presumably, these are evolutionary adaptations to differing environments. The actual process by which the fat globules are formed is unknown, but there are indications that triglyceride-containing vesicles that bleb from the endoplasmic reticulum may serve as nucleation sites for globules. Recent studies on milk have centred on the manipulation of milk lipids to increase specific fatty acids, i.e., the long-chain omega-3 fatty acids from marine sources (eicosapentaenoic acid, 20:5n3, and docosahexaenoic acid, 22:6n3), because these fatty acids are closely associated with a decreased risk of coronary heart disease.
