• Title/Summary/Keyword: Model I

Search Results: 10,556

Lung cancer, chronic obstructive pulmonary disease and air pollution (대기오염에 의한 폐암 및 만성폐색성호흡기질환 -개인 흡연력을 보정한 만성건강영향평가-)

  • Sung, Joo-Hon;Cho, Soo-Hun;Kang, Dae-Hee;Yoo, Keun-Young
    • Journal of Preventive Medicine and Public Health
    • /
    • v.30 no.3 s.58
    • /
    • pp.585-598
    • /
    • 1997
  • Background : Although there are growing concerns about the adverse health effects of air pollution, not much evidence on the health effects of current air pollution levels has yet been accumulated in Korea. This study was designed to evaluate the chronic health effects of air pollution using Korean Medical Insurance Corporation (KMIC) data and air quality data. Medical insurance data in Korea have some drawbacks in accuracy, but they also have strengths, especially their national coverage, unified ID system, and individual information, which enable various data linkages and chronic health effect studies. Method : This study utilized the data of the Korean Environmental Surveillance System Study (Surveillance Study), which consists of asthma, acute bronchitis, chronic obstructive pulmonary disease (COPD), cardiovascular diseases (congestive heart failure and ischemic heart disease), all cancers, accidents, and congenital anomalies, i.e., mainly potential environmental diseases. We reconstructed a nested case-control study with Surveillance Study data and air pollution data in Korea. Among 1,037,210 insured persons who completed a questionnaire and physical examination in 1992, disease-free (for chronic respiratory disease and cancer) persons between the ages of 35 and 64 with smoking status information were selected to reconstruct a cohort of 564,991 persons. The cohort was followed up to 1995 (1992-5), and the subjects who had the diseases in the Surveillance Study were selected. Finally, the patients with address information and available air pollution data were left as the 'final subjects'. Cases were defined as all lung cancer cases (424) and COPD admission cases (89), while the control group was defined as all other patients among the 'final subjects'. That is, the cases are putative chronic environmental diseases, while the controls are mainly acute environmental diseases.
For exposure assessment, air quality data from 73 monitoring sites between 1991 and 1993 were analyzed to surrogate the air pollution exposure levels of the corresponding areas (58 areas). Data on five major air pollutants, TSP, $O_3,\;SO_2$, CO, and NOx, were available, and the area means were applied to the residents of each local area. The 3-year arithmetic mean value and the counts of days violating the long-term and short-term standards during the period were used as indices of exposure. A multiple logistic regression model was applied. All analyses were performed adjusting for current and past smoking history, age, and gender. Results : Plain arithmetic means of pollutant levels did not succeed in revealing any relation to the risk of lung cancer or COPD, while the cumulative counts of non-attainment days did. All pollutant indices failed to show significant positive findings for COPD excess. Lung cancer risks were significantly and consistently associated with the increase of $O_3$ and CO exceedance counts (at a corrected error level of 0.017), and less strongly and less consistently with $SO_2$ and TSP. $O_3$ and CO were estimated to increase the risks of lung cancer by 2.04 and 1.46 respectively, the maximal probable risks, derived from comparing a more polluted area (95%) with a cleaner area (5%). Conclusions : Although not decisive due to potential misclassification of exposure, these results were drawn by relatively conservative interpretation and could be used as evidence of a chronic health effect, especially for lung cancer. $O_3$ might be a candidate promoter of lung cancer, while CO should be considered a surrogate measure of motor vehicle emissions. The control selection in this study may have been less appropriate for COPD, and further evaluation in another setting might be necessary.

  • PDF
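The case-control comparison described above can be illustrated with a minimal odds-ratio calculation; the table counts below are hypothetical placeholders, not the study's data, and the study itself used multiple logistic regression with covariate adjustment rather than a crude 2x2 table.

```python
# Minimal sketch of a case-control odds-ratio calculation, analogous to
# comparing lung-cancer cases and controls by residence in a high- vs.
# low-exceedance area. All counts below are hypothetical illustrations.
import math

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table: a/b exposed/unexposed cases, c/d controls."""
    return (a * d) / (b * c)

def ci95_log_or(a, b, c, d):
    """Approximate 95% CI for the odds ratio via the log-OR standard error."""
    or_ = odds_ratio(a, b, c, d)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical table: high-exceedance residence vs. lung cancer
or_, lo, hi = ci95_log_or(120, 304, 5000, 25000)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```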

Purification and Characterization of Sulfated Polysaccharide Isolated from Hot Water Extract of Pachymeniopsis elliptica (Pachymeniopsis elliptica의 열수 추출물로부터 분리한 함황 다당류의 정제 및 특성)

  • Lee, Sun-Hee;Jun, Woo-Jin;Yu, Kwang-Won;Chun, Hyug;Shin, Dong-Hoon;Hong, Bum-Shik;Cho, Hong-Yon;Yang, Han-Chul
    • Korean Journal of Food Science and Technology
    • /
    • v.32 no.5
    • /
    • pp.1191-1197
    • /
    • 2000
  • In a preliminary study, we investigated the anti-complementary activities of 62 extracts from Korean edible seaweeds. Of those, Pachymeniopsis elliptica showed the highest anti-complementary activity. Therefore, it was purified as follows: i) PE-1 by ethanol precipitation, ii) PE-1-C by ultrafiltration, iii) PE-1-CIV by DEAE-Toyopearl 650C, and iv) PE-1-CIV-ii by Sepharose CL-6B. The purified compound, PE-1-CIV-ii, was a complex homogeneous polysaccharide (molecular mass: 780 kDa) with 82.9% anti-complementary activity. It also contained a significant amount of sulfate groups (30.5%), indicating that it is a sulfated algal polysaccharide. Its constituent monosaccharides were galactose (44.3%), 3,6-anhydrogalactose (34.0%), glucose (8.2%), fucose (5.4%), xylose (5.2%), and rhamnose (2.9%). After periodate treatment of a sample, a significant decrease in anti-complementary activity was found, which is characteristic of bioactive polysaccharides. The anti-tumor activity of PE-1-A, B, and C was tested in the sarcoma-180 solid tumor model. PE-1-C, with the largest molecular mass (more than 300 kDa), showed 81% inhibition of the solid tumors, suggesting that the anti-complementary activity was, at least in part, related to anti-tumor activity. Based upon these results, the purified polysaccharides could serve as an immunopotentiator in vivo.

  • PDF

Adaptive Lock Escalation in Database Management Systems (데이타베이스 관리 시스템에서의 적응형 로크 상승)

  • Chang, Ji-Woong;Lee, Young-Koo;Whang, Kyu-Young;Yang, Jae-Heon
    • Journal of KIISE:Databases
    • /
    • v.28 no.4
    • /
    • pp.742-757
    • /
    • 2001
  • Since database management systems (DBMSs) have limited lock resources, transactions requesting locks beyond the limit must be aborted. In the worst case, if such transactions are aborted repeatedly, the DBMS can become paralyzed, i.e., transactions execute but cannot commit. Lock escalation is considered a solution to this problem. However, existing lock escalation methods do not provide a complete solution. In this paper, we propose a new lock escalation method, adaptive lock escalation, that solves most of the problems. First, we propose a general model for lock escalation and present the concept of the unescalatable lock, which is the major cause of transaction aborts. Second, we propose the notions of semi lock escalation, lock blocking, and selective relief as mechanisms to control the number of unescalatable locks. We then propose the adaptive lock escalation method using these notions. Adaptive lock escalation reduces needless aborts and guarantees that the DBMS is not paralyzed under excessive lock requests. It also allows graceful degradation of performance under those circumstances. Third, through extensive simulation, we show that adaptive lock escalation outperforms existing lock escalation methods. The results show that, compared to the existing methods, adaptive lock escalation reduces the number of aborts and the average response time, and increases the throughput to a great extent. In particular, it is shown that the number of concurrent transactions can be increased more than 16- to 256-fold. The contribution of this paper is significant in that it formally analyzes the role of lock escalation in lock resource management and identifies the detailed underlying mechanisms. Existing lock escalation methods rely on users or the system administrator to handle the problems of excessive lock requests.
In contrast, adaptive lock escalation relieves users of this responsibility by providing graceful degradation and preventing system paralysis through automatic control of unescalatable locks. Thus, adaptive lock escalation can contribute to developing the self-tuning DBMSs that draw a lot of attention these days.

  • PDF
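The basic mechanism that adaptive lock escalation builds on can be sketched as threshold-based escalation: once a transaction holds too many row locks on a table, they are replaced by a single table lock. This generic sketch is illustrative only; the paper's method adds semi lock escalation, lock blocking, and selective relief on top of this idea.

```python
# Sketch of plain threshold-based lock escalation: when a transaction
# holds more than `threshold` row locks on one table, the row locks are
# replaced by a single table-level lock, freeing lock-manager resources.
from collections import defaultdict

class LockManager:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.row_locks = defaultdict(set)   # (txn, table) -> set of row ids
        self.table_locks = set()            # (txn, table) escalated locks

    def lock_row(self, txn, table, row):
        if (txn, table) in self.table_locks:
            return  # already covered by an escalated table lock
        self.row_locks[(txn, table)].add(row)
        if len(self.row_locks[(txn, table)]) > self.threshold:
            # Escalate: one table lock replaces many row locks.
            self.table_locks.add((txn, table))
            del self.row_locks[(txn, table)]

    def lock_count(self):
        return sum(len(v) for v in self.row_locks.values()) + len(self.table_locks)

lm = LockManager(threshold=3)
for row in range(5):
    lm.lock_row("T1", "orders", row)
print(lm.lock_count())  # 1: five row locks collapsed into one table lock
```

The weakness the paper targets is visible even here: escalation succeeds only if the table lock itself can be granted, and a lock that cannot be escalated ("unescalatable") keeps consuming resources.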

The Need for Paradigm Shift in Semantic Similarity and Semantic Relatedness : From Cognitive Semantics Perspective (의미간의 유사도 연구의 패러다임 변화의 필요성-인지 의미론적 관점에서의 고찰)

  • Choi, Youngseok;Park, Jinsoo
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.111-123
    • /
    • 2013
  • The semantic similarity/relatedness measure between two concepts plays an important role in research on system integration and database integration. Moreover, current research on keyword recommendation or tag clustering strongly depends on this kind of semantic measure. For this reason, many researchers in various fields, including computer science and computational linguistics, have tried to improve methods for calculating semantic similarity/relatedness measures. The study of similarity between concepts seeks to discover how a computational process can model the way a human determines the relationship between two concepts. Most research on calculating semantic similarity uses ready-made reference knowledge, such as a semantic network or a dictionary, to measure concept similarity. The topological method calculates relatedness or similarity between concepts based on various forms of a semantic network, including a hierarchical taxonomy. This approach assumes that the semantic network reflects human knowledge well. The nodes in a network represent concepts, and ways to measure the conceptual similarity between two nodes are also regarded as ways to determine the conceptual similarity of two words (i.e., two nodes in a network). Topological methods can be categorized as node-based or edge-based, also called the information content approach and the conceptual distance approach, respectively. The node-based approach calculates similarity between concepts based on how much information the two concepts share in terms of a semantic network or taxonomy, while the edge-based approach estimates the distance between the nodes that correspond to the concepts being compared. Both approaches assume that the semantic network is static. That is, the topological approach has not considered changes in the semantic relations between concepts in the semantic network.
However, as information and communication technologies advance the sharing of knowledge among people, the semantic relations between concepts in a semantic network may change. To explain this change, we adopt cognitive semantics. The basic assumption of cognitive semantics is that humans judge semantic relations based on their cognition and understanding of concepts. This cognition and understanding is called 'world knowledge.' World knowledge can be categorized as personal knowledge and cultural knowledge. Personal knowledge is the knowledge gained from personal experience; everyone can have different personal knowledge of the same concept. Cultural knowledge is the knowledge shared by people who live in the same culture or use the same language; people in the same culture have a common understanding of specific concepts. Cultural knowledge can be the starting point of a discussion about the change of semantic relations. If the culture shared by people changes for some reason, their cultural knowledge may also change. Today's society and culture are changing at a fast pace, and the change of cultural knowledge is not a negligible issue in research on the semantic relationships between concepts. In this paper, we propose future directions for research on semantic similarity. In other words, we discuss how research on semantic similarity can reflect the change of semantic relations caused by the change of cultural knowledge. We suggest three directions for future research on semantic similarity. First, the research should include versioning and update methodology for the semantic network. Second, a dynamically generated semantic network can be used for the calculation of semantic similarity between concepts. If researchers can develop a methodology to extract the semantic network from a given knowledge base in real time, this approach can solve many problems related to the change of semantic relations.
Third, a statistical approach based on corpus analysis can be an alternative to methods using a semantic network. We believe that these proposed research directions can be a milestone for research on semantic relations.
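The edge-based (conceptual distance) measure described above can be sketched on a toy taxonomy: similarity falls off with the shortest path between two concept nodes. The tiny IS-A hierarchy below is invented for illustration, not drawn from any real semantic network.

```python
# Edge-based semantic similarity: shortest path between concept nodes in
# a small hand-made IS-A taxonomy, turned into a similarity score by
# inverse distance. The taxonomy is a toy example.
from collections import deque

taxonomy = {  # child -> parent edges
    "dog": "mammal", "cat": "mammal", "mammal": "animal",
    "sparrow": "bird", "bird": "animal", "animal": "entity",
}

def neighbors(node):
    out = set()
    if node in taxonomy:
        out.add(taxonomy[node])                      # parent
    out |= {c for c, p in taxonomy.items() if p == node}  # children
    return out

def path_length(a, b):
    """Shortest number of edges between two concepts (BFS)."""
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        if node == b:
            return d
        for n in neighbors(node) - seen:
            seen.add(n)
            queue.append((n, d + 1))
    return None

def path_similarity(a, b):
    """Inverse-distance similarity: 1 / (1 + shortest-path length)."""
    d = path_length(a, b)
    return None if d is None else 1 / (1 + d)

print(path_similarity("dog", "cat"))      # siblings, 2 edges apart
print(path_similarity("dog", "sparrow"))  # 4 edges apart via 'animal'
```

A node-based (information content) measure would instead weight the shared ancestor by how informative it is; and, as the paper argues, either way the static `taxonomy` dictionary is exactly the assumption that changing cultural knowledge undermines.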

How effective has the Wairau River erodible embankment been in removing sediment from the Lower Wairau River?

  • Kyle, Christensen
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2015.05a
    • /
    • pp.237-237
    • /
    • 2015
  • The district of Marlborough has had more than its share of river management projects over the past 150 years, each one uniquely affecting the geomorphology and flood hazard of the Wairau Plains. A major early project was to block the Opawa distributary channel at Conders Bend. The Opawa distributary channel took a third or more of the Wairau River's floodwaters and posed a growing threat to Blenheim. Blocking the Opawa required the Wairau and Lower Wairau rivers to carry greater flood flows more often. Consequently, the Lower Wairau River was breaking out of its stopbanks approximately every seven years. The idea of diverting flood waters at Tuamarina by providing a direct diversion to the sea through the beach ridges was conceptualised back around the 1920s; however, limits on resources and machinery meant that excavating this diversion did not become feasible until the 1960s. In 1964 a 10 m wide pilot channel was cut from the sea to Tuamarina with an initial capacity of $700m^3/s$. It was expected that floods would eventually scour this 'Wairau Diversion' to its design channel width of 150 m. This took many more years than initially thought, but after approximately 50 years, with a little mechanical assistance, the Wairau Diversion reached an adequate capacity. Using the power of the river to erode the channel out to its design width and depth was a brilliant idea that saved many thousands of dollars in construction costs, and it is somewhat ironic that the very same concept is now being used to deal with the aggradation problem that the Wairau Diversion has caused. The introduction of the Wairau Diversion did provide some flood relief to the lower reaches of the river, but unfortunately, as the Diversion channel was eroding and enlarging, the Lower Wairau River was aggrading and losing capacity due to its inability to pass its sediment load with reduced flood flows.
It is estimated that approximately $2,000,000m^3$ of sediment was deposited on the bed of the Lower Wairau River between the Diversion's introduction in 1964 and 2010, raising the Lower Wairau's bed upwards of 1.5 m in some locations. A numerical morphological model (MIKE-11 ST) was used to assess a number of options, which led to the decision and resource consent to construct an erodible (fuse plug) bank at the head of the Wairau Diversion to divert more frequent scouring flows ($+400m^3/s$) down the Lower Wairau River. Full control gates were ruled out on the grounds of expense. The initial construction of the erodible bank followed in late 2009, with the bank's level at the fuse location set to overtop and begin washing out at a combined Wairau flow of $1,400m^3/s$, which avoids berm flooding in the Lower Wairau. In the three years since the erodible bank was first constructed, the Wairau River has sustained 14 events with recorded flows at Tuamarina above $1,000m^3/s$, and three events in excess of $2,500m^3/s$. These freshes and floods have resulted in washout and rebuild of the erodible bank eight times, with a combined rebuild expenditure of \$80,000. Marlborough District Council's Rivers & Drainage Department maintains a regular monitoring program for the bed of the Lower Wairau River, which consists of repeatedly surveying a series of standard cross sections and estimating the mean bed level (MBL) at each section as well as the overall MBL change over time. A survey was carried out just prior to the installation of the erodible bank, and another survey was carried out earlier this year. The results from this latest survey show, for the first time since construction of the Wairau Diversion, that the Lower Wairau River is enlarging. It is estimated that the entire bed of the Lower Wairau has eroded down by an overall average of 60 mm since the introduction of the erodible bank, which equates to a total volume of $260,000m^3$.
At a cost of $\$0.30/m^3$, this represents excellent value compared to mechanical dredging, which would likely be in excess of $\$10/m^3$. This confirms that the idea of using the river to enlarge the channel is again working for the Wairau River system and that, in time, nature's "excavator" will provide a channel capacity that will continue to meet design requirements.

  • PDF
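The cost figures in the abstract can be cross-checked with a few lines of arithmetic: spreading the ~$80,000 rebuild expenditure over the ~260,000 m³ eroded gives roughly the quoted $0.30/m³, which the abstract contrasts with dredging at upwards of $10/m³ (whether the $0.30 figure was derived exactly this way is our assumption).

```python
# Cross-checking the reported cost comparison: erodible-bank rebuilds
# (~$80,000 total) vs. mechanical dredging (>= $10/m^3) for the
# ~260,000 m^3 of sediment removed from the Lower Wairau River.
eroded_volume_m3 = 260_000   # total bed erosion since the erodible bank
rebuild_cost = 80_000        # combined rebuild expenditure, 8 rebuilds
dredge_rate = 10.0           # $/m^3, stated lower bound for dredging

cost_per_m3 = rebuild_cost / eroded_volume_m3
dredging_cost = eroded_volume_m3 * dredge_rate

print(round(cost_per_m3, 2))  # ~0.31 $/m^3, consistent with the ~$0.30 quoted
print(int(dredging_cost))     # 2,600,000: dredging the same volume
```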

Effects of Humic Acid on the pH-dependent Sorption of Europium (Eu) to Kaolinite (PH 변화에 따른 카올리나이트와 유로퓸(Eu)의 흡착에 대한 휴믹산의 영향)

  • Harn, Yoon-I;Shin, Hyun-Sang;Rhee, Dong-Seok;Lee, Myung-Ho;Chung, Euo-Cang
    • Journal of Soil and Groundwater Environment
    • /
    • v.14 no.4
    • /
    • pp.23-32
    • /
    • 2009
  • The sorption of europium (Eu(III)) onto kaolinite and the influence of humic acids (HA) over a range of pH 3 ~ 11 were studied by batch adsorption experiments (V/m = 250 : 1 mL/g, $C_{Eu(III)}\;=\;1\;{\times}\;10^{-5}\;mol/L$, $C_{HA}\;=\;5{\sim}50\;mg/L$, $P_{CO2}=10^{-3.5}\;atm$). The concentrations of HA and Eu(III) in the aqueous phase were measured by UV absorbance at 254 nm (i.e., $UV_{254}$) and by ICP-MS after microwave digestion for HA removal, respectively. Results showed that HA sorption onto kaolinite decreased with increasing pH, and the sorption isotherms fit well with the Langmuir adsorption model (except at pH 3). The maximum sorbed amount ($q_{max}$) for HA at pH 4 to 11 ranged from 4.73 to 0.47 mg/g. Europium adsorption onto kaolinite in the absence of HA was typical, showing an increase with pH and a distinct adsorption edge at pH 3 to 5. In the presence of HA, however, Eu adsorption onto kaolinite was significantly affected. HA was shown to enhance Eu adsorption in the acidic pH range (pH 3 ~ 4), due to the formation of additional binding sites for Eu provided by HA adsorbed onto the kaolinite surface, but to reduce Eu adsorption at intermediate and high pH above 6, due to the formation of aqueous Eu-HA complexes. The results on the ternary kaolinite-Eu-HA interaction are compared with those on the binary kaolinite-HA and kaolinite-Eu systems, and the pH-dependent adsorption mechanism is discussed.
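The Langmuir model used for the HA isotherms above, q = q_max·K·C/(1 + K·C), can be fitted via its standard linearised form C/q = C/q_max + 1/(K·q_max). The sketch below recovers assumed parameters from synthetic data; the values (q_max = 4.7, K = 0.8) are illustrative, not the paper's fitted constants.

```python
# Langmuir isotherm fit via the linearised form:
#   C/q = C/q_max + 1/(K * q_max)
# so a straight-line fit of C/q vs. C gives slope = 1/q_max and
# intercept = 1/(K * q_max). Parameters below are illustrative only.
import numpy as np

def langmuir(C, q_max, K):
    return q_max * K * C / (1 + K * C)

# Synthetic equilibrium data (C in mg/L, q in mg/g), assumed q_max=4.7, K=0.8
C = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
q = langmuir(C, 4.7, 0.8)

slope, intercept = np.polyfit(C, C / q, 1)
q_max_fit = 1 / slope
K_fit = 1 / (intercept * q_max_fit)
print(round(q_max_fit, 2), round(K_fit, 2))  # recovers 4.7 and 0.8
```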

Development of an Appropriate Deposit-Estimation System for Restoration of Land-Use-Changed Forest Lands Using the Delphi Technique (델파이 기법을 활용한 적정 산지복구비 산출체계의 개발)

  • Koo, Kiwoon;Kweon, Hyeongkeun;Lee, Sang In;Kwon, Semyung;Seo, Jung Il
    • Journal of Korean Society of Forest Science
    • /
    • v.110 no.4
    • /
    • pp.630-647
    • /
    • 2021
  • We identified the current problems of the restoration deposit-estimation system stipulated by the Mountainous Districts Management Act using the Delphi technique, and consequently proposed a standard model for forest land restoration to derive a reasonable deposit-estimation system. The Delphi survey showed that the inappropriate classification of land-use types and slope gradients, together with the insufficiency of standard works, was a significant problem in the current system. To solve these problems, the current land-use types were reorganized by the subject of the site. The specific subjects included the following: (i) permission or reporting of forest land-use change and temporary use of forest land, (ii) reporting of temporary use of forest land, (iii) permission of stone collection or sale for mineral mining, and (iv) permission of sediment collection. Subdividing the slope gradient into (a) θ<10°, (b) 10°≦θ<15°, (c) 15°≦θ<20°, (d) 20°≦θ<25°, (e) 25°≦θ<30°, and (f) θ≧30°, and reorganizing the 17 standard works into 22 standard works, were deemed solutions, along with seven additional works. We developed 24 standard models for the forest land restoration project based on these results. The deposits estimated by these models ranged from 34,185,000 (Korean) won to 607,403,000 won. If additional works, premiums, discounts, and supervision fees are added to the models, the deposit increases to an estimated 668,143,000 won for sites subject to permission for stone collection or sale and mineral mining. In the Delphi survey, experts agreed at a high level on the distribution of the restoration deposits estimated by these models. Our findings are expected to contribute to securing the appropriateness of the restoration cost deposited for the smooth performance of vicariously executed restoration projects.
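The proposed six-band slope-gradient subdivision is simple enough to express as a lookup; the helper below is an illustrative sketch of that classification, not code from the study.

```python
# Sketch of the proposed slope-gradient subdivision: six classes (a)-(f)
# from theta < 10 degrees up to theta >= 30 degrees, as listed above.
def slope_class(theta_deg):
    bounds = [(10, "a"), (15, "b"), (20, "c"), (25, "d"), (30, "e")]
    for upper, label in bounds:
        if theta_deg < upper:
            return label
    return "f"  # theta >= 30 degrees

print([slope_class(t) for t in (5, 12, 18, 22, 27, 35)])  # one angle per class
```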

Producing Technique and the Transition of Wan(Bowl) of Hanseong Baekje Period - Focus in Seoul·Gyeonggi Area - (한성백제기(漢城百濟期) 완(盌)의 제작기법(製作技法)과 그 변천(變遷) - 서울경기권 출토유물을 중심으로 -)

  • Han, Ji Sun
    • Korean Journal of Heritage: History & Science
    • /
    • v.44 no.4
    • /
    • pp.86-111
    • /
    • 2011
  • Wan is a bowl used to hold boiled rice, soup, or side dishes, and it is a representative vessel showing the development of personal tableware. From the founding period of Hanseong Baekje, the form of the wan succeeded the Jung-do style (中島式) plain pottery of the preceding Proto-Three Kingdoms period, but the wan came to be produced and used as a kiln-fired vessel, representing a marked development in production technique, including hardness and clay. By and large, a capacity of $0.3{\sim}0.4{\ell}$ was the majority, and two production methods using carefully selected fine clay are largely confirmed: first, a basic method in which a clay band is accumulated on a clay tablet on the rotating table and the moulding is finished there; and second, a new method with the same basic moulding, but in which, at the last stage, the wan is taken off the rotating table and inverted to trim the bottom and remove the angle of the flat base. The former, basic production method is the classical one, in use since the wan of Jung-do style plain pottery, and wan were produced with it throughout the Hanseong Baekje period. The latter, on the other hand, is a production method obtained by imitating the forms of Chinese porcelain that flowed in through exchanges between Baekje and China, and through comparison with Chinese chronological material it is estimated to have been produced and used after the middle of the 4th century. It can therefore be seen that the Baekje people's demand for Chinese-made articles was great, and that imitation pottery was produced and used alongside Baekje pottery. In addition, bowls with an outward-flaring mouth are confirmed in numbers among Lakrang (樂浪) pottery wan, and it is assumed that this form was produced under their influence.

An Investigation on the Periodical Transition of News related to North Korea using Text Mining (텍스트마이닝을 활용한 북한 관련 뉴스의 기간별 변화과정 고찰)

  • Park, Chul-Soo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.63-88
    • /
    • 2019
  • The goal of this paper is to investigate changes in North Korea's domestic and foreign policies through automated text analysis of coverage of North Korea in South Korean mass media. Based on that data, we analyze the status of text mining research, using text mining techniques to find the topics, methods, and trends of the field. We also investigate the characteristics and analysis methods of the text mining techniques, confirmed by analysis of the data. In this study, the R program was used to apply the text mining techniques; R is free software for statistical computing and graphics. Text mining methods make it possible to highlight the most frequently used keywords in a body of text, for example as a word cloud (also referred to as a text cloud or tag cloud). This study proposes a procedure to find meaningful tendencies based on a combination of word clouds and co-occurrence networks. It aims to explore more objectively the images of North Korea represented in South Korean newspapers by quantitatively reviewing the patterns of language use related to North Korea in newspaper big data from November 1, 2016 to May 23, 2019. We divided the data into three periods in consideration of recent inter-Korean relations. The period before January 1, 2018 was set as the Before Phase of Peace Building. The period from January 1, 2018 to February 24, 2019 was set as the Peace Building Phase: Kim Jong-un's New Year's message and the PyeongChang Olympics formed an atmosphere of peace on the Korean Peninsula. After the Hanoi summit, the third period was marked by silence in the relationship between North Korea and the United States, and was therefore called the Depression Phase of Peace Building. This study analyzes news articles related to North Korea from the Korea Press Foundation database (www.bigkinds.or.kr) through text mining to investigate the characteristics of the Kim Jong-un regime's South Korea policy and unification discourse.
The main results of this study show that trends in North Korea's national policy agenda can be discovered based on clustering and visualization algorithms. In particular, the study examines changes in international circumstances, domestic conflicts, living conditions in North Korea, the South's aid projects for the North, inter-Korean conflicts, the North Korean nuclear issue, and the North Korean refugee problem through co-occurrence word analysis. It also offers an analysis of South Korean sentiment toward North Korea in terms of semantic prosody. In the Before Phase of Peace Building, the analysis extracted, in order, 'Missile', 'North Korea Nuclear', 'Diplomacy', 'Unification', and 'South-North Korean'. The Peace Building Phase yielded, in order, 'Panmunjom', 'Unification', 'North Korea Nuclear', 'Diplomacy', and 'Military'. The Depression Phase of Peace Building yielded, in order, 'North Korea Nuclear', 'North and South Korea', 'Missile', 'State Department', and 'International'. There are 16 words adopted in all three periods, in the following order: 'Missile', 'North Korea Nuclear', 'Diplomacy', 'Unification', 'North and South Korea', 'Military', 'Kaesong Industrial Complex', 'Defense', 'Sanctions', 'Denuclearization', 'Peace', 'Exchange and Cooperation', and 'South Korea'. We expect that the results of this study will contribute to analyzing trends in news content associated with North Korea's provocations, and that future research on North Korean trends will build on these results. We will continue to study model development for North Korea risk measurement that can anticipate and respond to North Korea's behavior in advance. We expect that the text mining analysis method and scientific data analysis techniques will be applied to North Korea and unification research.
Through such academic studies, we hope to see much research that makes important contributions to the nation.
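The study performed its keyword-frequency and co-occurrence analysis in R; the counting step those analyses rest on can be sketched equivalently in Python. The tiny "headline" corpus below is invented for illustration, not drawn from the BIG Kinds database.

```python
# Word frequency and document-level co-occurrence counting, the inputs
# to a word cloud and a co-occurrence network respectively. The toy
# corpus is invented; the study used R on Korean newspaper articles.
from collections import Counter
from itertools import combinations

docs = [
    "north korea missile test sanctions",
    "summit peace north korea denuclearization",
    "missile sanctions denuclearization talks",
]

tokens = [d.split() for d in docs]
freq = Counter(w for doc in tokens for w in doc)

# Co-occurrence: count unordered word pairs appearing in the same document
cooc = Counter()
for doc in tokens:
    for a, b in combinations(sorted(set(doc)), 2):
        cooc[(a, b)] += 1

print(freq.most_common(3))
print(cooc[("missile", "sanctions")])
```

A word cloud scales each word by `freq`, and a co-occurrence network draws an edge between each pair in `cooc` weighted by its count.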

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.175-197
    • /
    • 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been actively conducted, showing remarkable results in various fields such as classification, summarization, and generation. Among the various text analysis fields, text classification is the most widely used technology in academia and industry. Text classification includes binary classification with one label from two classes, multi-class classification with one label from several classes, and multi-label classification with multiple labels from several classes. Multi-label classification in particular requires a different training method from binary and multi-class classification because of its characteristic of having multiple labels. In addition, since the number of labels to be predicted grows as the number of labels and classes grows, performance improvement is difficult due to the increased prediction difficulty. To overcome these limitations, research on label embedding is being actively conducted: (i) compressing the initially given high-dimensional label space into a low-dimensional latent label space, (ii) training to predict the compressed label, and (iii) restoring the predicted label to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, since these techniques consider only the linear relationships between labels, or compress the labels by random transformation, they have difficulty capturing the non-linear relationships between labels, and thus cannot create a latent label space that sufficiently contains the information of the original label space.
Recently, there have been increasing attempts to improve performance by applying deep learning to label embedding. Label embedding using an autoencoder, a deep learning model that is effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding suffers a large amount of information loss when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space. This is related to the vanishing-gradient problem that occurs in the backpropagation process of learning. To solve this problem, the skip connection was devised: by adding the input of a layer to its output, gradient loss during backpropagation is prevented, and efficient learning is possible even when the network is deep. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies using skip connections in autoencoders or in the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder to form a low-dimensional latent label space that reflects the information of the high-dimensional label space well. The proposed methodology was applied to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space. Using this, we conducted an experiment to predict the compressed keyword vector in the latent label space from the paper abstract and to evaluate the multi-label classification by restoring the predicted keyword vector to the original label space. As a result, the accuracy, precision, recall, and F1 score used as performance indicators showed far superior performance for multi-label classification based on the proposed methodology compared to traditional multi-label classification methods.
This indicates that the low-dimensional latent label space derived through the proposed methodology reflects the information of the high-dimensional label space well, which ultimately improves the performance of the multi-label classification itself. In addition, the utility of the proposed methodology was confirmed by comparing its performance across domain characteristics and across numbers of dimensions of the latent label space.
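The core mechanism, an autoencoder whose layer input is added to its output, can be sketched as a forward pass in numpy. Dimensions and weights below are illustrative assumptions; the paper's trained model is not reproduced here, only the skip-connection structure that keeps a direct path from input to output.

```python
# Minimal forward-pass sketch of a skip-connected label autoencoder:
# an 8-dim label vector is compressed to a 3-dim latent vector and
# decoded back, with a skip path adding the input to the reconstruction.
# All shapes and weights are illustrative, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0)

def skip_autoencoder(x, W_enc, W_dec, W_skip):
    """Encode, decode, and add the skip path from input to output."""
    z = relu(W_enc @ x)              # low-dimensional latent label vector
    x_hat = W_dec @ z + W_skip @ x   # skip connection added to the output
    return z, x_hat

d_in, d_latent = 8, 3
x = rng.random(d_in)  # stand-in for a multi-hot label vector

# With zero encoder/decoder weights and an identity skip path, the
# reconstruction equals the input exactly -- the property that keeps
# gradients flowing through deep stacks during backpropagation.
z, x_hat = skip_autoencoder(x, np.zeros((d_latent, d_in)),
                            np.zeros((d_in, d_latent)), np.eye(d_in))
print(np.allclose(x_hat, x))
```

In a trained model the encoder/decoder weights carry the non-linear label relationships, while the skip path guarantees the layer only has to learn a residual on top of the identity.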