• Title/Summary/Keyword: Engineering characteristics


Evaluation of the Natural Vibration Modes and Structural Strength of WTIV Legs based on Seabed Penetration Depth (해상풍력발전기 설치 선박 레그의 해저면 관입 깊이에 따른 고유 진동 모드와 구조 강도 평가)

  • Myung-Su Yi;Kwang-Cheol Seo;Joo-Shin Park
    • Journal of the Korean Society of Marine Environment & Safety
    • /
    • v.30 no.1
    • /
    • pp.127-134
    • /
    • 2024
  • With the growth of the offshore wind power generation market, the corresponding installation vessel market is also growing. It is anticipated that approximately 100 installation vessels will be required in the offshore wind power generation market by 2030. With a price range of 300 to 400 billion Korean won per vessel, this represents a high-value market compared to merchant vessels. In particular, demand for large installation vessels with a capacity of 11 MW or more is increasing. The rapid growth of the offshore wind power generation market in the Asia-Pacific region, centered around China, has led to several discussions on orders for installation vessels to operate in this region. The seabed geology in the Asia-Pacific region is dominated by clay layers with low bearing capacity. Owing to these characteristics, significant spudcan and leg penetration depths occur as the installation vessel is jacked up above and lowered back to the water surface during operation. In this study, using penetration depths ranging from 3 to 21 m as the variable, the natural vibration period, the structural safety of the legs, and the overturning safety index were assessed for each penetration depth. As the penetration depth increases, the natural vibration period and the unsupported (moment arm) length of the leg become shorter, increasing the margin of structural strength. The structure is safe against the overturning moment at all angles of incidence, with the maximum value occurring at 270 degrees. The conditions reviewed in this study can be used as crucial data for determining leg operation according to penetration depth when developing operating procedures for WTIVs in soft soil. In conclusion, accurately determining the safety of the leg structure according to the penetration depth is directly related to the safety of the WTIV.
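A rough way to see the trend reported above (deeper penetration, shorter unsupported leg, shorter natural period, larger strength margin) is the textbook formula for the first bending mode of a uniform cantilever. The sketch below is only an illustration under that simplification; the section properties and leg length are hypothetical placeholders, not values from the paper, whose detailed structural model is not reproduced here.

```python
# Minimal sketch: first natural period of a WTIV leg idealized as a uniform
# cantilever fixed at the seabed fixity point. E, I, mass per length and the
# total leg length below are hypothetical placeholders, not data from the paper.
import math

E = 210e9        # Young's modulus of steel [Pa]
I = 25.0         # second moment of area of the leg section [m^4] (placeholder)
m_per_L = 2.0e4  # mass per unit length of the leg [kg/m] (placeholder)

def first_natural_period(L: float) -> float:
    """T1 = 2*pi / omega1 with omega1 = 1.875**2 * sqrt(E*I / (m_per_L * L**4))."""
    omega1 = (1.875 ** 2) * math.sqrt(E * I / (m_per_L * L ** 4))
    return 2.0 * math.pi / omega1

# Deeper penetration -> shorter unsupported length -> shorter natural period.
leg_below_hull = 60.0  # [m], placeholder total length below the hull
for penetration in (3.0, 12.0, 21.0):
    L = leg_below_hull - penetration
    print(f"penetration {penetration:4.1f} m: L = {L:4.1f} m, T1 = {first_natural_period(L):.2f} s")
```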

Analysis of sustainability changes in the Korean rice cropping system using an emergy approach (에머지 접근법을 이용한 국내 벼농사 시스템의 지속가능성 변화 분석)

  • Yongeun Kim;Minyoung Lee;Jinsol Hong;Yun-Sik Lee;June Wee;Jaejun Song;Kijong Cho
    • Korean Journal of Environmental Biology
    • /
    • v.41 no.4
    • /
    • pp.482-496
    • /
    • 2023
  • Many changes in the scale and structure of the Korean rice cropping system have been made over the past few decades, yet insufficient research has been conducted on the sustainability of this system. This study analyzed changes in the Korean rice cropping system's sustainability from a systems ecology perspective using an emergy approach. For this purpose, an emergy table was created for the Korean rice cropping system in 2011, 2016, and 2021, and an emergy-based indicator analysis was performed. The emergy analysis showed that the total emergy input to the rice cropping system decreased from 10,744E+18 sej year-1 to 8,342E+18 sej year-1 due to decreases in paddy field area from 2011 to 2021, and the proportion of renewable resources decreased by 1.4%. The emergy input per hectare decreased from 13.13E+15 sej ha-1 year-1 in 2011 to 11.89E+15 sej ha-1 year-1 in 2021, mainly because of decreases in nitrogen fertilizer usage and working hours. The amount of emergy used to grow 1 g of rice stayed the same between 2016 and 2021 (specific emergy: 13.3E+09 sej g-1), but the sustainability of the rice cropping system (emergy sustainability index, ESI) continued to decrease (2011: 0.107, 2016: 0.088, and 2021: 0.086). This study provides quantitative information on the emergy input structure and characteristics of the Korean rice cropping system. The results can serve as a valuable reference in establishing measures to improve its ecological sustainability.
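The emergy indicators named above follow standard definitions, summarized in the sketch below. The split of total emergy into renewable (R), non-renewable (N), and purchased (F) inputs is a made-up placeholder, since the paper's emergy table is not reproduced here.

```python
# Minimal sketch of the standard emergy-based indicators (EYR, ELR, ESI) and of
# specific emergy. The input values are hypothetical, not the paper's data.

def emergy_indicators(R: float, N: float, F: float) -> dict:
    """R: renewable, N: non-renewable, F: purchased emergy inputs (same units)."""
    Y = R + N + F              # total emergy driving the system
    EYR = Y / F                # emergy yield ratio
    ELR = (N + F) / R          # environmental loading ratio
    return {"EYR": EYR, "ELR": ELR, "ESI": EYR / ELR}  # ESI = EYR / ELR

def specific_emergy(total_emergy_sej: float, yield_mass_g: float) -> float:
    """Emergy used per gram of product (e.g. ~13.3e9 sej/g reported for rice)."""
    return total_emergy_sej / yield_mass_g

# Illustrative, made-up inputs in units of 1e18 sej per year:
print(emergy_indicators(R=600.0, N=300.0, F=7442.0))
```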

Development of Crushing Device for Whole Crop Silage and the Characteristics of Crushed Whole Crop Silage (총체맥류 분쇄기 개발 및 분쇄 총체맥류 사일리지의 품질 특성)

  • Lee, Sunghyoun;Yu, Byeongkee;Ju, Sunyi;Park, Taeil
    • Journal of The Korean Society of Grassland and Forage Science
    • /
    • v.36 no.4
    • /
    • pp.344-349
    • /
    • 2016
  • This study was conducted to evaluate the possibility of expanding the use of whole crop silage from beef cattle and dairy cows to hogs and chickens. For this purpose, a crushing device was developed to crush whole crop silage; the crushed silage was sealed and analyzed for its feed value. The silage varieties used in the experiment were Saessal barley and Geumgang wheat. Whole crop barley and wheat were crushed in the crushing system as a whole, without separating stems, leaves, and grains. When the crushed whole crop silages (CWCS) were analyzed, full grains, grains above 10 mm in size, grains 5~10 mm in size, and grains below 5 mm in size accounted for 20%, 4%, 27%, and 49%, respectively. To facilitate fermentation, a fermenting inoculant was added to one set of CWCS samples of each crop (barley or wheat); as a control, another set of samples was not inoculated. The crude protein (CP), ether extract (EE), crude fiber (CF), neutral detergent fiber (NDF), acid detergent fiber (ADF), lignin, cellulose content, total digestible nutrients (TDN), and relative feed value (RFV) of the inoculated Saessal barley were 2.45%, 1.61%, 8.95%, 16.94%, 9.52%, 1.01%, 8.51%, 81.38%, and 447.5%, respectively. The corresponding values for the uninoculated Saessal barley were 2.57%, 1.62%, 9.61%, 18.25%, 10.13%, 1.10%, 9.04%, 80.90%, and 412.9%; for the inoculated Geumgang wheat, 2.43%, 1.27%, 10.99%, 19.49%, 11.23%, 1.46%, 9.77%, 80.03%, and 382.6%; and for the uninoculated Geumgang wheat, 2.28%, 1.44%, 10.08%, 18.02%, 10.44%, 1.26%, 9.18%, 80.65%, and 416.9%. The TDN and RFV of the inoculated Saessal barley were thus 81.38% and 447.5%, while those of the inoculated Geumgang wheat were 80.03% and 382.6%. When the feed value of whole crop barley and wheat silage made without crushing was compared with that of silage made with the crushing system, the latter was higher. This may be because sealing the crushed silage minimized the air remaining between particles and shortened the time needed to reach good fermentation. In conclusion, these results indicate that a crushing process may be needed to facilitate fermentation and improve the quality of whole crop silage.
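The reported RFV values are consistent with the standard forage-quality equations, as the quick check below shows for the inoculated Saessal barley (ADF 9.52%, NDF 16.94%). Whether the authors used exactly these equations is an assumption.

```python
# Standard relative-feed-value equations (assumed, not quoted from the paper).
def forage_quality(adf_pct: float, ndf_pct: float) -> dict:
    ddm = 88.9 - 0.779 * adf_pct   # digestible dry matter, % of DM
    dmi = 120.0 / ndf_pct          # dry-matter intake, % of body weight
    rfv = ddm * dmi / 1.29         # relative feed value (100 = reference forage)
    return {"DDM_pct": round(ddm, 2), "DMI_pct_BW": round(dmi, 2), "RFV": round(rfv, 1)}

# Inoculated Saessal barley from the abstract: ADF 9.52%, NDF 16.94%
print(forage_quality(adf_pct=9.52, ndf_pct=16.94))
# RFV comes out to about 447, matching the reported 447.5; DDM is about 81.5,
# close to the reported TDN of 81.38, suggesting TDN was estimated from ADF here.
```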

Low Temperature Growth of MCN(M=Ti, Hf) Coating Layers by Plasma Enhanced MOCVD and Study on Their Characteristics (플라즈마 보조 유기금속 화학기상 증착법에 의한 MCN(M=Ti, Hf) 코팅막의 저온성장과 그들의 특성연구)

  • Boo, Jin-Hyo;Heo, Cheol-Ho;Cho, Yong-Ki;Yoon, Joo-Sun;Han, Jeon-G.
    • Journal of the Korean Vacuum Society
    • /
    • v.15 no.6
    • /
    • pp.563-575
    • /
    • 2006
  • Ti(C,N) films were synthesized by pulsed DC plasma-enhanced metal-organic chemical vapor deposition (PEMOCVD) using the metal-organic compound tetrakis(diethylamide)titanium at 200-300 °C. To compare plasma parameters, H2 and He/H2 were used as carrier gases, and the effect of N2 and NH3 as reactive gases on reducing the carbon content of the films was also evaluated. Radical formation and ionization behavior in the plasma were analyzed in situ by optical emission spectroscopy (OES) at various pulsed bias voltages and gas species. The He/H2 mixture was very effective in enhancing the ionization of radicals, especially N2. Ammonia (NH3) gas also strongly suppressed the formation of CN radicals, thereby greatly decreasing the carbon content of the Ti(C,N) films. The microhardness of the films ranged from 1,250 Hk0.01 to 1,760 Hk0.01 depending on the gas species and bias voltage; higher hardness was obtained with H2 and N2 gases and a bias voltage of 600 V. Hf(C,N) films were also obtained by pulsed DC PEMOCVD from tetrakis(diethylamide)hafnium and an N2/He-H2 mixture. The depositions were carried out at temperatures below 300 °C and a total chamber pressure of 1 Torr while varying the deposition parameters. Increasing the nitrogen content in the plasma decreased the growth rate and led to amorphous components, attributed to the high carbon content of the films. In the XRD analysis, the dominant lattice plane was the (111) direction, and the maximum microhardness, 2,460 Hk0.025, was observed for a Hf(C,N) film grown at -600 V and a nitrogen flow-rate ratio of 0.1. The optical emission spectra measured during the PEMOCVD growth of the Hf(C,N) films are also discussed: N2, N2+, H, He, CH, and CN radicals and metal species (Hf) were detected, and the CH and CN radicals, which play an important role in the overall PEMOCVD process, increased the carbon content.

A study on the prediction of korean NPL market return (한국 NPL시장 수익률 예측에 관한 연구)

  • Lee, Hyeon Su;Jeong, Seung Hwan;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.123-139
    • /
    • 2019
  • The Korean NPL (non-performing loan) market was formed by the government and foreign capital shortly after the 1997 IMF crisis. The market's history is nevertheless short: bad debt began to increase again after the 2009 global financial crisis amid the recession in the real economy, and NPLs have become a major investment vehicle only in recent years, when domestic capital began to enter the NPL market in earnest. Although the domestic NPL market has received considerable attention due to its recent overheating, research on it has been scarce because the history of capital market investment in the domestic NPL market is short. In addition, decision-making based on more scientific and systematic analysis is required due to declining profitability and price fluctuations driven by swings in the real estate business. In this study, we propose a prediction model that can determine whether the benchmark yield will be achieved, using NPL market data chosen in accordance with market demand. To build the model, we used Korean NPL data covering about four years, from December 2013 to December 2017; the total number of NPL assets (collateral items) in the data was 2,291. As independent variables, only those related to the dependent variable were selected from 11 variables describing the characteristics of the real estate. For variable selection, one-to-one t-tests, stepwise logistic regression, and decision trees were used, yielding seven independent variables: purchase year, SPC (Special Purpose Company), municipality, appraisal value, purchase cost, OPB (Outstanding Principal Balance), and HP (Holding Period). The dependent variable is a binary variable indicating whether the benchmark rate of return is reached. This is because a model predicting a binomial variable is more accurate than one predicting a continuous variable, and this accuracy is directly related to the effectiveness of the model; moreover, for a special purpose company the main concern is whether or not to purchase the property, so knowing whether a certain level of return will be achieved is enough to make the decision. For the dependent variable, we constructed and compared predictive models with different threshold values to ascertain whether 12%, the standard rate of return used in the industry, is a meaningful reference value. As a result, the average hit ratio of the predictive model built with the dependent variable defined by the 12% standard rate of return was the best, at 64.60%. To propose an optimal prediction model based on the chosen dependent variable and the 7 independent variables, we built prediction models with five methodologies (discriminant analysis, logistic regression, decision tree, artificial neural network, and a genetic-algorithm linear model) and compared them. To do this, 10 sets of training and testing data were extracted using 10-fold cross-validation. After building the models on these data, the hit ratio of each set was averaged and the performance compared. The average hit ratios of the prediction models built using discriminant analysis, logistic regression, decision tree, artificial neural network, and the genetic-algorithm linear model were 64.40%, 65.12%, 63.54%, 67.40%, and 60.51%, respectively. The model using the artificial neural network was confirmed to be the best. This study shows that using the 7 independent variables with an artificial neural network prediction model is effective for the NPL market. The proposed model predicts in advance whether a new NPL asset will achieve the 12% return, which will help special purpose companies make investment decisions. Furthermore, we anticipate that the NPL market will become more liquid as transactions proceed at appropriate prices.
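The model comparison described above can be reproduced in outline with scikit-learn, as in the hedged sketch below. The data are synthetic stand-ins for the 2,291 NPL records, and the genetic-algorithm linear model is omitted because scikit-learn has no built-in equivalent.

```python
# Minimal sketch (synthetic data, not the paper's NPL dataset): binary target
# "12% benchmark return reached or not", 7 features, 10-fold cross-validation,
# and four of the five compared models.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder stand-in for 2,291 NPL assets described by 7 selected variables.
X, y = make_classification(n_samples=2291, n_features=7, n_informative=5, random_state=0)

models = {
    "discriminant analysis": LinearDiscriminantAnalysis(),
    "logistic regression":   LogisticRegression(max_iter=1000),
    "decision tree":         DecisionTreeClassifier(random_state=0),
    "artificial neural net": make_pipeline(StandardScaler(),
                                           MLPClassifier(max_iter=1000, random_state=0)),
}

for name, model in models.items():
    hit_ratio = cross_val_score(model, X, y, cv=10, scoring="accuracy").mean()
    print(f"{name:>22s}: mean 10-fold hit ratio = {hit_ratio:.4f}")
```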

Development of Information Extraction System from Multi Source Unstructured Documents for Knowledge Base Expansion (지식베이스 확장을 위한 멀티소스 비정형 문서에서의 정보 추출 시스템의 개발)

  • Choi, Hyunseung;Kim, Mintae;Kim, Wooju;Shin, Dongwook;Lee, Yong Hun
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.111-136
    • /
    • 2018
  • In this paper, we propose a methodology for extracting answer information for queries from various types of unstructured documents collected from multiple sources on the web, in order to expand a knowledge base. The proposed methodology consists of the following steps: 1) collect relevant documents from Wikipedia, Naver encyclopedia, and Naver news for "subject-predicate" separated queries and classify the proper documents; 2) determine whether each sentence is suitable for extracting information and derive a confidence score; 3) based on the predicate feature, extract the information from the proper sentences and derive an overall confidence for the extraction result. To evaluate the performance of the information extraction system, we selected 400 queries from the SK Telecom artificial intelligence speaker. Compared with the baseline model, the proposed system showed a higher performance index. The contribution of this study is the development of a sequence tagging model based on a bi-directional LSTM-CRF that uses the predicate feature of the query; with this, we obtained a robust model that can maintain high recall even on the various types of unstructured documents collected from multiple sources. Information extraction for knowledge base expansion must take into account the heterogeneous characteristics of source-specific document types, and the proposed methodology proved to extract information effectively from various document types compared to the baseline model; previous research has the limitation that performance is poor when extracting information from document types different from the training data. In addition, this study can prevent unnecessary extraction attempts on documents that do not include the answer, through the step that predicts the suitability of documents and sentences for information extraction before extraction is performed. It is meaningful that we provide a method by which precision can be maintained even in a real web environment. The information extraction problem for knowledge base expansion targets unstructured documents on the real web, so there is no guarantee that a given document contains the correct answer. When question answering is performed on the real web, previous machine reading comprehension studies show a low level of precision because they frequently attempt to extract an answer even from documents that contain no correct answer. The policy of predicting the suitability of documents and sentences for extraction is meaningful in that it contributes to maintaining extraction performance in this setting. The limitations of this study and future research directions are as follows. First, data preprocessing: in this study, the unit of knowledge extraction is determined through morphological analysis based on the open-source KoNLPy Python package, and the extraction result can be degraded when morphological analysis is not performed properly; an improved morphological analyzer is needed to enhance performance. Second, entity ambiguity: the information extraction system of this study cannot distinguish between different entities that share the same name. If several people with the same name appear in the news, the system may not extract information about the intended entity; future research needs to take measures to disambiguate identical names. Third, evaluation query data: in this study, we selected 400 user queries collected from SK Telecom's interactive artificial intelligence speaker to evaluate the performance of the system, and we developed an evaluation data set using 800 documents (400 questions * 7 articles per question: 1 Wikipedia, 3 Naver encyclopedia, 3 Naver news) by judging whether a correct answer is included. To ensure the external validity of the study, it is desirable to use more queries to assess the system; since this is a costly manual activity, future research should evaluate the system on a larger query set. It is also necessary to develop a Korean benchmark data set for information extraction from multi-source web documents so that results can be evaluated more objectively.
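A bi-directional LSTM-CRF tagger of the kind described above can be sketched as follows. This is not the authors' model: the vocabulary sizes, the way the predicate feature is encoded, and the use of the third-party pytorch-crf package for the CRF layer are all assumptions for illustration.

```python
# Hedged sketch of a BiLSTM-CRF answer tagger with a query predicate feature.
import torch
import torch.nn as nn
from torchcrf import CRF  # third-party package: pip install pytorch-crf

class BiLSTMCRFTagger(nn.Module):
    def __init__(self, vocab_size, predicate_feat_size, num_tags,
                 emb_dim=128, hidden_dim=256):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # The query's predicate feature is concatenated to every token embedding.
        self.lstm = nn.LSTM(emb_dim + predicate_feat_size, hidden_dim // 2,
                            batch_first=True, bidirectional=True)
        self.emit = nn.Linear(hidden_dim, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def _emissions(self, tokens, predicate_feats):
        x = torch.cat([self.word_emb(tokens), predicate_feats], dim=-1)
        h, _ = self.lstm(x)
        return self.emit(h)

    def loss(self, tokens, predicate_feats, tags, mask):
        return -self.crf(self._emissions(tokens, predicate_feats), tags, mask=mask)

    def decode(self, tokens, predicate_feats, mask):
        return self.crf.decode(self._emissions(tokens, predicate_feats), mask=mask)

# Toy forward pass: 2 sentences of length 10, 5 BIO-style answer tags.
model = BiLSTMCRFTagger(vocab_size=1000, predicate_feat_size=16, num_tags=5)
tokens = torch.randint(1, 1000, (2, 10))
pred_feats = torch.randn(2, 10, 16)
tags = torch.randint(0, 5, (2, 10))
mask = torch.ones(2, 10, dtype=torch.bool)
print(model.loss(tokens, pred_feats, tags, mask).item())  # negative log-likelihood
print(model.decode(tokens, pred_feats, mask))             # best tag sequences
```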

Query-based Answer Extraction using Korean Dependency Parsing (의존 구문 분석을 이용한 질의 기반 정답 추출)

  • Lee, Dokyoung;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.161-177
    • /
    • 2019
  • In this paper, we study improving answer extraction performance in a question answering (QA) system by using sentence dependency parsing results. A QA system consists of query analysis, which analyzes the user's query, and answer extraction, which extracts appropriate answers from documents, and various studies have been conducted on both. To improve answer extraction performance, the grammatical information of sentences must be accurately reflected. In Korean, because word order is relatively free and sentence components are frequently omitted, dependency parsing is a good way to analyze syntax. Therefore, in this study, we improved answer extraction performance by adding features generated from dependency parsing to the inputs of the answer extraction model (bidirectional LSTM-CRF). We compared the performance of the model when given only basic word features generated without dependency parsing against its performance when the Eojeol tag feature and the dependency graph embedding feature were added. Since dependency parsing is performed on the Eojeol, the basic unit of a sentence separated by spaces, the tag information of each Eojeol is obtained as a result of parsing; the Eojeol tag feature is this tag information. Generating the dependency graph embedding consists of building the dependency graph from the parsing result and learning an embedding of the graph. From the parsing result, a graph is built with Eojeols as nodes, dependencies between Eojeols as edges, and Eojeol tags as node labels; depending on whether the direction of the dependency relation is considered, either an undirected or a directed graph is generated. To obtain the embedding of the graph, we used Graph2Vec, a method that finds a graph embedding from the subgraphs constituting the graph. The maximum path length between nodes can be specified when finding these subgraphs: if it is 1, the graph embedding is generated only from direct dependencies between Eojeols, and larger values include increasingly indirect dependencies. In the experiments, the maximum path length between nodes was varied from 1 to 3, with and without considering the direction of dependency, and answer extraction performance was measured. The results show that both the Eojeol tag feature and the dependency graph embedding feature improve answer extraction performance. In particular, the highest performance was obtained when the direction of the dependency relation was considered and the dependency graph embedding was generated with a maximum path length of 1 in the Graph2Vec subgraph extraction. From these experiments we conclude that it is better to take the direction of dependency into account and to consider only direct connections rather than indirect dependencies between words. The significance of this study is as follows. First, we improved answer extraction performance by adding features based on dependency parsing results, taking into account the characteristics of Korean, namely free word order and frequent omission of sentence components. Second, we generated the dependency-parsing feature with a learning-based graph embedding method, without manually defining patterns of dependency between Eojeols. Future research directions are as follows. In this study, the features generated from dependency parsing were applied only to the answer extraction model. In the future, if their usefulness is confirmed by applying them to other natural language processing models such as sentiment analysis or named entity recognition, the validity of the features can be verified more accurately.
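The graph construction described above can be sketched directly; the embedding step is shown with the karateclub package's Graph2Vec, which is an assumption (karateclub expects undirected graphs with nodes indexed 0..n-1, so the directed-graph variant in the paper would need a different implementation). The parse below is a made-up example.

```python
# Hedged sketch: Eojeols as nodes, dependencies as edges, Eojeol tags as labels.
import networkx as nx
from karateclub import Graph2Vec  # assumed third-party package: pip install karateclub

def dependency_graph(parse, directed=True):
    """parse: list of (eojeol_index, head_index, eojeol_tag); head_index -1 = root."""
    g = nx.DiGraph() if directed else nx.Graph()
    for idx, _head, tag in parse:
        g.add_node(idx, feature=tag)      # node label = Eojeol tag
    for idx, head, _tag in parse:
        if head >= 0:
            g.add_edge(idx, head)         # edge = dependency relation
    return g

# Hypothetical 4-Eojeol parse (node indices 0..n-1, as Graph2Vec requires).
parse = [(0, 3, "NP_SBJ"), (1, 2, "NP_OBJ"), (2, 3, "VP"), (3, -1, "VP_ROOT")]
g = dependency_graph(parse, directed=True)

# wl_iterations=1 plays the role of "maximum path length 1" (direct dependencies
# only); Graph2Vec here is fit on the undirected version of the graph.
model = Graph2Vec(wl_iterations=1, attributed=True, dimensions=64, min_count=1)
model.fit([g.to_undirected()])
print(model.get_embedding().shape)        # (number of graphs, 64)
```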

A Study on the Seawater Filtration Characteristics of Single and Dual-filter Layer Well by Field Test (현장실증시험에 의한 단일 및 이중필터층 우물의 해수 여과 특성 연구)

  • Song, Jae-Yong;Lee, Sang-Moo;Kang, Byeong-Cheon;Lee, Geun-Chun;Jeong, Gyo-Cheol
    • The Journal of Engineering Geology
    • /
    • v.29 no.1
    • /
    • pp.51-68
    • /
    • 2019
  • This study evaluates the applicability of a seashore-filtration type seawater intake using a dual-filter well as an alternative to direct seawater intake. Full-scale dual-filter and single-filter wells were installed in a seashore unconfined (free-surface) aquifer composed of a sand layer, and hydraulic conductivity and the proper pumping rate were evaluated according to the filter condition. According to the results of the step aquifer test, the dual-filter well showed a 110.3% improvement ratio in hydraulic conductivity relative to the single-filter well; the dual filter also showed higher hydraulic conductivity at the same pumping rate, meaning its permeability is better than that of the single filter. According to the continuous aquifer test, analysis of hydraulic conductivity using the monitoring and gauging wells showed that the dual-filter well (SD1200) has higher hydraulic conductivity than the single-filter well (SS800), with a 110.7% improvement ratio. Evaluation of the pumping rate from the drawdown analysis showed that the dual-filter well delivered 122.8% of the single-filter well's pumping rate at a drawdown of 2.0 m. Calculating the proper pumping rate from the drawdown rate, the dual-filter well yielded 136.0% of the single-filter well's rate. Overall, the proper pumping rate of the dual filter was 122.8~160% of that of the single filter, with an average ratio of 139.6%; in other words, water intake efficiency can be improved by about 40% simply by installing a dual filter instead of a conventional well. The proper pumping rate of the dual-filter well determined from the inflection point is 2,843.3 L/min, corresponding to a daily seawater intake of about 4,100 m3/day (≈4,094.3 m3/day) from a single dual-filter well. Because a large volume of water can be taken from one hole, high applicability is anticipated. When seawater is taken through a dual-filter well, there is no concern about damage to facilities from natural disasters such as severe weather or typhoons, and reduced pollution is expected because the seashore sand layer acts as a filter. It can therefore serve as an alternative that addresses the environmental issues of existing seawater intake techniques, saves installation and damage-related maintenance expenses, and is economically attractive. The results of this study will be used as basic data for site demonstration tests applying the dual-filter well to riverbank filtration, and are expected to provide a standard for well design and construction for riverbank and seashore filtration techniques.
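A quick check of the unit conversion quoted above, from the proper pumping rate at the inflection point to the daily intake figure:

```python
# 2,843.3 L/min converted to m3/day: about 4,094 m3/day, consistent with the abstract.
proper_pumping_L_per_min = 2843.3
daily_m3 = proper_pumping_L_per_min * 60 * 24 / 1000
print(f"daily intake = {daily_m3:.1f} m3/day")
```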

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • A Convolutional Neural Network (ConvNet) is one class of powerful deep neural network that can analyze and learn hierarchies of visual features. The first such neural network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, however, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, which revived interest in neural networks. The success of Convolutional Neural Networks rests on two main factors: the emergence of advanced hardware (GPUs) for sufficient parallel computation, and the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, gathering a large-scale dataset to train a ConvNet is difficult and takes a lot of effort; moreover, even with a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a ConvNet pre-trained on a source dataset such as ImageNet is used to compute feed-forward activations of an image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, directly using the high-dimensional features extracted from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single-layer representation. Our primary pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and the activation features of its three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple-ConvNet-layer representation, which carries more information about the image; the concatenated representation has 9,192 (4096+4096+1000) feature dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in a third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning is improved. To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple-ConvNet-layer representations against single-layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrate the importance of feature selection for the multiple-layer representation. Moreover, the proposed approach achieved 75.6% accuracy compared to 73.9% achieved by the FC7 layer on Caltech-256, 73.1% compared to 69.2% achieved by the FC8 layer on VOC07, and 52.2% compared to 48.7% achieved by the FC7 layer on SUN397. We also showed that our approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively, compared to existing work.
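The three-step pipeline above (feed-forward through a pre-trained AlexNet, concatenate the fully connected layer activations, reduce with PCA, then classify) can be sketched as follows. The random tensors, the PCA dimensionality, and the choice of a linear SVM classifier are placeholders, not the paper's exact setup.

```python
# Hedged sketch: concatenate AlexNet fc6/fc7/fc8 activations (4096+4096+1000 = 9192
# dims), select salient components with PCA, and train a linear classifier.
import numpy as np
import torch
from torchvision.models import alexnet, AlexNet_Weights
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

model = alexnet(weights=AlexNet_Weights.DEFAULT).eval()

@torch.no_grad()
def fc_features(images: torch.Tensor) -> np.ndarray:
    """Concatenate activations taken after fc6, fc7, and fc8 of AlexNet."""
    x = model.avgpool(model.features(images)).flatten(1)
    feats = []
    for i, layer in enumerate(model.classifier):
        x = layer(x)
        if i in (2, 5, 6):            # fc6 (post-ReLU), fc7 (post-ReLU), fc8 outputs
            feats.append(x)
    return torch.cat(feats, dim=1).numpy()   # shape: (N, 9192)

# Placeholder batch standing in for a real dataset such as Caltech-256.
images = torch.randn(32, 3, 224, 224)
labels = np.random.randint(0, 4, size=32)

X = PCA(n_components=16).fit_transform(fc_features(images))  # keep salient features
clf = LinearSVC().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```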

Exposure Assessments of Environmental Contaminants in Ansim Briquette Fuel Complex, Daegu(II) - Concentration distribution and exposure characteristics of TSP, PM10, PM2.5, and heavy metals - (대구 안심연료단지 환경오염물질 노출 평가(II) - TSP, PM10, PM2.5 및 중금속 농도분포 및 노출특성 -)

  • Jung, Jong-Hyeon;Phee, Young-Gyu;Lee, Jun-Jung;Oh, In-Bo;Shon, Byung-Hyun;Lee, Hyung-Don;Yoon, Mi-Ra;Kim, Geun-Bae;Yu, Seung-do;Min, Young-Sun;Lee, Kwan;Lim, Hyun-Sul
    • Journal of Korean Society of Occupational and Environmental Hygiene
    • /
    • v.25 no.3
    • /
    • pp.380-391
    • /
    • 2015
  • Objectives: The objective of this study is to assess airborne particulate matter pollution and its effect on the health of residents living near the Ansim Briquette Fuel Complex and its vicinity. This study measured and analyzed the concentrations of TSP, PM10, PM2.5, and heavy metals, which influence environmental and respiratory disease, in the Ansim Briquette Fuel Complex, Daegu, Korea. Methods: We analyzed various environmental pollutants, such as particulate matter and heavy metals, from the Ansim Briquette Fuel Complex that adversely affect local residents' health. In particular, we examined the concentration distribution and exposure characteristics of TSP, PM10, and PM2.5 among particulate matter, and of the heavy metals Cd, Cr, Cu, Mn, Ni, Pb, Fe, Zn, and Mg. The official test method on air pollution in Korea was used for the analysis of particulate matter and heavy metals in the atmosphere, and its large-capacity air sampling method was applied for sampling heavy metals. In addition, we evaluated seasonal pollutant concentrations at each residential point in the Ansim Briquette Fuel Complex and the surrounding area. Air pollutants were sampled from August 11, 2013 to February 21, 2014, and seasonal concentrations (summer, autumn, and winter) were measured and analyzed. Results: The average concentrations of TSP, PM10, and PM2.5 in the direct influence area of the Ansim Briquette Fuel Complex were 1.7, 1.4, and 1.9 times higher than in the reference region. In the seasonal analysis of particulate matter in the four direct influence areas and the reference area, winter concentrations were generally somewhat higher than those in summer and autumn. The average concentrations of Cd, Cr, Mn, Ni, Pb, Fe, and Zn in the direct influence area were 0.0008±0.0004 μg/Sm3, 0.0141±0.0163 μg/Sm3, 0.0248±0.0059 μg/Sm3, 0.0026±0.0011 μg/Sm3, 0.0272±0.0084 μg/Sm3, 0.4855±0.1862 μg/Sm3, and 0.3068±0.0631 μg/Sm3, respectively, which were 1.9, 3.6, 2.1, 1.9, 1.4, 2.6, and 1.2 times higher than in the reference area. Continuous monitoring and management are required for some heavy metals such as Cr and Ni. Moreover, the winter concentrations of particulate matter in the direct influence area were generally higher than those in summer and autumn, and the average concentrations of TSP, PM10, and PM2.5 were 1.5 to 2.0, 1.2 to 1.8, and 1.1 to 2.3 times higher than in the reference area, respectively. In the seasonal results, TSP, PM10, PM2.5, and heavy metal concentrations in the direct influence area were higher than in the reference area, and the concentrations at station C were high compared with the other areas. Conclusions: Some particulate matter and heavy metal concentrations were relatively high, which helps in understanding the level of environmental pollution and the health effects in the area surrounding the Ansim Briquette Fuel Complex. The concentrations of some heavy metals emitted from the direct influence area were relatively higher than in the reference area; in particular, the average heavy metal concentrations in this study were higher than the 7-year averages from the heavy metal air quality monitoring station in the Daegu metropolitan region. The residents near the Ansim Briquette Fuel Complex may therefore be exposed to the pollutants (TSP, PM10, PM2.5, heavy metals, etc.) emitted from the factories in the complex.