• Title/Summary/Keyword: optimal classification method


Bankruptcy prediction using an improved bagging ensemble (개선된 배깅 앙상블을 활용한 기업부도예측)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.121-139 / 2014
  • Predicting corporate failure has long been an important topic in accounting and finance. Because the costs associated with bankruptcy are high, the accuracy of bankruptcy prediction is of great importance to financial institutions, and many researchers have addressed the topic over the past three decades. The current research uses ensemble models to improve the performance of bankruptcy prediction. Ensemble classification combines individually trained classifiers to obtain predictions more accurate than any single model, and ensemble techniques have been shown to be very useful for improving the generalization ability of a classifier. Bagging is the most commonly used method for constructing ensemble classifiers: different training subsets are randomly drawn, with replacement, from the original training dataset, and base classifiers are trained on the different bootstrap samples. Instance selection retains critical instances while removing irrelevant or harmful ones from the original set. Although instance selection and bagging are both well known in data mining, few studies have dealt with their integration. This study proposes an improved bagging ensemble based on instance selection using genetic algorithms (GA) for improving the performance of SVM. GA is an efficient optimization procedure based on the theory of natural selection and evolution: it applies the idea of survival of the fittest by progressively accepting better solutions, searching by maintaining a population of solutions from which better solutions are created rather than by making incremental changes to a single solution. The initial population is generated randomly and evolves into the next generation through genetic operators such as selection, crossover, and mutation, with string-coded solutions evaluated by a fitness function. The proposed model consists of two phases: GA-based instance selection and instance-based bagging. In the first phase, GA selects an optimal instance subset that serves as the input data of the bagging model; the chromosome is encoded as a binary string representing the instance subset. The population size was set to 100, the maximum number of generations to 150, and the crossover and mutation rates to 0.7 and 0.1, respectively. The prediction accuracy of the model was used as the fitness function: an SVM is trained on the training set using the selected instance subset, and its prediction accuracy on the test set serves as the fitness value in order to avoid overfitting. In the second phase, the optimal instance subset selected in the first phase is used as the input data of the bagging model, with SVM as the base classifier and majority voting as the combining method. This study applies the proposed model to the bankruptcy prediction problem using a real dataset of Korean companies containing 1,832 externally non-audited firms: 916 bankruptcy cases and 916 non-bankruptcy cases. Financial ratios categorized as stability, profitability, growth, activity, and cash flow were investigated through a literature review and basic statistical methods, and 8 financial ratios were selected as the final input variables.
We separated the data into three subsets: training, test, and validation sets. We compared the proposed model with several comparative models, including a simple individual SVM model, a simple bagging model, and an instance-selection-based SVM model, and used McNemar tests to examine whether the proposed model significantly outperforms the other models. The experimental results show that the proposed model outperforms the other models.
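The two-phase design above maps naturally onto a short script. The following is a minimal sketch, not the paper's implementation: it uses scikit-learn's SVC and BaggingClassifier, a synthetic dataset in place of the Korean bankruptcy data, and deliberately small GA settings for brevity (the paper uses a population of 100, 150 generations, crossover 0.7, and mutation 0.1).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in data: the paper uses 1,832 Korean firms with 8 financial ratios.
X, y = make_classification(n_samples=600, n_features=8, random_state=0)
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_te, X_val, y_te, y_val = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

def fitness(mask):
    # Fitness = test-set accuracy of an SVM trained only on the selected
    # instances (held-out accuracy discourages overfitting, as in the paper).
    if mask.sum() < 10 or len(np.unique(y_tr[mask])) < 2:
        return 0.0
    return SVC().fit(X_tr[mask], y_tr[mask]).score(X_te, y_te)

# Phase 1: GA search over binary instance-selection masks.
pop_size, n_gen, p_cross, p_mut = 30, 20, 0.7, 0.1
pop = rng.random((pop_size, len(X_tr))) < 0.5
for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    # Binary tournament selection.
    parents = pop[[max(rng.choice(pop_size, 2), key=lambda i: scores[i])
                   for _ in range(pop_size)]]
    children = parents.copy()
    for i in range(0, pop_size - 1, 2):              # one-point crossover
        if rng.random() < p_cross:
            cut = int(rng.integers(1, len(X_tr)))
            children[i, cut:], children[i + 1, cut:] = \
                parents[i + 1, cut:].copy(), parents[i, cut:].copy()
    children ^= rng.random(children.shape) < p_mut   # bit-flip mutation
    pop = children
best = pop[int(np.argmax([fitness(ind) for ind in pop]))]

# Phase 2: bagging of SVM base classifiers on the selected subset;
# BaggingClassifier combines hard SVC predictions by majority vote.
bag = BaggingClassifier(estimator=SVC(), n_estimators=10, random_state=0)
bag.fit(X_tr[best], y_tr[best])
print("validation accuracy:", bag.score(X_val, y_val))
```

Keeping a separate validation split, as the paper does, prevents the GA's fitness signal (test-set accuracy) from leaking into the final model comparison.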

An Analysis of the Locational Selection Factors of the Small- and Medium-sized Hospitals Using the AHP : Centered on the Spine and Joint Hospitals (AHP를 이용한 중·소 병원 입지선택요인 분석 : 척추·관절 병원중심으로)

  • Kim, Duck Ki;Shim, Gyo-Eon
    • The Journal of the Korea Contents Association / v.18 no.5 / pp.191-214 / 2018
  • This research empirically analyzed the selection factors, and in particular the locational selection factors, for establishing small- and medium-sized hospitals, whose importance has grown with rapid socio-economic change. By prioritizing the evaluation factors according to their importance and identifying, through a classification into real-estate locational and non-locational factors, which factors influence the competitiveness of existing small- and medium-sized hospitals, the study aims to provide basic data for the opening strategies of medical suppliers preparing to open new small- and medium-sized hospitals in locations suited to these factors. Based on preceding research and case studies, 28 evaluation factors were derived covering the level of medical treatment, medical services, hospital accessibility, hospital convenience, and the physical environment. Through interviews with related experts, these were organized into 5 upper-level evaluation factors (medical level, medical service, hospital accessibility, hospital convenience, and physical environment), with a total of 28 lower-level evaluation factors beneath them. An AHP questionnaire survey was then carried out with 200 medical experts to determine the optimal locational factors. In the AHP results, consistent with the preceding case studies, importance ranked in the order of medical level, medical services, hospital accessibility, physical environment, and convenience, while factors related to hospital facilities ranked low. These results can serve as a basis for decision-making on the selection of locations for small- and medium-sized hospitals in the future.
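AHP's core computation, deriving priority weights from a pairwise comparison matrix and checking its consistency, is easy to sketch. The criteria below match the study's five upper-level factors, but the comparison values are invented placeholders for illustration, not the aggregated judgments of the 200-expert survey.

```python
import numpy as np

criteria = ["medical level", "medical service", "accessibility",
            "convenience", "physical environment"]

# A[i, j]: how much more important criterion i is than j (Saaty's 1-9 scale).
# These judgments are placeholders, not the survey's actual matrix.
A = np.array([
    [1,   2,   3,   5,   4],
    [1/2, 1,   2,   4,   3],
    [1/3, 1/2, 1,   3,   2],
    [1/5, 1/4, 1/3, 1,   1/2],
    [1/4, 1/3, 1/2, 2,   1],
])

# Priority weights = normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency check: CR = CI / RI, conventionally acceptable below 0.1.
n = len(A)
ci = (eigvals[k].real - n) / (n - 1)
ri = 1.12                      # Saaty's random index for n = 5
print(f"consistency ratio: {ci / ri:.3f}")
for name, weight in sorted(zip(criteria, w), key=lambda t: -t[1]):
    print(f"{name}: {weight:.3f}")
```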

Automatic Quality Evaluation with Completeness and Succinctness for Text Summarization (완전성과 간결성을 고려한 텍스트 요약 품질의 자동 평가 기법)

  • Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.125-148 / 2018
  • Recently, as the demand for big data analysis increases, cases of analyzing unstructured data and using the results are also increasing. Among the various types of unstructured data, text is used as a means of communicating information in almost all fields, and it attracts many analysts because it is very large in volume and relatively easy to collect compared to other unstructured and structured data. Among the various text analysis applications, active research topics include document classification, which classifies documents into predetermined categories; topic modeling, which extracts major topics from a large number of documents; sentiment analysis or opinion mining, which identifies emotions or opinions contained in texts; and text summarization, which summarizes the main contents of one or several documents. In particular, text summarization is actively applied in business through news summary services, privacy policy summary services, etc. In academia, much research has followed either the extraction approach, which selectively provides the main elements of the document, or the abstraction approach, which extracts elements of the document and composes new sentences by combining them. However, techniques for evaluating the quality of automatically summarized documents have not progressed as far as automatic text summarization itself. Most existing studies on summarization quality evaluation manually summarize documents, use these as reference documents, and measure the similarity between the automatic summary and the reference document. Specifically, automatic summarization is performed on the full text through various techniques, and quality is measured by comparison with the reference document, which is an ideal summary. Reference documents are most commonly produced by manual summarization, in which a person creates an ideal summary by hand. Since this method requires human intervention in preparing the summary, it takes much time and cost, and the evaluation result may differ depending on who writes the summary. To overcome these limitations, attempts have been made to measure the quality of summary documents without human intervention. A representative recent attempt reduces the size of the full text and measures the similarity between the reduced full text and the automatic summary: the more often the frequent terms of the full text appear in the summary, the better the summary is judged to be. However, since summarization essentially means condensing a large amount of content while minimizing omissions, a "good summary" based only on term frequency does not always mean a good summary in this essential sense. To overcome the limitations of these previous evaluation studies, this study proposes an automatic quality evaluation method for text summarization based on the essential meaning of summarization. Specifically, succinctness is defined as an element indicating how little content is duplicated among the sentences of the summary, and completeness as an element indicating how little of the original content is missing from the summary.
In this paper, we propose a method for automatic quality evaluation of text summarization based on the concepts of succinctness and completeness. To evaluate the practical applicability of the proposed methodology, 29,671 sentences were extracted from TripAdvisor's hotel reviews, the reviews were summarized for each hotel, and the results of experiments evaluating summary quality according to the proposed methodology are presented. The paper also provides a way to integrate completeness and succinctness, which stand in a trade-off relationship, into an F-score, and proposes a method for performing optimal summarization by changing the sentence-similarity threshold.
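The following is a rough sketch of how such an evaluation could be computed, assuming TF-IDF cosine similarity as the sentence-similarity measure and a harmonic mean for the F-score; the threshold value and the exact scoring details are illustrative assumptions, not the paper's formulas.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def evaluate_summary(full_sents, summary_sents, threshold=0.3):
    vec = TfidfVectorizer().fit(full_sents + summary_sents)
    F = vec.transform(full_sents)
    S = vec.transform(summary_sents)

    # Completeness: fraction of full-text sentences covered by at least
    # one summary sentence (similarity above the threshold).
    completeness = (cosine_similarity(F, S).max(axis=1) >= threshold).mean()

    # Succinctness: 1 minus the fraction of summary sentences that
    # duplicate another summary sentence.
    sim = cosine_similarity(S, S)
    np.fill_diagonal(sim, 0.0)
    succinctness = 1.0 - (sim.max(axis=1) >= threshold).mean()

    # Harmonic mean integrates the two trade-off measures into one score.
    f_score = 2 * completeness * succinctness / (completeness + succinctness + 1e-9)
    return completeness, succinctness, f_score

full = ["The room was clean.", "Staff were friendly.", "Breakfast was great.",
        "The room was very clean and tidy.", "The hotel is near the beach."]
summary = ["The room was clean.", "Staff were friendly."]
print(evaluate_summary(full, summary))
```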

The Effect of Data Size on the k-NN Predictability: Application to Samsung Electronics Stock Market Prediction (데이터 크기에 따른 k-NN의 예측력 연구: 삼성전자주가를 사례로)

  • Chun, Se-Hak
    • Journal of Intelligence and Information Systems / v.25 no.3 / pp.239-251 / 2019
  • Statistical methods such as moving averages, Kalman filtering, exponential smoothing, regression analysis, and ARIMA (autoregressive integrated moving average) have been used for stock market prediction, but they have not produced superior performance. In recent years, machine learning techniques have been widely used for stock market prediction, including artificial neural networks, SVM, and genetic algorithms. In particular, a case-based reasoning method known as k-nearest neighbor (k-NN) is also widely used for stock price prediction. Case-based reasoning retrieves several similar cases from previous cases when a new problem occurs and combines the class labels of the similar cases to create a classification for the new problem. However, case-based reasoning has some problems. First, it tends to search for a fixed number of neighbors in the observation space, always selecting the same number of neighbors rather than the best similar neighbors for the target case; it may therefore take more cases into account than are actually applicable. Second, it may select neighbors that are far away from the target case. Thus, case-based reasoning does not guarantee an optimal pseudo-neighborhood for various target cases, and predictability can degrade due to deviation from the desired similar neighbors. This paper examines how the size of the learning data affects stock price predictability with k-NN and compares the predictability of k-NN with that of the random walk model according to the size of the learning data and the number of neighbors. Samsung Electronics stock prices were predicted using two learning datasets, with four variables for predicting the next day's closing price: opening value, daily high, daily low, and daily close. In the first experiment, the smaller learning set, from January 1, 2015 to December 31, 2017, was used; in the second experiment, the larger learning set, from January 1, 2000 to December 31, 2017, was used. The test data covers January 1, 2018 to August 31, 2018 for both experiments. In the first experiment, with the smaller learning data, the mean absolute percentage error (MAPE) was 1.3497 for the random walk model and 1.3570 for k-NN. In the second experiment, with the larger learning data, the MAPE was 1.3497 for the random walk model and 1.2928 for k-NN. These results show that predictive power is higher when more learning data are used: k-NN generally produces better predictive power than the random walk model for larger learning datasets and does not when the learning dataset is relatively small. Future studies should consider macroeconomic variables related to stock price forecasting in addition to price variables such as opening, low, high, and closing prices. To produce better results, it is also recommended that k-NN find nearest neighbors using a second-step filtering method that considers fundamental economic variables as well as a sufficient amount of learning data.
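A minimal sketch of this experimental setup follows, with a synthetic random-walk price series standing in for the Samsung Electronics data and scikit-learn's KNeighborsRegressor as the k-NN model; the inputs follow the paper's four variables (open, high, low, close), but all values here are illustrative.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)

# Synthetic OHLC series standing in for the real stock data.
close = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1000)))
high, low = close * 1.01, close * 0.99
open_ = np.roll(close, 1)
open_[0] = close[0]

X = np.column_stack([open_, high, low, close])[:-1]  # today's OHLC
y = close[1:]                                        # next day's close
split = 800
X_tr, y_tr, X_te, y_te = X[:split], y[:split], X[split:], y[split:]

def mape(actual, pred):
    # Mean absolute percentage error, the metric reported in the paper.
    return np.mean(np.abs((actual - pred) / actual)) * 100

knn = KNeighborsRegressor(n_neighbors=5).fit(X_tr, y_tr)
print("k-NN MAPE:       ", mape(y_te, knn.predict(X_te)))

# Random walk baseline: tomorrow's close is predicted as today's close.
print("random walk MAPE:", mape(y_te, X_te[:, 3]))
```

Varying `split` (the learning-data size) and `n_neighbors` reproduces the shape of the paper's comparison between data sizes and neighbor counts.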

A Study on the Structural Reinforcement of the Modified Caisson Floating Dock (개조된 케이슨 플로팅 도크의 구조 보강에 대한 연구)

  • Kim, Hong-Jo;Seo, Kwang-Cheol;Park, Joo-Shin
    • Journal of the Korean Society of Marine Environment & Safety / v.27 no.1 / pp.172-178 / 2021
  • In the ship repair market, interest in maintenance and repair is steadily increasing due to stricter regulations on preventing environmental pollution caused by ships and stricter safety standards for ship structures. Reflecting this trend, repair requests from foreign shipping companies to repair shipyards on the southwest coast are increasing. However, because most repair shipbuilders in the southwestern area are small and medium-sized companies, it is difficult to achieve integrated synergy among them. Moreover, their infrastructure is not integrated, so using the infrastructure jointly is a challenge, which hinders the revitalization of the repair shipbuilding industry. Floating docks are indispensable for operating the repair shipbuilding business, and most are operated after importing aging caisson docks from overseas and renovating and repairing them. However, these docks typically have service lives of more than 30 years, and no structural inspection standard exists, leaving them vulnerable in terms of safety. In this study, the finite element analysis program ANSYS was used to evaluate the structural safety of a modified caisson dock and to derive structural reinforcement schemes for the problems identified. Classification regulations exist for floating docks, but their provisions concerning structural strength are insufficient and difficult to apply; these gaps were filled through a detailed structural FE analysis. Candidate reinforcement plans, reinforcing the pontoon deck and reinforcing the side tanks, were considered in light of the conditions at the repair shipyard. Based on the structural analysis, the final plan selected was to reinforce the side wing tanks, and the actual structure was fabricated to reflect this reinforcement plan. Our results can be used as reference data for improving the structural strength of similar facilities, and we believe this method allows the optimal solution to be found quickly during renovation and repair.

A Study on Environmental Standards of School Building (교사환경기준에 관한 연구)

  • Hong, Seok-Pyo;Park, Young-Soo
    • The Journal of Korean Society for School & Community Health Education / v.1 no.1 / pp.11-43 / 2000
  • The purpose of this study was to grasp, by analyzing previous research, the present status of the environment of school buildings (ESB); to review the literature on each environmental element; to select normative standards of ESB through a comparative analysis of the ESB standards of Korea, the United States, and Japan; to clarify the points at issue in the Regulation of Construction & Facility Management for Elementary and Secondary Schools in Korea; and to suggest an alternative preliminary standard of ESB. To carry out this research, the following were required: 1. to investigate the existing status of ESB, 2. to make a comparative analysis of the ESB standards of each country, 3. to suggest a normative preliminary standard of ESB, 4. to analyze the controversial points of the Korean ESB standard, 5. to suggest an alternative preliminary standard of ESB. The conclusions were as follows: 1. Synthesizing previous research on the present status of ESB, the lighting, indoor air, and noise environments all appeared to be in poor condition. 2. In the comparative analysis of the ESB standards of Korea, Japan, and the United States, Korea did not properly present the factors of the lighting and indoor air environments; Japan, in the lighting environment, did not present a standard on natural lighting or factors on brightness; and the USA thoroughly presented the essential factors of each environment. In the comparison of the standards on each factor, the standard levels prescribed in Korea were less appropriate than those of the USA and Japan, whereas the levels prescribed in the USA and Japan were mostly similar to the levels found in the literature. 3. Using the normative standards on school building environment factors selected in this study, the controversial points of the Korean ESB standard were analyzed, and the result was used to suggest a new preliminary standard of ESB. 4. The analysis of the controversial points showed that the Korean ESB standard should be established on the basis of the School Health Act and be concretely presented in the School Health Regulation and School Health Rule. The factors of each environment were improperly presented in the existing Korean standard; moreover, the standard was inferior to the literature and to the standards of the USA and Japan, and was inadequate for maintaining a comfortable learning environment. 5. The preliminary standard of ESB suggested by this study is as follows: 1) The new preliminary standard of ESB is divided into lighting environment, indoor air environment, noise environment, and odor environment; for this classification, reasonable factors and standards should be established, and ways of controlling each standard and countermeasures should be considered. 2) In the lighting environment, the factors of natural lighting are daylight rate, brightness, and glare. For each factor, the daylight rate should secure a mean of 5% and a minimum of 2%; the ratio of maximum to minimum illumination should be under 10:1; and glare should not arise from reflectors outside the classroom. The factors of artificial lighting are illumination, brightness, and glare: illumination should be 750 lux or more, the brightness ratio should be under 3:1, and glare should not occur. The optimal reflection rates (%) of the colors and facilities of the classroom, which influence the lighting environment, should also be considered. 3) The indoor air environment factors comprise the thermal factors of (1) room temperature, (2) relative humidity, (3) room air movement, and (4) radiant heat; the harmful gases (5) CO and (6) CO2, produced by heating fuels such as oval briquettes, firewood, and charcoal used in most classrooms; and (7) dust. The standards for each factor are: room temperature 16-26 °C (summer: effective temperature 18.9-23.8 °C; winter: effective temperature 16.7-21.7 °C); relative humidity 30-80%; room air movement under 0.5 m/sec; radiant heat under a 5 °C gap between dry-bulb and wet-bulb temperature; below 1,000 ppm of CO2 and below 10 ppm of CO; dust below 0.10 mg/m³ in indoor air; and a ventilation standard (CO2) for purifying indoor air of once per 6 minutes (about 7 times per 40 minutes) in an airtight classroom. 4) For the noise environment, the noise level should be under 40 dB(A), and the noise measuring method and countermeasures should be considered. 5) For the odor environment, the odor level by the physical method should be under 2 degrees, and the inspection method and countermeasures should be considered.


A STUDY ON THE PHYSICAL PROPERTIES OF A COMPOSITE RESIN INLAY BY CURING METHODS (중합방법에 따른 복합레진 인레이의 물리적 성질에 관한 연구)

  • Cho, Sung-A;Cho, Young-Gon;Moon, Joo-Hoon;Oh, Haeng-Jin
    • Restorative Dentistry and Endodontics / v.22 no.1 / pp.254-266 / 1997
  • This study was conducted to assess the usefulness of an argon laser for curing composite resin, to verify the polymerizing effect of heat treatment on composite resin inlays, and to identify the curing method giving optimal physical properties for composite resin inlays. Four light curing units and one heat curing unit were used: Visilux II™ (a visible light gun), SPECTRUM™ (an argon laser), Unilux AC® and Astron XL® (visible light curing units), and CRC-100™ (for heat treatment). Compared with a control group, the experimental groups were divided into five as follows: Control group: light curing (Visilux II™); Experimental group 1: light curing (Visilux II™) + light curing (Unilux AC®); Experimental group 2: light curing (Visilux II™) + light curing (Astron XL®) + heat treatment (CRC-100™); Experimental group 3: laser curing (SPECTRUM™); Experimental group 4: laser curing (SPECTRUM™) + light curing (Unilux AC®); Experimental group 5: laser curing (SPECTRUM™) + light curing (Astron XL®) + heat treatment (CRC-100™). According to this classification, samples were made by curing Clearfil CR Inlay®, a composite resin for inlays, in a separable cylindrical metal mold with a polycarbonate plate. The compressive strength, diametral tensile strength, and surface microhardness of each sample were then measured and compared. The results were as follows: 1. Among the groups, group 5 showed the highest compressive strength, 157.50 ± 10.24 kgf, and the control group the lowest, 103.93 ± 21.93 kgf; the control group differed significantly from the experimental groups (p<0.001). Group 2, which was heat treated, showed higher compressive strength than group 1, which was not, with a significant difference between them (p<0.001); likewise, heat-treated group 5 showed higher compressive strength than group 4, with a significant difference (p<0.001). 2. Group 5 showed the highest diametral tensile strength, 95.84 ± 1.97 kgf, and the control group the lowest, 81.80 ± 2.17 kgf. The control group, cured with visible light, showed higher diametral tensile strength than group 3, cured with the argon laser. Heat-treated group 2 showed higher diametral tensile strength than group 1, with a significant difference (p<0.001), and heat-treated group 5 higher than group 4, with a significant difference (p<0.001). 3. Group 5 showed the highest microhardness of the top surface, 148.42 ± 9.57, and the control group the lowest, 111.43 ± 7.63. For the bottom surface, group 5 showed the highest value, 146.19 ± 7.62, and the control group the lowest, 104.03 ± 11.05. Group 3, cured with the argon laser, showed higher microhardness than the control group, which was cured only with a visible light gun. Heat-treated group 2 showed higher microhardness than group 1, with a significant difference (p<0.001), and heat-treated group 5 higher than group 4, with a significant difference (p<0.001). 4. From these results, we conclude that an argon laser can be a useful unit for curing composite resin and that heat treatment can improve the physical properties of composite resin inlays.


Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.205-225 / 2018
  • A Convolutional Neural Network (ConvNet) is one class of powerful Deep Neural Network that can analyze and learn hierarchies of visual features. An early neural network of this kind, the Neocognitron, was introduced in the 1980s. At that time, neural networks were not broadly used in industry or academia because of the shortage of large-scale datasets and low computational power. A few decades later, in 2012, Krizhevsky achieved a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network, and that breakthrough revived interest in neural networks. The success of Convolutional Neural Networks rests on two main factors: the emergence of advanced hardware (GPUs) for sufficient parallel computation, and the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors: for most domains it is difficult and laborious to gather a large-scale dataset to train a ConvNet, and even with such a dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be overcome with transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases: using the ConvNet as a fixed feature extractor, and fine-tuning the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (e.g., trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, applying high-dimensional features extracted directly from multiple ConvNet layers remains a challenging problem. We observe that features extracted from multiple ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on this observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Our primary pipeline has three steps. First, images from the target task are fed forward through a pre-trained AlexNet, and activation features from its three fully connected layers are extracted. Second, the activation features of the three layers are concatenated to obtain a multiple ConvNet layer representation, since it carries more information about the image; with the three fully connected layer features concatenated, the resulting image representation has 9,192 (4,096 + 4,096 + 1,000) feature dimensions. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in the third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning improves.
To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple ConvNet layer representations against single ConvNet layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrated the importance of feature selection for the multiple ConvNet layer representation. Moreover, our approach achieved 75.6% accuracy compared to the 73.9% achieved by the FC7 layer on Caltech-256, 73.1% compared to the 69.2% achieved by the FC8 layer on VOC07, and 52.2% compared to the 48.7% achieved by the FC7 layer on SUN397. We also showed that our approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively, compared to existing work.
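For concreteness, here is a minimal sketch of the extract-concatenate-reduce pipeline using torchvision's pre-trained AlexNet; the layer indices follow torchvision's AlexNet definition, while the PCA dimensionality, the linear SVM classifier, and the placeholder image lists are illustrative assumptions rather than the paper's exact setup.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

# Pre-trained AlexNet used as a fixed feature extractor (no fine-tuning).
model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()

# Hook the three fully connected layers: fc6 = classifier[1] (4096-d),
# fc7 = classifier[4] (4096-d), fc8 = classifier[6] (1000-d).
acts = {}
for name, idx in [("fc6", 1), ("fc7", 4), ("fc8", 6)]:
    model.classifier[idx].register_forward_hook(
        lambda mod, inp, out, name=name: acts.__setitem__(name, out.detach()))

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def extract(images):
    """PIL images -> (n, 9192) feature matrix: fc6 | fc7 | fc8 concatenated."""
    batch = torch.stack([preprocess(im) for im in images])
    with torch.no_grad():
        model(batch)
    return torch.cat([acts["fc6"], acts["fc7"], acts["fc8"]], dim=1).numpy()

# Hypothetical usage with a labeled image collection (names are placeholders):
# feats_tr, feats_te = extract(train_images), extract(test_images)
# pca = PCA(n_components=512).fit(feats_tr)   # keep the salient components
# clf = LinearSVC().fit(pca.transform(feats_tr), train_labels)
# print("accuracy:", clf.score(pca.transform(feats_te), test_labels))
```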