• Title/Summary/Keyword: weights


Effects of Molecular Weight of Polyethylene Glycol on the Dimensional Stabilization of Wood (Polyethylene Glycol의 분자량(分子量)이 목재(木材)의 치수 안정화(安定化)에 미치는 영향(影響))

  • Cheon, Cheol;Oh, Joung Soo
    • Journal of Korean Society of Forest Science
    • /
    • v.71 no.1
    • /
    • pp.14-21
    • /
    • 1985
  • This study was carried out to prevent the devaluation of wood and wood products caused by anisotropy, hygroscopicity, shrinkage and swelling - properties unique to wood - and to improve the utility of wood by emphasizing its naturally beautiful figure, through the development of dimensional stabilization techniques using PEG, which is cheap, non-toxic and easy to impregnate. The effects of PEG molecular weight (200, 400, 600, 1000, 1500, 2000, 4000, 6000) and species (Pinus densiflora S. et Z., Larix leptolepis Gordon, Cryptomeria japonica D. Don, Cornus controversa Hemsl., Quercus variabilis Blume, Prunus sargentii Rehder) were examined. The results were as follows: 1) PEG loading showed its maximum value (137.22%, Pinus densiflora, PEG 400), while the other treatments showed a relatively slow decrease; the lower the specific gravity, the higher the polymer loading. 2) The bulking coefficient showed no particular correlation with specific gravity and, for the most part, reached its maximum in PEG 600, except that the bulking coefficient of Quercus variabilis was distributed in the range of 12-18% over PEG 400-2000. In general, the bulking coefficient of hardwoods was higher than that of softwoods. 3) Although there were some exceptions among species, volumetric swelling reduction was greatest in PEG 400: the value for Cryptomeria japonica was the greatest at 95.0%, and all others except Prunus sargentii exceeded 80%, while volumetric swelling reduction fell below 70% as the molecular weight increased above 1000. 4) The relative effectiveness of hardwoods with high specific gravity was markedly higher than that of softwoods. In general, the relative effectiveness of low-molecular-weight PEG was superior to that of high-molecular-weight PEG, except that Quercus variabilis showed values above 1.6 over the whole molecular weight range; no significant difference remained once the molecular weight exceeded 4000. 5) From the above results, dimensional stabilization was more effective for hardwoods than for softwoods. Although volumetric swelling reduction was greatest at a molecular weight of 400, in view of polymer loading, bulking coefficient, reduction of swelling and relative effectiveness, it is desirable to use a mixture of PEGs with molecular weights in the range of 200-1500. For practical use, further study of the effect of the mixing ratio on the bulking coefficient, reduction of swelling and relative effectiveness is recommended.


Breeding and Development of the Tscherskia triton in Jeju Island (제주도 서식 비단털쥐(Tscherskia triton)의 번식과 발달)

  • Park, Jun-Ho;Oh, Hong-Shik
    • Korean Journal of Environment and Ecology
    • /
    • v.31 no.2
    • /
    • pp.152-165
    • /
    • 2017
  • The greater long-tailed hamster, Tscherskia triton, is widely distributed in northern China, Korea and adjacent areas of Russia. Apart from its distribution, the biological characteristics of this species related to life history, behavior and ecological influence have rarely been studied in Korea. This study was conducted to obtain biological information on breeding, growth and development as a basis for species-specific studies. A laboratory breeding programme was maintained for T. triton collected on Jeju Island from March 2015 to December 2016. The conception rate was 31.67%, and animals in large cages had a higher conception rate than those in small cages (56.7 vs. 6.7%). The gestation period was 22 ± 1.6 days (range 21-27 days), and litter size ranged from 2 to 7, with a mean of 4.26 ± 1.37. The minimum age at weaning was 19.2 ± 1.4 days (range 18-21 days). There were no significant sex differences in mean body weight or external body measurements at birth. However, significant sexual differences were found from weaning (21 days old) in head-and-body length and tail length (HBL at weaning, 106.50 ± 6.02 vs. 113.34 ± 4.72 mm, p<0.05; HBL at 4 months, 163.93 ± 5.42 vs. 182.83 ± 4.32 mm, p<0.05; TL at 4 months, 107.23 ± 3.25 vs. 93.95 ± 2.15 mm, p<0.05). Gompertz and logistic growth curves were fitted to data for body weight and the lengths of head and body, tail, ear and hind foot. In both growth curves, males exhibited greater asymptotic values (164.840 ± 7.453 vs. 182.830 ± 4.319 mm, p<0.0001; 163.936 ± 5.415 vs. 182.840 ± 4.333 mm, p<0.0001), faster maximum growth rates (1.351 ± 0.065 vs. 1.435 ± 0.085, p<0.05; 2.870 ± 0.253 vs. 3.211 ± 0.635, p<0.05), and a later age of maximum growth (5.121 ± 0.318 vs. 5.520 ± 0.333, p<0.05; 6.884 ± 0.336 vs. 7.503 ± 0.453, p<0.05) than females in head-and-body length. However, females exhibited greater asymptotic values (105.695 ± 5.938 vs. 94.150 ± 2.507 mm, p<0.001; 111.609 ± 14.881 vs. 93.960 ± 2.150 mm, p<0.05) and a greater length at inflection (60.306 ± 1.992 vs. 67.859 ± 1.330 mm, p<0.0001; 55.714 ± 7.458 vs. 46.975 ± 1.074 mm, p<0.05) than males in tail length. The growth rate constants for the morphological characters and weights of males and females were similar between the two types of growth curve. These results will serve as baseline data for studying the species specificity of T. triton.
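The growth-curve fitting described in this abstract can be sketched with SciPy's `curve_fit`. This is a minimal illustration on synthetic head-and-body-length data: the logistic parameterization (asymptote A, rate constant k, inflection age t0), the sampling schedule and all numeric values are assumptions for the sketch, not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, A, k, t0):
    # A: asymptotic size, k: growth rate constant, t0: age of maximum growth
    return A / (1 + np.exp(-k * (t - t0)))

def gompertz(t, A, b, k):
    # alternative form also fitted in the paper (not fitted in this sketch)
    return A * np.exp(-b * np.exp(-k * t))

# synthetic measurements: 40 ages over 16 weeks, true asymptote 183 mm
t = np.linspace(0, 16, 40)
rng = np.random.default_rng(0)
y = logistic(t, 183.0, 0.45, 5.5) + rng.normal(0, 1.0, t.size)

popt, _ = curve_fit(logistic, t, y, p0=[150.0, 0.5, 5.0])
A_hat, k_hat, t0_hat = popt
# maximum growth rate of the logistic curve is A*k/4, reached at t0
print(A_hat, k_hat, t0_hat)
```

Fitting the Gompertz form to the same data and comparing the estimated asymptotes and inflection ages is how the two curve types would be contrasted, as the abstract does for males and females.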

Studies on the Estimation of Growth Pattern Cut-up Parts in Four Broiler Strain in Growing Body Weight (육용계에 있어서 계통간 산육능력 및 체중증가에 따른 각 부위별 증가양상 추정에 관한 연구)

  • 양봉국;조병욱
    • Korean Journal of Poultry Science
    • /
    • v.17 no.3
    • /
    • pp.141-156
    • /
    • 1990
  • The experiments were conducted to investigate the possibility of improving the effectiveness of the existing method of estimating edible meat weight in live broiler chickens. A total of 360 birds, five males and five females from each line, were sacrificed at Trial 1 (body weight 900-1,000 g), Trial 2 (body weight 1,200-1,400 g), Trial 3 (body weight 1,600-1,700 g) and Trial 4 (body weight 2,000 g) in order to measure body weight, the edible meat weight of breast, thigh and drumsticks, and the various components of body weight. Each line was reared at the Poultry Breeding Farm, Seoul National University, from July 2 to September 13, 1987. The results were summarized as follows: 1. The average body weights of the lines (H, T, M, A) at 7 weeks of age were 2,150.5 ± 34.9, 2,133.0 ± 26.2, 1,960.0 ± 23.1 and 2,319.3 ± 27.9 g, respectively. The feed-to-gain ratios for 0 to 7 weeks of age were 2.55, 2.13, 2.08 and 2.03, respectively, and the viability of each line was 99.7, 99.7, 100.0 and 100.0%, respectively. It was noticed that line A chicks grew significantly heavier than T, H and M line chicks from 0 to 7 weeks of age. The regression coefficients of the growth curves were bA=1.015, bH=0.265, bM=0.950 and bT=0.242, respectively. 2. Among the body weight components, feather, abdominal fat, breast, and thigh and drumsticks increased in weight percentage as the birds grew older, while neck, head, giblets and inedible viscera decreased. No difference was apparent in shank, wings and back. 3. The weight percentages of breast in the edible part at Trial 4 were 19.2, 19.0, 19.9 and 19.0% for the respective lines; those of thigh and drumsticks were 23.1, 23.3, 22.8 and 23.0%. 4. The percentage meat yields from breast at Trial 4 were 77.2, 78.9, 73.5 and 74.8% in the H, T, M and A lines, respectively; for thigh and drumstick, values of 80.3, 78.4, 79.7 and 80.2% were obtained. These data indicate that percentage meat yield increases as the birds grow older. 5. The correlation coefficients between body weight and blood, head, shanks, breast and thigh-drumstick were high. The correlations between abdominal fat (%) and percentage of edible meat were extremely low at all times, but those between abdominal fat (%) and inedible viscera were significantly high.


Influence of Fertilizer Type on Physiological Responses during Vegetative Growth in 'Seolhyang' Strawberry (생리적 반응이 다른 비료 종류가 '설향' 딸기의 영양생장에 미치는 영향)

  • Lee, Hee Su;Jang, Hyun Ho;Choi, Jong Myung;Kim, Dae Young
    • Horticultural Science & Technology
    • /
    • v.33 no.1
    • /
    • pp.39-46
    • /
    • 2015
  • The objective of this research was to investigate the influence of the composition and concentration of fertilizer solutions on the vegetative growth and nutrient uptake of 'Seolhyang' strawberry. To achieve this, solutions of acid fertilizer (AF), neutral fertilizer (NF) and basic fertilizer (BF) were prepared at concentrations of 100 or 200 mg·L⁻¹ based on N and applied during the 100 days after transplanting. Changes in the chemical properties of the soil solution were analysed every two weeks, and crop growth measurements as well as tissue analyses for mineral contents were conducted 100 days after fertilization. Growth was highest in the BF treatments, followed by NF and AF. The heaviest fresh and dry weights among treatments were 151.3 and 37.8 g, respectively, with BF 200 mg·L⁻¹. In terms of tissue nutrient contents, the highest N, P and Na contents, of 3.08, 0.54 and 0.10%, respectively, were observed with NF 200 mg·L⁻¹. The highest K content was 2.83%, with AF 200 mg·L⁻¹, while the highest Ca and Mg contents were 0.98 and 0.42%, respectively, with BF 100 mg·L⁻¹. At equal fertilizer concentrations, the AF treatments had higher tissue Fe, Mn, Zn and Cu contents than NF or BF. During the 100 days after fertilization, the highest and lowest pH in the soil solution of the root media among all treatments were 6.67 in BF 100 mg·L⁻¹ and 4.69 in AF 200 mg·L⁻¹, respectively. The highest and lowest ECs were 5.132 dS·m⁻¹ in BF 200 mg·L⁻¹ and 1.448 dS·m⁻¹ in BF 100 mg·L⁻¹, respectively. For the macronutrient concentrations in the soil solution of the root media, the AF 200 mg·L⁻¹ treatment gave the highest NH₄ concentration, followed by NF 200 mg·L⁻¹ and AF 100 mg·L⁻¹. The K concentrations rose gradually after day 42 in all treatments. At equal fertilizer concentrations, the highest Ca and Mg concentrations were observed in AF, followed by NF and BF, until day 84 of fertilization. The BF treatments produced the highest NO₃ concentrations, followed by NF and AF. The trends in the change of PO₄ concentration were similar in all treatments. The SO₄ concentrations were higher in the AF treatments than in NF or BF until day 70 of fertilization. These results indicate that the composition of the fertilizer solution should be modified to contain more alkaline nutrients when 'Seolhyang' strawberry is cultivated in inert media and nutriculture systems.

Effects of Alginic Acid, Cellulose and Pectin Level on Bowel Function in Rats (알긴산과 셀룰로오스 및 펙틴 수준이 흰쥐의 대장기능에 미치는 영향)

  • 이형자
    • Journal of Nutrition and Health
    • /
    • v.30 no.5
    • /
    • pp.465-477
    • /
    • 1997
  • The purpose of this study was to determine the effects of the kind and amount of dietary fiber on bowel function in SD rats. α-cellulose was selected as an insoluble fiber source, and alginic acid and pectin as soluble fiber sources. The rat diets contained fiber concentrations of 1.0%, 3.6%, 6.0% and 10.0%. The rats were raised for 4 weeks, and food intake, body weight, food efficiency ratio, the weights of the liver, stomach and intestines, the lengths of the intestines, intestinal transit time, fecal pH, and the fecal contents of bile acid, Ca, Mg and P were measured. 1) Food intake was 15.75-31.00 g/day; it was highest in the 10.0% cellulose group and lowest in the 3.6% and 6.0% alginic acid groups (p<0.05). Body weight was 277.50-349.80 g; it was highest in the 1.0% pectin group and lowest in the 3.6% alginic acid, 6.0% cellulose and 10.0% pectin groups, differing with the content and kind of dietary fiber (p<0.01). The food efficiency ratio also differed significantly (p<0.01): the higher the dietary fiber content, the lower the calorie intake and the food efficiency ratio. 2) Transit time was 446.0-775.0 minutes and showed significant differences according to the content and kind of dietary fiber (p<0.01). It was long in the 1.0% cellulose and 1.0% pectin groups but short in the 10.0% alginic acid group; as dietary fiber content increased, intestinal transit time shortened. The length of the small intestine was 101.03-120.40 cm, with no differences regardless of the content and kind of fiber. The length of the large intestine was 20.92-25.42 cm, with significant differences according to the content and kind of fiber; high-fiber diets increased the length of the large intestine. 3) The weight of the liver was 8.68-10.96 g, with no differences according to the content and kind of fiber. The weight of the stomach was 1.28-1.74 g, with no differences by kind of dietary fiber, but it was highest in the 10.0% alginic acid group. The weight of the small intestine was 5.52-8.04 g, with no difference by kind of fiber; it was highest in the 10.0% alginic acid group and lowest in the 1.0% alginic acid group (p<0.05). The weight of the large intestine was 2.50-3.30 g, with no differences by kind of dietary fiber; it was heaviest in the 6.0% and 10.0% alginic acid groups and the 10.0% pectin group, with differences related to fiber content (p<0.05). 4) Fecal pH was 5.82-6.86; by kind of dietary fiber, the alginic acid group was high at 6.66, the cellulose group was 6.26, and the pectin group was low at 6.30. There were differences according to fiber content, but no consistency. The bile acid content was 6.25-34.77 μmol per 1 g of dry feces; by kind of dietary fiber, the alginic acid group was low at 12.91 μmol, the cellulose group was 18.64 μmol, and the pectin group was highest at 27.78 μmol (p<0.001). By fiber content, it was low in the 1.0% alginic acid group and high in the 3.6% pectin group (p<0.001). 5) The amount of feces was 1.00-5.10 g/day: 2.23 g/day in the alginic acid group, 2.75 g/day in the cellulose group and 1.82 g/day in the pectin group. By fiber content, it was high in the 10.0% cellulose group and low in the 1.0% alginic acid group, with significant differences according to dietary fiber; the higher the fiber content, the greater the fecal output in the alginic acid, cellulose and pectin groups. The fecal Ca content was 80.10-207.82 mg per 1 g of dry feces: 193.08 mg in the alginic acid group, 87.5 mg in the cellulose group and 138.16 mg in the pectin group. By fiber content, it was high in the 1.0% and 3.6% alginic acid groups and low in the 10.0% pectin group. The fecal Mg content was 19.15-44.72 mg per 1 g of dry feces: 35.33 mg in the alginic acid group, 23.60 mg in the cellulose group and 36.93 mg in the pectin group. By fiber content, it was high in the 1.0% pectin group and low in the 10.0% cellulose group. The fecal P content was 1.65-4.65 mg per 1 g of dry feces: 2.23 mg/g in the alginic acid group, 2.29 mg/g in the cellulose group and 4.08 mg/g in the pectin group. By fiber content, it was high in the 6.0% pectin group and low in the 6.0% alginic acid group, with significant differences among the measured values. The contents of Ca and Mg were higher in the soluble alginic acid and pectin groups than in the insoluble cellulose group. In summary, the higher the dietary fiber content, the lower the food efficiency ratio and the shorter the intestinal transit time, with increases in the length of the large intestine as well as in the weights of the stomach, small intestine and large intestine. With increasing dietary fiber content, the amounts of feces, Ca, Mg and P increased, but the length of the small intestine, the weight of the liver, fecal pH and the amount of bile acid showed no consistent differences.


Subject-Balanced Intelligent Text Summarization Scheme (주제 균형 지능형 텍스트 요약 기법)

  • Yun, Yeoil;Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.141-166
    • /
    • 2019
  • Recently, channels such as social media and SNS generate enormous amounts of data, and the portion represented as unstructured text has grown geometrically. Since it is impractical to read all of this text, it is important to access it rapidly and grasp its key points, and many studies on text summarization for handling huge volumes of text have been proposed. In particular, many summarization methods using machine learning and artificial intelligence algorithms have recently been proposed to generate summaries objectively and effectively, so-called "automatic summarization". However, almost all text summarization methods proposed to date construct the summary around the most frequent content in the original documents. Such summaries tend to omit low-weight subjects that are mentioned less often in the original text. If a summary covers only the major subjects, bias occurs and information is lost, making it hard to ascertain every subject a document contains. To avoid this bias, one can summarize with balance across the topics of a document so that all subjects are covered, but an unbalanced distribution across subjects may still remain. To retain the balance of subjects in a summary, it is necessary to consider the proportion of each subject in the original documents and to allocate portions to the subjects equally, so that even sentences on minor subjects are sufficiently included. In this study, we propose a "subject-balanced" text summarization method that preserves the balance among all subjects and minimizes the omission of low-frequency subjects. For subject-balanced summaries, we use two summary evaluation criteria, "completeness" and "succinctness": completeness means the summary should fully cover the contents of the original documents, and succinctness means the summary should contain minimal internal duplication. The proposed method has three phases. The first phase constructs subject term dictionaries. Topic modeling is used to calculate topic-term weights, which indicate the degree to which each term is related to each topic. From the derived weights, highly related terms for every topic can be identified, and the subjects of the documents can be found from topics composed of terms with similar meanings. A few terms that represent each subject well are then selected; we call these "seed terms". However, the seed terms alone are too few to describe each subject, so sufficiently many similar terms are needed for a well-constructed subject dictionary. Word2Vec is used for word expansion, finding terms similar to the seed terms: after Word2Vec modeling, word vectors are obtained, and the similarity between any two terms can be derived with cosine similarity, where a higher cosine similarity indicates a stronger relationship. Terms with high similarity to the seed terms of each subject are selected, and after filtering the expanded terms, the subject dictionary is constructed. The next phase allocates a subject to every sentence of the original documents. To grasp the content of each sentence, frequency analysis is first conducted over the terms of the subject dictionaries. A TF-IDF weight for each subject is then calculated, indicating how much a sentence explains each subject. Because TF-IDF weights can grow without bound, the subject weights of each sentence are normalized to values between 0 and 1. Each sentence is then assigned to the subject with the maximum TF-IDF weight, yielding a sentence group for each subject. The last phase generates the summary. Sen2Vec is used to compute similarities between the sentences of each subject, forming a similarity matrix, and by repeatedly selecting sentences it is possible to generate a summary that fully covers the contents of the original documents while minimizing internal duplication. For evaluation, 50,000 TripAdvisor reviews were used to construct subject dictionaries and 23,087 reviews were used to generate summaries. Comparing the proposed method's summaries with frequency-based summaries verified that the proposed method better retains the balance of all the subjects the documents originally have.
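The second phase, allocating each sentence to its maximum-weight subject via normalized TF-IDF over dictionary terms, can be sketched as follows. The subject dictionaries and review sentences below are toy stand-ins: the paper builds its dictionaries with topic modeling and Word2Vec expansion, which this sketch does not reproduce.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# hypothetical subject dictionaries (in the paper these come from
# topic modeling seed terms expanded with Word2Vec)
subject_dicts = {
    "room":    ["room", "bed", "clean", "spacious"],
    "food":    ["breakfast", "restaurant", "menu", "delicious"],
    "service": ["staff", "friendly", "helpful", "checkin"],
}

sentences = [
    "the room was clean and the bed was spacious",
    "breakfast menu was delicious",
    "staff were friendly and helpful at checkin",
]

vec = TfidfVectorizer()
X = vec.fit_transform(sentences).toarray()
vocab = vec.vocabulary_  # term -> column index

def subject_scores(row):
    # sum the TF-IDF weight of each subject's dictionary terms,
    # then normalize the scores into the [0, 1] range
    raw = {subj: sum(row[vocab[t]] for t in terms if t in vocab)
           for subj, terms in subject_dicts.items()}
    top = max(raw.values()) or 1.0
    return {k: v / top for k, v in raw.items()}

# assign every sentence to its maximum-weight subject
assigned = []
for row in X:
    scores = subject_scores(row)
    assigned.append(max(scores, key=scores.get))

print(assigned)  # -> ['room', 'food', 'service']
```

The resulting per-subject sentence groups are what the third phase would draw from when repeatedly selecting summary sentences.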

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.163-177
    • /
    • 2019
  • As smartphones become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognizing the simple body movements of an individual user to recognizing low-level and high-level behaviors. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data, whereas physical sensors such as the accelerometer, magnetic field sensor and gyroscope are less privacy-sensitive and can collect a large amount of data within a short time. In this paper, a method for detecting accompanying status with a deep learning model using only multimodal physical sensor data (accelerometer, magnetic field and gyroscope) is proposed. Accompanying status is defined as a redefinition of part of the user's interaction behavior: whether the user is accompanying an acquaintance at close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompaniment and conversation is proposed. First, a data preprocessing method is introduced that consists of time synchronization of multimodal data from different physical sensors, data normalization and sequence data generation. Nearest-neighbor interpolation is applied to synchronize the timestamps of data collected from different sensors. Normalization is performed on each x, y and z axis value of the sensor data, and sequence data are generated with the sliding window method. The sequence data then become the input to the CNN, which extracts feature maps representing the local dependencies of the original sequence. The CNN consists of 3 convolutional layers and has no pooling layer, in order to maintain the temporal information of the sequence data. Next, the LSTM recurrent networks receive the feature maps, learn long-term dependencies from them and extract features. The LSTM network consists of two layers, each with 128 cells. Finally, the extracted features are classified by a softmax classifier. The loss function is cross entropy, and the weights of the model are randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model is trained with the adaptive moment estimation (Adam) optimization algorithm with a mini-batch size of 128, and dropout is applied to the inputs of the LSTM recurrent networks to prevent overfitting. The initial learning rate is 0.001, decreased exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. Using these data, the model classified accompaniment and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and the accuracy of the model were higher than those of a majority-vote classifier, a support vector machine and a deep recurrent neural network. Future research will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences, and on transfer learning methods that enable trained models to carry over to evaluation data following a different distribution; a model with robust recognition performance against data changes not considered at training time is expected.
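The preprocessing pipeline this abstract describes (nearest-neighbor time synchronization, per-axis normalization, sliding-window sequence generation) might look like the NumPy sketch below. The sensor sampling rates, window size and step are assumptions for illustration; the paper does not state them in this abstract.

```python
import numpy as np

def nearest_sync(t_ref, t_src, x_src):
    # align source samples onto the reference timeline by
    # picking the sample with the nearest timestamp
    idx = np.abs(t_src[None, :] - t_ref[:, None]).argmin(axis=1)
    return x_src[idx]

def normalize(x):
    # column-wise (per x/y/z axis) standardization
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

def sliding_windows(x, size, step):
    # cut the fused stream into fixed-length overlapping sequences
    return np.stack([x[i:i + size] for i in range(0, len(x) - size + 1, step)])

# toy streams: accelerometer at 50 Hz, gyroscope at 40 Hz (assumed rates)
t_acc = np.arange(0, 2, 1 / 50)                  # 100 timestamps
t_gyr = np.arange(0, 2, 1 / 40)                  # 80 timestamps
acc = np.random.randn(len(t_acc), 3)
gyr = np.random.randn(len(t_gyr), 3)

gyr_synced = nearest_sync(t_acc, t_gyr, gyr)     # (100, 3) on the acc timeline
fused = normalize(np.hstack([acc, gyr_synced]))  # (100, 6)
seqs = sliding_windows(fused, size=50, step=25)  # (3, 50, 6)
print(seqs.shape)  # (3, 50, 6)
```

Each `(window, 6)` sequence in `seqs` would then be fed to the CNN front end, whose feature maps flow into the two-layer LSTM.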

A Two-Stage Learning Method of CNN and K-means RGB Cluster for Sentiment Classification of Images (이미지 감성분류를 위한 CNN과 K-means RGB Cluster 이-단계 학습 방안)

  • Kim, Jeongtae;Park, Eunbi;Han, Kiwoong;Lee, Junghyun;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.139-156
    • /
    • 2021
  • The biggest reason for using a deep learning model in image classification is that it can consider the relationships between regions by extracting each region's features from the overall information of the image. However, a CNN model may not be suitable for emotional image data that lack distinctive regional features. To address the difficulty of classifying emotion images, researchers propose CNN-based architectures suited to emotion images every year. Studies on the relationship between color and human emotion have also been conducted, showing that different emotions are induced by different colors, and some deep learning studies have applied color information to image sentiment classification: using an image's color information in addition to the image itself improves the accuracy of classifying image emotions over training a classification model with the image alone. This study proposes two ways to increase accuracy by adjusting the result value after the model classifies an image's emotion; both modify the result value based on statistics over the colors of the picture. The most widely distributed two-color combinations are found for all training data, the two-color combination most distributed in each test image is found, and the result values are corrected according to the color-combination distribution; the correction weights the model's output using expressions based on the log and exponential functions. Emotion6, labeled with six emotions, and ArtPhoto, labeled with eight categories, were used as image data. DenseNet169, MnasNet, ResNet101, ResNet152 and VGG19 architectures were used for the CNN model, and performance was compared before and after applying the two-stage learning. Inspired by color psychology, which deals with the relationship between colors and emotions, we studied how to improve the accuracy of an image sentiment classifier by modifying its result values based on color. Sixteen colors were used: red, orange, yellow, green, blue, indigo, purple, turquoise, pink, magenta, brown, gray, silver, gold, white and black. Using scikit-learn's clustering, the seven colors primarily distributed in an image are found; the RGB coordinates of these colors are then compared with the RGB coordinates of the 16 reference colors, i.e., each is converted to the closest reference color. If combinations of three or more colors are used, too many color combinations occur and the distribution becomes scattered, so each combination has little influence on the result value; to avoid this, two-color combinations were used and weighted into the model. Before training, the most widely distributed color combinations were found for all training images, and the distribution of color combinations for each class was stored in a Python dictionary for use during testing. During testing, the two most widely distributed colors of each test image are found, the distribution of that combination in the training data is checked, and the result is corrected. Several equations were devised to weight the model's result value based on the extracted colors. The data set was randomly split 80:20, with 20% held out as a test set. The remaining 80% was split into five folds for 5-fold cross-validation, training the model five times with different validation sets, and performance was finally checked on the held-out test set. Adam was used as the optimizer, with the learning rate set to 0.01. Training ran for up to 20 epochs, and if the validation loss did not decrease for five epochs, training was stopped; early stopping was set to restore the model with the best validation loss. Classification accuracy was better when the extracted color information was used together with the CNN architecture than when only the CNN architecture was used.
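The color-extraction step (cluster an image's pixels into seven colors, snap each cluster to the nearest of 16 reference colors, keep the two most frequent as the image's color combination) can be sketched with scikit-learn's KMeans. The RGB coordinates assigned to the 16 color names and the toy pixel data are assumptions for the sketch; the paper does not list its exact reference values in this abstract.

```python
import numpy as np
from sklearn.cluster import KMeans

# assumed RGB coordinates for the paper's 16 color names
PALETTE = {
    "red": (255, 0, 0), "orange": (255, 165, 0), "yellow": (255, 255, 0),
    "green": (0, 128, 0), "blue": (0, 0, 255), "indigo": (75, 0, 130),
    "purple": (128, 0, 128), "turquoise": (64, 224, 208), "pink": (255, 192, 203),
    "magenta": (255, 0, 255), "brown": (139, 69, 19), "gray": (128, 128, 128),
    "silver": (192, 192, 192), "gold": (255, 215, 0), "white": (255, 255, 255),
    "black": (0, 0, 0),
}

def dominant_color_pair(pixels, n_clusters=7):
    # step 1: the seven colors primarily distributed in the image
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(pixels)
    counts = np.bincount(km.labels_, minlength=n_clusters)
    # step 2: snap each cluster center to the closest reference color
    names = list(PALETTE)
    refs = np.array(list(PALETTE.values()), dtype=float)
    snapped = [names[np.linalg.norm(refs - c, axis=1).argmin()]
               for c in km.cluster_centers_]
    # step 3: keep the two most frequent distinct snapped colors
    pair = []
    for i in counts.argsort()[::-1]:
        if snapped[i] not in pair:
            pair.append(snapped[i])
        if len(pair) == 2:
            break
    return tuple(pair)

# toy "image": mostly reddish pixels with a smaller white region
rng = np.random.default_rng(0)
base = np.vstack([np.tile([250.0, 5.0, 5.0], (600, 1)),
                  np.tile([250.0, 250.0, 250.0], (50, 1))])
pair = dominant_color_pair(base + rng.normal(0, 2.0, base.shape))
print(pair)
```

The resulting two-color combination is what the paper looks up in the per-class training distribution to correct the CNN's result value.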

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.205-225
    • /
    • 2018
  • Convolutional Neural Network (ConvNet) is one class of the powerful Deep Neural Network that can analyze and learn hierarchies of visual features. Originally, first neural network (Neocognitron) was introduced in the 80s. At that time, the neural network was not broadly used in both industry and academic field by cause of large-scale dataset shortage and low computational power. However, after a few decades later in 2012, Krizhevsky made a breakthrough on ILSVRC-12 visual recognition competition using Convolutional Neural Network. That breakthrough revived people interest in the neural network. The success of Convolutional Neural Network is achieved with two main factors. First of them is the emergence of advanced hardware (GPUs) for sufficient parallel computation. Second is the availability of large-scale datasets such as ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, it is difficult and requires lots of effort to gather large-scale dataset to train a ConvNet. Moreover, even if we have a large-scale dataset, training ConvNet from scratch is required expensive resource and time-consuming. These two obstacles can be solved by using transfer learning. Transfer learning is a method for transferring the knowledge from a source domain to new domain. There are two major Transfer learning cases. First one is ConvNet as fixed feature extractor, and the second one is Fine-tune the ConvNet on a new dataset. In the first case, using pre-trained ConvNet (such as on ImageNet) to compute feed-forward activations of the image into the ConvNet and extract activation features from specific layers. In the second case, replacing and retraining the ConvNet classifier on the new dataset, then fine-tune the weights of the pre-trained network with the backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. 
However, applying the high-dimensional features extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from different ConvNet layers capture different characteristics of an image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single-layer representation. Our primary pipeline has three steps. First, each image from the target task is fed forward through a pre-trained AlexNet, and activation features are extracted from its three fully connected layers. Second, the activation features of the three layers are concatenated to obtain a multiple-ConvNet-layer representation, which carries more information about the image; concatenating the three fully connected layer features yields a 9,192-dimensional (4,096+4,096+1,000) representation. However, features extracted from multiple layers of the same ConvNet are redundant and noisy. Thus, in the third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning improves. To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397), comparing multiple ConvNet layer representations against single-layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrate the importance of feature selection for multiple ConvNet layer representations. 
Moreover, our proposed approach achieved 75.6% accuracy compared to the 73.9% achieved by the FC7 layer on the Caltech-256 dataset, 73.1% compared to the 69.2% achieved by the FC8 layer on the VOC07 dataset, and 52.2% compared to the 48.7% achieved by the FC7 layer on the SUN397 dataset. Our approach also achieved superior performance over existing work, with accuracy improvements of 2.8%, 2.1%, and 3.1% on Caltech-256, VOC07, and SUN397, respectively.
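The three-step pipeline described above (extract fully connected layer activations, concatenate, then reduce with PCA) can be sketched as follows. This is a minimal illustration, not the authors' implementation: random arrays stand in for the AlexNet FC6/FC7/FC8 activations, and PCA is computed directly via SVD rather than with a library.

```python
import numpy as np

rng = np.random.default_rng(0)
n_images = 50

# Stand-ins for the activations of AlexNet's three fully connected
# layers (in practice these come from feed-forwarding each image
# through the pre-trained network).
fc6 = rng.standard_normal((n_images, 4096))
fc7 = rng.standard_normal((n_images, 4096))
fc8 = rng.standard_normal((n_images, 1000))

# Step 2: concatenate into one 9,192-dimensional representation.
features = np.concatenate([fc6, fc7, fc8], axis=1)
assert features.shape == (n_images, 9192)

# Step 3: PCA via SVD -- keep only the top-k principal components
# as the "salient" features fed to the classifier.
k = 32
centered = features - features.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
reduced = centered @ Vt[:k].T  # shape (n_images, k)

print(reduced.shape)
```

The reduced matrix would then be used to train an ordinary classifier (e.g., a linear SVM) on the target dataset; the choice of k is a hyperparameter, not a value taken from the paper.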

A Study on Estimation of Edible Meat Weight in Live Broiler Chickens (육용계(肉用鷄)에서 가식육량(可食肉量)의 추정(推定)에 관(關)한 연구(硏究))

  • Han, Sung Wook;Kim, Jae Hong
    • Korean Journal of Agricultural Science
    • /
    • v.10 no.2
    • /
    • pp.221-234
    • /
    • 1983
  • A study was conducted to devise a method to estimate the edible meat weight in live broilers. White Cornish broiler chicks (CC), Single Comb White Leghorn egg-strain chicks (LL), and the two reciprocal crossbreeds of these parent stocks (CL and LC) were employed. A total of 240 birds, 60 from each breed, were reared and sacrificed at 0, 2, 4, 6, 8 and 10 weeks of age in order to measure various body parameters. The results of this study are summarized as follows. 1) The average body weights of CC and LL were 1,820 g and 668 g, respectively, at 8 weeks of age. The feed-to-gain ratios for CC and LL were 2.24 and 3.28, respectively. 2) The weight percentages of edible meat to body weight for CC were 34.7, 36.8 and 37.5% at 6, 8 and 10 weeks of age, respectively. The corresponding values for LL were 30.7, 30.5 and 32.3%; CL and LC were intermediate in this respect. No significant differences were found among the four breeds. 3) The CC showed significantly smaller weight percentages than the other breeds in neck, feather, and inedible viscera, while the LL showed smaller weight percentages of leg and abdominal fat to body weight than the others. No significant difference was found among breeds in the weight percentage of blood to body weight. With regard to edible meat, the CC showed significantly heavier breast and drumstick, while the edible viscera were significantly heavier in LL. There was no consistent trend in neck, wing and back weights. 4) The CC showed significantly larger body shape measurements than the other breeds at all ages. Moreover, a significant difference in body shape measurements was found between CL and LC at 10 weeks of age. 5) All of the body shape measurements except breast angle were highly correlated with edible meat weight; therefore, it appears possible to estimate the edible meat weight of live chickens from these values. 
6) The optimum regression equations for the estimation of edible meat weight from body shape measurements at 10 weeks of age were as follows. $$Y_{cc}=-1,475.581 +5.054X_{26}+3.080X_{24}+3.772X_{25}+14.321X_{35}+1.922X_{27}(R^2=0.88)$$ $$Y_{LL}=-347.407+4.549X_{33}+3.003X_{31}(R^2=0.89)$$ $$Y_{CL}=-1,616.793+4.430X_{24}+8.566X_{32}(R^2=0.73)$$ $$Y_{LC}=-603.938+2.142X_{24}+3.039X_{27}+3.289X_{33}(R^2=0.96)$$ Where $X_{24}$=chest girth, $X_{25}$=breast width, $X_{26}$=breast length, $X_{27}$=keel length, $X_{31}$=drumstick girth, $X_{32}$=tibiotarsus length, $X_{33}$=shank length, and $X_{35}$=shank diameter. 7) The breed and age factors caused considerable variation in assessing the edible meat weight of live chickens. It seems, however, that the edible meat weight of a live chicken can be estimated fairly accurately with the optimum regression equations derived from these body shape measurements.
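Applying one of the reported equations is straightforward arithmetic; the sketch below evaluates the LL equation. The measurement values used here are purely hypothetical (the original study does not give example inputs, and its measurement units are not stated in the abstract).

```python
def edible_meat_ll(x33: float, x31: float) -> float:
    """Reported LL equation: Y_LL = -347.407 + 4.549*X33 + 3.003*X31
    (R^2 = 0.89), where X33 = shank length and X31 = drumstick girth."""
    return -347.407 + 4.549 * x33 + 3.003 * x31

# Hypothetical 10-week measurements, for illustration only.
y = edible_meat_ll(x33=90.0, x31=110.0)
print(round(y, 3))  # estimated edible meat weight in grams
```

The same pattern applies to the CC, CL and LC equations; only the predictors and coefficients change.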
