• Title/Summary/Keyword: three different solutions

Search Results: 519

A Digital Audio Watermarking Algorithm Using a 2D Barcode (2차원 바코드를 이용한 오디오 워터마킹 알고리즘)

  • Bae, Kyoung-Yul
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.2
    • /
    • pp.97-107
    • /
    • 2011
  • Nowadays there are many copyright infringement issues on the Internet because digital content on the network can be copied and delivered easily, and a copy has the same quality as the original. Copyright owners and content providers therefore want a powerful solution to protect their content. One popular solution was DRM (digital rights management), which is based on encryption technology and rights control. However, DRM-free services were launched after Steve Jobs, then CEO of Apple, proposed a new music service paradigm without DRM, and DRM has since disappeared from the online music market. Even though online music services decided not to adopt DRM, copyright owners and content providers are still searching for a way to protect their content. One technology that can replace DRM is digital audio watermarking, which can embed copyright information into the music itself. In this paper, the author proposes a new audio watermarking algorithm with two features. First, the watermark information is generated from a two-dimensional barcode that carries an error correction code, so the information can recover itself as long as the errors fall within the error tolerance. Second, the algorithm uses the chip sequences of CDMA (code division multiple access). Together, these make the algorithm robust to several malicious attacks. Among the many 2D barcodes, the QR code, a matrix barcode, can express information more freely than other matrix barcodes. QR codes have nested square finder patterns at three corners that indicate the boundary of the symbol. This feature makes the QR code well suited to expressing the watermark information: because the QR code is a two-dimensional, nonlinear matrix code, it can be modulated into a spread spectrum and used in the watermarking algorithm.
The proposed algorithm assigns a different spread-spectrum sequence to each user. Because the assigned code sequences are orthogonal, the watermark information of an individual user can be identified within an audio content. The algorithm uses the Walsh code as the orthogonal code. The watermark information is rearranged from the 2D barcode into a 1D sequence and modulated by the Walsh code, and the modulated watermark is embedded into the DCT (discrete cosine transform) domain of the original audio. For the performance evaluation, three audio samples were used: "Amazing Grace", "Oh! Carol", and "Take Me Home, Country Roads". The attacks for the robustness test were MP3 compression, an echo attack, and a subwoofer boost. The MP3 compression was performed with Cool Edit Pro 2.0, using CBR (constant bit rate) 128 kbps, 44,100 Hz, stereo. The echo attack used an initial volume of 70%, a decay of 75%, and a delay of 100 ms. The subwoofer boost attack modified the low-frequency part of the Fourier coefficients. The test results showed that the proposed algorithm is robust to these attacks. Under the MP3 attack, the strength of the watermark information is not affected, and the watermark can be detected in all of the sample audios. Under the subwoofer boost attack, the watermark was detected when the strength was 0.3, and under the echo attack the watermark can be identified when the strength is greater than or equal to 0.5.
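The Walsh-code spreading and despreading step described above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the code length, bit values, and function names are assumptions, and the DCT embedding itself is omitted.

```python
# Sketch of Walsh-code spread-spectrum modulation for a watermark bit
# sequence (hypothetical parameters; the paper's embedding strength and
# DCT framing are not reproduced here).

def walsh_matrix(n):
    """Build an n x n Walsh-Hadamard matrix (n must be a power of two)."""
    assert n & (n - 1) == 0, "n must be a power of two"
    h = [[1]]
    while len(h) < n:
        # Sylvester construction: [[H, H], [H, -H]]
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

def spread(bits, code):
    """Map watermark bits (0/1) to +/-1 symbols, then spread chip-wise."""
    symbols = [1 if b else -1 for b in bits]
    return [s * c for s in symbols for c in code]

def despread(chips, code):
    """Correlate received chips against one user's code to recover bits."""
    n = len(code)
    bits = []
    for i in range(0, len(chips), n):
        corr = sum(chips[i + j] * code[j] for j in range(n))
        bits.append(1 if corr > 0 else 0)
    return bits
```

Because distinct Walsh rows are orthogonal, correlating against a different user's code yields zero, which is what lets each user's watermark be identified separately.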

Understanding Problem-Solving Type Inquiry Learning and Its Effect on the Improvement of the Ability to Design Experiments: A Case Study on Science-Gifted Students (문제해결형 탐구학습에 대한 인식과 학습이 실험 설계 능력에 미친 효과 : 과학 영재학생들에 대한 사례 연구)

  • Ju, Mi-Na;Kim, Hyun-Joo
    • Journal of The Korean Association For Science Education
    • /
    • v.33 no.2
    • /
    • pp.425-443
    • /
    • 2013
  • We developed problem-solving type inquiry learning programs reflecting scientists' research process, and after implementing them in class we analyzed the activities of science-gifted high school students, their understanding of the programs, and the programs' effects. Twelve science-gifted students in the 10th grade participated in the program, which consisted of three modules: making a cycloidal pendulum, surface growth, and synchronization using metronomes. The Diet Cola Test (DCT) was used to measure the effect on the ability to design experiments by comparing pre/post scores, and a survey and interviews were conducted after the class. Each module consisted of a series of processes such as questioning a phenomenon scientifically, designing experiments to find solutions, and carrying out activities to solve the problems, which enabled students to experience a problem-solving research process during the class. According to the analysis, most students understood the characteristics of problem-solving type inquiry learning programs reflecting scientists' research process. According to the students, this program class differed from their existing school classes in 'explaining phenomena scientifically,' 'designing experiments for themselves,' and 'repeating the experiments several times.' During the class, students had to think continuously, design several experiments, and carry them out to solve the problems they identified at first, and they were eventually able to solve them. By repeating these activities, they were able to experience scientists' research process, and by understanding the problem-solving type research process they showed a positive attitude toward scientific research.
These problem-solving type inquiry learning programs appear to have positive effects on students' experiment design and to offer opportunities for critical argumentation about the causes of phenomena. Comparing pre/post DCT scores revealed that almost every student improved his or her ability to design experiments. Students who were accustomed to following the teacher's instructions had difficulty designing experiments for themselves at the beginning of the class, but they gradually became used to it and were eventually able to do so systematically.

Jet Lag and Circadian Rhythms (비행시차와 일중리듬)

  • Kim, Leen
    • Sleep Medicine and Psychophysiology
    • /
    • v.4 no.1
    • /
    • pp.57-65
    • /
    • 1997
  • As modern jet travel continues to spread, there has been an exponential growth in popular explanations of jet lag and recommendations for curing it. Some of this attention is misdirected, and many of the suggested solutions are misinformed. The author reviews the basic science of jet lag and its practical consequences. Jet lag symptoms stem from several factors, including high-altitude flying, the lag effect, and sleep loss before departure and on the aircraft, especially during night flights. Jet lag has three major components: external desynchronization, internal desynchronization, and sleep loss. Although external desynchronization is the major culprit, it is not at all uncommon for travelers to experience difficulty falling asleep or remaining asleep because of gastrointestinal distress, uncooperative bladders, or nagging headaches. Such unwanted intrusions most likely reflect the general influence of internal desynchronization. Data from free-running subjects have revealed that sleep tendency, sleepiness, the spontaneous duration of sleep, and REM sleep propensity each vary markedly with the endogenous circadian phase of the temperature cycle, despite the fact that the average period of the sleep-wake cycle differs from that of the temperature cycle under these conditions. However, whereas the first occurrence of slow wave sleep (SWS) is usually associated with a fall in temperature, the amount of SWS is determined primarily by the length of prior wakefulness and not by circadian phase. Another factor to be considered for flight in either direction is the amount of prior sleep loss or time awake; an increase in either would be expected to reduce initial sleep latency and enhance the amount of SWS. By combining what we now know about the circadian characteristics of sleep and the homeostatic process, many of the diverse findings about sleep after transmeridian flight can be explained.
The severity of jet lag is directly related to two major variables that determine the reaction of the circadian system to any transmeridian flight: the direction of flight and the number of time zones crossed. A remaining factor is individual differences in resynchronization. After a long flight, the circadian timing system and the homeostatic process can combine to produce a considerable reduction in well-being. The author suggests that by exposure to local zeitgebers and by staying awake until the local night, sleep improves rapidly with resynchronization following a time zone change.


Predicting the Performance of Recommender Systems through Social Network Analysis and Artificial Neural Network (사회연결망분석과 인공신경망을 이용한 추천시스템 성능 예측)

  • Cho, Yoon-Ho;Kim, In-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.159-172
    • /
    • 2010
  • The recommender system is one possible solution for assisting customers in finding the items they would like to purchase. To date, a variety of recommendation techniques have been developed. One of the most successful is Collaborative Filtering (CF), which has been used in a number of applications such as recommending Web pages, movies, music, articles, and products. CF identifies customers whose tastes are similar to those of a given customer and recommends items those customers have liked in the past. Numerous CF algorithms have been developed to increase the performance of recommender systems: broadly, memory-based CF algorithms, model-based CF algorithms, and hybrid CF algorithms that combine CF with content-based techniques or other recommender systems. While many researchers have focused their efforts on improving CF performance, the theoretical justification of CF algorithms is lacking; little is known about why and when CF works well. Furthermore, the relative performances of CF algorithms are known to be domain- and data-dependent. Implementing and launching a CF recommender system is time-consuming and expensive, and a system unsuited to the given domain provides customers with poor-quality recommendations that easily annoy them. Therefore, predicting the performance of CF algorithms in advance is practically important and needed. In this study, we propose an efficient approach to predicting the performance of CF. Social Network Analysis (SNA) and an Artificial Neural Network (ANN) are applied to develop our prediction model. CF can be modeled as a social network in which customers are nodes and purchase relationships between customers are links. SNA facilitates an exploration of the topological properties of the network structure that are implicit in the data used for CF recommendations.
An ANN model is developed through an analysis of network topology measures: network density, inclusiveness, clustering coefficient, network centralization, and Krackhardt's efficiency. Network density, expressed as a proportion of the maximum possible number of links, captures the density of the whole network, while the clustering coefficient captures the degree to which the overall network contains localized pockets of dense connectivity. Inclusiveness refers to the number of nodes included within the various connected parts of the social network. Centralization reflects the extent to which connections are concentrated in a small number of nodes rather than distributed equally among all nodes. Krackhardt's efficiency characterizes how dense the social network is beyond what is barely needed to keep the social group even indirectly connected. We use these social network measures as input variables of the ANN model; as the output variable, we use the recommendation accuracy measured by the F1-measure. To evaluate the effectiveness of the ANN model, sales transaction data from H department store, one of the well-known department stores in Korea, were used. A total of 396 experimental samples were gathered, of which 40%, 40%, and 20% were used for training, testing, and validation, respectively. Five-fold cross-validation was also conducted to enhance the reliability of the experiments. The input variable measuring process consists of three steps: analysis of customer similarities, construction of a social network, and analysis of social network patterns. We used NetMiner 3 and UCINET 6.0 for SNA, and Clementine 11.1 for ANN modeling. The experiments showed that the ANN model has 92.61% estimated accuracy and an RMSE of 0.0049. Thus, our prediction model helps decide whether CF is useful for a given application with certain data characteristics.
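Two of the topology measures listed above, network density and the clustering coefficient, are simple enough to compute directly. The sketch below assumes an undirected customer network stored as an adjacency dict; the study itself computed its measures with NetMiner and UCINET, so this is only an illustration of the definitions.

```python
# Illustrative computation of two SNA input variables on a toy
# undirected network (adjacency dict: node -> set of neighbours).

def density(adj):
    """Links present divided by the maximum possible number of links."""
    n = len(adj)
    links = sum(len(nbrs) for nbrs in adj.values()) // 2  # each edge counted twice
    return links / (n * (n - 1) / 2)

def clustering_coefficient(adj, node):
    """Fraction of a node's neighbour pairs that are themselves connected."""
    nbrs = list(adj[node])
    k = len(nbrs)
    if k < 2:
        return 0.0
    closed = sum(1 for i in range(k) for j in range(i + 1, k)
                 if nbrs[j] in adj[nbrs[i]])
    return closed / (k * (k - 1) / 2)
```

For a four-node network containing four of the six possible links, `density` returns 2/3; averaging the per-node coefficient over all nodes would give the whole-network clustering the abstract refers to.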

Criticism of Landscape Urbanism - Focused on Internal Structures of the Discourse - (랜드스케이프 어바니즘의 비판적 견해에 대한 고찰 - 담론의 내재적 체계를 중심으로 -)

  • Kim, Youngmin
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.43 no.2
    • /
    • pp.87-104
    • /
    • 2015
  • As the influence of Landscape Urbanism has grown, criticisms of the discourse have also increased. A study of critical opinions on Landscape Urbanism is necessary to fully comprehend the theoretical structure of the discourse and its limitations. This study introduces the concepts of intension and extension, used in the fields of logic and semiotics, as analytical tools to interpret various criticisms based on different views in a more objective and synthetic way. After examining the development of criticism of Landscape Urbanism, 30 texts with important critiques of the theory were selected and analyzed. Criticisms can be classified as internal or external according to the specific topics they engage with; this study covers only internal criticism. The internal criticisms of Landscape Urbanism are re-categorized into the topics of theory, practice, and the relation between theory and practice. Vagueness of concepts and error in concepts are the two types of criticism related to theory. Lexical ambiguity and intensional vagueness are the main causes of conceptual vagueness in Landscape Urbanism, and conceptual vagueness tied to redefining an existing concept by expanding its meaning reveals a structural dilemma. Three types of criticism fall under the topic of practice: absence of practical results, form-oriented practice, and ambiguous identity of practical results. Ambiguous identity is caused by extensional vagueness that allows borderline cases; because these borderline cases overlap with the extensions of landscape architecture, it is hard to differentiate projects of Landscape Urbanism from those of conventional landscape architecture. Most criticisms of the relation between theory and practice question the practical method, in two types: errors in practical methods and absence of practical methods.
The absence of practical methods is a fundamental problem of Landscape Urbanism that is hard to solve with the proposed solutions. However, these structural problems are not only a weak point but also a factor that reveals the potential to expand the scope of Landscape Urbanism. Building on the results of this study, external criticisms of Landscape Urbanism should be examined in following studies in order to predict the next direction of Landscape Urbanism.

Antioxidant and Antibacterial Activities of Glycyrrhiza uralensis Fisher (Jecheon, Korea) Extracts Obtained under Various Extraction Conditions (한국 제천 감초(Glycyrrhiza uralensis Fisher)의 추출 조건별 추출물의 항산화 및 항균 활성 평가)

  • Ha, Ji Hoon;Jeong, Yoon Ju;Seong, Joon Seob;Kim, Kyoung Mi;Kim, A Young;Fu, Min Min;Suh, Ji Young;Lee, Nan Hee;Park, Jino;Park, Soo Nam
    • Journal of the Society of Cosmetic Scientists of Korea
    • /
    • v.41 no.4
    • /
    • pp.361-373
    • /
    • 2015
  • This study evaluated the antioxidant and antibacterial activities of Glycyrrhiza uralensis Fisher (Jecheon, Korea) extracts obtained under various extraction conditions (85% ethanol, different heating temperatures and times), in order to establish the optimal extraction conditions of G. uralensis for use as a cosmetic ingredient. The extracts obtained under each condition were either concentrated and powdered (sample-1) or kept as crude extract solutions without concentration (sample-2). Antioxidant effects were determined by free radical scavenging activity (FSC50), ROS scavenging activity (OSC50), and cellular protective effects. Antibacterial activity was determined by the minimum inhibitory concentration (MIC) against human skin flora. The DPPH free radical scavenging activity of sample-1 (100 μg/mL) was 10% higher in the group extracted for 6 h than in the group extracted for 12 h, but sample-2 showed no significant differences. At the same temperature, the extraction yield after 12 h was 2.6 times that after 6 h, but the total flavonoid content was only 1.1 times higher, indicating that total flavonoid content hardly increases with extraction time. Free radical scavenging activity, ROS scavenging activity, and cellular protective effects depended not on the extraction yield but on the total flavonoid content of the extract. The antibacterial activity of sample-1 against three skin flora (S. aureus, B. subtilis, P. acnes) was evaluated at the same concentration for the different extraction conditions, and the groups extracted at 25 and 40 °C showed activity 16 times higher than methyl paraben (2,500 μg/mL). In conclusion, the 85% ethanol extract of G. uralensis extracted at 40 °C for 6 h showed the highest antioxidant and antibacterial activity.
These results indicate that, for product manufacturing processes, extraction conditions should be optimized through a comprehensive evaluation of the extraction yield under various conditions, the yield of active components, activity tests across concentrations, and the activity of the 100% extract.

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • A data center is a physical facility for accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failures. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment and may cause enormous damage. Failures in IT facilities are particularly irregular because of interdependence, and their causes are difficult to identify. Previous studies predicting failures in data centers treated each server as a single, independent state, without assuming that devices interact. In this study, therefore, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and the analysis focused on complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. The causes of failures occurring inside servers, on the other hand, are difficult to determine, and adequate prevention has not yet been achieved; in particular, server failures rarely occur singly, and a failure may cause failures in other servers or be triggered by other servers. In other words, while existing studies analyzed failures on the assumption of a single server that does not affect other servers, this study assumes that failures have effects between servers.
To define the complex failure situation in the data center, failure history data for each piece of equipment in the data center were used. Four major failure types are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device were sorted in chronological order, and when a failure occurred on one piece of equipment, any failure occurring on another piece of equipment within 5 minutes was defined as occurring simultaneously. After constructing sequences of the devices that failed at the same time, five devices that frequently failed together within the constructed sequences were selected, and the cases in which the selected devices failed simultaneously were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from the previous state. In addition, unlike the single-server case, the Hierarchical Attention Network deep learning model structure was used, considering that the degree to which each server contributes to a complex failure differs; this method increases prediction accuracy by weighting a server more heavily as its impact on the failure increases. The study began by defining the failure types and selecting the analysis targets. In the first experiment, the same collected data were treated as a single-server state and as a multiple-server state and compared. The second experiment improved the prediction accuracy for complex failures by optimizing a threshold for each server.
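The 5-minute co-failure rule above can be made concrete with a small grouping routine. This is a hedged sketch: the event format and the choice to anchor the window at the first failure of each group are assumptions, not details taken from the paper.

```python
# Group failure events on different devices into simultaneous-failure
# events when they start within 5 minutes of the group's first failure.
# (Hypothetical event format: (timestamp, device_id) tuples.)

from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)

def group_cofailures(events):
    """events: list of (timestamp, device_id), in any order.
    Returns lists of device ids that failed within the 5-minute window."""
    events = sorted(events)
    groups, current, start = [], [], None
    for ts, dev in events:
        if start is None or ts - start > WINDOW:
            if current:
                groups.append(current)
            current, start = [dev], ts  # open a new window at this failure
        else:
            current.append(dev)
    if current:
        groups.append(current)
    return groups
```

The resulting groups are what would then be scanned for devices that frequently co-occur, as described above.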
In the first experiment, which assumed a single server and multiple servers in turn, the single-server model predicted no failure for three of the five servers even though failures actually occurred, whereas the multiple-server model correctly predicted failures for all five servers. This result supports the hypothesis that servers affect one another: prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, on the assumption that each server's effect differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study showed that failures whose causes are difficult to determine can be predicted from historical data, and it presents a model that can predict failures occurring on servers in data centers. Using these results, failures are expected to be preventable in advance.
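The per-server thresholding used in the second experiment can be illustrated as follows; the probabilities, threshold values, and interface are invented for illustration, since the paper does not specify them.

```python
# Instead of one global cut-off, each server gets its own tuned
# probability threshold for declaring a predicted failure.

def predict_failures(probs, thresholds, default=0.5):
    """probs: {server: predicted failure probability}
    thresholds: {server: tuned threshold}; servers without a tuned
    value fall back to the global default."""
    return {s: p >= thresholds.get(s, default) for s, p in probs.items()}
```

With identical probabilities, a server whose tuned threshold is lower can be flagged while another is not, which is how per-server tuning changes the accuracy trade-off.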

An Exploratory Study on the Competition Patterns Between Internet Sites in Korea (한국 인터넷사이트들의 산업별 경쟁유형에 대한 탐색적 연구)

  • Park, Yoonseo;Kim, Yongsik
    • Asia Marketing Journal
    • /
    • v.12 no.4
    • /
    • pp.79-111
    • /
    • 2011
  • The digital economy has grown rapidly, and the new business area called 'Internet business' has extended dramatically over time. In Internet business, however, the market shares of individual companies fluctuate dramatically. Marketing managers who operate Internet sites therefore closely observe the competition structure of the Internet business market and carefully analyze competitors' behavior in order to achieve their business goals. Newly created Internet businesses may differ from offline ones in management style, because their business circumstances are totally different from those of existing offline businesses. Thus, more research is needed on what the features of Internet business are and how the management styles of Internet business companies should change. Most marketing literature related to Internet business has focused on individual business markets; specifically, many researchers have studied Internet portal sites and Internet shopping mall sites, the most common forms of Internet business. This study, in contrast, focuses on the entire Internet business industry to understand the competitive circumstances of the online market. This approach makes it possible not only to take a broader view of the overall e-business industry but also to understand the differences in competition structures among Internet business markets. We used time-series data of consumers' Internet connection rates as the basic data for identifying competition patterns in Internet business markets. Specifically, the data were obtained from 'Fian', an Internet ranking site. The ranking data are based on the web surfing records of a pre-selected sample group, with double-counting of page views controlled by checking for identical IPs.
The ranking site offers several kinds of data that are very useful for comparing and analyzing competitive sites. The Fian site divides Internet business into 34 areas and offers daily market shares of the top five sites in each category. We collected daily market share data for the sites in each area from April 22, 2008 to August 5, 2008; after purification that removed areas with data errors, data for 30 business areas were used in this research. This study performed several empirical analyses focusing on the market shares of each site to understand the competition among sites in Korean Internet business, applying cluster analysis to the data for a statistically more precise look at business fields with similar competitive structures. The research results are as follows. First, the leading sites in each area were classified into three groups based on the averages and standard deviations of daily market shares. The first group includes the sites with the lowest market shares, which increase convenience for consumers by offering Internet sites as complementary services to existing offline services. The second group includes sites with medium-level market shares, whose users are limited to specific small groups. The third group includes sites with the highest market shares, which usually require online registration in advance and make switching to another site difficult. Second, we analyzed the second-place sites in each business area, since they may help us understand the competitive power of the strongest competitor against the leading site. The second-place sites in each business area were classified into four groups based on the averages and standard deviations of daily market shares.
The four groups are: sites showing consistent inferiority compared to the leading sites; sites with relatively high volatility and medium-level shares; sites with relatively low volatility and medium-level shares; and sites with relatively low volatility and high-level shares whose gaps from the leading sites are not large. Except for the 'web agency' area, these second-place sites show relatively stable shares, with standard deviations below 0.1 points. Third, we classified the types of relative strength between the leading sites and the second-place sites by applying cluster analysis to the gaps in market share between the two sites. These were also classified into four groups: sites with the relatively lowest gaps, though with various standard deviations; sites with below-average gaps; sites with above-average gaps; and sites with relatively higher gaps and lower volatility. We also found that while areas with relatively larger gaps usually have smaller standard deviations, areas with very small differences between the first and second sites have a wider range of standard deviations. The practical and theoretical implications of this study are as follows. First, the results might provide current market participants with useful information for understanding the competitive circumstances of the market and building effective business strategies for market success, and might help new potential entrants find a business area and set up successful competitive strategies. Second, it might help Internet marketing researchers take a macro view of the overall Internet market, making it possible to begin new studies of the overall Internet market beyond individual market studies.
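The grouping steps above reduce each site's daily share series to a (mean, standard deviation) pair and then cluster on those two features. A minimal sketch, with invented sample data and nearest-centroid assignment standing in for the study's cluster analysis, looks like this:

```python
# Reduce each site's daily market-share series to (mean, std), the two
# statistics the clustering in the study was based on, then assign each
# feature pair to the nearest of a set of given centroids.

from statistics import mean, pstdev

def share_features(series_by_site):
    """Map each site to the (mean, std) of its daily market-share series."""
    return {site: (mean(s), pstdev(s)) for site, s in series_by_site.items()}

def assign_group(feat, centroids):
    """Assign a (mean, std) feature pair to the nearest centroid (squared
    Euclidean distance); centroids: {label: (mean, std)}."""
    return min(centroids,
               key=lambda c: (feat[0] - centroids[c][0]) ** 2
                           + (feat[1] - centroids[c][1]) ** 2)
```

A full k-means run would re-estimate the centroids iteratively; the assignment step shown here is the part that produces the group labels discussed above.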


A Study on the Effect of Water Soluble Extractive upon Physical Properties of Wood (수용성(水溶性) 추출물(抽出物)이 목재(木材)의 물리적(物理的) 성질(性質)에 미치는 영향(影響))

  • Shim, Chong-Supp
    • Journal of the Korean Wood Science and Technology
    • /
    • v.10 no.3
    • /
    • pp.13-44
    • /
    • 1982
  • 1. It has long been said that soaking wood in water for an extended time helps reduce defects such as checking, cupping, and bow caused by undue shrinking and swelling. There are, however, no actual data establishing this definitively, although there are some guesses that water-soluble extractives might be involved. On the other hand, little work has been done on the effect of water-soluble extractives on the physical properties of wood and on how it might relate to this problem. If it can be determined whether soaking wood in water for a long time reduces defects due to undue shrinking and swelling compared with unsoaked wood, it may contribute greatly to the rational use of wood. To account for the effect of water-soluble extractives on the physical properties of wood, this study was made at the wood technology laboratory, School of Forestry, Yale University, under the competent guidance of Dr. F. F. Wangaard, with three species provided by the same laboratory: 1. Pinus strobus, 2. Quercus borealis, 3. Hymenaea courbaril. 2. The physical properties investigated in this study are as follows: a. equilibrium moisture content at different relative humidity conditions; b. shrinkage from the green condition to different relative humidity conditions and the oven-dry condition; c. swelling from the oven-dry condition to different relative humidity conditions; d. specific gravity. 3. To investigate the effect of water-soluble extractives on the physical properties of wood, the experiment was carried out with two differently treated sets of specimens, one soaked in water and the other in sugar solution, together with control specimens. 4.
The quantity of water-soluble extractives of each species and the groups of chemical compounds in the liquid extracted from each species are shown in Table 36. Between species there are some differences in the quantity of extractives and the groups of chemical compounds. 5. For equilibrium moisture content at different relative humidity conditions: (a) Except for the desorption case at 80% R.H.C. (relative humidity condition), there is a definite distinction between untreated and treated specimens: untreated specimens hold more water than treated specimens at the same R.H.C. (b) The specimens treated in sugar solution showed almost the same tendency as the untreated specimens. (c) Between species there is no definite relation in equilibrium moisture content, although the E.M.C. in pine heartwood is lower than in sapwood, which might be caused by differences in wood anatomical structure. 6. For shrinkage: (a) The shrinkage of the water-treated specimens is greater than that of the untreated specimens, except for one case of pine heartwood at 80% R.H.C. (b) The shrinkage of specimens treated in sugar solution is less than that of the others and shows almost the same tendency as the untreated specimens, which suggests that the penetration of sugar into the wood can decrease its shrinkage. (c) Between species, the shrinkage of pine heartwood is less than that of pine sapwood; oak shrinks the most, and Hymenaea shrinks less than oak but more than pine. (d) The directional differences in shrinkage seen in all species agree with those of the other species previously tested. (e) There is a definite relation between the amount of extractives and the difference in shrinkage between treated and untreated specimens: more extractives means a larger difference. 7.
For swelling: (a) The swelling of treated specimens is greater than that of untreated specimens in all cases. (b) Comparing the tangential and radial directions, swelling in the tangential direction is larger than in the radial direction within the same species. (c) Between species, oak has the largest swelling and pine heartwood the smallest; there is also a tendency for species that shrink more to swell more and, conversely, for species that shrink less to swell less. 8. For specific gravity: (a) The specific gravity of the treated specimens is larger than that of the untreated specimens; this reversal between treated and untreated specimens results from the specimen volume at the oven-dry condition. (b) Between species there are differences: the specific gravity of Hymenaea is the largest and that of pine sapwood the smallest. 9. From this investigation it is concluded that soaking wood in plain water before use, without special consideration, may bring more harmful results than not soaking. However, soaking wood in specially prepared solutions, such as salt water or solutions of dissolved inorganic matter, can help reduce shrinkage, swelling, checking, shake, and bow. If soaking wood in plain water does reduce defects, the benefit likely comes from more even shrinking and swelling across all dimensions.
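The shrinkage and swelling values compared above are conventionally computed as percentage changes in dimension; the helpers below assume the standard definitions (shrinkage relative to the green dimension, swelling relative to the oven-dry dimension) rather than anything stated in the paper's methods.

```python
# Standard percentage definitions of wood shrinkage and swelling
# (assumed conventions, not taken from the paper).

def shrinkage_pct(green_dim, dry_dim):
    """Shrinkage from green to dry, as a percentage of the green dimension."""
    return (green_dim - dry_dim) / green_dim * 100

def swelling_pct(dry_dim, wet_dim):
    """Swelling from oven-dry to a wetter state, as a percentage of the
    oven-dry dimension."""
    return (wet_dim - dry_dim) / dry_dim * 100
```

Note the asymmetry of the reference dimension: a piece that shrinks 8% from green does not swell exactly 8% back, because the swelling percentage is taken against the smaller oven-dry dimension.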
