• Title/Summary/Keyword: SAMe


Studies on the fate of nitrogen in the paddy soil (답토양(沓土壤)에서 질소(窒素)의 동태(動態)에 관(關)한 연구(硏究))

  • Kim, Kwang Sik
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.9 no.1
    • /
    • pp.17-23
    • /
    • 1976
  • In order to investigate the fate of nitrogen in paddy soil, Suchang, Hwasoon and Susan soils, which have different properties, were treated with several nitrogen fertilizers such as ammonium chloride, ammonium sulfate, urea and SCU (sulfur-coated urea), and incubated under water-logged conditions in a $30^{\circ}C$ incubator. $NH_4-N$, $NO_3-N$, $Fe^{++}$ and pH in the soil and stagnant water were determined at 10, 20, 30, 40 and 50 days after incubation. The results were summarized as follows: 1. The effect of rising temperature increased in the order Hwasoon>Suchang>Susan and the effect of air-drying the soil increased in the order Susan>Hwasoon>Suchang, while the rate of ammonification was in the order Susan>Suchang>Hwasoon. 2. The change of $NH_4-N$ in stagnant water depended on the nitrogen concentration: in the $NH_4Cl$ and $(NH_4)_2SO_4$ plots it was high and decreased after 30 days of incubation, then increased after 40 days and decreased again. In contrast, the $NH_4-N$ concentration of the urea and SCU plots was low but changed only slightly throughout the incubation period. 3. Accumulation of $NH_4-N$ in the oxidative layer of the $NH_4Cl$ and $(NH_4)_2SO_4$ plots was higher than that of the urea and SCU plots, and the $NH_4-N$ content decreased over the incubation period. The change of $NH_4-N$ in the reductive layer showed the same pattern. 4. The changes of $NO_3-N$ in the stagnant water differed according to soil properties and nitrogen fertilizer. The $NO_3-N$ concentration in stagnant water of the urea and SCU plots was higher than in the $NH_4Cl$ and $(NH_4)_2SO_4$ plots and nearly disappeared after 30 to 40 days of incubation. 5. The $NO_3-N$ concentration in the oxidative layer of the soil was higher than in the reductive layer. The pattern of change differed according to soil properties and nitrogen fertilizers. In general, nitrification in the urea and SCU plots was greater than in the $(NH_4)_2SO_4$ plot. In the reductive layer, the concentration of $NO_3-N$ was very low until 30 days of incubation and thereafter increased slightly. 6. Based on the concentrations of $NH_4-N$ and $NO_3-N$ in stagnant water and soil, it was assumed that denitrification in the urea and SCU plots was higher than in the $NH_4Cl$ and $(NH_4)_2SO_4$ plots, and that denitrified nitrogen over the incubation period was above 50%.


Application and Expansion of the Harm Principle to the Restrictions of Liberty in the COVID-19 Public Health Crisis: Focusing on the Revised Bill of the March 2020 「Infectious Disease Control and Prevention Act」 (코로나19 공중보건 위기 상황에서의 자유권 제한에 대한 '해악의 원리'의 적용과 확장 - 2020년 3월 개정 「감염병의 예방 및 관리에 관한 법률」을 중심으로 -)

  • You, Kihoon;Kim, Dokyun;Kim, Ock-Joo
    • The Korean Society of Law and Medicine
    • /
    • v.21 no.2
    • /
    • pp.105-162
    • /
    • 2020
  • In the pandemic of an infectious disease, restrictions of individual liberty have been justified in the name of public health and the public interest. In March 2020, the National Assembly of the Republic of Korea passed the revised bill of the 「Infectious Disease Control and Prevention Act」. The revised bill newly established the legal basis for forced testing and disclosure of the information of confirmed cases, and also raised the penalties for violation of self-isolation and refusal of treatment. This paper examines whether and how these liberty-limiting clauses can be justified, and if so, on what ethical and philosophical grounds. The authors review theories of the philosophy of law related to the justifiability of liberty-limiting measures by the state, and conceptualize the dual aspect of applying the liberty-limiting principle to the infected patient. In the COVID-19 pandemic crisis, the infected person became the 'Patient as Victim and Vector (PVV)', situated in the overlapping area of 'harm to self' and 'harm to others.' In order to apply the liberty-limiting principle proposed by Joel Feinberg to a pandemic with uncertainties, it is necessary to extend the harm principle from 'harm' to 'risk'. Under a crisis with many uncertainties like the COVID-19 pandemic, this shift from 'harm' to 'risk' justifies the state's preemptive limitation of individual liberty based on the precautionary principle. At the same time, this raises concerns of overcriminalization, i.e., too much limitation of individual liberty without sufficient grounds. In this article, we propose principles for balancing the precautionary principle for preemptive restrictions of liberty against the concerns of overcriminalization. A public health crisis such as the COVID-19 pandemic requires a population approach, in which the 'population' rather than the 'individual' serves as the unit of analysis. 
We propose a second expansion of the harm principle, applying it to the 'population' in order to deal with the public interest and public health. The new concept of 'risk to population,' derived from the two arguments stated above, should be introduced to explain a public health crisis like the COVID-19 pandemic. We theorize 'the extended harm principle' to include 'risk to population' as a third liberty-limiting principle, following 'harm to others' and 'harm to self.' Lastly, we examine whether the restrictions of liberty in the revised 「Infectious Disease Control and Prevention Act」 can be justified under the extended harm principle. First, we conclude that forced isolation of infected patients can be justified in a pandemic situation as satisfying 'risk to population.' Second, forced testing for COVID-19 does not violate the extended harm principle either, based on the high infectivity of asymptomatic infected people. Third, however, the provision of forced treatment cannot be justified, under either the traditional harm principle or the extended harm principle. Therefore, additional clauses are necessary in the provision in order to justify the punishment of treatment refusal even in a pandemic.

Spawning Patterns of Three Bitterling Fishes (Pisces: Acheilognathinae) in Relation to the Shell Size of Host Mussels (Unio douglasiae sinuolatus) (납자루아과(Pisces: Acheilognathinae) 담수어류 3종의 숙주조개(작은말조개; Unio douglasiae sinuolatus) 크기에 대한 산란양상)

  • Choi, Hee-kyu;Lee, Hyuk Je
    • Korean Journal of Environment and Ecology
    • /
    • v.33 no.2
    • /
    • pp.202-215
    • /
    • 2019
  • This study investigated the spawning preference of Acheilognathinae fishes in relation to the shell size of host mussels, after identifying the species of eggs and fry in the host mussels using our recently developed RFLP (Restriction Fragment Length Polymorphism) molecular marker, at four sites [Hongcheon Naechoncheon (HN) and Deokchicheon (HD) in the North Han River basin, and Jeongseon Goljicheon (JG) and Joyanggang (JJ) in the South Han River basin] in South Korea during May of each year between 2015 and 2018. The Acheilognathinae fishes observed at the study sites comprised one species (Acheilognathus signifer) at HN and JG, three species (Rhodeus uyekii, A. signifer, and Acheilognathus yamatsutae) at HD, and two species (A. signifer and A. yamatsutae) at JJ, and we collected 982 host mussels (Unio douglasiae sinuolatus) inhabiting the four sites. Using the RFLP molecular marker, we confirmed 646 eggs and fry of Acheilognathinae fishes (454 A. signifer, 43 R. uyekii, and 149 A. yamatsutae) in U. douglasiae sinuolatus (N=163; 16.6%). We compared the average shell length, shell height, and shell width of mussels with [presence] eggs/fry and mussels without [absence] eggs/fry to examine the spawning preference according to the size of host mussels at each site. The results show that the shell length, shell height, and shell width of mussels with eggs/fry were significantly larger (by 1.98 mm, 0.85 mm, and 0.73 mm, respectively; Mann-Whitney U test, P=0.002) than those of mussels without eggs/fry at HD, where three species cohabited. Although the shell length, shell height, and shell width of mussels with eggs/fry were also larger at the other three sites, the differences were not statistically significant. In addition, we analyzed the mean number of spawned eggs and fry of each species and found $9.31{\pm}5.94$ for R. uyekii, $2.86{\pm}2.45$ for A. signifer, and $2.50{\pm}1.32$ for A. yamatsutae. R. uyekii spawned 6.45-6.81 more eggs than A. signifer and A. yamatsutae on average per mussel, and the difference was statistically significant (Kruskal-Wallis test, P < 0.001). These findings indicate that the three Acheilognathinae species tend to prefer larger mussels as their spawning hosts, and that this tendency strengthens as the number of cohabiting bitterling species increases. Moreover, A. signifer and A. yamatsutae spawned a smaller number of eggs evenly across more host mussels, while R. uyekii spawned many eggs on relatively few mussels. We found mussels (N=4) containing the eggs/fry of two coexisting species, A. signifer and A. yamatsutae, at HD and JJ, where two or more bitterling species occurred. This suggests interspecific competition between Acheilognathinae fishes for the same mussel resources for spawning when two or more species cohabit. This study is expected to help better understand the spawning patterns and reproductive ecology of Acheilognathinae fishes, which will provide insightful information for advancing our understanding of their ecological relationships - mutualism or host-parasitism - with host mussels.
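The site-level comparisons above rest on rank-based tests. As a rough illustration of the Mann-Whitney U comparison used for the HD site, here is a dependency-free sketch; the shell-length values and sample sizes are invented, not the study's data.

```python
# Illustrative sketch (not the authors' code): comparing shell lengths of
# host mussels with vs. without bitterling eggs/fry via a Mann-Whitney U
# statistic. All data values below are hypothetical.

def ranks(values):
    """Return 1-based ranks; tied values share the average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied positions, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def mann_whitney_u(x, y):
    """U statistic for sample x against sample y (rank-sum form)."""
    r = ranks(list(x) + list(y))
    r1 = sum(r[: len(x)])                     # rank sum of the first sample
    return r1 - len(x) * (len(x) + 1) / 2

with_eggs = [34.1, 35.0, 33.8, 36.2, 34.9]    # hypothetical shell lengths (mm)
without_eggs = [32.0, 31.5, 33.0, 32.4, 31.9]
u = mann_whitney_u(with_eggs, without_eggs)
print(u)  # 25.0 -- maximal U here: every 'with' shell outranks every 'without'
```

In practice one would compare U against its null distribution (or use a library routine) to obtain the P-values reported in the abstract.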

A Study on the 1889 'Nanjukseok' (Orchid, Bamboo and Rock) Paintings of Seo Byeong-o (석재 서병오(1862-1936)의 1889년작 난죽석도 연구)

  • Choi, Kyoung Hyun
    • Korean Journal of Heritage: History & Science
    • /
    • v.51 no.4
    • /
    • pp.4-23
    • /
    • 2018
  • Seo Byeong-o (徐丙五, 1862-1936) played a central role in the formation of the Daegu artistic community-which advocated artistic styles combining poetry, calligraphy and painting-during the Japanese colonial period, when the introduction of the Western concept of 'art' led to the adoption of Japanese and Western styles of painting in Korea. Seo first entered the world of calligraphy and painting after meeting Lee Ha-eung (李昰應, 1820-1898) in 1879, but his career as a scholar-artist only began in earnest after Korea was annexed by Japan in 1910. Seo's oeuvre can be broadly divided into three periods. In his initial period of learning, from 1879 to 1897, his artistic activity was largely confined to copying works from Chinese painting albums and painting works in the "Four Gentlemen" genre, influenced by the work of Lee Ha-eung, in his spare time. This may have been because Seo's principal aim at this time was to further his career as a government official. His subsequent period of development, which lasted from 1898 until 1920, saw him play a leading social role in such areas as the patriotic enlightenment movement until 1910, after which he reoriented his life to become a scholar-artist. During this period, Seo explored new styles based on the orchid paintings of Min Yeong-ik (閔泳翊, 1860-1914), whom he met during his second trip to Shanghai, and on the bamboo paintings of Chinese artist Pu Hua (蒲華, 1830-1911). At the same time, he painted in various genres including landscapes, flowers, and gimyeong jeolji (器皿折枝; still life with vessels and flowers). In his final mature period, from 1921 to 1936, Seo divided his time between Daegu and Seoul, becoming a highly active calligrapher and painter in Korea's modern art community. By this time his unique personal style, characterized by broad brush strokes and the use of abundant ink in orchid and bamboo paintings, was fully formed. 
Records on, and extant works from, Seo's early period are particularly rare, thus confining knowledge of his artistic activities and painting style largely to the realm of speculation. In this respect, eleven recently revealed nanjukseok (蘭竹石圖; orchid, bamboo and rock) paintings, produced by Seo in 1889, provide important clues about the origins and standards of his early-period painting style. This study uses a comparative analysis to confirm that Seo's orchid paintings show the influence of the early gunran (群蘭圖; orchid) and seongnan (石蘭圖; rock and orchid) paintings produced by Lee Ha-eung before his arrest by Qing troops in July 1882. Seo's bamboo paintings appear to show both that he adopted the style of Zheng Xie (鄭燮, 1693-1765) of the Yangzhou School (揚州畵派), a style widely known in Seoul from the late eighteenth century onward, and of Heo Ryeon (許鍊, 1809-1892), a student of Joseon artist Kim Jeong-hui (金正喜,1786-1856), and that he attempted to apply a modified version of Lee Ha-eung's seongnan painting technique. It was not possible to find other works by Seo evincing a direct relationship with the curious rocks depicted in his 1889 paintings, but I contend that they show the influence of both the late-nineteenth-century-Qing rock painter Zhou Tang (周棠, 1806-1876) and the curious rock paintings of the middle-class Joseon artist Jeong Hak-gyo (丁學敎, 1832-1914). In conclusion, this study asserts that, for his 1889 nanjukseok paintings, Seo Byeong-o adopted the styles of contemporary painters such as Heo Ryeon and Jeong Hak-gyo, whom he met during his early period at the Unhyeongung through his connection with its occupant, Lee Ha-eung, and those of artists such as Zheng Xie and Zhou Tang, whose works he was able to directly observe in Korea.

Robo-Advisor Algorithm with Intelligent View Model (지능형 전망모형을 결합한 로보어드바이저 알고리즘)

  • Kim, Sunwoong
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.39-55
    • /
    • 2019
  • Recently, banks and large financial institutions have introduced many Robo-Advisor products. A Robo-Advisor is an automated service that produces an optimal asset-allocation portfolio for investors using financial engineering algorithms, without any human intervention. Since its first introduction on Wall Street in 2008, the market has grown to 60 billion dollars and is expected to expand to 2,000 billion dollars by 2020. Since Robo-Advisor algorithms suggest asset-allocation outputs to investors, mathematical or statistical asset-allocation strategies are applied. The mean-variance optimization model developed by Markowitz is the typical asset-allocation model. The model is simple but quite intuitive: assets are allocated so as to minimize portfolio risk while maximizing expected portfolio return, using optimization techniques. Despite its theoretical background, both academics and practitioners find that the standard mean-variance optimization portfolio is very sensitive to the expected returns calculated from past price data. Corner solutions are often found, with the allocation concentrated in only a few assets. The Black-Litterman optimization model overcomes these problems by choosing a neutral Capital Asset Pricing Model equilibrium point. Implied equilibrium returns of each asset are derived from the equilibrium market portfolio through reverse optimization. The Black-Litterman model uses a Bayesian approach to combine subjective views on the price forecasts of one or more assets with the implied equilibrium returns, resulting in new estimates of risk and expected returns. These new estimates can produce an optimal portfolio via the well-known Markowitz mean-variance optimization algorithm. If the investor does not have any views on his asset classes, the Black-Litterman optimization model produces the same portfolio as the market portfolio. What if the subjective views are incorrect? 
Surveys of the performance of stocks recommended by securities analysts show very poor results. Therefore, incorrect views combined with implied equilibrium returns may produce very poor portfolio outputs for Black-Litterman model users. This paper suggests an objective investor-views model based on Support Vector Machines (SVM), which have shown good performance in stock price forecasting. An SVM is a discriminative classifier defined by a separating hyperplane. Linear, radial basis and polynomial kernel functions are used to learn the hyperplanes. Input variables for the SVM are returns, standard deviations, Stochastic %K and price parity degree for each asset class. The SVM outputs expected stock price movements and their probabilities, which are used as input variables in the intelligent views model. The stock price movements are categorized into three phases: down, neutral and up. The expected stock returns make up the P matrix and their probability results are used in the Q matrix. The implied equilibrium returns vector is combined with the intelligent views matrix, yielding the Black-Litterman optimal portfolio. For comparison, the Markowitz mean-variance optimization model and the risk parity model are used. The value-weighted market portfolio and the equal-weighted market portfolio are used as benchmark indexes. We collect the 8 KOSPI 200 sector indexes from January 2008 to December 2018, comprising 132 monthly index values. The training period is from 2008 to 2015 and the testing period is from 2016 to 2018. Our suggested intelligent views model, combined with the implied equilibrium returns, produced the optimal Black-Litterman portfolio. The out-of-sample portfolio showed better performance than the well-known Markowitz mean-variance optimization portfolio, the risk parity portfolio and the market portfolio. The total return of the 3-year Black-Litterman portfolio was 6.4%, the highest value. 
The maximum drawdown was -20.8%, also the lowest value. The Sharpe ratio, which measures the return-to-risk ratio, showed the highest value, 0.17. Overall, our suggested views model shows the possibility of replacing subjective analysts' views with an objective views model for practitioners applying Robo-Advisor asset-allocation algorithms in real trading.
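The reverse-optimization and view-blending steps described above can be sketched numerically. The following is a minimal illustration of the Black-Litterman update with a single view, using invented two-asset inputs rather than the paper's KOSPI 200 sector data, and a plain absolute view in place of the SVM-generated P and Q matrices.

```python
# Minimal Black-Litterman sketch for two assets with one subjective view.
# All numbers are illustrative; 2x2 matrix algebra is written out to keep
# the example dependency-free.

def inv2(m):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mul(a, b):
    """Plain matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

# Assumed market inputs: covariance, market-cap weights, risk aversion, tau.
sigma = [[0.04, 0.01], [0.01, 0.09]]
w_mkt = [[0.6], [0.4]]
delta, tau = 2.5, 0.05

# Reverse optimization: implied equilibrium returns  Pi = delta * Sigma * w_mkt
pi = mul([[delta * s for s in row] for row in sigma], w_mkt)  # approx [[0.07], [0.105]]

# One absolute view: "asset 2 will return 8%", with view variance omega.
P, Q, omega = [0.0, 1.0], 0.08, 0.02

# Posterior mean:
#   mu = inv(inv(tau*Sigma) + P'P/omega) * (inv(tau*Sigma)*Pi + P'Q/omega)
ts_inv = inv2([[tau * s for s in row] for row in sigma])
A = [[ts_inv[i][j] + P[i] * P[j] / omega for j in range(2)] for i in range(2)]
b = mul(ts_inv, pi)
b = [[b[i][0] + P[i] * Q / omega] for i in range(2)]
mu_bl = mul(inv2(A), b)

# Unconstrained mean-variance weights from the posterior: w = inv(Sigma)*mu/delta
w_opt = [[row[0] / delta] for row in mul(inv2(sigma), mu_bl)]
print([round(m[0], 4) for m in mu_bl])  # posterior returns, pulled toward the view
```

Note how the posterior return of asset 2 lands between its implied equilibrium return (10.5%) and the view (8%), weighted by the confidence implied by `tau` and `omega`; with no views the posterior reduces to the equilibrium returns and the market portfolio is recovered.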

An Empirical Study on Statistical Optimization Model for the Portfolio Construction of Sponsored Search Advertising(SSA) (키워드검색광고 포트폴리오 구성을 위한 통계적 최적화 모델에 대한 실증분석)

  • Yang, Hognkyu;Hong, Juneseok;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.167-194
    • /
    • 2019
  • This research starts from four basic concepts confronted when making decisions in keyword bidding: incentive incompatibility, limited information, myopia, and the decision variable. In order to make these concepts concrete, four framework approaches are designed as follows: a strategic approach for incentive incompatibility, a statistical approach for limited information, alternative optimization for myopia, and a new model approach for the decision variable. The purpose of this research is to propose a statistical optimization model for constructing the portfolio of Sponsored Search Advertising (SSA) from the sponsor's perspective, through empirical tests, which can be used in portfolio decision making. Previous research to date formulates the CTR estimation model using CPC, Rank, Impressions, CVR, etc., individually or collectively as independent variables. However, many of these variables are not controllable in keyword bidding. Only CPC and Rank can be used as decision variables in the bidding system. The classical SSA model is designed on the basic assumption that CPC is the decision variable and CTR is the response variable. However, this classical model faces many hurdles in the estimation of CTR. The main problem is the uncertainty between CPC and Rank. In keyword bidding, CPC fluctuates continuously even at the same Rank. This uncertainty raises questions about the credibility of CTR, along with practical management problems. Sponsors make decisions in keyword bidding under limited information, and a strategic portfolio approach based on statistical models is necessary. In order to solve the problem of the classical SSA model, the new SSA model frame is designed on the basic assumption that Rank is the decision variable. Rank is proposed as the best decision variable for predicting CTR in many papers. Further, most search engine platforms provide options and algorithms that make it possible to bid by Rank. 
Sponsors can thus participate in keyword bidding with Rank. Therefore, this paper tests the validity of this new SSA model and its applicability to constructing the optimal portfolio in keyword bidding. The research process is as follows: in order to perform the optimization analysis for constructing the keyword portfolio under the new SSA model, this study proposes criteria for categorizing keywords, selects representative keywords for each category, shows the non-linear relationship, screens the scenarios for CTR and CPC estimation, selects the best-fit model through Goodness-of-Fit (GOF) tests, formulates the optimization models, confirms the spillover effects, and suggests a modified optimization model reflecting spillover, together with some strategic recommendations. Tests of the optimization models using these CTR/CPC estimation models are performed empirically with the objective functions of (1) maximizing CTR (CTR optimization model) and (2) maximizing expected profit reflecting CVR (CVR optimization model). Both the CTR and CVR optimization test results confirm significant improvements under the suggested SSA model, and this model is valid for constructing the keyword portfolio using the CTR/CPC estimation models suggested in this study. However, one critical problem is found in the CVR optimization model: important keywords are excluded from the keyword portfolio because of the myopia of their immediately low profit at present. In order to solve this problem, a Markov Chain analysis is carried out and the concepts of Core Transit Keyword (CTK) and Expected Opportunity Profit (EOP) are introduced. The revised CVR optimization model is proposed, tested, and shown to be valid for constructing the portfolio. Strategic guidelines and insights are as follows: Brand keywords are usually dominant in almost every aspect of CTR, CVR, expected profit, etc. 
It is found that Generic keywords are the CTK and have spillover potential, which can increase consumers' awareness and lead them to Brand keywords. This is why Generic keywords should be emphasized in keyword bidding. The contributions of the thesis are to propose the novel SSA model based on Rank as the decision variable, to propose managing the keyword portfolio by categories according to the characteristics of keywords, to propose statistical modelling and management based on Rank in constructing the keyword portfolio, and to perform empirical tests and propose new strategic guidelines focusing on the CTK, together with the modified CVR optimization objective function reflecting the spillover effect instead of the previous expected profit models.
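Under the new SSA model, Rank is the decision variable, so portfolio construction amounts to choosing a Rank (or no bid) per keyword subject to a spend constraint. A toy version of the CTR-maximization formulation, with invented CTR/CPC estimates standing in for the paper's fitted estimation models, can be solved by exhaustive enumeration:

```python
from itertools import product

# Toy sketch of CTR optimization under the new SSA model: Rank is the
# decision variable per keyword (None = do not bid), and we maximize
# expected clicks subject to a spend cap. All keyword names, CTR/CPC
# values and impression counts are invented for illustration.

keywords = {
    # keyword: {rank: (estimated CTR at that rank, estimated CPC)}
    "brand_shoes":   {1: (0.12, 900), 2: (0.08, 600), 3: (0.05, 400)},
    "running_shoes": {1: (0.06, 700), 2: (0.04, 450), 3: (0.025, 300)},
    "sneakers_sale": {1: (0.03, 500), 2: (0.02, 300), 3: (0.012, 200)},
}
impressions = {"brand_shoes": 5000, "running_shoes": 8000, "sneakers_sale": 12000}
budget = 700_000

kws = list(keywords)
best = (0.0, None)
# Exhaustive search is fine at this scale: (ranks + no-bid) ** keywords = 4**3.
for choice in product(*[[None] + list(keywords[k]) for k in kws]):
    cost = clicks = 0.0
    for kw, r in zip(kws, choice):
        if r is None:
            continue                      # keyword left out of the portfolio
        ctr, cpc = keywords[kw][r]
        clicks += impressions[kw] * ctr   # expected clicks at this rank
        cost += impressions[kw] * ctr * cpc  # expected spend (pay per click)
    if cost <= budget and clicks > best[0]:
        best = (clicks, choice)

print(best)  # best expected clicks and the rank chosen per keyword
```

With these numbers the optimum mixes middle and top ranks rather than always bidding for Rank 1, which is the intuition behind treating the portfolio, not each keyword in isolation, as the unit of optimization. The paper's CVR-based objective would replace expected clicks with expected profit in the same structure.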

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Corporate defaults have a ripple effect on the local and national economy, in addition to affecting stakeholders such as the managers, employees, creditors, and investors of bankrupt companies. Before the Asian financial crisis, the Korean government only analyzed SMEs and tried to improve the forecasting power of a single default prediction model, rather than developing various corporate default models. As a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even after that, the analysis of past corporate defaults focused on specific variables, and when the government restructured companies immediately after the global financial crisis, it focused only on certain main variables such as the 'debt ratio'. A multifaceted study of corporate default prediction models is essential to serve diverse interests and to avoid situations like the 'Lehman Brothers case' of the global financial crisis, in which everything collapses in a single moment. The key variables used in corporate default prediction vary over time: Deakin's (1972) re-examination of Beaver's (1967, 1968) and Altman's (1968) analyses shows that the major factors affecting corporate failure have changed. In Grice's (2001) study, a similar shift in the importance of predictive variables was found using Zmijewski's (1984) and Ohlson's (1980) models. However, past studies use static models, and most of them do not consider changes that occur over the course of time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for the time-dependent bias by means of a time series analysis algorithm reflecting dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study uses 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets covering 7, 2, and 1 years, respectively. 
In order to construct a bankruptcy model that is consistent over time, we first train a time series deep learning model using the data before the financial crisis (2000~2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted with validation data including the financial crisis period (2007~2008). As a result, we construct a model that shows patterns similar to the results on the training data and exhibits excellent predictive power. After that, each bankruptcy prediction model is retrained by integrating the training data and validation data (2000~2008), applying the optimal parameters found in validation. Finally, each corporate default prediction model is evaluated and compared using test data (2009), based on the models trained over the nine years. In this way, the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression to the existing variable selection methods (multivariate discriminant analysis, logit model), it is shown that the deep learning time series model based on the three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups. The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms are compared. In the case of corporate data, there are limitations of 'nonlinear variables', 'multi-collinearity' among variables, and 'lack of data'. 
While the logit model is nonlinear, the Lasso regression model solves the multi-collinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology is moving from simple human analysis to automated AI analysis, and ultimately toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis at corporate default prediction modeling, and more effective in predictive power. Through the Fourth Industrial Revolution, the current government and overseas governments are working hard to integrate such systems into the everyday life of their nations and societies, yet deep learning time series research for the financial industry is still insufficient. This is an initial study on deep learning time series analysis of corporate defaults, and it is hoped that it will serve as comparative material for non-specialists beginning studies that combine financial data with deep learning time series algorithms.
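The chronological split and the sequence construction an RNN/LSTM consumes can be sketched as follows; the firm records are invented, and only the splitting and windowing logic reflects the described design (train 2000-2006, validate 2007-2008, test 2009).

```python
# Sketch of the paper's year-based split: train on pre-crisis years
# (2000-2006), tune on the crisis period (2007-2008), evaluate on 2009.
# The two firms and their ratios below are invented; the point is the logic.

records = [
    # (firm_id, year, debt_ratio, defaulted)
    ("A", y, 0.4 + 0.01 * (y - 2000), 0) for y in range(2000, 2010)
] + [
    ("B", y, 0.9, 1 if y >= 2008 else 0) for y in range(2000, 2010)
]

def split_by_year(rows, train_end, valid_end):
    """Chronological split, never shuffled, to avoid look-ahead bias."""
    train = [r for r in rows if r[1] <= train_end]
    valid = [r for r in rows if train_end < r[1] <= valid_end]
    test = [r for r in rows if r[1] > valid_end]
    return train, valid, test

def make_sequences(rows, window=3):
    """Per-firm sliding windows of ratios -> label of the window's last year:
    the (sequence, target) shape a time-series deep learning model consumes."""
    by_firm = {}
    for firm, year, ratio, label in sorted(rows, key=lambda r: (r[0], r[1])):
        by_firm.setdefault(firm, []).append((ratio, label))
    seqs = []
    for series in by_firm.values():
        for i in range(len(series) - window + 1):
            chunk = series[i:i + window]
            seqs.append(([x[0] for x in chunk], chunk[-1][1]))
    return seqs

train, valid, test = split_by_year(records, train_end=2006, valid_end=2008)
print(len(train), len(valid), len(test))   # 14 4 2
print(len(make_sequences(train)))          # 10: (7-3+1) windows x 2 firms
```

The actual study feeds such windows into RNN/LSTM networks; any of the non-time-series baselines it compares against would instead consume each year's row independently.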

Development of New Variables Affecting Movie Success and Prediction of Weekly Box Office Using Them Based on Machine Learning (영화 흥행에 영향을 미치는 새로운 변수 개발과 이를 이용한 머신러닝 기반의 주간 박스오피스 예측)

  • Song, Junga;Choi, Keunho;Kim, Gunwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.67-83
    • /
    • 2018
  • The Korean film industry, which had grown significantly every year, finally exceeded 200 million cumulative admissions in 2013. However, starting from 2015, the Korean film industry entered a period of low growth and experienced negative growth in 2016. To overcome this difficulty, stakeholders such as production companies, distribution companies, and multiplexes have attempted to maximize market returns using strategies for predicting market change and responding to such change immediately. Since a film is an experiential product, it is not easy to predict its box office record and the initial number of audiences before it is released, and the number of audiences fluctuates with a variety of factors after release. So production companies and distribution companies try to secure a guaranteed number of screens from multiplex chains at the opening of a newly released film. However, the multiplex chains tend to set the screening schedule only one week at a time and then determine the number of screenings for the forthcoming week based on the box office record and the evaluations of audiences. Many previous studies have dealt with the prediction of box office records of films. In the early stage, the research attempted to identify the factors affecting the box office record. Nowadays, many studies apply various analytic techniques to the previously identified factors in order to improve the accuracy of prediction and to explain the effect of each factor, rather than identifying new factors. However, most previous studies are limited in that they used the total number of audiences from opening to the end of the run as the target variable, which makes it difficult to predict and respond to market demand that changes dynamically. 
Therefore, the purpose of this study is to predict the weekly number of audiences of a newly released film so that stakeholders can respond flexibly and elastically to changes in the film's audience numbers. To that end, we considered the factors affecting box office used in previous studies and developed new factors not used previously, such as the order of opening of movies and the dynamics of sales. Using these comprehensive factors, we applied machine learning methods such as Random Forest, Multi-Layer Perceptron, Support Vector Machine, and Naive Bayes to predict the cumulative number of visitors from the first to the third week after a film's release. At the first and second weeks, we predicted the cumulative number of visitors for the forthcoming week of a released film, and at the third week, we predicted the film's total number of visitors. In addition, we also predicted the total number of cumulative visitors at both the first and second weeks using the same factors. As a result, we found that the accuracy of predicting the number of visitors for the forthcoming week was higher than that of predicting the total number in all three weeks, and that the accuracy of Random Forest was the highest among the machine learning methods used. This study has implications in that it 1) comprehensively considered various factors affecting the box office record that were barely addressed in previous research, such as the weekly audience rating after release, the weekly rank of the film after release, and the weekly sales share after release, and 2) suggested models that predict the weekly number of audiences of newly released films, so that stakeholders can flexibly and elastically respond to dynamically changing market demand.
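The weekly framing can be illustrated by how the training rows would be laid out: one row per film per elapsed week, with the next week's cumulative audience as the target. The film names, feature choices, and numbers below are invented, not the study's data.

```python
# Illustrative sketch of the weekly prediction framing: for each film, build
# one feature row per elapsed week (cumulative audience, weekly rank, weekly
# rating) and set the target to the NEXT week's cumulative audience -- the
# quantity the paper's models predict at weeks 1 and 2. Data are invented.

films = {
    "film_x": {"weekly_audience": [1_200_000, 800_000, 300_000],
               "weekly_rank": [1, 2, 4], "weekly_rating": [8.1, 7.9, 7.8]},
    "film_y": {"weekly_audience": [400_000, 500_000, 250_000],
               "weekly_rank": [3, 2, 5], "weekly_rating": [7.2, 7.5, 7.4]},
}

def make_rows(data):
    """One (features, target) pair per film per predictable week."""
    rows = []
    for name, f in data.items():
        cum = 0
        for week in range(len(f["weekly_audience"]) - 1):
            cum += f["weekly_audience"][week]
            features = {
                "film": name, "week": week + 1,
                "cum_audience": cum,                  # observed so far
                "rank": f["weekly_rank"][week],       # this week's rank
                "rating": f["weekly_rating"][week],   # this week's rating
            }
            target = cum + f["weekly_audience"][week + 1]  # next week's cumulative
            rows.append((features, target))
    return rows

rows = make_rows(films)
print(len(rows))   # 4 rows: 2 films x 2 predictable weeks
print(rows[0][1])  # 2000000: film_x's cumulative audience after week 2
```

Rows shaped like this would then be fed to any of the compared learners (e.g. a Random Forest regressor), one model per prediction horizon; the total-audience baseline would simply swap the target for the end-of-run cumulative count.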

A Study on the Tree Surgery Problem and Protection Measures in Monumental Old Trees (천연기념물 노거수 외과수술 문제점 및 보존 관리방안에 관한 연구)

  • Jung, Jong Soo
    • Korean Journal of Heritage: History & Science
    • /
    • v.42 no.1
    • /
    • pp.122-142
    • /
    • 2009
  • This study reviewed domestic and international theories for the maintenance and health enhancement of old and big trees, carried out an anatomical survey of operated parts of trees to assess the current status of domestic tree surgery together with a perception survey of an expert group, and drew the following conclusions in the course of suggesting a reform plan. First, an analysis of the correlations of the 67 subject trees with their ages, growth status, and surroundings revealed that surgery outcomes were closely related to positional characteristics and damage size, but little related to the filler materials used. Second, the most frequent size of affected part was the sheared-bough part under $0.09m^2$, and the hollow size by position was largest at 'root + stem' starting from behind the main root and stem. The correlation analysis elicited the same result in the group with low correlation. Third, problems were serious where fillers (especially urethane) were charged into the large hollows or exposed roots produced behind the root and stem part, or where those parts were surface-processed; the benefit of charging the hollow part was analyzed to be small. Fourth, the surface-processing of the fillers currently used (artificial bark) is mainly 'epoxy + woven fabric + cork', but it is not flexible, so it has produced frequent cracks and cracked surfaces at the joint with the tree-textured part. Fifth, the external status of the operated part correlated strongly with its closeness, surface condition, formation of adhesive tissue, and the internal survey result. Sixth, the practice most influential on flushing through wrong management of an old and big tree was soil banking, and wrong pruning was the main source of damage to the above-ground part; a small bough cut by the standard method can easily recover from its damage through the formation of adhesive tissue. 
Seventh, the parameter affecting the turnaround time of business related to old and big trees is 'the need for conscious reform of managers and related business'. Eighth, a reform plan in the institutional aspect can include arranging the laws and organizations for the management and preservation of old and big trees. This study, which prepared a reform plan through a status survey of the designated old and big trees, has the limitation of inducing the reform plan from a status survey based on individual research, and the weakness of not supporting its grounds with statistical data; this can be complemented by subsequent studies.
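The correlation analysis the abstract describes for the 67 surveyed trees can be sketched as follows. This is a minimal illustration with synthetic values; the variables (tree age, damage position category, hollow size) and the relationship among them are assumptions, not the study's measured data.

```python
# Sketch (not the authors' analysis): Pearson correlations between hollow
# size and candidate factors for 67 trees. All values are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_trees = 67
age = rng.uniform(100, 600, n_trees)        # estimated tree age, years (assumed)
position = rng.integers(0, 3, n_trees)      # damage position category (assumed)
# Hypothetical hollow size (m^2) driven mainly by age and position
hollow = 0.02 * age + 0.5 * position + rng.normal(0, 1.0, n_trees)

# Pairwise Pearson correlations of hollow size with each factor
r_age = np.corrcoef(hollow, age)[0, 1]
r_pos = np.corrcoef(hollow, position)[0, 1]
print(f"r(hollow, age) = {r_age:.2f}, r(hollow, position) = {r_pos:.2f}")
```

Comparing such coefficients across factors is what lets the study conclude that position and damage size matter while filler material does not.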

Application of Science for Interpreting Archaeological Materials(III) Characterization of Some Western Asia Glass Vessels from South Mound of Hwangnamdaechong (고고자료의 자연과학 응용(III) 황남대총(남분)의 일부 서역계 유리제품에 대한 과학적 특성 분류)

  • Kang, Hyung Tae;Cho, Nam Chul
    • Korean Journal of Heritage: History & Science
    • /
    • v.41 no.1
    • /
    • pp.5-19
    • /
    • 2008
  • Thirty-six samples of Western Asia glass vessel shards excavated from the South Mound of Hwangnamdaechong were each measured for thickness, pore size, and specific gravity, and analyzed for ten major compositions and thirteen trace elements. The colorless, greenish blue, and dark purple blue glass samples were well classified by principal component analysis (PCA). All glass shards from Hwangnamdaechong belonged to the soda glass system ($Na_2O-CaO-SiO_2$), with 14~17% $Na_2O$ and 5~6% CaO. The correlation coefficients of (MgO, $K_2O$) and (MnO, CuO) were above 0.90. The concentrations of the thirteen trace elements clearly differentiated the colorless, greenish blue, and dark blue glasses; these trace elements are thus important indices for studying the raw materials and origins of glass making. Colorless glass: the specific gravity is $1.50{\pm}0.04$. Circular or oval pores with a regular orientation are observed in the internal zone; the longest is about 0.35 mm. The raw material for sodium must be plant ash because the soda glasses are HCLA (high CaO, low $Al_2O_3$) and HMK (high MgO, high $K_2O$), suggesting Sasanian glass. The total amount of coloring agents in the colorless glass is below 1%, too small to account for any color. Greenish blue glass: the specific gravity is $1.58{\pm}0.04$. Fine pores of 0.1~0.2 mm are dispersed in the internal zone. The glasses are HCLA and HMK, so the greenish blue glass also used plant ash as its sodium source, the same as the colorless glass, and is likewise suggested to be Sasanian. The total amount of coloring agents is about 4%, with MnO, $Fe_2O_3$, and CuO at work. Dark purple blue glass: the specific gravity is $1.48{\pm}0.19$. Pores are rare in the internal zone. The samples are HCLA and LMK (low MgO, low $K_2O$), suggesting Roman glass. 
The raw material for sodium is estimated to be natron. The total amount of coloring agents in the dark purple blue glass is about 3%, by $Fe_2O_3$ and CuO. These data for Western Asia glass shards from the South Mound of Hwangnamdaechong can serve in the future as standard data for comparison with glasses from other Korean tombs and from foreign areas.
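The PCA-based classification the abstract describes can be sketched as below. The oxide values are synthetic illustrations of the plant-ash (high MgO, high $K_2O$) versus natron (low MgO, low $K_2O$) contrast, not the measured compositions; the group sizes are chosen only so the total matches the 36 samples.

```python
# Sketch (not the authors' analysis): separating glass groups by major-oxide
# composition with PCA. All compositions below are synthetic assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

def group(na2o, cao, mgo, k2o, n=12):
    """Synthetic Na2O/CaO/MgO/K2O compositions (wt%) around a centroid."""
    center = np.array([na2o, cao, mgo, k2o])
    return center + rng.normal(0, 0.3, (n, 4))

# Three hypothetical color groups: plant-ash (HMK) vs natron (LMK) sodium source
X = np.vstack([
    group(15.5, 5.5, 3.5, 2.5),   # colorless (HMK, plant ash)
    group(16.0, 5.8, 3.8, 2.8),   # greenish blue (HMK, plant ash)
    group(14.5, 5.2, 0.5, 0.5),   # dark purple blue (LMK, natron)
])

# Standardize so each oxide contributes comparably, then project to 2 PCs
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
print("first two PC scores, shape:", scores.shape)
```

In a plot of the first two principal components, the natron-based group separates from the two plant-ash groups along the component dominated by MgO and $K_2O$, which is the kind of grouping the study reports for the colorless, greenish blue, and dark purple blue shards.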