Park Sang-Jung;Cho Min;Yoon Je-Yong;Jun Yong-Sung;Rim Yeon-Taek;Jin Ing-Nyol;Chung Hyen-Mi
Journal of Life Science
/
v.16
no.3
s.76
/
pp.534-539
/
2006
In the ozone disinfection unit process of a piston-type batch reactor with continuous ozone analysis using a flow injection analysis (FIA) system, the CT values for 1-log inactivation of Cryptosporidium parvum by the DAPI/PI and excystation viability assays were 1.8~2.2 mg/L·min at 25°C and 9.1 mg/L·min at 5°C, respectively. At the low temperature, the ozone requirement rises 4~5 times in order to achieve the same level of disinfection as at room temperature. In a 40 L scale pilot plant with continuous flow and a constant 5-minute retention time, disinfection effects were evaluated using the excystation, DAPI/PI, and cell infection methods at the same time. About 0.2-log inactivation of Cryptosporidium by the DAPI/PI and excystation assays, and 1.2-log inactivation by the cell infectivity assay, were estimated at a CT value of about 8 mg/L·min. The difference between the DAPI/PI and excystation assays was not significant in evaluating CT values of Cryptosporidium by ozone in both the piston and pilot reactor experiments. However, in the pilot study there was a significant difference between the viability assays, which are based on intact cell wall structure and function, and the infectivity assay, which is based on the development of oocysts into sporozoites and merozoites. The developmental stage should be more sensitive to ozone oxidation than the cell wall intactness of oocysts. The difference in CT values estimated by the viability assays between the two studies may partly come from underestimation of the residual ozone concentration due to manual monitoring in the pilot study, or from the differences in reactor scale (50 mL vs 40 L) and type (batch vs continuous). The adequate It value (UV dose) to disinfect 1 and 2 log scales of Cryptosporidium in the UV irradiation process was 25 mWs/cm² and 50 mWs/cm², respectively, at 25°C by DAPI/PI. At 5°C, 40 mWs/cm² was required for disinfecting 1 log of Cryptosporidium, and 80 mWs/cm² for 2 logs. The roughly 60% increase in the required It value to compensate for the 20°C decrease in temperature was attributed to the low-voltage, low-output lamp emitting weaker UV rays at lower temperatures.
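As a quick illustration of how CT values translate into operating requirements, the sketch below converts a CT requirement (ozone concentration × contact time) into a required contact time for a given residual ozone concentration. The CT figures are taken from the abstract above; the helper function and the example residual concentration are illustrative assumptions, not part of the original study.

```python
# Sketch: converting a CT requirement (mg/L·min) into a required contact time.
# CT values below are from the abstract; the residual ozone concentration
# in the example is a hypothetical operating point.

# CT requirements for 1-log inactivation of C. parvum (viability assays)
CT_1LOG = {25: 2.2, 5: 9.1}  # mg/L·min at 25°C (upper bound) and 5°C

def required_contact_time(ct_mg_min_per_l: float,
                          residual_ozone_mg_per_l: float) -> float:
    """Contact time (min) = CT requirement / residual ozone concentration."""
    return ct_mg_min_per_l / residual_ozone_mg_per_l

if __name__ == "__main__":
    residual = 1.5  # mg/L, hypothetical residual ozone concentration
    for temp_c, ct in CT_1LOG.items():
        t = required_contact_time(ct, residual)
        print(f"{temp_c}°C: CT = {ct} mg/L·min -> {t:.1f} min at {residual} mg/L")
```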
Kim, Dong-Uk;Kim, Ji-Hoon;Kim, Sung-Mi;Kwon, Ky-Beom
The Korean Journal of Air & Space Law and Policy
/
v.32
no.1
/
pp.225-285
/
2017
In regard to the regulations related to RPA (Remotely Piloted Aircraft), which some countries call UA (Unmanned Aircraft), ICAO stipulates detailed regulations in the 'RPAS Manual (2015)' based on the 1944 'Chicago Convention', and enacts provisions for the rules on UAS or RPAS. Other countries stipulate them in instruments such as the Federal Aviation Regulations (14 CFR) and Public Law 112-95 in the United States; the Air Transport Act, Air Transport Order, and Air Transport Authorization Order (through revision in the "Regulations on Operating Rules for Unmanned Aerial Systems"), based on EASA Regulation (EC) No. 216/2008, for unmanned aircraft under 150 kg in Germany; and the Civil Aviation Act (CAA 1988) and Civil Aviation Safety Regulations Part 101 (CASR Part 101) in Australia. Commonly, these laws exclude model aircraft for leisure purposes and require pilots on the ground, not onboard the aircraft, capable of controlling the RPA. The laws also require that all management necessary to operate RPAs and pilots safely and efficiently be carried out under the structure of the unmanned aircraft system, within the scope of the regulations. Each country classifies the RPA as an aircraft of less than 25 kg; Australia and Germany further break down RPAs at lower weights. ICAO stipulates all general aviation operations, including commercial operation, in accordance with Annex 6 of the Chicago Convention, and this also applies to RPA operations; however, passenger transportation using RPAs is excluded. If the operational scope of an RPA includes the airspace of another country, special permission of the relevant country shall be obtained 7 days before the flight date, with a detailed flight plan submitted. In accordance with Federal Aviation Regulation Part 107 in the United States, a small non-leisure RPA may be operated within the line of sight of a responsible navigator or observer during the day, at speeds up to 161 km/h (87 knots) and heights up to 122 m (400 ft) above the surface or water. An RPA must yield its flight path to other aircraft, and it is prohibited to carry dangerous materials or to operate more than one RPA at the same time. In Germany, the regulations on UAS, except for leisure and sports, impose a duty to avoid airborne collisions and other provisions related to ground safety and individual privacy. Although a commercial UAS of 5 kg or less can be freely operated without approval under the relaxed regulatory requirements, all UAS, regardless of weight, must be operated below an altitude of 100 meters with continuous monitoring and pilot control. Australia was the first country to regulate unmanned aircraft, in 2001, and its regulations have influenced the unmanned aircraft laws of ICAO, the FAA, and EASA. In order to improve the utility of unmanned aircraft considered to be low-risk, the regulatory conditions were relaxed through the 2016 revision by adding the concept of the "Excluded RPA". An excluded RPA can be operated without special permission, even for commercial purposes. Furthermore, discussions on a new standard manual are being conducted for further flexibility of the current regulations.
Fama asserted that in an efficient market, one cannot devise a trading rule that consistently outperforms the average stock market return. This study aims to suggest a machine learning algorithm that improves the trading performance of an intraday short volatility strategy exploiting the asymmetric volatility spillover effect, and to analyze the resulting improvement. Generally, stock market volatility has a negative relation with stock market returns, and Korean stock market volatility is influenced by US stock market volatility. This volatility spillover effect is asymmetric: rises and falls in US stock market volatility influence the next day's volatility of the Korean stock market differently. We collected the S&P 500 index, VIX, KOSPI 200 index, and V-KOSPI 200 from 2008 to 2018. We found a negative relation between the S&P 500 and VIX, and between the KOSPI 200 and V-KOSPI 200. We also documented a strong volatility spillover effect from the VIX to the V-KOSPI 200. Interestingly, the spillover was asymmetric: whereas a VIX rise is fully reflected in the opening volatility of the V-KOSPI 200, a VIX fall influences the opening volatility only partially, and its influence lasts until the Korean market close. If the stock market is efficient, there is no reason for an asymmetric volatility spillover effect to exist; it is a counterexample to the efficient market hypothesis. To utilize this anomalous volatility spillover pattern, we analyzed an intraday short volatility selling (SVS) strategy. This strategy sells the Korean volatility market short in the morning after the US stock market volatility closes down, and takes no position after the VIX closes up. It produced a profit every year between 2008 and 2018, and the percent profitable is 68%. The strategy showed a higher average annual return of 129%, relative to the benchmark's average annual return of 33%. Its maximum drawdown (MDD) of -41% is also less severe than the benchmark's -101%. The Sharpe ratio of the SVS strategy, 0.32, is much greater than the benchmark's 0.08. The Sharpe ratio considers return and risk simultaneously and is calculated as return divided by risk; a high Sharpe ratio therefore indicates high performance when comparing strategies with different risk and return structures. Real-world trading gives rise to trading costs, including brokerage and slippage costs. When trading costs are considered, the performance difference between average annual returns of 76% and -10% becomes clear. To improve the performance of the suggested volatility trading strategy, we used the well-known SVM algorithm. The input variables are the VIX close-to-close return at day t-1, the VIX open-to-close return at day t-1, and the VK (V-KOSPI 200) open return at day t; the output is the up/down classification of the VK open-to-close return at day t. The training period is from 2008 to 2014 and the testing period is from 2015 to 2018. The kernel functions are the linear, radial basis, and polynomial functions. We suggest a modified short volatility strategy (m-SVS) that sells the VK in the morning when the SVM output is Down and takes no position when the output is Up. The trading performance was remarkably improved: over the 5-year testing period, the m-SVS strategy showed very high profit and low risk relative to the benchmark SVS strategy. The annual return of the m-SVS strategy is 123%, higher than that of the SVS strategy, and the risk measure, MDD, was also significantly improved, from -41% to -29%.
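To make the classification setup concrete, here is a minimal sketch of the SVM experiment described above, using scikit-learn. The column names and the synthetic placeholder data are illustrative assumptions; the three features, the up/down target, the three kernels, and the 2008-2014 / 2015-2018 split follow the abstract.

```python
# Minimal sketch of the SVM classifier described in the abstract.
# Synthetic placeholder series stand in for the VIX and V-KOSPI 200 (VK);
# replace with real data to reproduce the experiment.
import numpy as np
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
dates = pd.bdate_range("2008-01-01", "2018-12-31")
n = len(dates)
vix_close = 20 * np.exp(rng.normal(0, 0.05, n).cumsum())
vk_close = 18 * np.exp(rng.normal(0, 0.05, n).cumsum())
df = pd.DataFrame({
    "vix_open": vix_close * (1 + rng.normal(0, 0.01, n)),
    "vix_close": vix_close,
    "vk_open": vk_close * (1 + rng.normal(0, 0.01, n)),
    "vk_close": vk_close,
}, index=dates)

feats = pd.DataFrame({
    # VIX close-to-close return at day t-1
    "vix_cc_t1": df["vix_close"].pct_change().shift(1),
    # VIX open-to-close return at day t-1
    "vix_oc_t1": (df["vix_close"] / df["vix_open"] - 1).shift(1),
    # VK open return at day t (open vs. previous close)
    "vk_open_t": df["vk_open"] / df["vk_close"].shift(1) - 1,
    # Target: up/down of the VK open-to-close return at day t
    "y": (df["vk_close"] > df["vk_open"]).astype(int),
}).dropna()

X_cols = ["vix_cc_t1", "vix_oc_t1", "vk_open_t"]
train, test = feats.loc["2008":"2014"], feats.loc["2015":"2018"]

for kernel in ("linear", "rbf", "poly"):  # the three kernels compared
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel))
    clf.fit(train[X_cols], train["y"])
    print(f"{kernel}: test accuracy = {clf.score(test[X_cols], test['y']):.3f}")
```

In the paper's m-SVS rule, the fitted classifier's Down prediction would trigger a morning short position in the VK, and an Up prediction would mean staying out of the market.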
With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In deep learning sentiment analysis of English texts, the natural language sentences in the training and test datasets are usually converted into sequences of word vectors before being fed to the models. In this case, word vectors generally refer to vector representations of words obtained by splitting a sentence on space characters. There are several ways to derive word vectors, one of which is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data. These have been widely used in studies of sentiment analysis of reviews from various fields such as restaurants, movies, laptops, and cameras. Unlike in English, morphemes play an essential role in sentiment analysis and sentence structure analysis in Korean, a typical agglutinative language with developed postpositions and endings. A morpheme can be defined as the smallest meaningful unit of a language, and a word consists of one or more morphemes. For example, the word '예쁘고' consists of the morphemes '예쁘' (adjective stem) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word vector derivation mechanism to sentences divided into their constituent morphemes. At this point, several questions arise. What is the desirable range of POS (Part-Of-Speech) tags when deriving morpheme vectors for improving the classification accuracy of a deep learning model? Is it proper to apply a typical word vector model, which relies primarily on the form of words, to Korean, with its high ratio of homonyms? Will text preprocessing, such as correcting spelling or spacing errors, affect the classification accuracy, especially when drawing morpheme vectors from Korean product reviews with many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which may be encountered first when applying deep learning models to Korean texts. As a starting point, we summarize these issues in three central research questions. First, which is more effective as the initial input to a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean regarding the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can we get a satisfactory level of classification accuracy when applying deep learning to Korean sentiment analysis? As an approach to these research questions, we generate various types of morpheme vectors reflecting the research questions and compare the classification accuracy using a non-static CNN (Convolutional Neural Network) model that takes the morpheme vectors as input. As training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used.
To derive morpheme vectors, we use data both from the same domain as the target and from another domain: about 2 million Naver Shopping cosmetics product reviews and 520,000 Naver News articles, the latter arguably corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ along three criteria. First, they come from two types of data source: Naver News, with high grammatical correctness, and Naver Shopping's cosmetics product reviews, with low grammatical correctness. Second, they are distinguished by the degree of data preprocessing: sentence splitting only, or additional spelling and spacing corrections after sentence separation. Third, they vary in the form of input fed into the word vector model: either the morphemes themselves, or the morphemes with their POS tags attached. The morpheme vectors further vary depending on the considered range of POS tags, the minimum frequency of morphemes included, and the random initialization range. All morpheme vectors are derived through the CBOW (Continuous Bag-Of-Words) model with a context window of 5 and a vector dimension of 300. It appears that utilizing same-domain text even with a lower degree of grammatical correctness, performing spelling and spacing corrections in addition to sentence splitting, and incorporating morphemes of all POS tags, including the incomprehensible category, lead to better classification accuracy. POS tag attachment, devised for the high proportion of homonyms in Korean, and the minimum frequency standard for a morpheme to be included do not seem to have any definite influence on the classification accuracy.
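A minimal sketch of the morpheme-vector derivation described above follows. The choice of KoNLPy's Okt morphological analyzer is an assumption (the abstract does not name an analyzer), as are the sample sentences and the POS-tag attachment format; the CBOW setting, context window of 5, and vector dimension of 300 come from the abstract.

```python
# Sketch: deriving Korean morpheme vectors with CBOW (window 5, dim 300).
# Tokenizer choice (KoNLPy's Okt) is an assumption, not the paper's method.
from konlpy.tag import Okt
from gensim.models import Word2Vec

okt = Okt()

def to_morphemes(sentence: str, attach_pos: bool = True) -> list[str]:
    """Split a sentence into morphemes, optionally attaching POS tags
    (one of the input-form variants compared in the study)."""
    pairs = okt.pos(sentence)  # e.g., [('배송', 'Noun'), ('이', 'Josa'), ...]
    return [f"{m}/{t}" if attach_pos else m for m, t in pairs]

# Illustrative corpus; the study used ~2M product reviews or 520K news articles.
corpus = ["배송이 빨라서 좋아요", "제품이 예쁘고 품질도 만족합니다"]
tokenized = [to_morphemes(s) for s in corpus]

model = Word2Vec(
    sentences=tokenized,
    sg=0,             # CBOW, as in the abstract
    window=5,         # context window of 5
    vector_size=300,  # vector dimension of 300
    min_count=1,      # minimum frequency; one of the varied criteria
)
vec = model.wv[tokenized[0][0]]  # a 300-dim morpheme vector
```

The resulting vectors would then initialize the embedding layer of the non-static CNN, which fine-tunes them during training.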
Concept mapping is a device for representing the conceptual structure of a subject discipline in a two-dimensional form analogous to a road map. In the teaching and learning of earth science, each concept depends for meaning on its relationships to many others. Using concept mapping in teaching helps teachers and students become more aware of the key concepts and the relationships among them. The purpose of this study is to investigate the effect of the use of concept mapping on science achievement and scientific attitude in the ocean units of earth science. The results of this study are as follows. First, the science achievement of the concept mapping group is significantly higher than that of the traditional teaching group. When achievement levels are compared among different cognitive ability groups, the effect is more significant for middle- and lower-level students than for high-level students. The use of concept mapping is more effective when the concepts have a distinct hierarchy. Second, the scores of the concept mapping group on the tests of ‘attitude toward scientific inquiry’ and ‘application of scientific attitude’ are significantly higher than those of the traditional teaching group, whereas the scores on the test of ‘interest in science learning’ do not differ between the groups. Third, the survey on the use of concept mapping shows a positive response across the tested groups. The use of concept mapping is particularly beneficial in fostering comprehension of the topic. A concept map of the student's own construction facilitates the assessment of learning, suggesting the usefulness of concept mapping as a means of evaluation. In regard to retention, concept mapping is considered more effective for confirming and remembering the topic, while less effective in the aspects of activity and interest. In conclusion, the use of concept maps makes learning an active, meaningful process and improves students' academic achievement and scientific attitude. If concept mapping is used more effectively as an active teaching strategy, more meaningful learning will be attained.
As conventional oil and gas reservoirs become depleted, interest in oil sands has increased rapidly in the last decade. Oil sands are a mixture of bitumen, water, and host sediments of sand and clay. Most oil sand is unconsolidated sand held together by bitumen. Bitumen has an in-situ viscosity of >10,000 centipoise (cP) at reservoir conditions and an API gravity between 8° and 14°. The largest oil sands deposits are in Alberta and Saskatchewan, Canada. The reserves are estimated at 1.7 trillion barrels of initial oil-in-place and 173 billion barrels of remaining established reserves. Alberta has a number of oil sands deposits grouped into three oil sands development areas, Athabasca, Cold Lake, and Peace River, with the largest current bitumen production from Athabasca. The principal oil sands deposits consist of the McMurray Formation and Wabiskaw Member in the Athabasca area, the Gething and Bluesky formations in the Peace River area, and the relatively thin multi-reservoir deposits of the McMurray, Clearwater, and Grand Rapids formations in the Cold Lake area. The reservoir sediments were deposited in a foreland basin (the Western Canada Sedimentary Basin) formed by collision between the Pacific and North American plates and the subsequent thrusting movements in the Mesozoic. The deposits are underlain by basement rocks of Paleozoic carbonates with highly variable topography. The oil sands deposits were formed during the Early Cretaceous transgression that occurred along the Cretaceous Interior Seaway of North America. The oil-sands-hosting McMurray and Wabiskaw deposits in the Athabasca area consist of lower fluvial and upper estuarine-offshore sediments, reflecting the broad, overall transgression. The deposits are characterized by facies heterogeneity of channelized reservoir sands and non-reservoir muds. The main reservoir bodies of the McMurray Formation are fluvial and estuarine channel-point bar complexes interbedded with fine-grained deposits formed in floodplain, tidal flat, and estuarine bay settings. The Wabiskaw deposits (the basal member of the Clearwater Formation) commonly comprise sheet-shaped offshore muds and sands, but occasionally show deep incision into the McMurray deposits, forming channelized reservoir sand bodies of oil sands. In Canada, bitumen of oil sands deposits is produced by surface mining or by in-situ thermal recovery processes. Bitumen sands recovered by surface mining are converted into synthetic crude oil through extraction and upgrading processes. On the other hand, bitumen produced by in-situ thermal recovery is transported to the refinery after only a bitumen blending process. In-situ thermal recovery technology is represented by Steam-Assisted Gravity Drainage (SAGD) and Cyclic Steam Stimulation (CSS). These technologies are based on steam injection into bitumen sand reservoirs to increase in-situ reservoir temperature and bitumen mobility. In oil sands reservoirs, the efficiency of steam propagation is controlled mainly by reservoir geology. Accordingly, an understanding of the geological factors and characteristics of oil sands reservoir deposits is a prerequisite for well-designed development planning and effective bitumen production.
As the significant geological factors and characteristics of oil sands reservoir deposits, this study suggests: (1) pay of bitumen sands and its connectivity; (2) bitumen content and saturation; (3) geologic structure; (4) distribution of mud baffles and plugs; (5) thickness and lateral continuity of mud interbeds; (6) distribution of water-saturated sands; (7) distribution of gas-saturated sands; (8) direction of lateral accretion of point bars; (9) distribution of diagenetic layers and nodules; and (10) texture and fabric changes within the reservoir sand body.
The digital economy has grown rapidly, and the new business area called 'Internet business' has expanded dramatically over time. In Internet business, however, the market shares of individual companies fluctuate extremely. Marketing managers who operate Internet sites have therefore closely observed the competition structure of the Internet business market and carefully analyzed competitors' behavior in order to achieve their business goals. Newly created Internet businesses may differ from offline ones in management style, because their business circumstances differ fundamentally from those of existing offline businesses. Thus, more research is needed on the features of Internet business and how the management styles of Internet business companies should change. Most marketing literature related to Internet business has focused on individual business markets; specifically, many researchers have studied Internet portal sites and Internet shopping mall sites, the most common forms of Internet business. This study, in contrast, focuses on the entire Internet business industry to understand the competitive circumstances of the online market. This approach makes it possible not only to take a broader view of the overall e-business industry, but also to understand the differences in competition structures among Internet business markets. We used time-series data on consumers' Internet connection rates as the basic data to figure out the competition patterns in Internet business markets. Specifically, the data for this research were obtained from 'Fian', one of the Internet ranking sites. The Internet business ranking data are based on the web surfing records of a pre-selected sample group, where double-counting of page views is controlled by checking for identical IP addresses. The ranking site offers several data series that are very useful for comparing and analyzing competitive sites: it divides Internet business into 34 areas and offers the daily market shares of the top 5 sites in each category. We collected daily market share data on the Internet sites in each area from April 22, 2008 to August 5, 2008; some data errors were found, and data for 30 business areas were finally used after purification. This study performed several empirical analyses focusing on the market shares of each site to understand the competition among sites in the Internet business of Korea. By applying cluster analysis to the data, we attempted a statistically precise analysis to identify business fields with similar competitive structures. The research results are as follows. First, the leading sites in each area were classified into three groups based on the averages and standard deviations of their daily market shares. The first group includes the sites with the lowest market shares, which increase convenience for consumers by offering Internet sites as complementary services to existing offline services. The second group includes sites with a medium level of market share, whose users are limited to a specific small group. The third group includes sites with the highest market shares, which usually require online registration in advance and are difficult to switch away from.
Second, we analyzed the second-place sites in each business area, because they help us understand the competitive power of the strongest competitor against the leading site. The second-place sites were classified into four groups based on the averages and standard deviations of their daily market shares: sites showing consistent inferiority compared to the leading sites; sites with relatively high volatility and a medium level of share; sites with relatively low volatility and a medium level of share; and sites with relatively low volatility and a high level of share, whose gaps with the leading sites are not large. Except for the 'web agency' area, these second-place sites show relatively stable shares, with standard deviations below 0.1 points. Third, we classified the types of relative strength between the leading sites and the second-place sites by applying cluster analysis to the gaps in market share between the two sites. They were also classified into four groups: sites with the relatively lowest gaps, though with various standard deviations; sites with below-average gaps; sites with above-average gaps; and sites with relatively higher gaps and lower volatility. We also found that while the areas with relatively bigger gaps usually have smaller standard deviations, the areas with very small differences between the first and second sites have a wider range of standard deviations. The practical and theoretical implications of this study are as follows. First, the results might provide current market participants with useful information for understanding the competitive circumstances of the market and building effective new business strategies for market success. They might also help new potential entrants find a new business area and set up successful competitive strategies. Second, the study might help Internet marketing researchers take a macro view of the overall Internet market, making it possible to begin new studies on the overall Internet market beyond individual market studies.
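A minimal sketch of the grouping step described above: clustering sites by the mean and standard deviation of their daily market shares. The use of k-means and the synthetic input data are assumptions for illustration; the abstract specifies only that cluster analysis was applied to these two statistics.

```python
# Sketch: cluster sites by (mean, std) of daily market share.
# k-means and the synthetic data are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Synthetic stand-in for the Fian data: daily shares (Apr 22 - Aug 5, 2008)
# of the leading site in each of the 30 business areas used in the study.
days = pd.date_range("2008-04-22", "2008-08-05")
shares = pd.DataFrame(
    {f"area_{i:02d}": rng.beta(a, 5) + rng.normal(0, 0.02, len(days))
     for i, a in enumerate(rng.uniform(1, 10, 30))},
    index=days,
)

# Two features per area: average and volatility of the daily share.
features = pd.DataFrame({"mean": shares.mean(), "std": shares.std()})

# Three groups for leading sites, as in the study's first classification.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
features["group"] = kmeans.fit_predict(StandardScaler().fit_transform(features))
print(features.sort_values("group"))
```

The same two-feature clustering, rerun with four clusters on the second-place sites or on the share gaps between the first and second sites, would mirror the study's other two classifications.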
Internet commerce has been growing at a rapid pace for the last decade. Many firms try to reach wider consumer markets by adding the Internet channel to their existing traditional channels. Despite the various benefits of the Internet channel, a significant number of firms have failed in managing the new type of channel. Previous studies could not clearly explain these conflicting results associated with the Internet channel. One of the major reasons is that most previous studies conducted analyses under a specific market condition and attributed the results to the impact of Internet channel introduction; their results are therefore strongly influenced by the specific market settings. However, firms face various market conditions in the real world. The purpose of this study is to investigate the impact of various market environments on a firm's optimal channel strategy by employing a flexible game-theory model. We capture various market conditions with consumer density and the disutility of using the Internet.
[Figure] shows the channel structures analyzed in this study. Before the Internet channel is introduced, a monopoly manufacturer sells its products through an independent physical store. From this structure, the manufacturer could introduce its own Internet channel (MI). The independent physical store could also introduce its own Internet channel and coordinate it with the existing physical store (RI). Alternatively, an independent Internet retailer such as Amazon could enter this market (II); in this case, two types of independent retailers compete with each other. In this model, consumers are uniformly distributed over a two-dimensional space. Consumer heterogeneity is captured by a consumer's geographical location (χ_i) and his disutility of using the Internet channel (δ_Ni).
[Figure] shows the various market conditions captured by the two consumer heterogeneities.
Panel (a) illustrates a market with symmetric consumer distributions. The model also captures explicitly asymmetric distributions of consumer disutility. In a market like the one represented in panel (c), the average consumer disutility of using an Internet store is relatively smaller than that of using a physical store. For example, this case represents a market in which 1) the product is suitable for Internet transactions (e.g., books) or 2) the level of e-commerce readiness is high, such as in Denmark or Finland. On the other hand, the average consumer disutility of using an Internet store is relatively greater than that of using a physical store in a market like the one in panel (b). Countries like Ukraine and Bulgaria, or the market for "experience goods" such as shoes, could be examples of this market condition.
[Table] summarizes the various scenarios of consumer distributions analyzed in this study. The range of the disutility of using the Internet (δ_Ni) is held constant, while the range of the consumer location distribution (χ_i) varies from -25 to 25, -50 to 50, -100 to 100, -150 to 150, and -200 to 200.
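To make the setup concrete, here is a small Monte Carlo sketch of the demand split implied by this consumer model. The linear utility forms, prices, and travel-cost parameter are illustrative assumptions; the uniform distributions of location χ_i and Internet disutility δ_Ni follow the scenario description above.

```python
# Sketch: demand split between a physical store and an Internet store for
# consumers heterogeneous in location (travel cost) and Internet disutility.
# Utility forms and parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def channel_shares(loc_range, delta_range, p_store=50.0, p_net=50.0,
                   v=200.0, t=1.0, n=100_000):
    """Fraction of consumers buying from each channel.

    loc_range:   (lo, hi) of the uniform location distribution (chi_i)
    delta_range: (lo, hi) of the uniform Internet disutility (delta_Ni)
    t:           travel cost per unit distance (assumed linear)
    """
    chi = rng.uniform(*loc_range, n)      # location; store at the origin
    delta = rng.uniform(*delta_range, n)  # disutility of Internet use
    u_store = v - p_store - t * np.abs(chi)
    u_net = v - p_net - delta
    buy_store = (u_store >= u_net) & (u_store >= 0)
    buy_net = (u_net > u_store) & (u_net >= 0)
    return buy_store.mean(), buy_net.mean()

# Wider location ranges = higher average travel cost: the Internet store's
# share grows, consistent with the analysis results summarized below.
for hi in (25, 50, 100, 150, 200):
    s, nshare = channel_shares((-hi, hi), (0, 100))
    print(f"chi in [-{hi},{hi}]: store {s:.2f}, internet {nshare:.2f}")
```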
[Table] summarizes the analysis results. As the average travel cost in a market decreases while the average disutility of Internet use remains the same, the average retail price, total quantity sold, physical store profit, monopoly manufacturer profit, and thus total channel profit increase. On the other hand, the quantity sold through the Internet and the profit of the Internet store decrease with decreasing average travel cost relative to the average disutility of Internet use. We find that a channel with an advantage over the other kind of channel serves a larger portion of the market. In a market with a high average travel cost, in which the Internet store has a relative advantage over the physical store, for example, the Internet store becomes a mass retailer serving a larger portion of the market. This result implies that the Internet becomes a more significant distribution channel in markets characterized by greater geographical dispersion of buyers, or as consumers become more proficient in Internet usage. The results indicate that the degree of price discrimination also varies depending on the distribution of consumer disutility in a market. The manufacturer in a market in which the average travel cost is higher than the average disutility of using the Internet has a stronger incentive for price discrimination than the manufacturer in a market where the average travel cost is relatively lower. We also find that the manufacturer has a stronger incentive to maintain a high price level when the average travel cost in a market is relatively low. Additionally, the retail competition effect due to Internet channel introduction strengthens as the average travel cost in a market decreases. This result indicates that a manufacturer's channel power relative to that of the independent physical retailer becomes stronger with decreasing average travel cost. This implication is counter-intuitive, because it is widely believed that the negative impact of Internet channel introduction on a competing physical retailer is more significant in a market like Russia, where consumers are more geographically dispersed, than in a market like Hong Kong, which has a condensed geographic distribution of consumers.