• Title/Summary/Keyword: Korea & US

Search Result 6,379

Stratigraphic response to tectonic evolution of sedimentary basins in the Yellow Sea and adjacent areas (황해 및 인접 지역 퇴적분지들의 구조적 진화에 따른 층서)

  • Ryo In Chang;Kim Boo Yang;Kwak Won Jun;Kim Gi Hyoun;Park Se Jin
    • The Korean Journal of Petroleum Geology
    • /
    • v.8 no.1_2 s.9
    • /
    • pp.1-43
    • /
    • 2000
  • A comparative study of the stratigraphic response to the tectonic evolution of sedimentary basins in the Yellow Sea and adjacent areas was carried out using integrated stratigraphic techniques. As an interim result, we propose a stratigraphic framework that allows temporal and spatial correlation of the sedimentary successions in the basins. This stratigraphic framework will serve as a new stratigraphic paradigm for hydrocarbon exploration in the Yellow Sea and adjacent areas. Integrated stratigraphic analysis in conjunction with sequence-keyed biostratigraphy allows us to define nine stratigraphic units in the basins: Cambro-Ordovician, Carboniferous-Triassic, early to middle Jurassic, late Jurassic-early Cretaceous, late Cretaceous, Paleocene-Eocene, Oligocene, early Miocene, and middle Miocene-Pliocene. These are tectono-stratigraphic units that provide time-sliced information on the basin-forming tectonics, sedimentation, and basin-modifying tectonics of sedimentary basins in the Yellow Sea and adjacent areas. In the Paleozoic, the South Yellow Sea basin was initiated as a marginal sag basin on the northern margin of the South China Block. Siliciclastic and carbonate sediments were deposited in the basin in a cyclic fashion driven by relative sea-level fluctuations. During the Devonian, however, the basin was uplifted and deformed by the Caledonian Orogeny, which resulted in an unconformity between the Cambro-Ordovician and Carboniferous-Triassic units. The second orogenic event, the Indosinian Orogeny, occurred in the late Permian-late Triassic, when the North China Block began to collide with the South China Block. The collision of the North and South China blocks produced the Qinling-Dabie-Sulu-Imjin foldbelts and led to the uplift and deformation of the Paleozoic strata. 
Subsequent rapid subsidence of the foreland parallel to the foldbelts formed the Bohai and West Korean Bay basins, which were infilled with early to middle Jurassic molasse sediments. Piggyback basins also developed locally along the thrusts. The later intensive (first) Yanshanian Orogeny modified these foreland and piggyback basins in the late Jurassic. The South Yellow Sea basin, however, was likely a continental interior sag basin during the early to middle Jurassic. The early to middle Jurassic unit in the South Yellow Sea basin is characterized by fluvial to lacustrine sandstone and shale with a thick basal quartz conglomerate that contains well-sorted and well-rounded gravels. Meanwhile, the Tan-Lu fault system underwent sinistral strike-slip wrench movement beginning in the late Triassic and continuing through the Jurassic and Cretaceous until the early Tertiary. In the late Jurassic, the development of second- or third-order wrench faults along the Tan-Lu fault system probably initiated a series of small-scale strike-slip extensional basins. Continued sinistral movement of the Tan-Lu fault until the late Eocene caused a megashear in the South Yellow Sea basin, forming a large-scale pull-apart basin. The Bohai basin, however, was uplifted and severely modified during this period. The pronounced (second and third) Yanshanian Orogeny was marked by the unconformity between the early Cretaceous and late Eocene in the Bohai basin. In the late Eocene, the Indian Plate began to collide with the Eurasian Plate, forming a megasuture zone. This orogenic event, the Himalayan Orogeny, was probably responsible for the change in motion of the Tan-Lu fault system from left-lateral to right-lateral. The right-lateral strike-slip movement of the Tan-Lu fault caused the tectonic inversion of the South Yellow Sea basin and the pull-apart opening of the Bohai basin. 
Thus, the Oligocene was the main period of sedimentation in the Bohai basin as well as of severe tectonic modification of the South Yellow Sea basin. Since the Oligocene, the Yellow Sea and Bohai basins have undergone thermal subsidence up to the present, with short periods of marine transgression extending into the land part of the present basins.


Estimation of SCS Runoff Curve Number and Hydrograph by Using Highly Detailed Soil Map(1:5,000) in a Small Watershed, Sosu-myeon, Goesan-gun (SCS-CN 산정을 위한 수치세부정밀토양도 활용과 괴산군 소수면 소유역의 물 유출량 평가)

  • Hong, Suk-Young;Jung, Kang-Ho;Choi, Chol-Uong;Jang, Min-Won;Kim, Yi-Hyun;Sonn, Yeon-Kyu;Ha, Sang-Keun
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.43 no.3
    • /
    • pp.363-373
    • /
    • 2010
  • The curve number (CN) indicates the runoff potential of an area. The US Soil Conservation Service (SCS) CN method is a simple, widely used, and efficient method for estimating the runoff from a rainfall event in a particular area, especially in ungauged basins. Soil maps requested by end-users for CN-based rainfall-runoff estimation accounted for up to about 80% of total soil map use. This study introduces the use of soil maps for hydrologic and watershed management, focusing on hydrologic soil groups, and presents a case study assessing effective rainfall and the runoff hydrograph based on the SCS-CN method in a small watershed. Based on the detailed soil map (1:25,000) of Korea, the distribution-area ratios of the hydrologic soil groups were 42.2% (A), 29.4% (B), 18.5% (C), and 9.9% (D) for HSG 1995, and 35.1% (A), 15.7% (B), 5.5% (C), and 43.7% (D) for HSG 2006. The D group in HSG 2006 accounted for 43.7% of the total area, of which 34.1% was reclassified from the A, B, and C groups of HSG 1995; the similarity between HSG 1995 and HSG 2006 was about 55%. Our study area, an approximately 44 km² catchment, was located in Sosu-myeon, Goesan-gun, Chungchungbuk-do. We used a digital elevation model (DEM) to delineate the catchments. The soils were classified into four hydrologic soil groups on the basis of measured infiltration rates and a model of the representative soils of the study area reported by Jung et al. (2006). Digital soil maps (1:5,000) were used to classify hydrologic soil groups at the soil-series level. Using high-resolution satellite images, we delineated the boundary of each field or other parcel on screen and then surveyed the land use and cover in each. We calculated a CN for each parcel and used those data, a land use and cover map, and a hydrologic soil map to estimate runoff. The CN values of the catchment, which can range from 0 (no runoff) to 100 (all precipitation runs off), were 73 by HSG 1995 and 79 by HSG 2006. 
Peak runoff and time-to-peak were examined using the SCS triangular synthetic unit hydrograph, and the results with HSG 2006 showed better agreement with the field-observed data than those with HSG 1995.
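The standard SCS-CN runoff computation underlying the abstract can be sketched in a few lines. The formula is the textbook metric-unit form (S = 25400/CN − 254, Ia = 0.2S); the CN values 73 and 79 come from the abstract, while the 100 mm storm depth is an illustrative assumption.

```python
def scs_runoff(p_mm: float, cn: float) -> float:
    """Direct runoff Q (mm) from rainfall P (mm) via the SCS-CN method.

    S  = potential maximum retention (mm), S = 25400/CN - 254
    Ia = initial abstraction, using the standard Ia = 0.2 * S
    Q  = (P - Ia)^2 / (P - Ia + S) when P > Ia, else 0
    """
    s = 25400.0 / cn - 254.0
    ia = 0.2 * s
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Illustrative 100 mm storm on the catchment's two CN estimates:
print(scs_runoff(100.0, 73))  # CN from HSG 1995
print(scs_runoff(100.0, 79))  # CN from HSG 2006 (larger CN -> more runoff)
```

Note that the higher CN of HSG 2006 yields more runoff for the same storm, which is why the hydrologic soil group reclassification matters for the hydrograph comparison.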

Clickstream Big Data Mining for Demographics based Digital Marketing (인구통계특성 기반 디지털 마케팅을 위한 클릭스트림 빅데이터 마이닝)

  • Park, Jiae;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.3
    • /
    • pp.143-163
    • /
    • 2016
  • The demographics of Internet users are the most basic and important sources for target marketing and personalized advertising on digital marketing channels, which include email, mobile, and social media. However, it has gradually become difficult to collect the demographics of Internet users because their activities are anonymous in many cases. Although a marketing department can obtain demographics through online or offline surveys, these approaches are expensive, slow, and likely to include false statements. Clickstream data is the record an Internet user leaves behind while visiting websites. As the user clicks anywhere in a webpage, the activity is logged in semi-structured website log files. Such data allow us to see what pages users visited, how long they stayed, how often and when they visited, which sites they prefer, what keywords they used to find a site, whether they purchased anything, and so forth. For this reason, some researchers have tried to infer the demographics of Internet users from their clickstream data. They derived various independent variables likely to be correlated with demographics, including search keywords; frequency and intensity by time, day, and month; variety of websites visited; and text information from the web pages visited. The demographic attributes to be predicted also vary across papers, covering gender, age, job, location, income, education, marital status, and presence of children. A variety of data mining methods, such as LSA, SVM, decision trees, neural networks, logistic regression, and k-nearest neighbors, were used for prediction model building. However, previous research has not yet identified which data mining method is appropriate for predicting each demographic variable. Moreover, the independent variables studied so far need to be reviewed, combined as needed, and evaluated for building the best prediction model. 
The objective of this study is to choose the clickstream attributes most likely to be correlated with demographics from the results of previous research, and then to identify which data mining method is best suited to predicting each demographic attribute. Among the demographic attributes, this paper focuses on predicting gender, age, marital status, residence, and job. From the results of previous research, 64 clickstream attributes are applied to predict the demographic attributes. The overall process of predictive model building is composed of four steps. In the first step, we create user profiles that include the 64 clickstream attributes and 5 demographic attributes. The second step performs dimension reduction on the clickstream variables to address the curse of dimensionality and the overfitting problem; we utilize three approaches, based on decision trees, PCA, and cluster analysis. We build alternative predictive models for each demographic variable in the third step, using SVM, neural networks, and logistic regression. The last step evaluates the alternative models in terms of model accuracy and selects the best model. For the experiments, we used clickstream data representing 5 demographics and 16,962,705 online activities for 5,000 Internet users. IBM SPSS Modeler 17.0 was used for the prediction process, and 5-fold cross-validation was conducted to enhance the reliability of our experiments. The experimental results verify that there is a specific data mining method well suited to each demographic variable. For example, age prediction performs best when using decision-tree-based dimension reduction with a neural network, whereas the prediction of gender and marital status is most accurate when applying SVM without dimension reduction. 
We conclude that the online behaviors of Internet users, captured from clickstream data analysis, can be used to predict their demographics and thereby be utilized for digital marketing.
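The 5-fold cross-validation used in the evaluation step can be sketched as follows. This is a generic illustration of the standard fold logic (the study itself ran it inside IBM SPSS Modeler), and the fold construction is our assumption of the usual procedure.

```python
def k_fold_splits(n: int, k: int = 5):
    """Yield (train_indices, test_indices) pairs for k-fold cross-validation.

    Each of the k folds serves as the test set exactly once; the remaining
    folds form the training set, so every sample is tested exactly once.
    """
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    for i in range(k):
        test = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        yield train, test

# The paper's 5,000 users split 5 ways: each test fold holds 1,000 users.
splits = list(k_fold_splits(5000, 5))
print(len(splits), len(splits[0][1]), len(splits[0][0]))  # 5 1000 4000
```

In practice the per-fold accuracies of each candidate model (SVM, neural network, logistic regression) are averaged, and the model with the best mean accuracy is selected.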

A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu;Park, Hyun-Jung;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.17 no.4
    • /
    • pp.31-59
    • /
    • 2007
  • We frequently use search engines to find relevant information on the Web but still end up with too much information. To solve this problem of information overload, ranking algorithms have been applied in various domains. As more information becomes available, effectively and efficiently ranking search results will become even more critical. In this paper, we propose a ranking algorithm for Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page was estimated from the number of keywords found in the page, which is subject to manipulation. In contrast, link analysis methods such as Google's PageRank capitalize on the information inherent in the link structure of the Web graph. PageRank considers a page highly important if it is referred to by many other pages, and the degree of importance also increases if the importance of the referring pages is high. Kleinberg's algorithm is another link-structure-based ranking algorithm for Web pages. Unlike PageRank, Kleinberg's algorithm utilizes two kinds of scores: the authority score and the hub score. A page with a high authority score is an authority on a given topic and is referred to by many pages; a page with a high hub score links to many authoritative pages. As mentioned above, link-structure-based ranking has played an essential role in the World Wide Web (WWW), and its effectiveness and efficiency are now widely recognized. Meanwhile, as the Resource Description Framework (RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed as an RDF graph, making ranking algorithms for RDF knowledge bases greatly important. The RDF graph consists of nodes and directional links, similar to the Web graph, so link-structure-based ranking seems highly applicable to ranking Semantic Web resources. 
However, the information space of the Semantic Web is more complex than that of the WWW. For instance, the WWW can be considered one huge class, i.e., a collection of Web pages, with only a single recursive property, a 'refers to' property corresponding to the hyperlinks. The Semantic Web, in contrast, encompasses various kinds of classes and properties, and consequently, ranking methods used on the WWW must be modified to reflect the complexity of its information space. Previous research addressed the ranking problem for query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm to rank Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, corresponding to Kleinberg's authority score and hub score, respectively. They concentrated on the diversity of properties and introduced property weights to control the influence of one resource on another depending on the characteristics of the property linking the two resources. A node with a high objectivity score is the object of many RDF triples, and a node with a high subjectivity score is the subject of many RDF triples. They developed several Semantic Web systems to validate their technique and reported experimental results verifying the applicability of their method to the Semantic Web. Despite their efforts, however, some limitations remained, which they reported in their paper. First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain; in other words, the ratio of links to nodes should be high, or overall resources should be described in some detail, for their algorithm to work properly. 
Second, the Tightly-Knit Community (TKC) effect, the phenomenon that pages that are less important but densely connected receive higher scores than pages that are more important but sparsely connected, remains problematic. Third, a resource may receive a high score not because it is actually important but simply because it is very common and consequently has many links pointing to it. In this paper, we examine these ranking problems from a novel perspective and propose a new algorithm that can solve the problems of the previous studies. Our proposed method is based on a class-oriented approach. In contrast to the predicate-oriented approach taken by previous research, under our approach a user determines the weight of a property by comparing its relative significance to the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are intended to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. It closely reflects the way people evaluate things in the real world and turns out to be superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm can resolve the TKC effect and can further shed light on the other limitations posed by previous research. In addition, we propose two ways to incorporate data-type properties, which previously had not been employed even when they carry some significance for resource importance. We designed an experiment to show the effectiveness of our proposed algorithm and the validity of the ranking results, which had not been attempted in previous research. We also conducted a comprehensive mathematical analysis, which was overlooked in previous research; this analysis enabled us to simplify the calculation procedure. 
Finally, we summarize our experimental results and discuss further research issues.
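As a concrete reminder of the link-structure baseline this paper builds on, here is a minimal power-iteration PageRank over a toy three-node graph. The graph, damping factor, and iteration count are illustrative assumptions, not the paper's experimental setup.

```python
def pagerank(links, d=0.85, iters=100):
    """Power-iteration PageRank for a directed graph given as {node: [out-links]}."""
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - d) / n for v in nodes}  # teleportation share
        for v, outs in links.items():
            if outs:
                share = d * rank[v] / len(outs)
                for u in outs:
                    new[u] += share
            else:  # dangling node: spread its rank uniformly
                for u in nodes:
                    new[u] += d * rank[v] / n
        rank = new
    return rank

# 'C' is referred to by both 'A' and 'B', so it ends up ranked highest.
g = {"A": ["C"], "B": ["C"], "C": ["A"]}
r = pagerank(g)
print(max(r, key=r.get))  # C
```

The paper's point is that an RDF graph is not this homogeneous: edges carry typed properties, so a single "refers to" propagation rule like the one above must be extended with per-property (or, in their proposal, per-class) weights.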

Methods for Integration of Documents using Hierarchical Structure based on the Formal Concept Analysis (FCA 기반 계층적 구조를 이용한 문서 통합 기법)

  • Kim, Tae-Hwan;Jeon, Ho-Cheol;Choi, Joong-Min
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.3
    • /
    • pp.63-77
    • /
    • 2011
  • The World Wide Web is a very large distributed digital information space. From its origins in 1991, the Web has grown to encompass diverse information resources such as personal home pages, online digital libraries, and virtual museums. Some estimates suggest that the Web currently includes over 500 billion pages in the deep web. The ability to search and retrieve information from the Web efficiently and effectively is an enabling technology for realizing its full potential. With powerful workstations and parallel processing technology, efficiency is not a bottleneck; in fact, some existing search tools sift through gigabyte-size precompiled Web indexes in a fraction of a second. But retrieval effectiveness is a different matter. Current search tools retrieve too many documents, of which only a small fraction are relevant to the user query. Furthermore, the most relevant documents do not necessarily appear at the top of the query output order, and current search tools cannot retrieve documents related to a retrieved document from the gigantic mass of documents. The most important problem for many current search systems is to increase the quality of search, that is, to provide related documents while keeping the number of unrelated documents in the results as low as possible. To address this problem, CiteSeer proposed Autonomous Citation Indexing (ACI) of articles on the World Wide Web. A "citation index" indexes the links between articles that researchers make when they cite other articles. Citation indexes are very useful for a number of purposes, including literature search and analysis of the academic literature. In this scheme, references contained in academic articles are used to give credit to previous work in the literature and provide a link between the "citing" and "cited" articles; a citation index indexes the citations an article makes, linking the articles with the cited works. 
Citation indexes were originally designed mainly for information retrieval. The citation links allow navigating the literature in unique ways: papers can be located independent of language and of the words in the title, keywords, or document, and a citation index allows navigation backward in time (the list of cited articles) and forward in time (which subsequent articles cite the current article?). However, CiteSeer cannot index links between articles that researchers do not make explicitly, because it indexes only the links researchers create when they cite other articles; for the same reason, CiteSeer does not scale easily. All these problems motivated us to design a more effective search system. This paper presents a method that extracts the subject and predicate of each sentence in documents. A document is converted into a tabular form in which each extracted predicate is checked against possible subjects and objects. We build a hierarchical graph of each document using this table and then integrate the graphs of the documents. The area of each document's graph is computed relative to the integrated graph, and relations among documents are marked by comparing these areas. We also propose a method for structural integration of documents that retrieves documents from the graph, making it easier for users to find information. We compared the performance of the proposed approaches with the Lucene search engine using standard ranking formulas. As a result, the F-measure is about 60%, which is about 15% better.
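The F-measure reported in the evaluation combines precision and recall. The sketch below shows the standard computation for a single query's retrieved set; the document IDs are made up for illustration.

```python
def f_measure(retrieved, relevant, beta=1.0):
    """F-measure of a retrieved set against the relevant set.

    precision = |retrieved ∩ relevant| / |retrieved|
    recall    = |retrieved ∩ relevant| / |relevant|
    F_beta    = (1 + beta^2) * P * R / (beta^2 * P + R)
    """
    retrieved, relevant = set(retrieved), set(relevant)
    tp = len(retrieved & relevant)
    if tp == 0:
        return 0.0
    precision = tp / len(retrieved)
    recall = tp / len(relevant)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# 3 of 4 retrieved docs are relevant (P = 0.75); 3 of 6 relevant found (R = 0.5)
print(f_measure(["d1", "d2", "d3", "d9"], ["d1", "d2", "d3", "d4", "d5", "d6"]))
# -> 0.6
```

The reported "about 60%" F-measure is this quantity averaged over the test queries, with beta = 1 weighting precision and recall equally.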

Media Habits of Sensation Seekers (감지추구자적매체습관(感知追求者的媒体习惯))

  • Blakeney, Alisha;Findley, Casey;Self, Donald R.;Ingram, Rhea;Garrett, Tony
    • Journal of Global Scholars of Marketing Science
    • /
    • v.20 no.2
    • /
    • pp.179-187
    • /
    • 2010
  • Understanding consumers' preferences for and use of media types is imperative for marketing and advertising managers, especially in today's fragmented market. A clear understanding helps managers make more effective selections of appropriate media outlets, yet individuals' choices of media type and use are based on a variety of characteristics. This paper examines one personality trait, sensation seeking, which has not appeared in the literature examining "new" media preferences and use. Sensation seeking is a personality trait defined as "the need for varied, novel, and complex sensations and experiences and the willingness to take physical and social risks for the sake of such experiences" (Zuckerman 1979). Six hypotheses were developed from a review of the literature. Particular attention was given to Uses and Gratifications theory (Katz 1959), which explains the various reasons why people choose media types and their motivations for using the different types of media. Current theory suggests that High Sensation Seekers (HSS), owing to their need for novelty, arousal, and unconventional content and imagery, would exhibit more frequent use of new media. Specifically, we hypothesize that HSS will use the internet more than broadcast (H1a) or print media (H1b) and more than low (LSS) (H2a) or medium sensation seekers (MSS) (H2b). In addition, HSS have been found to be more social and to have more friends, and are therefore expected to use social networking websites such as Facebook/MySpace (H3) and chat rooms (H4) more than LSS (a) and MSS (b). Sensation seeking can manifest in a range of behaviors, including disinhibition. It is expected that alternative social networks such as Facebook/MySpace (H5) and chat rooms (H6) will be used more often by those with high levels of disinhibition than by those with low (a) or medium (b) levels. Data were collected using an online survey of participants in extreme sports. 
To reach this group, an improved version of snowball sampling, the chain-referral method, was used to select respondents, as it is regarded as effective for reaching otherwise hidden population groups (Heckathorn, 1997). The final usable sample of 1,108 respondents, which was mainly young (56.36% under 34), male (86.1%), and middle class (58.7% with household incomes over USD 50,000), was consistent with previous studies on sensation seeking. Sensation seeking was captured using an existing measure, the Brief Sensation Seeking Scale (Hoyle et al., 2002); media usage was captured through self-reported usage of various media types. Results did not support H1a and H1b: HSS did not show higher usage of alternative media such as the internet, in fact showing lower mean usage than for all other media types. The media type most used by HSS was print media, suggesting a revolt against the mainstream. Results support H2a and H2b: HSS are more frequent users of the internet than LSS or MSS. Further analysis revealed significant differences in the use of print media between HSS and LSS, suggesting that HSS may seek out more specialized print publications for their respective extreme sport activities. Hypotheses 3a and 3b were supported: HSS use Facebook/MySpace more frequently than either LSS or MSS. There were no significant differences in chat room use between LSS and HSS, so H4a was not supported, although the difference was significant for MSS (H4b). Respondents with varying levels of disinhibition were expected to differ in their use of Facebook/MySpace and chat rooms. Facebook/MySpace use was higher among those with high levels of disinhibition than among those with low or medium levels, supporting H5a and H5b. 
Similarly, there was support for H6b: those with high levels of disinhibition use chat rooms significantly more than those with medium levels, but not more than those with low levels (H6a). The findings are counterintuitive and offer some interesting insights for managers. First, although HSS use online media more frequently than LSS or MSS, this group's use of online media is lower than its use of either print or broadcast media; advertising executives should not place too much emphasis on online media for this important market segment. Second, social media such as Facebook/MySpace and chat rooms should be examined by managers as potential ways to reach this group. Finally, the higher levels of social media use by those who are disinhibited have implications for public policy: these individuals are more inclined to engage in socially risky behavior, which may have dire consequences, e.g., exposure to internet predators or future employers. A limitation of the study is that it includes only those who engage in extreme sports, by nature an HSS activity; a broader population is therefore needed to test whether these results hold.

Empirical Analysis on Bitcoin Price Change by Consumer, Industry and Macro-Economy Variables (비트코인 가격 변화에 관한 실증분석: 소비자, 산업, 그리고 거시변수를 중심으로)

  • Lee, Junsik;Kim, Keon-Woo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.195-220
    • /
    • 2018
  • In this study, we conducted an empirical analysis of the factors that affect changes in the Bitcoin closing price. Previous studies have focused on the security of the blockchain system, the economic ripple effects of cryptocurrency, legal implications, and consumer acceptance of cryptocurrency. Cryptocurrency has been studied in various areas, and many researchers and actors, including governments across countries, have tried to utilize cryptocurrency and apply its technology. Despite the rapid and dramatic changes in cryptocurrency prices and the growth of their effects, empirical study of the factors affecting cryptocurrency price changes has been lacking; there were only a few limited studies, business reports, and short working papers. It is therefore necessary to determine which factors affect changes in the Bitcoin closing price. For the analysis, hypotheses were constructed along three dimensions, consumer, industry, and macro-economy, and time series data were collected for the variables of each dimension. Consumer variables consist of search traffic for Bitcoin, search traffic for a Bitcoin ban, search traffic for ransomware, and search traffic for war. Industry variables comprise GPU vendors' stock prices and memory vendors' stock prices. Macro-economy variables include U.S. dollar index futures, FOMC policy interest rates, and the WTI crude oil price. Using these variables, we conducted a time series regression analysis to find the relationships between them and changes in the Bitcoin closing price. Before the regression analysis, we performed unit-root tests to verify the stationarity of the time series data and avoid spurious regression; we then ran the regression on the stationary data. 
As a result of the analysis, we found that the change in the Bitcoin closing price is negatively associated with search traffic for 'Bitcoin ban' and with U.S. dollar index futures, while changes in GPU vendors' stock prices and in the WTI crude oil price showed positive effects. A 'Bitcoin ban' directly determines the maintenance or abolition of Bitcoin trading, which is why consumers reacted sensitively to it and it affected the Bitcoin closing price. GPUs are the raw material of Bitcoin mining, and since an increase in a company's stock price generally reflects growth in the sales of its products and services, increases in GPU demand are indirectly reflected in GPU vendors' stock prices; consequently, GPU vendors' stock prices move with changes in the Bitcoin closing price. We also confirmed that U.S. dollar index futures moved in the opposite direction to the Bitcoin closing price, as gold does. Gold is considered a safe asset by consumers, which suggests that consumers regard Bitcoin as a safe asset as well. On the other hand, the WTI oil price moved in the same direction as the Bitcoin closing price, implying that Bitcoin is also regarded as an investment asset, like products in the raw materials markets. The variables that were not significant in the analysis were search traffic for Bitcoin, search traffic for ransomware, search traffic for war, memory vendors' stock prices, and FOMC policy interest rates. For search traffic for Bitcoin, we judged that interest in Bitcoin did not lead to purchases of Bitcoin; search traffic does not reflect all of Bitcoin's demand, implying that other factors regulate and mediate Bitcoin purchases. For search traffic for ransomware, it is hard to say that concern about ransomware determined overall Bitcoin demand, because only a few people were damaged by ransomware and the percentage of hackers demanding Bitcoin was low. 
Moreover, ransomware incidents are discrete events rather than a continuous issue. Search traffic for war was not significant: as in the stock market, war generally has a negative relation, but in exceptional cases such as the Gulf War it can move with stakeholders' profits and circumstances, and we believe this is such a case. Memory vendors' stock prices were not significant because memory vendors' flagship products are not the VRAM essential for Bitcoin mining. For FOMC policy interest rates, when interest rates are low, surplus capital is invested in securities such as stocks, but Bitcoin's price fluctuations were so large that it was not recognized as an attractive commodity by consumers; in addition, unlike the stock market, Bitcoin has no safety mechanisms such as circuit breakers and sidecars. Through this study, we verified which factors affect changes in the Bitcoin closing price and interpreted why such changes happened. In addition, by establishing the characteristics of Bitcoin as a safe asset and an investment asset, we provide a guide for how consumers, financial institutions, and government organizations can approach cryptocurrency. Moreover, by corroborating the factors affecting changes in the Bitcoin closing price, researchers can obtain clues as to which factors should be considered in future cryptocurrency studies.
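The workflow described above, differencing non-stationary level series and then running a time series regression on the stationary changes, can be sketched with synthetic data. The variable names and coefficients below are illustrative assumptions, not the study's actual data or estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Synthetic non-stationary level series (random walks), standing in for
# hypothetical explanatory variables like a GPU vendor's stock price and
# the U.S. dollar index.
gpu_stock = np.cumsum(rng.normal(size=n))
dollar_index = np.cumsum(rng.normal(size=n))

# First-differencing makes the series stationary, avoiding spurious regression.
d_gpu = np.diff(gpu_stock)
d_usd = np.diff(dollar_index)

# Synthetic "Bitcoin price change" with known coefficients (+0.8, -0.5),
# mirroring the signs the study reports for GPU stocks and the dollar index.
d_btc = 0.8 * d_gpu - 0.5 * d_usd + rng.normal(scale=0.1, size=n - 1)

# Ordinary least squares on the differenced data.
X = np.column_stack([np.ones_like(d_gpu), d_gpu, d_usd])
beta, *_ = np.linalg.lstsq(X, d_btc, rcond=None)
print(beta)  # intercept ~0, slopes ~0.8 and ~-0.5
```

In the study itself, stationarity is first checked with unit-root tests before differencing; this sketch simply takes the differenced series as the stationary input to the regression.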

A Study on Commodity Asset Investment Model Based on Machine Learning Technique (기계학습을 활용한 상품자산 투자모델에 관한 연구)

  • Song, Jin Ho;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.127-146
    • /
    • 2017
  • Services using artificial intelligence have begun to appear in daily life. Artificial intelligence is applied to consumer electronics and communications products such as AI refrigerators and speakers. In the financial sector, Goldman Sachs improved its stock trading process using Kensho's artificial intelligence technology: two stock traders could handle the work of 600, and analytical work that had taken 15 people four weeks could be processed in five minutes. In particular, big-data analysis through machine learning is actively applied throughout the financial industry. Stock market analysis and investment modeling through machine learning are also actively studied, and machine learning theory such as artificial-intelligence prediction models overcomes the linearity limits of traditional financial time-series studies. Quantitative studies based on past stock-market data widely use artificial intelligence to forecast future movements of stock prices or indices, and various other studies predict the future direction of the market or of company stock prices by learning from large amounts of text data such as news and comments related to the stock market. Investing in commodity assets, one class of alternative assets, is usually intended to enhance the stability and safety of a traditional stock-and-bond portfolio. There is relatively little research on investment models for commodity assets compared with mainstream assets such as equities and bonds. Recently, machine learning techniques have been widely applied in finance, especially to stock and bond investment models, producing better trading models and changing the whole financial field. In this study we built an investment model using the Support Vector Machine (SVM), one of the machine learning models.
Some research on commodity assets focuses on price prediction for a specific commodity, but research on commodity investment models for asset allocation using machine learning is hard to find. We propose a method of forecasting four major commodity indices, a portfolio of commodity futures, and individual commodity futures using an SVM model. The four major commodity indices are the Goldman Sachs Commodity Index (GSCI), the Dow Jones UBS Commodity Index (DJUI), the Thomson Reuters/Core Commodity CRB Index (TRCI), and the Rogers International Commodity Index (RI). We selected two individual futures from each of three sectors (energy, agriculture, and metals) that are actively traded on the CME and have sufficient liquidity: Crude Oil, Natural Gas, Corn, Wheat, Gold, and Silver futures. We constructed an equally weighted portfolio of the six commodity futures for comparison with the commodity indices. Because commodity assets are closely related to macroeconomic activity, we used 19 macroeconomic indicators as model inputs, including stock market indices, export and import trade data, labor market data, and composite leading indicators: 14 US, two Chinese, and two Korean indicators. The data period runs from January 1990 to May 2017; the first 195 monthly observations were used as training data and the remaining 125 as test data. We verified that the equally weighted commodity futures portfolio rebalanced by the SVM model outperforms the commodity indices. The model's prediction accuracy for the commodity indices did not exceed 50% regardless of the SVM kernel function, whereas the accuracy for the equally weighted commodity futures portfolio was 53%. The individual commodity futures models were more accurate than the index models, especially in the agriculture and metals sectors.
The individual commodity futures portfolio excluding the energy sector outperformed the portfolio covering all three sectors. To verify the model's validity, the results should remain similar when the data period varies; we therefore also used odd-numbered years as training data and even-numbered years as test data and confirmed that the results were similar. In conclusion, when allocating commodity assets to a traditional portfolio of stocks, bonds, and cash, more effective investment performance can be obtained by investing in commodity futures rather than commodity indices, and especially through a rebalanced commodity futures portfolio designed with the SVM model.
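The workflow the abstract describes (train an SVM on macroeconomic indicators over the first 195 months, then measure directional accuracy over the last 125) can be sketched as below. The data are synthetic and the Pegasos-style linear SVM is a minimal stand-in for the paper's (unspecified) SVM implementation; dimensions mirror the paper's 19 indicators and train/test split.

```python
import numpy as np

# Sketch: predict the next month's direction (+1 up / -1 down) of a commodity
# futures portfolio from 19 macro indicators, training on the first 195
# months and testing on the remaining 125. All data are synthetic.
rng = np.random.default_rng(1)
n, p = 320, 19
X = rng.normal(size=(n, p))
true_w = rng.normal(size=p)
y = np.sign(X @ true_w + rng.normal(scale=0.5, size=n))  # synthetic direction labels

def pegasos_svm(X, y, lam=0.01, epochs=50, seed=0):
    """Train a linear SVM by Pegasos-style stochastic subgradient descent."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w) < 1:        # margin violated: hinge-loss step
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                            # margin satisfied: only shrink w
                w = (1 - eta * lam) * w
    return w

X_train, y_train = X[:195], y[:195]          # first 195 months: training
X_test, y_test = X[195:], y[195:]            # last 125 months: test
w = pegasos_svm(X_train, y_train)
accuracy = np.mean(np.sign(X_test @ w) == y_test)
print(round(accuracy, 3))                    # directional prediction accuracy
```

In the paper's setting, a rebalancing rule would then go long (or flat) each futures contract according to the predicted direction; here the synthetic signal is deliberately learnable, so accuracy is well above the ~50% the paper reports for the index models.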

Forecasting Substitution and Competition among Previous and New products using Choice-based Diffusion Model with Switching Cost: Focusing on Substitution and Competition among Previous and New Fixed Charged Broadcasting Services (전환 비용이 반영된 선택 기반 확산 모형을 통한 신.구 상품간 대체 및 경쟁 예측: 신.구 유료 방송서비스간 대체 및 경쟁 사례를 중심으로)

  • Koh, Dae-Young;Hwang, Jun-Seok;Oh, Hyun-Seok;Lee, Jong-Su
    • Journal of Global Scholars of Marketing Science
    • /
    • v.18 no.2
    • /
    • pp.223-252
    • /
    • 2008
  • In this study, we propose a choice-based diffusion model with switching cost that can forecast dynamic substitution and competition among previous and new products at both the individual and aggregate levels, especially when market data for the new products are insufficient. We apply the proposed model to the empirical case of substitution and competition among Analog Cable TV, the previous fixed-charge broadcasting service, and Digital Cable TV and Internet Protocol TV (IPTV), the new ones; verify the validity of our proposed model; and derive related empirical implications. For the empirical application, we obtained data from a survey administered by Dongseo Research to 1,000 adults aged 20 to 60 living in Seoul, Korea, in May 2007, under the title 'Demand analysis of next generation fixed interactive broadcasting services'. A conjoint survey, modified as follows, was used. First, following the traditional conjoint approach, we extracted 16 hypothetical alternative cards from an orthogonal design using important attributes and levels of next-generation interactive broadcasting services, determined from the literature and experts' comments. We then divided the 16 conjoint cards into four groups, composing four choice sets with four alternatives each, so that each respondent faced four different hypothetical choice situations. In addition, we made two modifications. First, we asked respondents to include the status-quo broadcasting service they subscribe to as another alternative in each choice set; as a result, respondents chose the most preferred alternative among five, consisting of their current subscription plus four hypothetical alternatives, in each of the four choice sets.
Modifying the traditional conjoint survey in this way enabled us to estimate factors related to switching cost or switching threshold in addition to the attribute effects. Using both revealed preference data (the current subscription) and stated preference data (the four hypothetical alternatives) also yields advantages in estimation properties and produces more conservative, realistic forecasts. Second, we asked respondents to choose the most preferred alternative while considering their expected adoption or switching timing, reported among 14 half-year points after the introduction of the next-generation broadcasting services. This produced, for each respondent, 14 observations with five alternatives per period, i.e., panel-type data; the resulting $4 \times 14 \times 1000 = 56{,}000$ observations were used to estimate the individual-level consumer adoption model. The empirical results show that forecasting demand for new products without considering existing products and switching-cost factors yields an overestimated diffusion speed at the introductory stage or otherwise distorted predictions; this verifies the validity of our proposed model, in which both are properly considered. The proposed model can also produce flexible patterns of market evolution, depending on how strongly consumer preferences for the alternatives' attributes affect individual-level state transitions, rather than following an S-shaped curve assumed a priori. Empirically, across various scenarios with diverse price combinations, IPTV is more likely than Digital Cable TV to take the advantageous position in gaining subscribers.
Meanwhile, despite inferiority in many technological attributes, Analog Cable TV, the previous product in our analysis, is likely to be substituted by the new services gradually rather than abruptly, thanks to its low service charge and the high switching cost in the fixed-charge broadcasting service market.
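The choice-probability core of such a model can be sketched as a multinomial logit in which the status-quo alternative enjoys a switching-cost advantage. The utilities and switching-cost value below are illustrative assumptions, not the paper's estimates, and the function name is hypothetical.

```python
import numpy as np

# Sketch: MNL choice probabilities over one choice set where switching away
# from the status-quo alternative incurs a utility penalty (switching cost).
def choice_probabilities(utilities, status_quo, switching_cost):
    """Multinomial-logit probabilities; switching_cost is subtracted from
    every non-status-quo alternative's utility."""
    u = np.array(utilities, dtype=float)
    penalty = np.full(len(u), switching_cost)
    penalty[status_quo] = 0.0              # no cost for keeping the current service
    v = u - penalty
    expv = np.exp(v - v.max())             # subtract max for numerical stability
    return expv / expv.sum()

# 5 alternatives, as in the survey: index 0 is the current subscription
# (status quo); indices 1-4 are hypothetical next-generation services.
utilities = [1.0, 1.5, 1.2, 0.8, 1.4]
p_no_cost = choice_probabilities(utilities, status_quo=0, switching_cost=0.0)
p_cost = choice_probabilities(utilities, status_quo=0, switching_cost=1.0)
print(round(p_no_cost[0], 3), round(p_cost[0], 3))
```

A positive switching cost raises the status-quo share in every period, which is exactly the mechanism by which the model predicts gradual rather than abrupt substitution of Analog Cable TV.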


The Effect of Attributes of Innovation and Perceived Risk on Product Attitudes and Intention to Adopt Smart Wear (스마트 의류의 혁신속성과 지각된 위험이 제품 태도 및 수용의도에 미치는 영향)

  • Ko, Eun-Ju;Sung, Hee-Won;Yoon, Hye-Rim
    • Journal of Global Scholars of Marketing Science
    • /
    • v.18 no.2
    • /
    • pp.89-111
    • /
    • 2008
  • With the development of digital technology, studies of smart wear integrated into daily life have increased rapidly. However, consumer research on perceptions of and attitudes toward smart clothing is hard to find. The purpose of this study was to identify the innovation attributes and perceived risks of smart clothing and to analyze the influence of these factors on product attitudes and adoption intention. Five hypotheses were established. H1: Perceived attributes of smart clothing other than complexity would relate positively to product attitude and purchase intention, while complexity would relate negatively. H2: Product attitude would relate positively to purchase intention. H3: Product attitude would mediate between perceived attributes and purchase intention. H4: Perceived risks of smart clothing would relate negatively to the perceived attributes other than complexity, and positively to complexity. H5: Product attitude would mediate between perceived risks and purchase intention. A self-administered questionnaire was developed based on previous studies. After a pretest, data were collected in September 2006 from university students in Korea, who are relatively sensitive to innovative products. A total of 300 usable questionnaires were analyzed with SPSS 13.0. About 60.3% of respondents were male, with a mean age of 21.3 years. About 59.3% reported being aware of smart clothing, but only 9 respondents had purchased it. The mean attitude toward smart clothing was 2.96 (SD=.56) and the mean purchase intention was 2.63 (SD=.65). Factor analysis using principal components with varimax rotation was conducted to identify the perceived attribute and perceived risk dimensions. Perceived attributes of smart wear fell into relative advantage (including compatibility), observability (including trialability), and complexity.
Perceived risks were identified as physical/performance risk, social-psychological risk, time-loss risk, and economic risk. Regression analysis was conducted to test the five hypotheses. Relative advantage and observability were significant predictors of product attitude (adj $R^2$=.223) and purchase intention (adj $R^2$=.221), while complexity negatively influenced product attitude. Product attitude was significantly related to purchase intention (adj $R^2$=.692) and partially mediated between perceived attributes and purchase intention (adj $R^2$=.698). Hypotheses one to three were therefore accepted. To test hypothesis four, the four perceived-risk dimensions and demographic variables (age, gender, monthly household income, awareness of smart clothing, and purchase experience) were entered as independent variables in the regression models. Social-psychological risk, economic risk, and gender (female) significantly predicted relative advantage (adj $R^2$=.276). With observability as the dependent variable, social-psychological risk, time-loss risk, physical/performance risk, and age (younger) were significant, in that order (adj $R^2$=.144); however, physical/performance risk related positively to observability, meaning that the more observable respondents found smart clothing, the greater the perceived probability of physical harm or performance problems. Complexity was predicted by product awareness, social-psychological risk, economic risk, and purchase experience, in that order (adj $R^2$=.114). Product awareness related negatively to complexity, meaning that high product awareness reduces the perceived complexity of smart clothing; purchase experience, however, related positively to complexity, suggesting that consumers perceive high complexity when actually using smart clothing in real life. The risk variables related positively to complexity.
Thus, to decrease complexity, it is also necessary to minimize anxiety about social-psychological harm or monetary loss, and hypothesis 4 was partially accepted. Finally, in testing hypothesis 5, social-psychological risk and economic risk were significant predictors of product attitude (adj $R^2$=.122) and purchase intention (adj $R^2$=.099), respectively. When the attitude variable was included with the risk variables as independent variables in the regression model predicting purchase intention, only attitude was significant (adj $R^2$=.691); attitude thus fully mediated between perceived risks and purchase intention, and hypothesis 5 was accepted. The findings provide guidelines for fashion and electronics businesses that aim to create and strengthen positive attitudes toward smart clothing. Marketers should consider not only the functional features of smart clothing but also its practical and aesthetic attributes, since appropriateness for social norms or self-image reduces the uncertainty of psychological or social risk, which in turn increases the relative advantage of smart clothing; indeed, social-psychological risk was significantly associated with relative advantage. Economic risk was negatively associated with both product attitude and purchase intention, suggesting that smart-wear developers must reflect on the price ranges acceptable to potential adopters. The findings on complexity will also be useful when marketers in the US plan communication strategy.
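The full-mediation test in hypothesis 5 can be sketched as two regressions: intention on risk alone, then intention on risk plus attitude; full mediation appears as the risk coefficient shrinking toward zero once attitude enters. The data below are synthetic, not the survey responses, and the effect sizes are illustrative assumptions.

```python
import numpy as np

# Sketch of a Baron & Kenny-style mediation check with synthetic data:
# risk lowers attitude, and attitude (not risk directly) drives intention.
rng = np.random.default_rng(2)
n = 300

risk = rng.normal(size=n)                                   # e.g. economic risk
attitude = -0.6 * risk + rng.normal(scale=0.5, size=n)      # risk lowers attitude
intention = 0.9 * attitude + rng.normal(scale=0.3, size=n)  # attitude drives intention

def ols(y, *xs):
    """OLS coefficients (intercept first) via least squares."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_total = ols(intention, risk)             # step 1: risk -> intention
b_full = ols(intention, risk, attitude)    # step 2: risk + attitude -> intention

print(round(b_total[1], 2), round(b_full[1], 2))
# risk's slope is clearly negative alone, but near zero once attitude enters
```

In the study's terms, this mirrors the finding that the risk variables lose significance (adj $R^2$=.691 with attitude included) while attitude remains significant, i.e., full mediation.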
