• Title/Summary/Keyword: systems-theory

Search Result 5,367

Improved Social Network Analysis Method in SNS (SNS에서의 개선된 소셜 네트워크 분석 방법)

  • Sohn, Jong-Soo;Cho, Soo-Whan;Kwon, Kyung-Lag;Chung, In-Jeong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.117-127
    • /
    • 2012
  • Due to the recent expansion of Web 2.0-based services, along with the widespread adoption of smartphones, online social network services have become popular among users. Online social network services are online community services which enable users to communicate with each other, share information and expand human relationships. In social network services, each relation between users is represented by a graph consisting of nodes and links. As the users of online social network services are increasing rapidly, SNS are actively utilized in enterprise marketing, analysis of social phenomena and so on. Social Network Analysis (SNA) is the systematic way to analyze social relationships among the members of a social network using network theory. In general, a social network consists of nodes and arcs, and it is often depicted in a social network diagram. In a social network diagram, nodes represent individual actors within the network and arcs represent relationships between the nodes. With SNA, we can measure relationships among people such as degree of intimacy, intensity of connection and classification of groups. Ever since Social Networking Services (SNS) began drawing attention from millions of users, numerous studies have been conducted to analyze their user relationships and messages. There are typical representative SNA methods: degree centrality, betweenness centrality and closeness centrality. In degree centrality analysis, the shortest path between nodes is not considered. However, it is a crucial factor in betweenness centrality, closeness centrality and other SNA methods. In previous SNA research, computation time was not a serious issue since the social networks studied were small. Unfortunately, most SNA methods require significant time to process the relevant data, which makes it difficult to apply them to the ever-increasing volume of SNS data. For instance, if the number of nodes in an online social network is n, the maximum number of links in the network is n(n-1)/2; if the number of nodes is 10,000, the maximum number of links is 49,995,000, so analyzing the full network is too expensive. Therefore, we propose a heuristic-based method for finding the shortest path among users in the SNS user graph. Using this shortest-path-finding method, we show how efficient the proposed approach is by conducting betweenness centrality analysis and closeness centrality analysis, both of which are widely used in social network studies. Moreover, we devised an enhanced method that adds a best-first-search step and a preprocessing step to reduce computation time and rapidly search for the shortest paths in a very large online social network. The best-first-search method finds the shortest path heuristically, generalizing from human experience. Since a large share of links is concentrated on only a few nodes in online social networks, most nodes have relatively few connections. As a result, a node with multiple connections functions as a hub node. When searching for a particular node, looking for users with numerous links instead of searching all users indiscriminately has a better chance of finding the desired node quickly. In this paper, we employ the degree of a user node vn as the heuristic evaluation function in a graph G = (N, E), where N is the set of vertices and E is the set of links between two different nodes. 
As the heuristic evaluation function is used, the worst case can occur when the target node is situated at the bottom of a skewed tree. To avoid this worst case, a preprocessing step is conducted. Next, we find the shortest path between two nodes in the social network efficiently and then analyze the network. To verify the proposed method, we crawled data on 160,000 people online and constructed a social network. We then compared our approach with previous methods, best-first search and breadth-first search, in terms of search and analysis time. The suggested method takes 240 seconds to search nodes, whereas the breadth-first-search-based method takes 1,781 seconds (7.4 times faster). Moreover, for social network analysis, the suggested method is 6.8 times faster for betweenness centrality analysis and 1.8 times faster for closeness centrality analysis. The proposed method thus shows that a large social network can be analyzed with better time performance. As a result, our method would improve the efficiency of social network analysis, making it particularly useful in studying social trends or phenomena.
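The degree heuristic described above can be illustrated with a short sketch. The following Python code is not the authors' implementation; it assumes a NetworkX graph and simply expands high-degree (hub) neighbors first, returning the first path it reaches, which is a heuristic rather than a guaranteed shortest path. The toy Barabási-Albert graph stands in for the crawled SNS data.

```python
import heapq
import networkx as nx

def degree_guided_path(G, source, target):
    """Best-first search that expands high-degree (hub) neighbors first.

    A sketch of the degree-heuristic idea in the abstract: frontier nodes are
    ordered by descending degree, and the first path reaching the target is
    returned. The result is heuristic, not guaranteed to be the shortest path.
    """
    visited = {source}
    # priority queue ordered by negative degree so hubs are popped first
    frontier = [(-G.degree(source), source, [source])]
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == target:
            return path
        for nbr in G.neighbors(node):
            if nbr not in visited:
                visited.add(nbr)
                heapq.heappush(frontier, (-G.degree(nbr), nbr, path + [nbr]))
    return None  # no path found

# Toy usage on a scale-free graph standing in for the crawled SNS data.
G = nx.barabasi_albert_graph(1000, 3, seed=42)
print(degree_guided_path(G, 0, 999))
```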

A Study on Commodity Asset Investment Model Based on Machine Learning Technique (기계학습을 활용한 상품자산 투자모델에 관한 연구)

  • Song, Jin Ho;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.4
    • /
    • pp.127-146
    • /
    • 2017
  • Services using artificial intelligence have begun to emerge in daily life. Artificial intelligence is applied to consumer electronics and communications products such as artificial intelligence refrigerators and speakers. In the financial sector, Kensho's artificial intelligence technology improved the stock trading process at Goldman Sachs: for example, two stock traders could handle the work of 600, and analytical work that took 15 people 4 weeks could be processed in 5 minutes. In particular, big data analysis through machine learning, one field of artificial intelligence, is actively applied throughout the financial industry. Stock market analysis and investment modeling through machine learning are also actively studied. The linearity limitations of traditional financial time series studies are overcome by using machine learning techniques such as artificial intelligence prediction models. Quantitative studies based on past stock-market-related numerical data are widely performed using artificial intelligence to forecast future movements of stock prices or indices. Various other studies have been conducted to predict the future direction of the market or the stock price of companies by learning from large amounts of text data such as news and comments related to the stock market. Investing in commodity assets, one class of alternative assets, is usually used to enhance the stability and safety of a traditional stock and bond portfolio. There is relatively little research on investment models for commodity assets compared with mainstream assets like equities and bonds. Recently, machine learning techniques have been widely applied in finance, especially to stock and bond investment models, producing better trading models and changing the whole financial area. In this study we built an investment model using the Support Vector Machine (SVM), one of the machine learning models. Some studies on commodity assets focus on price prediction for a specific commodity, but it is hard to find research on commodity investment models for asset allocation using machine learning. We propose a method of forecasting four major commodity indices, a portfolio made of commodity futures, and individual commodity futures using the SVM model. The four major commodity indices are the Goldman Sachs Commodity Index (GSCI), Dow Jones UBS Commodity Index (DJUI), Thomson Reuters/Core Commodity CRB Index (TRCI), and Rogers International Commodity Index (RI). We selected two individual futures from each of three sectors, energy, agriculture, and metals, that are actively traded on the CME market and have enough liquidity: Crude Oil, Natural Gas, Corn, Wheat, Gold and Silver futures. We constructed an equally weighted portfolio of the six commodity futures for comparison with the commodity indices. We set 19 macroeconomic indicators, including stock market indices, export and import trade data, labor market data, and composite leading indicators, as the input data of the model, because commodity assets are closely related to macroeconomic activity. They are 14 US economic indicators, two Chinese economic indicators and two Korean economic indicators. The data period is from January 1990 to May 2017. We set the first 195 monthly observations as training data and the remaining 125 monthly observations as test data. 
In this study, we verified that the performance of the equally weighted commodity futures portfolio rebalanced by the SVM model is better than that of the commodity indices. The prediction accuracy of the model for the commodity indices does not exceed 50% regardless of the SVM kernel function. On the other hand, the prediction accuracy for the equally weighted commodity futures portfolio is 53%. The prediction accuracy of the individual commodity futures models is better than that of the commodity index models, especially in the agriculture and metal sectors. The individual commodity futures portfolio excluding the energy sector outperformed the portfolio covering all three sectors. To verify the validity of the model, the analysis results should remain similar despite variations in the data period, so we also used the odd-numbered-year data as training data and the even-numbered-year data as test data and confirmed that the results are similar. As a result, when we allocate commodity assets to a traditional portfolio composed of stocks, bonds, and cash, we can achieve more effective investment performance by investing in commodity futures rather than in commodity indices. In particular, we can obtain better performance with a commodity futures portfolio rebalanced by the SVM model.
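As a rough illustration of the modeling setup, the following sketch uses scikit-learn's SVC on a hypothetical monthly data file; the file name, column names, kernel settings and the 195/125 split mirror the abstract's description but are assumptions, not the paper's actual data or tuned parameters.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Hypothetical monthly data: macroeconomic indicator columns plus the
# next-month direction (1 = up, 0 = down) of the commodity futures portfolio.
df = pd.read_csv("commodity_monthly.csv")          # assumed file layout
X = df.drop(columns=["next_month_up"]).values
y = df["next_month_up"].values

# First 195 months for training, the remaining months for testing,
# mirroring the split described in the abstract.
X_train, X_test = X[:195], X[195:]
y_train, y_test = y[:195], y[195:]

scaler = StandardScaler().fit(X_train)
model = SVC(kernel="rbf", C=1.0, gamma="scale")    # kernel choice is illustrative
model.fit(scaler.transform(X_train), y_train)

pred = model.predict(scaler.transform(X_test))
print("directional accuracy:", accuracy_score(y_test, pred))
```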

A Study on the ' Zhe Zhong Pai'(折衷派) of the Traditional Medicine of Japan (일본(日本) 의학(醫學)의 '절충파(折衷派)'에 관(關)한 연구(硏究))

  • Park, Hyun-Kuk;Kim, Ki-Wook
    • The Journal of Dong Guk Oriental Medicine
    • /
    • v.10
    • /
    • pp.41-61
    • /
    • 2008
  • The outline and characteristics of the important doctors of the 'Zhe Zhong Pai'(折衷派) are as follows. Part 1. In the late Edo(江戶) period, the 'Zhe Zhong Pai' appeared, which tried to take the theory and clinical treatment of the 'Hou Shi Pai'(後世派) and the 'Gu Fang Pai'(古方派) and combine their strong points to perfect treatment. Their point was 'the main part is the art of the ancients, the latter prescriptions are to be used'(以古法爲主, 後世方爲用), and the "Shang Han Lun(傷寒論)" was revered for its treatments but in actual use it was not kept at that. As mentioned above, the 'Zhe Zhong Pai' viewed treatment as the base, which was the view of most doctors in the Edo period. However, the reason the 'Zhe Zhong Pai' is not valued as much as the 'Gu Fang Pai' by medical history books in Japan is that the 'Zhe Zhong Pai' does not have the substantiation or uniqueness of the 'Gu Fang Pai', and also that its view of 'gather as well as store up'(兼收並蓄) was the same as that of the 'Kao Zheng Pai'. Moreover, the 'compromise'(折衷) point of view came from taking in both Chinese and Western medical knowledge systems(漢蘭折衷). Generally the pioneer of the 'Zhe Zhong Pai' is seen as Mochizuki Rokumon(望月鹿門), followed by Fukui Futei(福井楓亭), Wadato Kaku(和田東郭), Yamada Seichin(山田正珍) and Taki Motohiro(多紀元簡). Part 2. The lives of Wada Tokaku(和田東郭), Nakagame Kinkei(中神琴溪) and Nei Teng Xi Zhe(內藤希哲), the important doctors of the 'Zhe Zhong Pai', are as follows. First, Wada Tokaku(和田東郭, 1743-1803) was born when the 'Hou Shi Pai' was already declining and the 'Gu Fang Pai' was flourishing, and learned medicine from a 'Hou Shi Pai' doctor, Hu Tian Xu Shan(戶田旭山), and a 'Gu Fang Pai' doctor, Yoshimasu Todo(吉益東洞). He was not hindered by 'the old ways'(古方) and did not lean towards 'the new ways'(後世方), and formed a way of compromise that 'looked at hardness and softness as the same'(剛柔相摩) by setting 'the cure of the disease' as the base; he said that to cure diseases 'the old way' must be used, but 'the new way' was necessary to supplement its shortcomings. His works include "Dao Shui Suo Yan(導水瑣言)", "Jiao Chiang Fang Yi Je(蕉窗方意解)" and "Yi Xue Sho(醫學說)". Second, Nakagame Kinkei(中神琴溪, 1744-1833) was famous for leaving Yoshimasu Todo(吉益東洞) and changing to the 'Zhe Zhong Pai', and in his early years used qing fen(輕粉) to cure geisha(妓女) of syphilis. His arguments were that "the "Shang Han Lun" must be revered but needs to be adapted", that "Zhong Jing can be made into a follower but I cannot become his follower", and that "later medical texts such as the "Ru Men Shi Qin(儒門事親)" should only be used for their prescriptions and not their theories". His works include "Shang Han Lun Yue Yan(傷寒論約言)". Third, Nei Teng Xi Zhe(內藤希哲, 1701-1735) learned medicine from Qing Shui Xian Sheng(淸水先生) and went out to Edo. In his book "Yi Jing Jie Huo Lun(醫經解惑論)" he tells of how he went from 'learning'(學) to 'skepticism'(惑) and how skepticism made him learn, in 'the six skepticisms'(六惑). In his later years Xi Zhe(希哲) combined the "Shen Nong Ben Cao Jing(神農本草經)", the main text for herbal medicine, the "Ming Tang Jing(明堂經)" of acupuncture, and the basic theory texts "Huang Di Nei Jing(皇帝內經)" and "Nan Jing(難經)" with the "Shang Han Za Bing Lun", a book that the 'Gu Fang Pai' saw as opposed to the rest, and became 'an expert of five scriptures'(五經一貫). Part 3. 
Asada Showhaku(淺田宗伯, 1815-1894) started medicine under Zhong Cun Zhong Zong(中村中倧) and learned 'the old way'(古方) from Yoshimasu Todo, gained experience through Chuan Yue(川越) and Fu Jing(福井), and received teachings in texts, history and Wang Yangming's principles(陽明學) from famous teachers. Showhaku(宗伯) met a medical official of the bakufu(幕府), Ben Kang Zong Yuan(本康宗圓), and received help from the three great doctors of the Edo period, Taki Motokato(多紀元堅), Xiao Dao Xue Gu(小島學古) and Xi Duo Cun Kao(喜多村栲窻), and further developed his arts. At 47 he diagnosed the general Jia Mao(家茂) with 'heart failure from beriberi'(脚氣衝心) and became a Zheng Shi(徵士); at 51 he cured a minister from France and received a present from Napoleon; at 65 he became the court physician and saved Ming Gong(明宮) Jia Ren Qin Wang(嘉仁親王, later the 大正天皇) from bodily convulsions, becoming 'the vassal of merit who saved the national polity'(國體). In the 7th year of the Meiji(明治) era he became the second owner of Wen Zhi She(溫知社) and took part in the 'kampo continuation movement'. In his later years he saw 14,000 patients a year, from which we can estimate the quality and quantity of his clinical skills. Showhaku(宗伯) wrote over 80 books including the "Ju Chuang Shu Ying(橘窻書影)", "Wu Wu Yao Shi Fang Han(勿誤藥室方函)", "Shang Han Biang Shu(傷寒辨術)", "Jing Qi Shen Lun(精氣神論)", "Huang Guo Ming Yi Chuan(皇國名醫傳)" and the "Xian Jhe Yi Hua(先哲醫話)". Especially in the "Ju Chuang Shu Ying(橘窻書影)" he says "the old theories are the main, and the new prescriptions are to be used"(以古法爲主, 後世方爲用), stating the 'Zhe Zhong Pai' way of thinking. In the first volume, 'Zong Ping'(總評), of the "Shang Han Biang Shu(傷寒辨術)" and the "Za Bing Lun Shi(雜病論識)", he discerns the parts that are not Zhang Zhong Jing's writings and emphasizes his theories and practical uses.


A Study on the 'Zhe Zhong Pai'(折衷派) of the Traditional Medicine of Japan (일본(日本) 의학(醫學)의 '절충파(折衷派)'에 관(關)한 연구(硏究))

  • Park, Hyun-Kuk;Kim, Ki-Wook
    • Journal of Korean Medical classics
    • /
    • v.20 no.3
    • /
    • pp.121-141
    • /
    • 2007
  • The outline and characteristics of the important doctors of the 'Zhe Zhong Pai'(折衷派) are as follows. Part 1. In the late Edo(江戶) period, the 'Zhe Zhong Pai' appeared, which tried to take the theory and clinical treatment of the 'Hou Shi Pai'(後世派) and the 'Gu Fang Pai'(古方派) and combine their strong points to perfect treatment. Their point was 'the main part is the art of the ancients, the latter prescriptions are to be used'(以古法爲主, 後世方爲用), and the "Shang Han Lun(傷寒論)" was revered for its treatments but in actual use it was not kept at that. As mentioned above, the 'Zhe Zhong Pai' viewed treatment as the base, which was the view of most doctors in the Edo period. However, the reason the 'Zhe Zhong Pai' is not valued as much as the 'Gu Fang Pai' by medical history books in Japan is that the 'Zhe Zhong Pai' does not have the substantiation or uniqueness of the 'Gu Fang Pai', and also that its view of 'gather as well as store up' was the same as that of the 'Kao Zheng Pai'. Moreover, the 'compromise'(折衷) point of view came from taking in both Chinese and Western medical knowledge systems(漢蘭折衷). Generally the pioneer of the 'Zhe Zhong Pai' is seen as Mochizuki Rokumon(望月鹿門), followed by Fukui Futei(福井楓亭), Wadato Kaku(和田東郭), Yamada Seichin(山田正珍) and Taki Motohiro(多紀元簡). Part 2. The lives of Wada Tokaku(和田東郭), Nakagame Kinkei(中神琴溪) and Nei Teng Xi Zhe(內藤希哲), the important doctors of the 'Zhe Zhong Pai', are as follows. First, Wada Tokaku(和田東郭, 1743-1803) was born when the 'Hou Shi Pai' was already declining and the 'Gu Fang Pai' was flourishing, and learned medicine from a 'Hou Shi Pai' doctor, Hu Tian Xu Shan(戶田旭山), and a 'Gu Fang Pai' doctor, Yoshimasu Todo(吉益東洞). He was not hindered by 'the old ways'(古方) and did not lean towards 'the new ways'(後世方), and formed a way of compromise that 'looked at hardness and softness as the same'(剛柔相摩) by setting 'the cure of the disease' as the base; he said that to cure diseases 'the old way' must be used, but 'the new way' was necessary to supplement its shortcomings. His works include "Dao Shui Suo Yan", "Jiao Chiang Fang Yi Je" and "Yi Xue Sho(醫學說)". Second, Nakagame Kinkei(中神琴溪, 1744-1833) was famous for leaving Yoshimasu Todo(吉益東洞) and changing to the 'Zhe Zhong Pai', and in his early years used qing fen(輕粉) to cure geisha(妓女) of syphilis. His arguments were that "the "Shang Han Lun" must be revered but needs to be adapted", that "Zhong Jing can be made into a follower but I cannot become his follower", and that "later medical texts such as the "Ru Men Shi Qin(儒門事親)" should only be used for their prescriptions and not their theories". His works include "Shang Han Lun Yue Yan(傷寒論約言)". Third, Nei Teng Xi Zhe(內藤希哲, 1701-1735) learned medicine from Qing Shui Xian Sheng(淸水先生) and went out to Edo. In his book "Yi Jing Jie Huo Lun(醫經解惑論)" he tells of how he went from 'learning'(學) to 'skepticism'(惑) and how skepticism made him learn, in 'the six skepticisms'(六惑). In his later years Xi Zhe(希哲) combined the "Shen Nong Ben Cao Jing(神農本草經)", the main text for herbal medicine, the "Ming Tang Jing(明堂經)" of acupuncture, and the basic theory texts "Huang Di Nei Jing(黃帝內徑)" and "Nan Jing(難經)" with the "Shang Han Za Bing Lun", a book that the 'Gu Fang Pai' saw as opposed to the rest, and became 'an expert of five scriptures'(五經一貫). Part 3. 
Asada Showhaku(淺田宗伯, 1815-1894) started medicine under Zhong Cun Zhong(中村中倧) and learned 'the old way'(古方) from Yoshimasu Todo, gained experience through Chuan Yue(川越) and Fu Jing(福井), and received teachings in texts, history and Wang Yangming's principles(陽明學) from famous teachers. Showhaku(宗伯) met a medical official of the bakufu(幕府), Ben Kang Zong Yuan(本康宗圓), and received help from the three great doctors of the Edo period, Taki Motokato(多紀元堅), Xiao Dao Xue Gu(小島學古) and Xi Duo Cun Kao Chuang, and further developed his arts. At 47 he diagnosed the general Jia Mao(家茂) with 'heart failure from beriberi'(脚氣衝心) and became a Zheng Shi(徵士); at 51 he cured a minister from France and received a present from Napoleon; at 65 he became the court physician and saved Ming Gong(明宮) Jia Ren Qin Wang(嘉仁親王, later the 大正天皇) from bodily convulsions, becoming 'the vassal of merit who saved the national polity'(國體). In the 7th year of the Meiji(明治) era he became the second owner of Wen Zhi She(溫知社) and took part in the 'kampo continuation movement'. In his later years he saw 14,000 patients a year, from which we can estimate the quality and quantity of his clinical skills. Showhaku(宗伯) wrote over 80 books including the "Ju Chuang Shu Ying(橘窓書影)", "Wu Wu Yao Shi Fang Han(勿誤藥室方函)", "Shang Han Biang Shu(傷寒辨術)", "Jing Qi Shen Lun(精氣神論)", "Huang Guo Ming Yi Chuan(皇國名醫傳)" and the "Xian Jhe Yi Hua(先哲醫話)". Especially in the "Ju Chuang Shu Ying(橘窓書影)" he says "the old theories are the main, and the new prescriptions are to be used"(以古法爲主, 後世方爲用), stating the 'Zhe Zhong Pai' way of thinking. In the first volume, 'Zong Ping'(總評), of the "Shang Han Biang Shu(傷寒辨術)" and the "Za Bing Lun Shi(雜病論識)", he discerns the parts that are not Zhang Zhong Jing's writings and emphasizes his theories and practical uses.


Structural features and Diffusion Patterns of Gartner Hype Cycle for Artificial Intelligence using Social Network analysis (인공지능 기술에 관한 가트너 하이프사이클의 네트워크 집단구조 특성 및 확산패턴에 관한 연구)

  • Shin, Sunah;Kang, Juyoung
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.107-129
    • /
    • 2022
  • It is important to preempt new technology because technology competition is getting much tougher. Stakeholders continuously conduct exploration activities in order to preempt new technology at the right time. Gartner's Hype Cycle has significant implications for stakeholders. The Hype Cycle is an expectation graph for new technologies that combines the technology life cycle (S-curve) with the hype level. Stakeholders such as R&D investors, CTOs (Chief Technology Officers) and technical personnel are very interested in Gartner's Hype Cycle for new technologies, because high expectations for new technologies can bring opportunities to maintain investment by securing the legitimacy of R&D investment. However, contrary to the high interest from industry, previous research has faced limitations in terms of empirical methods and source data (news, academic papers, search traffic, patents, etc.). In this study, we focused on two research questions. The first research question was 'Is there a difference in the characteristics of the network structure at each stage of the hype cycle?'. To answer this question, the structural characteristics of each stage were examined through the component cohesion size. The second research question was 'Is there a pattern of diffusion at each stage of the hype cycle?'. This question was addressed through the centralization index and network density. The centralization index is a concept of variance, and a higher centralization index means that a small number of nodes are central in the network. Concentration on a small number of nodes indicates a star network structure. In network-structure terms, a star network is a centralized structure and shows better diffusion performance than a decentralized (circle) network, because the nodes at the center of information transfer can judge useful information and deliver it to other nodes the fastest. We therefore examined the out-degree centralization index and in-degree centralization index for each stage. For this purpose, we examined the structural features of the community and the expectation diffusion patterns using Social Network Service (SNS) data for the 'Gartner Hype Cycle for Artificial Intelligence, 2021'. Twitter data for 30 technologies (excluding four technologies) listed in the 'Gartner Hype Cycle for Artificial Intelligence, 2021' were analyzed. Analysis was performed using the R program (version 4.1.1) and Cyram NetMiner. From October 31, 2021 to November 9, 2021, 6,766 tweets were collected through the Twitter API and converted into relations between the tweeting user (source) and the retweeting users (target). As a result, 4,124 edge lists were analyzed. As a result of the study, we confirmed the structural features and diffusion patterns by analyzing the component cohesion size, degree centralization and density. Through this study, we confirmed that the number of components in the group of each stage increased as time passed and the density decreased. Also, the 'Innovation Trigger' group, which is interested in new technologies like the early adopters of innovation diffusion theory, had a high out-degree centralization index, while the other groups had higher in-degree than out-degree centralization indices. It can be inferred that the 'Innovation Trigger' group has the biggest influence, and that diffusion gradually slows down in the subsequent groups. In this study, network analysis was conducted using social network service data, unlike the methods of previous research. 
This is significant in that it provides an idea for expanding the methods of analysis when analyzing Gartner's Hype Cycle in the future. In addition, the fact that innovation diffusion theory was applied to the stages of Gartner's Hype Cycle for artificial intelligence can be evaluated positively, because the Hype Cycle's theoretical weakness has been repeatedly discussed. It is also expected that this study will provide stakeholders with a new perspective on decision-making about technology investment.
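The density and Freeman-style degree centralization measures discussed above can be computed from a retweet edge list as in the following sketch; the normalization by (n-1)^2 and the toy edges are illustrative assumptions, not the paper's NetMiner computation.

```python
import networkx as nx

def degree_centralization(G, mode="in"):
    """Freeman-style degree centralization for a directed retweet network.

    A sketch: values near 1 indicate a star-like (centralized) structure,
    values near 0 a decentralized one. Normalization by (n-1)^2 treats a
    directed star as the most centralized reference network.
    """
    n = G.number_of_nodes()
    if n < 3:
        return 0.0
    degs = dict(G.in_degree()) if mode == "in" else dict(G.out_degree())
    max_deg = max(degs.values())
    return sum(max_deg - d for d in degs.values()) / ((n - 1) ** 2)

# Hypothetical edge list: (tweeting user, retweeting user) pairs for one stage.
edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c")]
G = nx.DiGraph(edges)
print("density:", nx.density(G))
print("in-degree centralization:", degree_centralization(G, "in"))
print("out-degree centralization:", degree_centralization(G, "out"))
```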

Development of Predictive Models for Rights Issues Using Financial Analysis Indices and Decision Tree Technique (경영분석지표와 의사결정나무기법을 이용한 유상증자 예측모형 개발)

  • Kim, Myeong-Kyun;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.4
    • /
    • pp.59-77
    • /
    • 2012
  • This study focuses on predicting which firms will increase capital by issuing new stocks in the near future. Many stakeholders, including banks, credit rating agencies and investors, perform a variety of analyses of firms' growth, profitability, stability, activity, productivity, etc., and regularly report the firms' financial analysis indices. In this paper, we develop predictive models for rights issues using these financial analysis indices and data mining techniques. This study approaches the building of the predictive models from two perspectives. The first is the analysis period. We divide the analysis period into before and after the IMF financial crisis, and examine whether there is a difference between the two periods. The second is the prediction horizon. In order to predict when firms will increase capital by issuing new stocks, the prediction horizon is categorized as one year, two years and three years later. Therefore, a total of six prediction models are developed and analyzed. In this paper, we employ the decision tree technique to build the prediction models for rights issues. The decision tree is the most widely used prediction method, building trees to label or categorize cases into a set of known classes. In contrast to neural networks, logistic regression and SVM, decision tree techniques are well suited for high-dimensional applications and have strong explanation capabilities. There are well-known decision tree induction algorithms such as CHAID, CART, QUEST and C5.0. Among them, we use the C5.0 algorithm, which is the most recently developed and yields better performance than the other algorithms. We obtained data on rights issues and financial analysis from TS2000 of the Korea Listed Companies Association. A record of financial analysis data consists of 89 variables, which include 9 growth indices, 30 profitability indices, 23 stability indices, 6 activity indices and 8 productivity indices. For model building and testing, we used 10,925 financial analysis records from a total of 658 listed firms. PASW Modeler 13 was used to build C5.0 decision trees for the six prediction models. A total of 84 variables among the financial analysis data are selected as the input variables of each model, and the rights issue status (issued or not issued) is defined as the output variable. To develop the prediction models using the C5.0 node (Node Options: Output type = Rule set, Use boosting = false, Cross-validate = false, Mode = Simple, Favor = Generality), we used 60% of the data for model building and 40% of the data for model testing. The results of the experimental analysis show that the prediction accuracies for the period after the IMF financial crisis (68.78% to 71.41%) are about 10 percentage points higher than those for the period before the crisis (59.04% to 60.43%). These results indicate that since the IMF financial crisis, the reliability of financial analysis indices has increased and firms' intentions regarding rights issues have become more evident. The experimental results also show that stability-related indices have a major impact on conducting a rights issue in the case of short-term prediction. On the other hand, the long-term prediction of conducting a rights issue is affected by financial analysis indices on profitability, stability, activity and productivity. All the prediction models include the industry code as one of the significant variables. This means that companies in different types of industries show different patterns of rights issues. 
We conclude that it is desirable for stakeholders to take into account stability-related indices for short-term prediction and a wider range of financial analysis indices for long-term prediction. The current study has several limitations. First, we need to compare the differences in accuracy obtained with other data mining techniques such as neural networks, logistic regression and SVM. Second, we need to develop and evaluate new prediction models that include variables which research on capital structure theory has identified as relevant to rights issues.
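C5.0 itself is a commercial algorithm, used in the study through PASW Modeler; as a stand-in, the following sketch reproduces the same build/test workflow with scikit-learn's entropy-based decision tree on a hypothetical indices file, with assumed file and column names.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Hypothetical layout: financial-analysis index columns plus a binary
# rights-issue flag, mirroring the abstract's input/output definition.
df = pd.read_csv("financial_indices.csv")
X = df.drop(columns=["rights_issue"])
y = df["rights_issue"]

# 60% build / 40% test split, as in the study.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=0)

# Entropy-based tree as a rough stand-in for the C5.0 induction algorithm.
tree = DecisionTreeClassifier(criterion="entropy", min_samples_leaf=20)
tree.fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, tree.predict(X_te)))
```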

An Analysis of the Moderating Effects of User Ability on the Acceptance of an Internet Shopping Mall (인터넷 쇼핑몰 수용에 있어 사용자 능력의 조절효과 분석)

  • Suh, Kun-Soo
    • Asia pacific journal of information systems
    • /
    • v.18 no.4
    • /
    • pp.27-55
    • /
    • 2008
  • Due to the increasing and intensifying competition in the Internet shopping market, it has been recognized as very important to develop an effective policy and strategy for acquiring loyal customers. For this reason, web site designers need to know whether a new Internet shopping mall (ISM) will be accepted. Researchers have been working on identifying factors that explain and predict user acceptance of an ISM. Some studies, however, revealed inconsistent findings on the antecedents of user acceptance of a website. Lack of consideration for individual differences in user ability is believed to be one of the key reasons for the mixed findings. The elaboration likelihood model (ELM) and several studies have suggested that individual differences in ability play a moderating role in the relationship between the antecedents and user acceptance. Despite the critical role of user ability, little research has examined this role in the Internet shopping mall context. The purpose of this study is to develop a user acceptance model that considers the moderating role of user ability in the context of Internet shopping. This study was initiated to see how well the technology acceptance model (TAM) explains the acceptance of a specific ISM. According to TAM, which is one of the most influential models for explaining user acceptance of IT, an intention to use IT is determined by usefulness and ease of use. Given that interaction between user and website takes place through the web interface, the decisions to accept and continue using an ISM depend on these beliefs. However, TAM neglects the fact that many users will not stick with an ISM until they trust it, even if they think it useful and easy to use. The importance of trust for user acceptance of an ISM has been raised by the relational view, which emphasizes the trust-building process between the user and the ISM and treats the user's trust in the website as a major determinant of user acceptance. The proposed model extends and integrates the TAM and relational views of user acceptance of an ISM by incorporating usefulness, ease of use, and trust. User acceptance is defined as a user's intention to reuse a specific ISM, and user ability is introduced into the model as a moderating variable. Here, user ability is defined as the degree of experience, knowledge and skills regarding Internet shopping sites. The research model proposes that the ease of use, usefulness and trust of an ISM are key determinants of user acceptance. In addition, this paper hypothesizes that the effects of the antecedents (i.e., ease of use, usefulness, and trust) on user acceptance may differ among users. In particular, this paper proposes a moderating effect of a user's ability on the relationship between the antecedents and the user's intention to reuse. The research model with eleven hypotheses was derived and tested through a survey that involved 470 university students. For each research variable, this paper used measurement items recognized for reliability and widely used in previous research, slightly modified to suit the research context. The reliability and validity of the research variables were tested using Cronbach's alpha and internal consistency reliability (ICR) values, standard factor loadings of the confirmatory factor analysis, and average variance extracted (AVE) values. The LISREL method was used to test the suitability of the research model and its related hypotheses. 
Key findings of the results are summarized as follows. First, TAM's two constructs, ease of use and usefulness, directly affect user acceptance. In addition, ease of use indirectly influences user acceptance by affecting trust. This implies that users tend to trust a shopping site and visit it repeatedly when they perceive a specific ISM as easy to use. Accordingly, designing a shopping site that allows users to navigate heuristically and with minimal clicks for finding information and products within the site is important for improving the site's trust and acceptance. Usefulness, however, was not found to influence trust. Second, among the three belief constructs (ease of use, usefulness, and trust), trust was empirically supported as the most important determinant of user acceptance. This implies that users require trustworthiness from an Internet shopping site to become repeat visitors of an ISM. Providing a sense of safety and eliminating the anxiety of online shoppers in relation to privacy, security, delivery, and product returns are critically important conditions for acquiring repeat visitors. Hence, in addition to usefulness and ease of use as in TAM, trust should be a fundamental determinant of user acceptance in the context of Internet shopping. Third, the user's ability to use an Internet shopping site played a moderating role. For users with low ability, ease of use was found to be a more important factor in deciding to reuse the shopping mall, whereas usefulness and trust had more effect on users with high ability. Applying the ELM to these findings, we can suggest that experienced and knowledgeable ISM users tend to elaborate on such usefulness aspects as efficient and effective shopping performance and such trust factors as ability, benevolence, integrity, and predictability of a shopping site before they become repeat visitors of the site. In contrast, novice users tend to rely on low-elaboration features, such as perceived ease of use. The existence of moderating effects suggests that different individuals evaluate an ISM from different perspectives. Expert users are more interested in the outcome of the visit (usefulness) and trustworthiness (trust) than novice visitors. The latter evaluate the ISM in a more superficial manner, focusing on the novelty of the site and on other instrumental beliefs (ease of use). This is consistent with the insights proposed by the heuristic-systematic model, according to which users act on the principle of minimum effort. Thus, the user considers an ISM heuristically, focusing on those aspects that are easy to process and evaluate (ease of use). When the user has sufficient experience and skills, the user will change to systematic processing, evaluating more complex aspects of the site (its usefulness and trustworthiness). This implies that an ISM has to provide a minimum level of ease of use to make it possible for a user to evaluate its usefulness and trustworthiness. Ease of use is a necessary but not sufficient condition for the acceptance and use of an ISM. Overall, the empirical results generally support the proposed model and identify the moderating effect of user ability. More detailed interpretations and implications of the findings are discussed, and the limitations of this study are also discussed to provide directions for future research.
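To make the idea of a moderating effect concrete, the following sketch runs a moderated regression with an ability × ease-of-use interaction term on simulated survey scores; this is an illustrative stand-in on fabricated-for-demo data, not the LISREL structural equation analysis the paper actually uses, and all variable names and coefficients are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated survey data: scale scores for ease of use, usefulness, trust,
# reuse intention, plus a standardized user-ability score.
rng = np.random.default_rng(0)
n = 470
df = pd.DataFrame({
    "ease": rng.normal(5, 1, n),
    "useful": rng.normal(5, 1, n),
    "trust": rng.normal(4.5, 1, n),
    "ability": rng.normal(0, 1, n),
})
df["reuse"] = (0.3 * df["ease"] + 0.4 * df["useful"] + 0.5 * df["trust"]
               - 0.2 * df["ease"] * df["ability"] + rng.normal(0, 1, n))

# Moderated regression: a significant ease:ability term means the effect of
# ease of use on reuse intention depends on user ability (a moderating role).
model = smf.ols("reuse ~ ease + useful + trust + ability + ease:ability",
                data=df).fit()
print(model.summary().tables[1])
```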

APPLICATION OF FUZZY SET THEORY IN SAFEGUARDS

  • Fattah, A.;Nishiwaki, Y.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1051-1054
    • /
    • 1993
  • The International Atomic Energy Agency's Statute, in Article III.A.5, allows it "to establish and administer safeguards designed to ensure that special fissionable and other materials, services, equipment, facilities and information made available by the Agency or at its request or under its supervision or control are not used in such a way as to further any military purpose; and to apply safeguards, at the request of the parties, to any bilateral or multilateral arrangement, or at the request of a State, to any of that State's activities in the field of atomic energy". Safeguards are essentially a technical means of verifying the fulfilment of political obligations undertaken by States and given legal force in international agreements relating to the peaceful uses of nuclear energy. The main political objectives are: to assure the international community that States are complying with their non-proliferation and other peaceful undertakings; and to deter (a) the diversion of safeguarded nuclear materials to the production of nuclear explosives or for military purposes and (b) the misuse of safeguarded facilities with the aim of producing unsafeguarded nuclear material. It is clear that no international safeguards system can physically prevent diversion. The IAEA safeguards system is basically a verification measure designed to provide assurance in those cases in which diversion has not occurred. Verification is accomplished by two basic means: material accountancy, and containment and surveillance measures. Nuclear material accountancy is the fundamental IAEA safeguards mechanism, while containment and surveillance serve as important complementary measures. Material accountancy refers to a collection of measurements and other determinations which enable the State and the Agency to maintain a current picture of the location and movement of nuclear material into and out of material balance areas, i.e. areas where all material entering or leaving is measurable. A containment measure is one that is designed, by taking advantage of structural characteristics such as containers, tanks or pipes, to establish the physical integrity of an area or item by preventing the undetected movement of nuclear material or equipment. Such measures involve the application of tamper-indicating or surveillance devices. Surveillance refers to both human and instrumental observation aimed at indicating the movement of nuclear material. The verification process consists of three overlapping elements: (a) provision by the State of information such as design information describing nuclear installations, accounting reports listing nuclear material inventories, receipts and shipments, documents amplifying and clarifying reports, as applicable, and notification of international transfers of nuclear material; (b) collection by the IAEA of information through inspection activities such as verification of design information, examination of records and reports, measurement of nuclear material, examination of containment and surveillance measures, and follow-up activities in case of unusual findings; (c) evaluation of the information provided by the State and of that collected by inspectors to determine the completeness, accuracy and validity of the information provided by the State and to resolve any anomalies and discrepancies. 
To design an effective verification system, one must identify possible ways and means by which nuclear material could be diverted from peaceful uses, including means to conceal such diversions. These theoretical ways and means, which have become known as diversion strategies, are used as one of the basic inputs for the development of safeguards procedures, equipment and instrumentation. For the analysis of implementation strategy, it is assumed that non-compliance cannot be excluded a priori and that consequently there is a low but non-zero probability that a diversion could be attempted in all safeguards situations. An important element of diversion strategies is the identification of the various possible diversion paths: the amount, type and location of nuclear material involved, the physical route and conversion of the material that may take place, the rate of removal and the concealment methods, as appropriate. With regard to the physical route and conversion of nuclear material, the following main categories may be considered: unreported removal of nuclear material from an installation or during transit; unreported introduction of nuclear material into an installation; unreported transfer of nuclear material from one material balance area to another; unreported production of nuclear material, e.g. enrichment of uranium or production of plutonium; and undeclared uses of the material within the installation. With respect to the amount of nuclear material that might be diverted in a given time (the diversion rate), the continuum between the following two limiting cases is considered: one significant quantity or more in a short time, often known as abrupt diversion; and one significant quantity or more per year, for example by accumulation of smaller amounts each time so as to add up to a significant quantity over a period of one year, often called protracted diversion. Concealment methods may include: restriction of access of inspectors; falsification of records, reports and other material balance areas; replacement of nuclear material, e.g. use of dummy objects; falsification of measurements or of their evaluation; and interference with IAEA-installed equipment. As a result of diversion and its concealment or other actions, anomalies will occur. All reasonable diversion routes, scenarios/strategies and concealment methods have to be taken into account in designing safeguards implementation strategies so as to provide sufficient opportunities for the IAEA to observe such anomalies. The safeguards approach for each facility will make a different use of these procedures, equipment and instrumentation according to the various diversion strategies which could be applicable to that facility and according to the detection and inspection goals which are applied. Postulated pathway sets of scenarios comprise those elements of diversion strategies which might be carried out at a facility or across a State's fuel cycle with declared or undeclared activities. All such factors, however, contain a degree of fuzziness that needs human judgment to reach the ultimate conclusion that all material is being used for peaceful purposes. Safeguards have traditionally been based on verification of declared material and facilities, using material accountancy as a fundamental measure. The strength of material accountancy lies in the fact that it allows any diversion to be detected independently of the diversion route taken. 
Material accountancy detects a diversion after it has actually happened; it is thus powerless to physically prevent diversion and can only deter it, through the risk of early detection, whenever State authorities contemplate carrying one out. Recently the IAEA has been faced with new challenges. To deal with these, various measures are being reconsidered to strengthen the safeguards system, such as enhanced assessment of the completeness of the State's initial declaration of nuclear material and installations under its jurisdiction, and enhanced monitoring and analysis of open information that may indicate inconsistencies with the State's safeguards obligations. Precise information vital for such enhanced assessments and analyses is normally not available or, if available, would be difficult and expensive to collect. Above all, a realistic appraisal of the truth needs sound human judgment.
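To illustrate how fuzzy set theory might grade the "degree of fuzziness" in such judgments, the following sketch evaluates trapezoidal membership functions over a hypothetical anomaly measure; the linguistic labels and breakpoints are illustrative assumptions, not IAEA criteria or the paper's actual formulation.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function defined by breakpoints a <= b <= c <= d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if a < x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical fuzzy sets over "material unaccounted for" expressed as a
# fraction of one significant quantity; breakpoints are illustrative only.
def anomaly_grades(muf_fraction):
    return {
        "negligible": trapezoid(muf_fraction, -1.0, 0.0, 0.2, 0.5),
        "suspicious": trapezoid(muf_fraction, 0.2, 0.5, 0.8, 1.0),
        "significant": trapezoid(muf_fraction, 0.8, 1.0, 2.0, 3.0),
    }

# Partial membership in both "negligible" and "suspicious" for a borderline case.
print(anomaly_grades(0.3))
```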


Memory Organization for a Fuzzy Controller.

  • Jee, K.D.S.;Poluzzi, R.;Russo, B.
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1993.06a
    • /
    • pp.1041-1043
    • /
    • 1993
  • Fuzzy logic based control theory has gained much interest in the industrial world, thanks to its ability to formalize and solve in a very natural way many problems that are very difficult to quantify at an analytical level. This paper shows a solution for treating membership functions inside hardware circuits. The proposed hardware structure optimizes the memory size by using a particular form of vectorial representation. The process of memorizing fuzzy sets, i.e. their membership functions, has always been one of the more problematic issues for hardware implementation, due to the quite large memory space that is needed. To simplify the implementation, it is common [1,2,8,9,10,11] to limit the membership functions either to those having a triangular or trapezoidal shape, or to pre-defined shapes. These kinds of functions are able to cover a large spectrum of applications with a limited use of memory, since they can be memorized by specifying very few parameters (height, base, critical points, etc.). This however results in a loss of computational power due to computation on the intermediate points. A solution to this problem is obtained by discretizing the universe of discourse U, i.e. by fixing a finite number of points and memorizing the value of the membership functions on such points [3,10,14,15]. Such a solution provides a satisfying computational speed, a very high precision of definition and gives users the opportunity to choose membership functions of any shape. However, a significant waste of memory can also occur: it is indeed possible that, for each of the given fuzzy sets, many elements of the universe of discourse have a membership value equal to zero. It has also been noticed that in almost all cases the common points among fuzzy sets, i.e. points with non-null membership values, are very few. More specifically, in many applications, for each element u of U there exist at most three fuzzy sets for which the membership value is not null [3,5,6,7,12,13]. Our proposal is based on these hypotheses. Moreover, we use a technique that, even though it does not restrict the shapes of membership functions, strongly reduces the computation time for the membership values and optimizes the function memorization. Figure 1 shows a term set whose characteristics are common for fuzzy controllers and to which we refer in the following. This term set has a universe of discourse with 128 elements (so as to have a good resolution), 8 fuzzy sets that describe the term set, and 32 levels of discretization for the membership values. Clearly, the numbers of bits necessary for the given specifications are 5 for 32 truth levels, 3 for 8 membership functions and 7 for 128 levels of resolution. The memory depth is given by the dimension of the universe of discourse (128 in our case) and is represented by the memory rows. The length of a word of memory is defined by Length = nfm * (dm(m) + dm(fm)), where nfm is the maximum number of non-null membership values for any element of the universe of discourse, dm(m) is the number of bits for a membership value, and dm(fm) is the number of bits for the index of the corresponding membership function. In our case, Length = 3 * (5 + 3) = 24 and the memory dimension is therefore 128*24 bits. If we had chosen to memorize all values of the membership functions, we would have needed to memorize on each memory row the membership value of every fuzzy set for that element, with a fuzzy-set word dimension of 8*5 bits. 
Therefore, the dimension of the memory would have been 128*40 bits. Consistent with our hypothesis, in Fig. 1 each element of the universe of discourse has a non-null membership value for at most three fuzzy sets. Focusing on elements 32, 64 and 96 of the universe of discourse, they are memorized in this compressed form. The computation of the rule weights is done by comparing the bits that represent the index of the membership function with the word of the program memory. The output bus of the program memory (μCOD) is given as input to a comparator (combinatory net). If the index is equal to the bus value, then one of the non-null weights derives from the rule and is produced as output; otherwise the output is zero (Fig. 2). It is clear that the memory dimension of the antecedent is reduced in this way, since only non-null values are memorized. Moreover, the time performance of the system is equivalent to that of a system using vectorial memorization of all weights. The dimensioning of the word is influenced by some parameters of the input variable. The most important parameter is the maximum number of membership functions (nfm) having a non-null value for each element of the universe of discourse. From our studies in the field of fuzzy systems, we see that typically nfm ≤ 3 and there are at most 16 membership functions. At any rate, this value can be increased up to the physical dimensional limit of the antecedent memory. A less important role in the optimization of the word dimension is played by the number of membership functions defined for each linguistic term. The table below shows the required word dimension as a function of these parameters and compares our proposed method with the method of vectorial memorization [10]. Summing up, the characteristics of our method are: users are not restricted to membership functions with specific shapes; the number of fuzzy sets and the resolution of the vertical axis have very little influence on memory space; weight computations are done by a combinatorial network, so the time performance of the system is equivalent to that of the vectorial method; and the number of non-null membership values on any element of the universe of discourse is limited. The last constraint is usually not very restrictive, since many controllers obtain good precision with only three non-null weights. The method briefly described here has been adopted by our group in the design of an optimized version of the coprocessor described in [10].
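The word-sizing rule Length = nfm * (dm(m) + dm(fm)) can be checked with a small sketch; the function below reproduces the 24-bit example from the abstract and compares the sparse scheme with full vectorial memorization (Python is used purely to illustrate the arithmetic, not the hardware design).

```python
import math

def word_length(levels, n_sets, nfm):
    """Bits per memory word when only the nfm largest (non-null) membership
    values are stored per element of the universe of discourse.

    Sketch of the sizing rule in the abstract: each stored value needs dm(m)
    bits for the membership level and dm(fm) bits for the fuzzy-set index.
    """
    dm_m = math.ceil(math.log2(levels))    # bits for a membership value
    dm_fm = math.ceil(math.log2(n_sets))   # bits for a fuzzy-set index
    return nfm * (dm_m + dm_fm)

# Figures from the abstract's example: 128-element universe, 8 fuzzy sets,
# 32 discretization levels, at most 3 non-null memberships per element.
universe = 128
bits_per_word = word_length(levels=32, n_sets=8, nfm=3)       # 3 * (5 + 3) = 24
print("sparse scheme :", universe * bits_per_word, "bits")     # 128 * 24 = 3072
print("full scheme   :", universe * 8 * 5, "bits")             # 128 * 40 = 5120
```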


Individual Thinking Style leads its Emotional Perception: Development of Web-style Design Evaluation Model and Recommendation Algorithm Depending on Consumer Regulatory Focus (사고가 시각을 바꾼다: 조절 초점에 따른 소비자 감성 기반 웹 스타일 평가 모형 및 추천 알고리즘 개발)

  • Kim, Keon-Woo;Park, Do-Hyung
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.171-196
    • /
    • 2018
  • With the development of the web, two-way communication and evaluation became possible and marketing paradigms shifted. In order to meet the needs of consumers, web design trends continuously respond to consumer feedback. As the web becomes more and more important, both academics and businesses study consumer emotions and satisfaction on the web. However, some consumer characteristics are not well considered. Demographic characteristics such as age and sex have been studied extensively, but few studies consider psychological characteristics such as regulatory focus (i.e., emotional regulation). In this study, we analyze the effect of web style on consumer emotion. Many studies analyze the relationship between the web and regulatory focus, but most concentrate on the purpose of web use, particularly motivation and information search, rather than on web style and design. The web communicates with users through visual elements. Because the human brain is influenced by all five senses, both design factors and emotional responses are important in the web environment. Therefore, in this study, we examine the relationship between consumer emotion and satisfaction on the one hand and web style and design on the other. Previous studies have considered the effects of web layout, structure, and color on emotions. In this study, however, we excluded these web components, in contrast to earlier studies, and analyzed the relationship between consumer satisfaction and the emotional indexes of web style only. To perform this analysis, we collected consumer surveys presenting 40 web style themes to 204 consumers. Each consumer evaluated four themes. The emotional adjectives evaluated by consumers were composed of 18 contrast pairs, and the higher-level emotional indexes were extracted through factor analysis. The emotional indexes were 'softness,' 'modernity,' 'clearness,' and 'jam.' Hypotheses were established based on the assumption that the emotional indexes have different effects on consumer satisfaction. After the analysis, hypotheses 1, 2, and 3 were accepted and hypothesis 4 was rejected; its effect on consumer satisfaction was negative, not positive. This means that emotional indexes such as 'softness,' 'modernity,' and 'clearness' have a positive effect on consumer satisfaction. In other words, consumers prefer emotions that are soft, emotional, natural, rounded, dynamic, modern, elaborate, unique, bright, pure, and clear. 'Jam' has a negative effect on consumer satisfaction, meaning that consumers prefer designs that feel empty, plain, and simple. Regulatory focus shows differences in motivation and propensity across various domains. It is important to consider organizational behavior and decision making according to regulatory focus tendency, and it affects not only political, cultural, and ethical judgments and behavior but also broad psychological problems. Regulatory focus also leads to different emotional responses: promotion focus responds more strongly to positive emotions, while prevention focus responds strongly to negative emotions. Web style is a type of service, and consumer satisfaction is affected not only by cognitive evaluation but also by emotion. This emotional response depends on whether the consumer expects benefit or harm. Therefore, it is necessary to examine differences in consumers' emotional responses to web style according to regulatory focus, one of the psychological characteristics and viewpoints of consumers. 
In the MMR analysis, hypothesis 5.3 was accepted and hypothesis 5.4 was rejected, although hypothesis 5.4 was supported in the direction opposite to the one hypothesized. After validation, we confirmed the mechanism of emotional response according to regulatory focus tendency. Using the results, we developed the structure of a web-style recommendation system and recommendation methods based on regulatory focus. We classified consumers into three regulatory-focus groups, promotion, grey, and prevention, and then suggest a web-style recommendation method for each group. If we develop this study further, we expect that existing regulatory focus theory can be extended not only to the motivational part but also to the emotional and behavioral response according to regulatory focus tendency. Moreover, we believe that it is possible to recommend web styles according to regulatory focus and the emotions consumers most prefer.
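A minimal sketch of such a recommendation step follows, assuming hypothetical emotional-index profiles for candidate themes and hand-picked weights per regulatory-focus group; the signs follow the abstract (positive for softness, modernity and clearness, negative for jam), but the magnitudes are illustrative assumptions, not the paper's estimated coefficients.

```python
# Hypothetical emotional-index profiles (softness, modernity, clearness, jam)
# for candidate web-style themes, as if scored from consumer evaluations.
themes = {
    "theme_A": {"softness": 5.8, "modernity": 4.2, "clearness": 6.1, "jam": 2.0},
    "theme_B": {"softness": 3.1, "modernity": 6.3, "clearness": 4.0, "jam": 4.5},
    "theme_C": {"softness": 4.9, "modernity": 5.5, "clearness": 5.2, "jam": 3.1},
}

# Illustrative index weights per regulatory-focus group (assumed values).
weights = {
    "promotion":  {"softness": 0.2, "modernity": 0.5, "clearness": 0.3, "jam": -0.2},
    "grey":       {"softness": 0.3, "modernity": 0.3, "clearness": 0.3, "jam": -0.3},
    "prevention": {"softness": 0.4, "modernity": 0.1, "clearness": 0.4, "jam": -0.5},
}

def recommend(group, k=1):
    """Rank themes by the weighted sum of their emotional indexes for a group."""
    w = weights[group]
    scored = {name: sum(w[i] * v for i, v in profile.items())
              for name, profile in themes.items()}
    return sorted(scored, key=scored.get, reverse=True)[:k]

print(recommend("prevention"))   # theme_A scores highest under these weights
```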