• Title/Summary/Keyword: future Internet

SNS as a Method of Election Campaign: A Case study of the 2015's Special Election in South Korea (정치인들의 선거 캠페인 수단으로서의 SNS 활용: 2015년 4·29 재·보궐선거를 중심으로)

  • Park, SeMi;Hwang, HaSung
    • Journal of Internet Computing and Services / v.17 no.2 / pp.87-95 / 2016
  • Considerable research over the years has been devoted to ascertaining the impact of social media on political settings. In recent years, Social Network Sites (SNS) such as Facebook have allowed users to share their political beliefs, support specific candidates, and interact with others on political issues. This study examines the role of SNS as a means of political campaigning, taking as its case the 2015 special election in Seoul, Korea. The analysis aims to identify how candidates used Facebook or Twitter to interact with voters by applying the functional theory of political campaign discourse developed by Benoit. We analyzed the candidates' SNS messages in terms of political behaviors such as self-expression, informing voters about policy, and asking voters to participate in political events. The results indicated that both candidates, Jung, Dong Young and Byun, Hee Jae, used SNS most often to express themselves, and that both relied mainly on the 'acclaim' strategy, which praises their own strengths. In terms of the topics of SNS messages (policy versus character), the two candidates differed: Jung's messages most often concerned 'character', while Byun's most often concerned 'policy'. Based on these findings, implications and directions for future studies are discussed.
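
The kind of content analysis described in this abstract, coding each SNS message by Benoit's functions (acclaim, attack, defend) and topics (policy, character) and then tallying frequencies per candidate, can be sketched as follows. This is a minimal illustration with invented message labels; it is not the authors' coding data or tooling.

```python
from collections import Counter

# Hypothetical, hand-coded SNS messages: (candidate, function, topic).
# In Benoit's functional theory each utterance is coded as acclaim,
# attack, or defend, and as addressing either policy or character.
coded_messages = [
    ("Jung", "acclaim", "character"),
    ("Jung", "acclaim", "character"),
    ("Jung", "attack",  "policy"),
    ("Byun", "acclaim", "policy"),
    ("Byun", "acclaim", "policy"),
    ("Byun", "defend",  "character"),
]

function_counts = Counter((c, f) for c, f, _ in coded_messages)
topic_counts = Counter((c, t) for c, _, t in coded_messages)

for candidate in ("Jung", "Byun"):
    funcs = {f: function_counts[(candidate, f)] for f in ("acclaim", "attack", "defend")}
    topics = {t: topic_counts[(candidate, t)] for t in ("policy", "character")}
    print(candidate, "functions:", funcs, "topics:", topics)
```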

Lightweight Framework For Supporting Mobile Web Development (초고속 모바일 웹 개발을 위한 경량화 프레임워크)

  • Shin, Seung-Woo;Kim, Haeng-Kon
    • Journal of Internet Computing and Services / v.10 no.4 / pp.127-138 / 2009
  • Mobile web applications are being used and changed rapidly due to the growth in mobile device performance. However, the cost of development environments and the diversity of standards lead to high development cost and low productivity; designing and implementing these applications is more time-consuming than in general computing environments. In this paper, we propose MWeb(MobileWeb)-Framework, a framework based on the agile methodology and Ruby on Rails that supports mobile web application development using mobile web standards. The work consists of a mobile web development architecture and an agile process model. MWeb-Framework supports the same user experience across different devices. We validate the framework by implementing case studies with the suggested mobile web development framework. As a result, we can develop mobile web applications with improved productivity and quality. In the future, we will suggest how to standardize MWeb-Framework and apply it in practice to various case studies to identify and improve its potential problems.

A Study on the Internet GPS Data Processing System (인터넷 GPS 자료처리 시스템에 관한 연구)

  • 윤희천;최병길;이용욱
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.22 no.2 / pp.145-150 / 2004
  • A protocol for a web-based GPS data processing system has been developed. The system follows a typical ASP model, in which GPS data acquired by various users can be uploaded through the web and processed with the data processing components selected by the users; after processing, the results are returned to the users through the web. The developed system is designed for easy software upgrades and operates in an asynchronous processing mode so that multiple accesses can be handled with high flexibility for users. Database components for efficient GPS data maintenance were developed so that data from CORS can be used in the processing. Currently, absolute and relative positioning algorithms using code measurements are integrated, and further algorithms such as data quality control and absolute and relative positioning using carrier phases will be integrated in the near future.
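
Since the system integrates absolute positioning from code (pseudorange) measurements, the core computation can be sketched as an iterative least-squares solution for receiver position and clock bias. The sketch below is a generic illustration of that algorithm with made-up satellite positions and pseudoranges; it is not the paper's implementation and ignores ionospheric, tropospheric, and satellite-clock corrections.

```python
import numpy as np

def solve_position(sat_pos, pseudoranges, x0, iters=10):
    """Estimate receiver position and clock bias (both in metres) from
    code pseudoranges by iterative linearized least squares."""
    x = np.asarray(x0, dtype=float)  # state: [X, Y, Z, c*dt]
    for _ in range(iters):
        ranges = np.linalg.norm(sat_pos - x[:3], axis=1)
        residuals = pseudoranges - (ranges + x[3])
        # Jacobian rows: negative unit line-of-sight vector, plus 1 for clock bias.
        H = np.hstack([-(sat_pos - x[:3]) / ranges[:, None],
                       np.ones((len(ranges), 1))])
        dx, *_ = np.linalg.lstsq(H, residuals, rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < 1e-4:
            break
    return x

# Made-up example: four satellites at GPS-like altitudes, and the
# pseudoranges they would produce for a receiver on the Earth's surface
# with a 100 m receiver clock bias.
true_pos = np.array([0.0, 0.0, 6371e3])
clock_bias = 100.0
sat_pos = np.array([[15600e3,  7540e3, 20140e3],
                    [18760e3,  2750e3, 18610e3],
                    [17610e3, 14630e3, 13480e3],
                    [19170e3,   610e3, 18390e3]])
pseudoranges = np.linalg.norm(sat_pos - true_pos, axis=1) + clock_bias

est = solve_position(sat_pos, pseudoranges, x0=[0.0, 0.0, 6.4e6, 0.0])
print("estimated position (m):", np.round(est[:3], 2), "clock bias (m):", round(est[3], 2))
```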

Research on improving correctness of cardiac disorder data classifier by applying Best-First decision tree method (Best-First decision tree 기법을 적용한 심전도 데이터 분류기의 정확도 향상에 관한 연구)

  • Lee, Hyun-Ju;Shin, Dong-Kyoo;Park, Hee-Won;Kim, Soo-Han;Shin, Dong-Il
    • Journal of Internet Computing and Services / v.12 no.6 / pp.63-71 / 2011
  • Cardiac disorder data are generally tested using classifiers, and the QRS-complex and R-R interval features used in this experiment are extracted from ECG (electrocardiogram) signals. Experiments on ECG data are usually performed with SVM (Support Vector Machine) and MLP (Multilayer Perceptron) classifiers, but this study experimented with the Best-First decision tree (B-F Tree), derived from the decision tree among the Random Forest family of classifier algorithms, to improve accuracy. To compare and analyze accuracy, experiments with SVM, MLP, RBF (Radial Basis Function) network and decision tree classifiers were performed, and the results were also compared with published papers that used the same interval and data. Comparing the accuracy of the Random Forest classifier with the above four, Random Forest was the most accurate. Although the R-R interval was extracted using a band-pass filter in the pre-processing step of this experiment, further study of filters is needed in the future to extract the interval more accurately.
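
A comparison of this kind, training several classifiers on ECG-derived features (e.g., R-R intervals and QRS-complex measurements) and comparing cross-validated accuracy, might be set up as in the sketch below. The feature matrix here is random placeholder data, and scikit-learn's RandomForestClassifier stands in for the tree-based methods; it is not the paper's B-F Tree implementation or dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder feature matrix: each row is one heartbeat described by
# hypothetical features (R-R interval, QRS duration, ...); labels mark
# normal (0) vs. disordered (1) beats.
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

classifiers = {
    "SVM": SVC(),
    "MLP": MLPClassifier(max_iter=2000),
    "Decision Tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(n_estimators=100),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name:14s} mean accuracy: {scores.mean():.3f}")
```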

An Analysis and Design of Efficient Community Routing Policy for Global Research Network (글로벌연구망을 위한 효율적인 커뮤니티 라우팅 정책의 분석 및 설계)

  • Jang, Hyun-Hee;Park, Jae-Bok;Koh, Kwang-Shin;Kim, Seung-Hae;Cho, Gi-Hwan
    • Journal of Internet Computing and Services / v.10 no.5 / pp.1-12 / 2009
  • A routing policy based on BGP communities permits selecting a specific route for a particular network by making use of user-defined routing policies. Community-based routing policies have recently attracted considerable attention as a way to enhance overall performance in global research networks, which generally interconnect a large number of networks with different characteristics. In this paper, we analyze the community routing applied in existing global research networks from the network performance point of view and identify the routing performance problems that arise in a new global research network. We then suggest an effective community routing policy model, along with an interconnection architecture of research networks, in order to correct some wrong routings and resolve an asymmetric routing problem in a new global research network. Our work is expected to serve as an enabling base technology for improving the network performance of future global research networks as well as commercial networks.
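
The idea of steering traffic with BGP communities, that is, tagging routes with community values and letting import policy map those tags to a preference, can be illustrated abstractly as below. The community values and preference table are invented for illustration; real deployments express this in router configuration (route-maps/policies) rather than application code, and this toy best-path rule omits most of the actual BGP decision process.

```python
from dataclasses import dataclass, field

# Hypothetical mapping from community tag to local preference:
# a higher local-pref wins in best-path selection.
COMMUNITY_LOCAL_PREF = {
    "65010:100": 200,   # e.g. "prefer research-network peering"
    "65010:80":  150,   # e.g. "backup research path"
    "65010:50":  100,   # e.g. "commercial transit"
}
DEFAULT_LOCAL_PREF = 100

@dataclass
class Route:
    prefix: str
    next_hop: str
    as_path: list
    communities: list = field(default_factory=list)

def local_pref(route):
    """Derive local preference from the best matching community tag."""
    prefs = [COMMUNITY_LOCAL_PREF.get(c, 0) for c in route.communities]
    return max(prefs, default=DEFAULT_LOCAL_PREF) or DEFAULT_LOCAL_PREF

def best_path(routes):
    """Simplified best-path: highest local-pref, then shortest AS path."""
    return max(routes, key=lambda r: (local_pref(r), -len(r.as_path)))

routes = [
    Route("203.0.113.0/24", "peer-A", [65020, 65100], ["65010:100"]),
    Route("203.0.113.0/24", "peer-B", [65030], ["65010:50"]),
]
print("selected next hop:", best_path(routes).next_hop)
```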

Optimal Relay Selection and Power Allocation in an Improved Low-Order-Bit Quantize-and-Forward Scheme

  • Bao, Jianrong;He, Dan;Xu, Xiaorong;Jiang, Bin;Sun, Minhong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.10 no.11 / pp.5381-5399 / 2016
  • Currently, quantize-and-forward (QF) schemes with high-order modulation and quantization have rather high complexity and are thus impractical, especially in multi-relay cooperative communications. To overcome these deficiencies, an improved low-complexity QF scheme is proposed that combines low-order binary phase shift keying (BPSK) modulation with 1-bit and 2-bit quantization, respectively. In this scheme, relay selection is optimized by choosing the relay position that gives the best bit-error-rate (BER) performance, with the relays located close to the destination node. In addition, an optimal power allocation is derived under a total power constraint. Finally, the BER and the achievable rate of the low-order 1-bit, 2-bit and 3-bit QF schemes are simulated and analyzed. Simulation results indicate that the 3-bit QF scheme has about 1.8~5 dB, 4.5~7.5 dB and 1~2.5 dB performance gains over the decode-and-forward (DF), 1-bit QF and 2-bit QF schemes, respectively, at a BER of $10^{-2}$. For the 2-bit QF scheme, a normalized Source-Relay (S-R) distance of 0.9 yields gains of about 5 dB, 7.5 dB, 9 dB and 15 dB over distances of 0.7, 0.5, 0.3 and 0.1, respectively, at a BER of $10^{-3}$. In addition, the proposed optimal power allocation saves about 2.5 dB of relay power on average compared with fixed power allocation. Therefore, the proposed QF scheme offers good BER performance, low complexity and high power efficiency, which makes it practical for future cooperative communications.
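
As a rough illustration of how the BER of such low-order QF schemes is typically evaluated, the sketch below runs a toy Monte Carlo simulation: a BPSK symbol crosses AWGN source-relay and source-destination links, the relay uniformly quantizes its observation to n bits and forwards it over another AWGN link, and the destination combines the two observations before deciding. The channel model, quantizer, and equal-gain combining rule are simplifying assumptions, not the paper's exact system model (which also covers relay selection and power allocation).

```python
import numpy as np

rng = np.random.default_rng(1)

def uniform_quantize(x, bits, limit=2.0):
    """Uniformly quantize x to 2**bits levels on [-limit, limit]."""
    levels = 2 ** bits
    step = 2 * limit / levels
    idx = np.clip(np.floor((x + limit) / step), 0, levels - 1)
    return -limit + (idx + 0.5) * step

def qf_ber(bits, snr_db, n_sym=200_000):
    """Toy BER of BPSK quantize-and-forward with equal-SNR AWGN links."""
    sigma = np.sqrt(1.0 / (2 * 10 ** (snr_db / 10)))
    s = rng.choice([-1.0, 1.0], size=n_sym)           # BPSK symbols
    y_sd = s + sigma * rng.standard_normal(n_sym)      # source -> destination
    y_sr = s + sigma * rng.standard_normal(n_sym)      # source -> relay
    x_r = uniform_quantize(y_sr, bits)                 # relay quantizes and forwards
    y_rd = x_r + sigma * rng.standard_normal(n_sym)    # relay -> destination
    decision = np.sign(y_sd + y_rd)                    # naive equal-gain combining
    return np.mean(decision != s)

for bits in (1, 2, 3):
    print(f"{bits}-bit QF, 6 dB links: BER ~ {qf_ber(bits, 6.0):.4f}")
```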

An empirical study on the employment impact of the Fourth Industrial Revolution (제4차 산업혁명의 고용 영향에 대한 실증적 연구)

  • Ahn, Jongchang;Hwang, Jun;Lee, Woongjae
    • Journal of Internet Computing and Services / v.19 no.1 / pp.131-140 / 2018
  • This study analyzes various discussions of the influence of the technologies associated with the frequently mentioned Fourth Industrial Revolution on employment and conducts exploratory research on the topic. To this end, the paper analyzes and extends a survey on realization possibility conducted among managers and professionals in the ICT sector by the Global Agenda Council of the World Economic Forum (WEF) in September 2015. Based on those results, this study conducts an empirical survey covering not only realization possibility but also employment impact. For each of the 23 realization-possibility items, the respondents (n=169) responded positively that the item would be realized by 2025. For each of the 23 employment-impact items, most items were expected to decrease employment, while a few were predicted to expand it. This research is meaningful in providing an empirical starting point for assessing the employment impact of the Fourth Industrial Revolution in the future.

Fast k-NN based Malware Analysis in a Massive Malware Environment

  • Hwang, Jun-ho;Kwak, Jin;Lee, Tae-jin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.12 / pp.6145-6158 / 2019
  • It is a challenge for the current security industry to respond to the large volume of malicious code distributed indiscriminately, as well as to intelligent APT attacks. As a result, studies using machine learning algorithms are being conducted for proactive prevention rather than post-hoc processing. The k-NN algorithm is widely used because it is intuitive and suitable for handling malicious code as unstructured data. In the malware analysis domain, the k-NN algorithm also makes it easy to classify malicious code based on previously analyzed samples; for example, it is possible to classify malware families or analyze variants through similarity analysis against existing samples. However, the main disadvantage of the k-NN algorithm is that the search time increases as the training data grows. We propose a fast k-NN algorithm that improves the computation speed while keeping the value of the k-NN approach. In our test environment, the algorithm needed only 19.71 similarity comparisons on average for 6.25 million malicious code samples. Given the way the algorithm works, the fast k-NN algorithm can be used to search any data that can be vectorized, not only malware and SSDEEP hashes. In the future, if the k-NN approach is needed and central nodes can be selected effectively for clustering large amounts of data in various environments, it is expected that sophisticated machine-learning-based systems can be designed.
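
One common way to obtain this kind of speed-up, which the abstract hints at with its mention of selecting central nodes for clustering, is to group the reference vectors into clusters and compare a query only against cluster centroids and then against the members of the nearest cluster. The sketch below illustrates that generic idea on random vectors with scikit-learn's KMeans; it is an assumption about the general approach, not the authors' exact algorithm or feature representation.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Placeholder "malware feature vectors"; in practice these could come
# from static/dynamic analysis features or fuzzy-hash comparisons.
X = rng.normal(size=(20_000, 32))

# Offline step: cluster the reference set and index members per cluster.
n_clusters = 100
kmeans = KMeans(n_clusters=n_clusters, n_init=4, random_state=0).fit(X)
members = {c: np.where(kmeans.labels_ == c)[0] for c in range(n_clusters)}

def fast_knn(query, k=5):
    """Approximate k-NN: compare to the 100 centroids first, then search
    only the nearest cluster's members instead of the whole reference set."""
    c = int(np.argmin(np.linalg.norm(kmeans.cluster_centers_ - query, axis=1)))
    idx = members[c]
    d = np.linalg.norm(X[idx] - query, axis=1)
    order = np.argsort(d)[:k]
    return idx[order], d[order]

query = rng.normal(size=32)
neighbors, dists = fast_knn(query)
print("nearest reference indices:", neighbors)
print("distances:", np.round(dists, 3))
```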

A Fair Radio Resource Allocation Algorithm for Uplink of FBMC Based CR Systems

  • Jamal, Hosseinali;Ghorashi, Seyed Ali;Sadough, Seyed Mohammad-Sajad
    • KSII Transactions on Internet and Information Systems (TIIS) / v.6 no.6 / pp.1479-1495 / 2012
  • Spectrum scarcity seems to be the most challenging issue to be solved for new wireless telecommunication services. It has been shown that spectrum unavailability is mainly due to inefficient spectrum utilization and inappropriate physical-layer implementation rather than an actual spectrum shortage. The daily increasing demand for new wireless services with higher data rates and QoS levels makes upgrading physical-layer modulation techniques inevitable. Orthogonal Frequency Division Multiple Access (OFDMA), which utilizes multicarrier modulation to provide higher data rates with flexible resource allocation, has been widely used in current wireless systems and standards, but it does not seem to be the best candidate for cognitive radio systems. Filter Bank based Multi-Carrier (FBMC) is an evolutionary scheme with several advantages over the widely used OFDM multicarrier technique. In this paper, we focus on improving the total throughput of a cognitive radio network using FBMC modulation. Along with this modulation scheme, we propose a novel uplink radio resource allocation algorithm in which fairness is also considered. Moreover, the average throughput of the proposed FBMC-based cognitive radio is compared with that of a conventional OFDM system to illustrate the efficiency of using FBMC in future cognitive radio systems. Simulation results show that, in comparison with two state-of-the-art algorithms (namely, those of Shaat and Wang), our proposed algorithm achieves higher throughput and better fairness for cognitive radio applications.
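
A fairness-aware uplink subcarrier allocation of the general kind described here can be sketched with a simple greedy rule: each subcarrier goes to the user whose achievable rate on it is highest after weighting by how little that user has received so far (a proportional-fairness style heuristic). The rates below are random placeholders, and this greedy heuristic is only an assumption used for illustration; it is not the paper's algorithm, nor does it model FBMC/OFDM interference constraints toward the primary user.

```python
import numpy as np

rng = np.random.default_rng(2)

n_users, n_subcarriers = 4, 32
# Hypothetical achievable rates (bits/s/Hz) per user per subcarrier,
# e.g. log2(1 + SNR) for randomly drawn channel conditions.
snr = rng.exponential(scale=5.0, size=(n_users, n_subcarriers))
rates = np.log2(1.0 + snr)

allocated = np.zeros(n_users)            # throughput granted to each user so far
assignment = np.full(n_subcarriers, -1)  # which user gets each subcarrier

for sc in np.argsort(rates.max(axis=0))[::-1]:  # strongest subcarriers first
    # Proportional-fairness style metric: instantaneous rate divided by
    # the rate already granted to that user (small epsilon avoids /0).
    metric = rates[:, sc] / (allocated + 1e-3)
    user = int(np.argmax(metric))
    assignment[sc] = user
    allocated[user] += rates[user, sc]

print("per-user throughput:", np.round(allocated, 2))
print("Jain's fairness index:",
      round(float(allocated.sum() ** 2 / (n_users * (allocated ** 2).sum())), 3))
```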

Semantic Computing for Big Data: Approaches, Tools, and Emerging Directions (2011-2014)

  • Jeong, Seung Ryul;Ghani, Imran
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.6 / pp.2022-2042 / 2014
  • The term "big data" has recently gained widespread attention in the field of information technology (IT). One of the key challenges in making use of big data lies in finding ways to uncover relevant and valuable information. The high volume, velocity, and variety of big data hinder the use of solutions that are available for smaller datasets, which involve the manual interpretation of data. Semantic computing technologies have been proposed as a means of dealing with these issues and, with the advent of linked data in recent years, have become central to mainstream semantic computing. This paper attempts to uncover the state-of-the-art semantics-based approaches and tools that can be leveraged to enrich and enhance today's big data. It reviews the latest literature, covering 61 studies from 2011 to 2014. In addition, it highlights the key challenges that semantic approaches need to address in the near future. For instance, this paper presents cutting-edge approaches to ontology engineering, ontology evolution, searching and filtering relevant information, extracting and reasoning, distributed (web-scale) reasoning, and representing big data. It also makes recommendations that may encourage researchers to more deeply explore the applications of semantic technology, which could improve the processing of big data. The findings of this study contribute to the existing body of basic knowledge on semantics and computational issues related to big data, and may trigger further research in the field. Our analysis shows that there is a need to put more effort into proposing new approaches, and that tools must be created that support researchers and practitioners in realizing the true power of semantic computing and solving the crucial issues of big data.