• Title/Summary/Keyword: computing model


Horizontal Consolidation Characteristics of Marine Clay Using Piezocone Test (Piezocone 시험을 이용한 해성점토의 수평압밀 특성 연구)

  • 이강운;윤길림;채영수
    • Journal of the Korean Geotechnical Society / v.19 no.5 / pp.133-144 / 2003
  • The horizontal consolidation characteristics of Busan marine clay were investigated by computing the coefficient of horizontal consolidation ($c_h$) from piezocone data and comparing the results with those of the standard consolidation test. Current prediction models of $c_h$ for highly plastic soils are known to carry large uncertainties and to show large differences between predicted and measured values. However, the spherical cavity-expansion models of Torstensson (1977) and of Burns & Mayne (1998), the latter based on the modified Cam-Clay model with critical-state concepts, are comparatively reliable for estimating $c_h$ and applicable to highly plastic soils. In this paper, a normalization technique was used to evaluate $c_h$ with the Burns and Mayne method based on the dissipation test; the normalized consolidation curves give a time factor $T_{50}$ of 0.015 at 50% degree of consolidation. A comparison using piezocone data obtained at a similar site shows a deviation of about 1.5 times from the measured values, a considerably closer approximation than the standard consolidation test, whose results differ from the measured values by a factor of three or more. In addition, a design chart for estimating $c_h$, based on the chart of Robertson et al. (1992) and on direct prediction from the dissipation test, is newly proposed. The proposed chart is judged to be well suited to Korean marine soils, especially very highly plastic soils. (A minimal computational sketch of a dissipation-based $c_h$ estimate follows below.)
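As a rough, illustrative complement to the abstract above (not taken from the paper), the sketch below evaluates the classical radial-dissipation relation $c_h = T_{50}\,r^2/t_{50}$ using the reported normalized time factor $T_{50} = 0.015$; the probe radius and the measured 50% dissipation time are placeholder values.

```python
# Minimal sketch (not from the paper): dissipation-based estimate of the
# coefficient of horizontal consolidation using the classical radial form
#   c_h = T50 * r^2 / t50,
# with the normalized time factor T50 = 0.015 reported in the abstract.
# The probe radius (standard 10 cm^2 cone) and t50 are placeholder inputs.

def coefficient_of_horizontal_consolidation(t50_s: float,
                                            probe_radius_m: float = 0.0178,
                                            T50: float = 0.015) -> float:
    """Return c_h in m^2/s given the measured 50% dissipation time in seconds."""
    return T50 * probe_radius_m ** 2 / t50_s

if __name__ == "__main__":
    t50 = 600.0  # e.g. 10 minutes to 50% excess pore-pressure dissipation (placeholder)
    ch = coefficient_of_horizontal_consolidation(t50)
    print(f"c_h = {ch:.3e} m^2/s ({ch * 1e4:.3e} cm^2/s)")
```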

Selecting the Optimal Method of Competition Index Computation for Major Coniferous Species in Korea (우리나라 주요 침엽수종의 최적 경쟁지수 모형 선정)

  • Lee, Jungho;Lee, Daesung;Seo, Yeongwan;Choi, Jungkee
    • Journal of Korean Society of Forest Science / v.107 no.2 / pp.193-204 / 2018
  • This study was carried out to select the optimal method of competition index computation, considering both competitor selection methods and distance-dependent competition index models, and to analyze how the resulting competition indices behave with respect to thinning intensity and tree density for Pinus densiflora, Pinus koraiensis, and Larix kaempferi, the major coniferous species in Korea. The data were re-measured tree records from 240 permanent plots at 80 sites in stands of P. densiflora, P. koraiensis, and L. kaempferi located in the national forests of Gangwon and North Gyeongsang provinces. The competition index was calculated for 1,126 subject trees of P. densiflora, 4,093 of P. koraiensis, and 3,399 of L. kaempferi. Three competitor selection methods were considered: basal area factor, height angle, and height angle to the crown base. Six competition index models were compared: Lorimer, Martin-Ek, Braathe, Hegyi, Daniels, and a Modified Daniels model developed in this study. The correlation coefficient was highest when competitors were selected with a basal area factor of $4 m^2/ha$ and the Modified Daniels model was used, so this combination was selected as the best method for computing the competition index. With the selected method, the competition index decreased in all species as thinning intensity increased and tree density decreased. (A small sketch of one of the compared distance-dependent indices follows below.)
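For orientation only, the sketch below computes Hegyi's (1974) index, one of the distance-dependent competition indices compared in the paper: $CI_i = \sum_j (dbh_j/dbh_i)/dist_{ij}$ over the selected competitors $j$ of subject tree $i$. The tree coordinates and DBH values are illustrative placeholders, and this is not the paper's Modified Daniels model.

```python
import math

# Hegyi's (1974) distance-dependent competition index:
#   CI_i = sum_j (dbh_j / dbh_i) / dist_ij
# Coordinates (m) and DBH values (cm) below are placeholders.

def hegyi_index(subject, competitors):
    """subject/competitors: dicts with 'x', 'y' (m) and 'dbh' (cm)."""
    ci = 0.0
    for comp in competitors:
        dist = math.hypot(comp["x"] - subject["x"], comp["y"] - subject["y"])
        if dist > 0:
            ci += (comp["dbh"] / subject["dbh"]) / dist
    return ci

subject = {"x": 0.0, "y": 0.0, "dbh": 24.0}
competitors = [{"x": 2.5, "y": 1.0, "dbh": 30.0},
               {"x": -3.0, "y": 2.0, "dbh": 18.0}]
print(f"Hegyi CI = {hegyi_index(subject, competitors):.3f}")
```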

Cluster and Polarity Analysis of Online Discussion Communities Using User Bipartite Graph Model (사용자 이분그래프모형을 이용한 온라인 커뮤니티 토론 네트워크의 군집성과 극성 분석)

  • Kim, Sung-Hwan;Tak, Haesung;Cho, Hwan-Gue
    • Journal of Internet Computing and Services / v.19 no.5 / pp.89-96 / 2018
  • In online communities, a large number of participants can exchange opinions through replies without restrictions of time and space. While the online space enables quick and free communication, it also easily triggers unnecessary quarrels and conflicts. The network formed by discussion participants is an important cue for analyzing confrontation and predicting serious disputes. In this paper, we present a quantitative measure of the polarity observed in discussion networks built from reply exchanges in online communities. The proposed method uses comment exchange information to build the user interaction graph, computes its maximum spanning tree, and then performs vertex coloring, assigning one of two colors to each node so that the discussion participants are divided into two subsets. Using the proportion of comment exchanges that cross the partitioned user subsets, we compute the polarity measure and quantify how strongly the discussion participants are bipolarized. Experimental results demonstrate the effectiveness of our method for detecting polarization and show that participants in a given discussion subject tend to split into two camps when they debate. (A minimal sketch of this pipeline follows below.)
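The sketch below is a minimal, assumption-laden rendering of the pipeline described above: build a reply-weighted user graph, take its maximum spanning tree, 2-color the tree to split users into two camps, and measure how much reply traffic crosses the partition. The paper's exact polarity formula may differ; the cross-weight proportion used here is an assumption, and the reply data are placeholders.

```python
import networkx as nx

# Build a weighted user interaction graph from (user_a, user_b, reply_count).
replies = [("alice", "bob", 12), ("alice", "carol", 3),
           ("bob", "dave", 7), ("carol", "dave", 9), ("bob", "carol", 1)]

G = nx.Graph()
for u, v, n in replies:
    if G.has_edge(u, v):
        G[u][v]["weight"] += n
    else:
        G.add_edge(u, v, weight=n)

# Maximum spanning tree of the interaction graph.
mst = nx.maximum_spanning_tree(G, weight="weight")

# 2-color the spanning tree by alternating colors along a BFS traversal,
# splitting participants into two subsets.
root = next(iter(mst.nodes))
color = {root: 0}
for parent, child in nx.bfs_edges(mst, root):
    color[child] = 1 - color[parent]

# Assumed polarity measure: proportion of reply weight crossing the partition.
cross = sum(d["weight"] for u, v, d in G.edges(data=True) if color[u] != color[v])
total = sum(d["weight"] for _, _, d in G.edges(data=True))
print(f"polarity (cross-camp reply proportion) = {cross / total:.2f}")
```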

A Novel Compressed Sensing Technique for Traffic Matrix Estimation of Software Defined Cloud Networks

  • Qazi, Sameer;Atif, Syed Muhammad;Kadri, Muhammad Bilal
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.10 / pp.4678-4702 / 2018
  • Traffic matrix estimation has long attracted researchers' attention for better network management and future planning. With the high traffic loads brought by cloud computing platforms and the tunable routing and traffic management algorithms of Software Defined Networking, it is more necessary than ever to predict current and future traffic volumes on the network. For large networks, this origin-destination traffic prediction problem takes the form of a large, under-constrained and under-determined system of equations with a dynamic measurement matrix. Previous work relied on the assumption that the measurement (routing) matrix is stationary, which makes those schemes unsuitable for modern software defined networks. In this work, we present our Compressed Sensing with Dynamic Model Estimation (CS-DME) architecture for modern software defined networks. Our main contributions are: (1) we formulate an approach in which the measurement matrix of the compressed sensing scheme can be accurately and dynamically estimated through a reformulation of the problem based on traffic demands; (2) by inspecting the eigenspectrum on two real-world datasets, we show that a formulation using a dynamic measurement matrix based on instantaneous traffic demands can be used instead of a stationary binary routing matrix and is better suited to modern Software Defined Networks whose routing constantly evolves; (3) we show that dynamically linking this compressed measurement matrix with the measured parameters yields acceptable estimates of Origin-Destination (OD) traffic flows, with only marginally poorer results than state-of-the-art schemes that rely on fixed measurement matrices; (4) furthermore, using this compressed reformulated problem, we present a new strategy for selecting vantage points for the most efficient traffic matrix estimation, through a secondary compression technique based on a subset of link measurements. Experimental evaluation on the real-world Abilene and GEANT datasets shows that the technique is practical for modern software defined networks, and its performance is compared with recent state-of-the-art techniques from the research literature. (A minimal sketch of the underlying estimation problem follows below.)
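As background only, the sketch below sets up the generic traffic-matrix estimation problem the paper builds on: observed link loads y = A x, where A is the measurement/routing matrix and x the OD flows, with fewer link measurements than OD pairs, solved with a sparsity-promoting (compressed-sensing style) estimator. The matrix, flows, and Lasso solver are placeholder assumptions, not the paper's CS-DME algorithm.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Under-determined OD flow estimation: 30 link measurements, 100 OD pairs.
rng = np.random.default_rng(0)
n_links, n_od = 30, 100
A = rng.integers(0, 2, size=(n_links, n_od)).astype(float)   # placeholder routing matrix
x_true = np.zeros(n_od)
x_true[rng.choice(n_od, 10, replace=False)] = rng.uniform(10, 100, 10)  # sparse OD flows
y = A @ x_true                                                # observed link loads

# Sparsity-promoting recovery of the OD flows from the link loads.
est = Lasso(alpha=0.1, positive=True, fit_intercept=False, max_iter=10000).fit(A, y)
x_hat = est.coef_
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```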

Adaptive Consensus Bound PBFT Algorithm Design for Eliminating Interference Factors of Blockchain Consensus (블록체인 합의 방해요인 제거를 위한 Adaptive Consensus Bound PBFT 알고리즘 설계)

  • Kim, Hyoungdae;Yun, Jusik;Goh, Yunyeong;Chung, Jong-Moon
    • Journal of Internet Computing and Services / v.21 no.1 / pp.17-31 / 2020
  • With the rapid development of blockchain technology, attempts have been made to put blockchain to practical use in fields such as finance and logistics, as well as in the public sector, where data integrity is very important. In defense operations as well, strengthening security and ensuring the complete integrity of the command communication network is crucial under the network-centric operational environment (NCOE), and for this purpose it is necessary to construct a command communication network based on blockchain. However, current blockchain technology cannot resolve security issues such as the 51% attack. In particular, the Practical Byzantine Fault Tolerance (PBFT) algorithm, now widely used in blockchains, has no penalty factor for nodes that behave maliciously and fails to reach consensus when malicious nodes make up more than 33% of all nodes. In this paper, we propose an Adaptive Consensus Bound PBFT (ACB-PBFT) algorithm that incorporates a penalty mechanism for anomalous behavior by combining a trust model, improving the security of PBFT, the main consensus algorithm of the blockchain. (A minimal sketch of the underlying PBFT fault bound follows below.)
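For context, the sketch below shows the standard PBFT fault bound the abstract refers to: with n replicas, PBFT tolerates f = floor((n - 1) / 3) Byzantine nodes and commits a phase on 2f + 1 matching votes. The trust-based exclusion step is a hypothetical illustration of the kind of penalty mechanism ACB-PBFT adds, not the paper's actual algorithm.

```python
# Standard PBFT parameters plus a hypothetical trust-based penalty filter.

def pbft_parameters(n_replicas: int):
    f = (n_replicas - 1) // 3          # maximum tolerable faulty replicas
    quorum = 2 * f + 1                 # matching votes needed to commit a phase
    return f, quorum

def filter_by_trust(trust_scores: dict, threshold: float = 0.5):
    """Hypothetical penalty step: drop replicas whose trust fell below threshold."""
    return {node: s for node, s in trust_scores.items() if s >= threshold}

trust = {"n0": 0.9, "n1": 0.8, "n2": 0.2, "n3": 0.95, "n4": 0.7, "n5": 0.6, "n6": 0.85}
active = filter_by_trust(trust)
f, quorum = pbft_parameters(len(active))
print(f"active replicas={len(active)}, tolerable faults f={f}, commit quorum={quorum}")
```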

A Study on Standardization of GIS Interoperability in Local Governments (지자체 GIS 상호운용성 확보를 위한 표준화 연구)

  • Jeon, Chang-Sub;Kim, Eun-Hyung
    • Journal of Korea Spatial Information System Society / v.4 no.2 s.8 / pp.41-54 / 2002
  • The main questions of this study are how to reuse GIS applications and what is required for interoperability of those applications in local governments. To answer these questions, related GIS technologies and standards are investigated. International standards organizations such as ISO/TC211 and the OGC (OpenGIS Consortium) are working on GIS interoperability standards based on component technology and distributed computing environments. In this study, a standard model for the interoperability of GIS applications in local governments is proposed based on these international standards. The standardization process for GIS interfaces in local governments is as follows: 1) modeling of GIS business, 2) establishment of GIS service architectures, 3) definition of GIS standard interfaces, and 4) development of GIS components. In conclusion, by developing interoperable GIS applications based on component technology, reusability in local governments can be realized. (An illustrative sketch of a component-style standard interface follows below.)
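Purely as an illustration of what step 3 above (defining GIS standard interfaces) might look like in code, the sketch below declares a generic, hypothetical feature-access interface that different local-government GIS components could implement. It does not reproduce an actual ISO/TC211 or OGC interface definition.

```python
from abc import ABC, abstractmethod
from typing import Dict, Iterable, Tuple

# Hypothetical, simplified "standard interface" for a GIS feature component.
# Any component implementing this interface could be reused across local
# governments without changing client code.

class FeatureService(ABC):
    @abstractmethod
    def get_feature(self, feature_id: str) -> Dict:
        """Return a single feature (geometry plus attributes) by identifier."""

    @abstractmethod
    def query(self, layer: str, bbox: Tuple[float, float, float, float]) -> Iterable[Dict]:
        """Return all features of a layer intersecting the bounding box."""

class CadastralService(FeatureService):
    """One possible component: serves cadastral parcels from a local store."""
    def __init__(self, parcels: Dict[str, Dict]):
        self._parcels = parcels

    def get_feature(self, feature_id: str) -> Dict:
        return self._parcels[feature_id]

    def query(self, layer, bbox):
        xmin, ymin, xmax, ymax = bbox
        return [p for p in self._parcels.values()
                if xmin <= p["x"] <= xmax and ymin <= p["y"] <= ymax]
```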

  • PDF

A Study On The Economic Value Of Firm's Big Data Technologies Introduction Using Real Option Approach - Based On YUYU Pharmaceuticals Case - (실물옵션 기법을 이용한 기업의 빅데이터 기술 도입의 경제적 가치 분석 - 유유제약 사례를 중심으로 -)

  • Jang, Hyuk Soo;Lee, Bong Gyou
    • Journal of Internet Computing and Services / v.15 no.6 / pp.15-26 / 2014
  • This study analyzes the economic value of introducing big data technology with a real options model, using the stock price of a company adopting big data technology to determine the incremental assessed value. The Generalized Method of Moments (GMM) is used to estimate the stochastic process of the company's stock price and extract the increment attributable to big data technology. The option value is derived from the Black-Scholes partial differential equation, which is solved with finite-difference numerical methods to estimate the economic value of introducing big data technology. As a result, the option value of the big data technology investment is 38.5 billion won under the assumption of an investment cost of 50 million won, and the time value is about 1 million won. Thus, introducing big data technology has a substantial effect on corporate profits, is valuable, and carries additional time value. Sensitivity analysis shows that a lower underlying asset value decreases the option value, while a lower investment cost increases it. The option value is not sensitive to volatility, owing to the characteristics of the big data technology case, namely low stock volatility and the introduction period. (A minimal option-valuation sketch follows below.)
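The sketch below shows the closed-form Black-Scholes value of a European call, which is the analytic solution of the PDE that the paper solves numerically with finite differences. Apart from the 50-million-won investment cost taken from the abstract, the underlying value, rate, volatility, and maturity are placeholder assumptions, not the paper's inputs.

```python
import math
from statistics import NormalDist

# Closed-form Black-Scholes European call value:
#   C = S*N(d1) - K*exp(-r*T)*N(d2)
# Real-options reading: S = present value of the project's cash flows,
# K = investment cost, T = time until the investment decision expires.

def black_scholes_call(S: float, K: float, r: float, sigma: float, T: float) -> float:
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = NormalDist().cdf
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

# Placeholder inputs except K = 50 million won (the stated investment cost).
print(f"option value = {black_scholes_call(S=60e6, K=50e6, r=0.03, sigma=0.25, T=1.0):,.0f} won")
```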

A study on Factors that Influence the Usage of Mobile Apps - Based on Flow Theory and Unified Theory of Acceptance and Use of Technology - (모바일 앱 이용에 영향을 미치는 요인 : - 플로우 이론과 통합기술수용모형을 바탕으로 -)

  • Kim, Young-Chae;Jeong, Seung Ryul
    • Journal of Internet Computing and Services / v.14 no.4 / pp.73-84 / 2013
  • This study, based on flow theory and the unified theory of acceptance and use of technology (UTAUT), examines the factors that influence the continuous use of mobile applications, particularly those that provide users with satisfaction and pleasure as well as useful information. It extends previous studies based on the technology acceptance model, in which usefulness and ease of use are the key determinants of new technology use, by introducing flow theory to explain the use of technologies in the mobile environment. For this purpose, the study employs a survey-based field study and collects data from users of fashion mobile apps, since these apps are considered to provide fun and pleasure. The study finds that flow theory is an appropriate framework for understanding the use of mobile technology, with flow experience being an important variable in determining the usage of fashion apps. In addition, performance expectancy, effort expectancy, social influence, and facilitating conditions are found to significantly influence the use of mobile apps, suggesting that UTAUT still plays an important role in understanding the use of mobile technology.

Mobile Service Modeling Based on Service Oriented Architecture (서비스 지향 아키텍처 기반의 모바일 서비스 모델링)

  • Chang, Young-Won;Noh, Hye-Min;Yoo, Cheol-Jung
    • Journal of the Institute of Electronics Engineers of Korea SD / v.45 no.2 / pp.140-149 / 2008
  • Recently, the need to access information anywhere at any time has been a driving force for a variety of mobile applications. As the number of mobile applications increases rapidly, there has been growing demand for Service Oriented Architectures (SOA) across applications. A mobile-based SOA offers a systematic way to classify and assess technical realizations of business processes. However, mobile devices have a severely restricted range of usable services in the computing environment: even as they are envisioned to gain more powerful capabilities, they remain limited to storing a small database, modest data-processing capacity, narrow user input, and a small display. This paper presents a mobile adaptation method based on SOA to overcome these restrictions. To improve mobile efficiency, we analyze mobile application requirements, write service specifications, optimize the design, provide extended use-case specifications for use-case testing, and test service test cases derived from the service specifications. We discuss mobile application testing that uses SOA as a model for deploying, discovering, specifying, integrating, implementing, testing, and invoking services. Such a service use-case specification and testing technique can help developers build cost-efficient and dependable mobile services. (A small sketch of a specification-derived test follows below.)
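As a loose, hypothetical illustration of deriving a service test case from a service specification (the paper gives no concrete code), the sketch below declares a tiny specification for one mobile-facing service operation and a unit test that checks an implementation against it. All names, fields, and the example service are assumptions.

```python
import unittest
from dataclasses import dataclass
from typing import Callable

# Hypothetical service "specification" and a test case derived from it.

@dataclass
class ServiceSpec:
    name: str
    required_inputs: set
    validate_output: Callable[[dict], bool]

order_status_spec = ServiceSpec(
    name="getOrderStatus",
    required_inputs={"order_id"},
    validate_output=lambda out: {"order_id", "status"}.issubset(out),
)

def get_order_status(request: dict) -> dict:
    """Toy implementation of the specified service operation."""
    return {"order_id": request["order_id"], "status": "SHIPPED"}

class TestOrderStatusService(unittest.TestCase):
    def test_conforms_to_spec(self):
        request = {"order_id": "A-100"}
        self.assertTrue(order_status_spec.required_inputs.issubset(request))
        response = get_order_status(request)
        self.assertTrue(order_status_spec.validate_output(response))

if __name__ == "__main__":
    unittest.main()
```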

Random Noise Addition for Detecting Adversarially Generated Image Dataset (임의의 잡음 신호 추가를 활용한 적대적으로 생성된 이미지 데이터셋 탐지 방안에 대한 연구)

  • Hwang, Jeonghwan;Yoon, Ji Won
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.12 no.6 / pp.629-635 / 2019
  • In deep learning models, gradients are computed by error back-propagation, which enables the model to learn from its errors and update its parameters. Taking advantage of huge improvements in computing power, this can find globally (or locally) optimal parameter values even for complex models. However, deliberately crafted data points can 'fool' models and degrade performance measures such as prediction accuracy. Not only do these adversarial examples reduce performance, they are also not easily detectable by the human eye. In this work, we propose a method to detect adversarial datasets by adding random noise. We exploit the fact that when random noise is added, the prediction accuracy on a non-adversarial dataset remains almost unchanged, whereas the accuracy on an adversarial dataset changes. In a simulation experiment, we set the attack method (FGSM, saliency map) and the noise level (0-19, with a maximum pixel value of 255) as independent variables, and the change in prediction accuracy after adding noise as the dependent variable. We extracted a threshold that separates non-adversarial from adversarial datasets and used it to detect the adversarial dataset. (A minimal sketch of this detection idea follows below.)
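The sketch below is a minimal rendering of the detection idea described above: add small random noise to a batch of images and compare accuracy before and after; a large change suggests the batch is adversarial. The `predict_fn` interface, the noise level, and the decision threshold are illustrative assumptions, not the paper's exact experimental settings.

```python
import numpy as np

def accuracy(predict_fn, images, labels):
    """Fraction of images the classifier labels correctly."""
    return float(np.mean(predict_fn(images) == labels))

def looks_adversarial(predict_fn, images, labels, noise_level=10, threshold=0.05):
    """images: uint8 array in [0, 255]; noise_level in pixel units (the paper sweeps 0-19).

    Adds uniform random pixel noise and flags the batch as adversarial when the
    accuracy shift exceeds the (assumed) threshold.
    """
    noise = np.random.randint(-noise_level, noise_level + 1, size=images.shape)
    noisy = np.clip(images.astype(int) + noise, 0, 255).astype(np.uint8)
    acc_clean = accuracy(predict_fn, images, labels)
    acc_noisy = accuracy(predict_fn, noisy, labels)
    return abs(acc_clean - acc_noisy) > threshold  # large shift -> likely adversarial
```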