• Title/Summary/Keyword: User-defined Model

An Architecture of Access Control Model for Preventing Illegal Information Leakage by Insider (내부자의 불법적 정보 유출 차단을 위한 접근통제 모델 설계)

  • Eom, Jung-Ho;Park, Seon-Ho;Chung, Tai-M.
    • Journal of the Korea Institute of Information Security & Cryptology / v.20 no.5 / pp.59-67 / 2010
  • In this paper, we propose the IM-ACM (Insider Misuse-Access Control Model) for preventing illegal information leakage by insiders who exploit their legitimate rights in a ubiquitous computing environment. The IM-ACM monitors whether an insider uses data properly by adding a misuse monitor to CA-TRBAC (Context Aware-Task Role Based Access Control), which grants access authorization according to user role, context role, task, and the entity's security attributes. Information leakage by insiders is difficult to prevent because insiders hold legitimate access rights and a wealth of knowledge about the system. The IM-ACM prevents information flow between objects with different security levels by using context roles and security attributes, and prevents insider misuse through the misuse monitor, which compares an insider's actual processing behavior against the insider's expected work-process pattern drawn from the currently defined profile of the insider's processes.
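  Illustrative sketch only (not the authors' implementation): a context-aware, task/role-based access check combined with a misuse monitor that compares observed actions against a per-insider work-process profile. The policy entries, roles, and action names are invented for illustration.

```python
# Hypothetical CA-TRBAC-style check plus misuse monitor; all names are assumptions.
from dataclasses import dataclass, field

@dataclass
class Request:
    user_role: str
    context_role: str          # e.g. "office_hours", "external_network"
    task: str
    object_level: int          # security level of the requested object
    subject_level: int         # clearance of the requesting insider

# Hypothetical policy: (user_role, context_role, task) tuples that are permitted.
POLICY = {("analyst", "office_hours", "report_generation")}

def ca_trbac_permit(req: Request) -> bool:
    """Grant access only if the role/context/task combination is allowed
    and information cannot flow toward a lower security level."""
    allowed = (req.user_role, req.context_role, req.task) in POLICY
    no_downward_flow = req.subject_level >= req.object_level
    return allowed and no_downward_flow

@dataclass
class MisuseMonitor:
    # Expected work-process pattern per role (hypothetical profile format).
    profile: dict = field(default_factory=lambda: {"analyst": ["open", "read", "summarize"]})

    def is_misuse(self, role: str, observed_actions: list[str]) -> bool:
        """Flag behavior that deviates from the defined process profile."""
        expected = self.profile.get(role, [])
        return any(action not in expected for action in observed_actions)

req = Request("analyst", "office_hours", "report_generation", object_level=2, subject_level=3)
monitor = MisuseMonitor()
print(ca_trbac_permit(req))                                   # True
print(monitor.is_misuse("analyst", ["open", "bulk_export"]))  # True -> potential leakage
```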

Numerical analysis of blast-induced anisotropic rock damage (터발파압력에 기인한 이방성 암반손상의 수치해석적 분석)

  • Park, Bong-Ki;Cho, Kook-Hwan;Lee, In-Mo
    • Journal of Korean Tunnelling and Underground Space Association / v.6 no.4 / pp.291-302 / 2004
  • Blast-induced anisotropic rock damage around a blast-hole was analyzed by using a numerical method with a user-defined subroutine based on continuum damage mechanics. Anisotropic blasting pressure was evaluated by applying anisotropic rock characteristics to an analytical solution that is a function of explosive and rock properties, and anisotropic rock damage was then evaluated by applying the proposed anisotropic blasting pressure. Blast-induced isotropic rock damage was also analyzed. User-defined subroutines for the anisotropic and isotropic damage models were coded, and the initial damage of the natural rock was considered in both models. Parametric analysis of isotropic rock damage showed that blasting pressure and the elastic modulus of the rock were the major influential parameters, while blasting pressure was the most influential parameter for anisotropic rock damage. The anisotropic rock damage area was approximately 34% larger in the horizontal direction and about 12% smaller in the vertical direction compared with the isotropic rock damage area. The isotropic rock damage area under the fully coupled charge condition was around 30 times larger than that under the decoupled charge condition, and the blasting pressure under the fully coupled charge condition was estimated to be more than 10 times larger than that under the decoupled charge condition.
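  Generic sketch of the kind of scalar damage update such a user-defined material subroutine performs; the threshold/evolution law, moduli, and strain history below are illustrative placeholders, not the anisotropic formulation proposed in the paper.

```python
# Generic continuum-damage update (illustrative values only).
import numpy as np

E0 = 50e9            # undamaged elastic modulus [Pa] (assumed)
EPS_TH = 1e-4        # damage-initiation strain threshold (assumed)
EPS_CR = 5e-3        # strain at which damage saturates (assumed)

def update_damage(eps_eq: float, damage_old: float) -> float:
    """Return updated scalar damage D in [0, 1); damage never heals."""
    if eps_eq <= EPS_TH:
        d_new = 0.0
    else:
        d_new = min(0.99, (eps_eq - EPS_TH) / (EPS_CR - EPS_TH))
    return max(damage_old, d_new)   # irreversibility

def effective_modulus(damage: float) -> float:
    """Degraded modulus of the damaged continuum."""
    return (1.0 - damage) * E0

# Example: strain history produced by a decaying blasting pressure pulse.
strain_history = 6e-3 * np.exp(-np.linspace(0, 5, 50))
D = 0.05             # initial damage of the natural rock (assumed)
for eps in strain_history:
    D = update_damage(eps, D)
print(f"final damage D = {D:.2f}, E_eff = {effective_modulus(D)/1e9:.1f} GPa")
```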

Object Modeling for Mapping from XML Document and Query to UML Class Diagram based on XML-GDM (XML-GDM을 기반으로 한 UML 클래스 다이어그램으로 사상을 위한 XML문서와 질의의 객체 모델링)

  • Park, Dae-Hyun;Kim, Yong-Sung
    • The KIPS Transactions:PartD / v.17D no.2 / pp.129-146 / 2010
  • Nowadays, XML is favored by many companies, internally and externally, as a means of sharing and distributing data, and many studies and systems address modeling and storing XML documents with object-oriented methods so that web-based multimedia documents can be saved and managed more easily. The representative tool for object-oriented modeling of XML documents is UML (Unified Modeling Language). UML was initially used as an integrated methodology for software development, but it is now used more broadly as a modeling language for various kinds of objects. UML supports various diagrams for object-oriented analysis and design, such as the class diagram, and is widely used to create database schemas and object-oriented code from them. This paper proposes an efficient query modeling of XML-GL, using the UML class diagram and OCL, for searching XML documents, whose application scope has widened with the increased use of the WWW and with their flexible and open nature. To accomplish this, we propose modeling rules and an algorithm that map XML-GL, which provides modeling functions for XML documents and DTDs as well as graphical query functions over them. To describe the constraints on model components precisely, they are defined with OCL (Object Constraint Language). The proposed technique creates queries over XML documents that hold the various properties of the object-oriented model by modeling XML-GL queries from XML documents, XML DTDs, and XML queries with the UML class diagram. By visually converting, saving, and managing XML documents as an object-oriented graphic data model, users gain a basis for expressing searches and queries over XML documents intuitively and visually. Compared with existing XML-based query languages, the approach has various object-oriented characteristics and uses the UML notation that is widely adopted for object modeling, so users can construct graphical and intuitive queries over XML-based web documents without learning a new query language. Because the same modeling tool, the UML class diagram, is used for XML document content and for query syntax and semantics, all processes such as searching and saving XML documents from/to an object-oriented database can be performed consistently.
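  Illustrative sketch only: deriving a rough class-diagram-like structure (classes, attributes, associations) from an XML document. The actual XML-GDM/XML-GL mapping rules and OCL constraints in the paper are richer than this; the sample document and mapping heuristics are assumptions.

```python
# Map element tags to "classes", XML attributes and leaf elements to class
# attributes, and parent-child nesting to associations.
import xml.etree.ElementTree as ET
from collections import defaultdict

SAMPLE = """<library>
  <book isbn="123"><title>XML and UML</title><author>Park</author></book>
  <book isbn="456"><title>Query Modeling</title><author>Kim</author></book>
</library>"""

def xml_to_class_model(xml_text: str):
    root = ET.fromstring(xml_text)
    classes = defaultdict(set)        # class name -> attribute names
    associations = set()              # (parent class, child class)

    def walk(elem):
        classes[elem.tag].update(elem.attrib.keys())
        for child in elem:
            if len(child) == 0 and child.text and child.text.strip():
                classes[elem.tag].add(child.tag)      # leaf element -> attribute
            else:
                associations.add((elem.tag, child.tag))
                walk(child)

    walk(root)
    return classes, associations

classes, associations = xml_to_class_model(SAMPLE)
print(dict(classes))        # {'library': set(), 'book': {'isbn', 'title', 'author'}}
print(associations)         # {('library', 'book')}
```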

Location Inference of Twitter Users using Timeline Data (타임라인데이터를 이용한 트위터 사용자의 거주 지역 유추방법)

  • Kang, Ae Tti;Kang, Young Ok
    • Spatial Information Research / v.23 no.2 / pp.69-81 / 2015
  • If the residential areas of SNS users can be inferred by analyzing SNS big data, this offers an alternative that mitigates the location sparsity and ecological-error problems found in spatial big data research. In this study, we developed a way to infer the residential areas of Twitter users from the daily life activity patterns found in their timeline data. We recognized the daily life activity pattern of each user from the user's movement pattern and from the regional cognition words that the user writes in tweets; the models based on movement and on text are named the daily movement pattern model and the daily activity field model, respectively. We then selected the variables to be used in each model and defined the dependent variable as 0 if the area from which a user mainly tweets is the user's home location (HL), and as 1 otherwise. In the discriminant analysis, the hit ratios of the two models were 67.5% and 57.5%, respectively. We tested both models on the timeline data of stress-related tweets; as a result, we inferred the residential areas of 5,301 of 48,235 users and obtained 9,606 stress-related tweets with a residential area, which is about a 44-fold increase compared with the number of geo-tagged tweets. We expect the methodology used in this study not only to help secure more location data in SNS big data research, but also to link SNS big data with regional statistics for analyzing regional phenomena.
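  A minimal sketch of the discriminant-analysis step with hypothetical per-user features; the paper's actual variables (movement pattern, regional cognition words) and data are not reproduced here, and the synthetic labels below are invented.

```python
# Linear discriminant analysis on synthetic features as a stand-in for the
# home-location (0) vs. non-home-location (1) classification described.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n = 200
# Hypothetical per-user features: share of tweets from the most frequent area,
# number of distinct areas visited, count of home-region words in tweet text.
X = np.column_stack([
    rng.uniform(0.2, 1.0, n),
    rng.integers(1, 15, n),
    rng.poisson(3, n),
])
# y = 0 if the dominant tweeting area is the home location, 1 otherwise (noisy synthetic label).
y = (X[:, 0] < 0.5).astype(int) ^ (rng.random(n) < 0.1)

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
hit_ratio = lda.score(X, y)          # in-sample "hit ratio"
print(f"hit ratio: {hit_ratio:.3f}")
print(lda.predict([[0.8, 3, 5]]))    # 0 -> tweets mainly from the home location
```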

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.163-177 / 2019
  • As smartphones have become widely used, human activity recognition (HAR) tasks that recognize the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from recognition of the simple body movements of an individual user to recognition of low-level and high-level behaviors. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data, whereas physical sensors, including the accelerometer, magnetic field, and gyroscope sensors, are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a method for detecting accompanying status with a deep learning model that uses only multimodal physical sensor data, such as the accelerometer, magnetic field sensor, and gyroscope, was proposed. The accompanying status was defined as a redefined part of user interaction behavior, covering whether the user is accompanied by an acquaintance at close distance and whether the user is actively communicating with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompaniment and conversation was proposed. First, a data preprocessing method was introduced, consisting of time synchronization of the multimodal data from the different physical sensors, data normalization, and sequence data generation. We applied nearest-neighbor interpolation to synchronize the timestamps of data collected from different sensors, normalized each x, y, and z axis value of the sensor data, and generated sequence data with a sliding-window method. The sequence data then became the input to the CNN, which extracts feature maps representing local dependencies of the original sequence. The CNN consisted of 3 convolutional layers and had no pooling layer, in order to maintain the temporal information of the sequence data. Next, the LSTM recurrent networks received the feature maps, learned long-term dependencies from them, and extracted features; the LSTM consisted of two layers, each with 128 cells. Finally, the extracted features were classified by a softmax classifier. The loss function of the model was the cross-entropy function, and the weights of the model were randomly initialized from a normal distribution with mean 0 and standard deviation 0.1. The model was trained using the adaptive moment estimation (ADAM) optimization algorithm with a mini-batch size of 128, and dropout was applied to the input values of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001 and decreased exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data from a total of 18 subjects. Using these data, the model classified accompaniment and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and the accuracy of the model were higher than those of a majority-vote classifier, a support vector machine, and a deep recurrent neural network.
In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that enable models trained on the training data to be transferred to evaluation data that follows a different distribution. We expect this to yield a model that exhibits robust recognition performance against changes in the data that were not considered in the model learning stage.
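A PyTorch sketch of a CNN + LSTM classifier in the spirit of the described framework (3 convolutional layers without pooling, a 2-layer LSTM with 128 cells, a softmax/cross-entropy output, ADAM with learning rate 0.001 decayed by 0.99 per epoch, dropout on the LSTM inputs); the channel counts, kernel sizes, dropout rate, and window length are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class AccompanyNet(nn.Module):
    def __init__(self, n_channels: int = 9, n_classes: int = 2):
        super().__init__()
        # n_channels = 9 assumes x/y/z of accelerometer, magnetic field, gyroscope.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )                                # no pooling: preserves temporal length
        self.dropout = nn.Dropout(0.5)   # dropout applied to the LSTM inputs
        self.lstm = nn.LSTM(input_size=64, hidden_size=128, num_layers=2,
                            batch_first=True)
        self.fc = nn.Linear(128, n_classes)

    def forward(self, x):                # x: (batch, channels, time)
        feats = self.cnn(x)              # (batch, 64, time)
        feats = self.dropout(feats.transpose(1, 2))   # (batch, time, 64)
        out, _ = self.lstm(feats)
        return self.fc(out[:, -1, :])    # logits; softmax is folded into the loss

model = AccompanyNet()
criterion = nn.CrossEntropyLoss()        # cross-entropy with softmax
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)

x = torch.randn(128, 9, 100)             # one mini-batch of sliding windows
loss = criterion(model(x), torch.randint(0, 2, (128,)))
loss.backward(); optimizer.step(); scheduler.step()   # scheduler is stepped once per epoch in practice
```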

Estimation of Land Surface Energy Fluxes using CLM and VIC model (CLM과 VIC 모형을 활용한 지표 에너지 플럭스 산정)

  • Kim, Daeun;Ray, Ram L.;King, Seokkoo;Choi, Minha
    • Journal of Wetlands Research / v.18 no.2 / pp.166-172 / 2016
  • An accurate understanding of the land surface is essential for analyzing energy exchanges between the earth's surface and the atmosphere. To quantify these energy fluxes, various studies of Land Surface Models (LSMs) have been carried out, and among them studies using the Common Land Model (CLM) and the Variable Infiltration Capacity (VIC) model are actively performed. The CLM, an advanced LSM, can produce realistic results with few user-defined parameters, while the VIC model, another representative LSM, is widely used to estimate energy fluxes and runoff in various fields. In this study, the energy fluxes of net radiation, sensible heat flux, and latent heat flux were estimated using the CLM and the VIC model at the Southern Sierra Critical Zone Observatory (SS-CZO) site in California, United States. For net radiation and sensible heat flux, both models showed good agreement with observations, although the CLM underestimated both during precipitation periods. For latent heat flux, the CLM produced better estimates than the VIC model, which underestimated it. Through this estimation of energy fluxes and analysis of each model's strengths and weaknesses, the applicability of the CLM and VIC models and the need for multi-model application were identified.
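  Illustrative sketch of the model-versus-observation comparison behind such a study: bias and RMSE of simulated fluxes against observations, and the surface energy balance residual Rn - H - LE - G. All numbers below are made up and do not come from the paper.

```python
import numpy as np

def bias(sim, obs):
    return float(np.mean(sim - obs))

def rmse(sim, obs):
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

# Hypothetical half-hourly latent heat fluxes in W m^-2.
obs_LE = np.array([ 80.0, 120.0, 150.0, 90.0])
clm_LE = np.array([ 85.0, 115.0, 160.0, 95.0])
vic_LE = np.array([ 60.0,  95.0, 120.0, 70.0])   # underestimation pattern

for name, sim in [("CLM", clm_LE), ("VIC", vic_LE)]:
    print(f"{name}: bias={bias(sim, obs_LE):+.1f} W/m^2, rmse={rmse(sim, obs_LE):.1f} W/m^2")

# Energy balance check for one time step: Rn = H + LE + G (+ residual).
Rn, H, LE, G = 400.0, 180.0, 150.0, 40.0
print("closure residual:", Rn - H - LE - G, "W/m^2")
```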

Design and Implementation of a Real-Time Emotional Avatar (실시간 감정 표현 아바타의 설계 및 구현)

  • Jung, Il-Hong;Cho, Sae-Hong
    • Journal of Digital Contents Society / v.7 no.4 / pp.235-243 / 2006
  • This paper presents an efficient method for expressing the emotion of an avatar based on facial expression recognition. Instead of changing the avatar's facial expression manually, the method changes it in real time based on recognition of facial patterns captured by a web cam. It provides a tool for recognizing parts of the images captured by the web cam; because it uses a model-based approach, the tool recognizes these images faster than alternatives such as template-based or network-based approaches. After detecting eye information with the model-based approach, it extracts the shape of the user's lips. From changes in lip patterns, we define 6 avatar facial expressions using 13 standard lip patterns, and the avatar switches expressions quickly by using pre-defined avatars with the corresponding expressions.
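  A sketch of the pattern-matching idea only: match an extracted lip-shape feature vector to the nearest of 13 standard lip patterns and map it to one of 6 avatar expressions. The feature encoding and the pattern/expression tables are hypothetical, not the paper's actual data.

```python
import numpy as np

rng = np.random.default_rng(1)
STANDARD_LIP_PATTERNS = rng.random((13, 8))        # 13 patterns x 8-dim lip features (placeholder)
PATTERN_TO_EXPRESSION = [i % 6 for i in range(13)] # pattern index -> expression id 0..5 (placeholder)
EXPRESSIONS = ["neutral", "smile", "laugh", "surprise", "sad", "angry"]

def classify_expression(lip_features: np.ndarray) -> str:
    """Return the avatar expression for the nearest standard lip pattern."""
    distances = np.linalg.norm(STANDARD_LIP_PATTERNS - lip_features, axis=1)
    nearest_pattern = int(np.argmin(distances))
    return EXPRESSIONS[PATTERN_TO_EXPRESSION[nearest_pattern]]

observed = rng.random(8)                           # lip shape extracted from one frame
print(classify_expression(observed))
```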

Performance Simulation of ACM for Compensating Rain Attenuation in Satellite Link (위성시스템 강우 감쇠 보상을 위한 ACM 성능 시뮬레이션)

  • Zhang, Meixiang;Kim, Sooyoung;Pack, Jeong-Ki;Kim, Ihn-Kyum
    • Journal of Satellite, Information and Communications / v.7 no.3 / pp.8-15 / 2012
  • Adaptive transmission is an effective means of counteracting rain attenuation, one of the most significant factors degrading link quality in satellite communication systems. This paper introduces a simulator for adaptive transmission techniques that compensate for rain attenuation. The simulator loads a dynamic rain attenuation model developed to synthesize Korean rain attenuation dynamics at the Ka frequency band; it is a Markov chain model whose rain attenuation parameters were extracted from attenuation data measured every second. In addition, various transmission schemes are embedded so that user-defined simulations can be performed. This paper demonstrates simulation results for the adaptive schemes in comparison with fixed schemes and shows the efficiency of the adaptive schemes in compensating for rain attenuation.
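  A sketch of the two ingredients described: a discrete-state Markov chain that generates a rain-attenuation time series, and an ACM rule that picks a modulation/coding scheme from the remaining link margin. The transition matrix, attenuation levels, and MODCOD table below are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
ATT_DB = np.array([0.5, 2.0, 5.0, 10.0])           # attenuation per state [dB] (assumed)
P = np.array([[0.90, 0.08, 0.02, 0.00],            # row-stochastic transition matrix (assumed)
              [0.10, 0.80, 0.08, 0.02],
              [0.02, 0.10, 0.80, 0.08],
              [0.00, 0.05, 0.15, 0.80]])

def simulate_attenuation(n_steps: int, state: int = 0) -> np.ndarray:
    series = np.empty(n_steps)
    for t in range(n_steps):
        series[t] = ATT_DB[state]
        state = rng.choice(len(ATT_DB), p=P[state])
    return series

# (required Es/N0 [dB], name) from most to least efficient -- illustrative values.
MODCODS = [(11.0, "16APSK 3/4"), (6.5, "QPSK 3/4"), (1.0, "QPSK 1/3")]

def select_modcod(clear_sky_esn0_db: float, attenuation_db: float) -> str:
    esn0 = clear_sky_esn0_db - attenuation_db
    for required, name in MODCODS:
        if esn0 >= required:
            return name
    return "outage"

for a in simulate_attenuation(5):
    print(f"attenuation {a:4.1f} dB -> {select_modcod(12.0, a)}")
```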

Face Detection for Automatic Avatar Creation by using Deformable Template and GA (Deformable Template과 GA를 이용한 얼굴 인식 및 아바타 자동 생성)

  • Park Tae-Young;Kwon Min-Su;Kang Hoon
    • Journal of the Korean Institute of Intelligent Systems / v.15 no.1 / pp.110-115 / 2005
  • This paper proposes a method for detecting the contours of the face, eyes, and mouth in a color image in order to create an avatar automatically. First, we use the HSI color model to reduce the effect of varying lighting conditions and find skin regions in the input image by using a skin color defined on the HS plane. We then use deformable templates and a genetic algorithm (GA) to detect the contours of the face, eyes, and mouth. The deformable templates consist of B-spline curves and control-point vectors, which can represent various shapes of the face, eyes, and mouth, and the GA is a useful search procedure based on the mechanics of natural selection and natural genetics. Second, an avatar is created automatically from the detected contours and Fuzzy C-means clustering (FCM), where FCM is used to reduce the number of face colors. As a result, we could create avatars resembling hand-drawn caricatures that represent the user's identity, unlike those generated by existing methods.
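  A generic genetic-algorithm sketch for fitting template control points; the fitness function here is a stand-in (distance to a known target shape) rather than the image-based energy a real deformable-template matcher would use, and the population sizes and operators are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_POINTS = 8                                   # control points of one template (assumed)
TARGET = rng.random(N_POINTS)                  # hypothetical "true" contour

def fitness(individual: np.ndarray) -> float:
    return -np.sum((individual - TARGET) ** 2) # higher is better

def evolve(pop_size=50, generations=200, mutation_sigma=0.05):
    population = rng.random((pop_size, N_POINTS))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in population])
        # Tournament selection of parents.
        idx = rng.integers(0, pop_size, (pop_size, 2))
        winners = np.where(scores[idx[:, 0]] > scores[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = population[winners]
        # Arithmetic crossover between paired parents, then Gaussian mutation.
        alpha = rng.random((pop_size, 1))
        children = alpha * parents + (1 - alpha) * parents[::-1]
        children += rng.normal(0.0, mutation_sigma, children.shape)
        population = children
    best = population[np.argmax([fitness(ind) for ind in population])]
    return best

print(np.round(evolve() - TARGET, 2))          # small residuals once the search converges
```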

Methods to Design Provided, Required and Customize Interfaces of Software Components (소프트웨어 컴포넌트의 Provided, Required와 Customize인터페이스 설계 기법)

  • Park, Ji-Young;Kim, Soo-Dong
    • Journal of KIISE:Software and Applications / v.31 no.10 / pp.1286-1303 / 2004
  • Component-based development is gaining wide acceptance as an economical software development paradigm in which applications are built from reusable software components. A well-defined interface manages coupling and cohesion between components, minimizes the impact on users when a component evolves, and enhances reusability, extensibility, and maintainability. Therefore, a systematic development process and design guidelines for component interfaces have been needed since components were introduced. In this paper, we propose three types of interfaces based on software architecture layers and functionality types: the Provided interface, which exposes the functionality of a component; the Required interface, which specifies the functionality the component needs from other components; and the Customize interface, which tailors the component to the customer's requirements. In addition, we suggest design criteria for well-designed interfaces and a systematic process with instructions for designing them. We first cluster operations extracted from the use case model and class model to identify Provided interfaces, design Customize interfaces based on variability artifacts, and specify Required interfaces by identifying dependencies among interfaces. The proposed interface design method provides traceability throughout component interface design, and the proposed guidelines, illustrated with a case study, support the practical design of high-quality components.
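  Illustrative sketch only: the three interface types described (Provided, Required, Customize) expressed as Python protocols for a hypothetical "OrderManager" component; the component, operations, and pricing logic are invented, not taken from the paper.

```python
from typing import Protocol

class OrderProvided(Protocol):
    """Provided interface: functionality the component offers to its clients."""
    def place_order(self, item_id: str, quantity: int) -> str: ...
    def cancel_order(self, order_id: str) -> None: ...

class PaymentRequired(Protocol):
    """Required interface: functionality this component needs from other components."""
    def charge(self, amount: float) -> bool: ...

class OrderCustomize(Protocol):
    """Customize interface: variation points used to tailor the component."""
    def set_tax_rate(self, rate: float) -> None: ...

class OrderManager:
    """Component realizing the Provided and Customize interfaces and depending on
    the Required interface through constructor injection."""
    def __init__(self, payment: PaymentRequired):
        self._payment = payment
        self._tax_rate = 0.1

    def set_tax_rate(self, rate: float) -> None:      # Customize
        self._tax_rate = rate

    def place_order(self, item_id: str, quantity: int) -> str:   # Provided
        total = 9.99 * quantity * (1 + self._tax_rate)            # dummy pricing
        return f"order-{item_id}" if self._payment.charge(total) else "rejected"

    def cancel_order(self, order_id: str) -> None:     # Provided
        print(f"cancelled {order_id}")
```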