• Title/Summary/Keyword: Design of Information Systems

A Scenario-based Goal-oriented Approach for Use Case Modeling (유즈케이스 모델링을 위한 시나리오 근간의 목표(Goal)지향 분석 방안)

  • Lee, Jae-Ho;Kim, Jae-Seon;Park, Soo-Yong
    • Journal of KIISE:Software and Applications / v.29 no.4 / pp.211-224 / 2002
  • As systems become larger and more complex, it is important to analyze and specify users' requirements correctly. Use case modeling is widely used in Object-Oriented Analysis and Design (OOAD) and Component-Based Development (CBD), and it helps mitigate the complexity of requirements analysis. However, use cases are difficult to structure, do not explicitly represent non-functional requirements, and make it hard to analyze what is affected when a use case changes. To alleviate these problems, we propose a scenario-based, goal-oriented approach to use case modeling that applies goal-oriented analysis to the use case model. Because goal-oriented analysis is not systematic and relies heavily on heuristics, we adopt scenarios as the basis for goal extraction. The proposed method is applied to the City Bus Information Subsystem (CBIS) in the Intelligent Transport Systems (ITS) domain. The approach helps software engineers analyze what is affected by changes to use cases and represent non-functional requirements.
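
The goal/use-case traceability idea in this abstract can be pictured with a small data structure. The following Python sketch is an illustration only, assuming invented Goal/UseCase classes and an impact_of_change helper rather than the authors' actual notation: goals, including non-functional ones, reference the use cases that realize them, so a change to a use case can be traced back to every affected goal.

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    name: str
    scenarios: list[str] = field(default_factory=list)

@dataclass
class Goal:
    description: str
    kind: str                                  # "functional" or "non-functional"
    use_cases: list[UseCase] = field(default_factory=list)
    subgoals: list["Goal"] = field(default_factory=list)

def impact_of_change(changed: UseCase, goals: list[Goal]) -> list[Goal]:
    """Collect every goal that directly or transitively depends on a use case."""
    affected: list[Goal] = []

    def visit(goal: Goal) -> bool:
        sub_hits = [visit(g) for g in goal.subgoals]   # visit all subgoals first
        hit = changed in goal.use_cases or any(sub_hits)
        if hit:
            affected.append(goal)
        return hit

    for g in goals:
        visit(g)
    return affected

# Toy CBIS-style example: one use case supports a functional goal and a
# non-functional (response-time) goal, so changing it affects both.
locate_bus = UseCase("Locate bus", ["Passenger requests arrival time at a stop"])
goals = [
    Goal("Provide real-time arrival information", "functional", [locate_bus]),
    Goal("Answer location queries within 2 seconds", "non-functional", [locate_bus]),
]
for g in impact_of_change(locate_bus, goals):
    print(g.kind, "-", g.description)
```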

Design and Implementation of Ethereum-based Future Power Trading System (이더리움 기반의 선물(Future) 전력 거래 시스템 설계)

  • Youm, Sungkwan;Lee, Heekwon;Shin, Kwang-Seong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.10a / pp.584-585 / 2021
  • As the production of new and renewable energy such as solar and wind power has diversified, microgrid systems that can simultaneously produce and consume electricity have been introduced. In general, solar generation is expected to push electricity prices down in summer, so producer protection is required. In this paper, we propose a transparent and safe futures power trading system between users that uses a blockchain in a microgrid environment. A futures contract is simply an agreement that obliges the buyer to buy, or the seller to sell, electricity at a predetermined futures price. The proposed system implements a futures trading algorithm that searches futures prices and concludes power transactions automatically, without user intervention, by using a smart contract, i.e., trusted executable code within the blockchain network. If, while planning production, a power producer expects the price during the peak production period to fall, it first sells futures in the futures market and buys them back during the peak production period, so that the gain on the futures position compensates for losses in the spot market. Conversely, if there is a risk that the electricity price will rise by the time a sales contract is fulfilled, a broker can offset the loss in the spot market by first buying futures in the futures market and liquidating the position when the sales contract is fulfilled.
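
A minimal sketch of the hedging arithmetic described in this abstract, in Python. It is not the authors' Ethereum smart-contract code; the PowerFutures class, its fields, and the example prices are assumptions made for illustration.

```python
from dataclasses import dataclass

@dataclass
class PowerFutures:
    quantity_kwh: float      # contracted energy volume
    futures_price: float     # price per kWh fixed when the contract is sold

    def producer_pnl(self, buyback_price: float) -> float:
        """Profit/loss for a producer who sold futures and later buys them back."""
        return (self.futures_price - buyback_price) * self.quantity_kwh

# A producer expects the summer spot price to fall from 120 to 90 won/kWh.
contract = PowerFutures(quantity_kwh=1_000, futures_price=120.0)

spot_revenue_loss = (120.0 - 90.0) * 1_000           # lost spot-market revenue
futures_gain = contract.producer_pnl(buyback_price=90.0)

print(f"spot loss {spot_revenue_loss:,.0f} won, futures gain {futures_gain:,.0f} won")
# The futures gain offsets the spot loss, which is the producer protection the
# proposed smart contract is meant to automate.
```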

Comparison of Association Rule Learning and Subgroup Discovery for Mining Traffic Accident Data (교통사고 데이터의 마이닝을 위한 연관규칙 학습기법과 서브그룹 발견기법의 비교)

  • Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.1-16 / 2015
  • Traffic accidents have been one of the major causes of death worldwide for the last several decades. According to World Health Organization statistics, approximately 1.24 million deaths occurred on the world's roads in 2010. To reduce future traffic accidents, multipronged approaches have been adopted, including traffic regulations, injury-reducing technologies, and driver training programs. Records of traffic accidents are generated and maintained for this purpose. To make these records useful, it is necessary to analyze the relationships between accidents and related factors such as vehicle design, road design, weather, and driver behavior; insights derived from such analyses can inform accident-prevention measures. Traffic accident data mining is the activity of finding useful, previously unknown knowledge about such relationships that users may be interested in. Many studies on mining accident data have been reported over the past two decades. Most of them focused on predicting accident risk from accident-related factors, using supervised learning methods such as decision trees, logistic regression, k-nearest neighbors, and neural networks. However, the prediction models derived by these algorithms are too complex for humans to understand, because their main purpose is prediction rather than explanation of the data. Some studies use unsupervised clustering algorithms to divide the data into groups, but the derived groups are themselves not easy for humans to understand, so additional analysis is still needed. Rule-based learning methods are appropriate when we want to derive knowledge about the target domain in a comprehensible form: they produce a set of if-then rules that represent relationships between the target feature and other features, and rules are fairly easy for humans to interpret, so they can provide insight and comprehensible results. Association rule learning and subgroup discovery are representative rule-based learning methods for descriptive tasks. They have been used in a wide range of areas, from transaction analysis and accident data analysis to the detection of statistically significant patient risk groups and the discovery of key persons in social communities. We use both methods to discover useful patterns from a traffic accident dataset with many features, including driver profile, accident location, accident type, vehicle information, and regulation violations. Association rule learning, an unsupervised method, searches the data for frequent item sets and translates them into rules. In contrast, subgroup discovery is a supervised method that finds rules for a user-specified concept satisfying a certain degree of generality and unusualness. Depending on the aspect of the data we focus on, we may combine multiple relevant features into a synthetic target feature and feed it to the rule learning algorithms. After a set of rules is derived, post-processing steps remove uninteresting or redundant rules to make the rule set more compact and easier to understand. We conducted experiments mining our traffic accident data in both unsupervised and supervised modes to compare these rule-based learning algorithms. The experiments reveal that association rule learning, in its purely unsupervised mode, can discover hidden relationships among the features. Under a supervised setting with a combinatorial target feature, however, subgroup discovery finds good rules much more easily than association rule learning, which requires considerable effort to tune its parameters.
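
To make the unsupervised side of the comparison concrete, the sketch below derives association rules from a few one-hot encoded toy accident records. It uses a brute-force frequent-itemset enumeration standing in for apriori, and the column names, thresholds, and synthetic target are illustrative assumptions, not the paper's dataset or configuration.

```python
from itertools import combinations
import pandas as pd

# Toy one-hot encoded accident records; columns are invented for the example.
records = pd.DataFrame(
    [
        # night, rain, speeding, severe_injury
        [True,  True,  True,  True],
        [True,  False, True,  True],
        [False, True,  False, False],
        [True,  True,  False, True],
        [False, False, False, False],
        [True,  True,  True,  True],
    ],
    columns=["night", "rain", "speeding", "severe_injury"],
)

def frequent_itemsets(df: pd.DataFrame, min_support: float) -> dict[frozenset, float]:
    """Brute-force enumeration of item sets whose support clears the threshold."""
    result = {}
    for size in range(1, len(df.columns) + 1):
        for combo in combinations(df.columns, size):
            support = df[list(combo)].all(axis=1).mean()
            if support >= min_support:
                result[frozenset(combo)] = support
    return result

def rules_for(target: str, itemsets: dict[frozenset, float], min_conf: float):
    """Translate item sets containing the target into if-then rules."""
    for itemset, support in itemsets.items():
        if target in itemset and len(itemset) > 1:
            antecedent = itemset - {target}
            confidence = support / itemsets[antecedent]   # subsets always present
            if confidence >= min_conf:
                yield sorted(antecedent), target, support, confidence

itemsets = frequent_itemsets(records, min_support=0.3)
for antecedent, target, support, confidence in rules_for("severe_injury", itemsets, 0.8):
    print(f"IF {antecedent} THEN {target}  (support={support:.2f}, confidence={confidence:.2f})")
```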

A Study on the Differentiation of a City image with City Identity (CI(City Identity)에 의한 도시이미지 차별화를 위한 연구)

  • 이충훈
    • Archives of Design Research / v.15 no.4 / pp.57-66 / 2002
  • With advancing localization following the establishment of local autonomous governments, every city faces the reality that it must change environment and image designs that were made uniformly, without consideration of its own characteristics; accordingly, cities have failed to effectively achieve the development goals that would make them distinctive. The identity of a city is an image rather than an attribute, and it can be drawn out only when the city has its own character as a municipality as well as superiority over others. For CI to function effectively as a comprehensive medium of communication for a city, we should take into consideration all the circumstances that surround the city, with emphasis on a culture- and environment-oriented image. To do so, we first have to analyze in detail the current situation and characteristics of the city. Hence, this paper proposes strategies for creating a CI that expresses the unique identity of the city and communicates it, by applying the CI program that has been used as a tool of business management. The creation of a city's CI takes the following steps. First, find the potential for the city's image through a survey of its resources. Second, motivate citizens to actively participate in planning, with a clear vision for improving the city image, physical development, and so on. Third, provide events and projects for the city's specialized goods to strengthen its ability to deliver information and to design the city image and street environment. Fourth, apply a communication design system to make active use of the administrative organization, enhance citizenship, and differentiate the city image. A variety of efforts should follow to integrate and promote the regional culture and to develop the city's structure and facility functions by connecting these factors effectively. Establishing a city identity requires diverse activities that shape the city's environment and create an agreeable residential environment for a better life by differentiating the characteristics the city has.

A Study on the construction of physical security system by using security design (보안디자인을 활용한 시설보안시스템 구축 방안)

  • Choi, Sun-Tae
    • Korean Security Journal / no.27 / pp.129-159 / 2011
  • Physical security has always been an extremely important facet of the security arena. A comprehensive security plan consists of three components: physical security, personnel security, and information security. These elements are interrelated and may exist in varying degrees depending on the type of enterprise or facility being protected. The physical security component of a comprehensive security program is usually composed of policies and procedures, personnel, barriers, equipment, and records. Human beings have kept up a restless struggle to preserve their own and their tribes' lives. Humans in prehistoric ages had not yet learned how to build strong houses or fortify their residences, so they relied on nature for protection, using caves as shelter and refuge in cold weather. Throughout history, humans have established various methods to protect their lives and assets and those of their tribes, and physical security methods form the basis of these. The caves where primitive people lived were surrounded by rock walls except at the entrance, so safety, especially protection from all directions, was assured for the tribe. The Great Wall of China, considered the longest structure in history, was built over more than a hundred years beginning around 400 B.C. to prevent invasion by northern tribes, but its protective function held only against small incursions, and the Mongol army captured most of China across this wall by about 1200 A.D. European lords in the Middle Ages dug moats around their castles or reinforced them with bascule bridges, offering this protection to residents in return for the agricultural products they cultivated. In the nineteenth century, Edwin Holmes in the USA began to provide an innovative electric alarm service that spurred the development of the American security industry. This was the first of today's electronic security systems, and with further development, security systems that combine various electronic security technologies for a facility now account for most of today's security market. As described above, humankind has established various protection methods since the beginning, and their development continues. Today, CCTV is installed in most facilities across the country to cope with various social-pathological phenomena and to protect life and assets, so people's daily lives are both protected and observed. Most of these physical security systems are installed to guarantee our safety, but we also bear all the expense for them. Therefore, establishing an effective physical security system is a very important and urgent problem. This study suggests methods of establishing effective physical security systems, whose demand is increasing rapidly with the needs of modern society, by using system integration based on the principles of security design.

A Design and Implementation of Multimedia Retrieval System based on MAF(Multimedia Application File Format) (MAF(Multimedia Application File Format) 기반 멀티미디어 검색 시스템의 설계 및 구현)

  • Gang Young-Mo;Park Joo-Hyoun;Bang Hyung-Gin;Nang Jong-Ho;Kim Hyung-Chul
    • Journal of KIISE:Computer Systems and Theory / v.33 no.9 / pp.574-584 / 2006
  • Recently, ISO/IEC 23000 (also known as 'MPEG-A') has proposed a new file format called MAF (Multimedia Application File Format) [1], which provides a way to integrate and store widely used audio and video compression standards together with metadata in MPEG-7 form in a single file. However, it is still very hard to verify the usefulness of MPEG-A in real applications because no system yet fully implements the standard. This paper presents the design and implementation of a multimedia retrieval system based on the MPEG-A standard for PCs and mobile devices. Furthermore, an extension of MPEG-A for describing video metadata is proposed; it is defined as a subset of MPEG-7 MDS [4] and TV-Anytime [5] that is useful and manageable in mobile environments. To design the multimedia retrieval system based on MPEG-A, we define system requirements in terms of portability, extensibility, compatibility, adaptability, and efficiency. Based on these requirements, we design a system composed of three layers: an application layer, a middleware layer, and a platform layer. The proposed system consists of two sub-parts, a client part and a server part. The client part consists of an MAF authoring tool, an MAF player tool, and an MAF searching tool, which allow users to create, play, and search MAF files, respectively. The server part is composed of modules that store and manage MAF files and the metadata extracted from them. We show the usefulness of the proposed system by implementing the client both on the MS-Windows platform on a desktop computer and on the WIPI platform on a mobile phone, and we validate that it satisfies all the system requirements. The proposed system can be used to verify the MPEG-A specification and demonstrates the usefulness of MPEG-A in real applications.
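
The single-file packaging idea behind MAF can be pictured with a deliberately simplified stand-in. Real MAF is based on the ISO Base Media File Format, not a ZIP archive, so the Python sketch below only illustrates the authoring/search split described in the abstract; all names are invented for the example.

```python
import zipfile
import xml.etree.ElementTree as ET

def author_container(path: str, media: bytes, title: str, keywords: list[str]) -> None:
    """Pack a media payload and minimal descriptive metadata into one file."""
    meta = ET.Element("Description")
    ET.SubElement(meta, "Title").text = title
    for kw in keywords:
        ET.SubElement(meta, "Keyword").text = kw
    with zipfile.ZipFile(path, "w") as zf:
        zf.writestr("media.bin", media)
        zf.writestr("metadata.xml", ET.tostring(meta, encoding="unicode"))

def search_container(path: str, query: str) -> bool:
    """Return True if the query matches the container's title or keywords."""
    with zipfile.ZipFile(path) as zf:
        meta = ET.fromstring(zf.read("metadata.xml"))
    terms = [meta.findtext("Title", default="")] + [k.text or "" for k in meta.findall("Keyword")]
    return any(query.lower() in t.lower() for t in terms)

author_container("clip.container.zip", b"\x00\x01fake-video-bytes", "City bus arrival demo", ["bus", "ITS"])
print(search_container("clip.container.zip", "bus"))   # True
```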

Customer Behavior Prediction of Binary Classification Model Using Unstructured Information and Convolution Neural Network: The Case of Online Storefront (비정형 정보와 CNN 기법을 활용한 이진 분류 모델의 고객 행태 예측: 전자상거래 사례를 중심으로)

  • Kim, Seungsoo;Kim, Jongwoo
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.221-241 / 2018
  • Deep learning has been getting attention recently. The deep learning technique applied in competitions such as the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) and in AlphaGo is the Convolutional Neural Network (CNN). A CNN is characterized by dividing the input image into small sections, recognizing partial features, and combining them to recognize the whole. Deep learning technologies are expected to bring many changes to our lives, but so far their applications have been limited mostly to image recognition and natural language processing. The use of deep learning techniques for business problems is still at an early research stage; if their performance is proven, they can be applied to traditional business problems such as marketing response prediction, fraudulent transaction detection, bankruptcy prediction, and so on. It is therefore a meaningful experiment to gauge the possibility of solving business problems with deep learning, based on the case of online shopping companies, which hold big data, can identify customer behavior relatively easily, and have high utilization value. In particular, the competitive environment of online shopping companies is changing rapidly and becoming more intense, so analyzing customer behavior to maximize profit is becoming ever more important for them. In this study, we propose a 'CNN model of heterogeneous information integration' as a way to improve the prediction of customer behavior in online shopping enterprises. The model learns from both structured and unstructured information by combining a convolutional neural network with a multi-layer perceptron; to optimize its performance, we design and evaluate three architectural components, 'heterogeneous information integration', 'unstructured information vector conversion', and 'multi-layer perceptron design', and confirm the proposed model based on the results. The target variables for predicting customer behavior are defined as six binary classification problems: re-purchaser, churner, frequent shopper, frequent refund shopper, high-amount shopper, and high-discount shopper. To verify the usefulness of the proposed model, we conducted experiments using actual data from a specific Korean online shopping company, comprising its transactions, customer profiles, and VOC (voice of customer) data. The extraction criteria cover 47,947 customers who registered at least one VOC in January 2011 (one month); for these customers we use their profiles, a total of 19 months of transaction data from September 2010 to March 2012, and the VOCs posted during that month. The experiment proceeds in two stages: in the first stage we evaluate the three architectural components that affect the performance of the proposed model and select optimal parameters, and in the second we evaluate the performance of the proposed model with those parameters. Experimental results show that the proposed model, which combines structured and unstructured information, is superior to NBC (Naïve Bayes classification), SVM (support vector machine), and ANN (artificial neural network) baselines. It is therefore significant that using unstructured information contributes to predicting customer behavior and that CNNs can be applied to business problems as well as to image recognition and natural language processing. The experiments also confirm that the CNN is effective at understanding and interpreting contextual meaning in the textual VOC data, and this empirical study on real e-commerce data shows that very meaningful information can be extracted from VOC text written directly by customers when predicting their behavior. Finally, the various experiments provide useful information for future research on parameter selection and model performance.
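
A minimal PyTorch sketch of the kind of heterogeneous model the abstract describes: a text-CNN branch over (already tokenized) VOC text concatenated with structured customer features, feeding an MLP head for one binary target such as churn. Layer sizes, vocabulary size, and preprocessing are assumptions, not the paper's actual architecture or hyperparameters.

```python
import torch
import torch.nn as nn

class HeterogeneousCNN(nn.Module):
    def __init__(self, vocab_size=5000, embed_dim=64, n_structured=20):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # 1-D convolutions over the token sequence, as in a text CNN.
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, 32, kernel_size=k) for k in (3, 4, 5)]
        )
        self.head = nn.Sequential(
            nn.Linear(32 * 3 + n_structured, 64),
            nn.ReLU(),
            nn.Linear(64, 1),          # one logit for a binary target
        )

    def forward(self, token_ids, structured):
        x = self.embed(token_ids).transpose(1, 2)    # (batch, embed_dim, seq_len)
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        text_vec = torch.cat(pooled, dim=1)          # unstructured-information vector
        combined = torch.cat([text_vec, structured], dim=1)
        return self.head(combined)

# Toy forward pass: 8 customers, 50-token VOC snippets, 20 structured features.
model = HeterogeneousCNN()
logits = model(torch.randint(0, 5000, (8, 50)), torch.randn(8, 20))
loss = nn.BCEWithLogitsLoss()(logits.squeeze(1), torch.randint(0, 2, (8,)).float())
print(logits.shape, loss.item())
```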

Development of Neural Network Based Cycle Length Design Model Minimizing Delay for Traffic Responsive Control (실시간 신호제어를 위한 신경망 적용 지체최소화 주기길이 설계모형 개발)

  • Lee, Jung-Youn;Kim, Jin-Tae;Chang, Myung-Soon
    • Journal of Korean Society of Transportation / v.22 no.3 s.74 / pp.145-157 / 2004
  • The cycle length design model of the Korean traffic responsive signal control system is devised to vary the cycle length in real time in response to changes in traffic demand, using parameters specified by a system operator and field information such as the degrees of saturation of through phases. Since no explicit guideline is provided to the operator, the system tends to be ambiguous in terms of system optimization. In addition, the cycle lengths produced by the existing model have not yet been verified as comparable to those that minimize delay. This paper presents studies conducted (1) to find shortcomings embedded in the existing model by comparing its cycle lengths against delay-minimizing ones and (2) to propose a new way to design cycle lengths that minimize delay without such operator-oriented parameters. The study found that the cycle lengths from the existing model fail to minimize delay and lead to unsatisfactory intersection operating conditions when traffic volume is low, owing to the changed target volume-to-capacity ratio embedded in the model. Sixty-four different neural-network-based cycle length design models were developed from simulation data serving as a surrogate for field data. The CORSIM-optimal, delay-minimizing cycle lengths were found with the COST software developed for this study; COST searches for them with a heuristic method, a hybrid genetic algorithm. Among the 64 models, the one producing cycle lengths closest to the optimum was selected through statistical tests. The verification test showed that the best model designs cycle lengths in a pattern similar to the delay-minimizing ones, and the cycle lengths from the proposed model are comparable to those from TRANSYT-7F.
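
As a rough illustration of the modeling idea, the Python sketch below fits a tiny feed-forward network that maps degrees of saturation of through phases to a cycle length. The synthetic training rule stands in for the CORSIM/COST-derived optima; it is not the paper's data, network structure, or the COST software.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0.2, 1.0, size=(500, 2))             # saturation of two through phases

# Stand-in target: longer cycles as the critical saturation grows (Webster-like shape).
crit = X.max(axis=1)
y = np.clip(60 + 120 * (crit - 0.4), 60, 180)         # cycle length in seconds

model = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs", max_iter=5000, random_state=0)
model.fit(X, y)

print(model.predict([[0.45, 0.50], [0.85, 0.90]]))    # short vs. long cycle lengths
```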

GWB: An integrated software system for Managing and Analyzing Genomic Sequences (GWB: 유전자 서열 데이터의 관리와 분석을 위한 통합 소프트웨어 시스템)

  • Kim In-Cheol;Jin Hoon
    • Journal of Internet Computing and Services / v.5 no.5 / pp.1-15 / 2004
  • In this paper, we explain the design and implementation of GWB (Gene WorkBench), a web-based, integrated system for efficiently managing and analyzing genomic sequences. Most existing software systems that handle genomic sequences rarely provide both management and analysis facilities, and the analysis programs tend to be unit programs that cover only one or a few of the required functions. Moreover, these programs are widely distributed over the Internet and require different execution environments; because much manual and conversion work is needed to use them together, many life-science researchers suffer great inconvenience. To overcome the problems of existing systems and support genomic research more effectively, this paper integrates both management and analysis facilities into a single system called GWB. The most important design issues for GWB are how to integrate many different analysis programs into a single software system and how to provide the data and databases of different formats required to run them. To address these issues, GWB integrates different analysis programs by means of common input/output interfaces called wrappers, suggests a common format for genomic sequence data, organizes local databases consisting of a relational database and an indexed sequential file, and provides facilities for converting data among several well-known formats and for exporting local databases to XML files.
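
The wrapper idea, a common input/output interface plus a common sequence format, can be sketched as follows. Class and method names are invented for illustration; the actual GWB wrappers, common format, and local database layout are not reproduced here.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class SeqRecord:                     # the "common format" for genomic sequences
    identifier: str
    sequence: str

def from_fasta(text: str) -> list[SeqRecord]:
    """Convert FASTA text into the common record format."""
    records, ident, chunks = [], None, []
    for line in text.strip().splitlines():
        if line.startswith(">"):
            if ident is not None:
                records.append(SeqRecord(ident, "".join(chunks)))
            ident, chunks = line[1:].split()[0], []
        else:
            chunks.append(line.strip())
    if ident is not None:
        records.append(SeqRecord(ident, "".join(chunks)))
    return records

class AnalysisWrapper(ABC):
    """Common interface that every integrated analysis program is wrapped in."""
    @abstractmethod
    def run(self, records: list[SeqRecord]) -> dict: ...

class GCContentWrapper(AnalysisWrapper):
    """Trivial stand-in for an external analysis tool behind a wrapper."""
    def run(self, records):
        return {r.identifier: (r.sequence.count("G") + r.sequence.count("C")) / len(r.sequence)
                for r in records}

fasta = ">seq1 demo\nATGCGC\n>seq2 demo\nATATAT\n"
print(GCContentWrapper().run(from_fasta(fasta)))   # {'seq1': 0.666..., 'seq2': 0.0}
```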

A Study on Ontology and Topic Modeling-based Multi-dimensional Knowledge Map Services (온톨로지와 토픽모델링 기반 다차원 연계 지식맵 서비스 연구)

  • Jeong, Hanjo
    • Journal of Intelligence and Information Systems / v.21 no.4 / pp.79-92 / 2015
  • Knowledge maps are widely used to represent knowledge in many domains. This paper presents a method for integrating national R&D data and helping users navigate the integrated data through a knowledge map service built with a lightweight ontology and a topic modeling method. The national R&D data are integrated with the research project at the center: the other R&D data, such as research papers, patents, and reports, are connected to the research project as its outputs. The lightweight ontology represents simple relationships among the integrated data, such as project-output, document-author, and document-topic relationships, from which the knowledge map can infer further relationships such as co-author and co-topic relationships. To extract the relationships among the integrated data, a relational-data-to-triples transformer is implemented, and a topic modeling approach is introduced to extract the document-topic relationships. A triple store is used to manage and process the ontology data while preserving the network characteristics of the knowledge map service. Knowledge maps can be divided into two types: one is used in knowledge management to store, manage, and process an organization's data as knowledge, while the other analyzes and represents knowledge extracted from science & technology documents; this research focuses on the latter. We introduce a knowledge map service that integrates national R&D data obtained from the National Digital Science Library (NDSL) and the National Science & Technology Information Service (NTIS), the two major repositories and services of national R&D data in Korea. A lightweight ontology is used to design and build the knowledge map; it lets us represent and process knowledge as a simple network, which fits the navigation and visualization characteristics of the knowledge map. The ontology represents the entities and their relationships in the knowledge maps, and an ontology repository is created to store and process it. In the ontologies, researchers are implicitly connected through the national R&D data by author and performer relationships; a knowledge map displaying the researchers' network is created from the co-authoring relationships of national R&D documents and the co-participation relationships of national R&D projects. In summary, a knowledge-map service system based on topic modeling and ontology is introduced for processing knowledge about national R&D data such as research projects, papers, patents, project reports, and Global Trends Briefing (GTB) data. The system aims (1) to integrate the national R&D data obtained from NDSL and NTIS, (2) to provide semantic and topic-based search over the integrated data, and (3) to provide knowledge map services based on semantic analysis and knowledge processing. The S&T information such as research papers, research reports, patents, and GTB is updated daily from NDSL, the R&D project information including participants and outputs is updated from NTIS, and both are integrated into the integrated database. The knowledge base is constructed by transforming the relational data into triples that reference the R&D ontology. In addition, a topic modeling method is employed to extract relationships between S&T documents and the topic keywords representing them; this lets us extract relationships and topic keywords based on semantics rather than simple keyword matching. Lastly, we report an experiment on constructing the integrated knowledge base with the lightweight ontology and topic modeling, and introduce the knowledge map services created on top of it.
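
A hedged sketch of the two ingredients described above: relational facts recast as RDF triples over a lightweight ontology (rdflib), and LDA topic modeling (gensim) that attaches documents to topic keywords. The namespace, predicates, and toy texts are invented for the example; the NDSL/NTIS data and the actual R&D ontology are not used here.

```python
from rdflib import Graph, Namespace
from gensim import corpora, models

EX = Namespace("http://example.org/rnd/")
g = Graph()

# Project-output and document-author relationships, expressed as simple triples.
g.add((EX.project1, EX.hasOutput, EX.paper1))
g.add((EX.paper1, EX.hasAuthor, EX.researcher_kim))
g.add((EX.paper2, EX.hasAuthor, EX.researcher_kim))

# Knowledge-map style navigation: everything reachable from project1 as output.
for _, _, output in g.triples((EX.project1, EX.hasOutput, None)):
    print("output of project1:", output)

# Document-topic relationships via LDA on tokenized abstracts (toy texts).
texts = [["knowledge", "map", "ontology", "triple"],
         ["topic", "model", "lda", "keyword"],
         ["ontology", "triple", "store", "sparql"]]
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]
lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10, random_state=0)

# Attach each document to its dominant topic, mirroring the document-topic edges.
for doc_id, bow in enumerate(corpus):
    topic, weight = max(lda.get_document_topics(bow), key=lambda tw: tw[1])
    g.add((EX[f"paper{doc_id + 1}"], EX.hasTopic, EX[f"topic{topic}"]))
    print(f"paper{doc_id + 1} -> topic{topic} ({weight:.2f})")
```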