• Title/Summary/Keyword: XML Management System


Data flow for MOS-EMS system interoperation (MOS-EMS 연계 데이터 흐름)

  • Lee, K.J.;Park, M.C.;Lee, K.W.;Kim, S.H.
    • Proceedings of the KIEE Conference
    • /
    • 2006.07d
    • /
    • pp.2134-2135
    • /
    • 2006
  • The Korea Power Exchange (KPX) is preparing to improve operational efficiency under the prolonged Cost-Based Pool (CBP) generation competition market and to upgrade its dispatch system by utilizing the previously developed Market Operation System (MOS). Under the current dispatch scheme, an hourly operational generation schedule is established one day ahead based on day-ahead demand forecasting, and the Energy Management System (EMS) performs Economic Dispatch (ED) for the generators; however, the current EMS was introduced before the market regime, does not account for the market environment, and has difficulty reflecting ancillary services for system operation in real time. For real-time dispatch operation, KPX plans to interconnect the MOS with the existing EMS so that, based on the MOS's 5-minute demand forecasts, per-generator economic dispatch and reserve allocations are determined considering transmission constraints and reserve requirements; additionally, the EMS computes demand-forecast errors and frequency corrections in real time and distributes them to the generators. This improves the dispatch system so that the dispatch schedule, previously prepared one day ahead, can be computed every 5 minutes in real time from acquired data. By simultaneously optimizing energy and reserves in real time, this provides an opportunity to advance electricity market and power system operation, and by maximizing the use of low-cost generators it is also expected to reduce generation costs. For MOS-EMS data interconnection, the ICCP (Inter-Control Center Communications Protocol) and FTP were used, and several trial operations showed that the accuracy of the database and of field-acquired data is a critical factor for the interconnection between the two systems and for stable power system operation. In the long term, KPX plans to build a standard power system database based on the CIM (Common Information Model) and to use XML for data exchange between systems, thereby improving interoperability and the reliability of data interconnection. This paper mainly describes the flow and processing of data between the systems under the MOS-EMS interconnection.
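The long-term plan sketched in this abstract is to exchange power-system data between systems as XML over a CIM-based database. The sketch below illustrates what such an XML round trip could look like for one 5-minute dispatch interval; the element names (`DispatchRecord`, `Generator`, `EnergyMW`, `ReserveMW`) are hypothetical and not KPX's actual CIM schema.

```python
import xml.etree.ElementTree as ET

def dispatch_to_xml(interval_start, allocations):
    """Serialize one 5-minute dispatch interval to an XML document.

    `allocations` maps generator id -> (energy_mw, reserve_mw).
    Element names are illustrative, not a normative CIM profile.
    """
    root = ET.Element("DispatchRecord", interval=interval_start)
    for gen_id, (energy_mw, reserve_mw) in sorted(allocations.items()):
        gen = ET.SubElement(root, "Generator", id=gen_id)
        ET.SubElement(gen, "EnergyMW").text = str(energy_mw)
        ET.SubElement(gen, "ReserveMW").text = str(reserve_mw)
    return ET.tostring(root, encoding="unicode")

def xml_to_dispatch(xml_text):
    """Parse the document back into the allocation mapping."""
    root = ET.fromstring(xml_text)
    return {g.get("id"): (float(g.findtext("EnergyMW")),
                          float(g.findtext("ReserveMW")))
            for g in root.findall("Generator")}

doc = dispatch_to_xml("2006-07-01T10:05", {"G1": (350.0, 20.0), "G2": (120.5, 5.0)})
round_trip = xml_to_dispatch(doc)
```

A schema-first exchange like this is what gives the two systems a shared, validated vocabulary instead of ad-hoc file formats.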


A Study on the Manifestation Consolidation System (MFCS) of e-Trade (전자무역의 적하목록취합시스템에 관한 연구)

  • Jeong, Boon-Do;Jang, Ki-Young
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.1
    • /
    • pp.10-16
    • /
    • 2008
  • The Manifestation Consolidation System (MFCS) has made a great contribution to the efficiency of the distribution industry. To save distribution expenses and the time needed to file reports, a more simplified system is being used in which overlapping or unnecessarily transmitted items in goods handling are reduced. To become a hub of Asian distribution information, Korea has to prepare the base for the MFCS and use it effectively. Therefore, for effective management of the MFCS, this study examines the goods-handling procedure in the MFCS, presents a future model of the MFCS, and analyzes the classification of export and import tasks and the titles of electronic documents. In conclusion, this study aims at presenting an interpretative base for the MFCS from a practical viewpoint rather than presenting its technical direction.

Automatic Summary Method of Linguistic Educational Video Using Multiple Visual Features (다중 비주얼 특징을 이용한 어학 교육 비디오의 자동 요약 방법)

  • Han Hee-Jun;Kim Cheon-Seog;Choo Jin-Ho;Ro Yong-Man
    • Journal of Korea Multimedia Society
    • /
    • v.7 no.10
    • /
    • pp.1452-1463
    • /
    • 2004
  • The requirement for automatic video summarization is increasing as bi-directional broadcasting contents, and the variety of user requests and preferences in the bi-directional broadcast environment, are increasing. Automatic video summarization is also needed by service providers for the efficient management and usage of many contents. In this paper, we propose a method to automatically generate a content-based summary of linguistic educational videos. First, shot boundaries and keyframes are generated from the linguistic educational video, and then multiple (low-level) visual features are extracted. Next, the semantic parts (Explanation part, Dialog part, Text-based part) of the linguistic educational video are identified using the extracted visual features. Lastly, an XML document describing the summary information is produced based on the Hierarchical Summary architecture of the MPEG-7 MDS (Multimedia Description Scheme). Experimental results show that our proposed algorithm provides reasonable performance for the automatic summarization of linguistic educational videos. We verified that the proposed method is useful for a video summary system providing various services as well as for the management of educational contents.
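The last step the abstract describes, emitting an XML summary document once the semantic parts are detected, can be sketched as below. The segment list is toy data, and the element names only loosely follow the MPEG-7 Hierarchical Summary idea; they are not the normative MDS schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical segment list (label, start_sec, end_sec) standing in for the
# output of the shot-boundary / visual-feature stage described in the abstract.
segments = [
    ("Explanation", 0.0, 95.2),
    ("Dialog", 95.2, 240.8),
    ("Text-based", 240.8, 310.0),
]

def build_summary(segments):
    """Emit a simplified hierarchical-summary XML document.

    Element names are inspired by MPEG-7 HierarchicalSummary but are
    NOT the normative MDS schema.
    """
    root = ET.Element("HierarchicalSummary")
    for label, start, end in segments:
        seg = ET.SubElement(root, "SummarySegment", type=label)
        ET.SubElement(seg, "MediaTimePoint").text = f"{start:.1f}"
        ET.SubElement(seg, "MediaDuration").text = f"{end - start:.1f}"
    return ET.tostring(root, encoding="unicode")

summary_xml = build_summary(segments)
```

Describing the summary in XML rather than in a player-specific format is what lets any MPEG-7-aware client render or skip to the semantic parts.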


Rule Acquisition Using Ontology Based on Graph Search (그래프 탐색을 이용한 웹으로부터의 온톨로지 기반 규칙습득)

  • Park, Sangun;Lee, Jae Kyu;Kang, Juyoung
    • Journal of Intelligence and Information Systems
    • /
    • v.12 no.3
    • /
    • pp.95-110
    • /
    • 2006
  • To enhance the rule-based reasoning capability of the Semantic Web, the XRML (eXtensible Rule Markup Language) approach embraces the meta-information necessary for the extraction of explicit rules from Web pages and for their maintenance. To effectuate the automatic identification of rules from unstructured texts, this research develops a framework for using a rule ontology. The ontology can first be acquired from a similar site and then used for multiple sites in the same domain. The procedure of ontology-based rule identification is regarded as a graph search problem with incomplete nodes, and an A* algorithm is devised to solve it. The procedure is demonstrated in the domain of a shipping-rate and return-policy comparison portal, which needs rule-based reasoning capability to answer customers' inquiries. An example ontology is created from Amazon.com and applied to many online retailers in the same domain. The experimental results show the high performance of this approach.
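The A* formulation mentioned above can be illustrated with a generic A* search over a small weighted graph. The graph and heuristic here are hypothetical stand-ins (edge weights playing the role of matching costs between ontology concepts and page elements), not the paper's actual search space.

```python
import heapq

def a_star(graph, h, start, goal):
    """Generic A* search: `graph[u]` is a list of (v, cost) edges and
    `h[u]` is an admissible estimate of the remaining cost to `goal`."""
    frontier = [(h[start], 0, start, [start])]
    best = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nxt, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best.get(nxt, float("inf")):
                best[nxt] = g2
                heapq.heappush(frontier, (g2 + h[nxt], g2, nxt, path + [nxt]))
    return None

# Hypothetical concept graph: weights stand in for the cost of matching
# an ontology concept against elements of an unstructured page.
graph = {"root": [("rates", 2), ("policy", 5)],
         "rates": [("rule", 3)],
         "policy": [("rule", 1)]}
h = {"root": 3, "rates": 2, "policy": 1, "rule": 0}
cost, path = a_star(graph, h, "root", "rule")
```

The heuristic lets the search expand only the promising branches, which matters when the "incomplete nodes" of the paper make exhaustive matching expensive.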


Design and Implementation of Tool Server and License Server for Protecting Digital Contents (디지털 콘텐츠의 저작권 관리를 위한 라이센스 서버와 툴 서버 설계 및 구현)

  • Hong Hyen-Woo;Ryu Kwang-Hee;Kim Kwang-Yong;Kim Jae-Gon;Jung Hoe-Kyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2006.05a
    • /
    • pp.573-576
    • /
    • 2006
  • Standardization of copyright protection for digital content has not yet been completed, and content providers have been developing their own copyright protection techniques. This has caused confusion among copyright protection and management systems, because each company uses a different technique when developing digital content. Standardization work led by MPEG, called the MPEG-21 Multimedia Framework, is now under way; REL (Rights Expression Language) is part of the Intellectual Property Management and Protection (IPMP) component of the framework, and its standardization has been completed. An IPMP system comprises a license server, a tool server, a metadata server, and a consumption server. In this paper, to manage and protect digital content copyright, we apply REL, one component of the MPEG-21 framework, to design and implement a license server that manages settlement and consumption information, and a tool server that manages and delivers the tools used from digital content creation to digital content consumption.
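The license server's core job, issuing a rights expression and letting the consumption side validate it, can be sketched as follows. This is a minimal illustration with a simplified REL-like XML license and an HMAC signature; the element names and signing scheme are assumptions, not the normative MPEG-21 REL grant format.

```python
import hmac, hashlib
import xml.etree.ElementTree as ET

SECRET = b"license-server-demo-key"  # stand-in for the server's signing key

def issue_license(user, content_id, plays):
    """Issue a simplified REL-style license and sign it.

    Element names are illustrative, not the normative MPEG-21 REL schema.
    """
    lic = ET.Element("License")
    ET.SubElement(lic, "Principal").text = user
    ET.SubElement(lic, "Resource").text = content_id
    ET.SubElement(lic, "PlayCount").text = str(plays)
    body = ET.tostring(lic, encoding="utf-8")
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body, sig

def verify_license(body, sig):
    """The consumption side checks the signature before honoring the grant."""
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

body, sig = issue_license("alice", "content-42", 3)
ok = verify_license(body, sig)
tampered = verify_license(body.replace(b"alice", b"mallet"), sig)
```

Signing the serialized grant is what prevents a consumer from editing the principal or play count after the license server has issued it.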


An XPDL-Based Workflow Control-Structure and Data-Sequence Analyzer

  • Kim, Kwanghoon Pio
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.3
    • /
    • pp.1702-1721
    • /
    • 2019
  • A workflow process (or business process) management system helps to define, execute, monitor and manage workflow models deployed on a workflow-supported enterprise, and the system is, in general, compartmentalized into a modeling subsystem and an enacting subsystem. The modeling subsystem's functionality is to discover and analyze workflow models via a theoretical modeling methodology like ICN, to define them graphically via a representation notation like BPMN, and to deploy those graphically defined models onto the enacting subsystem by transforming them into textual models represented in a standardized workflow process definition language like XPDL. Before deploying the defined workflow models, it is very important to inspect their syntactical correctness as well as their structural properness, to minimize the loss of effectiveness and the depreciation of efficiency in managing the corresponding workflow models. In this paper, we are particularly interested in verifying very large-scale and massively parallel workflow models, and so we need a sophisticated analyzer to automatically analyze these specialized and complex styles of workflow models. The analyzer devised in this paper is able to analyze not only the structural complexity but also, especially, the data-sequence complexity. The structural complexity is based upon combinational usages of control-structure constructs such as subprocess, exclusive-OR, parallel-AND and iterative-LOOP primitives while preserving the matched-pairing and proper-nesting properties, whereas the data-sequence complexity is based upon combinational usages of the relevant data repositories, such as data-definition sequences and data-use sequences.
Through the analyzer devised and implemented in this paper, we are eventually able to achieve systematic verification of the syntactical correctness as well as effective validation of the structural properness of complicated and large-scale workflow models. As an experimental study, we apply the implemented analyzer to an exemplary large-scale and massively parallel workflow process model, the Large Bank Transaction Workflow Process Model, and show the structural complexity analysis results via a series of operational screens captured from the implemented analyzer.
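The matched-pairing and proper-nesting property the analyzer checks on XPDL control constructs is essentially a balanced-delimiter check, which can be illustrated with a small stack-based validator. The token names and the flat token-stream input are a simplification of real XPDL, chosen here for clarity.

```python
# Stack-based check of the matched-pairing / proper-nesting property the
# analyzer enforces on XPDL control-structure constructs.  Token names
# (and the flat token-stream input) are a simplification of real XPDL.
OPEN = {"AND-split": "AND-join", "XOR-split": "XOR-join", "LOOP-begin": "LOOP-end"}
CLOSE = set(OPEN.values())

def properly_nested(tokens):
    stack = []
    for tok in tokens:
        if tok in OPEN:
            stack.append(OPEN[tok])     # remember the join we now expect
        elif tok in CLOSE:
            if not stack or stack.pop() != tok:
                return False            # mismatched or crossed pair
    return not stack                    # every split must be joined

good = properly_nested(
    ["AND-split", "XOR-split", "XOR-join", "LOOP-begin", "LOOP-end", "AND-join"])
bad = properly_nested(["AND-split", "XOR-split", "AND-join", "XOR-join"])
```

The second example fails because the AND pair and the XOR pair cross rather than nest, exactly the structural defect the paper's analyzer is built to catch before deployment.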

Design and Implementation of ISO/IEEE 11073 DIM Transmission Structure Based on oneM2M for IoT Healthcare Service (사물인터넷 헬스케어 서비스를 위한 oneM2M기반 ISO/IEEE 11073 DIM 전송 구조 설계 및 구현)

  • Kim, Hyun Su;Chun, Seung Man;Chung, Yun Seok;Park, Jong Tae
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.4
    • /
    • pp.3-11
    • /
    • 2016
  • In the Internet of Things (IoT) environment, IoT devices are limited by physical components such as power supply and memory, and are also limited in network performance in terms of bandwidth, wireless channel, throughput, payload, etc. Despite these limitations, the resources of IoT devices are shared with other IoT devices. In particular, remote management of device and patient information is very important for IoT healthcare services; moreover, providing interoperability between the healthcare device and the healthcare platform is essential. To meet these requirements, the message format and the expressions for data information and data transmission need to comply with international standards suitable for the IoT environment. However, the ISO/IEEE 11073 PHD (Personal Healthcare Device) standards, the existing international standards for the transmission of health informatics, do not consider the IoT environment and are therefore difficult to apply to IoT healthcare services. To address this, we have designed and implemented an IoT healthcare system by applying oneM2M, a standard for the Internet of Things, together with the ISO/IEEE 11073 DIM (Domain Information Model), a standard for the transmission of health informatics. For the implementation, the OM2M platform, which is based on the oneM2M standards, has been used. To evaluate the efficiency of transfer syntaxes between the healthcare device and the OM2M platform, we conducted a comparative performance evaluation between HTTP and CoAP, and also between XML and JSON, by comparing the packet size and the number of packets in one transaction.
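The XML-versus-JSON comparison described above can be reproduced in miniature by encoding the same observation record in both syntaxes and comparing byte counts. The record and its field names are hypothetical, not the ISO/IEEE 11073 DIM attribute set; the point is only the encoding overhead.

```python
import json
import xml.etree.ElementTree as ET

# One hypothetical vital-sign observation, encoded both ways to compare
# the payload each transfer syntax puts on the wire.
obs = {"deviceId": "PHD-01", "type": "heartRate", "value": 72, "unit": "bpm"}

json_bytes = json.dumps(obs, separators=(",", ":")).encode("utf-8")

root = ET.Element("Observation")
for key, val in obs.items():
    ET.SubElement(root, key).text = str(val)
xml_bytes = ET.tostring(root, encoding="utf-8")

# XML repeats every field name in a closing tag (plus a declaration),
# so for small records like this the JSON encoding is the more compact.
sizes = {"json": len(json_bytes), "xml": len(xml_bytes)}
```

On constrained IoT links the same logic extends to the transport choice: CoAP's compact binary header plays the role for HTTP that JSON plays for XML here.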

Empirical Research on Search model of Web Service Repository (웹서비스 저장소의 검색기법에 관한 실증적 연구)

  • Hwang, You-Sub
    • Journal of Intelligence and Information Systems
    • /
    • v.16 no.4
    • /
    • pp.173-193
    • /
    • 2010
  • The World Wide Web is transitioning from being a mere collection of documents that contain useful information toward providing a collection of services that perform useful tasks. The emerging Web service technology has been envisioned as the next technological wave and is expected to play an important role in this recent transformation of the Web. By providing interoperable interface standards for application-to-application communication, Web services can be combined with component-based software development to promote application interaction and integration within and across enterprises. To make Web services for service-oriented computing operational, it is important that Web services repositories not only be well-structured but also provide efficient tools for an environment supporting reusable software components for both service providers and consumers. As the potential of Web services for service-oriented computing is becoming widely recognized, the demand for an integrated framework that facilitates service discovery and publishing is concomitantly growing. In our research, we propose a framework that facilitates Web service discovery and publishing by combining clustering techniques and leveraging the semantics of the XML-based service specification in WSDL files. We believe that this is one of the first attempts at applying unsupervised artificial neural network-based machine-learning techniques in the Web service domain. We have developed a Web service discovery tool based on the proposed approach using an unsupervised artificial neural network and empirically evaluated the proposed approach and tool using real Web service descriptions drawn from operational Web services repositories. We believe that both service providers and consumers in a service-oriented computing environment can benefit from our Web service discovery approach.
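The clustering of WSDL-derived term vectors can be illustrated with cosine similarity over bag-of-words vectors. This is a deliberately simplified stand-in for the paper's unsupervised artificial neural network (self-organizing) approach, and the service descriptions are invented examples.

```python
import math
from collections import Counter

# Hypothetical WSDL-derived term bags: a stand-in for the terms the paper
# extracts from service names, operations, and documentation elements.
services = {
    "WeatherService": "get weather forecast temperature city",
    "ClimateQuery": "weather temperature forecast region",
    "StockQuote": "get stock quote price ticker symbol",
}

def cosine(a, b):
    """Cosine similarity of two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

vecs = {name: Counter(text.split()) for name, text in services.items()}

def most_similar(name):
    """Return the other service whose term vector is closest."""
    return max((n for n in vecs if n != name),
               key=lambda n: cosine(vecs[name], vecs[n]))

match = most_similar("WeatherService")
```

Grouping services by vector similarity like this is what lets a repository surface candidate matches for discovery even when the WSDL files share no exact keyword with the query.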

Development of Intelligent Job Classification System based on Job Posting on Job Sites (구인구직사이트의 구인정보 기반 지능형 직무분류체계의 구축)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.123-139
    • /
    • 2019
  • The job classification systems of major job sites differ from site to site and from the job classification system of the SQF (Sectoral Qualifications Framework) proposed for the SW field. Therefore, a new job classification system that SW companies, SW job seekers, and job sites can all understand is needed. The purpose of this study is to establish a standard job classification system that reflects market demand, by analyzing the SQF based on the job-posting information of major job sites and the NCS (National Competency Standards). For this purpose, association analysis between the occupations of major job sites is conducted, and association rules between the SQF and those occupations are derived. Using these association rules, we propose an intelligent job classification system based on data that maps the job classification systems of major job sites to the SQF. First, major job sites are selected to obtain information on the job classification systems of the SW market. Then we identify ways to collect job information from each site and collect the data through open APIs. Focusing on the relationships in the data, we keep only the job postings published on multiple job sites at the same time and delete the rest. Next, we map the job classification systems between job sites using the association rules derived from the association analysis. We complete the mapping between these market classifications, discuss it with experts, further map it to the SQF, and finally propose a new job classification system. As a result, more than 30,000 job listings were collected in XML format using the open APIs of 'WORKNET', 'JOBKOREA', and 'saramin', the main job sites in Korea. After filtering down to about 900 job postings simultaneously posted on multiple job sites, 800 association rules were derived by applying the Apriori algorithm, a frequent-pattern mining method.
Based on the 800 derived rules, the job classification systems of WORKNET, JOBKOREA, and saramin and the SQF job classification system were mapped and organized into first through fourth classification levels. In the new job taxonomy, the first primary class (IT consulting, computer system, network, and security related jobs) consists of three secondary, five tertiary, and five quaternary classifications. The second primary class (database and system-operation related jobs) consists of three secondary, three tertiary, and four quaternary classifications. The third primary class (web planning, web programming, web design, and game) consists of four secondary, nine tertiary, and two quaternary classifications. The last primary class (jobs related to ICT management and computer and communication engineering technology) consists of three secondary and six tertiary classifications. In particular, the new job classification system has a relatively flexible depth of classification, unlike other existing systems: WORKNET divides jobs down to a third level, JOBKOREA divides jobs to a second level and subdivides them into keywords, and saramin likewise divides jobs to a second level with keyword-form subdivision. The newly proposed standard job classification system accepts some keyword-based jobs and treats some product names as jobs. In the proposed system, some jobs stop at the second level while others are subdivided down to the fourth level, reflecting the idea that not all jobs can be broken down into the same number of steps. We also combined the rules derived from the collected market data and association analysis with experts' opinions.
Therefore, the newly proposed job classification system can be regarded as a data-based intelligent job classification system that reflects market demand, unlike the existing systems. This study is meaningful in that it suggests a new job classification system reflecting market demand by mapping between occupations based on data through association analysis, rather than on the intuition of a few experts. However, this study has the limitation that it cannot fully reflect market demand as it changes over time, because the data were collected at a single point in time. As market demand changes over time, including seasonal factors and the timing of major corporate recruitment, continuous data monitoring and repeated experiments are needed to achieve more accurate matching. The results of this study can be used to suggest directions for improving the SQF in the SW industry, and the successful experience in the SW industry is expected to transfer to other industries.
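The Apriori step described in the abstract, deriving association rules from postings that appear on multiple sites, can be sketched with toy data. The posting sets and thresholds below are invented, and the implementation only mines frequent pairs (the simplest Apriori level), not arbitrary itemsets.

```python
from itertools import combinations
from collections import Counter

# Each "transaction" is the set of job-category labels attached to one
# posting that appeared on several sites (toy data, not the study's).
postings = [
    {"web-programming", "web-design"},
    {"web-programming", "web-design", "game"},
    {"database", "system-operation"},
    {"web-programming", "web-design"},
    {"database", "system-operation", "security"},
]

def association_rules(transactions, min_support=0.4, min_confidence=0.8):
    """Frequent pairs via one Apriori-style counting pass, then rules
    A -> B filtered by support and confidence thresholds."""
    n = len(transactions)
    item_count = Counter(i for t in transactions for i in t)
    pair_count = Counter(frozenset(p) for t in transactions
                         for p in combinations(sorted(t), 2))
    rules = []
    for pair, cnt in pair_count.items():
        if cnt / n < min_support:
            continue  # pair is not frequent enough to consider
        for a in pair:
            (b,) = pair - {a}
            if cnt / item_count[a] >= min_confidence:
                rules.append((a, b, cnt / n, cnt / item_count[a]))
    return rules

rules = association_rules(postings)
```

Rules like "web-programming implies web-design" are exactly the kind of cross-site evidence the study uses to decide that two site-specific categories should map to one node of the new taxonomy.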

Mobile Contents Transformation System Research for Personalization Service (개인화 서비스를 위한 모바일 콘텐츠 변환 시스템 연구)

  • Bae, Jong-Hwan;Cho, Young-Hee;Lee, Jung-Jae;Kim, Nam-Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.2
    • /
    • pp.119-128
    • /
    • 2011
  • Sensor technology and the capability of portable devices to collect user information and information about the surrounding environment have developed rapidly. With this development a user can make use of various contents, and the options keep expanding. In particular, the early portable device offered only a call function, but it has since evolved into 'the fourth screen', combining movie, television, and PC capabilities. Whereas in the past a portable device provided only SMS services, in recent years it has come to provide interactive video services and includes technology for delivering various contents. It is also rising as the medium leading content consumption, because it can be used anytime, anywhere. However, the contents available are limited by the nature of the user's handheld device, because it is very difficult to produce contents separately for every device specification. To solve this problem, research on serving one content to several devices has been progressing. Out of this research, content conversion technology that makes use of the device profile has come to the fore, and profile research has been progressing to support it. Furthermore, user demand is also increasing, and research on technology for collecting and analyzing those demands is making active progress, as is research on grasping users' demands with this technology and on analyzing and providing contents accordingly. First of all, there are methods making good use of ZigBee and Bluetooth technology for the sensors gathering user information. ZigBee uses low-power digital radio for wireless headphones and wireless communication networks, and is utilized for smart energy, home automation, wireless communication applications, and wireless sensor applications.
Bluetooth, an industry standard for PANs (Personal Area Networks), is a low-power wireless technology supporting data transmission, such as drawing files and video files, between Bluetooth devices. By analyzing the information collected with these technologies, the system utilizes a personalized service based on the network knowledge developed by ETRI to serve contents tailor-made for the user. The personalized service builds up network knowledge about the user's various environments, and on this basis provides context-friendly services constructed dynamically. Contents served dynamically in this way are converted using the device profile so that they work well on the device. Therefore, this paper proposes the following system. It collects information such as the user's sensitivity, context, and location using sensor technology, and generates a profile from the information collected by the sensors. It also collects the user's propensities from user input and events and generates a profile in the same way. The device transmits the generated profiles and a profile of the device specification to a proxy server, and the proxy server transmits each profile to the corresponding profile management server. The proxy server analyzes the profiles, selects the contents the user demands, and requests them from the contents server. The contents server receives the profile of the user's portable device from the device profile server and converts the contents using it: the original source code of the contents is converted into XML code using the device profile, and the XML code is converted into source code usable on the user's portable device. Thus the content conversion process terminates, and the user-friendly system is completed as optimal contents are transmitted to the user's portable device.
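The profile-driven conversion step at the heart of this system can be sketched as below: an XML description of the content is rewritten against the limits stated in the device profile. The profile fields (`max_width`, `max_height`, `video_codec`) and the content element names are hypothetical, not the paper's actual profile schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical device profile of the kind the proxy server forwards.
profile = {"max_width": 320, "max_height": 240, "video_codec": "h263"}

content_xml = """<content>
  <video codec="h264" width="1280" height="720"/>
  <image width="1024" height="768"/>
</content>"""

def adapt(xml_text, profile):
    """Scale media elements down to the device profile's limits and
    swap the video codec; a stand-in for XML-mediated content conversion."""
    root = ET.fromstring(xml_text)
    for elem in root:
        w, h = int(elem.get("width")), int(elem.get("height"))
        # Preserve aspect ratio: shrink by the tightest of the two limits.
        scale = min(1.0, profile["max_width"] / w, profile["max_height"] / h)
        elem.set("width", str(int(w * scale)))
        elem.set("height", str(int(h * scale)))
        if elem.tag == "video":
            elem.set("codec", profile["video_codec"])
    return ET.tostring(root, encoding="unicode")

adapted = adapt(content_xml, profile)
```

Because the transformation is driven entirely by the profile, the same content description can be adapted for any device whose profile the proxy server holds, which is the point of routing conversion through an XML intermediate form.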