• Title/Summary/Keyword: Software Frameworks

Search Results: 117

Implementation of the Metadata Registry-based Framework for Semantic Interoperability of Application in Ubiquitous Environment (유비쿼터스 환경에서 어플리케이션의 의미 상호운용성을 위한 메타데이터 레지스트리 기반의 프레임워크 구현)

  • Kim, Jeong-Dong;Jeong, Dong-Won;Kim, Jin-Hyung;Baik, Doo-Kwon
    • Journal of the Korea Society for Simulation / v.16 no.1 / pp.11-19 / 2007
  • In a ubiquitous environment, applications can gather and utilize many kinds of sensing information. Many issues, such as energy management, protocol standardization, independence from sensor fields, and security, must be resolved before complete ubiquitous computing is possible. In particular, field-independent information access is one of the most important issues for maximizing the usability of sensors across diverse sensor fields. However, existing frameworks are not suitable for the ubiquitous computing environment because of data heterogeneity between data elements in different sensor fields. Existing applications are tied to particular sensor fields, and the sensors in a field are in turn tied to particular applications; in other words, an application can only utilize information from a specific sensor field. To overcome this restriction, many hardware- and software-level issues must be resolved. In this paper, we present the design and implementation of a Metadata Registry-based framework (UbiMDR) for the ubiquitous environment. The framework provides semantic interoperability among ubiquitous applications and various sensor fields. In addition, we present a comparative evaluation of a conventional ubiquitous computing framework and the UbiMDR framework in terms of the data accuracy of interoperability.
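The paper itself gives no code, but the idea of resolving data heterogeneity through a metadata registry can be pictured with a minimal sketch. The registry, element names, and unit conversion below are hypothetical placeholders, not the authors' UbiMDR implementation; they only show how field-specific data elements might be resolved to shared standard elements so that any application can consume any sensor field's data.

```python
# Minimal sketch of metadata-registry-based mapping (hypothetical names,
# not the authors' UbiMDR implementation).

# A registry that maps sensor-field-specific element names onto
# standard data elements shared by all applications.
STANDARD_REGISTRY = {
    # local element name -> (standard element, unit)
    "tempC":         ("air_temperature", "celsius"),
    "temperature_f": ("air_temperature", "fahrenheit"),
    "hum":           ("relative_humidity", "percent"),
}

def to_standard(field_reading: dict) -> dict:
    """Translate one sensor-field reading into registry-standard elements."""
    standardized = {}
    for local_name, value in field_reading.items():
        if local_name not in STANDARD_REGISTRY:
            continue  # unknown local elements are skipped in this sketch
        std_name, unit = STANDARD_REGISTRY[local_name]
        if unit == "fahrenheit":            # normalize units as well
            value = (value - 32) * 5.0 / 9.0
        standardized[std_name] = value
    return standardized

# Two heterogeneous sensor fields now yield the same standard elements.
print(to_standard({"tempC": 21.5, "hum": 40}))
print(to_standard({"temperature_f": 70.7}))
```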


Interactive Navigational Structures

  • Czaplewski, Krzysztof;Wisniewski, Zbigniew
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / v.1 / pp.495-500 / 2006
  • Satellite systems for object positioning have proved indispensable for performing the basic tasks of maritime navigation, navigation being understood as the safe and effective conducting of a vehicle from one point to another within a specific physical-geographical environment [Kopacz, Urbański, 1998]. However, these systems have not solved the problem of access to reliable and highly accurate information about an object's position, especially when surveys are made toward on-shore navigational signs or at sea depth, which is of considerable significance for many navigational tasks carried out within the framework of special works and submarine navigation. In addition, precise positioning of objects other than vessels during hydrographic works is not always possible with a satellite system. Difficulties with GPS application also arise when positioning such off-lying dangers as wrecks, underwater and aquatic rocks, and other natural and artificial obstacles, because surveyors cannot approach such objects directly while positioning them. Moreover, mutual determination of vessel positions (mutual geometrical relations) by teams carrying out a common task at sea demands navigational techniques other than satellite ones. Vessels staying precisely on specified positions is of special importance in, among others, the following cases: surveying vessels carrying out bathymetric works or wire dragging, and special-task watercraft carrying out scientific research, sea-bottom exploration, etc. These problems are essential for the maritime economy and national defence readiness. Resolving them requires applying not only satellite navigation methods but also terrestrial ones. A condition for implementing geo-navigation methods is their further development, both in terms of techniques and technologies and in terms of survey data evaluation. At present, classical geo-navigation comprises procedures that meet out-of-date accuracy standards. To meet present-day requirements, the methods should draw on the well-recognised and still developing methods of contemporary geodesy. Moreover, in a time of computerised and automated calculation, it is feasible to create software that can be applied in integrated navigational systems, allowing navigation with combinatory systems as well as with new positioning methods. As regards data evaluation, the most advanced achievements in the subject should be applied, above all the newest, though theoretically well-recognised, estimation methods, including robust estimation [Hampel et al. 1986; Wiśniewski 2005; Yang 1997; Yang et al. 1999]. Such an approach to positioning a vehicle in motion and the solid objects under observation opens an opportunity to create dynamic and interactive navigational structures. The main subject of the theoretical considerations presented in this paper is the Interactive Navigational Structure. In this paper, the Structure stands for the existing systems of navigational signs, any observed solid objects, and the vehicles carrying out navigation (submarines included), which, owing to mutual geometrical and physical dependencies, allow the coordinates of new elements of the Structure to be determined and the already known coordinates of other elements to be corrected.
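Since the abstract argues that geo-navigation should adopt contemporary geodetic estimation methods, including robust M-estimation, a small illustrative sketch follows. It fits a 2D position from distance observations to known beacons using iteratively reweighted least squares with Huber weights; the beacon coordinates, threshold, and simulated outlier are invented illustration data, not material from the paper.

```python
import numpy as np

def huber_weights(residuals, k=1.5):
    """Huber M-estimation weights: 1 inside the threshold, k/|r| outside."""
    r = np.abs(residuals)
    return np.where(r <= k, 1.0, k / np.maximum(r, 1e-12))

def robust_position_fix(beacons, distances, p0, iterations=20):
    """Estimate a 2D position from distances to known beacons (IRLS)."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iterations):
        diff = p - beacons                   # (n, 2) vectors beacon -> position
        rho = np.linalg.norm(diff, axis=1)   # predicted distances
        r = rho - distances                  # residuals
        J = diff / rho[:, None]              # Jacobian of rho w.r.t. p
        W = np.diag(huber_weights(r))
        # Weighted Gauss-Newton step: solve (J^T W J) dp = -J^T W r
        dp = np.linalg.solve(J.T @ W @ J, -J.T @ W @ r)
        p = p + dp
        if np.linalg.norm(dp) < 1e-9:
            break
    return p

# Illustration only: four shore beacons, one distance grossly in error.
beacons = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_pos = np.array([40.0, 30.0])
distances = np.linalg.norm(beacons - true_pos, axis=1)
distances[3] += 25.0                         # simulated outlier observation
print(robust_position_fix(beacons, distances, p0=[50.0, 50.0]))
```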


Development of a Simulation Model for Supply Chain Management of Precast Concrete (프리캐스트 콘크리트 공급사슬 관리를 위한 시뮬레이션 모형 개발)

  • Kwon, Hyeonju;Jeon, Sangwon;Lee, Jaeil;Jeong, Keunchae
    • Korean Journal of Construction Engineering and Management / v.22 no.5 / pp.86-98 / 2021
  • In this study, we developed a simulation model for supply chain management of Precast Concrete (PC) based construction. To this end, information on the factory production/site construction system was collected through a literature review and field research, and based on this information, a simulation model was defined by describing the supply chain, entities, resources, and processes. Next, using the Arena simulation software, a simulation model for the PC supply chain was developed by setting up model frameworks, data modules, flowchart modules, and animation modules. Finally, verification and validation of the developed model were performed using five review methods: model check, animation check, extreme value test, average value test, and actual case test. As a result, the model was found to adequately represent the flows and characteristics of the PC supply chain without any logical errors and to provide accurate performance evaluation values for the target supply chains. The proposed simulation model is expected to serve as a performance evaluation platform for developing management techniques to optimally operate the PC supply chain.
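The authors built their model in the commercial Arena package, which is not shown here; purely as an illustration of the same entity/resource/process structure (factory production, transport, site erection), a minimal discrete-event sketch using the open-source SimPy library follows. All capacities and durations are invented placeholders, not parameters from the paper.

```python
import random
import simpy

# Hypothetical resource capacities and durations (hours); not from the paper.
random.seed(1)

def pc_member(env, name, mold, crane, log):
    """One precast concrete member flowing through the supply chain."""
    with mold.request() as req:               # factory production
        yield req
        yield env.timeout(random.uniform(8, 12))
    yield env.timeout(random.uniform(2, 4))   # transport to the site
    with crane.request() as req:              # site erection
        yield req
        yield env.timeout(random.uniform(1, 2))
    log.append((name, env.now))

def run(n_members=20):
    env = simpy.Environment()
    mold = simpy.Resource(env, capacity=3)    # casting molds at the factory
    crane = simpy.Resource(env, capacity=1)   # erection crane on site
    log = []
    for i in range(n_members):
        env.process(pc_member(env, f"PC-{i:02d}", mold, crane, log))
    env.run()
    return log

for name, finish in run():
    print(f"{name} erected at t={finish:.1f} h")
```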

Analysis of Feature Map Compression Efficiency and Machine Task Performance According to Feature Frame Configuration Method (피처 프레임 구성 방안에 따른 피처 맵 압축 효율 및 머신 태스크 성능 분석)

  • Rhee, Seongbae;Lee, Minseok;Kim, Kyuheon
    • Journal of Broadcast Engineering / v.27 no.3 / pp.318-331 / 2022
  • With the recent development of hardware computing devices and software-based frameworks, machine tasks using deep learning networks are expected to be utilized in various industrial fields and personal IoT devices. However, running deep learning networks on end devices is costly, while transmitting only the machine task results from a server may not give users the results they request; to overcome these limitations, Collaborative Intelligence (CI) proposed the transmission of feature maps as a solution. In this paper, an efficient compression method for feature maps, whose data size is vast, is analyzed and presented through experiments to support the CI paradigm. The method increases redundancy by applying feature map reordering to improve compression efficiency in traditional video codecs, and a feature frame configuration method is proposed that improves compression efficiency while maintaining machine task performance by simultaneously utilizing an image compression format and a video compression format. Experimental results show that the proposed method achieves a 14.29% BD-rate gain in terms of BPP and mAP compared to the feature compression anchor of MPEG-VCM.
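The core reordering idea, packing feature map channels into a single 2D frame so that a conventional image/video codec can exploit their redundancy, can be sketched as follows. The tensor shape and row-major tiling layout are illustrative assumptions, not the exact reordering defined in the paper or in MPEG-VCM.

```python
import numpy as np

def tile_feature_map(fmap: np.ndarray, cols: int) -> np.ndarray:
    """Tile a (C, H, W) feature map into one 2D frame, row-major by channel."""
    c, h, w = fmap.shape
    rows = int(np.ceil(c / cols))
    frame = np.zeros((rows * h, cols * w), dtype=fmap.dtype)
    for ch in range(c):
        r, col = divmod(ch, cols)
        frame[r * h:(r + 1) * h, col * w:(col + 1) * w] = fmap[ch]
    return frame

def untile_feature_map(frame: np.ndarray, c: int, h: int, w: int) -> np.ndarray:
    """Inverse operation: recover the (C, H, W) tensor from the tiled frame."""
    cols = frame.shape[1] // w
    fmap = np.empty((c, h, w), dtype=frame.dtype)
    for ch in range(c):
        r, col = divmod(ch, cols)
        fmap[ch] = frame[r * h:(r + 1) * h, col * w:(col + 1) * w]
    return fmap

# Example: 256 channels of 32x32 packed into a 16x16 grid of tiles.
fmap = np.random.rand(256, 32, 32).astype(np.float32)
frame = tile_feature_map(fmap, cols=16)          # shape (512, 512)
assert np.allclose(untile_feature_map(frame, 256, 32, 32), fmap)
```

The tiled frame (or a sequence of such frames) is what would then be handed to an image or video encoder in a CI pipeline.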

Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.89-105 / 2014
  • Since the emergence of the Internet, social media with highly interactive Web 2.0 applications has provided very user-friendly means for consumers and companies to communicate with each other. Users routinely publish content involving their opinions and interests on social media such as blogs, forums, chat rooms, and discussion boards, and the content is released in real time on the Internet. For that reason, many researchers and marketers regard social media content as a source of information for business analytics, and many studies have reported results on mining business intelligence from social media content. In particular, opinion mining and sentiment analysis, as techniques to extract, classify, understand, and assess the opinions implicit in text content, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques, and tools have been presented by these researchers. However, we have found weaknesses in these methods, which are often technically complicated and not sufficiently user-friendly for supporting business decisions and planning. In this study, we attempted to formulate a more comprehensive and practical approach to opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining of social media content, from the initial data gathering stage to the final presentation session. Our proposed approach consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts have to choose the target social media; each target medium requires a different means of access, such as open APIs, search tools, DB2DB interfaces, content purchasing, and so on. The second phase is pre-processing to generate useful material for meaningful analysis; if garbage data are not removed, the results of social media analysis will not provide meaningful and useful business insights, so natural language processing techniques should be applied to clean the data. The next step is the opinion mining phase, where the cleansed social media content set is analyzed. The qualified data set includes not only user-generated content but also content identification information such as creation date, author name, user id, content id, hit counts, review or reply, favorite, etc. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool: topic extraction and buzz analysis are usually related to market trend analysis, while sentiment analysis is utilized for reputation analysis. There are also various other applications, such as stock prediction, product recommendation, and sales forecasting. The last phase is the visualization and presentation of analysis results. The major purpose of this phase is to explain the results of the analysis and help users comprehend their meaning; therefore, to the extent possible, deliverables from this phase should be simple, clear, and easy to understand rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company, NS Food, with a 66.5% market share; the firm has kept the No. 1 position in the Korean "Ramen" business for several decades.
We collected a total of 11,869 pieces of content, including blogs, forum posts, and news articles. After collecting the social media content, we generated instant-noodle-business-specific language resources for data manipulation and analysis using natural language processing, and we classified the content into more detailed categories such as marketing features, environment, and reputation. In these phases, we used free, open-source software, such as the TM, KoNLP, ggplot2, and plyr packages of the R project. As a result, we presented several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other visualized images, to provide vivid, full-colored examples. With these, business actors can detect at a glance which areas are weak, strong, positive, negative, quiet, or loud. A heat map can show the movement of sentiment or volume across a category-and-time matrix, with color density indicating intensity over time periods. A valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers who want to quickly understand the "big picture" business situation in a hierarchical structure, since a tree map can present buzz volume and sentiment for a given period in a single visualization. This case study offers real-world business insights from market sensing and demonstrates to practical-minded business users how they can use these kinds of results for timely decision making in response to ongoing changes in the market. We believe our approach can provide a practical and reliable guide to opinion mining with visualized results that are immediately useful, not only in the food industry but in other industries as well.
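The case study itself was carried out with R packages (TM, KoNLP, ggplot2, plyr), which are not reproduced here. As a language-neutral illustration of the collect-qualify-analyze-visualize cycle described above, the following sketch scores sentiment with a tiny hypothetical lexicon and counts topic words using only the Python standard library; the lexicon, stop words, and sample posts are invented placeholders.

```python
import re
from collections import Counter

# Hypothetical resources; a real study would use domain lexicons (e.g. via KoNLP).
LEXICON = {"delicious": 1, "tasty": 1, "love": 1, "bland": -1, "salty": -1}
STOPWORDS = {"the", "a", "is", "it", "and", "this", "not"}

def qualify(post: str) -> list[str]:
    """Pre-processing phase: lowercase, keep letters only, drop stop words."""
    tokens = re.findall(r"[a-z]+", post.lower())
    return [t for t in tokens if t not in STOPWORDS]

def analyze(posts: list[str]) -> tuple[Counter, list[int]]:
    """Opinion mining phase: topic word counts and per-post sentiment scores."""
    topics: Counter = Counter()
    sentiments = []
    for post in posts:
        tokens = qualify(post)
        topics.update(tokens)
        sentiments.append(sum(LEXICON.get(t, 0) for t in tokens))
    return topics, sentiments

# Collected content would come from blogs/forums via APIs or crawlers.
posts = ["This ramen is delicious and a bit salty", "Bland broth, not tasty"]
topics, sentiments = analyze(posts)
print(topics.most_common(5))   # input to word clouds / heat maps
print(sentiments)              # input to sentiment and volume graphs
```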

Applying Meta-model Formalization of Part-Whole Relationship to UML: Experiment on Classification of Aggregation and Composition (UML의 부분-전체 관계에 대한 메타모델 형식화 이론의 적용: 집합연관 및 복합연관 판별 실험)

  • Kim, Taekyung
    • Journal of Intelligence and Information Systems / v.21 no.1 / pp.99-118 / 2015
  • Object-oriented programming languages have been widely adopted for developing modern information systems. The use of object-oriented (OO) programming concepts has reduced the effort of reusing pre-existing code, and OO concepts have proved useful in interpreting system requirements. In line with this, modern conceptual modeling approaches support features of object-oriented programming. The Unified Modeling Language (UML) has become one of the de facto standards for information system designers, since the language provides a set of visual diagrams, comprehensive frameworks, and flexible expressions. In a modeling process, UML users need to consider the relationships between classes. Based on an explicit and clear representation of classes, the conceptual model from UML gathers the attributes and methods needed to guide software engineers. In particular, identifying an association between a part class and a whole class is part of the standard grammar of UML. The representation of part-whole relationships is natural in real-world domains, since many physical objects are perceived in terms of parts and wholes, and even abstract concepts such as roles are easily identified by part-whole perception. A representation of part-whole in UML therefore seems reasonable and useful. However, it should be admitted that the use of UML is limited by the lack of practical guidelines on how to identify a part-whole relationship and how to classify it as an aggregation or a composition association. Research efforts to develop such procedural knowledge are meaningful and timely, since misleading perceptions of part-whole relationships are hard to filter out in initial conceptual modeling and result in deteriorated system usability. The current method of identifying and classifying part-whole relationships relies mainly on linguistic expression. This simple approach is rooted in the idea that a has-a phrase constructs a part-whole perception between objects: if the relationship is strong, the association is classified as a composition; otherwise, it is an aggregation. Admittedly, linguistic expressions contain clues to part-whole relationships, so the approach is reasonable and cost-effective in general. Nevertheless, it does not address concerns about accuracy and theoretical legitimacy, and research on guidelines for part-whole identification and classification has not yet accumulated sufficient results to solve this issue. The purpose of this study is to provide step-by-step guidelines for identifying and classifying part-whole relationships in the context of UML use. Based on theoretical work on Meta-model Formalization, self-check forms that help conceptual modelers work on part-whole classes are developed. To evaluate the performance of the suggested idea, an experimental approach was adopted. The findings show that UML users obtain better results with the guidelines based on Meta-model Formalization than with a natural-language classification scheme conventionally recommended by UML theorists. This study contributes to the stream of research on part-whole relationships by extending the applicability of Meta-model Formalization. Compared to traditional approaches that aim to establish criteria for evaluating the results of conceptual modeling, this study expands the scope to the modeling process.
Traditional theories on evaluating part-whole relationships in the context of conceptual modeling aim to rule out incomplete or wrong representations. Such posterior qualification is still important, but the lack of a practical alternative reduces its usefulness for modelers who want to avoid errors or misperceptions in part-whole identification and classification in the first place. The findings of this study can be further developed by introducing more comprehensive variables and real-world settings. In addition, it is highly recommended to replicate and extend the suggested idea of utilizing Meta-model Formalization by creating alternative forms of the guidelines, including plugins for integrated development environments.
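As a concrete reminder of what the classification decides, the sketch below contrasts the two associations in plain Python: in composition the part's lifetime is bound to the whole (the whole creates and owns it), while in aggregation the whole merely references parts that exist independently. The class names are generic textbook examples, not taken from the paper's experiment.

```python
class Engine:
    """Part whose lifetime is bound to its owner (composition)."""
    def __init__(self, power_kw: float):
        self.power_kw = power_kw

class Car:
    """Whole in a composite association: it creates and owns its Engine."""
    def __init__(self, power_kw: float):
        self.engine = Engine(power_kw)   # part cannot outlive the car

class Professor:
    """Part that exists independently of any whole (aggregation)."""
    def __init__(self, name: str):
        self.name = name

class Department:
    """Whole in an aggregate association: it only references Professors."""
    def __init__(self):
        self.members: list[Professor] = []
    def add(self, professor: Professor):
        self.members.append(professor)   # shared, independently created part

kim = Professor("Kim")
ai_dept = Department()
ai_dept.add(kim)            # aggregation: kim survives if ai_dept is deleted
sedan = Car(power_kw=120)   # composition: sedan.engine dies with sedan
```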

An Ontology Model for Public Service Export Platform (공공 서비스 수출 플랫폼을 위한 온톨로지 모형)

  • Lee, Gang-Won;Park, Sei-Kwon;Ryu, Seung-Wan;Shin, Dong-Cheon
    • Journal of Intelligence and Information Systems / v.20 no.1 / pp.149-161 / 2014
  • The export of domestic public services to overseas markets faces many potential obstacles, stemming from different export procedures, target services, and socio-economic environments. In order to alleviate these problems, a business incubation platform, as an open business ecosystem, can be a powerful instrument to support the decisions taken by participants and stakeholders. In this paper, we propose an ontology model and its implementation processes for a business incubation platform with an open and pervasive architecture to support public service exports. For the conceptual model of the platform ontology, export case studies are used for requirements analysis. The conceptual model shows the basic structure, with vocabulary and its meaning, the relationships between ontologies, and key attributes. For the implementation and testing of the ontology model, the logical structure is edited using the Protégé editor. The core engine of the business incubation platform is the simulator module, where the various contexts of export businesses should be captured, defined, and shared with other modules through ontologies. It is well known that an ontology, in which concepts and their relationships are represented using a shared vocabulary, is an efficient and effective tool for organizing meta-information to develop structural frameworks in a particular domain. The proposed model consists of five ontologies derived from a requirements survey of major stakeholders and their operational scenarios: service, requirements, environment, enterprise, and country. The service ontology contains several components that can find and categorize public services through a case analysis of public service exports. Key attributes of the service ontology are composed of categories including objective, requirements, activity, and service. The objective category, which has sub-attributes including operational body (organization) and user, acts as a reference for searching and classifying public services. The requirements category relates to the functional needs at a particular phase of system (service) design or operation; its sub-attributes are user, application, platform, architecture, and social overhead. The activity category represents business processes during the operation and maintenance phase and has sub-attributes including facility, software, and project unit. The service category, with sub-attributes such as target, time, and place, acts as a reference for sorting and classifying the public services. The requirements ontology is derived from the basic and common components of public services and target countries. The key attributes of the requirements ontology are business, technology, and constraints. Business requirements represent the needs of processes and activities for public service export; technology represents the technological requirements for the operation of public services; and constraints represent the business laws, regulations, or cultural characteristics of the target country. The environment ontology is derived from case studies of target countries for public service operation. Key attributes of the environment ontology are user, requirements, and activity. A user includes stakeholders in public services, from citizens to operators and managers; the requirements attribute represents the managerial and physical needs during operation; the activity attribute represents business processes in detail.
The enterprise ontology is introduced from a previous study, and its attributes are activity, organization, strategy, marketing, and time. The country ontology is derived from the demographic and geopolitical analysis of the target country, and its key attributes are economy, social infrastructure, law, regulation, customs, population, location, and development strategies. The priority list of target services for a certain country and/or the priority list of target countries for a certain public service are generated by a matching algorithm. These lists are used as input seeds to simulate consortium partners and government policies and programs. In the simulation, the environmental differences between Korea and the target country can be customized through a gap analysis and work-flow optimization process. When the process gap between Korea and the target country is too large for a single corporation to cover, a consortium is considered as an alternative, and various alternatives are derived from the capability index of enterprises. For financial packages, a mix of various foreign aid funds can be simulated during this stage. It is expected that the proposed ontology model and the business incubation platform can be used by various participants in the public service export market. They could be especially beneficial to small and medium businesses that have relatively fewer resources and less experience with public service export. We also expect that the open and pervasive service architecture in a digital business ecosystem will help stakeholders find new opportunities through information sharing and collaboration on business processes.
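The matching step that turns the ontologies into priority lists can be pictured with a small sketch: each public service and each target country is reduced to a set of attribute tags, and a simple overlap score ranks the candidates. The dataclasses, tags, and scoring rule below are illustrative assumptions, not the platform's actual ontologies or matching algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    requirements: set[str] = field(default_factory=set)  # requirements ontology

@dataclass
class Country:
    name: str
    environment: set[str] = field(default_factory=set)   # environment ontology

def match_score(service: Service, country: Country) -> float:
    """Fraction of a service's requirements satisfied by the country."""
    if not service.requirements:
        return 0.0
    return len(service.requirements & country.environment) / len(service.requirements)

def priority_list(services: list[Service], country: Country) -> list[tuple[str, float]]:
    """Priority list of target services for one country, highest score first."""
    scored = [(s.name, match_score(s, country)) for s in services]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Invented illustration data.
services = [
    Service("e-procurement", {"broadband", "digital-id", "open-data-law"}),
    Service("smart-bus-card", {"broadband", "banking-network"}),
]
target = Country("Country-X", {"broadband", "banking-network"})
print(priority_list(services, target))
```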