• Title/Summary/Keyword: systems modeling language (시스템 모델링 언어)

Search Results: 312

USN Metadata Managements Agent based on XMDR-DAI for Sensor Network (센서 네트워크를 위한 XMDR-DAI 기반의 USN 메타데이터 관리 에이전트)

  • Moon, Seok-Jae;Hwang, Chi-Gon;Yoon, Chang-Pyo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2014.05a
    • /
    • pp.247-249
    • /
    • 2014
  • In Ubiquitous Sensor Network (USN) environments, sensors and sensor nodes come from heterogeneous sensor networks, and the characteristics of each component are highly diverse. A single, consistent definition and management of metadata is therefore essential for interoperability between sensors and sensor nodes. SensorML (Sensor Model Language) exists as the standard language for modeling sensors. In this paper, we define XMDR-DAI-based USN metadata at the application level for sensor devices, sensor nodes, and sensor networks, and we propose an agent technique that effectively stores and retrieves this XMDR-DAI-based USN metadata. Because the proposed sensor metadata is based on SensorML, it maintains interoperability in the USN environment and can be used directly by USN middleware or a metadata management system to manage metadata in applications.

  • PDF
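
For orientation, the following is a minimal Python sketch of what a SensorML-style metadata record for a sensor node might look like; the element and attribute names are illustrative assumptions, not the XMDR-DAI schema defined in the paper.

```python
# Minimal sketch: building a SensorML-style metadata record for a sensor node.
# Element and attribute names are illustrative, not the paper's XMDR-DAI schema.
import xml.etree.ElementTree as ET

def build_sensor_metadata(sensor_id: str, sensor_type: str, location: str) -> str:
    root = ET.Element("SensorML")
    member = ET.SubElement(root, "member")
    system = ET.SubElement(member, "System", {"id": sensor_id})
    ET.SubElement(system, "identifier", {"name": "sensorType"}).text = sensor_type
    ET.SubElement(system, "position").text = location
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    print(build_sensor_metadata("node-01", "temperature", "37.5665,126.9780"))
```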

Design of Initial Decision-Making Support Interface for Crop Facility Cultivation (작물 시설재배 초기 의사결정 지원 인터페이스 설계)

  • Kim, Kuk-Jong;Cho, Yong-Yoon
    • Journal of Internet of Things and Convergence
    • /
    • v.8 no.2
    • /
    • pp.71-78
    • /
    • 2022
  • Recently, the number of people wishing to return to farming has been increasing. However, returnees' lack of farming experience and management information is one of the main reasons the probability of agricultural failure rises. This study proposes an interface that supports early decision-making on facility cultivation management for returnees who want to practice facility cultivation. The proposed interface is designed with UML (Unified Modeling Language) and provides key decision-making information, such as land/crop suitability, land/facility costs, and management costs, according to input data such as the cultivation area, selected crop, and cultivation type chosen by the user. Through the proposed interface, facility cultivators can effectively and quickly obtain the initial decision-making information for facility cultivation in their desired target area.
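
A minimal sketch of the kind of input-to-decision mapping the abstract describes, assuming hypothetical class names, fields, and cost figures rather than the paper's actual UML design:

```python
# Minimal sketch of mapping cultivation inputs to initial decision information.
# Class names, fields, and cost rates are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CultivationInput:
    region: str
    crop: str
    cultivation_type: str   # e.g. "greenhouse" or "open-field"
    area_m2: float

@dataclass
class DecisionSummary:
    land_crop_suitability: str
    land_facility_cost: float
    management_cost: float

def estimate_decision(inp: CultivationInput) -> DecisionSummary:
    # Placeholder figures; a real system would query regional and crop databases.
    facility_rate = 50.0 if inp.cultivation_type == "greenhouse" else 10.0
    return DecisionSummary(
        land_crop_suitability="suitable" if inp.crop else "unknown",
        land_facility_cost=inp.area_m2 * facility_rate,
        management_cost=inp.area_m2 * 2.5,
    )

print(estimate_decision(CultivationInput("Suncheon", "tomato", "greenhouse", 1000.0)))
```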

An Analysis of Soil Pressure Gauge Result from KHC Test Road (시험도로 토압계 계측결과 분석)

  • In Byeong-Eock;Kim Ji-Won;Kim Kyong-Ha;Lee Kwang-Ho
    • International Journal of Highway Engineering
    • /
    • v.8 no.3 s.29
    • /
    • pp.129-141
    • /
    • 2006
  • The vertical soil pressure developed in the granular layer of an asphalt pavement system is influenced by various factors, including the wheel load magnitude, the loading speed, and the asphalt pavement temperature. This research observed the distribution of vertical soil pressure in the pavement supporting layers by investigating data measured by the soil pressure gauges in the KHC Test Road. The existing compaction specification for subbase and subgrade was also evaluated against the measured vertical pressure. A finite element analysis was conducted to verify the accuracy of the results against the measured data, since it can extend the investigation without extensive field testing. The test data were collected from the A5, A7, A14, and A15 test sections in August, September, and November 2004 and in August 2005; these sections and data were selected because they had the best quality. The size of the influence area was evaluated, and the variation of vertical pressure was investigated with respect to load level, load speed, and pavement temperature. Lower speed, higher load level, and higher pavement temperature increased the vertical pressure and reduced the area of influence. The finite element results showed a trend of vertical pressure variation similar to the measured data. The compaction quality specification for subbase and subgrade is higher than the level of vertical pressure measured under truck loading, so it should be further investigated.

  • PDF

A Study on Performance Evaluation of Hidden Markov Network Speech Recognition System (Hidden Markov Network 음성인식 시스템의 성능평가에 관한 연구)

  • 오세진;김광동;노덕규;위석오;송민규;정현열
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.4 no.4
    • /
    • pp.30-39
    • /
    • 2003
  • In this paper, we carried out a performance evaluation of an HM-Net (Hidden Markov Network) speech recognition system on Korean speech databases. Acoustic models were constructed using HM-Nets, a modification of HMMs (Hidden Markov Models), which are widely used statistical modeling methods. HM-Nets perform state splitting in the contextual and temporal domains using the PDT-SSS (Phonetic Decision Tree-based Successive State Splitting) algorithm, a modification of the original SSS algorithm. In particular, it adopts a phonetic decision tree so that contextual-domain state splitting can effectively express context information that does not appear in the training speech data. In temporal-domain state splitting, states are split to effectively represent the duration information of each phoneme, and the optimal model network of triphone types is then constructed with the given parameters. Speech recognition was performed using the one-pass Viterbi beam search algorithm with phone-pair/word-pair grammars for phoneme/word recognition, respectively, and a multi-pass search algorithm with n-gram language models for sentence recognition. A tree-structured lexicon was used to decrease the number of nodes by sharing the same prefixes among words. The performance of the HM-Net speech recognition system was evaluated under various recognition conditions. Through the experiments, we verified that it has very good recognition performance compared with previously introduced recognition systems.

  • PDF
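
A minimal sketch of the tree-structured (prefix-sharing) lexicon idea mentioned in the abstract; the node layout and example entries are assumptions, not the paper's implementation.

```python
# Minimal sketch of a tree-structured (prefix-sharing) lexicon: words that share
# the same phone prefix share the same nodes, reducing the search network size.
class LexNode:
    def __init__(self):
        self.children = {}   # phone -> LexNode
        self.word = None     # word ending at this node, if any

def build_tree_lexicon(lexicon: dict[str, list[str]]) -> LexNode:
    """lexicon maps a word to its phone sequence; shared prefixes share nodes."""
    root = LexNode()
    for word, phones in lexicon.items():
        node = root
        for ph in phones:
            node = node.children.setdefault(ph, LexNode())
        node.word = word
    return root

# Example: "sun" and "sung" share the prefix nodes for s-u-n.
tree = build_tree_lexicon({"sun": ["s", "u", "n"], "sung": ["s", "u", "n", "g"]})
```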

An Object-Oriented Analysis and Design Methodology for Secure Database Design -focused on Role Based Access Control- (안전한 데이터베이스 설계를 위한 객체지향 분석·설계 방법론 -역할기반 접근제어를 중심으로-)

  • Joo, Kyung-Soo;Woo, Jung-Woong
    • Journal of the Korea Society of Computer and Information
    • /
    • v.18 no.6
    • /
    • pp.63-70
    • /
    • 2013
  • With the advancement of IT, application systems with various and complex functions are required. Such application systems are typically built on a database in order to manage data efficiently, but most object-oriented analysis and design methodologies for developing web application systems do not provide an interconnection with the database. As requirements regarding security have increased, the importance of security has been emphasized. However, since security is usually considered only at the last step of development, it is difficult to apply security throughout the whole process of system development, from requirement analysis to implementation. Therefore, this paper suggests an object-oriented analysis and design methodology for secure database design that spans requirement analysis through implementation. The methodology offers the correlation with the database that most existing object-oriented analysis and design methodologies do not provide, and it uses the modeling language UMLsec to apply security to the database design. In addition, RBAC (Role-Based Access Control) of the relational database is used to implement the security.
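
A minimal sketch of how role-based access control is typically expressed in a relational database with standard SQL GRANT statements; the role, table, and privilege names are illustrative, not the paper's case study.

```python
# Minimal sketch: expressing role-based access control with standard SQL GRANTs.
# Roles, tables, and privileges below are illustrative assumptions.
ROLE_PRIVILEGES = {
    "doctor": {"patient_record": ["SELECT", "UPDATE"]},
    "receptionist": {"patient_record": ["SELECT"]},
}

def grant_statements(role_privileges: dict) -> list[str]:
    """Generate CREATE ROLE / GRANT statements from a role-to-privilege mapping."""
    stmts = [f"CREATE ROLE {role};" for role in role_privileges]
    for role, tables in role_privileges.items():
        for table, privs in tables.items():
            stmts.append(f"GRANT {', '.join(privs)} ON {table} TO {role};")
    return stmts

for stmt in grant_statements(ROLE_PRIVILEGES):
    print(stmt)
```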

Semi-supervised domain adaptation using unlabeled data for end-to-end speech recognition (라벨이 없는 데이터를 사용한 종단간 음성인식기의 준교사 방식 도메인 적응)

  • Jeong, Hyeonjae;Goo, Jahyun;Kim, Hoirin
    • Phonetics and Speech Sciences
    • /
    • v.12 no.2
    • /
    • pp.29-37
    • /
    • 2020
  • Recently, neural network-based deep learning algorithms have dramatically improved performance compared to classical Gaussian mixture model based hidden Markov model (GMM-HMM) automatic speech recognition (ASR) systems. In addition, research on end-to-end (E2E) speech recognition systems that integrate language modeling and decoding has been actively conducted to better exploit the advantages of deep learning techniques. In general, E2E ASR systems consist of a multi-layer encoder-decoder structure with attention and therefore require a large amount of speech-text paired data to achieve good performance. Obtaining speech-text paired data requires a great deal of human labor and time, and this is a high barrier to building an E2E ASR system. Previous studies have tried to improve the performance of E2E ASR systems using relatively small amounts of speech-text paired data, but most were conducted using only speech-only data or only text-only data. In this study, we propose a semi-supervised training method that enables an E2E ASR system to perform well on corpora from different domains by using both speech-only and text-only data. The proposed method adapts effectively to different domains, showing good performance in the target domain without degrading much in the source domain.
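
As one illustration of how speech-only target-domain data can be exploited, here is a minimal pseudo-labeling sketch; it assumes a hypothetical `model` object exposing `train` and `decode_with_score`, and it is a generic illustration of the idea, not the paper's exact training recipe.

```python
# Minimal sketch of pseudo-labeling with unlabeled (speech-only) target-domain data.
# The `model` interface (train, decode_with_score) is a hypothetical assumption.
def pseudo_label_adaptation(model, labeled_source, unlabeled_target, confidence=0.9):
    # 1) Train on the labeled source-domain (speech, text) pairs.
    model.train(labeled_source)
    # 2) Decode target-domain speech and keep only confident hypotheses.
    pseudo_pairs = []
    for speech in unlabeled_target:
        text, score = model.decode_with_score(speech)
        if score >= confidence:
            pseudo_pairs.append((speech, text))
    # 3) Fine-tune on the source pairs plus the pseudo-labeled target pairs.
    model.train(labeled_source + pseudo_pairs)
    return model
```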

A Construction of the C_MDR(Component_MetaData Registry) for the Environment of Exchanging the Component (컴포넌트 유통환경을 위한 컴포넌트 메타데이타 레지스트리 구축 : C_MDR)

  • Song, Chee-Yang;Yim, Sung-Bin;Baik, Doo-Kwon;Kim, Chul-Hong
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.7 no.6
    • /
    • pp.614-629
    • /
    • 2001
  • As the information-intensive society of the 21st century built on the global Internet advances, software is becoming larger and more complex, and the demand for software is increasing briskly. Activating reuse by developing and exchanging standardized components has therefore become an important issue in academia and industry. Currently, foreign marketplaces provide information services for commercial components as company products, but the components serviced in each marketplace are different, insufficient, and unstandardized; that is, a component data registry based on ISO 11179 has not been constructed. The national government has also stepped up its plan to release public components in 2001. Therefore, systems that serve as tools for sharing and exchanging data have to support meta-information for standardized components. In this paper, we propose the C_MDR system, a tool to register and manage standardized meta-information, based on ISO 11179, for commercialized common components. The purpose of this system is to share and exchange data systematically so as to accelerate component reuse. We present a specification platform for component meta-information, define the meta-information according to this platform, and represent it in XML to enhance interoperability with other systems. We also show that a three-layered representation makes the modeling simple and understandable. The system was implemented as a web-based prototype for component meta-information, using ASP as the development language and the Oracle RDBMS for PC. We expect the standardization of exchanged component metadata and its applicability to component exchange and reuse tools.

  • PDF
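
A minimal sketch of serializing a component metadata record to XML for exchange; the element names are assumptions, not the ISO 11179-based schema the paper defines.

```python
# Minimal sketch: serializing a component metadata record to XML for exchange
# between registries. Element names are illustrative assumptions.
import xml.etree.ElementTree as ET

def component_metadata_xml(name: str, version: str, vendor: str, interface: str) -> str:
    root = ET.Element("ComponentMetadata")
    ET.SubElement(root, "Name").text = name
    ET.SubElement(root, "Version").text = version
    ET.SubElement(root, "Vendor").text = vendor
    ET.SubElement(root, "Interface").text = interface
    return ET.tostring(root, encoding="unicode")

print(component_metadata_xml("ShoppingCart", "1.2", "ExampleSoft", "ICartManager"))
```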

A Hierarchical Group-Based CAVLC Decoder (계층적 그룹 기반의 CAVLC 복호기)

  • Ham, Dong-Hyeon;Lee, Hyoung-Pyo;Lee, Yong-Surk
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.45 no.2
    • /
    • pp.26-32
    • /
    • 2008
  • Video compression schemes have been developed and used for many years, and H.264/AVC is currently the most efficient video coding standard. The H.264/AVC baseline profile adopts CAVLC (Context-Adaptive Variable Length Coding) as its entropy coding method. CAVLC gives better compression ratios than conventional VLC (Variable Length Coding). However, because the CAVLC decoder uses many VLC tables, it requires a large area in hardware and, since it must search the VLC tables, performs worse in software. In this paper, we propose a new hierarchical grouping method for the VLC tables. An index into the reconstructed VLC tables is obtained by simple arithmetic operations, so the VLC tables are accessed just once when decoding a symbol. We modeled the proposed algorithm in C, compiled it under ARM ADS1.2, and simulated it with the ARMulator. Experimental results show that the proposed algorithm reduces execution time by about 80% and 15% compared with the H.264/AVC reference program JM (Joint Model) 10.2 and a recently proposed arithmetic-operation algorithm, respectively.
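
A minimal sketch of the general idea of replacing table search with an arithmetically computed index; the grouping rule and group bases below are illustrative assumptions and differ from the paper's actual CAVLC tables.

```python
# Minimal sketch of the grouping idea: derive a table index from the code's
# leading-zero count and suffix bits, so each symbol needs a single table access.
# The grouping rule and group bases are illustrative, not the paper's CAVLC tables.
def decode_symbol(bitstream: str, group_base: dict[int, int]) -> tuple[int, str]:
    zeros = 0
    while zeros < len(bitstream) and bitstream[zeros] == "0":
        zeros += 1                      # leading zeros select the group
    suffix_len = max(zeros - 1, 0)      # assumed per-group suffix width
    start = zeros + 1
    suffix = bitstream[start:start + suffix_len]
    offset = int(suffix, 2) if suffix else 0
    index = group_base[zeros] + offset  # arithmetic index into the flat table
    return index, bitstream[start + suffix_len:]

# Example group bases for groups with 0, 1, 2 leading zeros (illustrative).
index, remaining = decode_symbol("00110", {0: 0, 1: 1, 2: 3})
```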

Preference-based Supply Chain Partner Selection Using Fuzzy Ontology (퍼지 온톨로지를 이용한 선호도 기반 공급사슬 파트너 선정)

  • Lee, Hae-Kyung;Ko, Chang-Seong;Kim, Tai-Oun
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.1
    • /
    • pp.37-52
    • /
    • 2011
  • Supply chain management is strategic thinking that enhances the value of the supply chain and adapts more promptly to the changing environment. For seamless partnership and value creation in supply chains, information and knowledge sharing and proper partner selection criteria must be applied; the partner selection criteria are thus critical to maintaining product quality and reliability. Each part of a product is supplied by an appropriate supply partner. The criteria for selecting partners include technological capability, quality, price, consistency, and so on. In reality, the criteria for partner selection may change according to the characteristics of the components: when a part is a core component, quality takes top priority over price, while for a standardized component a lower price has higher priority. Sometimes unexpected cases occur, such as an emergency order, in which the preference may shift to the top. Thus, SCM partner selection criteria must be determined dynamically according to the characteristics of the part and its context. The purpose of this research is to develop an OWL model for supply chain partnership that depends on the context and the characteristics of the parts. The uncertainty of variables is handled through fuzzy logic. Parts with numerical preference values and context are represented using OWL, and part preference is converted into fuzzy membership functions using fuzzy logic. For ontology reasoning, SWRL (Semantic Web Rule Language) is applied. To demonstrate the proposed model, the starter motor of an automobile is adopted. After the fuzzy ontology is constructed, the process of selecting a preference-based supply partner for each part is presented.
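
A minimal sketch of converting a numeric part preference into fuzzy membership degrees with triangular membership functions; the fuzzy sets and breakpoints are illustrative assumptions, not the paper's ontology.

```python
# Minimal sketch: converting a numeric preference into fuzzy membership degrees
# using triangular membership functions. Sets and breakpoints are illustrative.
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b over the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def preference_memberships(price_preference: float) -> dict[str, float]:
    # Preference scored 0-10; fuzzy sets for "low", "medium", "high" importance.
    return {
        "low": triangular(price_preference, -1, 0, 5),
        "medium": triangular(price_preference, 2, 5, 8),
        "high": triangular(price_preference, 5, 10, 11),
    }

print(preference_memberships(7.0))   # e.g. mostly "medium" and "high"
```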

Unified Design Methodology and Verification Platform for Giga-scale System on Chip (기가 스케일 SoC를 위한 통합 설계 방법론 및 검증 플랫폼)

  • Kim, Jeong-Hun
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.47 no.2
    • /
    • pp.106-114
    • /
    • 2010
  • We propose a unified design methodology and verification platform for giga-scale System on Chip (SoC). As VLSI integration grows, the existing RTL design methodology faces a production gap because design complexity increases, and the verification methodology needs to evolve to overcome a verification gap. The proposed platform includes high-level synthesis, and we develop a power-aware verification platform for low-power design together with verification automation based on its results. The verification automation and power-aware verification methodology are built on a control and data flow graph (CDFG), an abstract-level language, and RTL. The verification platform includes self-checking and a coverage-driven verification methodology. In particular, the number of random vectors decreases by a factor of at least 5.75 with the constrained random vector algorithm developed for power-aware verification. The platform can verify a low-power design with a general logic simulator by using a power and power-cell modeling method. This unified design and verification platform allows giga-scale designs to be automatically designed, synthesized, and verified from the system level down to RTL across the whole design flow.
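
A minimal sketch of constrained random stimulus generation, where random vectors are kept only if they satisfy a user-supplied constraint; the constraint shown is an illustrative assumption, not the paper's algorithm.

```python
# Minimal sketch of constrained random stimulus generation: vectors are drawn at
# random but kept only when they satisfy a constraint, so fewer vectors are needed
# to hit the interesting cases. The example constraint is illustrative.
import random

def constrained_random_vectors(n_bits: int, n_vectors: int, constraint) -> list[int]:
    vectors, seen = [], set()
    while len(vectors) < n_vectors:
        v = random.getrandbits(n_bits)
        if v not in seen and constraint(v):
            seen.add(v)
            vectors.append(v)
    return vectors

# Example constraint: exercise low-activity corner cases where few bits are set.
few_bits_set = lambda v: bin(v).count("1") <= 4
stimuli = constrained_random_vectors(n_bits=16, n_vectors=10, constraint=few_bits_set)
print(stimuli)
```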