• Title/Summary/Keyword: automatically


VKOSPI Forecasting and Option Trading Application Using SVM (SVM을 이용한 VKOSPI 일 중 변화 예측과 실제 옵션 매매에의 적용)

  • Ra, Yun Seon;Choi, Heung Sik;Kim, Sun Woong
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.177-192
    • /
    • 2016
  • Machine learning is a field of artificial intelligence. It refers to an area of computer science concerned with giving machines the ability to perform their own data analysis, decision making, and forecasting. One representative machine learning model is the artificial neural network, a statistical learning algorithm inspired by biological neural networks. Other machine learning models include the decision tree, naive Bayes, and SVM (support vector machine) models. Among these, we use the SVM model in this study because it is designed for classification and regression analysis, which fits our study well. The core principle of SVM is to find a reasonable hyperplane that separates different groups in the data space. Given information about the data in any two groups, the SVM model judges to which group new data belong based on the hyperplane obtained from the given data set. Thus, the greater the amount of meaningful data, the better the machine's learning ability. In recent years, many financial experts have focused on machine learning, seeing the potential of combining it with the financial field, where vast amounts of financial data exist. Machine learning techniques have proved powerful in describing non-stationary and chaotic stock price dynamics, and many studies have successfully forecast stock prices using machine learning algorithms. Recently, financial companies have begun to provide the Robo-Advisor service, a compound of "robot" and "advisor," which can perform various financial tasks through advanced algorithms that use rapidly changing, huge amounts of data. A Robo-Advisor's main tasks are to advise investors according to their personal investment propensity and to manage their portfolios automatically. 
In this study, we propose a method of forecasting the Korean volatility index, VKOSPI, using the SVM model, one of the machine learning methods, and of applying it to real option trading to increase trading performance. VKOSPI is a measure of the future volatility of the KOSPI 200 index based on KOSPI 200 index option prices, similar to the VIX index in the United States, which is based on S&P 500 option prices. The Korea Exchange (KRX) calculates and announces the real-time VKOSPI index. VKOSPI behaves like ordinary volatility and affects option prices. The directions of VKOSPI and option prices show a positive relation regardless of the option type (call and put options with various strike prices): if volatility increases, all call and put option premiums increase, because the probability of the option being exercised increases. The investor can know the change in an option's price with respect to a change in volatility in real time through Vega, the Black-Scholes measure of an option's sensitivity to changes in volatility. Therefore, accurate forecasting of VKOSPI movements is one of the important factors that can generate profit in option trading. In this study, we verified with real option data that an accurate forecast of VKOSPI can yield a large profit in real option trading. To the best of our knowledge, there have been no studies on predicting the direction of VKOSPI with machine learning and applying the prediction to actual option trading. In this study, we predicted daily VKOSPI changes with the SVM model and then entered an intraday option strangle position, which profits as option prices decline, only when VKOSPI was expected to decline during the day. We analyzed the results and tested whether trading based on the SVM's prediction is applicable to real option trading. 
The results showed that the prediction accuracy for VKOSPI was 57.83% on average, and that the number of position entries was 43.2, less than half of the benchmark (100). A small number of trades is an indicator of trading efficiency. In addition, the experiment showed that the trading performance was significantly higher than the benchmark.
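
The SVM idea sketched in the abstract, a hyperplane separating two classes (here "VKOSPI up" versus "VKOSPI down"), can be illustrated with a minimal linear SVM trained by sub-gradient descent on the hinge loss. The features, data, and hyperparameters below are entirely made up for illustration; the paper's actual feature set, kernel, and training procedure are not specified here.

```python
# Minimal linear SVM via sub-gradient descent on the regularised hinge loss:
#   lam/2 * |w|^2 + mean(max(0, 1 - y * (w.x + b)))
def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        for i in range(n):
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            if margin < 1:  # point violates the margin: hinge term is active
                w = [wj - lr * (lam * wj - y[i] * xj) for wj, xj in zip(w, X[i])]
                b += lr * y[i]
            else:           # only the regularisation term contributes
                w = [wj * (1 - lr * lam) for wj in w]
    return w, b

def predict(w, b, x):
    """+1 = VKOSPI expected to rise, -1 = expected to fall."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Made-up features (e.g. previous-day VKOSPI change, overnight KOSPI 200
# return); labels: +1 = VKOSPI rose, -1 = VKOSPI fell.
X = [[1.0, 1.0], [0.8, 1.2], [-1.0, -1.0], [-1.2, -0.7]]
y = [1, 1, -1, -1]
w, b = train_linear_svm(X, y)
```

In the trading scheme described in the abstract, the strangle position would be entered only on days where the predicted label is -1, i.e. when volatility, and hence option premiums, are expected to fall.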

Supplemental Lighting by HPS and PLS Lamps Affects Growth and Yield of Cucumber during Low Radiation Period (약광기 HPS와 PLS lamp를 이용한 오이의 보광재배효과)

  • Kwon, Joon-Kook;Yu, In-Ho;Park, Kyoung-Sub;Lee, Jae-Han;Kim, Jin-Hyun;Lee, Jung-Sup;Lee, Dong-Soo
    • Journal of Bio-Environment Control
    • /
    • v.27 no.4
    • /
    • pp.400-406
    • /
    • 2018
  • In this experiment, the effects of supplemental lighting on the growth and yield of cucumber (Cucumis sativus L. 'Fresh') plants during the low-radiation period of the winter season were investigated in glasshouses using common high-pressure sodium (HPS) lamps and newly developed plasma lighting system (PLS) lamps. Plants grown without supplemental lighting served as a control. Supplemental lighting was provided from November 20th, 2015 to March 15th, 2016 to ensure a 14-hour photoperiod (natural + supplemental light); the lamps were also operated automatically when outside solar radiation fell below 100 W·m⁻². Spectral analysis showed that the HPS lamp had a discrete spectrum, lacking radiation in the 400-550 nm band (blue-green light) but with a high output in the orange-red region (550-650 nm). The higher red output resulted in an increased red to far-red (R/FR) ratio for the HPS lamp. The PLS lamp had a continuous spectrum with peak radiation in the green region (490-550 nm). HPS had 12.6% lower output in photosynthetically active radiation (PAR) but 12.6% higher output in the near-infrared (NIR) spectral region compared with PLS. Both HPS and PLS lamps emitted very low levels of ultraviolet radiation (300-400 nm). Supplemental lighting from both HPS and PLS lamps increased plant height, leaf number, internode number, and dry weight of cucumber plants compared with the control. The photosynthetic activity of cucumber plants grown under the two supplemental lighting systems was comparable. The numbers of fruits per cucumber plant (fruit weight per plant) in the control, PLS, and HPS plots were 21.2 (2.9 kg), 38.7 (5.5 kg), and 40.4 (5.6 kg), respectively; supplemental lighting thereby increased yield 1.8-1.9 times compared with the control. An analysis of the economic feasibility of supplemental lighting in cucumber cultivation showed that, considering lamp installation and electricity costs, income increased by 37% and 62% for the PLS and HPS lamps, respectively.

Development of a deep-learning based tunnel incident detection system on CCTVs (딥러닝 기반 터널 영상유고감지 시스템 개발 연구)

  • Shin, Hyu-Soung;Lee, Kyu-Beom;Yim, Min-Jin;Kim, Dong-Gyou
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.19 no.6
    • /
    • pp.915-936
    • /
    • 2017
  • In this study, the current status of the Korean hazard mitigation guideline for tunnel operation is summarized. It shows that the requirements for CCTV installation have gradually become stricter and that the need for a tunnel incident detection system working in conjunction with the CCTVs in tunnels has greatly increased. Despite this, the mathematical-algorithm-based incident detection systems commonly applied in current tunnel operation show very low detection rates, below 50%. The putative major reasons seem to be (1) very weak illumination, (2) dust in the tunnel, and (3) the low installation height of the CCTVs, about 3.5 m. Therefore, an attempt is made in this study to develop a deep-learning-based tunnel incident detection system that is relatively insensitive to very poor visibility conditions. Its theoretical background is given, and validating investigations are undertaken, focused on moving vehicles and persons outside vehicles in the tunnel, which are the official major objects to be detected. Two scenarios are set up: (1) training and prediction in the same tunnel, and (2) training in one tunnel and prediction in another. In both cases, detection rates higher than 80% are achieved in prediction mode when the training and prediction periods are similar, but the detection rate drops to about 40% when the prediction time is far from the training time and no further training takes place. However, it is believed that the AI-based system will automatically improve its predictability as further training follows on accumulated CCTV big data, without any revision or calibration of the incident detection system.

Analysis of Acquisition Parameters That Caused Artifacts in Four-dimensional (4D) CT Images of Targets Undergoing Regular Motion (표적이 규칙적으로 움직일 때 생기는 4DCT 영상의 모션 아티팩트(Motion Artifact) 관련된 원인분석)

  • Sheen, Heesoon;Han, Youngyih;Shin, Eunhyuk
    • Progress in Medical Physics
    • /
    • v.24 no.4
    • /
    • pp.243-252
    • /
    • 2013
  • The aim of this study was to clarify the impacts of acquisition parameters on artifacts in four-dimensional computed tomography (4D CT) images, such as the partial volume effect (PVE), partial projection effect (PPE), and mis-matching of initial motion phases between adjacent beds (MMimph) in cine mode scanning. A thoracic phantom and two cylindrical phantoms (2 cm diameter and heights of 0.5 cm for No.1 and 10 cm for No.2) were scanned using 4D CT. For the thoracic phantom, acquisition was started automatically in the first scan with 5 sec and 8 sec of gantry rotation, thereby allowing a different phase at the initial projection of each bed. In the second scan, the initial projection at each bed was manually synchronized with the inhalation phase to minimize the MMimph. The third scan was intentionally un-synchronized with the inhalation phase. In the cylindrical phantom scan, one bed (2 cm) and three beds (6 cm) were used for 2 and 6 sec motion periods. Measured target volume to true volume ratios (MsTrueV) were computed. The relationships among MMimph, MsTrueV, and velocity were investigated. In the thoracic phantom, shorter gantry rotation provided more precise volume and was highly correlated with velocity when MMimph was minimal. MMimph reduced the correlation. For moving cylinder No. 1, MsTrueV was correlated with velocity, but the larger MMimph for 2 sec of motion removed the correlation. The volume of No. 2 was similar to the static volume due to the small PVE, PPE, and MMimph. Smaller target velocity and faster gantry rotation resulted in a more accurate volume description. The MMimph was the main parameter weakening the correlation between MsTrueV and velocity. Without reducing the MMimph, controlling target velocity and gantry rotation will not guarantee accurate image presentation given current 4D CT technology.
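
The volume comparison in the cylindrical-phantom experiment reduces to simple geometry: MsTrueV is the measured target volume divided by the true cylinder volume. A quick sketch (the measured volume below is a hypothetical number, not one reported in the paper):

```python
import math

def cylinder_volume(diameter_cm, height_cm):
    """True volume of a cylindrical phantom in cm^3: pi * r^2 * h."""
    r = diameter_cm / 2.0
    return math.pi * r * r * height_cm

def ms_true_v(measured_cm3, true_cm3):
    """MsTrueV: measured target volume over true volume (1.0 = exact)."""
    return measured_cm3 / true_cm3

v1 = cylinder_volume(2.0, 0.5)    # phantom No. 1: 2 cm diameter, 0.5 cm height
v2 = cylinder_volume(2.0, 10.0)   # phantom No. 2: 2 cm diameter, 10 cm height
ratio = ms_true_v(1.8, v1)        # hypothetical measured volume of 1.8 cm^3
```

A ratio above 1.0 indicates volume overestimation, the typical signature of motion artifacts such as the PVE and PPE discussed above.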

Enhancement of Inter-Image Statistical Correlation for Accurate Multi-Sensor Image Registration (정밀한 다중센서 영상정합을 위한 통계적 상관성의 증대기법)

  • Kim, Kyoung-Soo;Lee, Jin-Hak;Ra, Jong-Beom
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.4 s.304
    • /
    • pp.1-12
    • /
    • 2005
  • Image registration is a process of establishing the spatial correspondence between images of the same scene acquired at different viewpoints, at different times, or by different sensors. This paper presents a new algorithm for robust registration of images acquired by multiple sensors having different modalities: EO (electro-optic) and IR (infrared) sensors in this paper. Two approaches, feature-based and intensity-based, are usually possible for image registration. In the former, selection of accurate common features is crucial for high performance, but features in the EO image are often not the same as those in the IR image; hence, this approach is inadequate for registering EO/IR images. In the latter, normalized mutual information (NMI) has been widely used as a similarity measure due to its high accuracy and robustness, and NMI-based image registration methods assume that the statistical correlation between two images is global. Unfortunately, since we find that EO and IR images often do not satisfy this assumption, the registration accuracy is not high enough for some applications. In this paper, we propose a two-stage NMI-based registration method based on an analysis of the statistical correlation between EO/IR images. In the first stage, for robust registration, we propose two preprocessing schemes: extraction of statistically correlated regions (ESCR) and enhancement of statistical correlation by filtering (ESCF). For each image, ESCR automatically extracts the regions that are highly correlated to the corresponding regions in the other image, and ESCF adaptively filters each image to enhance the statistical correlation between them. In the second stage, the two output images are registered using an NMI-based algorithm. The proposed method provides promising results for various EO/IR sensor image pairs in terms of accuracy, robustness, and speed.
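
The similarity measure at the heart of the method above, normalized mutual information, can be sketched in a few lines for discretized intensity values using NMI(A, B) = (H(A) + H(B)) / H(A, B). This is a generic textbook illustration on toy data, not the paper's implementation:

```python
import math
from collections import Counter

def entropy(values):
    """Shannon entropy (bits) of a discrete value list."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def normalized_mutual_information(img_a, img_b):
    """NMI = (H(A) + H(B)) / H(A, B): 1.0 for independent images,
    2.0 when one image fully determines the other."""
    joint = list(zip(img_a, img_b))
    return (entropy(img_a) + entropy(img_b)) / entropy(joint)

# Toy intensity lists standing in for flattened EO / IR image patches.
eo = [0, 0, 1, 1, 2, 2, 3, 3]
ir = [5, 5, 6, 6, 7, 7, 8, 8]   # aligned: same structure, different intensities
nmi_aligned = normalized_mutual_information(eo, ir)
```

An NMI-based registration searches over geometric transformations for the one maximizing this value; note that NMI is high here even though EO and IR intensities differ, which is exactly why it suits multi-sensor registration.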

Effects of Temperature on the Activity of Pulmonary Surfactant of the Rabbit (온도(溫度)가 가토(家兎) 폐포표면(肺胞表面) 활성물질(活性物質)의 활성도(活性度)에 미치는 영향(影響))

  • Kwon, Koing-Bo
    • The Korean Journal of Physiology
    • /
    • v.7 no.2
    • /
    • pp.1-8
    • /
    • 1973
  • Though it has been reported by Clements et al. and Avery et al. that the activity of the pulmonary surfactant can be altered by temperature changes, conclusive evidence of the effects of temperature on the surfactant system of the lung is yet to come. In the present study, an attempt was made to observe possible effects of a few different temperatures on the activity of the pulmonary surfactant of the rabbit in vivo and in vitro. The rabbit was sacrificed by bleeding and both lungs were completely removed. The lung washings, obtained by gently lavaging the left lung with saline, were placed at 1) 4°C for 1, 5, 10, 15, 30 and 40 days, and 2) 20°C for 1, 2, 3, 4, 5 and 7 days for the in vitro experiment. For the in vivo experiment, the rabbit was kept at 4°C for 4, 8, 12 and 24 hours, and the lung lavage was prepared as described above for the in vitro experiment. The tension-area (T-A) diagram of the lung lavage was recorded automatically by a modified Langmuir-Wilhelmy balance with a synchronized recording system. The surface tensions thus obtained were compared with those of the normal rabbit, and the results are summarized as follows: 1. The maximal surface tension, minimal surface tension and stability index of the normal rabbit lung lavage were 52.5±2.3 dynes/cm, 4.9±2.3 dynes/cm and 1.65, respectively. 2. In the group where the lung lavage was kept at 4°C in vitro, the maximal and minimal surface tensions and the stability index did not show any noticeable changes compared with the normal values up to 30 days. On the 40th day of the experiment, a tendency toward a slight increase in the surface tensions was observed, but the change was not significant. 3. When the lung lavage was kept at 20°C in vitro, the maximal surface tension did not show any appreciable change compared with the normal, except for a slight increase on the 7th day. The minimal surface tension showed an increased value from the 2nd day, and on the 5th and 7th experimental days markedly increased values were observed. The stability index, on the other hand, showed a marked decrease throughout the entire experiment, with values of 0.71 and 0.53 on the 5th and 7th days, respectively. 4. In the group where the rabbit was kept at 4°C in vivo, the maximal surface tensions and stability index of the lung lavage showed little change from the normal. The minimal surface tension at the 12th experimental hour showed a slight increase, but it returned to the normal value at 24 hours.
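
The stability index quoted above is conventionally computed from the maximal and minimal surface tensions as SI = 2(Tmax − Tmin)/(Tmax + Tmin) (Clements' formula; assumed here, since the abstract does not state it). Plugging in the reported normal values reproduces the quoted index of about 1.65:

```python
def stability_index(t_max, t_min):
    """Clements' stability index: 2 * (Tmax - Tmin) / (Tmax + Tmin)."""
    return 2.0 * (t_max - t_min) / (t_max + t_min)

# Reported normal maximal/minimal surface tensions (dynes/cm).
si_normal = stability_index(52.5, 4.9)
```

A lower index (e.g. the 0.71 and 0.53 reported at 20°C) means a smaller spread between maximal and minimal tension, i.e. reduced surfactant activity.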


Processing and Quality Control of Flux Data at Gwangneung Forest (광릉 산림의 플럭스 자료 처리와 품질 관리)

  • Lim, Hee-Jeong;Lee, Young-Hee
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.10 no.3
    • /
    • pp.82-93
    • /
    • 2008
  • In order to ensure a standardized analysis of the eddy covariance measurements, Hong and Kim's quality control program was updated and used to process eddy covariance data measured at two levels on the main flux tower at the Gwangneung site from January to May 2005. The updated program automatically removes outliers in the CO2 and latent heat fluxes. The flag system consists of four quality groups (G, D, B and M). During the study period, missing data amounted to about 25% of the total records. About 60% good-quality data were obtained after the quality control. The number of records in the G group was larger at 40 m than at 20 m, because the 20 m level was within the roughness sublayer, where the presence of the canopy directly influences the character of the turbulence. About 60% of the bad data were due to low wind speed. Energy balance closure at this site was about 40% during the study period. The large imbalance is attributed partly to the combined effects of the neglected heat storage terms, inaccuracy of the ground heat flux, and advection due to the local wind system near the surface. The analysis of wind direction indicates that the frequent occurrence of positive momentum flux was closely associated with the mountain-valley wind system at this site. The negative CO2 flux at night was examined in terms of averaging time. The results show that when the averaging time is longer than 10 min, the magnitude of the calculated CO2 fluxes increases rapidly, suggesting that the 30 min CO2 flux is severely influenced by mesoscale motions or nonstationarity. A proper choice of averaging time needs to be considered to obtain accurate turbulent fluxes during nighttime.
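
A flag system of the kind described above (G, D, B, M) can be sketched as a simple rule cascade over each half-hourly record. The rules and thresholds below are hypothetical stand-ins, not those of Hong and Kim's program:

```python
def flag_record(flux, wind_speed, spike_threshold=3.5, min_wind=1.0):
    """Toy quality flag in the spirit of the four groups:
    G = good, D = dubious, B = bad, M = missing (thresholds hypothetical)."""
    if flux is None:
        return "M"                 # missing record
    if wind_speed < min_wind:
        return "B"                 # low wind speed: bad data
    if abs(flux) > spike_threshold:
        return "D"                 # suspected spike/outlier: dubious
    return "G"

# (flux, wind_speed) pairs in arbitrary illustrative units.
records = [(0.4, 2.1), (None, 1.5), (5.0, 3.0), (0.2, 0.3)]
flags = [flag_record(f, w) for f, w in records]
```

Real eddy covariance QC adds stationarity and integral turbulence characteristic tests on top of such basic screening; this sketch only shows the flag-cascade structure.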

Annotation Method based on Face Area for Efficient Interactive Video Authoring (효과적인 인터랙티브 비디오 저작을 위한 얼굴영역 기반의 어노테이션 방법)

  • Yoon, Ui Nyoung;Ga, Myeong Hyeon;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.83-98
    • /
    • 2015
  • Many TV viewers mainly use portal sites to retrieve information related to a broadcast while watching TV. However, retrieving the information people want takes a lot of time, because the current internet presents too much information that is not required. Consequently, this process cannot satisfy users who want to consume information immediately. Interactive video is being actively investigated to solve this problem. An interactive video provides clickable objects, areas or hotspots to interact with users: when users click an object on the interactive video, they can instantly see additional information related to the video. The following are the three basic steps to make an interactive video using an interactive video authoring tool: (1) create an augmented object; (2) set the object's area and the time it is to be displayed on the video; (3) set an interactive action, such as linking to pages or a hyperlink. However, users of existing authoring tools such as Popcorn Maker and Zentrick spend a lot of time on step (2). If users use wireWAX, they can save considerable time in setting an object's location and display time, because wireWAX uses a vision-based annotation method, but they then need to wait for the object to be detected and tracked. It is therefore desirable to reduce the time spent on step (2) by effectively combining the benefits of the manual and vision-based annotation methods. This paper proposes a novel annotation method that allows the annotator to annotate easily based on face areas. The proposed method consists of two steps: a pre-processing step and an annotation step. The pre-processing is necessary because the system detects shots for users who want to find the contents of a video easily. 
The pre-processing step is as follows: 1) extract shots from the video frames using a color-histogram-based shot boundary detection method; 2) cluster the shots using shot similarities and align them as shot sequences; and 3) detect and track faces in all shots of each shot sequence and save them into the shot sequence metadata with each shot. After pre-processing, the user annotates an object as follows: 1) the annotator selects a shot sequence and then selects a keyframe of a shot in the sequence; 2) the annotator annotates objects at positions relative to the actor's face on the selected keyframe, and the same objects are then annotated automatically, up to the end of the shot sequence, on every shot with a detected face area; and 3) the user assigns additional information to the annotated object. In addition, this paper designs a feedback model to compensate for the defects that may occur after object annotation: wrongly aligned shots, wrongly detected faces, and inaccurate object locations. Furthermore, users can use an interpolation method to restore the positions of objects deleted by the feedback. After feedback, the user can save the annotated object data to the interactive object metadata. Finally, this paper presents an interactive video authoring system implemented to verify the performance of the proposed annotation method using the presented models. The experiments present an analysis of object annotation time and a user evaluation. First, the average object annotation time shows that the proposed tool is twice as fast as existing authoring tools for object annotation. Occasionally the annotation time of the proposed tool was longer than that of existing tools, because wrong shots were detected in the pre-processing. The usefulness and convenience of the system were measured through a user evaluation aimed at users experienced with interactive video authoring systems. Nineteen recruited experts evaluated the system on 11 questions drawn from the CSUQ (Computer System Usability Questionnaire), a questionnaire designed by IBM for evaluating systems. The user evaluation showed that the proposed tool is about 10% more useful for authoring interactive video than the other interactive video authoring systems.
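
The color-histogram-based shot boundary detection used in pre-processing step 1) can be sketched as follows: compute a coarse intensity histogram per frame and mark a cut where consecutive histograms differ strongly. The bin count and threshold here are hypothetical, and frames are flattened grayscale lists rather than real video frames:

```python
from collections import Counter

def histogram(frame, bins=4, max_val=256):
    """Normalized coarse intensity histogram of a flattened frame."""
    counts = Counter((v * bins) // max_val for v in frame)
    n = len(frame)
    return [counts.get(b, 0) / n for b in range(bins)]

def shot_boundaries(frames, threshold=0.5):
    """Mark a shot boundary at index i where the L1 distance between the
    histograms of frames i-1 and i exceeds the (hypothetical) threshold."""
    cuts = []
    for i in range(1, len(frames)):
        ha, hb = histogram(frames[i - 1]), histogram(frames[i])
        if sum(abs(a - b) for a, b in zip(ha, hb)) > threshold:
            cuts.append(i)
    return cuts

dark = [10] * 16      # toy "dark" frame
bright = [240] * 16   # toy "bright" frame: a hard cut between shots
cuts = shot_boundaries([dark, dark, bright, bright])
```

Production systems typically use per-channel color histograms and adaptive thresholds, but the cut-detection logic is the same comparison of adjacent-frame histograms.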

Korean Word Sense Disambiguation using Dictionary and Corpus (사전과 말뭉치를 이용한 한국어 단어 중의성 해소)

  • Jeong, Hanjo;Park, Byeonghwa
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.1-13
    • /
    • 2015
  • As opinion mining in big data applications has been highlighted, a lot of research on unstructured data has been carried out. Social media on the Internet generate unstructured or semi-structured data every second, often in the natural human languages we use in daily life. Many words in human languages have multiple meanings, or senses; as a result, it is very difficult for computers to extract useful information from these datasets. Traditional web search engines are usually based on keyword search, resulting in incorrect search results far from users' intentions. Even though much progress has been made over recent years in enhancing the performance of search engines in order to provide users with appropriate results, there is still much room for improvement. Word sense disambiguation can play a very important role in natural language processing and is considered one of the most difficult problems in this area. Major approaches to word sense disambiguation can be classified as knowledge-based, supervised corpus-based, and unsupervised corpus-based approaches. This paper presents a method that automatically generates a corpus for word sense disambiguation by taking advantage of the examples in existing dictionaries, avoiding expensive sense-tagging processes. The effectiveness of the method is tested with the Naïve Bayes model, one of the supervised learning algorithms, using the Korean standard unabridged dictionary and the Sejong Corpus. The Korean standard unabridged dictionary has approximately 57,000 sentences. The Sejong Corpus has about 790,000 sentences tagged with both part-of-speech and senses. For the experiment, the Korean standard unabridged dictionary and the Sejong Corpus were tested as a combination and as separate entities using cross validation. Only nouns, the target subjects in word sense disambiguation, were selected. 
93,522 word senses among 265,655 nouns, and 56,914 sentences from related proverbs and examples, were additionally combined into the corpus. The Sejong Corpus was easily merged with the Korean standard unabridged dictionary because the Sejong Corpus was tagged based on the sense indices defined by that dictionary. Sense vectors were formed after the merged corpus was created. The terms used in creating the sense vectors were added to the named entity dictionary of the Korean morphological analyzer. Using the extended named entity dictionary, term vectors were extracted from the input sentences, and term vectors for the sentences were created. Given the extracted term vector and the sense vector model built during the pre-processing stage, the sense-tagged terms were determined by vector-space-model-based word sense disambiguation. In addition, this study shows the effectiveness of the corpus merged from the examples in the Korean standard unabridged dictionary and the Sejong Corpus: the experiments show better precision and recall with the merged corpus. This study suggests the method can practically enhance the performance of internet search engines and help us understand the meaning of a sentence more accurately in natural language processing tasks pertinent to search engines, opinion mining, and text mining. The Naïve Bayes classifier used in this study is a supervised learning algorithm based on Bayes' theorem. It assumes that all attributes are independent. Even though this assumption is not realistic and ignores correlations between attributes, the Naïve Bayes classifier is widely used because of its simplicity, and in practice it is known to be very effective in many applications such as text classification and medical diagnosis. However, further research needs to be carried out to consider all possible combinations and/or partial combinations of all senses in a sentence. Also, the effectiveness of word sense disambiguation may be improved if rhetorical structures or morphological dependencies between words are analyzed through syntactic analysis.
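
The Naïve Bayes disambiguation described above, choosing the sense that maximizes the prior times the product of per-word likelihoods under the independence assumption, can be sketched on a toy corpus. The mini-corpus (English glosses for the two senses of the Korean noun 배, "ship" vs. "pear") and the add-one smoothing choice are illustrative assumptions, not the paper's setup:

```python
import math
from collections import Counter, defaultdict

def train_nb(tagged_examples):
    """tagged_examples: list of (context_words, sense).
    Collects sense priors and per-sense word counts."""
    sense_counts, word_counts, vocab = Counter(), defaultdict(Counter), set()
    for words, sense in tagged_examples:
        sense_counts[sense] += 1
        for w in words:
            word_counts[sense][w] += 1
            vocab.add(w)
    return sense_counts, word_counts, vocab

def disambiguate(words, sense_counts, word_counts, vocab):
    """argmax over senses of log P(s) + sum_w log P(w|s), add-one smoothed."""
    total = sum(sense_counts.values())
    best, best_lp = None, float("-inf")
    for sense, sc in sense_counts.items():
        lp = math.log(sc / total)
        denom = sum(word_counts[sense].values()) + len(vocab)
        for w in words:
            lp += math.log((word_counts[sense][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = sense, lp
    return best

corpus = [
    (["sea", "sail", "port"], "ship"),
    (["sail", "harbor"], "ship"),
    (["fruit", "sweet", "tree"], "pear"),
    (["tree", "orchard", "sweet"], "pear"),
]
model = train_nb(corpus)
sense = disambiguate(["sail", "sea"], *model)
```

In the paper's setting, the tagged examples would come from the merged dictionary/Sejong corpus rather than a hand-written list.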

Ontology-Based Process-Oriented Knowledge Map Enabling Referential Navigation between Knowledge (지식 간 상호참조적 네비게이션이 가능한 온톨로지 기반 프로세스 중심 지식지도)

  • Yoo, Kee-Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.61-83
    • /
    • 2012
  • A knowledge map describes the network of related knowledge in the form of a diagram, and therefore underpins the structure of knowledge categorizing and archiving by defining the relationships for referential navigation between knowledge. Referential navigation between knowledge means the cross-referencing relationship exhibited when a piece of knowledge is utilized by a user. To understand the contents of a piece of knowledge, a user usually requires additional information or knowledge related to it in a cause-and-effect relation. This relation expands as the effective connections between knowledge increase, and finally forms a network of knowledge. A network display of knowledge, using nodes and links to arrange and represent the relationships between concepts, can express a more complex knowledge structure than a hierarchical display; moreover, it can help a user make inferences through the links shown on the network. For this reason, building a knowledge map based on ontology technology has been emphasized, in order to describe knowledge and its relationships formally as well as objectively. As this necessity has been emphasized, quite a few approaches have been proposed to fulfill the need. However, most of the research applying the ontology to knowledge maps has focused only on formally expressing knowledge and its relationships with other knowledge to promote the possibility of knowledge reuse. Although many types of ontology-based knowledge maps have been proposed, no research has tried to design and implement a knowledge map that enables referential navigation. This paper presents a methodology for building an ontology-based knowledge map enabling referential navigation between knowledge. 
The ontology-based knowledge map resulting from the proposed methodology can not only express the referential navigation between knowledge but also infer additional relationships among knowledge based on the referential relationships. The most notable benefits of applying the ontology technology to the knowledge map include: formal expression of knowledge and its relationships with others; automatic identification of the knowledge network based on self-inference over the referential relationships; and automatic expansion of the knowledge base designed to categorize and store knowledge according to the network between knowledge. To enable referential navigation between the knowledge included in the knowledge map, and therefore to form the knowledge map as a network, the ontology must describe knowledge in relation to processes and tasks. A process is composed of component tasks, and a task is activated after its required knowledge is provided as input. Since the cause-and-effect relation between knowledge is inherently determined by the sequence of tasks, the referential relationship between knowledge can be indirectly implemented if the knowledge is modeled as an input or output of each task. To describe the knowledge with respect to related processes and tasks, Protege-OWL, an editor that enables users to build ontologies for the Semantic Web, is used. An OWL ontology-based knowledge map includes descriptions of classes (process, task, and knowledge), properties (relationships between process and task, and between task and knowledge), and their instances. Given such an ontology, the OWL formal semantics specifies how to derive its logical consequences, i.e. facts not literally present in the ontology but entailed by its semantics. Therefore a knowledge network can be automatically formulated based on the defined relationships, and referential navigation between knowledge is enabled. 
To verify the validity of the proposed concepts, two real business-process-oriented knowledge maps are exemplified: the knowledge maps of the 'Business Trip Application' and 'Purchase Management' processes. By applying the 'DL-Query' plug-in module provided by Protege-OWL, the performance of the implemented ontology-based knowledge map was examined. Two kinds of queries were tested: one to check whether the knowledge is networked with respect to the referential relations, and one to check whether the ontology-based knowledge network can infer further facts that are not literally described. The test results show that the referential navigation between knowledge was correctly realized and that the additional inference was accurately performed.
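
The inference pattern described above, knowledge linked as inputs and outputs of sequenced tasks with indirect references derived automatically, can be sketched in plain Python. The paper itself uses Protege-OWL and DL-Query; this is only an illustration of the underlying idea, and the 'Business Trip Application' task fragment below is hypothetical:

```python
def knowledge_network(tasks):
    """tasks: ordered (task_name, input_knowledge, output_knowledge) triples.
    Each task links every input knowledge to every output knowledge
    (cause -> effect), yielding the asserted referential edges."""
    edges = set()
    for _, ins, outs in tasks:
        for a in ins:
            for b in outs:
                edges.add((a, b))
    return edges

def transitive_closure(edges):
    """Self-inference step: derive indirect references not literally asserted,
    analogous to entailment over the asserted OWL relationships."""
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# Hypothetical fragment of a 'Business Trip Application' process.
tasks = [
    ("fill_request", ["trip_policy"], ["trip_request"]),
    ("approve", ["trip_request"], ["approval_doc"]),
    ("book", ["approval_doc"], ["itinerary"]),
]
edges = knowledge_network(tasks)
net = transitive_closure(edges)
```

The closure contains, for example, a link from trip_policy to itinerary even though no single task asserts it, mirroring the "facts not literally present in the ontology, but entailed by the semantics" behavior described above.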