• Title/Summary/Keyword: Reference objects

A Study on Iinsim(人心;ren-hsin)-Tosim(道心;dao-hsin) Thought of Lee Je-ma (동무(東武)의 인심(人心)·도심론(道心論))

  • Kang, Tae-Gon;Park, Seong-Sik
    • Journal of Sasang Constitutional Medicine
    • /
    • v.16 no.2
    • /
    • pp.1-16
    • /
    • 2004
  • 1. Objectives: The theory of Iinsim-Tosim (the Human Mind and the Moral Mind), which originates from the "Dae-Woo-Mo(大禹謨)" chapter of the "Sang-Seo(尙書)", is a major point of dispute in classical Confucianism and in Choson neo-Confucianism. This study closely examines the relation between the Iinsim-Tosim theory of existing Confucianism and Lee Je-ma's thought on Iinsim-Tosim, together with his creative exposition of it in connection with his view of the human being, which encompasses the theory of Sasang(四象) and of knowledge and conduct (Jihang, 知行). 2. Methods: We analyze "Gyukchigo(格致藁)", which contains Confucian content and Lee Je-ma's view of the human being, and consult "Dongyi Suse Bowon chobongeun(東醫壽世保元草本卷)", "Dongyi Suse Bowon(東醫壽世保元)", Confucian references, and related theses. 3. Results: (1) In Lee Je-ma's thought, Iinsim and Tosim are closely connected with each other, correlated like two ends of a single whole. They do not stand in a virtue-and-vice relation; both are good concepts, though each has its own fragility: Iinsim is prone to laziness and Tosim to desire. (2) Lee Je-ma insists on holding lisim(理心) and kyungsim(敬心) to overcome these weaknesses of laziness and desire, and he presents learning(學) and speculation(思) as the means of acquiring lisim and kyungsim. (3) The meaning of Lee Je-ma's Iinsim-Tosim thought is not one-sided but covers both nature(性) and emotion(情); Iinsim-Tosim is therefore closely connected with his view of the human being and with his theory of knowledge and conduct (Ji-Hang, 知行論), which differs markedly from existing Confucianism and from the thought of the Myungsunlock (Book of Illuminating Goodness). (4) Iinsim is connected with knowledge(知) and Tosim with conduct(行). (5) Lee Je-ma's thought of Affairs-Mind-Body-Objects(事心身物) is closely connected with Iinsim-Tosim. 4. Conclusions: Comparing Lee Je-ma's thought with the Iinsim-Tosim theories of existing Confucianism or of any specific scholar, there is no passage in which he directly criticizes them. Nevertheless, Lee Je-ma approached the theory of Iinsim-Tosim not as an extension of existing Confucian thought but in an original way of his own. His Iinsim-Tosim thought is therefore closely connected with the most important concepts of Sasang Constitutional Medicine (the theory of nature and emotion and the theory of knowledge and conduct), and Iinsim-Tosim is an important clue to understanding Lee Je-ma's thought.

Updating Building Data in Digital Topographic Map Based on Matching and Generation of Update History Record (수치지도 건물데이터의 매칭 기반 갱신 및 이력 데이터 생성)

  • Park, Seul A;Yu, Ki Yun;Park, Woo Jin
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.32 no.4_1
    • /
    • pp.311-318
    • /
    • 2014
  • Building and structure data occupy a large portion of a mapping database, and their shapes and attributes change continuously over time. An efficient methodology for updating the database to reflect the most recent data is therefore necessary. This study aims to extract the changed data that need updating by overlaying the new and old datasets during the update process. We first search for matching pairs of objects between the two datasets and then classify the update cases by comparison: shape updates are divided into 8 cases and attribute updates into 4 cases. The updated datasets are also saved automatically. For the experiment, the building layer of the 1:5,000 digital topographic map was used as the target data, the building layer of the Korea Address Information System map as the reference data, and built-up areas in Gwanak-gu, Seoul as the test area. As a result, 82.1% of the building objects were updated in shape and 34.5% in attribute.
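
A minimal sketch of the overlay-based matching step described above, assuming the old and new building layers are available as GeoPackage files with the hypothetical names used below; the paper's 8 shape-update and 4 attribute-update cases are not reproduced here.

```python
# Hypothetical overlay-based building matching (illustrative, not the authors' code).
import geopandas as gpd

def match_buildings(old_path, new_path, iou_threshold=0.4):
    old = gpd.read_file(old_path)
    new = gpd.read_file(new_path)
    # Spatial join keeps only polygon pairs that actually intersect.
    joined = gpd.sjoin(new, old, how="inner", predicate="intersects")
    pairs = []
    for new_idx, row in joined.iterrows():
        old_geom = old.geometry.loc[row["index_right"]]
        new_geom = new.geometry.loc[new_idx]
        union = new_geom.union(old_geom).area
        iou = new_geom.intersection(old_geom).area / union if union > 0 else 0.0
        if iou >= iou_threshold:          # treat as a matching pair
            pairs.append((row["index_right"], new_idx, iou))
    return pairs

pairs = match_buildings("old_buildings.gpkg", "new_buildings.gpkg")
print(f"{len(pairs)} candidate matching pairs")
```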

No-Reference Visibility Prediction Model of Foggy Images Using Perceptual Fog-Aware Statistical Features (시지각적 통계 특성을 활용한 안개 영상의 가시성 예측 모델)

  • Choi, Lark Kwon;You, Jaehee;Bovik, Alan C.
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.4
    • /
    • pp.131-143
    • /
    • 2014
  • We propose a no-reference perceptual fog density and visibility prediction model for a single foggy scene based on natural scene statistics (NSS) and perceptual "fog aware" statistical features. Unlike previous studies, the proposed model predicts fog density without multiple foggy images, without salient objects in the scene such as lane markings or traffic signs, without supplementary geographical information from an onboard camera, and without training on human-rated judgments. The proposed fog density and visibility predictor makes use only of measurable deviations from statistical regularities observed in natural foggy and fog-free images. Perceptual "fog aware" statistical features are derived from a corpus of natural foggy and fog-free images using a spatial NSS model and observed fog characteristics, including low contrast, faint color, and shifted luminance. The proposed model not only predicts perceptual fog density for the entire image but also provides local fog density for each patch. To evaluate the performance of the proposed model against human judgments of fog visibility, we conducted a human subjective study using a variety of 100 foggy images. Results show that the fog density predicted by the model correlates well with human judgments. The proposed model is a new approach to fog density assessment based on human visual perception. We hope that it will provide fertile ground for future research, not only to enhance the visibility of foggy scenes but also to accurately evaluate the performance of defog algorithms.
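
An illustrative sketch of patch-wise statistics in the spirit of the "fog aware" features described above (low local contrast and low saturation are typical fog indicators); the feature set, patch size, and file name are assumptions, not the paper's exact features.

```python
# Patch-wise fog-related statistics (illustrative only, not the paper's feature set).
import numpy as np
from skimage import io, color, util

def patch_statistics(image_path, patch=8):
    rgb = util.img_as_float(io.imread(image_path))
    gray = color.rgb2gray(rgb)
    hsv = color.rgb2hsv(rgb)
    h, w = gray.shape
    contrast = np.zeros((h // patch, w // patch))
    saturation = np.zeros_like(contrast)
    for i in range(h // patch):
        for j in range(w // patch):
            block = gray[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            sat = hsv[i*patch:(i+1)*patch, j*patch:(j+1)*patch, 1]
            contrast[i, j] = block.std()      # local RMS contrast
            saturation[i, j] = sat.mean()     # local color saturation
    return contrast, saturation

contrast_map, saturation_map = patch_statistics("foggy_scene.png")
```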

Object Detection Algorithm Using Edge Information on the Sea Environment (해양 환경에서 에지 정보를 이용한 물표 추출 알고리즘)

  • Jeong, Jong-Myeon;Park, Gyei-Kark
    • Journal of the Korea Society of Computer and Information
    • /
    • v.16 no.9
    • /
    • pp.69-76
    • /
    • 2011
  • According to related reports, about 60 percent of ship collisions result from operating mistakes caused by human factors, and negligence in observation accounts for 66.8 percent of those human-factor accidents. Automatic detection and tracking of objects in IR images is therefore crucial for safe navigation, because it relieves the officer's burden and compensates for the imperfections of the human visual system. In this paper, we present a method for detecting objects such as ships, rocks, and buoys in a sea IR image. Most edges in a sea image are horizontal, while most vertical edges originate from object areas; the presented method uses this as the key characteristic for object detection. Vertical edges are extracted from the input image and isolated edges are eliminated, and a morphological closing operation is then performed on the vertical edges, so that the vertical edges that actually compose an object become connected and form an object candidate region. Next, reference object regions are extracted using horizontal edges, which appear on the boundaries between the sea surface and the objects. Finally, object regions are acquired by sequentially integrating the reference regions and the object candidate regions.
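
A hedged sketch of the vertical-edge and closing idea described above, simplified to a single thresholded gradient pass; the thresholds, kernel size, and file name are assumptions, and the reference-region integration step is omitted.

```python
# Simplified vertical-edge candidate extraction for a grayscale IR frame (illustrative).
import cv2
import numpy as np

def object_candidates(ir_gray, edge_thresh=40, min_area=30):
    # The horizontal gradient responds to vertical edges, which mostly come from objects at sea.
    grad_x = cv2.Sobel(ir_gray, cv2.CV_32F, 1, 0, ksize=3)
    vertical_edges = (np.abs(grad_x) > edge_thresh).astype(np.uint8) * 255
    # Morphological closing connects edges that belong to the same object.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 3))
    closed = cv2.morphologyEx(vertical_edges, cv2.MORPH_CLOSE, kernel)
    # Keep connected components large enough to be object candidate regions.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(closed)
    return [stats[i, :4] for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]

frame = cv2.imread("sea_ir_frame.png", cv2.IMREAD_GRAYSCALE)
candidates = object_candidates(frame)   # list of (x, y, width, height) boxes
```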

A study on vision system based on Generalized Hough Transform 2-D object recognition (Generalized Hough Transform을 이용한 이차원 물체인식 비젼 시스템 구현에 대한 연구)

  • Koo, Bon-Cheol;Park, Jin-Soo;Chien, Sung-Il
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.1
    • /
    • pp.67-78
    • /
    • 1996
  • The purpose of this paper is object recognition, even in the presence of occlusion, using the generalized Hough transform (GHT). The GHT can be considered a kind of model-based object recognition algorithm and is executed in two stages: the first stage stores the information of the model in the form of an R-table (reference table), and the next stage identifies the existence of objects in the image by using the R-table. An improved GHT method is proposed for a practical vision system. First, in constructing the R-table, we extract a partial arc from a portion of the whole object boundary and use this partial arc to construct the R-table; a clustering algorithm is also employed to compensate for errors arising from digitizing the object image. Second, an efficient method is introduced that avoids Ballard's 4-D accumulator array, which is otherwise necessary for estimating the position, orientation, and scale change of an object; a 2-D array is sufficient for recognizing an object. In particular, a scale-token method is introduced for calculating the scale change, which is easily affected by camera zoom. Our test results show that the improved hierarchical GHT method operates stably in realistic vision situations, even when objects are occluded.
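
A minimal sketch of the classical GHT R-table construction and voting (translation only); the paper's improvements, including the partial-arc R-table, the hierarchical scheme, and the scale-token method, are not reproduced here.

```python
# Classical GHT voting with a 2-D accumulator (translation only; illustrative).
import numpy as np
from collections import defaultdict

def build_r_table(boundary_pts, gradient_dirs, reference_pt, n_bins=36):
    """Index displacement vectors (reference - boundary point) by gradient direction."""
    r_table = defaultdict(list)
    for (x, y), theta in zip(boundary_pts, gradient_dirs):
        bin_idx = int((theta % (2 * np.pi)) / (2 * np.pi) * n_bins) % n_bins
        r_table[bin_idx].append((reference_pt[0] - x, reference_pt[1] - y))
    return r_table

def ght_vote(edge_pts, gradient_dirs, r_table, image_shape, n_bins=36):
    """Accumulate votes for the object's reference-point location."""
    acc = np.zeros(image_shape, dtype=np.int32)
    h, w = image_shape
    for (x, y), theta in zip(edge_pts, gradient_dirs):
        bin_idx = int((theta % (2 * np.pi)) / (2 * np.pi) * n_bins) % n_bins
        for dx, dy in r_table[bin_idx]:
            rx, ry = int(round(x + dx)), int(round(y + dy))
            if 0 <= rx < w and 0 <= ry < h:
                acc[ry, rx] += 1
    return acc   # the accumulator peak is the most likely reference-point location
```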

Geometrically and Topologically Consistent Map Conflation for Federal and Local Governments (Geometry 및 Topology측면에서 일관성을 유지한 방법을 이용한 연방과 지방정부의 공간데이터 융합)

  • Kang, Ho-Seok
    • Journal of the Korean Geographical Society
    • /
    • v.39 no.5 s.104
    • /
    • pp.804-818
    • /
    • 2004
  • As spatial data resources become more abundant, the potential for conflict among them increases. Conflicts can exist between two or more spatial datasets covering the same area and categories. It therefore becomes increasingly important to be able to relate these spatial data sources to one another and to create new spatial datasets with matching geometry and topology. One extensive spatial dataset is the US Census Bureau's TIGER file, which includes census tracts, block groups, and blocks. At present, however, census maps often carry information that conflicts with the detailed spatial information maintained by municipalities. Therefore, in order to fully utilize census maps and their valuable demographic and economic information, the locational information of the census maps must be reconciled with the more accurate municipally maintained reference maps and imagery. This paper formulates a conceptual framework and two map models for conflation that make the source maps geometrically and topologically consistent with the reference maps. The first model is based on the cell model of a map, in which a map is a cell complex consisting of 0-cells, 1-cells, and 2-cells. The second map model is based on a different set of primitive objects that remain homeomorphic even after map generalization. A new hierarchy-based map conflation is also presented, incorporating physical, logical, and mathematical boundaries to reduce complexity and computational load. Map conflation principles with iteration are formulated, and census maps are used as a conflation example. The steps consist of attribute embedding, meaningful-node identification, cartographic 0-cell matching, cartographic 1-cell matching, and map transformation.
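
A simple sketch of one piece of this pipeline, the cartographic 0-cell (node) matching step, under an assumed distance-only criterion; the coordinates, tolerance, and nearest-neighbor rule below are illustrative and do not reproduce the paper's full matching procedure.

```python
# Nearest-neighbor 0-cell matching within a tolerance (illustrative assumption).
import numpy as np
from scipy.spatial import cKDTree

def match_nodes(source_nodes, reference_nodes, tolerance=5.0):
    """source_nodes, reference_nodes: (N, 2) arrays of map coordinates."""
    tree = cKDTree(reference_nodes)
    dist, idx = tree.query(source_nodes, k=1)
    # Keep only matches closer than the tolerance: source index -> reference index.
    return {i: int(j) for i, (d, j) in enumerate(zip(dist, idx)) if d <= tolerance}

src = np.array([[100.0, 200.0], [150.0, 240.0]])
ref = np.array([[101.5, 198.7], [400.0, 400.0]])
print(match_nodes(src, ref))   # {0: 0}
```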

Efficient Methods for Detecting Frame Characteristics and Objects in Video Sequences (내용기반 비디오 검색을 위한 움직임 벡터 특징 추출 알고리즘)

  • Lee, Hyun-Chang;Lee, Jae-Hyun;Jang, Ok-Bae
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.1
    • /
    • pp.1-11
    • /
    • 2008
  • This paper extracts motion vector characteristics to support efficient content-based video retrieval. Traditionally, the current frame of a video is divided into blocks of equal size and a block matching algorithm (BMA) is used to predict the motion of each block relative to the reference frame on the time axis. However, BMA has several restrictions, and the vectors it produces sometimes differ from the actual motion. The full search method can be applied to address this, but it requires a large amount of computation. As an alternative, the present study extracts the spatio-temporal characteristics of motion vectors using Motion Vector Spatio-Temporal Correlations (MVSTC), which allows motion vectors to be predicted more accurately from the motion vectors of neighboring blocks. Because there are multiple reference block vectors, however, this additional information must be sent to the receiving end, so it is necessary to consider how to predict the motion characteristics of each block and how to define an appropriate search range. Based on the proposed algorithm, we examine motion prediction techniques for motion compensation and present the results of applying them.
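
A sketch of the baseline full-search block matching discussed above, using the sum of absolute differences (SAD); the block size and search range are assumptions, and the MVSTC prediction from neighboring blocks is not reproduced.

```python
# Full-search block matching with SAD over grayscale frames (baseline BMA, illustrative).
import numpy as np

def full_search_bma(ref_frame, cur_frame, block=16, search=8):
    """Return an array of (dy, dx) motion vectors, one per block of the current frame."""
    h, w = cur_frame.shape
    mvs = np.zeros((h // block, w // block, 2), dtype=np.int32)
    for by in range(h // block):
        for bx in range(w // block):
            y0, x0 = by * block, bx * block
            cur_blk = cur_frame[y0:y0 + block, x0:x0 + block].astype(np.int32)
            best, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    ry, rx = y0 + dy, x0 + dx
                    if ry < 0 or rx < 0 or ry + block > h or rx + block > w:
                        continue
                    ref_blk = ref_frame[ry:ry + block, rx:rx + block].astype(np.int32)
                    sad = np.abs(cur_blk - ref_blk).sum()   # sum of absolute differences
                    if best is None or sad < best:
                        best, best_mv = sad, (dy, dx)
            mvs[by, bx] = best_mv
    return mvs
```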

Optimization Model for the Mixing Ratio of Coatings Based on the Design of Experiments Using Big Data Analysis (빅데이터 분석을 활용한 실험계획법 기반의 코팅제 배합비율 최적화 모형)

  • Noh, Seong Yeo;Kim, Young-Jin
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.3 no.10
    • /
    • pp.383-392
    • /
    • 2014
  • Coatings research is one of the most popular and active areas in the polymer industry, and coatings are becoming increasingly important in the electronics, medical, and optical fields. In particular, the technical requirements for the performance and precision of coatings continue to rise with the development of automotive and electronic parts. At the same time, the industry's need for more intelligent and automated systems is growing with the introduction of the IoT and of big data analysis based on environmental and context information. In this paper, we propose an optimization model for coating mixing ratios based on the design of experiments, using Internet of Things technology and big data analytics. The coating formulation is first calculated from an analysis based on the experimental design, and the operator then corrects it with respect to the errors observed between the formulation used at the actual production site and the corrected result data. The optimization model further corrects the reference values by leveraging big data analysis and IoT technology: the existing coating formulation is applied as reference data, and manufacturing environment and context information are used to maintain color and quality, the most important factors, from which the corrected formulation is derived. The data obtained from the experiments and analysis improve the accuracy of the mixing data and make it possible to shorten the working hours per LOT. They also shorten the production time by reducing the processing time per treatment, which can contribute to cost reduction and a lower defect rate. Furthermore, standard data for the manufacturing process can be obtained for various models.
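
A generic illustration of design-of-experiments-based optimization with two hypothetical mixing factors and made-up response values; this is not the paper's model or data, only a sketch of how a factorial design can be fitted and used to pick a mixing ratio.

```python
# Two-factor factorial design fitted with a quadratic response surface (illustrative).
import itertools
import numpy as np

levels = [-1.0, 0.0, 1.0]                                    # coded factor levels
design = np.array(list(itertools.product(levels, levels)))   # 3^2 factorial runs
response = np.array([2.1, 2.9, 2.4, 3.0, 3.8, 3.1, 2.6, 3.2, 2.5])  # hypothetical quality data

# Quadratic model: y = b0 + b1*x1 + b2*x2 + b3*x1*x2 + b4*x1^2 + b5*x2^2
x1, x2 = design[:, 0], design[:, 1]
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(X, response, rcond=None)

# Evaluate the fitted surface on a grid and pick the best coded mixing ratio.
grid = np.array(list(itertools.product(np.linspace(-1, 1, 41), repeat=2)))
pred = np.column_stack([np.ones(len(grid)), grid[:, 0], grid[:, 1],
                        grid[:, 0] * grid[:, 1], grid[:, 0]**2, grid[:, 1]**2]) @ coef
best = grid[pred.argmax()]
print("optimal coded mixing ratio:", best)
```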

Implementation of GPM Core Model Using OWL DL (OWL DL을 사용한 GPM 핵심 모델의 구현)

  • Choi, Ji-Woong;Park, Ho-Byung;Kim, Hyung-Jean;Kim, Myung-Ho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.1
    • /
    • pp.31-42
    • /
    • 2010
  • GPM (Generic Product Model), developed by Hitachi in Japan, is a common data model for integrating and sharing the life cycle data of nuclear power plants. GPM consists of the GPM core model (an abstract model), an implementation language for the model, and a reference library written in that language. The GPM core model can construct a semantic network model consisting of relationships among objects. The initial GPM provided GPML as the implementation language supporting this feature of the core model, but GPML was later replaced by the XML-based GPM-XML to achieve data interoperability with heterogeneous applications accessing a GPM data model. However, data models written in GPM-XML are insufficient for use as semantic network models, because few studies support GPM-XML and enable such use. This paper proposes OWL as the implementation language for the GPM core model, because OWL can describe ontologies similar to semantic network models and has an abundant supply of technical standards and supporting tools. In addition, OWL can be expressed in XML-based RDF/XML, which guarantees data interoperability. We use OWL DL, one of the three sublanguages of OWL, because it guarantees both complete reasoning and maximum expressiveness. This paper explains how to overcome the differences between GPM and OWL DL and, on that basis, how to convert the reference library written in GPML into OWL DL ontologies written in RDF/XML.
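
A small sketch of expressing object relationships as an OWL ontology and serializing it to RDF/XML with rdflib; the class and property names below are hypothetical and do not come from the actual GPM reference library.

```python
# Building a tiny OWL ontology (semantic-network-style relations) and emitting RDF/XML.
from rdflib import Graph, Namespace, RDF, RDFS, OWL, Literal

EX = Namespace("http://example.org/gpm#")   # hypothetical namespace
g = Graph()
g.bind("ex", EX)
g.bind("owl", OWL)

# Two classes and an object property relating them.
g.add((EX.Plant, RDF.type, OWL.Class))
g.add((EX.Component, RDF.type, OWL.Class))
g.add((EX.hasComponent, RDF.type, OWL.ObjectProperty))
g.add((EX.hasComponent, RDFS.domain, EX.Plant))
g.add((EX.hasComponent, RDFS.range, EX.Component))
g.add((EX.hasComponent, RDFS.label, Literal("has component")))

# RDF/XML keeps the ontology exchangeable with XML-based tools.
print(g.serialize(format="xml"))
```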

Optimal Parameter Analysis and Evaluation of Change Detection for SLIC-based Superpixel Techniques Using KOMPSAT Data (KOMPSAT 영상을 활용한 SLIC 계열 Superpixel 기법의 최적 파라미터 분석 및 변화 탐지 성능 비교)

  • Chung, Minkyung;Han, Youkyung;Choi, Jaewan;Kim, Yongil
    • Korean Journal of Remote Sensing
    • /
    • v.34 no.6_3
    • /
    • pp.1427-1443
    • /
    • 2018
  • Object-based image analysis (OBIA) allows higher computational efficiency and better use of the information inherent in an image, as it reduces the complexity of the image while maintaining its properties. Superpixel methods oversegment the image into units smaller than ordinary object segments and preserve the edges of the image well. SLIC (simple linear iterative clustering) is known to outperform previous superpixel methods in segmentation quality. Although the input parameter of SLIC, the number of superpixels, has considerable influence on the segmentation results, its impact has not been investigated sufficiently. In this study, we performed optimal parameter analysis and evaluated change detection for SLIC-based superpixel techniques using KOMPSAT data. For superpixel generation, three superpixel methods (SLIC; SLIC0, the zero-parameter version of SLIC; and SNIC, simple non-iterative clustering) were used with superpixel sizes ranging from 5×5 pixels to 50×50 pixels. The segmentation results were then analyzed for how well they preserve the edges of the change detection reference data. Based on the optimal parameter analysis, image segmentation boundaries were obtained from the difference image of the bi-temporal images, and DBSCAN (density-based spatial clustering of applications with noise) was applied to cluster the superpixels into objects of a suitable size for change detection. The changes of features were detected for each superpixel and compared with the reference data for evaluation. The change detection results showed that better change detection can be achieved even with a larger superpixel size if the superpixels are generated with high regularity of size and shape.
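
A hedged sketch of superpixel generation with SLIC followed by DBSCAN clustering of per-superpixel features, mirroring the workflow described above; the file name, target superpixel size, feature choice, and clustering parameters are illustrative assumptions, not the study's settings.

```python
# SLIC superpixels on a difference image, then DBSCAN over superpixel features (illustrative).
import numpy as np
from skimage.io import imread
from skimage.util import img_as_float
from skimage.segmentation import slic
from sklearn.cluster import DBSCAN

image = img_as_float(imread("difference_image.png"))   # hypothetical bi-temporal difference image
h, w = image.shape[:2]
target = 25                                             # aim for roughly 25x25-pixel superpixels
n_segments = (h * w) // (target * target)

labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)

# One feature vector per superpixel: mean intensity plus normalized centroid.
feats = []
for sp in range(labels.max() + 1):
    mask = labels == sp
    ys, xs = np.nonzero(mask)
    feats.append([image[mask].mean(), ys.mean() / h, xs.mean() / w])
feats = np.asarray(feats)

# Group similar, nearby superpixels into larger objects for change detection.
clusters = DBSCAN(eps=0.1, min_samples=3).fit_predict(feats)
```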