• Title/Summary/Keyword: 표준집합 (standard set)


Improvement of Measurement Accuracy by Correcting Systematic Error Associated with the X-ray Diffractometer (X-선 회절 장비의 기계적 오차 수정을 통한 분석 정확도 향상)

  • Choi, Dooho
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.18 no.10
    • /
    • pp.97-101
    • /
    • 2017
  • X-ray diffractometers are used to characterize material properties, such as phase, texture, lattice constant, and residual stress, based on the beams diffracted from specimens. Quantitative analyses using X-rays are typically conducted by measuring the peak positions of the diffracted beams. However, long-term use of the diffractometer, like that of any other machine, results in errors in the mechanical parts, which can degrade the accuracy of quantitative analyses. In this study, the process of correcting systematic errors in the 2θ range of 30~90° is discussed, for which strain-free Si powder from NIST was used as the standard specimen. To evaluate the impact of this error correction, we conducted a quantitative analysis of the true lattice constant of tungsten thin films.
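
The correction procedure described above can be sketched in Python: fit a low-order polynomial to the difference between measured and reference peak positions, then subtract it from raw readings. The peak values below are illustrative, not NIST-certified data:

```python
import numpy as np

# Hypothetical measured vs. certified Si peak positions (degrees 2-theta).
measured = np.array([28.47, 47.34, 56.16, 69.17, 76.42, 88.07])
reference = np.array([28.44, 47.30, 56.12, 69.13, 76.38, 88.03])

# Fit a low-order polynomial to the systematic error delta(2theta).
error = measured - reference
coeffs = np.polyfit(measured, error, deg=2)

def correct(two_theta):
    """Subtract the fitted systematic error from a raw 2-theta reading."""
    return two_theta - np.polyval(coeffs, two_theta)

corrected = correct(measured)
residual = corrected - reference
```

Once the polynomial is fitted on the standard specimen, `correct` can be applied to peak positions measured on any other sample (e.g. the tungsten films) in the same 2θ range.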

A New Approach for Forest Management Planning : Fuzzy Multiobjective Linear Programming (삼림경영계획(森林經營計劃)을 위한 새로운 접근법(接近法) : 퍼지 다목표선형계획법(多目標線型計劃法))

  • Woo, Jong Choon
    • Journal of Korean Society of Forest Science
    • /
    • v.83 no.3
    • /
    • pp.271-279
    • /
    • 1994
  • This paper describes fuzzy multiobjective linear programming, a relatively new approach in forestry for solving forest management problems. First, fuzzy set theory is explained briefly, and fuzzy linear programming (FLP) and fuzzy multiobjective linear programming (FMLP) are introduced conceptually. With information obtained from the study area in Thailand, a standard linear programming problem is formulated, and optimal solutions (present net worth) are calculated with this LP model for each of four timber price groups. The LP model is then reformulated as a fuzzy multiobjective linear programming model to accommodate uncertain timber values, and a compromise solution is attained with this FMLP model. The optimal solutions of the four objective functions for the four timber price groups and the compromise solution are compared and discussed.
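
A common way to obtain an FMLP compromise solution is Zimmermann's max-min formulation: maximize λ subject to each objective's membership value being at least λ. A toy sketch with two competing objectives (the feasible set and the payoff ranges [0, 10] are assumptions for illustration, not the paper's Thailand model):

```python
from scipy.optimize import linprog

# Two competing objectives f1 = x1, f2 = x2 on the feasible set x1 + x2 <= 10.
# Individual optima give each objective the payoff range [0, 10], so the
# membership of objective k is mu_k = f_k / 10.  The max-min compromise
# maximizes lambda subject to mu_k >= lambda for every k.
# Variables: [x1, x2, lam]; linprog minimizes, so use c = [0, 0, -1].
res = linprog(
    c=[0, 0, -1],
    A_ub=[
        [1, 1, 0],    # x1 + x2 <= 10
        [-1, 0, 10],  # 10*lam - x1 <= 0  ->  mu_1 >= lam
        [0, -1, 10],  # 10*lam - x2 <= 0  ->  mu_2 >= lam
    ],
    b_ub=[10, 0, 0],
    bounds=[(0, None), (0, None), (0, 1)],
)
x1, x2, lam = res.x
```

Here the compromise lands at x1 = x2 = 5 with λ = 0.5: neither objective reaches its individual optimum, but both achieve the same membership level.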


The Method of Data Integration based on Maritime Sensors using USN (USN을 활용한 해양 센서 데이터 집합 방안)

  • Hong, Sung-Hwa;Ko, Jae-Pil;Kwak, Jae-Min
    • Journal of Advanced Navigation Technology
    • /
    • v.21 no.3
    • /
    • pp.306-311
    • /
    • 2017
  • In the future ubiquitous network, information will be collected from various sensors in the field. Since sensor nodes are equipped with small, often irreplaceable batteries of limited capacity, the network must be energy-efficient in order to maximize its lifetime. In this paper, we propose an effective network routing method that operates at low power while transmitting the data and information obtained from sensor networks, and we identify the number of sensors with the best connectivity to guide proper sensor placement. The purposes of this research are to develop sensor middleware that integrates maritime information and to propose a routing algorithm for gathering the maritime information of various sensors. In addition, for safer ship navigation, we propose a method of constructing a sensor network from the various electronic devices that are difficult to access on a ship, and then building a communication system based on NMEA (National Marine Electronics Association), the ship communication standard.

Density Scalability of Video Based Point Cloud Compression by Using SHVC Codec (SHVC 비디오 기반 포인트 클라우드 밀도 스케일러빌리티 방안)

  • Hwang, Yonghae;Kim, Junsik;Kim, Kyuheon
    • Journal of Broadcast Engineering
    • /
    • v.25 no.5
    • /
    • pp.709-722
    • /
    • 2020
  • A point cloud, a cluster of numerous points, can express a 3D object beyond the 2D plane. Each point basically contains 3D coordinates and color data, and optionally reflectance or other attributes. Point clouds therefore demand much more effective compression technology. Video-based Point Cloud Compression (V-PCC) is under development and standardization, building on established video codecs. Despite its highly effective compression, point cloud services will still be limited by terminal specifications and network conditions. 2D video faced the same problems and remedies them with technologies such as Scalable High-efficiency Video Coding (SHVC) and Dynamic Adaptive Streaming over HTTP (DASH). This paper proposes a density scalability method for V-PCC using the SHVC codec.
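
The density scalability idea can be illustrated with a toy sketch: a sparse base layer that a constrained terminal can decode alone, plus an enhancement layer that restores the full density. The uniform subsampling below is an assumed layering scheme for illustration, not the paper's SHVC-based method:

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.random((1000, 3))  # synthetic point cloud: x, y, z per point

# Every k-th point forms the low-density base layer; the remaining points
# form the enhancement layer.  A constrained decoder reconstructs only the
# base layer, while a full decoder merges both layers to recover the
# original density.
k = 4
base_idx = np.arange(0, len(points), k)
base = points[base_idx]
enhancement = np.delete(points, base_idx, axis=0)
full = np.vstack([base, enhancement])
```

The base layer here carries 1/k of the points, mirroring how an SHVC base layer carries a reduced-fidelity version of the signal.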

Reconstruction of parametrized model using only three vanishing points from a single image (한 영상으로부터 3개의 소실 점들만을 사용한 매개 변수의 재구성)

  • 최종수;윤용인
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.3C
    • /
    • pp.419-425
    • /
    • 2004
  • This paper presents a new method that uses only three vanishing points to compute the dimensions of an object and its pose from a single perspective-projection image taken by a camera. Our approach computes the three vanishing points without information such as the focal length or the rotation matrix. We assume that the object can be modeled as a linear function of a dimension vector v. The input to the reconstruction is a set of correspondences between features in the model and features in the image. The resulting optimization over the dimensions of the parameterized model can be solved by standard nonlinear optimization techniques combined with a multi-start method, which generates multiple starting points for the optimizer by sampling the parameter space uniformly.
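
The multi-start strategy described above can be sketched as follows: sample starting points uniformly over the parameter space, run a standard local optimizer from each, and keep the best result. The multi-modal objective below is illustrative, not the paper's model-to-image residual:

```python
import numpy as np
from scipy.optimize import minimize

def objective(v):
    # Toy multi-modal objective: minima (value 0) at every combination of
    # coordinates equal to +/-1, standing in for a non-convex residual.
    return float(np.sum((v ** 2 - 1.0) ** 2))

# Multi-start: sample starting points uniformly over the parameter space
# and keep the best local solution found by a standard optimizer.
rng = np.random.default_rng(1)
starts = rng.uniform(-2.0, 2.0, size=(20, 2))
best = min((minimize(objective, s) for s in starts), key=lambda r: r.fun)
```

A single start can get trapped near a poor local minimum; uniform sampling makes it likely that at least one start lies in the basin of a good minimum.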

Clustering of Categorical Data using Rough Entropy (러프 엔트로피를 이용한 범주형 데이터의 클러스터링)

  • Park, Inkyoo
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.13 no.5
    • /
    • pp.183-188
    • /
    • 2013
  • A variety of cluster analysis techniques are prerequisites for grouping objects with similar characteristics in data mining, but those clustering algorithms have great difficulty dealing with the categorical data found in databases. The imprecise handling of uncertainty in categorical data during clustering stems from relying solely on the algebraic logic of rough sets, which degrades stability and effectiveness. This paper proposes an information-theoretic rough entropy (RE) that takes the dependency of attributes into account, together with a technique called min-mean-mean roughness (MMMR) for selecting the clustering attribute. We analyze and compare the performance of the proposed technique with K-means, fuzzy techniques, and other standard-deviation roughness methods on the ZOO dataset. The results verify the better performance of the proposed approach.
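
Roughness-based attribute selection can be sketched on a toy categorical table. Note this is a simplified mean-roughness variant for illustration, not the paper's exact MMMR or rough-entropy formulation; the table values are invented:

```python
from collections import defaultdict

# Toy categorical table: rows are objects, columns are attributes.
table = [
    ("small", "red",  "yes"),
    ("small", "red",  "yes"),
    ("big",   "red",  "no"),
    ("big",   "blue", "no"),
    ("small", "blue", "yes"),
]

def partition(col):
    """Equivalence classes induced by one attribute (indiscernibility)."""
    classes = defaultdict(set)
    for idx, row in enumerate(table):
        classes[row[col]].add(idx)
    return list(classes.values())

def roughness(target_col, wrt_col):
    """Mean roughness of target_col's classes w.r.t. wrt_col's partition."""
    base = partition(wrt_col)
    vals = []
    for x in partition(target_col):
        lower = sum(len(b) for b in base if b <= x)   # lower approximation
        upper = sum(len(b) for b in base if b & x)    # upper approximation
        vals.append(1.0 - lower / upper)
    return sum(vals) / len(vals)

# Mean roughness of each attribute w.r.t. the others; pick the attribute
# with the minimum mean roughness as the clustering attribute.
n = len(table[0])
mmr = {a: sum(roughness(a, b) for b in range(n) if b != a) / (n - 1)
       for a in range(n)}
best_attr = min(mmr, key=mmr.get)
```

A low mean roughness means the attribute's classes are crisply describable by the other attributes, which is why it makes a good clustering attribute.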

An Image Segmentation Algorithm using the Shape Space Model (모양공간 모델을 이용한 영상분할 알고리즘)

  • 김대희;안충현;호요성
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.2
    • /
    • pp.41-50
    • /
    • 2004
  • Since the MPEG-4 visual standard enables content-based functionalities, it is necessary to extract video objects from video sequences. Segmentation algorithms can largely be classified into two categories: automatic segmentation and user-assisted segmentation. In this paper, we propose a new user-assisted image segmentation method based on the active contour. If we define a shape space as the set of all possible variations from the initial curve and assume that the shape space is linear, it can be decomposed into the column space and the left null space of the shape matrix. In the proposed method, the shape space vector in the column space describes changes from the initial curve to the imaginary feature curve, while a dynamic graph search algorithm describes the detailed shape of the object in the left null space. Since we employ the shape matrix and the SUSAN operator to outline object boundaries, the proposed algorithm can ignore unwanted feature points generated by low-level image processing operations and is therefore applicable to images with complex backgrounds. We can also compensate for the limitations of the shape matrix with the dynamic graph search algorithm.
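
The column-space/left-null-space split of a contour displacement can be sketched with least squares. The two translation columns below are an assumed deformation basis for illustration, not the paper's shape matrix:

```python
import numpy as np

# Initial contour of 4 points, flattened to (x1, y1, ..., x4, y4).
initial = np.array([0, 0, 1, 0, 1, 1, 0, 1], dtype=float)

# Toy shape matrix: columns span the allowed global deformations
# (uniform x-translation and uniform y-translation of the whole contour).
S = np.array([[1, 0], [0, 1]] * 4, dtype=float)     # shape (8, 2)

observed = initial + S @ np.array([0.3, -0.2])       # pure in-space motion
observed += np.array([0, 0, 0, 0, 0.05, 0, 0, 0])    # small local detail

# Least squares splits the displacement into a column-space component
# (global shape-space motion) and a left-null-space residual (the local
# detail that the dynamic graph search would refine).
d = observed - initial
coef, *_ = np.linalg.lstsq(S, d, rcond=None)
in_space = S @ coef
residual = d - in_space
```

The residual is orthogonal to every column of `S`, i.e. it lives entirely in the left null space, which is exactly the part the shape matrix cannot express.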

A Multi-Agent Message Transport Architecture for Supporting Close Collaboration among Agents (에이전트들 간의 밀접한 협력을 지원하기 위한 다중 에이전트 메시지 전송 구조)

  • Chang, Hai Jin
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.3 no.3
    • /
    • pp.125-134
    • /
    • 2014
  • This paper proposes a multi-agent message transport architecture for application areas that need fast message communication for close collaboration among agents. In the FIPA (Foundation for Intelligent Physical Agents) agent platform, all message transfer services among agents are the responsibility of a conceptual entity named the ACC (Agent Communication Channel). In our multi-agent message transport architecture, the ACC is realized as a set of system agents named MTSAs (Message Transfer Service Agents). The MTSA enables close collaboration among agents by supporting asynchronous communication, by using the Reactor pattern to handle incoming agent messages efficiently, and by selecting the optimal message transfer protocol according to the relative positions of the sender and receiver agents. The multi-agent framework SMAF (Small Multi-Agent Framework), implemented on the proposed architecture, shows better message transfer performance among agents than JADE (Java Agent Development Environment), a well-known FIPA-compliant multi-agent framework. The faster the message transfer of a multi-agent architecture, the wider the range of application areas it can support.
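
The Reactor pattern the MTSA relies on can be sketched with Python's selectors module: one selector demultiplexes readiness events and dispatches each to a registered handler, so a single thread serves many channels without blocking. The socketpair below stands in for a transport between two hypothetical agents:

```python
import selectors
import socket

sel = selectors.DefaultSelector()
received = []

def on_message(conn):
    # Handler invoked by the reactor when this channel is readable.
    received.append(conn.recv(1024).decode())

# A socketpair stands in for the transport between a sender agent and a
# receiver agent; the receiver end is registered with its handler.
a_end, b_end = socket.socketpair()
sel.register(b_end, selectors.EVENT_READ, on_message)

a_end.sendall(b"request-quote")
for key, _ in sel.select(timeout=1.0):
    key.data(key.fileobj)  # dispatch to the registered handler

sel.close()
a_end.close()
b_end.close()
```

In a real MTSA, many such channels would be registered at once; the reactor loop dispatches whichever becomes readable, which is what keeps message handling non-blocking.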

Calibration of Omnidirectional Camera by Considering Inlier Distribution (인라이어 분포를 이용한 전방향 카메라의 보정)

  • Hong, Hyun-Ki;Hwang, Yong-Ho
    • Journal of Korea Game Society
    • /
    • v.7 no.4
    • /
    • pp.63-70
    • /
    • 2007
  • Since the fisheye lens has a wide field of view, it can capture the scene and its illumination in all directions from far fewer omnidirectional images. Owing to these advantages, the omnidirectional camera is widely used in surveillance and in reconstructing the 3D structure of a scene. In this paper, we present a new self-calibration algorithm for an omnidirectional camera from uncalibrated images that considers the inlier distribution. First, a parametric non-linear projection model of the omnidirectional camera is estimated with known rotation and translation parameters. After deriving the projection model, we can compute an essential matrix of the camera under unknown motion and then determine the camera information: rotation and translation. Standard deviations are used as a quantitative measure to select a proper inlier set. The experimental results show that we can achieve a precise estimation of the omnidirectional camera model and the extrinsic parameters, including rotation and translation.
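
Using the standard deviation as a quantitative measure for inlier selection can be sketched on synthetic data. The residual values and the 2-sigma rule below are illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic epipolar residuals for 100 correspondences: mostly small
# (inliers) plus a few gross outliers from mismatched features.
residuals = rng.normal(0.0, 0.01, size=100)
residuals[:5] = np.array([0.9, -0.8, 1.2, 0.7, -1.1])

# Keep correspondences whose residual lies within 2 standard deviations
# of the median residual; the surviving set would be used to re-estimate
# the essential matrix.
med = np.median(residuals)
sigma = np.std(residuals)
inliers = np.abs(residuals - med) < 2.0 * sigma
```

Because the gross outliers inflate sigma, the threshold stays loose enough to keep all genuine correspondences while still rejecting the mismatches.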


A Clustering Technique using Common Structures of XML Documents (XML 문서의 공통 구조를 이용한 클러스터링 기법)

  • Hwang, Jeong-Hee;Ryu, Keun-Ho
    • Journal of KIISE:Databases
    • /
    • v.32 no.6
    • /
    • pp.650-661
    • /
    • 2005
  • As the Internet grows, the use of XML, the standard for semi-structured documents, is increasing, and there is ongoing work on the integration and retrieval of XML documents. The basis of efficient integration and retrieval, however, is clustering XML documents with similar structure. Conventional XML clustering approaches use hierarchical clustering algorithms that produce the desired number of clusters through repeated merging, but they have problems: computing the similarity between XML documents is difficult, and comparing similarities repeatedly is time-consuming. To address these problems, we use a clustering algorithm for transactional data that scales to large volumes of data. In this paper we use common structures from XML documents that have no DTD or schema. To exploit these common structures, we extract representative structures by decomposing the tree model expressing each XML document, and we perform clustering with the extracted structures. We also show the efficiency of the proposed method by comparison and analysis against the previous method.
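
Clustering documents by common structure can be sketched by summarizing each XML document as its set of root-to-node tag paths and grouping documents with similar path sets. The greedy single-pass grouping and the 0.5 similarity threshold are illustrative assumptions, not the paper's algorithm:

```python
import xml.etree.ElementTree as ET

docs = [
    "<book><title/><author/></book>",
    "<book><title/><author/><year/></book>",
    "<cd><artist/><track/></cd>",
]

def paths(xml_text):
    """Extract the set of root-to-node tag paths as a structure summary."""
    def walk(node, prefix):
        p = prefix + "/" + node.tag
        yield p
        for child in node:
            yield from walk(child, p)
    return set(walk(ET.fromstring(xml_text), ""))

def jaccard(a, b):
    return len(a & b) / len(a | b)

# Greedy single-pass clustering: a document joins the first cluster whose
# representative structure is similar enough, else it starts a new cluster.
clusters = []  # list of (representative path set, member indices)
for i, doc in enumerate(docs):
    p = paths(doc)
    for rep, members in clusters:
        if jaccard(p, rep) >= 0.5:
            members.append(i)
            break
    else:
        clusters.append((p, [i]))
```

Treating each path set like a transaction's item set is what lets transactional clustering algorithms, which scale well, stand in for pairwise tree comparison.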